http://wiki.math.uwaterloo.ca/statwiki/api.php?action=feedcontributions&user=Jssambee&feedformat=atom statwiki - User contributions [US] 2023-01-28T13:03:53Z User contributions MediaWiki 1.28.3 http://wiki.math.uwaterloo.ca/statwiki/index.php?title=On_The_Convergence_Of_ADAM_And_Beyond&diff=36154 On The Convergence Of ADAM And Beyond 2018-04-04T06:30:25Z <p>Jssambee: </p> <hr /> <div>= Introduction =<br /> Stochastic gradient descent (SGD) is currently the dominant method of training deep networks. Variants of SGD that scale the gradients using information from past gradients have been very successful, since the learning rate is adjusted on a per-feature basis, with ADAGRAD being one example. However, ADAGRAD performance deteriorates when loss functions are nonconvex and gradients are dense. Several variants of ADAGRAD, such as RMSProp, ADAM, ADADELTA, and NADAM have been proposed, which address the issue by using exponential moving averages of squared past gradients, thereby limiting the update to only rely on the past few gradients. The following formula shows the gradient computed for each parameter, which is then used in the per-parameter update:<br /> &lt;math&gt;<br /> g_{t, i} = \nabla_\theta J( \theta_{t, i} ).<br /> &lt;/math&gt;<br /> <br /> The per-parameter update using SGD then becomes:<br /> &lt;math&gt;<br /> \theta_{t+1, i} = \theta_{t, i} - \eta \cdot g_{t, i}.<br /> &lt;/math&gt;<br /> <br /> In ADAGRAD, the update for the next step scales each parameter by its accumulated squared gradients &lt;math&gt;G_t&lt;/math&gt;, using the element-wise product:<br /> &lt;math&gt;<br /> \theta_{t+1} = \theta_{t} - \dfrac{\eta}{\sqrt{G_{t} + \epsilon}} \odot g_{t}.<br /> &lt;/math&gt;<br /> <br /> This paper focuses strictly on the pitfalls in convergence of the ADAM optimizer from a theoretical standpoint and proposes a novel improvement to ADAM called AMSGrad. The paper introduces the idea that it is possible for ADAM to get &quot;stuck&quot; in its weighted average history, preventing it from converging to an optimal solution. 
For example, in an experiment there may be a large spike in the gradient during some mini-batches. But since ADAM weighs the current update by the exponential moving averages of squared past gradients, the effect of the large spike in gradient is lost. To tackle these issues, several variants of ADAGRAD have been proposed. The authors' analysis suggests that this can be prevented through novel but simple adjustments to the ADAM optimization algorithm, which can improve convergence. This paper is published in ICLR 2018.<br /> <br /> == Notation ==<br /> The paper presents the following framework as a generalization to all training algorithms, allowing us to fully define any specific variant such as AMSGrad or SGD entirely within it:<br /> <br /> [[File:training_algo_framework.png|700px|center]]<br /> <br /> Where we have &lt;math&gt; x_t &lt;/math&gt; as our network parameters defined within a vector space &lt;math&gt; \mathcal{F} &lt;/math&gt;. &lt;math&gt; \prod_{\mathcal{F}} (y) = &lt;/math&gt; the projection of &lt;math&gt; y &lt;/math&gt; on to the set &lt;math&gt; \mathcal{F} &lt;/math&gt;.<br /> &lt;math&gt; \psi_t &lt;/math&gt; and &lt;math&gt; \phi_t &lt;/math&gt; correspond to arbitrary functions we will define later; &lt;math&gt; \phi_t &lt;/math&gt; maps the history of gradients to &lt;math&gt; \mathbb{R}^d &lt;/math&gt;, while &lt;math&gt; \psi_t &lt;/math&gt; maps it to positive semi-definite matrices. And finally &lt;math&gt; f_t &lt;/math&gt; is our loss function at some time &lt;math&gt; t &lt;/math&gt;; the rest should be pretty self-explanatory. 
Using this framework and defining different &lt;math&gt; \psi_t &lt;/math&gt; , &lt;math&gt; \phi_t &lt;/math&gt; will allow us to recover all different kinds of training algorithms under this one roof.<br /> <br /> === SGD As An Example ===<br /> To recover SGD using this framework we simply select &lt;math&gt; \phi_t (g_1, \dotsc, g_t) = g_t&lt;/math&gt;, &lt;math&gt; \psi_t (g_1, \dotsc, g_t) = I &lt;/math&gt; and &lt;math&gt;\alpha_t = \alpha / \sqrt{t}&lt;/math&gt;. It is easy to see that &lt;math&gt; \phi_t &lt;/math&gt; uses no gradient history other than the most recent gradient, and that &lt;math&gt; \psi_t &lt;/math&gt; applies no per-parameter scaling, since &lt;math&gt; V_t = I &lt;/math&gt; has no impact later on.<br /> <br /> === ADAGRAD As Another Example ===<br /> <br /> To recover ADAGRAD, we select &lt;math&gt; \phi_t (g_1, \dotsc, g_t) = g_t&lt;/math&gt;, &lt;math&gt; \psi_t (g_1, \dotsc, g_t) = \frac{\text{diag}\left(\sum_{i=1}^{t} g_i^2\right)}{t} &lt;/math&gt;, and &lt;math&gt;\alpha_t = \alpha / \sqrt{t}&lt;/math&gt;. Therefore, compared to SGD, ADAGRAD uses a different step size for each parameter, based on the past gradients for that parameter; the effective learning rate becomes &lt;math&gt; \alpha / \sqrt{\sum_i g_{i,j}^2} &lt;/math&gt; for each parameter &lt;math&gt; j &lt;/math&gt;. The authors note that this scheme is quite efficient when the gradients are sparse.<br /> <br /> === ADAM As Another Example ===<br /> Once you can convince yourself that the recovery of SGD from the generalized framework is correct, you should understand the framework enough to see why the following setup for ADAM will allow us to recover the behaviour we want. 
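To make the SGD and ADAGRAD recoveries above concrete, here is a minimal numerical sketch of the framework's update rule (a sketch only: the two-parameter quadratic loss, step counts, and storing the diagonal of V_t as a vector are illustrative assumptions, not from the paper):

```python
import numpy as np

def train(grad, x0, phi, psi, alpha=0.5, steps=2000):
    """Generic framework: x_{t+1} = x_t - alpha_t * V_t^{-1/2} m_t,
    with m_t = phi(g_1..g_t) and V_t = psi(g_1..g_t) (diagonal, stored as a vector)."""
    x, history = np.array(x0, dtype=float), []
    for t in range(1, steps + 1):
        history.append(grad(x))
        alpha_t = alpha / np.sqrt(t)
        x = x - alpha_t * phi(history) / np.sqrt(psi(history))
    return x

# Illustrative convex problem: f(x) = ||x - (1, -2)||^2, so grad f = 2(x - (1, -2))
grad = lambda x: 2.0 * (x - np.array([1.0, -2.0]))

# SGD: phi returns the latest gradient, psi returns the identity (all-ones diagonal)
sgd = train(grad, [0.0, 0.0], phi=lambda h: h[-1], psi=lambda h: np.ones(2))

# ADAGRAD: psi is the running mean of squared gradients; combined with
# alpha_t = alpha / sqrt(t), this yields the alpha / sqrt(sum_i g_{i,j}^2) step size
adagrad = train(grad, [0.0, 0.0], phi=lambda h: h[-1],
                psi=lambda h: np.sum(np.square(h), axis=0) / len(h))
```

Both instantiations converge to the minimizer (1, -2); only the per-parameter scaling supplied by psi differs.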
ADAM has the ability to define a &quot;learning rate&quot; for every parameter based on how much that parameter moves over time (a.k.a. its momentum) supposedly to help with the learning process.<br /> <br /> In order to do this, we will choose &lt;math&gt; \phi_t (g_1, \dotsc, g_t) = (1 - \beta_1) \sum_{i=0}^{t} {\beta_1}^{t - i} g_i &lt;/math&gt;, psi to be &lt;math&gt; \psi_t (g_1, \dotsc, g_t) = (1 - \beta_2)&lt;/math&gt;diag&lt;math&gt;( \sum_{i=0}^{t} {\beta_2}^{t - i} {g_i}^2) &lt;/math&gt;, and keep &lt;math&gt;\alpha_t = \alpha / \sqrt{t}&lt;/math&gt;. This setup scales the step of each parameter &lt;math&gt;j \in [d]&lt;/math&gt; by the inverse square root of an exponentially weighted sum of its squared gradients, i.e. by &lt;math&gt;\alpha_t / \sqrt{(1 - \beta_2) \sum_i {\beta_2}^{t-i} g_{i,j}^2}&lt;/math&gt;.<br /> <br /> From this, we can now see that &lt;math&gt;m_t &lt;/math&gt; gets filled up with the exponentially weighted average of the history of our gradients that we have come across so far in the algorithm. And that as we proceed to update we scale each one of our parameters by dividing out &lt;math&gt; V_t &lt;/math&gt; (in the case of diagonal it is just one over the diagonal entry) which contains the exponentially weighted average of each parameter's squared gradients (&lt;math&gt; {g_t}^2 &lt;/math&gt;) across our training so far in the algorithm. Thus each parameter has its own unique scaling by its second moment. Intuitively, from a physical perspective, if each parameter is a ball rolling around in the optimization landscape what we are now doing is instead of having the ball change positions on the landscape at a fixed velocity (i.e. momentum of 0) the ball now has the ability to accelerate and speed up or slow down if it is on a steep hill or flat trough in the landscape (i.e. 
a momentum that can change with time).<br /> <br /> = &lt;math&gt; \Gamma_t &lt;/math&gt;, an Interesting Quantity =<br /> Now that we have an idea of what ADAM looks like in this framework, let us now investigate the following quantity:<br /> <br /> &lt;center&gt;&lt;math&gt; \Gamma_{t + 1} = \frac{\sqrt{V_{t+1}}}{\alpha_{t+1}} - \frac{\sqrt{V_t}}{\alpha_t} &lt;/math&gt;&lt;/center&gt;<br /> <br /> which essentially measures the change of the &quot;inverse of the learning rate&quot; across time (since we are using the alphas as step sizes). A key observation is that for SGD and ADAGRAD, &lt;math&gt;\Gamma_t \succeq 0&lt;/math&gt; for all &lt;math&gt;t \in [T]&lt;/math&gt;, which simply follows from the update rules of SGD and ADAGRAD. Looking back to our example of SGD it's not hard to see that this quantity is positive semi-definite, which leads to &quot;non-increasing&quot; learning rates, a desired property. However, that is not the case with ADAM, and this can pose a problem in both theoretical and applied settings. The problem ADAM can face is that &lt;math&gt; \Gamma_t &lt;/math&gt; can potentially be indefinite for &lt;math&gt;t \in [T]&lt;/math&gt;, which the original proof assumed it could not be. The math for this proof is VERY long so instead we will opt for an example to showcase why this could be an issue.<br /> <br /> Consider the loss function &lt;math&gt; f_t(x) = \begin{cases} <br /> Cx &amp; \text{for } t \text{ mod 3} = 1 \\<br /> -x &amp; \text{otherwise}<br /> \end{cases} &lt;/math&gt;<br /> <br /> Where we have &lt;math&gt; C &gt; 2 &lt;/math&gt; and &lt;math&gt; \mathcal{F} &lt;/math&gt; is &lt;math&gt; [-1,1] &lt;/math&gt;. Additionally we choose &lt;math&gt; \beta_1 = 0 &lt;/math&gt; and &lt;math&gt; \beta_2 = 1/(1+C^2) &lt;/math&gt;. We then proceed to plug this into our framework from before. This function is periodic and it's easy to see that it has a gradient of C once and a gradient of -1 twice in every period. 
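A minimal simulation of this periodic example makes the failure visible (a sketch, not from the paper: bias correction is omitted since beta1 = 0, and C = 4, alpha = 0.1, and the step count are illustrative choices; the amsgrad flag previews the fix from the AMSGrad section):

```python
import math

def run(C=4.0, T=30000, alpha=0.1, amsgrad=False):
    """ADAM with beta1 = 0, beta2 = 1/(1 + C^2), alpha_t = alpha/sqrt(t),
    no bias correction, iterates projected onto F = [-1, 1]."""
    beta2 = 1.0 / (1.0 + C * C)
    x, v, vhat = 0.0, 0.0, 0.0
    for t in range(1, T + 1):
        g = C if t % 3 == 1 else -1.0            # gradient of f_t (same at every x)
        v = beta2 * v + (1.0 - beta2) * g * g    # exponentially weighted second moment
        vhat = max(vhat, v)                      # AMSGrad keeps the running maximum
        x -= (alpha / math.sqrt(t)) * g / math.sqrt(vhat if amsgrad else v)
        x = min(1.0, max(-1.0, x))               # projection onto F
    return x

adam_x = run(amsgrad=False)   # drifts toward the worst point, x = +1
ams_x = run(amsgrad=True)     # drifts toward the optimum, x = -1
```

In each period the single large C step is scaled down by roughly a factor of C, so the two -1 steps win and ADAM drifts toward x = +1; normalizing by the running maximum of the second moment instead (the AMSGrad fix) preserves the drift toward x = -1.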
It has an optimal solution of &lt;math&gt; x = -1 &lt;/math&gt; (from a regret standpoint), but using ADAM we would eventually converge at &lt;math&gt; x = 1 &lt;/math&gt;, since &lt;math&gt; \psi_t &lt;/math&gt; would scale down the &lt;math&gt; C &lt;/math&gt; by a factor of almost &lt;math&gt; C &lt;/math&gt; so that it's unable to &quot;overpower&quot; the multiple -1's.<br /> <br /> We formalize this intuition in the results below.<br /> <br /> '''Theorem 1.''' There is an online convex optimization problem where ADAM has non-zero average regret, i.e. &lt;math&gt;R_T/T\nrightarrow 0 &lt;/math&gt; as &lt;math&gt;T\rightarrow \infty&lt;/math&gt;.<br /> <br /> One might think that adding a small constant in the denominator of the update function can help avoid this issue by modifying the update for ADAM as follows:<br /> \begin{align}<br /> \hat x_{t+1} = x_t - \alpha_t m_t/\sqrt{V_t + \epsilon \mathbb{I}}<br /> \end{align}<br /> <br /> The selection of &lt;math&gt;\epsilon&lt;/math&gt; appears to be crucial for the performance of the algorithm in practice. However, this work shows that for any constant &lt;math&gt;\epsilon &gt; 0&lt;/math&gt;, there exists an online optimization setting where ADAM has non-zero average regret asymptotically.<br /> <br /> '''Theorem 2.''' For any constants &lt;math&gt;\beta_1,\beta_2 \in [0,1)&lt;/math&gt; such that &lt;math&gt;\beta_1 &lt; \sqrt{\beta_2}&lt;/math&gt;, there is an online convex optimization problem where ADAM has non-zero average regret, i.e. 
&lt;math&gt;R_T/T\nrightarrow 0 &lt;/math&gt; as &lt;math&gt;T\rightarrow \infty&lt;/math&gt;.<br /> <br /> '''Theorem 3.''' For any constants &lt;math&gt;\beta_1,\beta_2 \in [0,1)&lt;/math&gt; such that &lt;math&gt;\beta_1 &lt; \sqrt{\beta_2}&lt;/math&gt;, there is a stochastic convex optimization problem for which ADAM does not converge to the optimal solution.<br /> <br /> = AMSGrad as an improvement to ADAM =<br /> There is a very simple intuitive fix to ADAM to handle this problem. We simply normalize the update by the maximum of all the exponentially weighted averages &lt;math&gt; V_t &lt;/math&gt; seen so far, which guarantees &lt;math&gt; \Gamma_t \succeq 0 &lt;/math&gt; and hence non-increasing learning rates. There is a very simple one-liner adaptation of ADAM to get to AMSGrad:<br /> [[File:AMSGrad_algo.png|700px|center]]<br /> <br /> Below are some simple plots comparing ADAM and AMSGrad; the first are from the paper and the second are from another individual who attempted to recreate the experiments. The two plots somewhat disagree with one another so take this heuristic improvement with a grain of salt.<br /> <br /> [[File:AMSGrad_vs_adam.png|900px|center]]<br /> <br /> Here is another example of a one-dimensional convex optimization problem where ADAM fails to converge.<br /> <br /> [[File:AMSGrad_vs_adam3.png|900px|center]]<br /> <br /> [[File:AMSGrad_vs_adam2.png|700px|center]]<br /> <br /> = Conclusion =<br /> The authors have introduced a framework in which several different training algorithms can be viewed. From there they used it to recover SGD as well as ADAM. In their recovery of ADAM the authors investigated the change of the inverse of the learning rate over time to discover that in certain cases there were convergence issues. They proposed a new heuristic, AMSGrad, to help deal with this problem and presented some empirical results that show it may have helped ADAM slightly. 
Thanks for your time.<br /> <br /> == Critique ==<br /> The contrived example which serves as the intuition to illustrate the failure of ADAM is not convincing, since we can construct similar failure examples for SGD as well. <br /> Consider the loss function <br /> <br /> &lt;math&gt; f_t(x) = \begin{cases} <br /> -x &amp; \text{for } t \text{ mod 2} = 1 \\<br /> -\frac{1}{2} x^2 &amp; \text{otherwise}<br /> \end{cases} <br /> &lt;/math&gt;<br /> <br /> where &lt;math&gt; x \in \mathcal{F} = [-a, 1], a \in [1, \sqrt{2}) &lt;/math&gt;. The optimal solution is &lt;math&gt;x=1&lt;/math&gt;, but starting from initial point &lt;math&gt;x_{t=0} \le -1&lt;/math&gt;, SGD will converge to &lt;math&gt;x = -a&lt;/math&gt;<br /> <br /> ==Implementation == <br /> Keras implementation of AMSGrad : https://gist.github.com/kashif/3eddc3c90e23d84975451f43f6e917da<br /> <br /> = Source =<br /> 1. Sashank J. Reddi and Satyen Kale and Sanjiv Kumar. &quot;On the Convergence of Adam and Beyond.&quot; International Conference on Learning Representations. 2018</div> Jssambee http://wiki.math.uwaterloo.ca/statwiki/index.php?title=Robust_Imitation_of_Diverse_Behaviors&diff=36153 Robust Imitation of Diverse Behaviors 2018-04-04T06:18:42Z <p>Jssambee: /* Motivation */</p> <hr /> <div>=Introduction=<br /> One of the longest standing challenges in AI is building versatile embodied agents, both in the form of real robots and animated avatars, capable of a wide and diverse set of behaviors. State-of-the-art robots cannot compete with the effortless variety and adaptive flexibility of motor behaviors produced by toddlers. Towards addressing this challenge, the authors combine several deep generative approaches to imitation learning in a way that accentuates their individual strengths and addresses their limitations. 
The end product is a robust neural network policy that can imitate a large and diverse set of behaviors using few training demonstrations.<br /> <br /> =Motivation=<br /> Deep generative models have recently shown great promise in imitation learning. The authors primarily discuss two approaches: supervised approaches that condition on demonstrations, and Generative Adversarial Imitation Learning (GAIL). They also discuss the strengths and limitations of each approach and try to combine the two in order to get the best of both worlds.<br /> <br /> * Supervised approaches that condition on demonstrations using a variational autoencoder (VAE):<br /> ** They require large training datasets in order to work for non-trivial tasks<br /> ** Experiments show that the VAE learns a structured semantic embedding space, allowing for smooth policy interpolation<br /> ** They tend to be brittle and fail when the agent diverges too much from the demonstration trajectories (As proof of this brittleness, the authors cite Ross et al. 
(2010), who provide a theorem showing that the cost incurred by this kind of model when it deviates from a demonstration trajectory with a small probability can be amplified in a manner quadratic in the number of time steps.)<br /> ** Unlike GANs or autoregressive models, which can produce sharp and at times realistic image samples but tend to be slow to sample from, VAEs provide a latent vector representation, which is why they are used to learn demonstration trajectories.<br /> <br /> * Generative Adversarial Imitation Learning (GAIL)<br /> ** Allows learning more robust policies with fewer demonstrations<br /> ** Adversarial training leads to mode-collapse (the tendency of adversarial generative models to cover only a subset of modes of a probability distribution, resulting in a failure to produce adequately diverse samples)<br /> ** More difficult and slow to train as they do not immediately provide a latent representation of the data<br /> <br /> Thus, the former approach can model diverse behaviors without dropping modes but does not learn robust policies, while the latter approach gives robust policies but insufficiently diverse behaviors. Therefore, the authors combine the favorable aspects of these two approaches. The base of their model is a new type of variational autoencoder on demonstration trajectories that learns semantic policy embeddings. Leveraging these policy representations, they develop a new version of GAIL that <br /> # is much more robust than the purely-supervised controller, especially with few demonstrations, and <br /> # avoids mode collapse, capturing many diverse behaviors when GAIL on its own does not.<br /> <br /> =Model=<br /> The authors first introduce a variational autoencoder (VAE) for supervised imitation, consisting of a bi-directional LSTM encoder mapping demonstration sequences to embedding vectors, and two decoders. 
The first decoder is a multi-layer perceptron (MLP) policy mapping a trajectory embedding and the current state to a continuous action vector. The second is a dynamics model mapping the embedding and previous state to the present state while modeling correlations among states with a WaveNet. A conditional GAN is used for the GAIL step. GAIL is what actually generates the policy, but it is conditioned/initialized based on the VAE latent state.<br /> <br /> [[File: Model_Architecture.png|700px|center|]]<br /> <br /> ==Behavioral cloning with VAE suited for control==<br /> <br /> In this section, the authors follow a similar approach to Duan et al. (2017), but opt for a stochastic VAE with a distribution &lt;math display=&quot;inline&quot;&gt;q_\phi(z|x_{1:T})&lt;/math&gt; to better regularize the latent space. In their VAE, an encoder stochastically maps a demonstration sequence to an embedding vector &lt;math display=&quot;inline&quot;&gt;z&lt;/math&gt;. Given &lt;math display=&quot;inline&quot;&gt;z&lt;/math&gt;, they decode both the state and action trajectories as shown in the figure above. To train the model, the following loss is minimized:<br /> <br /> \begin{align}<br /> L\left( \alpha, w, \phi; \tau_i \right) = - \pmb{\mathbb{E}}_{q_{\phi}(z|x_{1:T_i}^i)} \left[ \sum_{t=1}^{T_i} log \pi_\alpha \left( a_t^i|x_t^i, z \right) + log p_w \left( x_{t+1}^i|x_t^i, z\right) \right] +D_{KL}\left( q_\phi(z|x_{1:T_i}^i)||p(z) \right)<br /> \end{align}<br /> <br /> where &lt;math&gt; \alpha &lt;/math&gt; parameterizes the action decoder, &lt;math&gt; w &lt;/math&gt; parameterizes the state decoder, &lt;math&gt; \phi &lt;/math&gt; parameterizes the state encoder, and &lt;math&gt; T_i &lt;/math&gt; is the length of demonstration trajectory &lt;math&gt; \tau_i &lt;/math&gt;.<br /> <br /> The encoder &lt;math display=&quot;inline&quot;&gt;q&lt;/math&gt; uses a bi-directional LSTM. 
To produce the final embedding, it calculates the average of all the outputs of the second layer of this LSTM before applying a final linear transformation to<br /> generate the mean and standard deviation of a Gaussian. Then, one sample from this Gaussian is taken as the demonstration encoding.<br /> <br /> The action decoder is an MLP that maps the concatenation of the state and the embedding to the parameters of a Gaussian policy. The state decoder is similar to a conditional WaveNet model. In particular, it conditions on the embedding &lt;math display=&quot;inline&quot;&gt;z&lt;/math&gt; and previous state &lt;math display=&quot;inline&quot;&gt;x_{t-1}&lt;/math&gt; to generate the vector &lt;math display=&quot;inline&quot;&gt;x_t&lt;/math&gt; autoregressively. That is, the autoregression is over the components of the vector &lt;math display=&quot;inline&quot;&gt;x_t&lt;/math&gt;. Finally, instead of a Softmax, the model uses a mixture of Gaussians as the output of the WaveNet.<br /> <br /> ==Diverse generative adversarial imitation learning==<br /> To enable GAIL to produce diverse solutions, the authors condition the discriminator on the embeddings generated by the VAE encoder and integrate out the GAIL objective with respect to the variational posterior &lt;math display=&quot;inline&quot;&gt;q_\phi(z|x_{1:T})&lt;/math&gt;. 
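The encoding step described earlier (average the bi-LSTM outputs over time, apply a linear map to obtain Gaussian parameters, and draw one reparameterized sample) can be sketched as follows (a minimal sketch: the random placeholder weights stand in for the trained bi-LSTM and linear layers, and all sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
T, hidden, latent = 50, 32, 8                 # sequence length / layer sizes (illustrative)

lstm_outputs = rng.normal(size=(T, hidden))   # stand-in for 2nd-layer bi-LSTM outputs
W = rng.normal(size=(hidden, 2 * latent)) / np.sqrt(hidden)  # final linear map

pooled = lstm_outputs.mean(axis=0)            # average over all time steps
params = pooled @ W
mu, log_std = params[:latent], params[latent:]

# one reparameterized draw from N(mu, sigma^2) serves as the demonstration encoding z
z = mu + np.exp(log_std) * rng.normal(size=latent)
```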
Specifically, the authors train the discriminator by optimizing the following objective:<br /> <br /> \begin{align}<br /> {max}_{\psi} \pmb{\mathbb{E}}_{\tau_i \sim \pi_E} \left( \pmb{\mathbb{E}}_{q(z|x_{1:T_i}^i)} \left[\frac{1}{T_i} \sum_{t=1}^{T_i} logD_{\psi} \left( x_t^i, a_t^i | z \right) + \pmb{\mathbb{E}}_{\pi_\theta} \left[ log(1 - D_\psi(x, a | z)) \right] \right] \right)<br /> \end{align}<br /> <br /> There is related work which uses a conditional GAIL objective to learn controls for multiple behaviors from state trajectories, but the discriminator conditions on an annotated class label, as in conditional GANs.<br /> <br /> The authors condition on unlabeled trajectories, which have been passed through a powerful encoder, and hence this approach is capable of one-shot imitation learning. Moreover, the VAE encoder makes it possible to obtain a continuous latent embedding space where interpolation is possible.<br /> <br /> Since the discriminator is conditional, the reward function is also conditional and clipped so that it is upper-bounded. Conditioning on &lt;math&gt;z&lt;/math&gt; allows for the generation of an infinite number of reward functions, each tailored to imitate a different trajectory. Due to the diversity of the reward functions, the policy gradients will not collapse into one particular mode through mode skewing.<br /> <br /> To better motivate the objective, the authors temporarily leave the context of imitation learning and consider an alternative objective for training GANs:<br /> <br /> \begin{align}<br /> {min}_{G}{max}_{D} V (G, D) = \int_{y} p(y) \int_{z} q(z|y) \left[ log D(y | z) + \int_{\hat{y}} G(\hat{y} | z) log (1 - D(\hat{y} | z)) d\hat{y} \right] dy dz<br /> \end{align}<br /> <br /> This function is a simplification of the previous objective function. 
Furthermore, it satisfies the following property.<br /> <br /> ===Lemma 1===<br /> Assuming that &lt;math display=&quot;inline&quot;&gt;q&lt;/math&gt; computes the true posterior distribution, that is &lt;math display=&quot;inline&quot;&gt;q(z|y) = \frac{p(y|z)p(z)}{p(y)}&lt;/math&gt;, then<br /> <br /> \begin{align}<br /> V (G, D) = \int_{z} p(z) \left[ \int_{y} p(y|z) log D(y|z) dy + \int_{\hat{y}} G(\hat{y} | z) log (1 - D(\hat{y} | z)) d\hat{y} \right] dz<br /> \end{align}<br /> <br /> If an optimal discriminator is further assumed, the cost optimized by the generator then becomes<br /> <br /> \begin{align}<br /> C(G) = 2 \int_{z} p(z) JSD[p(\cdot|z) || G(\cdot|z)] dz - log 4<br /> \end{align}<br /> <br /> where &lt;math display=&quot;inline&quot;&gt;JSD&lt;/math&gt; stands for the Jensen-Shannon divergence. In the context of the WaveNet described in the earlier section, &lt;math&gt;p(x)&lt;/math&gt; is the distribution of a mixture of Gaussians, and &lt;math&gt;p(z)&lt;/math&gt; is the distribution over the mixture components, so the conditional distribution given the latent &lt;math&gt;z&lt;/math&gt;, &lt;math&gt;p(x | z)&lt;/math&gt;, is uni-modal, and optimizing the divergence will not lead to mode collapse.<br /> <br /> ==Policy Optimization Strategy: TRPO==<br /> <br /> [[file:robust_behaviour_alg.png | 800px]]<br /> <br /> In Algorithm 1, it states that TRPO is used for policy parameter updates. TRPO is short for Trust Region Policy Optimization, an iterative procedure for policy optimization developed by John Schulman, Sergey Levine, Philipp Moritz, Michael Jordan and Pieter Abbeel. This optimization method achieves monotonic improvement in tasks related to robotic motion, such as walking and swimming. 
For more details on TRPO, please refer to the [https://arxiv.org/pdf/1502.05477.pdf original paper].<br /> <br /> =Experiments=<br /> <br /> The primary focus of the paper's experimental evaluation is to demonstrate that the architecture allows learning of robust controllers capable of producing the full spectrum of demonstration behaviors for a diverse range of challenging control problems. The authors consider three bodies: a 9 DoF robotic arm, a 9 DoF planar walker, and a 62 DoF complex humanoid (56 actuated joint angles, and a freely translating and rotating 3d root joint). While for the reaching task behavioral cloning (BC) is sufficient to obtain a working controller, for the other two problems the full learning procedure is critical.<br /> <br /> The authors analyze the resulting embedding spaces and demonstrate that they exhibit a rich and sensible structure that can be exploited for control. Finally, the authors show that the encoder can be used to capture the gist of novel demonstration trajectories which can then be reproduced by the controller.<br /> <br /> ==Robotic arm reaching==<br /> In this experiment, the authors demonstrate the effectiveness of their VAE architecture and investigate the nature of the learned embedding space on a reaching task with a simulated Jaco arm.<br /> <br /> To obtain demonstrations, the authors trained 60 independent policies to reach random target locations in the workspace starting from the same initial configuration. 30 trajectories from each of<br /> the first 50 policies were generated. These served as training data for the VAE model (1500 training trajectories in total). The remaining 10 policies were used to generate test data.<br /> <br /> Here are the trajectories produced by the VAE model.<br /> <br /> [[File: Robotic_arm_reaching_VAE.png|300px|center|]]<br /> <br /> The reaching task is relatively simple, so with this amount of data the VAE policy is fairly robust. 
After training, the VAE encodes and reproduces the demonstrations as shown in the figure below.<br /> <br /> [[File: Robotic_arm_reaching.png|650px|center|]]<br /> <br /> ==2D Walker==<br /> <br /> As a more challenging test compared to the reaching task, the authors consider bipedal locomotion. Here, the authors train 60 neural network policies for a 2d walker to serve as demonstrations. These policies are each trained to move at different speeds both forward and backward depending on a label provided as additional input to the policy. Target speeds for training were chosen from a set of four different speeds (m/s): -1, 0, 1, 3.<br /> <br /> [[File: 2D_Walker.png|650px|center|]]<br /> <br /> The authors trained their model with 20 episodes per policy (1200 demonstration trajectories in total, each with a length of 400 steps or 10s of simulated time). In this experiment the full approach is required: a VAE trained with BC alone can imitate some of the trajectories, but it performs poorly in general, presumably because the relatively small training set does not cover the space of trajectories sufficiently densely. On this generated dataset, they also train policies with GAIL using the same architecture and hyper-parameters. Due to the lack of conditioning, GAIL does not coherently reproduce the trajectories. Instead, it simply meshes different behaviors together. In addition, the policies trained with GAIL also exhibit dramatically less diversity; see [https://www.youtube.com/watch?v=kIguLQ4OwuM video].<br /> <br /> [[File: 2D_Walker_Optimized.gif|frame|center|In the left panel, the planar walker demonstrates a particular walking style. In the right panel, the model's agent imitates this walking style using a single policy network.]]<br /> <br /> ==Complex humanoid==<br /> For this experiment, the authors consider a humanoid body of high dimensionality that poses a hard control problem. 
They generate training trajectories with the existing controllers, which can produce instances of one of six different movement styles. Examples of such trajectories are shown in Fig. 5.<br /> <br /> [[File: Complex_humanoid.png|650px|center|]]<br /> <br /> The training set consists of 250 random trajectories from 6 different neural network controllers that were trained to match 6 different movement styles from the CMU motion capture database. Each trajectory is 334 steps or 10s long. The authors use a second set of 5 controllers from which they generate trajectories for evaluation (3 of these policies were trained on the same movement styles as the policies used for generating training data).<br /> <br /> Surprisingly, despite the complexity of the body, supervised learning is quite effective at producing sensible controllers: The VAE policy is reasonably good at imitating the demonstration trajectories, although it lacks the robustness to be practically useful. Adversarial training dramatically improves the stability of the controller. The authors analyze the improvement quantitatively by computing the percentage of the humanoid falling down before the end of an episode while imitating either training or test policies. The results are summarized in Figure 5 right. The figure further shows sequences of frames of representative demonstration and associated imitation trajectories. Videos of demonstration and imitation behaviors can be found in the [https://www.youtube.com/watch?v=NaohsyUxpxw video].<br /> <br /> [[File: Complex_humanoid_optimized.gif|frame|center|In the left and middle panels we show two demonstrated behaviors. In the right panel, the model's agent produces an unseen transition between those behaviors.]]<br /> <br /> Also, for practical purposes, it is desirable to allow the controller to transition from one behavior to another. 
The authors test this possibility in an experiment similar to the one for the Jaco arm: They determine the<br /> embedding vectors of pairs of demonstration trajectories, start the trajectory by conditioning on the first embedding vector, and then transition from one behavior to the other half-way through the episode by linearly interpolating the embeddings of the two demonstration trajectories over a window of 20 control steps. Although not always successful, the learned controller often transitions robustly, despite not having been trained to do so. An example of this can be seen in the gif above. More examples can be seen in this [https://www.youtube.com/watch?v=VBrIll0B24o video].<br /> <br /> =Conclusions=<br /> The authors have proposed an approach for imitation learning that combines the favorable properties of techniques for density modeling with latent variables (VAEs) with those of GAIL. The result is a model that, from a moderate number of demonstration trajectories, can learn:<br /> # a semantically well structured embedding of behaviors, <br /> # a corresponding multi-task controller that allows robust execution of diverse behaviors from this embedding space, as well as <br /> # an encoder that can map new trajectories into the embedding space and hence allows for one-shot imitation. <br /> The experimental results demonstrate that this approach can work on a variety of control problems and that it scales even to very challenging ones such as the control of a simulated humanoid with a large number of degrees of freedom.<br /> <br /> =Critique=<br /> The paper proposes a deep-learning-based approach to imitation learning which is sample-efficient and is able to imitate many diverse behaviors. The architecture can be seen as conditional generative adversarial imitation learning (GAIL). The conditioning vector is an embedding of a demonstrated trajectory, provided by a variational autoencoder. 
This results in one-shot imitation learning: at test time, a new demonstration can be embedded and provided as a conditioning vector to the imitation policy. The authors evaluate the method on several simulated motor control tasks.<br /> <br /> Pros:<br /> * Addresses a challenging problem of learning complex dynamics controllers / control policies<br /> * Well-written introduction / motivation<br /> * The proposed approach is able to learn complex and diverse behaviors and outperforms both the VAE alone (quantitatively) and GAIL alone (qualitatively).<br /> * Appealing qualitative results on the three evaluation problems. Interesting experiments with motion transitioning. <br /> <br /> Cons:<br /> * Comparisons to baselines could be more detailed.<br /> * Many key details are omitted (either on purpose, placed in the appendix, or simply absent, like the lack of definitions of terms in the modeling section, details of the planner model, simulation process, or the details of experimental settings)<br /> * Experimental evaluation is largely subjective (videos of robotic arm/biped/3D human motion). Even then, a rigorous study measuring subjective performance has not been performed.<br /> * A discussion of sample efficiency compared to GAIL and VAE would be interesting.<br /> * The presentation is not always clear, in particular, I had a hard time figuring out the notation in Section 3.<br /> * There has been some work on hybrids of VAEs and GANs, which seem worth mentioning when generative models are discussed, like:<br /> # Autoencoding beyond pixels using a learned similarity metric, Larsen et al., ICML 2016<br /> # Generating Images with Perceptual Similarity Metrics based on Deep Networks, Dosovitskiy&amp;Brox. NIPS 2016<br /> These works share the intuition that good coverage of VAEs can be combined with sharp results generated by GANs.<br /> * Some more extensive analysis of the approach would be interesting. How sensitive is it to hyperparameters? 
How important is it to use a VAE rather than a plain autoencoder or supervised learning? How difficult will it be for others to apply it to new tasks?<br /> <br /> =References=<br /> # Duan, Y., Andrychowicz, M., Stadie, B., Ho, J., Schneider, J., Sutskever, I., Abbeel, P., &amp; Zaremba, W. (2017). One-shot imitation learning. Preprint arXiv:1703.07326.<br /> # Ross, Stéphane, and Drew Bagnell. &quot;Efficient reductions for imitation learning.&quot; Proceedings of the thirteenth international conference on artificial intelligence and statistics. 2010.<br /> # Wang, Z., Merel, J. S., Reed, S. E., de Freitas, N., Wayne, G., &amp; Heess, N. (2017). Robust imitation of diverse behaviors. In Advances in Neural Information Processing Systems (pp. 5326-5335).<br /> # Producing flexible behaviours in simulated environments. (n.d.). Retrieved March 25, 2018, from https://deepmind.com/blog/producing-flexible-behaviours-simulated-environments/<br /> # Cmu humanoid. (2017, May 19). Retrieved March 25, 2018, from https://www.youtube.com/watch?v=NaohsyUxpxw<br /> # Cmu transitions. (2017, May 19). Retrieved March 25, 2018, from https://www.youtube.com/watch?v=VBrIll0B24o<br /> # Conditional GAN: https://arxiv.org/pdf/1411.1784.pdf</div> Jssambee http://wiki.math.uwaterloo.ca/statwiki/index.php?title=stat946w18/Self_Normalizing_Neural_Networks&diff=36152 stat946w18/Self Normalizing Neural Networks 2018-04-04T05:28:22Z <p>Jssambee: /* Conclusion */</p> <hr /> <div>==Introduction and Motivation==<br /> <br /> While neural networks have been making a lot of headway in improving benchmark results and narrowing the gap with human-level performance, success has been fairly limited to visual and sequential processing tasks through advancements in convolutional network and recurrent network structures. Most data science competitions outside of these domains are still won by algorithms such as gradient boosting and random forests.
The traditional (densely connected) feed-forward neural networks (FNNs) are rarely used competitively, and when they do win on rare occasions, they have very shallow network architectures with at most four layers.<br /> <br /> The authors, Klambauer et al., believe that what prevents FNNs from becoming more competitively useful is the inability to train a deeper FNN structure, which would allow the network to learn more levels of abstract representations. Primarily, the difficulty arises due to the instability of gradients in very deep FNNs leading to problems like gradient vanishing/explosion. To have a deeper network, oscillations in the distribution of activations need to be kept under control so that stable gradients can be obtained during training. Several techniques are available to normalize activations, including batch normalization, layer normalization, and weight normalization. These methods work well with CNNs and RNNs, but not so much with FNNs, because backpropagating through normalization parameters introduces additional variance to the gradients, and regularization techniques like dropout further perturb the normalization effect. CNNs and RNNs are less sensitive to such perturbations, presumably due to their weight sharing architecture, but FNNs do not have such a property and thus suffer from high variance in training errors, which hinders learning. Furthermore, the aforementioned normalization techniques involve adding external layers to the model and can slow down computation, which may already be slow when working with very deep FNNs. <br /> <br /> Therefore, the authors were motivated to develop a new FNN implementation that achieves the intended effect of normalization techniques while working well with stochastic gradient descent and dropout.
Self-normalizing neural networks (SNNs) are based on the idea of scaled exponential linear units (SELU), a new activation function introduced in this paper, whose output distribution provably converges to a fixed point, thus making it possible to train deeper networks.<br /> <br /> ==Notations==<br /> <br /> As the paper (primarily in the supplementary materials) comes with lengthy proofs, important notations are listed first.<br /> <br /> Consider two fully-connected layers, let &lt;math display=&quot;inline&quot;&gt;x&lt;/math&gt; denote the inputs to the second layer, then &lt;math display=&quot;inline&quot;&gt;z = Wx&lt;/math&gt; represents the network inputs of the second layer, and &lt;math display=&quot;inline&quot;&gt;y = f(z)&lt;/math&gt; represents the activations in the second layer.<br /> <br /> Assume that all &lt;math display=&quot;inline&quot;&gt;x_i&lt;/math&gt;'s, &lt;math display=&quot;inline&quot;&gt;1 \leqslant i \leqslant n&lt;/math&gt;, have mean &lt;math display=&quot;inline&quot;&gt;\mu := \mathrm{E}(x_i)&lt;/math&gt; and variance &lt;math display=&quot;inline&quot;&gt;\nu := \mathrm{Var}(x_i)&lt;/math&gt; and that each &lt;math display=&quot;inline&quot;&gt;y&lt;/math&gt; has mean &lt;math display=&quot;inline&quot;&gt;\widetilde{\mu} := \mathrm{E}(y)&lt;/math&gt; and variance &lt;math display=&quot;inline&quot;&gt;\widetilde{\nu} := \mathrm{Var}(y)&lt;/math&gt;, then let &lt;math display=&quot;inline&quot;&gt;g&lt;/math&gt; be the mapping that takes &lt;math display=&quot;inline&quot;&gt;(\mu, \nu)&lt;/math&gt; to &lt;math display=&quot;inline&quot;&gt;(\widetilde{\mu}, \widetilde{\nu})&lt;/math&gt;.
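This fixed-point behaviour is easy to observe empirically. The sketch below is not from the paper; the layer width, depth, input scaling, and the rounded SELU constants are illustrative choices. It pushes a badly scaled input through a deep, SELU-activated, fully-connected network with weights drawn from &lt;math display=&quot;inline&quot;&gt;\mathcal{N}(0, 1/n)&lt;/math&gt; and prints the activation statistics, which settle near mean 0 and variance 1:

```python
import numpy as np

# Rounded SELU constants lambda_01 and alpha_01 from the paper
LAM, ALPHA = 1.0507, 1.6733

def selu(x):
    # np.minimum keeps exp() from overflowing on the branch that where() discards
    return LAM * np.where(x > 0, x, ALPHA * (np.exp(np.minimum(x, 0.0)) - 1.0))

rng = np.random.default_rng(0)
n = 1000                       # layer width
x = rng.normal(2.0, 3.0, n)    # deliberately badly scaled input
for _ in range(40):            # 40 dense layers with weights ~ N(0, 1/n)
    W = rng.normal(0.0, np.sqrt(1.0 / n), (n, n))
    x = selu(W @ x)

print(x.mean(), x.var())       # both are pulled toward the fixed point (0, 1)
```

No normalization layers are involved: the statistics are controlled by the activation function and the weight scale alone.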
<br /> <br /> For the weight vector &lt;math display=&quot;inline&quot;&gt;w&lt;/math&gt;, &lt;math display=&quot;inline&quot;&gt;n&lt;/math&gt; times the mean of the weight vector is &lt;math display=&quot;inline&quot;&gt;\omega := \sum_{i = 1}^n w_i&lt;/math&gt; and &lt;math display=&quot;inline&quot;&gt;n&lt;/math&gt; times the second moment is &lt;math display=&quot;inline&quot;&gt;\tau := \sum_{i = 1}^{n} w_i^2&lt;/math&gt;.<br /> <br /> ==Key Concepts==<br /> <br /> ===Self-Normalizing Neural-Net (SNN)===<br /> <br /> ''A neural network is self-normalizing if it possesses a mapping &lt;math display=&quot;inline&quot;&gt;g: \Omega \rightarrow \Omega&lt;/math&gt; for each activation &lt;math display=&quot;inline&quot;&gt;y&lt;/math&gt; that maps mean and variance from one layer to the next and has a stable and attracting fixed point depending on &lt;math display=&quot;inline&quot;&gt;(\omega, \tau)&lt;/math&gt; in &lt;math display=&quot;inline&quot;&gt;\Omega&lt;/math&gt;. Furthermore, the mean and variance remain in the domain &lt;math display=&quot;inline&quot;&gt;\Omega&lt;/math&gt;, that is &lt;math display=&quot;inline&quot;&gt;g(\Omega) \subseteq \Omega&lt;/math&gt;, where &lt;math display=&quot;inline&quot;&gt;\Omega = \{ (\mu, \nu) | \mu \in [\mu_{min}, \mu_{max}], \nu \in [\nu_{min}, \nu_{max}] \}&lt;/math&gt;. When iteratively applying the mapping &lt;math display=&quot;inline&quot;&gt;g&lt;/math&gt;, each point within &lt;math display=&quot;inline&quot;&gt;\Omega&lt;/math&gt; converges to this fixed point.''<br /> <br /> In other words, in SNNs, if the inputs from an earlier layer (&lt;math display=&quot;inline&quot;&gt;x&lt;/math&gt;) already have their mean and variance within a predefined interval &lt;math display=&quot;inline&quot;&gt;\Omega&lt;/math&gt;, then the activations to the next layer (&lt;math display=&quot;inline&quot;&gt;y = f(z = Wx)&lt;/math&gt;) should remain within those intervals.
This is true across all pairs of connecting layers as the normalizing effect gets propagated through the network, hence the term self-normalizing. When the mapping is applied iteratively, it should draw the mean and variance values closer to a fixed point within &lt;math display=&quot;inline&quot;&gt;\Omega&lt;/math&gt;, the value of which depends on &lt;math display=&quot;inline&quot;&gt;\omega&lt;/math&gt; and &lt;math display=&quot;inline&quot;&gt;\tau&lt;/math&gt; (recall that they are from the weight vector).<br /> <br /> We will design an FNN and then construct a mapping g that takes the mean and variance of each layer to those of the next, i.e. &lt;math&gt;g(\mu_i, \nu_i) = (\mu_{i+1}, \nu_{i+1}) \ \forall i &lt;/math&gt;, and show that it is a contraction mapping. It should be noted that although the g required in the SNN definition depends on &lt;math display=&quot;inline&quot;&gt;(\omega, \tau)&lt;/math&gt; of an individual layer, the FNN that we construct will have the same values of &lt;math display=&quot;inline&quot;&gt;(\omega, \tau)&lt;/math&gt; for each layer. Intuitively, this definition can be interpreted as saying that the mean and variance of the final layer of a sufficiently deep SNN will not change when the mean and variance of the input data change.
This is because the mean and variance are passing through a contraction mapping at each layer, converging to the mapping's fixed point.<br /> <br /> The activation function that makes an SNN possible should meet the following four conditions:<br /> <br /> # It can take on both negative and positive values, so it can normalize the mean;<br /> # It has a saturation region, so it can dampen variances that are too large;<br /> # It has a slope larger than one, so it can increase variances that are too small; and<br /> # It is a continuous curve, which is necessary for the fixed point to exist (see the Banach fixed point theorem below).<br /> <br /> Commonly used activation functions such as rectified linear units (ReLU), sigmoid, tanh, leaky ReLUs and exponential linear units (ELUs) do not meet all four criteria; therefore, a new activation function is needed.<br /> <br /> ===Scaled Exponential Linear Units (SELUs)===<br /> <br /> One of the main ideas introduced in this paper is the SELU function. As the name suggests, it is closely related to the ELU,<br /> <br /> $\mathrm{elu}(x) = \begin{cases} x &amp; x &gt; 0 \\<br /> \alpha e^x - \alpha &amp; x \leqslant 0<br /> \end{cases}$<br /> <br /> but further builds upon it by introducing a new scale parameter $\lambda$ and proving the exact values that $\alpha$ and $\lambda$ should take on to achieve self-normalization.
SELU is defined as:<br /> <br /> $\mathrm{selu}(x) = \lambda \begin{cases} x &amp; x &gt; 0 \\<br /> \alpha e^x - \alpha &amp; x \leqslant 0<br /> \end{cases}$<br /> <br /> SELU meets all four criteria listed above - it takes on positive values when &lt;math display=&quot;inline&quot;&gt;x &gt; 0&lt;/math&gt; and negative values when &lt;math display=&quot;inline&quot;&gt;x &lt; 0&lt;/math&gt;, it has a saturation region when &lt;math display=&quot;inline&quot;&gt;x&lt;/math&gt; is a large negative value, the value of &lt;math display=&quot;inline&quot;&gt;\lambda&lt;/math&gt; can be set greater than one to ensure a slope greater than one, and it is continuous at &lt;math display=&quot;inline&quot;&gt;x = 0&lt;/math&gt;. <br /> <br /> Figure 1 below gives an intuition for how SELUs normalize activations across layers. As shown, a variance dampening effect occurs when inputs are negative and far away from zero, and a variance increasing effect occurs when inputs are close to zero.<br /> <br /> [[File:snnf1.png|500px]]<br /> <br /> Figure 2 below plots the progression of training error on the MNIST and CIFAR10 datasets when training with SNNs versus FNNs with batch normalization at varying model depths. As shown, FNNs that adopted the SELU activation function exhibited lower and less variable training loss compared to using batch normalization, even as the depth increased to 16 and 32 layers.<br /> <br /> [[File:snnf2.png|600px]]<br /> <br /> === Banach Fixed Point Theorem and Contraction Mappings ===<br /> <br /> The underlying theory behind SNNs is the Banach fixed point theorem, which states the following: ''Let &lt;math display=&quot;inline&quot;&gt;(X, d)&lt;/math&gt; be a non-empty complete metric space with a contraction mapping &lt;math display=&quot;inline&quot;&gt;f: X \rightarrow X&lt;/math&gt;.
Then &lt;math display=&quot;inline&quot;&gt;f&lt;/math&gt; has a unique fixed point &lt;math display=&quot;inline&quot;&gt;x_f \in X&lt;/math&gt; with &lt;math display=&quot;inline&quot;&gt;f(x_f) = x_f&lt;/math&gt;. Every sequence &lt;math display=&quot;inline&quot;&gt;x_n = f(x_{n-1})&lt;/math&gt; with starting element &lt;math display=&quot;inline&quot;&gt;x_0 \in X&lt;/math&gt; converges to the fixed point: &lt;math display=&quot;inline&quot;&gt;x_n \underset{n \rightarrow \infty}\rightarrow x_f&lt;/math&gt;.''<br /> <br /> A contraction mapping is a function &lt;math display=&quot;inline&quot;&gt;f: X \rightarrow X&lt;/math&gt; on a metric space &lt;math display=&quot;inline&quot;&gt;X&lt;/math&gt; with distance &lt;math display=&quot;inline&quot;&gt;d&lt;/math&gt;, such that for all points &lt;math display=&quot;inline&quot;&gt;\mathbf{u}&lt;/math&gt; and &lt;math display=&quot;inline&quot;&gt;\mathbf{v}&lt;/math&gt; in &lt;math display=&quot;inline&quot;&gt;X&lt;/math&gt;: &lt;math display=&quot;inline&quot;&gt;d(f(\mathbf{u}), f(\mathbf{v})) \leqslant \delta d(\mathbf{u}, \mathbf{v})&lt;/math&gt;, for a &lt;math display=&quot;inline&quot;&gt;0 \leqslant \delta &lt; 1&lt;/math&gt;.<br /> <br /> The easiest way to show that a mapping is a contraction is usually to show that the spectral norm of its Jacobian is less than 1, as was done in this paper.<br /> <br /> ==Proving the Self-Normalizing Property==<br /> <br /> ===Mean and Variance Mapping Function===<br /> <br /> &lt;math display=&quot;inline&quot;&gt;g&lt;/math&gt; is derived under the assumption that the &lt;math display=&quot;inline&quot;&gt;x_i&lt;/math&gt;'s are independent but do not necessarily have the same mean and variance [[#Footnotes |(2)]].
Under this assumption (and recalling earlier notation of &lt;math display=&quot;inline&quot;&gt;\omega&lt;/math&gt; and &lt;math display=&quot;inline&quot;&gt;\tau&lt;/math&gt;),<br /> <br /> \begin{align}<br /> \mathrm{E}(z = \mathbf{w}^T \mathbf{x}) = \sum_{i = 1}^n w_i \mathrm{E}(x_i) = \mu \omega<br /> \end{align}<br /> <br /> \begin{align}<br /> \mathrm{Var}(z) = \mathrm{Var}(\sum_{i = 1}^n w_i x_i) = \sum_{i = 1}^n w_i^2 \mathrm{Var}(x_i) = \nu \sum_{i = 1}^n w_i^2 = \nu\tau \textrm{ .}<br /> \end{align}<br /> <br /> When the weight terms are normalized, &lt;math display=&quot;inline&quot;&gt;z&lt;/math&gt; can be viewed as a weighted sum of &lt;math display=&quot;inline&quot;&gt;x_i&lt;/math&gt;'s. Wide neural net layers with a large number of nodes are common, so &lt;math display=&quot;inline&quot;&gt;n&lt;/math&gt; is usually large, and by the Central Limit Theorem, &lt;math display=&quot;inline&quot;&gt;z&lt;/math&gt; approaches a normal distribution &lt;math display=&quot;inline&quot;&gt;\mathcal{N}(\mu\omega, \sqrt{\nu\tau})&lt;/math&gt; (mean &lt;math display=&quot;inline&quot;&gt;\mu\omega&lt;/math&gt;, variance &lt;math display=&quot;inline&quot;&gt;\nu\tau&lt;/math&gt;). <br /> <br /> Using the above property, the exact form for &lt;math display=&quot;inline&quot;&gt;g&lt;/math&gt; can be obtained using the definitions for mean and variance of continuous random variables: <br /> <br /> [[File:gmapping.png|600px|center]]<br /> <br /> Analytical solutions for the integrals can be obtained as follows: <br /> <br /> [[File:gintegral.png|600px|center]]<br /> <br /> The authors are interested in the fixed point &lt;math display=&quot;inline&quot;&gt;(\mu, \nu) = (0, 1)&lt;/math&gt; as these are the parameters associated with the common standard normal distribution.
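As a quick numerical check (a sketch using scipy, not code from the paper), the two moment integrals defining &lt;math display=&quot;inline&quot;&gt;g&lt;/math&gt; can be evaluated directly: with normalized weights (&lt;math display=&quot;inline&quot;&gt;\omega = 0&lt;/math&gt;, &lt;math display=&quot;inline&quot;&gt;\tau = 1&lt;/math&gt;) and high-precision SELU constants, the point &lt;math display=&quot;inline&quot;&gt;(\mu, \nu) = (0, 1)&lt;/math&gt; is mapped to itself:

```python
import numpy as np
from scipy.integrate import quad

# High-precision SELU constants lambda_01 and alpha_01
LAM, ALPHA = 1.0507009873554805, 1.6732632423543772

def selu(z):
    # np.minimum keeps exp() from overflowing on the branch that where() discards
    return LAM * np.where(z > 0, z, ALPHA * (np.exp(np.minimum(z, 0.0)) - 1.0))

def g(mu, omega, nu, tau):
    """Moment mapping (mu, nu) -> (mu~, nu~) for z = w^T x ~ N(mu*omega, nu*tau)."""
    m, s = mu * omega, np.sqrt(nu * tau)
    pdf = lambda z: np.exp(-0.5 * ((z - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
    mu_t = quad(lambda z: selu(z) * pdf(z), -np.inf, np.inf)[0]
    xi_t = quad(lambda z: selu(z) ** 2 * pdf(z), -np.inf, np.inf)[0]
    return mu_t, xi_t - mu_t ** 2   # (mean, variance) of selu(z)

mu_t, nu_t = g(0.0, 0.0, 1.0, 1.0)
print(mu_t, nu_t)   # approximately (0, 1): the fixed point maps to itself
```

The same `g` can be applied repeatedly to other starting points in the domain to watch them drift toward the fixed point.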
The authors also proposed using normalized weights such that &lt;math display=&quot;inline&quot;&gt;\omega = \sum_{i = 1}^n w_i = 0&lt;/math&gt; and &lt;math display=&quot;inline&quot;&gt;\tau = \sum_{i = 1}^n w_i^2= 1&lt;/math&gt; as it gives a simpler, cleaner expression for &lt;math display=&quot;inline&quot;&gt;\widetilde{\mu}&lt;/math&gt; and &lt;math display=&quot;inline&quot;&gt;\widetilde{\nu}&lt;/math&gt; in the calculations in the next steps. This weight scheme can be achieved (approximately) in several ways, for example, by drawing each weight from a normal distribution &lt;math display=&quot;inline&quot;&gt;\mathcal{N}(0, \frac{1}{n})&lt;/math&gt; or from a uniform distribution &lt;math display=&quot;inline&quot;&gt;U(-\sqrt{3/n}, \sqrt{3/n})&lt;/math&gt;, both of which have mean 0 and variance &lt;math display=&quot;inline&quot;&gt;\frac{1}{n}&lt;/math&gt;.<br /> <br /> At &lt;math display=&quot;inline&quot;&gt;\widetilde{\mu} = \mu = 0&lt;/math&gt;, &lt;math display=&quot;inline&quot;&gt;\widetilde{\nu} = \nu = 1&lt;/math&gt;, &lt;math display=&quot;inline&quot;&gt;\omega = 0&lt;/math&gt; and &lt;math display=&quot;inline&quot;&gt;\tau = 1&lt;/math&gt;, the constants &lt;math display=&quot;inline&quot;&gt;\lambda&lt;/math&gt; and &lt;math display=&quot;inline&quot;&gt;\alpha&lt;/math&gt; from the SELU function can be solved for - &lt;math display=&quot;inline&quot;&gt;\lambda_{01} \approx 1.0507&lt;/math&gt; and &lt;math display=&quot;inline&quot;&gt;\alpha_{01} \approx 1.6733&lt;/math&gt;.
These values are used throughout the rest of the paper whenever an expression calls for &lt;math display=&quot;inline&quot;&gt;\lambda&lt;/math&gt; and &lt;math display=&quot;inline&quot;&gt;\alpha&lt;/math&gt;.<br /> <br /> ===Details of Moment-Mapping Integrals===<br /> Consider the moment-mapping integrals:<br /> \begin{align}<br /> \widetilde{\mu} &amp; = \int_{-\infty}^\infty \mathrm{selu} (z) p_N(z; \mu \omega, \sqrt{\nu \tau})dz\\<br /> \widetilde{\nu} &amp; = \int_{-\infty}^\infty \mathrm{selu} (z)^2 p_N(z; \mu \omega, \sqrt{\nu \tau}) dz-\widetilde{\mu}^2.<br /> \end{align}<br /> <br /> The equation for &lt;math display=&quot;inline&quot;&gt;\widetilde{\mu}&lt;/math&gt; can be expanded as <br /> \begin{align}<br /> \widetilde{\mu} &amp; = \frac{\lambda}{2}\left( 2\alpha\int_{-\infty}^0 (\exp(z)-1) p_N(z; \mu \omega, \sqrt{\nu \tau})dz +2\int_{0}^\infty z p_N(z; \mu \omega, \sqrt{\nu \tau})dz \right)\\<br /> &amp;= \frac{\lambda}{2}\left( 2 \alpha \frac{1}{\sqrt{2\pi\tau\nu}} \int_{-\infty}^0 (\exp(z)-1) \exp(\frac{-1}{2\tau \nu} (z-\mu \omega)^2 ) dz +2\frac{1}{\sqrt{2\pi\tau\nu}}\int_{0}^\infty z \exp(\frac{-1}{2\tau \nu} (z-\mu \omega)^2 ) dz \right)\\<br /> &amp;= \frac{\lambda}{2}\left( 2 \alpha\frac{1}{\sqrt{2\pi\tau\nu}}\int_{-\infty}^0 \exp(z) \exp(\frac{-1}{2\tau \nu} (z-\mu \omega)^2 ) dz - 2 \alpha\frac{1}{\sqrt{2\pi\tau\nu}}\int_{-\infty}^0 \exp(\frac{-1}{2\tau \nu} (z-\mu \omega)^2 ) dz +2\frac{1}{\sqrt{2\pi\tau\nu}}\int_{0}^\infty z \exp(\frac{-1}{2\tau \nu} (z-\mu \omega)^2 ) dz \right)\\<br /> \end{align}<br /> <br /> The first integral can be simplified via the substitution<br /> \begin{align}<br /> q:= \frac{1}{\sqrt{2\tau \nu}}(z-\mu \omega -\tau \nu).<br /> \end{align}<br /> while the second and third can be simplified via the substitution<br /> \begin{align}<br /> q:= \frac{1}{\sqrt{2\tau \nu}}(z-\mu \omega ).<br /> \end{align}<br /> Using the definitions of &lt;math display=&quot;inline&quot;&gt;\mathrm{erf}&lt;/math&gt; and &lt;math
display=&quot;inline&quot;&gt;\mathrm{erfc}&lt;/math&gt; then yields the result of the previous section.<br /> <br /> ===Self-Normalizing Property Under Normalized Weights===<br /> <br /> Assuming the weights are normalized with &lt;math display=&quot;inline&quot;&gt;\omega=0&lt;/math&gt; and &lt;math display=&quot;inline&quot;&gt;\tau=1&lt;/math&gt;, it is possible to calculate the exact value for the spectral norm of &lt;math display=&quot;inline&quot;&gt;g&lt;/math&gt;'s Jacobian around the fixed point &lt;math display=&quot;inline&quot;&gt;(\mu, \nu) = (0, 1)&lt;/math&gt;, which turns out to be &lt;math display=&quot;inline&quot;&gt;0.7877&lt;/math&gt;. Thus, at initialization, SNNs have a stable and attracting fixed point at &lt;math display=&quot;inline&quot;&gt;(0, 1)&lt;/math&gt;, which means that when &lt;math display=&quot;inline&quot;&gt;g&lt;/math&gt; is applied iteratively to a pair &lt;math display=&quot;inline&quot;&gt;(\mu, \nu)&lt;/math&gt;, it should draw the points closer to &lt;math display=&quot;inline&quot;&gt;(0, 1)&lt;/math&gt;. The rate of convergence is determined by the spectral norm, whose value depends on &lt;math display=&quot;inline&quot;&gt;\mu&lt;/math&gt;, &lt;math display=&quot;inline&quot;&gt;\nu&lt;/math&gt;, &lt;math display=&quot;inline&quot;&gt;\omega&lt;/math&gt; and &lt;math display=&quot;inline&quot;&gt;\tau&lt;/math&gt;.<br /> <br /> [[File:paper10_fig2.png|600px|frame|none|alt=Alt text|The figure illustrates, in the scenario described above, the mapping &lt;math display=&quot;inline&quot;&gt;g&lt;/math&gt; of mean and variance &lt;math display=&quot;inline&quot;&gt;(\mu, \nu)&lt;/math&gt; to &lt;math display=&quot;inline&quot;&gt;(\mu_{new}, \nu_{new})&lt;/math&gt;. The arrows show the direction &lt;math display=&quot;inline&quot;&gt;(\mu, \nu)&lt;/math&gt; is mapped by &lt;math display=&quot;inline&quot;&gt;g: (\mu, \nu)\mapsto(\mu_{new}, \nu_{new})&lt;/math&gt;.
One can clearly see that the fixed point of the mapping &lt;math display=&quot;inline&quot;&gt;g&lt;/math&gt; is at &lt;math display=&quot;inline&quot;&gt;(0, 1)&lt;/math&gt;.]]<br /> <br /> ===Self-Normalizing Property Under Unnormalized Weights===<br /> <br /> As weights are updated during training, there is no guarantee that they would remain normalized. The authors addressed this issue through the first key theorem presented in the paper, which states that a fixed point close to (0, 1) can still be obtained if &lt;math display=&quot;inline&quot;&gt;\mu&lt;/math&gt;, &lt;math display=&quot;inline&quot;&gt;\nu&lt;/math&gt;, &lt;math display=&quot;inline&quot;&gt;\omega&lt;/math&gt; and &lt;math display=&quot;inline&quot;&gt;\tau&lt;/math&gt; are restricted to a specified range. <br /> <br /> Additionally, there is no guarantee that the mean and variance of the inputs would stay within the range given by the first theorem, which led to the development of theorems #2 and #3. These two theorems established an upper and lower bound on the variance of inputs if the variance of activations from the previous layer is above or below the specified range, respectively. This ensures that the variance would not explode or vanish after being propagated through the network.<br /> <br /> The theorems come with lengthy proofs in the supplementary materials for the paper. High-level proof sketches are presented here.<br /> <br /> ====Theorem 1: Stable and Attracting Fixed Points Close to (0, 1)====<br /> <br /> '''Definition:''' We assume &lt;math display=&quot;inline&quot;&gt;\alpha = \alpha_{01}&lt;/math&gt; and &lt;math display=&quot;inline&quot;&gt;\lambda = \lambda_{01}&lt;/math&gt;.
We restrict the range of the variables to the domain &lt;math display=&quot;inline&quot;&gt;\mu \in [-0.1, 0.1]&lt;/math&gt;, &lt;math display=&quot;inline&quot;&gt;\omega \in [-0.1, 0.1]&lt;/math&gt;, &lt;math display=&quot;inline&quot;&gt;\nu \in [0.8, 1.5]&lt;/math&gt;, and &lt;math display=&quot;inline&quot;&gt;\tau \in [0.9, 1.1]&lt;/math&gt;. For &lt;math display=&quot;inline&quot;&gt;\omega = 0&lt;/math&gt; and &lt;math display=&quot;inline&quot;&gt;\tau = 1&lt;/math&gt;, the mapping has the stable fixed point &lt;math display=&quot;inline&quot;&gt;(\mu, \nu) = (0, 1)&lt;/math&gt;. For other &lt;math display=&quot;inline&quot;&gt;\omega&lt;/math&gt; and &lt;math display=&quot;inline&quot;&gt;\tau&lt;/math&gt;, g has a stable and attracting fixed point depending on &lt;math display=&quot;inline&quot;&gt;(\omega, \tau)&lt;/math&gt; in the &lt;math display=&quot;inline&quot;&gt;(\mu, \nu)&lt;/math&gt;-domain: &lt;math display=&quot;inline&quot;&gt;\mu \in [-0.03106, 0.06773]&lt;/math&gt; and &lt;math display=&quot;inline&quot;&gt;\nu \in [0.80009, 1.48617]&lt;/math&gt;. All points within the &lt;math display=&quot;inline&quot;&gt;(\mu, \nu)&lt;/math&gt;-domain converge to this fixed point when the mapping is applied iteratively.<br /> <br /> '''Proof:''' In order to show that the mapping &lt;math display=&quot;inline&quot;&gt;g&lt;/math&gt; has a stable and attracting fixed point close to &lt;math display=&quot;inline&quot;&gt;(0, 1)&lt;/math&gt;, the authors again applied Banach's fixed point theorem, which states that a contraction mapping on a nonempty complete metric space that does not map outside its domain has a unique fixed point, and that all points in the &lt;math display=&quot;inline&quot;&gt;(\mu, \nu)&lt;/math&gt;-domain converge to the fixed point when &lt;math display=&quot;inline&quot;&gt;g&lt;/math&gt; is iteratively applied. <br /> <br /> The two requirements are proven as follows:<br /> <br /> '''1.
g is a contraction mapping.'''<br /> <br /> For &lt;math display=&quot;inline&quot;&gt;g&lt;/math&gt; to be a contraction mapping in &lt;math display=&quot;inline&quot;&gt;\Omega&lt;/math&gt; with distance &lt;math display=&quot;inline&quot;&gt;||\cdot||_2&lt;/math&gt;, there must exist a Lipschitz constant &lt;math display=&quot;inline&quot;&gt;M &lt; 1&lt;/math&gt; such that: <br /> <br /> \begin{align} <br /> \forall \mu, \nu \in \Omega: ||g(\mu) - g(\nu)||_2 \leqslant M||\mu - \nu||_2 <br /> \end{align}<br /> <br /> As stated earlier, &lt;math display=&quot;inline&quot;&gt;g&lt;/math&gt; is a contraction mapping if the spectral norm of the Jacobian &lt;math display=&quot;inline&quot;&gt;\mathcal{H}&lt;/math&gt; [[#Footnotes | (3)]] is below one, or equivalently, if the largest singular value of &lt;math display=&quot;inline&quot;&gt;\mathcal{H}&lt;/math&gt; is less than 1.<br /> <br /> To find the singular values of &lt;math display=&quot;inline&quot;&gt;\mathcal{H}&lt;/math&gt;, the authors used an explicit formula derived by Blinn for &lt;math display=&quot;inline&quot;&gt;2\times2&lt;/math&gt; matrices, which states that the largest singular value of the matrix is &lt;math display=&quot;inline&quot;&gt;\frac{1}{2}(\sqrt{(a_{11} + a_{22}) ^ 2 + (a_{21} - a_{12})^2} + \sqrt{(a_{11} - a_{22}) ^ 2 + (a_{21} + a_{12})^2})&lt;/math&gt;.<br /> <br /> For &lt;math display=&quot;inline&quot;&gt;\mathcal{H}&lt;/math&gt;, an expression for the largest singular value of &lt;math display=&quot;inline&quot;&gt;\mathcal{H}&lt;/math&gt;, made up of the first-order partial derivatives of the mapping &lt;math display=&quot;inline&quot;&gt;g&lt;/math&gt; with respect to &lt;math display=&quot;inline&quot;&gt;\mu&lt;/math&gt; and &lt;math display=&quot;inline&quot;&gt;\nu&lt;/math&gt;, can be derived given the analytical solutions for &lt;math display=&quot;inline&quot;&gt;\widetilde{\mu}&lt;/math&gt; and &lt;math display=&quot;inline&quot;&gt;\widetilde{\nu}&lt;/math&gt;
(and denoted &lt;math display=&quot;inline&quot;&gt;S(\mu, \omega, \nu, \tau, \lambda, \alpha)&lt;/math&gt;).<br /> <br /> From the mean value theorem, we know that for a &lt;math display=&quot;inline&quot;&gt;t \in [0, 1]&lt;/math&gt;, <br /> <br /> [[File:seq.png|600px|center]]<br /> <br /> Therefore, the distance between the singular value at &lt;math display=&quot;inline&quot;&gt;S(\mu, \omega, \nu, \tau, \lambda_{\mathrm{01}}, \alpha_{\mathrm{01}})&lt;/math&gt; and at &lt;math display=&quot;inline&quot;&gt;S(\mu + \Delta\mu, \omega + \Delta\omega, \nu + \Delta\nu, \tau + \Delta\tau, \lambda_{\mathrm{01}}, \alpha_{\mathrm{01}})&lt;/math&gt; can be bounded above by <br /> <br /> [[File:seq2.png|600px|center]]<br /> <br /> An upper bound was obtained for each partial derivative term above, mainly through algebraic reformulations and by making use of the fact that many of the functions are monotonically increasing or decreasing on the variables they depend on in &lt;math display=&quot;inline&quot;&gt;\Omega&lt;/math&gt; (see pages 17 - 25 in the supplementary materials).<br /> <br /> The &lt;math display=&quot;inline&quot;&gt;\Delta&lt;/math&gt; terms were then set (rather arbitrarily) to be: &lt;math display=&quot;inline&quot;&gt;\Delta \mu=0.0068097371&lt;/math&gt;,<br /> &lt;math display=&quot;inline&quot;&gt;\Delta \omega=0.0008292885&lt;/math&gt;, &lt;math display=&quot;inline&quot;&gt;\Delta \nu=0.0009580840&lt;/math&gt;, and &lt;math display=&quot;inline&quot;&gt;\Delta \tau=0.0007323095&lt;/math&gt;.
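The quantity S can also be probed numerically. The sketch below is a spot check, not part of the paper's proof: it builds the 2&times;2 Jacobian of g : (&mu;, &nu;) &#8614; (&mu;&#771;, &nu;&#771;) by central finite differences on numerically integrated moments and takes its largest singular value with numpy. At the fixed point this reproduces the value 0.7877 quoted earlier in this summary, and at the corners of &Omega; it stays below 1:

```python
import itertools
import numpy as np
from scipy.integrate import quad

# High-precision SELU constants lambda_01 and alpha_01
LAM, ALPHA = 1.0507009873554805, 1.6732632423543772

def selu(z):
    return LAM * np.where(z > 0, z, ALPHA * (np.exp(np.minimum(z, 0.0)) - 1.0))

def g(mu, omega, nu, tau):
    """Moment mapping (mu, nu) -> (mu~, nu~) for z ~ N(mu*omega, nu*tau)."""
    m, s = mu * omega, np.sqrt(nu * tau)
    pdf = lambda z: np.exp(-0.5 * ((z - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
    mu_t = quad(lambda z: selu(z) * pdf(z), -np.inf, np.inf)[0]
    xi_t = quad(lambda z: selu(z) ** 2 * pdf(z), -np.inf, np.inf)[0]
    return np.array([mu_t, xi_t - mu_t ** 2])

def S(mu, omega, nu, tau, h=1e-4):
    """Largest singular value of the Jacobian of g, via central differences."""
    d_mu = (g(mu + h, omega, nu, tau) - g(mu - h, omega, nu, tau)) / (2 * h)
    d_nu = (g(mu, omega, nu + h, tau) - g(mu, omega, nu - h, tau)) / (2 * h)
    return np.linalg.svd(np.column_stack([d_mu, d_nu]), compute_uv=False)[0]

s_fixed = S(0.0, 0.0, 1.0, 1.0)
print(s_fixed)        # close to the 0.7877 quoted earlier in this summary
norms = [S(*p) for p in itertools.product([-0.1, 0.1], [-0.1, 0.1],
                                          [0.8, 1.5], [0.9, 1.1])]
print(max(norms))     # stays below 1 at the sampled corners of Omega
```

This samples a handful of points rather than bounding S over all of &Omega;, which is exactly the gap the paper's grid search plus perturbation bound closes.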
Plugging in the upper bounds on the absolute values of the derivative terms for &lt;math display=&quot;inline&quot;&gt;S&lt;/math&gt; and the &lt;math display=&quot;inline&quot;&gt;\Delta&lt;/math&gt; terms yields<br /> <br /> $S(\mu + \Delta \mu,\omega + \Delta \omega,\nu + \Delta \nu,\tau + \Delta \tau,\lambda_{\rm 01},\alpha_{\rm 01}) - S(\mu,\omega,\nu,\tau,\lambda_{\rm 01},\alpha_{\rm 01}) &lt; 0.008747$<br /> <br /> Next, the largest singular value is found from a computer-assisted fine grid-search [[#Footnotes | (1)]] over the domain &lt;math display=&quot;inline&quot;&gt;\Omega&lt;/math&gt;, with grid lengths &lt;math display=&quot;inline&quot;&gt;\Delta \mu=0.0068097371&lt;/math&gt;, &lt;math display=&quot;inline&quot;&gt;\Delta \omega=0.0008292885&lt;/math&gt;, &lt;math display=&quot;inline&quot;&gt;\Delta \nu=0.0009580840&lt;/math&gt;, and &lt;math display=&quot;inline&quot;&gt;\Delta \tau=0.0007323095&lt;/math&gt;, which turned out to be &lt;math display=&quot;inline&quot;&gt;0.9912524171058772&lt;/math&gt;. Therefore, <br /> <br /> $S(\mu + \Delta \mu,\omega + \Delta \omega,\nu + \Delta \nu,\tau + \Delta \tau,\lambda_{\rm 01},\alpha_{\rm 01}) \leq 0.9912524171058772 + 0.008747 &lt; 1$<br /> <br /> Since the largest singular value is smaller than 1, &lt;math display=&quot;inline&quot;&gt;g&lt;/math&gt; is a contraction mapping.<br /> <br /> '''2. g does not map outside its domain.'''<br /> <br /> To prove that &lt;math display=&quot;inline&quot;&gt;g&lt;/math&gt; does not map outside of the domain &lt;math display=&quot;inline&quot;&gt;\mu \in [-0.1, 0.1]&lt;/math&gt; and &lt;math display=&quot;inline&quot;&gt;\nu \in [0.8, 1.5]&lt;/math&gt;, lower and upper bounds on &lt;math display=&quot;inline&quot;&gt;\widetilde{\mu}&lt;/math&gt; and &lt;math display=&quot;inline&quot;&gt;\widetilde{\nu}&lt;/math&gt; were obtained to show that they stay within &lt;math display=&quot;inline&quot;&gt;\Omega&lt;/math&gt;.
<br /> <br /> First, it was shown that the derivatives of &lt;math display=&quot;inline&quot;&gt;\widetilde{\mu}&lt;/math&gt; and &lt;math display=&quot;inline&quot;&gt;\widetilde{\xi}&lt;/math&gt; with respect to &lt;math display=&quot;inline&quot;&gt;\mu&lt;/math&gt; and &lt;math display=&quot;inline&quot;&gt;\nu&lt;/math&gt; are either positive or have the sign of &lt;math display=&quot;inline&quot;&gt;\omega&lt;/math&gt; in &lt;math display=&quot;inline&quot;&gt;\Omega&lt;/math&gt;, so the minimum and maximum points are found at the borders. In &lt;math display=&quot;inline&quot;&gt;\Omega&lt;/math&gt;, it then follows that<br /> <br /> \begin{align}<br /> -0.03106 &lt;\widetilde{\mu}(-0.1,0.1, 0.8, 0.95, \lambda_{\rm 01}, \alpha_{\rm 01}) \leq &amp; \widetilde{\mu} \leq \widetilde{\mu}(0.1,0.1,1.5, 1.1, \lambda_{\rm 01}, \alpha_{\rm 01}) &lt; 0.06773<br /> \end{align}<br /> <br /> and <br /> <br /> \begin{align}<br /> 0.80467 &lt;\widetilde{\xi}(-0.1,0.1, 0.8, 0.95, \lambda_{\rm 01}, \alpha_{\rm 01}) \leq &amp; \widetilde{\xi} \leq \widetilde{\xi}(0.1,0.1,1.5, 1.1, \lambda_{\rm 01}, \alpha_{\rm 01}) &lt; 1.48617.<br /> \end{align}<br /> <br /> Since &lt;math display=&quot;inline&quot;&gt;\widetilde{\nu} = \widetilde{\xi} - \widetilde{\mu}^2&lt;/math&gt;, <br /> <br /> \begin{align}<br /> 0.80009 &amp; \leqslant \widetilde{\nu} \leqslant 1.48617<br /> \end{align}<br /> <br /> The bounds on &lt;math display=&quot;inline&quot;&gt;\widetilde{\mu}&lt;/math&gt; and &lt;math display=&quot;inline&quot;&gt;\widetilde{\nu}&lt;/math&gt; are narrower than those for &lt;math display=&quot;inline&quot;&gt;\mu&lt;/math&gt; and &lt;math display=&quot;inline&quot;&gt;\nu&lt;/math&gt; set out in &lt;math display=&quot;inline&quot;&gt;\Omega&lt;/math&gt;, therefore &lt;math display=&quot;inline&quot;&gt;g(\Omega) \subseteq \Omega&lt;/math&gt;.<br /> <br /> ==== Theorem 2: Decreasing Variance from Above ====<br /> <br /> '''Definition:''' For &lt;math 
display=&quot;inline&quot;&gt;\lambda = \lambda_{01}&lt;/math&gt;, &lt;math display=&quot;inline&quot;&gt;\alpha = \alpha_{01}&lt;/math&gt;, and the domain &lt;math display=&quot;inline&quot;&gt;\Omega^+: -1 \leqslant \mu \leqslant 1, -0.1 \leqslant \omega \leqslant 0.1, 3 \leqslant \nu \leqslant 16&lt;/math&gt;, and &lt;math display=&quot;inline&quot;&gt;0.8 \leqslant \tau \leqslant 1.25&lt;/math&gt;, we have for the mapping of the variance &lt;math display=&quot;inline&quot;&gt;\widetilde{\nu}(\mu, \omega, \nu, \tau, \lambda, \alpha)&lt;/math&gt; under &lt;math display=&quot;inline&quot;&gt;g&lt;/math&gt;: &lt;math display=&quot;inline&quot;&gt;\widetilde{\nu}(\mu, \omega, \nu, \tau, \lambda, \alpha) &lt; \nu&lt;/math&gt;.<br /> <br /> Theorem 2 states that when &lt;math display=&quot;inline&quot;&gt;\nu \in [3, 16]&lt;/math&gt;, the mapping &lt;math display=&quot;inline&quot;&gt;g&lt;/math&gt; draws it to below 3 when applied across layers, thereby establishing an upper bound of &lt;math display=&quot;inline&quot;&gt;\nu &lt; 3&lt;/math&gt; on variance.<br /> <br /> '''Proof:''' The authors proved the inequality by showing that &lt;math display=&quot;inline&quot;&gt;\widetilde{\xi}(\mu, \omega, \nu, \tau, \lambda_{01}, \alpha_{01}) - \nu &lt; 0&lt;/math&gt;; since the variance &lt;math display=&quot;inline&quot;&gt;\widetilde{\nu}&lt;/math&gt; is at most the second moment &lt;math display=&quot;inline&quot;&gt;\widetilde{\xi}&lt;/math&gt;, this implies &lt;math display=&quot;inline&quot;&gt;\widetilde{\nu} &lt; \nu&lt;/math&gt;.
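The shrinking of large variances can be illustrated numerically at the corners of &Omega;&#8314; (again a sketch by direct numerical integration, not the paper's derivation):

```python
import itertools
import numpy as np
from scipy.integrate import quad

# High-precision SELU constants lambda_01 and alpha_01
LAM, ALPHA = 1.0507009873554805, 1.6732632423543772

def selu(z):
    return LAM * np.where(z > 0, z, ALPHA * (np.exp(np.minimum(z, 0.0)) - 1.0))

def nu_tilde(mu, omega, nu, tau):
    """Mapped variance of selu(z) for z ~ N(mu*omega, nu*tau)."""
    m, s = mu * omega, np.sqrt(nu * tau)
    pdf = lambda z: np.exp(-0.5 * ((z - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
    mu_t = quad(lambda z: selu(z) * pdf(z), -np.inf, np.inf)[0]
    xi_t = quad(lambda z: selu(z) ** 2 * pdf(z), -np.inf, np.inf)[0]
    return xi_t - mu_t ** 2

# corners of Omega+: the mapped variance is smaller than the input variance
shrinks = [nu_tilde(mu, om, nu, tau) < nu
           for mu, om, nu, tau in itertools.product([-1.0, 1.0], [-0.1, 0.1],
                                                    [3.0, 16.0], [0.8, 1.25])]
print(all(shrinks))
```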
The behavior of &lt;math display=&quot;inline&quot;&gt;\frac{\partial }{\partial \mu } \widetilde{\xi}(\mu, \omega, \nu, \tau, \lambda, \alpha)&lt;/math&gt;, &lt;math display=&quot;inline&quot;&gt;\frac{\partial }{\partial \omega } \widetilde{\xi}(\mu, \omega, \nu, \tau, \lambda, \alpha)&lt;/math&gt;, &lt;math display=&quot;inline&quot;&gt;\frac{\partial }{\partial \nu } \widetilde{\xi}(\mu, \omega, \nu, \tau, \lambda, \alpha)&lt;/math&gt;, and &lt;math display=&quot;inline&quot;&gt;\frac{\partial }{\partial \tau } \widetilde{\xi}(\mu, \omega, \nu, \tau, \lambda, \alpha)&lt;/math&gt; are used to find the bounds on &lt;math display=&quot;inline&quot;&gt;g(\mu, \omega, \xi, \tau, \lambda_{01}, \alpha_{01})&lt;/math&gt; (see pages 9 - 13 in the supplementary materials). Again, the partial derivative terms were monotonic, which made it possible to find the upper bound at the border values. It was shown that the maximum value of &lt;math display=&quot;inline&quot;&gt;g&lt;/math&gt; does not exceed &lt;math display=&quot;inline&quot;&gt;-0.0180173&lt;/math&gt;.<br /> <br /> ==== Theorem 3: Increasing Variance from Below ====<br /> <br /> '''Definition''': We consider &lt;math display=&quot;inline&quot;&gt;\lambda = \lambda_{01}&lt;/math&gt;, &lt;math display=&quot;inline&quot;&gt;\alpha = \alpha_{01}&lt;/math&gt;, and the domain &lt;math display=&quot;inline&quot;&gt;\Omega^-: -0.1 \leqslant \mu \leqslant 0.1&lt;/math&gt; and &lt;math display=&quot;inline&quot;&gt;-0.1 \leqslant \omega \leqslant 0.1&lt;/math&gt;. 
For the domain &lt;math display=&quot;inline&quot;&gt;0.02 \leqslant \nu \leqslant 0.16&lt;/math&gt; and &lt;math display=&quot;inline&quot;&gt;0.8 \leqslant \tau \leqslant 1.25&lt;/math&gt; as well as for the domain &lt;math display=&quot;inline&quot;&gt;0.02 \leqslant \nu \leqslant 0.24&lt;/math&gt; and &lt;math display=&quot;inline&quot;&gt;0.9 \leqslant \tau \leqslant 1.25&lt;/math&gt;, the mapping of the variance &lt;math display=&quot;inline&quot;&gt;\widetilde{\nu}(\mu, \omega, \nu, \tau, \lambda, \alpha)&lt;/math&gt; increases: &lt;math display=&quot;inline&quot;&gt;\widetilde{\nu}(\mu, \omega, \nu, \tau, \lambda, \alpha) &gt; \nu&lt;/math&gt;.<br /> <br /> Theorem 3 states that the variance &lt;math display=&quot;inline&quot;&gt;\widetilde{\nu}&lt;/math&gt; increases when variance is smaller than in &lt;math display=&quot;inline&quot;&gt;\Omega&lt;/math&gt;. The lower bound on variance is &lt;math display=&quot;inline&quot;&gt;\widetilde{\nu} &gt; 0.16&lt;/math&gt; when &lt;math display=&quot;inline&quot;&gt;0.8 \leqslant \tau&lt;/math&gt; and &lt;math display=&quot;inline&quot;&gt;\widetilde{\nu} &gt; 0.24&lt;/math&gt; when &lt;math display=&quot;inline&quot;&gt;0.9 \leqslant \tau&lt;/math&gt; under the proposed mapping.<br /> <br /> '''Proof:''' According to the mean value theorem, for a &lt;math display=&quot;inline&quot;&gt;t \in [0, 1]&lt;/math&gt;,<br /> <br /> [[File:th3.png|700px|center]]<br /> <br /> Similar to the proof for Theorem 2 (except we are interested in the smallest &lt;math display=&quot;inline&quot;&gt;\widetilde{\nu}&lt;/math&gt; instead of the biggest), the lower bound for &lt;math display=&quot;inline&quot;&gt;\frac{\partial }{\partial \nu} \widetilde{\xi}(\mu,\omega,\nu+t(\nu_{\mathrm{min}}-\nu),\tau,\lambda_{\rm 01},\alpha_{\rm 01})&lt;/math&gt; can be derived, and substituted into the relationship &lt;math display=&quot;inline&quot;&gt;\widetilde{\nu} = \widetilde{\xi}(\mu,\omega,\nu,\tau,\lambda_{\rm 01},\alpha_{\rm 01}) - 
(\widetilde{\mu}(\mu,\omega,\nu,\tau,\lambda_{\rm 01},\alpha_{\rm 01}))^2&lt;/math&gt;. The lower bound depends on &lt;math display=&quot;inline&quot;&gt;\tau&lt;/math&gt; and &lt;math display=&quot;inline&quot;&gt;\nu&lt;/math&gt;, and in the domains &lt;math display=&quot;inline&quot;&gt;\Omega^{-}&lt;/math&gt; listed, it is slightly above &lt;math display=&quot;inline&quot;&gt;\nu&lt;/math&gt;.<br /> <br /> == Implementation Details ==<br /> <br /> === Initialization ===<br /> <br /> As previously explained, SNNs work best when inputs to the network are standardized, and the weights are initialized with a mean of 0 and a variance of &lt;math display=&quot;inline&quot;&gt;\frac{1}{n}&lt;/math&gt; to help converge to the fixed point &lt;math display=&quot;inline&quot;&gt;(\mu, \nu) = (0, 1)&lt;/math&gt;.<br /> <br /> === Dropout Technique ===<br /> <br /> The authors reason that regular dropout, randomly setting activations to 0 with probability &lt;math display=&quot;inline&quot;&gt;1 - q&lt;/math&gt;, is not compatible with SELUs. This is because the low-variance region in SELUs is at &lt;math display=&quot;inline&quot;&gt;\lim_{x \rightarrow -\infty} = -\lambda \alpha&lt;/math&gt;, not 0. Contrast this with ReLUs, which work well with dropout since they have &lt;math display=&quot;inline&quot;&gt;\lim_{x \rightarrow -\infty} = 0&lt;/math&gt; as the saturation region. 
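This saturation behavior is easy to check numerically. The following sketch (not from the paper; it hard-codes the constants &lt;math&gt;\lambda \approx 1.0507&lt;/math&gt; and &lt;math&gt;\alpha \approx 1.6733&lt;/math&gt; reported later in this summary) evaluates both activations at a very negative input:

```python
import math

LAMBDA = 1.0507  # lambda_01, from the paper
ALPHA = 1.6733   # alpha_01, from the paper

def selu(x):
    """SELU: lambda * x for x > 0, lambda * alpha * (exp(x) - 1) otherwise."""
    return LAMBDA * x if x > 0 else LAMBDA * ALPHA * (math.exp(x) - 1)

def relu(x):
    return max(0.0, x)

# For very negative inputs, SELU saturates at -lambda * alpha (about -1.758),
# while ReLU saturates at 0.
print(selu(-100.0))
print(relu(-100.0))
```

The two different saturation values are exactly why standard dropout, which forces activations toward 0, matches ReLUs but not SELUs.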
Therefore, a new dropout technique for SELUs was needed, termed ''alpha dropout''.<br /> <br /> With alpha dropout, activations are randomly set to &lt;math display=&quot;inline&quot;&gt;\alpha' = -\lambda\alpha&lt;/math&gt;, which for this paper corresponds to the constant &lt;math display=&quot;inline&quot;&gt;-1.7581&lt;/math&gt;, with probability &lt;math display=&quot;inline&quot;&gt;1 - q&lt;/math&gt;.<br /> <br /> The updated mean and variance of the activations are now:<br /> $\mathrm{E}(xd + \alpha'(1 - d)) = \mu q + \alpha'(1 - q)$ <br /> <br /> and<br /> <br /> $\mathrm{Var}(xd + \alpha'(1 - d)) = q((1-q)(\alpha' - \mu)^2 + \nu)$<br /> <br /> Activations need to be transformed (e.g. scaled) after dropout to maintain the same mean and variance. In regular dropout, conserving the mean and variance corresponds to scaling activations by a factor of 1/q while training. To ensure that mean and variance are unchanged after alpha dropout, the authors used an affine transformation &lt;math display=&quot;inline&quot;&gt;a(xd + \alpha'(1 - d)) + b&lt;/math&gt;, and solved for the values of &lt;math display=&quot;inline&quot;&gt;a&lt;/math&gt; and &lt;math display=&quot;inline&quot;&gt;b&lt;/math&gt; to give &lt;math display=&quot;inline&quot;&gt;a = (\frac{\nu}{q((1-q)(\alpha' - \mu)^2 + \nu)})^{\frac{1}{2}}&lt;/math&gt; and &lt;math display=&quot;inline&quot;&gt;b = \mu - a(q\mu + (1-q)\alpha')&lt;/math&gt;. 
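These expressions can be verified with a quick numerical sketch (hypothetical values, not from the paper: &lt;math&gt;\mu = 0&lt;/math&gt;, &lt;math&gt;\nu = 1&lt;/math&gt;, keep probability &lt;math&gt;q = 0.9&lt;/math&gt;, and &lt;math&gt;\alpha' \approx -1.7581&lt;/math&gt;). Applying the affine correction to the post-dropout mean and variance recovers the original moments:

```python
# Sketch: verify the alpha-dropout affine correction analytically, using the
# post-dropout mean/variance formulas quoted above (mu = 0, nu = 1 as in the paper).
mu, nu = 0.0, 1.0
q = 0.9            # keep probability (dropout rate 1 - q = 0.1)
alpha_p = -1.7581  # alpha' = -lambda * alpha

# Mean and variance of the activations right after dropout, before correction:
mean_drop = mu * q + alpha_p * (1 - q)
var_drop = q * ((1 - q) * (alpha_p - mu) ** 2 + nu)

# Affine correction a * (...) + b, with a and b as derived above:
a = (nu / (q * ((1 - q) * (alpha_p - mu) ** 2 + nu))) ** 0.5
b = mu - a * (q * mu + (1 - q) * alpha_p)

# After the correction, the mean and variance are restored to (mu, nu):
mean_corrected = a * mean_drop + b
var_corrected = a ** 2 * var_drop
print(mean_corrected, var_corrected)  # ~ (0.0, 1.0)
```

By construction the corrected mean equals &lt;math&gt;\mu&lt;/math&gt; and the corrected variance equals &lt;math&gt;\nu&lt;/math&gt;, regardless of the dropout rate chosen.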
As the values for &lt;math display=&quot;inline&quot;&gt;\mu&lt;/math&gt; and &lt;math display=&quot;inline&quot;&gt;\nu&lt;/math&gt; are set to &lt;math display=&quot;inline&quot;&gt;0&lt;/math&gt; and &lt;math display=&quot;inline&quot;&gt;1&lt;/math&gt; throughout the paper, these expressions can be simplified into &lt;math display=&quot;inline&quot;&gt;a = (q + \alpha'^2 q(1-q))^{-\frac{1}{2}}&lt;/math&gt; and &lt;math display=&quot;inline&quot;&gt;b = -(q + \alpha'^2 q (1-q))^{-\frac{1}{2}}((1 - q)\alpha')&lt;/math&gt;, where &lt;math display=&quot;inline&quot;&gt;\alpha' \approx -1.7581&lt;/math&gt;.<br /> <br /> Empirically, the authors found that dropout rates (1-q) of &lt;math display=&quot;inline&quot;&gt;0.05&lt;/math&gt; or &lt;math display=&quot;inline&quot;&gt;0.10&lt;/math&gt; worked well with SNNs.<br /> <br /> === Optimizers ===<br /> <br /> Through experiments, the authors found that stochastic gradient descent, momentum, Adadelta and Adamax work well on SNNs. For Adam, configuration parameters &lt;math display=&quot;inline&quot;&gt;\beta_2 = 0.99&lt;/math&gt; and &lt;math display=&quot;inline&quot;&gt;\epsilon = 0.01&lt;/math&gt; were found to be more effective.<br /> <br /> ==Experimental Results==<br /> <br /> Three sets of experiments were conducted to compare the performance of SNNs to six other FNN structures and to other machine learning algorithms, such as support vector machines and random forests. The experiments were carried out on (1) 121 UCI Machine Learning Repository datasets, (2) the Tox21 chemical compounds toxicity effects dataset (with 12,000 compounds and 270,000 features), and (3) the HTRU2 dataset of statistics on radio wave signals from pulsar candidates (with 18,000 observations and eight features). 
In each set of experiments, a hyperparameter search was conducted on a validation set to select parameters such as the number of hidden units, number of hidden layers, learning rate, regularization parameter, and dropout rate (see pages 95 - 107 of the supplementary material for the exact hyperparameters considered). Whenever models of different setups gave identical results on the validation data, preference was given to the structure with more layers, a lower learning rate and a higher dropout rate.<br /> <br /> The six FNN structures considered were: (1) FNNs with ReLU activations, no normalization and “Microsoft weight initialization” (MSRA) to control the variance of input signals; (2) FNNs with batch normalization, in which normalization is applied to activations of the same mini-batch; (3) FNNs with layer normalization, in which normalization is applied on a per-layer basis for each training example; (4) FNNs with weight normalization, whereby each layer’s weights are normalized by learning the weight’s magnitude and direction instead of the weight vector itself; (5) highway networks, in which layers are not restricted to being sequentially connected; and (6) an FNN-version of residual networks, with residual blocks made up of two or three densely connected layers.<br /> <br /> On the Tox21 dataset, the authors demonstrated the self-normalizing effect by comparing the distribution of neural inputs &lt;math display=&quot;inline&quot;&gt;z&lt;/math&gt; at initialization and after 40 epochs of training to that of the standard normal. 
As Figure 3 shows, the distribution of &lt;math display=&quot;inline&quot;&gt;z&lt;/math&gt; remained similar to a normal distribution.<br /> <br /> [[File:snnf3.png|600px]]<br /> <br /> On all three sets of classification tasks, the authors demonstrated that SNN outperformed the other FNN counterparts on accuracy and AUC measures, came close to the state-of-the-art results on the Tox21 dataset with an 8-layer network, and produced a new state-of-the-art AUC on predicting pulsars for the HTRU2 dataset by a small margin (achieving an AUC of 0.98, averaged over 10 cross-validation folds, versus the previous record of 0.976).<br /> <br /> On UCI datasets with fewer than 1,000 observations, SNNs did not outperform SVMs or random forests in terms of average rank in accuracy, but on datasets with at least 1,000 observations, SNNs showed the best overall performance (average rank of 5.8, compared to 6.1 for support vector machines and 6.6 for random forests). Through hyperparameter tuning, it was also discovered that the average depth of the selected SNNs is 10.8 layers, deeper than the other FNN architectures tried.<br /> <br /> Here are the results on the Tox21 challenge. The challenge requires prediction of the toxic effects of 12,000 chemicals based on their chemical structures. 
SNN with 8 layers had the best performance.<br /> <br /> [[File:tox21.png|600px]]<br /> <br /> ==Future Work==<br /> <br /> Although not the focus of this paper, the authors also briefly noted that their initial experiments with applying SELUs on relatively simple CNN structures showed promising results. This is not surprising given that ELUs, which do not have the self-normalizing property, have already been shown to work well with CNNs, demonstrating faster convergence than ReLU networks and even pushing the state-of-the-art error rates on CIFAR-100 at the time of publishing in 2015.<br /> <br /> Since the paper was published, SELUs have been adopted by several researchers, not just with FNNs [https://github.com/bioinf-jku/SNNs see link], but also with CNNs, GANs, autoencoders, reinforcement learning and RNNs. In a few cases, researchers for those papers concluded that networks trained with SELUs converged faster than those trained with ReLUs, and that SELUs have the same convergence quality as batch normalization. There is potential for SELUs to be incorporated into more architectures in the future.<br /> <br /> ==Critique==<br /> <br /> Overall, the authors presented a convincing case for using SELUs (along with proper initialization and alpha dropout) on FNNs. FNNs trained with SELU have more layers than those with other normalization techniques, so the work here provides a promising direction for making traditional FNNs more powerful. There are not as many well-established benchmark datasets to evaluate FNNs, but the experiments carried out, particularly on the larger Tox21 dataset, showed that SNNs can be very effective at classification tasks.<br /> <br /> The proofs provide a satisfactory explanation for why SELUs have a self-normalizing property within the specified domain, but during their introduction the authors give 4 criteria that an activation function must satisfy in order to be self-normalizing. 
Those criteria make intuitive sense, but the lack of firm justification creates some confusion. For example, they state that SNNs cannot be derived from tanh units, even though &lt;math&gt;\tanh(\lambda x)&lt;/math&gt; satisfies all 4 criteria if &lt;math&gt;\lambda&lt;/math&gt; is larger than 1. Assuming the authors did not overlook such a simple modification, there must be some additional criteria for an activation function to have a self-normalizing property.<br /> <br /> The only question I have with the proofs is the lack of explanation for how the domains &lt;math display=&quot;inline&quot;&gt;\Omega&lt;/math&gt;, &lt;math display=&quot;inline&quot;&gt;\Omega^-&lt;/math&gt; and &lt;math display=&quot;inline&quot;&gt;\Omega^+&lt;/math&gt; are determined, which is an important consideration because they are used for deriving the upper and lower bounds on expressions needed for proving the three theorems. The ranges appear to be set through trial-and-error and heuristics to ensure the numbers work out (e.g. making the spectral norm of &lt;math display=&quot;inline&quot;&gt;\mathcal{J}&lt;/math&gt; as large as it can be while staying below 1, so as to ensure &lt;math display=&quot;inline&quot;&gt;g&lt;/math&gt; is a contraction mapping). It is therefore not clear whether they are unique conditions, or whether the parameters will remain within those prespecified ranges throughout training; and if the parameters can stray away from the ranges provided, the issue of what happens to the self-normalizing property was not addressed. Perhaps that is why the authors gave preference to models with a deeper structure and smaller learning rate during experiments, to help the parameters stay within their domains. 
Further, in addition to the hyperparameters considered, it would be helpful to know the final values that went into the best-performing models, for a better understanding of what range of values work better for SNNs empirically.<br /> <br /> ==Conclusion==<br /> <br /> The SNN structure proposed in this paper is built on the traditional FNN structure with a few modifications, including the use of SELUs as the activation function (with &lt;math display=&quot;inline&quot;&gt;\lambda \approx 1.0507&lt;/math&gt; and &lt;math display=&quot;inline&quot;&gt;\alpha \approx 1.6733&lt;/math&gt;), alpha dropout, network weights initialized with mean of zero and variance &lt;math display=&quot;inline&quot;&gt;\frac{1}{n}&lt;/math&gt;, and inputs normalized to mean of zero and variance of one. It is simple to implement while being backed up by detailed theory. <br /> <br /> When properly initialized, SELUs will draw neural inputs towards a fixed point of zero mean and unit variance as the activations are propagated through the layers. The self-normalizing property is maintained even when weights deviate from their initial values during training (under mild conditions). When the variance of inputs goes beyond the prespecified range imposed, it is still bounded above and below, so SNNs do not suffer from exploding and vanishing gradients. This self-normalizing property allows SNNs to be more robust to perturbations in stochastic gradient descent, so deeper structures with better prediction performance can be built. <br /> <br /> In the experiments conducted, the authors demonstrated that SNNs outperformed FNNs trained with other normalization techniques, such as batch, layer and weight normalization, and specialized architectures, such as highway or residual networks, on several classification tasks, including on the UCI Machine Learning Repository datasets. SELUs also reduce the computation time for normalizing the network relative to ReLU+BN and hence are promising. 
The adoption of SELUs by other researchers also lends credence to the potential for SELUs to be implemented in more neural network architectures.<br /> <br /> ==References==<br /> <br /> # Ba, Kiros and Hinton. &quot;Layer Normalization&quot;. arXiv:1607.06450. (2016).<br /> # Blinn. &quot;Consider the Lowly 2X2 Matrix.&quot; IEEE Computer Graphics and Applications. (1996).<br /> # Clevert, Unterthiner, Hochreiter. &quot;Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs).&quot; arXiv:1511.07289. (2015).<br /> # He, Zhang, Ren and Sun. &quot;Deep Residual Learning for Image Recognition.&quot; arXiv:1512.03385. (2015).<br /> # He, Zhang, Ren and Sun. &quot;Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification.&quot; arXiv:1502.01852. (2015). <br /> # Ioffe and Szegedy. &quot;Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.&quot; arXiv:1502.03167. (2015).<br /> # Klambauer, Unterthiner, Mayr and Hochreiter. &quot;Self-Normalizing Neural Networks.&quot; arXiv:1706.02515. (2017).<br /> # Salimans and Kingma. &quot;Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks.&quot; arXiv:1602.07868. (2016).<br /> # Srivastava, Greff and Schmidhuber. &quot;Highway Networks.&quot; arXiv:1505.00387. (2015).<br /> # Unterthiner, Mayr, Klambauer and Hochreiter. &quot;Toxicity Prediction Using Deep Learning.&quot; arXiv:1503.01445. (2015). <br /> # https://en.wikipedia.org/wiki/Central_limit_theorem <br /> # http://mathworld.wolfram.com/SpectralNorm.html <br /> # https://www.math.umd.edu/~petersd/466/fixedpoint.pdf<br /> <br /> ==Online Resources==<br /> https://github.com/bioinf-jku/SNNs (GitHub repository maintained by some of the paper's authors)<br /> <br /> ==Footnotes==<br /> <br /> 1. 
Error propagation analysis: The authors performed an error analysis to quantify the potential numerical imprecisions propagated through the numerous operations performed. The potential imprecision &lt;math display=&quot;inline&quot;&gt;\epsilon&lt;/math&gt; was quantified by applying the mean value theorem<br /> <br /> $|f(x + \Delta x) - f(x)| \leqslant \lVert \nabla f(x + t\Delta x) \rVert \, \lVert \Delta x \rVert \textrm{ for } t \in [0, 1]\textrm{.}$ <br /> <br /> The error propagation rules, or &lt;math display=&quot;inline&quot;&gt;|f(x + \Delta x) - f(x)|&lt;/math&gt;, were first obtained for simple operations such as addition, subtraction, multiplication, division, square root, exponential function, error function and complementary error function. Then, the error bounds on the compound terms making up &lt;math display=&quot;inline&quot;&gt;\Delta S(\mu, \omega, \nu, \tau, \lambda, \alpha)&lt;/math&gt; were found by decomposing them into the simpler expressions. If each of the variables has a precision of &lt;math display=&quot;inline&quot;&gt;\epsilon&lt;/math&gt;, then it turns out &lt;math display=&quot;inline&quot;&gt;S&lt;/math&gt; has a precision better than &lt;math display=&quot;inline&quot;&gt;292\epsilon&lt;/math&gt;. For a machine with a precision of &lt;math display=&quot;inline&quot;&gt;2^{-56}&lt;/math&gt;, the rounding error is &lt;math display=&quot;inline&quot;&gt;\epsilon \approx 10^{-16}&lt;/math&gt;, and &lt;math display=&quot;inline&quot;&gt;292\epsilon &lt; 10^{-13}&lt;/math&gt;. In addition, all computations are correct up to 3 ulps (“unit in last place”) for the hardware architectures and GNU C library used, with 1 ulp being the highest precision that can be achieved.<br /> <br /> 2. Independence Assumption: The classic definition of the central limit theorem requires the &lt;math display=&quot;inline&quot;&gt;x_i&lt;/math&gt;’s to be independent and identically distributed, which is not guaranteed to hold true in a neural network layer. 
However, according to the Lyapunov CLT, the &lt;math display=&quot;inline&quot;&gt;x_i&lt;/math&gt;’s do not need to be identically distributed as long as the &lt;math display=&quot;inline&quot;&gt;(2 + \delta)&lt;/math&gt;th moment exists for the variables and they satisfy the Lyapunov condition on the rate of growth of the sum of the moments. In addition, the CLT has also been shown to be valid under weak dependence under mixing conditions. Therefore, the authors argue that the central limit theorem can be applied to network inputs.<br /> <br /> 3. &lt;math display=&quot;inline&quot;&gt;\mathcal{H}&lt;/math&gt; versus &lt;math display=&quot;inline&quot;&gt;\mathcal{J}&lt;/math&gt; Jacobians: In solving for the largest singular value of the Jacobian &lt;math display=&quot;inline&quot;&gt;\mathcal{H}&lt;/math&gt; for the mapping &lt;math display=&quot;inline&quot;&gt;g: (\mu, \nu) \rightarrow (\widetilde{\mu}, \widetilde{\nu})&lt;/math&gt;, the authors first worked with the terms in the Jacobian &lt;math display=&quot;inline&quot;&gt;\mathcal{J}&lt;/math&gt; for the mapping &lt;math display=&quot;inline&quot;&gt;h: (\mu, \nu) \rightarrow (\widetilde{\mu}, \widetilde{\xi})&lt;/math&gt; instead, because the influence of &lt;math display=&quot;inline&quot;&gt;\widetilde{\mu}&lt;/math&gt; on &lt;math display=&quot;inline&quot;&gt;\widetilde{\nu}&lt;/math&gt; is small when &lt;math display=&quot;inline&quot;&gt;\widetilde{\mu}&lt;/math&gt; is small in &lt;math display=&quot;inline&quot;&gt;\Omega&lt;/math&gt;, and &lt;math display=&quot;inline&quot;&gt;\mathcal{H}&lt;/math&gt; can be easily expressed in terms of &lt;math display=&quot;inline&quot;&gt;\mathcal{J}&lt;/math&gt;. 
&lt;math display=&quot;inline&quot;&gt;\mathcal{J}&lt;/math&gt; was referenced in the paper, but I used &lt;math display=&quot;inline&quot;&gt;\mathcal{H}&lt;/math&gt; in the summary here to avoid confusion.</div> Jssambee http://wiki.math.uwaterloo.ca/statwiki/index.php?title=stat946w18/Rethinking_the_Smaller-Norm-Less-Informative_Assumption_in_Channel_Pruning_of_Convolutional_Layers&diff=36151 stat946w18/Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolutional Layers 2018-04-04T05:09:31Z <p>Jssambee: </p> <hr /> <div>== Introduction ==<br /> <br /> With the recent and ongoing surge in low-power, intelligent agents (such as wearables, smartphones, and IoT devices), there exists a growing need for machine learning models to work well in resource-constrained environments. Deep learning models have achieved state-of-the-art results on a broad range of tasks; however, they are difficult to deploy in their original forms. For example, AlexNet (Krizhevsky et al., 2012), a model for image classification, contains 61 million parameters and requires 1.5 billion floating point operations (FLOPs) in one inference pass. A more accurate model, ResNet-50 (He et al., 2016), has 25 million parameters but requires 4.08 billion FLOPs. A high-end desktop GPU such as a Titan Xp is capable of [https://www.nvidia.com/en-us/titan/titan-xp/ 12 TFLOPS (tera-FLOPs per second)], while the Adreno 540 GPU used in a Samsung Galaxy S8 is only capable of [https://gflops.surge.sh 567 GFLOPS], which is less than 5% of the Titan Xp. Clearly, it would be difficult to deploy and run these models on low-power devices.<br /> <br /> In general, model compression can be accomplished using four main, not mutually exclusive methods (Cheng et al., 2017): weight pruning, quantization, matrix transformations, and weight tying. 
By not mutually exclusive, we mean that these methods can be used not only separately but also in combination for compressing a single model; the use of one method does not exclude any of the other methods from being viable. <br /> <br /> Ye et al. (2018) explore pruning entire channels in a convolutional neural network (CNN). Past work has mostly focused on norm-based or error-based heuristics to prune channels; instead, Ye et al. (2018) show that their approach is easily reproducible and has favorable qualities from an optimization standpoint. In other words, they argue that the norm-based assumption is not as informative or theoretically justified as their approach, and provide strong empirical evidence of these findings.<br /> <br /> == Motivation ==<br /> <br /> Some previous works on pruning channel filters (Li et al., 2016; Molchanov et al., 2016) have focused on using the L1 norm to determine the importance of a channel. Ye et al. (2018) show that, in the deep linear convolution case, penalizing the per-layer norm is coarse-grained; they argue that one cannot assign different coefficients to L1 penalties associated with different layers without risking the loss function being susceptible to trivial re-parameterizations. As an example, consider the following deep linear convolutional neural network with modified LASSO loss:<br /> <br /> $$\min \mathbb{E}_D \lVert W_{2n} * \dots * W_1 x - y\rVert^2 + \lambda \sum_{i=1}^n \lVert W_{2i} \rVert_1$$<br /> <br /> where W are the weights and * is convolution. Here we have chosen the coefficient 0 for the L1 penalty associated with odd-numbered layers and the coefficient 1 for the L1 penalty associated with even-numbered layers. 
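A toy numerical sketch (hypothetical scalar weights standing in for the convolution filters; not from the paper) makes the problem with this penalty concrete: jointly rescaling layers leaves the product of the weights, and hence the data term, unchanged, while shrinking the penalty on the even-numbered layers:

```python
# Hypothetical 1-D stand-in for a deep linear network: the "output" is the
# product of per-layer scalar weights, and the penalty is the L1 norm of the
# even-numbered weights only, mirroring the modified LASSO loss above.
w = [2.0, 3.0, 0.5, 4.0]  # layers 1..4; layers 2 and 4 are penalized

def product(weights):
    out = 1.0
    for v in weights:
        out *= v
    return out

def penalty(weights):
    return sum(abs(v) for i, v in enumerate(weights, start=1) if i % 2 == 0)

# Reparameterize: double the odd-numbered layers, halve the even-numbered ones.
w2 = [v * 2 if i % 2 == 1 else v / 2 for i, v in enumerate(w, start=1)]

print(product(w), product(w2))  # identical products -> same least-squares loss
print(penalty(w), penalty(w2))  # the penalty is cut in half
```

Repeating the rescaling drives the penalty toward zero without changing the function the network computes, which is why the penalty coefficients cannot be chosen freely per layer.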
This loss is susceptible to trivial re-parameterizations: without affecting the least-squares loss, we can always reduce the LASSO loss by halving the weights of all even-numbered layers and doubling the weights of all odd-numbered layers.<br /> <br /> Furthermore, batch normalization (Ioffe, 2015) is incompatible with this method of weight regularization. Consider batch normalization at the &lt;math&gt;l&lt;/math&gt;-th layer.<br /> <br /> &lt;center&gt;&lt;math&gt;x^{l+1} = max\{\gamma \cdot BN_{\mu,\sigma,\epsilon}(W^l * x^l) + \beta, 0\}&lt;/math&gt;&lt;/center&gt;<br /> <br /> Due to the batch normalization, any uniform scaling of &lt;math&gt;W^l&lt;/math&gt; would change the &lt;math&gt;l_1&lt;/math&gt; and &lt;math&gt;l_2&lt;/math&gt; norms but have no effect on &lt;math&gt;x^{l+1}&lt;/math&gt;. Thus, when trying to minimize the weight norms of multiple layers, it is unclear how to properly choose penalties for each layer. Therefore, penalizing the norm of a filter in a deep convolutional network is hard to justify from a theoretical perspective.<br /> <br /> In contrast with these existing approaches, the authors focus on enforcing sparsity on a tiny set of parameters in the CNN: the scale parameters &lt;math&gt;\gamma&lt;/math&gt; in all batch normalization layers. Not only is placing sparse constraints on &lt;math&gt;\gamma&lt;/math&gt; simpler and easier to monitor; more importantly, the authors put forward two reasons:<br /> <br /> 1. Every &lt;math&gt;\gamma&lt;/math&gt; always multiplies a normalized random variable, thus the channel importance becomes comparable across different layers by measuring the magnitude values of &lt;math&gt;\gamma&lt;/math&gt;;<br /> <br /> 2. The reparameterization effect across different layers is avoided if its subsequent convolution layer is also batch-normalized. 
In other words, the impact of scale changes to the &lt;math&gt;\gamma&lt;/math&gt; parameter is independent across different layers.<br /> <br /> Thus, although not providing a complete theoretical guarantee on the loss, Ye et al. (2018) develop a pruning technique that they argue is better justified than norm-based pruning.<br /> <br /> == Method ==<br /> <br /> At a high level, Ye et al. (2018) propose that, instead of discovering sparsity by penalizing the per-filter or per-channel norm, one should penalize the batch normalization scale parameters ''gamma''. The reasoning is that by having fewer parameters to constrain and working with normalized values, sparsity is easier to enforce, monitor, and learn. Having sparse batch normalization terms has the effect of pruning '''entire''' channels: if ''gamma'' is zero, then the output at that layer becomes constant (the bias term), and thus the preceding channels can be pruned.<br /> <br /> === Summary ===<br /> <br /> The basic algorithm can be summarized as follows:<br /> <br /> 1. Penalize the L1-norm of the batch normalization scaling parameters in the loss<br /> <br /> 2. Train until the loss plateaus<br /> <br /> 3. Remove channels that correspond to a downstream zero in batch normalization<br /> <br /> 4. Fine-tune the pruned model using regular learning<br /> <br /> === Details ===<br /> <br /> There still exist a few problems that this summary has not addressed so far. Sub-gradient descent is known to have an inverse square root convergence rate on subdifferentials (Gordon et al., 2012), so the sparsity gradient descent update may be suboptimal. Furthermore, the sparse penalty needs to be normalized with respect to previous channel sizes, since the penalty should be roughly equally distributed across all convolution layers.<br /> <br /> ==== Slow Convergence ====<br /> To address the issue of slow convergence, Ye et al. 
(2018) use an iterative shrinkage-thresholding algorithm (ISTA) (Beck &amp; Teboulle, 2009) to update the batch normalization scale parameter. The intuition for ISTA is that the structure of the optimization objective can be taken advantage of. Consider: $$L(x) = f(x) + g(x).$$<br /> <br /> Let ''f'' be the model loss and ''g'' be the non-differentiable penalty (LASSO). ISTA is able to use the structure of the loss and converge in &lt;math&gt;O(1/n)&lt;/math&gt;, instead of the &lt;math&gt;O(1/\sqrt{n})&lt;/math&gt; achieved by subgradient descent, which assumes no structure about the loss. Even though ISTA is typically used in convex settings, Ye et al. (2018) argue that it still performs better than gradient descent.<br /> <br /> ==== Penalty Normalization ====<br /> <br /> In the paper, Ye et al. (2018) normalize the per-layer sparse penalty with respect to the global input size, the current layer kernel areas, the previous layer kernel areas, and the local input feature map area.<br /> <br /> [[File:Screenshot_from_2018-02-28_17-06-41.png]] (Ye et al., 2018)<br /> <br /> To control the global penalty, a hyperparameter ''rho'' is multiplied with all the per-layer ''lambda'' in the final loss.<br /> <br /> === Steps ===<br /> <br /> The final algorithm can be summarized as follows:<br /> <br /> 1. Compute the per-layer normalized sparse penalty constant &lt;math&gt;\lambda&lt;/math&gt;<br /> <br /> 2. Compute the global LASSO loss with global scaling constant &lt;math&gt;\rho&lt;/math&gt;<br /> <br /> 3. Until convergence, train scaling parameters using ISTA and non-scaling parameters using regular gradient descent.<br /> <br /> 4. Remove channels that correspond to a downstream zero in batch normalization<br /> <br /> 5. Fine-tune the pruned model using regular learning<br /> <br /> == Results ==<br /> <br /> The authors show state-of-the-art performance, compared with other channel-pruning approaches. 
It is important to note that it would be unfair to compare against general pruning approaches; channel pruning specifically removes channels without introducing '''intra-kernel sparsity''', whereas other pruning approaches introduce irregular kernel sparsity and hence computational inefficiencies.<br /> <br /> === CIFAR-10 Experiment ===<br /> <br /> [[File:Screenshot_from_2018-02-28_17-24-25.png]]<br /> <br /> For the convNet, reducing the number of parameters in the base model increased the accuracy in model A. This suggests that the base model is over-parameterized. Otherwise, there would be a trade-off between accuracy and model efficiency.<br /> <br /> === ILSVRC2012 Experiment ===<br /> <br /> The authors note that while ResNet-101 takes hundreds of epochs to train, pruning only takes 5-10, with fine-tuning adding another 2, giving an empirical example of how long pruning might take in practice.<br /> <br /> [[File:Screenshot_from_2018-02-28_17-24-36.png]]<br /> <br /> === Image Foreground-Background Segmentation Experiment ===<br /> <br /> The authors note that it is common practice to take a network pre-trained on a large task and fine-tune it to apply it to a different, smaller task. One might expect that there are some extra channels that, while useful for the large task, can be omitted for the simpler task. This experiment replicated that use-case by taking a NN originally trained on multiple datasets and applying the proposed pruning method. The authors note that the pruned network actually improves over the original network on all but the most challenging test dataset, which is in line with the initial expectation. The results are shown in the table below.<br /> <br /> [[File:paper8_Segmentation.png|700px]]<br /> <br /> == Conclusion ==<br /> <br /> Pruning large neural architectures to fit on low-power devices is an important task. 
For a real quantitative measure of efficiency, it would be interesting to conduct actual power measurements on the pruned models versus baselines; reduction in FLOPs doesn't necessarily correspond with vastly reduced power since memory accesses dominate energy consumption (Han et al., 2015). However, the reduction in the number of FLOPs and parameters is encouraging, so moderate power savings should be expected.<br /> <br /> It would also be interesting to combine multiple approaches, or &quot;throw the whole kitchen sink&quot; at this task. Han et al. (2015) sparked much recent interest by successfully combining weight pruning, quantization, and Huffman coding without loss in accuracy. However, their approach introduced irregular sparsity in the convolutional layers, so a direct comparison cannot be made.<br /> <br /> In conclusion, this novel, theoretically-motivated interpretation of channel pruning was successfully applied to several important tasks.<br /> <br /> == Implementation == <br /> A PyTorch implementation is available here: https://github.com/jack-willturner/batchnorm-pruning<br /> <br /> <br /> == References ==<br /> <br /> * Krizhevsky, A., Sutskever, I., &amp; Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems (pp. 1097-1105).<br /> * He, K., Zhang, X., Ren, S., &amp; Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).<br /> * Cheng, Y., Wang, D., Zhou, P., &amp; Zhang, T. (2017). A Survey of Model Compression and Acceleration for Deep Neural Networks. arXiv preprint arXiv:1710.09282.<br /> * Ye, J., Lu, X., Lin, Z., &amp; Wang, J. Z. (2018). Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers. arXiv preprint arXiv:1802.00124.<br /> * Li, H., Kadav, A., Durdanovic, I., Samet, H., &amp; Graf, H. P. (2016). 
Pruning filters for efficient convnets. arXiv preprint arXiv:1608.08710.<br /> * Molchanov, P., Tyree, S., Karras, T., Aila, T., &amp; Kautz, J. (2016). Pruning convolutional neural networks for resource efficient inference.<br /> * Ioffe, S., &amp; Szegedy, C. (2015, June). Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International conference on machine learning (pp. 448-456).<br /> * Gordon, G., &amp; Tibshirani, R. (2012). Subgradient method. https://www.cs.cmu.edu/~ggordon/10725-F12/slides/06-sg-method.pdf<br /> * Beck, A., &amp; Teboulle, M. (2009). A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM journal on imaging sciences, 2(1), 183-202.<br /> * Han, S., Mao, H., &amp; Dally, W. J. (2015). Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149</div> Jssambee http://wiki.math.uwaterloo.ca/statwiki/index.php?title=stat946w18/AmbientGAN:_Generative_Models_from_Lossy_Measurements&diff=36150 stat946w18/AmbientGAN: Generative Models from Lossy Measurements 2018-04-04T05:00:58Z <p>Jssambee: /* Empirical Results */</p> <hr /> <div>= Introduction =<br /> Generative Adversarial Networks operate by simulating complex distributions but training them requires access to large amounts of high quality data. Often, we only have access to noisy or partial observations, which will, from here on, be referred to as measurements of the true data. If we know the measurement function and would like to train a generative model for the true data, there are several ways to continue which have varying degrees of success. We will use noisy MNIST data as an illustrative example, and show the results of 1. ignoring the problem, 2. trying to recover the lost information, and 3. using AmbientGAN as a way to recover the true data distribution. 
Suppose we only see MNIST data that has been run through a Gaussian kernel (blurred) with some noise from a &lt;math&gt;N(0, 0.5^2)&lt;/math&gt; distribution added to each pixel:<br /> <br /> &lt;gallery mode=&quot;packed&quot;&gt;<br /> File:mnist.png| True Data (Unobserved)<br /> File:mnistmeasured.png| Measured Data (Observed)<br /> &lt;/gallery&gt;<br /> <br /> <br /> === Ignore the problem ===<br /> [[File:GANignore.png|500px]] [[File:mnistignore.png|300px]]<br /> <br /> Train a generative model directly on the measured data. A model trained this way can clearly only reproduce the measured distribution, not the true distribution before measurement occurred. <br /> <br /> <br /> === Try to recover the information lost ===<br /> [[File:GANrecovery.png|420px]] [[File:mnistrecover.png|300px]]<br /> <br /> This works better than ignoring the problem, but depends on how easily the measurement function can be inverted.<br /> <br /> === AmbientGAN ===<br /> [[File:GANambient.png|500px]] [[File:mnistambient.png|300px]]<br /> <br /> Ashish Bora, Eric Price, and Alexandros G. Dimakis propose AmbientGAN as a way to recover the true underlying distribution from measurements of the true data. AmbientGAN works by training a generator whose generated outputs, after measurement, must fool the discriminator. The discriminator must distinguish between real and generated measurements. This paper is published in ICLR 2018.<br /> <br /> The paper makes the following contributions: '''theoretically''' they show that the distribution of measured images uniquely determines the distribution of original images. This implies that a pure Nash equilibrium for the GAN game must find a generative model that matches the true distribution. They show similar results for a dropout measurement model, where each pixel is set to zero with some probability p, and a random projection measurement model, where they observe the inner product of the image with a random Gaussian vector.
'''Empirically''', they consider the CelebA and MNIST datasets, for which the measurement model is unknown, and show that AmbientGAN recovers much of the underlying structure.<br /> <br /> = Related Work = <br /> Currently, there exist two distinct approaches for constructing neural-network-based generative models: autoregressive [4, 5] and adversarial [6] methods. The adversarial model has been shown to be very successful in modeling complex data distributions such as images, 3D models, state-action distributions, and many more. This paper is related to the work in [7], where the authors create 3D object shapes from a dataset of 2D projections; the paper states that the work in [7] is a special case of the AmbientGAN framework, where the measurement process creates 2D projections using weighted sums of voxel occupancies.<br /> <br /> = Datasets and Model Architectures =<br /> We used three datasets for our experiments: MNIST, CelebA, and CIFAR-10. We briefly describe the generative models used for the experiments. For the MNIST dataset, we use two GAN models. The first model is a conditional DCGAN, while the second model is an unconditional Wasserstein GAN with gradient penalty (WGANGP). For the CelebA dataset, we use an unconditional DCGAN. For the CIFAR-10 dataset, we use an Auxiliary Classifier Wasserstein GAN with gradient penalty (ACWGANGP). For measurements with 2D outputs, i.e. Block-Pixels, Block-Patch, Keep-Patch, Extract-Patch, and Convolve+Noise, we use the same discriminator architectures as in the original work. For 1D projections, i.e. Pad-Rotate-Project, Pad-Rotate-Project-θ, we use fully connected discriminators.
The architecture of the fully connected discriminator used for the MNIST dataset was 25-25-1 and for the CelebA dataset was 100-100-1.<br /> <br /> = Model =<br /> For the following variables, superscript &lt;math&gt;r&lt;/math&gt; represents the true distributions while superscript &lt;math&gt;g&lt;/math&gt; represents the generated distributions. Let &lt;math&gt;x&lt;/math&gt; represent the underlying space and &lt;math&gt;y&lt;/math&gt; the measurement.<br /> <br /> Thus, &lt;math&gt;p_x^r&lt;/math&gt; is the real underlying distribution over &lt;math&gt;\mathbb{R}^n&lt;/math&gt; that we are interested in. However, if we assume that our (known) measurement functions, &lt;math&gt;f_\theta: \mathbb{R}^n \to \mathbb{R}^m&lt;/math&gt;, are parameterized by &lt;math&gt;\Theta \sim p_\theta&lt;/math&gt;, we can then observe &lt;math&gt;Y = f_\theta(x) \sim p_y^r&lt;/math&gt; where &lt;math&gt;p_y^r&lt;/math&gt; is a distribution over the measurements &lt;math&gt;y&lt;/math&gt;.<br /> <br /> Mirroring the standard GAN setup, we let &lt;math&gt;Z \in \mathbb{R}^k, Z \sim p_z&lt;/math&gt; and &lt;math&gt;\Theta \sim p_\theta&lt;/math&gt; be random variables coming from a distribution that is easy to sample. <br /> <br /> If we have a generator &lt;math&gt;G: \mathbb{R}^k \to \mathbb{R}^n&lt;/math&gt;, then we can generate &lt;math&gt;X^g = G(Z)&lt;/math&gt;, which has distribution &lt;math&gt;p_x^g&lt;/math&gt;, and a measurement &lt;math&gt;Y^g = f_\Theta(G(Z))&lt;/math&gt;, which has distribution &lt;math&gt;p_y^g&lt;/math&gt;. <br /> <br /> Unfortunately, we do not observe any &lt;math&gt;X^r \sim p_x^r&lt;/math&gt;, so we cannot use the discriminator directly on &lt;math&gt;G(Z)&lt;/math&gt; to train the generator. Instead, we will use the discriminator to distinguish between &lt;math&gt;Y^g = f_\Theta(G(Z))&lt;/math&gt; and &lt;math&gt;Y^r&lt;/math&gt;.
That is, we train the discriminator &lt;math&gt;D: \mathbb{R}^m \to \mathbb{R}&lt;/math&gt; to detect whether a measurement came from &lt;math&gt;p_y^r&lt;/math&gt; or &lt;math&gt;p_y^g&lt;/math&gt;.<br /> <br /> AmbientGAN has the objective function:<br /> <br /> \begin{align}<br /> \min_G \max_D \mathbb{E}_{Y^r \sim p_y^r}[q(D(Y^r))] + \mathbb{E}_{Z \sim p_z, \Theta \sim p_\theta}[q(1 - D(f_\Theta(G(Z))))]<br /> \end{align}<br /> <br /> where &lt;math&gt;q(\cdot)&lt;/math&gt; is the quality function; for the standard GAN &lt;math&gt;q(x) = \log(x)&lt;/math&gt; and for the Wasserstein GAN &lt;math&gt;q(x) = x&lt;/math&gt;.<br /> <br /> As a technical requirement, &lt;math&gt;f_\theta&lt;/math&gt; must be differentiable with respect to each input for all values of &lt;math&gt;\theta&lt;/math&gt;.<br /> <br /> With this setup, we sample &lt;math&gt;Z \sim p_z&lt;/math&gt;, &lt;math&gt;\Theta \sim p_\theta&lt;/math&gt;, and &lt;math&gt;Y^r \sim U\{y_1, \cdots, y_s\}&lt;/math&gt; each iteration and use them to compute the stochastic gradients of the objective function. We alternate between updating &lt;math&gt;G&lt;/math&gt; and updating &lt;math&gt;D&lt;/math&gt;.<br /> <br /> = Empirical Results =<br /> <br /> The paper then presents results of AmbientGAN under various measurement functions, compared to baseline models. We have already seen one example in the introduction: a comparison of AmbientGAN in the Convolve+Noise measurement case against the ignore-baseline and the unmeasure-baseline. <br /> <br /> === Convolve + Noise ===<br /> Additional results for the Convolve+Noise case on the CelebA dataset are shown below. AmbientGAN is compared to the baseline results obtained with Wiener deconvolution; AmbientGAN clearly performs better in this case.
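One stochastic evaluation of the AmbientGAN objective above, with &lt;math&gt;q(x) = \log(x)&lt;/math&gt;, can be sketched as follows. The tiny generator, discriminator, and 1-D Convolve+Noise measurement are illustrative stand-ins, not the paper's DCGAN architectures; training would alternate gradient updates of G and D on this quantity.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k_dim = 16, 4                 # toy "image" size and latent size

def generator(z, W):
    """Illustrative stand-in for G: R^k -> R^n."""
    return np.tanh(W @ z)

def convolve_noise(x, kernel, rng, sigma=0.5):
    """Convolve+Noise measurement f_Theta(x) = k * x + Theta (1-D here)."""
    theta = sigma * rng.standard_normal(x.size)     # Theta ~ p_theta
    return np.convolve(x, kernel, mode="same") + theta

def discriminator(y, v):
    """Illustrative stand-in for D: R^m -> (0, 1)."""
    return 1.0 / (1.0 + np.exp(-v @ y))

W = 0.1 * rng.standard_normal((n, k_dim))
v = 0.1 * rng.standard_normal(n)
kernel = np.array([0.25, 0.5, 0.25])                # small Gaussian-like kernel

# One stochastic estimate of the objective with q(x) = log(x):
y_real = convolve_noise(rng.standard_normal(n), kernel, rng)   # Y^r ~ p_y^r
z = rng.standard_normal(k_dim)                                 # Z ~ p_z
y_gen = convolve_noise(generator(z, W), kernel, rng)           # f_Theta(G(Z))
objective = np.log(discriminator(y_real, v)) + np.log(1.0 - discriminator(y_gen, v))
```

The key point is that the discriminator only ever sees measurements: the generator's output is passed through a fresh random &lt;math&gt;f_\Theta&lt;/math&gt; before being scored.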
The measurement is created using a Gaussian kernel and IID Gaussian noise, with &lt;math&gt;f_{\Theta}(x) = k*x + \Theta&lt;/math&gt;, where &lt;math&gt;*&lt;/math&gt; is the convolution operation, &lt;math&gt;k&lt;/math&gt; is the convolution kernel, and &lt;math&gt;\Theta \sim p_{\theta}&lt;/math&gt; is the noise distribution.<br /> <br /> [[File:paper7_fig3.png]]<br /> <br /> Images that have undergone Convolve+Noise transformations (left). Results with Wiener deconvolution (middle). Results with AmbientGAN (right).<br /> <br /> === Block-Pixels ===<br /> With the Block-Pixels measurement function, each pixel is independently set to 0 with probability &lt;math&gt;p&lt;/math&gt;.<br /> <br /> [[File:block-pixels.png]]<br /> <br /> Measurements from the CelebA dataset with &lt;math&gt;p=0.95&lt;/math&gt; (left). Images generated from a GAN trained on unmeasured (via blurring) data (middle). Results generated from AmbientGAN (right).<br /> <br /> === Block-Patch ===<br /> <br /> [[File:block-patch.png]]<br /> <br /> A random 14x14 patch is set to zero (left). Unmeasured using Navier-Stokes inpainting (middle). AmbientGAN (right). <br /> <br /> === Pad-Rotate-Project-&lt;math&gt;\theta&lt;/math&gt; ===<br /> <br /> [[File:pad-rotate-project-theta.png]]<br /> <br /> Results generated by AmbientGAN where the measurement function zero-pads the image, rotates it by &lt;math&gt;\theta&lt;/math&gt;, and projects it onto the x-axis. For each measurement, the value of &lt;math&gt;\theta&lt;/math&gt; is known. <br /> <br /> The generated images have only the basic features of a face, and this is referred to as a failure case in the paper. However, the model performs relatively well given how lossy the measurement function is. <br /> <br /> For the Keep-Patch measurement model, no pixels outside a box are known and thus inpainting methods are not suitable.
For the Pad-Rotate-Project-θ measurements, a conventional technique is to sample many angles and use techniques for inverting the Radon transform. However, since only a few projections are observed at a time, these methods aren’t readily applicable; hence, it is unclear how to obtain an approximate inverse for the results shown below. <br /> <br /> [[File:keep-patch.png]]<br /> <br /> === Explanation of Inception Score ===<br /> To evaluate GAN performance, the authors make use of the inception score, a metric introduced by Salimans et al. (2016). To evaluate the inception score on a datapoint, a pre-trained inception classification model (Szegedy et al. 2016) is applied to that datapoint, and the KL divergence between its label distribution conditional on the datapoint and its marginal label distribution is computed. The inception score is the exponential of the average of this KL divergence over datapoints. The idea is that meaningful images should be recognized by the inception model as belonging to some class, and so the conditional distribution should have low entropy, while the model should produce a variety of images, so the marginal should have high entropy. Thus an effective GAN should have a high inception score.<br /> <br /> === MNIST Inception ===<br /> <br /> [[File:MNIST-inception.png]]<br /> <br /> AmbientGAN was compared with baselines through training several models with different probability &lt;math&gt;p&lt;/math&gt; of blocking pixels. The plot on the left shows that the inception scores change as the block probability &lt;math&gt;p&lt;/math&gt; changes. All four models are similar when no pixels are blocked &lt;math&gt;(p=0)&lt;/math&gt;. As the blocking probability increases, the AmbientGAN models remain relatively stable and perform better than the baseline models. Therefore, AmbientGAN is more robust than all other baseline models.<br /> <br /> The plot on the right shows how the inception scores change as the standard deviation of the additive Gaussian noise increases.
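The inception score described above can be sketched directly from an array of predicted class probabilities; the toy probability arrays below stand in for the outputs of the pre-trained inception model:

```python
import numpy as np

def inception_score(cond_probs):
    """Inception score from an array of p(y|x) rows (one per generated image).

    Score = exp( mean_x KL( p(y|x) || p(y) ) ), where p(y) is the marginal
    over the batch. It is high when each row is confident (low conditional
    entropy) and the rows together cover many classes (high marginal entropy).
    """
    marginal = cond_probs.mean(axis=0)                                   # p(y)
    kl = np.sum(cond_probs * (np.log(cond_probs) - np.log(marginal)), axis=1)
    return np.exp(kl.mean())

# Confident and diverse predictions -> score approaches the number of classes.
diverse = np.eye(4) * 0.96 + 0.01            # each row sums to 1
# Identical predictions -> conditional equals marginal -> score is exactly 1.
collapsed = np.tile([0.97, 0.01, 0.01, 0.01], (4, 1))

score_diverse = inception_score(diverse)
score_collapsed = inception_score(collapsed)
```

With four classes the score is bounded above by 4, which is why the MNIST scores reported below top out near 10 (ten digit classes).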
Baselines perform better when the noise is small. As the variance increases, the AmbientGAN models perform much better compared to the baseline models. Further, AmbientGAN retains high inception scores as measurements become more and more lossy.<br /> <br /> For 1D projections, the Pad-Rotate-Project model achieved an inception score of 4.18, and the Pad-Rotate-Project-θ model achieved an inception score of 8.12, which is close to the vanilla GAN score of 8.99.<br /> <br /> === CIFAR-10 Inception ===<br /> <br /> [[File:CIFAR-inception.png]]<br /> <br /> AmbientGAN is faster to train and more robust even on more complex distributions such as CIFAR-10. Similar trends were observed on the CIFAR-10 data, and AmbientGAN maintains a relatively stable inception score as the block probability is increased.<br /> <br /> === Robustness To Measurement Model ===<br /> <br /> In order to empirically gauge robustness to measurement modelling error, the authors used the block-pixels measurement model: the image dataset was measured with &lt;math&gt; p^* = 0.5 &lt;/math&gt;, and several versions of the model were trained, each using a different value of the blocking probability &lt;math&gt; p &lt;/math&gt;. The inception scores were calculated and plotted as a function of &lt;math&gt; p &lt;/math&gt;. This is shown on the left below:<br /> <br /> [[File:robustnessambientgan.png | 800px]]<br /> <br /> The authors observe that the inception score peaks when the model uses the correct probability, but decreases smoothly as the probability moves away, demonstrating some robustness.<br /> <br /> = Theoretical Results =<br /> <br /> The theoretical results in the paper prove that the true underlying distribution &lt;math&gt;p_x^r&lt;/math&gt; can be recovered when the data comes from the Gaussian-Projection measurement, the Fourier transform measurement, or the Block-Pixels measurement.
They do this by showing that the distribution of the measurements &lt;math&gt;p_y^r&lt;/math&gt; corresponds to a unique distribution &lt;math&gt;p_x^r&lt;/math&gt;. Thus, even when the measurement itself is non-invertible, the effect of the measurement on the distribution &lt;math&gt;p_x^r&lt;/math&gt; is invertible. Lemma 5.1 ensures this is sufficient to provide the AmbientGAN training process with a consistency guarantee. For full proofs of the results, please see Appendix A. <br /> <br /> === Lemma 5.1 === <br /> Let &lt;math&gt;p_x^r&lt;/math&gt; be the true data distribution, and &lt;math&gt;p_\theta&lt;/math&gt; be the distribution over the parameters of the measurement function. Let &lt;math&gt;p_y^r&lt;/math&gt; be the induced measurement distribution. <br /> <br /> Assume for &lt;math&gt;p_\theta&lt;/math&gt; there is a unique probability distribution &lt;math&gt;p_x^r&lt;/math&gt; that induces &lt;math&gt;p_y^r&lt;/math&gt;. <br /> <br /> Then for the standard GAN model, if the discriminator &lt;math&gt;D&lt;/math&gt; is optimal such that &lt;math&gt;D(\cdot) = \frac{p_y^r(\cdot)}{p_y^r(\cdot) + p_y^g(\cdot)}&lt;/math&gt;, then a generator &lt;math&gt;G&lt;/math&gt; is optimal if and only if &lt;math&gt;p_x^g = p_x^r&lt;/math&gt;. <br /> <br /> === Theorem 5.2 ===<br /> For the Gaussian-Projection measurement model, there is a unique underlying distribution &lt;math&gt;p_x^{r} &lt;/math&gt; that can induce the observed measurement distribution &lt;math&gt;p_y^{r} &lt;/math&gt;.<br /> <br /> === Theorem 5.3 ===<br /> Let &lt;math&gt; \mathcal{F} (\cdot) &lt;/math&gt; denote the Fourier transform and let &lt;math&gt;supp (\cdot) &lt;/math&gt; be the support of a function. Consider the Convolve+Noise measurement model with the convolution kernel &lt;math&gt; k &lt;/math&gt; and additive noise distribution &lt;math&gt;p_\theta &lt;/math&gt;.
If &lt;math&gt; supp( \mathcal{F} (k))^{c}=\emptyset &lt;/math&gt; and &lt;math&gt; supp( \mathcal{F} (p_\theta))^{c}=\emptyset &lt;/math&gt;, then there is a unique distribution &lt;math&gt;p_x^{r} &lt;/math&gt; that can induce the measurement distribution &lt;math&gt;p_y^{r} &lt;/math&gt;.<br /> <br /> === Theorem 5.4 ===<br /> Assume that each image pixel takes values in a finite set P. Thus &lt;math&gt;x \in P^n \subset \mathbb{R}^{n} &lt;/math&gt;. Assume &lt;math&gt;0 \in P &lt;/math&gt;, and consider the Block-Pixels measurement model with &lt;math&gt;p &lt;/math&gt; being the probability of blocking a pixel. If &lt;math&gt;p &lt;1&lt;/math&gt;, then there is a unique distribution &lt;math&gt;p_x^{r} &lt;/math&gt; that can induce the measurement distribution &lt;math&gt;p_y^{r} &lt;/math&gt;. Further, for any &lt;math&gt; \epsilon &gt; 0, \delta \in (0, 1] &lt;/math&gt;, given a dataset of<br /> \begin{equation}<br /> s=\Omega \left( \frac{|P|^{2n}}{(1-p)^{2n} \epsilon^{2}} \log \left( \frac{|P|^{n}}{\delta} \right) \right)<br /> \end{equation}<br /> IID measurement samples from &lt;math&gt;p_y^r&lt;/math&gt;, if the discriminator D is optimal, then with probability &lt;math&gt; \geq 1 - \delta &lt;/math&gt; over the dataset, any optimal generator G must satisfy &lt;math&gt; d_{TV} \left( p^g_x , p^r_x \right) \leq \epsilon &lt;/math&gt;, where &lt;math&gt; d_{TV} \left( \cdot, \cdot \right) &lt;/math&gt; is the total variation distance.<br /> <br /> = Conclusion =<br /> Generative models are powerful tools, but constructing a generative model requires a large, high-quality dataset of the distribution of interest. The authors show how to relax this requirement by learning a distribution from a dataset that only contains incomplete, noisy measurements of the distribution.
This allows for the construction of new generative models of distributions for which no high-quality dataset exists.<br /> <br /> = Future Research =<br /> <br /> One critical weakness of AmbientGAN is the assumption that the measurement model is known and that this &lt;math&gt;f_\theta&lt;/math&gt; is also differentiable. It would be nice to be able to train an AmbientGAN model when we have an unknown measurement model but also a small sample of unmeasured data, or at the very least to remove the differentiability restriction from &lt;math&gt;f_\theta&lt;/math&gt;.<br /> <br /> A related piece of work can be found [https://arxiv.org/abs/1802.01284 here]. In particular, Algorithm 2 in the paper excluding the discriminator is similar to AmbientGAN.<br /> <br /> =Open Source Code=<br /> An implementation of AmbientGAN can be found here: https://github.com/AshishBora/ambient-gan.<br /> <br /> = References =<br /> # https://openreview.net/forum?id=Hy7fDog0b<br /> # Salimans, Tim, et al. &quot;Improved techniques for training gans.&quot; Advances in Neural Information Processing Systems. 2016.<br /> # Szegedy, Christian, et al. &quot;Rethinking the inception architecture for computer vision.&quot; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016.<br /> # Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv:1312.6114, 2013.<br /> # Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016a.<br /> # Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pp. 2672–2680, 2014.<br /> # Matheus Gadelha, Subhransu Maji, and Rui Wang. 3d shape induction from 2d views of multiple objects.
arXiv preprint arXiv:1612.05872, 2016.</div> Jssambee http://wiki.math.uwaterloo.ca/statwiki/index.php?title=One-Shot_Imitation_Learning&diff=36148 One-Shot Imitation Learning 2018-04-04T04:14:35Z <p>Jssambee: /* Criticisms */</p> <hr /> <div>= Introduction =<br /> Robotic systems can be used for many applications, but to be truly useful for complex applications, they need to overcome two challenges: having the intent of the task at hand communicated to them, and being able to perform the manipulations necessary to complete this task. It is preferable to use demonstration to teach the robotic systems rather than natural language, as natural language may often fail to convey the details and intricacies required for the task. However, current work on learning from demonstrations is only successful with large amounts of feature engineering or a large number of demonstrations. The proposed model aims to achieve 'one-shot' imitation learning, i.e. learning to complete a new task from just a single demonstration of it without any other supervision. As input, the proposed model takes the observation of the current instance of a task, and a demonstration of successfully solving a different instance of the same task. Strong generalization was achieved by using a soft attention mechanism on both the sequence of actions and states that the demonstration consists of, as well as on the vector of element locations within the environment.
The success of this proposed model at completing a series of block stacking tasks can be viewed at http://bit.ly/nips2017-oneshot.<br /> <br /> = Related Work =<br /> While one-shot imitation learning is a novel combination of ideas, each of the components has previously been studied.<br /> * Imitation Learning: <br /> ** Behavioural learning uses supervised learning to map from observations to actions (e.g. [https://papers.nips.cc/paper/95-alvinn-an-autonomous-land-vehicle-in-a-neural-network.pdf (Pomerleau 1988)], [https://arxiv.org/pdf/1011.0686.pdf (Ross et. al 2011)])<br /> ** Inverse reinforcement learning estimates a reward function that considers demonstrations as optimal behavior (e.g. [http://ai.stanford.edu/~ang/papers/icml00-irl.pdf (Ng et. al 2000)])<br /> * One-Shot Learning:<br /> ** Typically a form of meta-learning<br /> ** Previously used for variety of tasks but all domain-specific<br /> ** [https://arxiv.org/abs/1703.03400 (Finn et al. 2017)] proposed a generic solution but excluded imitation learning<br /> * Reinforcement Learning:<br /> ** Demonstrated to work on variety of tasks and environments, in particular on games and robotic control<br /> ** Requires large amount of trials and a user-specified reward function<br /> * Multi-task/Transfer Learning:<br /> ** Shown to be particularly effective at computer vision tasks<br /> ** Not meant for one-shot learning<br /> * Attention Modelling:<br /> ** The proposed model makes use of the attention model from [https://arxiv.org/abs/1409.0473 (Bahdanau et al. 2016)]<br /> ** The attention modelling over demonstration is similar in nature to the seq2seq models from the well known [https://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf (Sutskever et al. 2014)]<br /> <br /> = One-Shot Imitation Learning =<br /> <br /> [[File:oneshot1.jpg|1000px]]<br /> <br /> The figure above shows the differences between traditional and one-shot imitation learning. 
In a), the traditional method may require training different policies for tasks that are similar in nature, for example, stacking blocks to a height of 2 versus a height of 3. In b), one-shot imitation learning allows the same policy to be used for these tasks given a single demonstration, achieving good performance without any additional system interactions. In c), the policy is trained by using a set of different training tasks, with enough examples so that the learned results can be generalized to other similar tasks. Each task has a set of successful demonstrations. Each iteration of training uses two demonstrations from a task: one is passed as input to the algorithm, while observation-action pairs from the other serve as training targets, so that the policy, conditioned on the first demonstration, learns to produce the correct actions.<br /> <br /> == Problem Formalization ==<br /> The problem is briefly formalized with the authors describing a distribution of tasks, an individual task, a distribution of demonstrations for this task, and a single demonstration respectively as $T, t\sim T, D(t), d\sim D(t)$<br /> In addition, an action, an observation, parameters, and a policy are respectively defined as $a, o, \theta, \pi_\theta(a|o,d)$<br /> In particular, a demonstration is a sequence of observation and action pairs $d = [(o_1, a_1), (o_2, a_2), \ldots, (o_T, a_T)]$<br /> Assuming that $T$ and some evaluation function $R_t(d): R^T \rightarrow R$ are given, and that successful demonstrations are available for each task, then the objective is to maximize the expectation of the policy performance over $t\sim T, d\sim D(t)$.<br /> <br /> == Block Stacking Tasks ==<br /> The task that the authors focus on is block stacking. A user specifies in what final configuration cubic blocks should be stacked, and the goal is to use a 7-DOF Fetch robotic arm to arrange the blocks in this configuration. The number of blocks, and their desired configuration (i.e.
number of towers, the height of each tower, and order of blocks within each tower) can be varied and encoded as a string. For example, 'abc def' would signify 2 towers of height 3, with block A on block B on block C in one tower, and block D on block E on block F in a second tower. To add complexity, the initial configuration of the blocks can vary and is encoded as a set of 3-dimensional vectors describing the position of each block relative to the robotic arm.<br /> <br /> == Algorithm ==<br /> To avoid needing to specify a reward function, the authors use behavioral cloning and DAGGER, two imitation learning methods that require only demonstrations, for training. In each training step, a list of tasks is sampled, and for each, a demonstration with injected noise along with some observation-action pairs are sampled. Given the current observation and demonstration as input, the policy is trained against the sampled actions by minimizing the L2 norm for continuous actions and cross-entropy for discrete ones.
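The cloning loss just described can be sketched as follows; the split of the action into a continuous part and a single discrete head (e.g. the gripper), along with all names and values, is illustrative rather than the paper's exact formulation:

```python
import numpy as np

def cloning_loss(pred_cont, target_cont, pred_logits, target_class):
    """Imitation loss on one sampled observation-action pair.

    Continuous action dimensions are penalized with the squared L2 norm;
    the discrete action uses softmax cross-entropy against the
    demonstrated class.
    """
    l2 = np.sum((pred_cont - target_cont) ** 2)
    # softmax cross-entropy, shifted by the max logit for numerical stability
    logits = pred_logits - pred_logits.max()
    log_probs = logits - np.log(np.sum(np.exp(logits)))
    ce = -log_probs[target_class]
    return l2 + ce

loss = cloning_loss(
    pred_cont=np.array([0.1, -0.2, 0.05]),    # predicted end-effector motion
    target_cont=np.array([0.0, -0.25, 0.0]),  # demonstrated action
    pred_logits=np.array([2.0, -1.0]),        # discrete-action scores
    target_class=0,
)
```

In training, this loss would be minimized over sampled observation-action pairs with an optimizer such as the one named in the next paragraph.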
Adamax is used as the optimizer with a learning rate of 0.001.<br /> <br /> = Architecture =<br /> The authors propose a novel architecture for imitation learning, consisting of 3 networks.<br /> <br /> While, in principle, a generic neural network could learn the mapping from demonstration and current observation to appropriate action, the authors propose the following architecture which they claim as one of the main contributions of this paper, and believe it would be useful for complex tasks in the future.<br /> The proposed architecture consists of three modules: the demonstration network, the context network, and the manipulation network.<br /> <br /> [[File:oneshot2.jpg|1000px|center]]<br /> <br /> == Demonstration Network ==<br /> This network takes a demonstration as input and produces an embedding with size linearly proportional to the number of blocks and the size of the demonstration.<br /> === Temporal Dropout ===<br /> Since a demonstration for block stacking can be very long, the authors randomly discard 95% of the time steps, a process they call 'temporal dropout'. The reduced size of the demonstrations allows multiple trajectories to be explored during testing to calculate an ensemble estimate. Dilated temporal convolutions and neighborhood attention are then repeatedly applied to the downsampled demonstrations.<br /> <br /> === Neighborhood Attention ===<br /> Since demonstration sizes can vary, a mechanism is needed that is not restricted to fixed-length inputs. While soft attention is one such mechanism, the problem with it is that there may be increasingly large amounts of information lost if soft attention is used to map longer demonstrations to the same fixed length as shorter demonstrations. 
As a solution, the authors propose having the same number of outputs as inputs, but with attention performed on other inputs relative to the current input.<br /> <br /> A query &lt;math&gt;q&lt;/math&gt;, a list of context vectors &lt;math&gt;\{c_j\}&lt;/math&gt;, and a list of memory vectors &lt;math&gt;\{m_j\}&lt;/math&gt; are given as input to soft attention. Each attention weight is given by the inner product of a learned weight vector and a nonlinearity applied to the sum of the query and the corresponding context vector. The output of the soft attention is the sum of the memory vectors weighted by the softmaxed weights.<br /> <br /> $Inputs: q, \{c_j\}, \{m_j\}$<br /> $Weights: w_i \leftarrow v^T \tanh(q + c_i)$<br /> $Output: \sum_i{m_i\frac{\exp(w_i)}{\sum_j{\exp(w_j)}}}$<br /> <br /> A list of same-length embeddings, coming from a previous neighborhood attention layer or a projection from the list of block coordinates, is given as input to neighborhood attention. For each block, two separate linear layers produce a query vector and a context vector, while a memory vector is a list of tuples that describe the position of each block joined with the input embedding for that block. Soft attention is then performed on this query, context vector, and memory vector. The authors claim that the intuition behind this process is to allow each block to provide information about itself relative to the other blocks in the environment.
Finally, for each block, a linear transformation is performed on the vector formed by concatenating the input embedding, the result of the soft attention for that block, the block's coordinates, and the robot's state.<br /> <br /> For an environment with B blocks:<br /> $State: s$<br /> $Block_i: b_i \leftarrow (x_i, y_i, z_i)$<br /> $Embeddings: h_1^{in}, ..., h_B^{in}$ <br /> $Query_i: q_i \leftarrow Linear(h_i^{in})$<br /> $Context_i: c_i \leftarrow Linear(h_i^{in})$<br /> $Memory_i: m_i \leftarrow (b_i, h_i^{in})$<br /> $Result_i: result_i \leftarrow SoftAttn(q_i, \{c_j\}_{j=1}^B, \{m_k\}_{k=1}^B)$<br /> $Output_i: output_i \leftarrow Linear(concat(h_i^{in}, result_i, b_i, s))$<br /> <br /> == Context network ==<br /> This network takes the current state and the embedding produced by the demonstration network as inputs, and outputs a fixed-length &quot;context embedding&quot; which captures only the information relevant for the manipulation network at this particular step.<br /> === Attention over demonstration ===<br /> The current state is used to compute a query vector, which is then used for attending over all the steps of the embedding. Since at each time step there are multiple blocks, the weights for each are summed together to produce a scalar for each time step. Neighbourhood attention is then applied several times, using an LSTM with untied weights, since the information at each time step needs to be propagated to each block's embedding. <br /> <br /> Performing attention over the demonstration yields a vector whose size is independent of the demonstration length; however, it is still dependent on the number of blocks in the environment, so it is natural to now attend over the state in order to get a fixed-length vector.<br /> === Attention over current state ===<br /> The authors propose that, in general, within each subtask only a limited number of blocks are relevant for performing the subtask. 
If the subtask is to stack A on B, then intuitively, one would suppose that only blocks A and B are relevant, and perhaps any blocks that may be blocking access to either A or B. This is not enforced during training, but once soft attention is applied to the current state to produce a fixed-length context embedding, the authors believe that the model does indeed learn in this way.<br /> <br /> == Manipulation network ==<br /> Given the context embedding as input, this simple feedforward network decides on the particular action needed to complete the subtask of stacking one particular 'source' block on top of another 'target' block.<br /> <br /> = Experiments = <br /> The proposed model was tested on the block stacking tasks. The experiments were designed to answer the following questions:<br /> * How does training with behavioral cloning compare with DAGGER?<br /> * How does conditioning on the entire demonstration compare to conditioning on the final state?<br /> * How does conditioning on the entire demonstration compare to conditioning on a “snapshot” of the trajectory?<br /> * Can the authors' framework generalize to tasks that it has never seen during training?<br /> For the experiments, 140 training tasks and 43 testing tasks were collected, each with between 2 and 10 blocks and a different desired final layout. Over 1000 demonstrations for each task were collected using a hard-coded policy rather than a human user. The authors compare 4 different architectures in these experiments:<br /> * Behavioural cloning used to train the proposed model<br /> * DAGGER used to train the proposed model<br /> * The proposed model, trained with DAGGER, but conditioned on the desired final state rather than an entire demonstration<br /> * The proposed model, trained with DAGGER, but conditioned on a 'snapshot' of the environment at the end of each subtask (i.e. 
every time a block is stacked on another block)<br /> <br /> == Performance Evaluation ==<br /> [[File:oneshot3.jpg|1000px]]<br /> <br /> The most confident action at each timestep is chosen in 100 different task configurations, and results are averaged over tasks that had the same number of blocks. The results suggest that the performance of each of the architectures is comparable to that of the hard-coded policy which they aim to imitate. Performance degrades similarly across all architectures and the hard-coded policy as the number of blocks increases. On the harder tasks, conditioning on the entire demonstration led to better performance than conditioning on snapshots or on the final state. The authors believe that this may be due to the lack of information when conditioning only on the final state, as well as due to regularization caused by temporal dropout, which leads to data augmentation when conditioning on the full demonstration but is omitted when conditioning only on the snapshots or final state. Both DAGGER and behavioral cloning performed comparably well. As mentioned above, noise injection was used in training to improve performance; in practice, additional noise can still be injected, but some may already come from other sources.<br /> <br /> == Visualization ==<br /> The authors visualize the attention mechanisms underlying the main policy architecture to better understand how it operates. There are two kinds of attention that the authors are mainly interested in: one where the policy attends to different time steps in the demonstration, and one where the policy attends to different blocks in the current state. The figures below show some of the policy attention heatmaps over time.<br /> <br /> [[File:paper6_Visualization.png|800px]]<br /> <br /> = Conclusions =<br /> The proposed model successfully learns to complete new instances of a new task from just a single demonstration. 
The model was demonstrated to work on a series of block stacking tasks. The authors propose several extensions, including enabling few-shot learning when one demonstration is insufficient, using image data as the demonstrations, and attempting many other tasks aside from block stacking.<br /> <br /> = Criticisms =<br /> While the paper shows an incredibly impressive result: the ability to learn a new task from just a single demonstration, there are a few points that need clearing up.<br /> Firstly, the authors use a hard-coded policy in their experiments rather than a human. It is clear that the performance of this policy begins to degrade quickly as the complexity of the task increases. It would be useful to know what this hard-coded policy actually was, and whether the proposed model could still have comparable performance if a more successful demonstration, perhaps one by a human user, were performed. Given the current popularity of adversarial examples, it would also be interesting to see the performance when conditioned on an &quot;adversarial&quot; demonstration that achieves the correct final state but intentionally performs complex or obfuscated steps to get there.<br /> Second, it would be useful to see the model's performance on a more complex family of tasks than block stacking, since although each block stacking task is slightly different, the differences may turn out to be insignificant compared to other tasks that this model should work on if it is to be a general imitation learning architecture; intuitively, the space of all possible moves and configurations is not large for the task. The one-shot framing is also somewhat misleading: many demonstrations across many tasks seem to be needed to first train a generic policy that generalizes, and only then can a single demonstration of a new task be expected to work, so some form of pre-training is involved here. 
Regardless, this work is a big step forward for imitation learning, permitting a wider range of tasks, for which there is little training data and no reward function available, to still be successfully solved.<br /> <br /> = Illustrative Example: Particle Reaching =<br /> <br /> [[File:f1.png]]<br /> <br /> Figure 1: [Left] Agent, [Middle] Orange square is target, [Right] Green triangle is target.<br /> <br /> Another simple yet insightful example of One-Shot Imitation Learning is the particle reaching problem, which provides a relatively simple suite of tasks from which the network needs to solve an arbitrary one. The problem is formulated such that, for each task, there is an agent which can move based on a 2D force vector, and n landmarks at varying 2D locations (n varies from task to task), with the goal of moving the agent to the specific landmark reached in the demonstration. This is illustrated in Figure 1. <br /> <br /> [[File:f2.png|450px]]<br /> <br /> Figure 2: Experimental results.<br /> <br /> Some insight comes from the use of different network architectures to solve this problem. The three architectures compared (described below) are plain LSTM, LSTM with attention, and final state with attention. The key insight is that the architectures go from generic to specific, with the best generalization performance achieved by the most specific architecture, final state with attention, as seen in Figure 2. It is important to note that this conclusion does not carry forward to more complicated tasks such as the block stacking task.<br /> *Plain LSTM: 512 hidden units, with the input being the demonstration trajectory (the position of the agent changes over time and approaches one of the targets). The output of the LSTM, together with the current state (from the task to be solved), is the input for a multi-layer perceptron (MLP) for finding the solution.<br /> *LSTM with attention: Output of the LSTM is now a set of weights for the different targets during training. 
These weights and the test state are used in the test task. The resulting 2D output is the input for an MLP as before.<br /> *Final state with attention: Looks only at the final state of the demonstration, since it can sufficiently provide the needed detail of which target to reach (the trajectory is not required). Similar to the previous architecture, it produces weights used by the MLP.<br /> <br /> = Source =<br /> # Bahdanau, Dzmitry, Kyunghyun Cho, and Yoshua Bengio. &quot;Neural machine translation by jointly learning to align and translate.&quot; arXiv preprint arXiv:1409.0473 (2014).<br /> # Duan, Yan, Marcin Andrychowicz, Bradly Stadie, OpenAI Jonathan Ho, Jonas Schneider, Ilya Sutskever, Pieter Abbeel, and Wojciech Zaremba. &quot;One-shot imitation learning.&quot; In Advances in Neural Information Processing Systems, pp. 1087-1098. 2017.<br /> # Y. Duan, M. Andrychowicz, B. Stadie, J. Ho, J. Schneider, I. Sutskever, P. Abbeel, and W. Zaremba. &quot;One-shot imitation learning.&quot; arXiv preprint arXiv:1703.07326, 2017. (Newer revision)<br /> # Finn, Chelsea, Pieter Abbeel, and Sergey Levine. &quot;Model-agnostic meta-learning for fast adaptation of deep networks.&quot; arXiv preprint arXiv:1703.03400 (2017).<br /> # Sutskever, Ilya, Oriol Vinyals, and Quoc V. Le. &quot;Sequence to sequence learning with neural networks.&quot; Advances in Neural Information Processing Systems. 2014.</div> Jssambee http://wiki.math.uwaterloo.ca/statwiki/index.php?title=stat946w18/Tensorized_LSTMs&diff=36063 stat946w18/Tensorized LSTMs 2018-04-03T06:49:29Z <p>Jssambee: LSTM Behaviour for different tasks</p> <hr /> <div>= Presented by =<br /> <br /> Chen, Weishi (Edward)<br /> <br /> = Introduction =<br /> <br /> Long Short-Term Memory (LSTM) is a popular approach to boosting the ability of Recurrent Neural Networks to store longer-term temporal information. The capacity of an LSTM network can be increased by widening and adding layers (illustrations will be provided later). 
<br /> <br /> <br /> However, widening the layers introduces additional parameters, while adding layers increases the time required for model training and evaluation. As an alternative, this paper, &quot;Wider and Deeper, Cheaper and Faster: Tensorized LSTMs for Sequence Learning&quot;, proposes a model based on the LSTM called the '''Tensorized LSTM''', in which the hidden states are represented by '''tensors''' and updated via a '''cross-layer convolution'''. <br /> <br /> * By increasing the tensor size, the network can be widened efficiently without additional parameters, since the parameters are shared across different locations in the tensor<br /> * By delaying the output, the network can be deepened implicitly with little additional run-time, since deep computations for each time step are merged into temporal computations of the sequence. <br /> <br /> <br /> The paper also presents experiments conducted on five challenging sequence learning tasks to show the potential of the proposed model.<br /> <br /> = A Quick Introduction to RNN and LSTM =<br /> <br /> We consider the time-series prediction task of producing a desired output &lt;math&gt;y_t&lt;/math&gt; at each time-step t∈ {1, ..., T} given an observed input sequence &lt;math&gt;x_{1:t} = {x_1,x_2, ···, x_t}&lt;/math&gt;, where &lt;math&gt;x_t∈R^R&lt;/math&gt; and &lt;math&gt;y_t∈R^S&lt;/math&gt; are vectors. 
An RNN learns to use a hidden state vector &lt;math&gt;h_t ∈ R^M&lt;/math&gt; to encapsulate the relevant features of the entire input history &lt;math&gt;x_{1:t}&lt;/math&gt; (all inputs from the initial time-step up to the step before prediction; illustration given below) up to time-step t.<br /> <br /> \begin{align}<br /> h_{t-1}^{cat} = [x_t, h_{t-1}] \hspace{2cm} (1)<br /> \end{align}<br /> <br /> Where &lt;math&gt;h_{t-1}^{cat} ∈R^{R+M}&lt;/math&gt; is the concatenation of the current input &lt;math&gt;x_t&lt;/math&gt; and the previous hidden state &lt;math&gt;h_{t−1}&lt;/math&gt;, which expands the dimensionality of intermediate information.<br /> <br /> The update of the hidden state &lt;math&gt;h_t&lt;/math&gt; is defined as:<br /> <br /> \begin{align}<br /> a_{t} =h_{t-1}^{cat} W^h + b^h \hspace{2cm} (2)<br /> \end{align}<br /> <br /> and<br /> <br /> \begin{align}<br /> h_t = \Phi(a_t) \hspace{2cm} (3)<br /> \end{align}<br /> <br /> &lt;math&gt;W^h∈R^{(R+M)×M}&lt;/math&gt; guarantees each hidden state provided by the previous step is of dimension M. &lt;math&gt; a_t ∈R^M &lt;/math&gt; is the hidden activation, and φ(·) is the element-wise hyperbolic tangent. Finally, the output &lt;math&gt; y_t &lt;/math&gt; at time-step t is generated by:<br /> <br /> \begin{align}<br /> y_t = \varphi(h_{t} W^y + b^y) \hspace{2cm} (4)<br /> \end{align}<br /> <br /> where &lt;math&gt;W^y∈R^{M×S}&lt;/math&gt; and &lt;math&gt;b^y∈R^S&lt;/math&gt;, and &lt;math&gt;\varphi(·)&lt;/math&gt; can be any differentiable function. Note that &lt;math&gt;\phi&lt;/math&gt; is a non-linear, element-wise function which generates the hidden output, while &lt;math&gt;\varphi&lt;/math&gt; generates the final network output.<br /> <br /> [[File:StdRNN.png|650px|center||Figure 1: Recurrent Neural Network]]<br /> <br /> One shortfall of RNNs is the problem of vanishing/exploding gradients. This shortfall is significant, especially when modeling long-range dependencies. 
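The plain RNN update in equations (1)–(4) above can be sketched as follows; the dimensions R, M, S, the random weights, and the identity output function are illustrative assumptions.

```python
import numpy as np

R, M, S = 4, 8, 3  # input, hidden, and output dimensions

rng = np.random.default_rng(0)
Wh = rng.normal(size=(R + M, M)) * 0.1   # W^h in R^{(R+M) x M}
bh = np.zeros(M)
Wy = rng.normal(size=(M, S)) * 0.1       # W^y in R^{M x S}
by = np.zeros(S)

def rnn_step(x_t, h_prev):
    h_cat = np.concatenate([x_t, h_prev])  # eq (1): concatenate input and state
    a_t = h_cat @ Wh + bh                  # eq (2): hidden activation
    h_t = np.tanh(a_t)                     # eq (3): Phi = element-wise tanh
    y_t = h_t @ Wy + by                    # eq (4): varphi = identity here
    return h_t, y_t

# Run the recurrence over a length-5 input sequence.
h = np.zeros(M)
for x in rng.normal(size=(5, R)):
    h, y = rnn_step(x, h)
```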
One alternative is to instead use an LSTM (Long Short-Term Memory), which alleviates these problems by employing several gates to selectively modulate the information flow across each neuron. Since LSTMs have been successfully used in sequence models, it is natural to consider them for accommodating more complex analytical needs.<br /> <br /> [[File:LSTM_Gated.png|650px|center||Figure 2: LSTM]]<br /> <br /> = Structural Measurement of Sequential Model =<br /> <br /> We can consider the capacity of a network as consisting of two components: the '''width''' (the amount of information handled in parallel) and the '''depth''' (the number of computation steps). <br /> <br /> A way to '''widen''' the LSTM is to increase the number of units in a hidden layer; however, the parameter number scales quadratically with the number of units. To '''deepen''' the LSTM, the popular Stacked LSTM (sLSTM) stacks multiple LSTM layers. The drawback of sLSTM, however, is that runtime is proportional to the number of layers and information from the input is potentially lost (due to gradient vanishing/explosion) as it propagates vertically through the layers. This paper introduces a way to both widen and deepen the LSTM whilst keeping the parameter number and runtime largely unchanged. 
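The quadratic scaling mentioned above can be checked with a quick count. The sketch below assumes the standard LSTM parameterization, 4((R+M)M + M) parameters (four gates, each with a weight matrix over the concatenated input and a bias); the exact count for other LSTM variants differs slightly.

```python
def lstm_param_count(R, M):
    """Parameters of a standard LSTM layer: input size R, M hidden units.

    Each of the 4 gates has a weight matrix over the concatenated
    input [x_t, h_{t-1}] (size R + M) plus a bias vector.
    """
    return 4 * ((R + M) * M + M)

# Doubling the hidden size roughly quadruples the dominant M^2 term:
small = lstm_param_count(100, 100)   # M = 100 -> 80,400 parameters
large = lstm_param_count(100, 200)   # M = 200 -> 240,800 parameters
```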
In summary, we make the following contributions:<br /> <br /> '''(a)''' Tensorize RNN hidden state vectors into higher-dimensional tensors to enable more flexible parameter sharing, so that the network can be widened more efficiently without additional parameters.<br /> <br /> '''(b)''' Based on (a), merge RNN deep computations into its temporal computations so that the network can be deepened with little additional runtime, resulting in a Tensorized RNN (tRNN).<br /> <br /> '''(c)''' Extend the tRNN to an LSTM, namely the Tensorized LSTM (tLSTM), which integrates a novel memory cell convolution to help prevent the vanishing/exploding gradients.<br /> <br /> = Method =<br /> <br /> == Part 1: Tensorize RNN hidden State vectors ==<br /> <br /> '''Definition:''' Tensorization is defined as the transformation or mapping of lower-order data to higher-order data. For example, the low-order data can be a vector, and the tensorized result is a matrix, a third-order tensor or a higher-order tensor. The 'low-order' data can also be a matrix or a third-order tensor, for example. In the latter case, tensorization can take place along one or multiple modes.<br /> <br /> [[File:VecTsor.png|320px|center||Figure 3: Third-order tensorization of a vector]]<br /> <br /> '''Optimization Methodology Part 1:''' It can be seen that in an RNN, the parameter number scales quadratically with the size of the hidden state. A popular way to limit the parameter number when widening the network is to organize parameters as higher-dimensional tensors which can be factorized into lower-rank sub-tensors that contain significantly fewer elements, which is known as tensor factorization. 
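As a minimal illustration of the tensorization defined above (the sizes P and M are assumptions for the example), a hidden state vector can be reshaped into a 2D tensor:

```python
import numpy as np

P, M = 4, 3                        # tensor size and channel size
h = np.arange(P * M, dtype=float)  # hidden state vector in R^{P*M}

# Tensorize: map the length-12 vector to a 2D tensor H in R^{P x M}.
H = h.reshape(P, M)

# Parameters can now be shared along the first (P) dimension, so widening
# the network by increasing P does not add parameters.
```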
<br /> <br /> '''Optimization Methodology Part 2:''' Another common way to reduce the parameter number is to share a small set of parameters across different locations in the hidden state, similar to Convolutional Neural Networks (CNNs).<br /> <br /> '''Effects:''' This '''widens''' the network, since the hidden state vectors are in fact broadcast to interact with the tensorized parameters. <br /> <br /> <br /> <br /> We adopt parameter sharing to cut down the parameter number for RNNs, since compared with factorization, it has the following advantages: <br /> <br /> (i) '''Scalability:''' the number of shared parameters can be set independently of the hidden state size<br /> <br /> (ii) '''Separability:''' the information flow can be carefully managed by controlling the receptive field, allowing one to shift RNN deep computations to the temporal domain<br /> <br /> <br /> <br /> We also explicitly tensorize the RNN hidden state vectors, since compared with vectors, tensors offer better: <br /> <br /> (i) '''Flexibility:''' one can specify which dimensions share parameters and then increase the size of those dimensions without introducing additional parameters<br /> <br /> (ii) '''Efficiency:''' with higher-dimensional tensors, the network can be widened faster w.r.t. its depth when fixing the parameter number (explained later). <br /> <br /> <br /> '''Illustration:''' For ease of exposition, we first consider 2D tensors (matrices): we tensorize the hidden state &lt;math&gt;h_t∈R^{M}&lt;/math&gt; to become &lt;math&gt;H_t∈R^{P×M}&lt;/math&gt;, '''where P is the tensor size''' and '''M the channel size'''. We locally-connect the first dimension of &lt;math&gt;H_t&lt;/math&gt; (P, the tensor size) in order to share parameters, and fully-connect the second dimension of &lt;math&gt;H_t&lt;/math&gt; (M, the channel size) to allow global interactions. 
This is analogous to a CNN, which fully-connects one dimension (e.g., the RGB channels of input images) to globally fuse different feature planes. One can also compare &lt;math&gt;H_t&lt;/math&gt; to the hidden state of a Stacked RNN (sRNN) (see the figure below). <br /> <br /> [[File:Screen_Shot_2018-03-26_at_11.28.37_AM.png|160px|center||Figure 4: Stacked RNN]]<br /> <br /> [[File:ind.png|60px|center||Figure 4: Stacked RNN]]<br /> <br /> Then P is akin to the number of stacked hidden layers (vertical length in the graph), and M the size of each hidden layer (each white node in the graph). We begin by describing the model based on 2D tensors, and finally show how to strengthen the model with higher-dimensional tensors.<br /> <br /> == Part 2: Merging Deep Computations ==<br /> <br /> Since an RNN is already deep in its temporal direction, we can deepen an input-to-output computation by associating the input &lt;math&gt;x_t&lt;/math&gt; with a (delayed) future output. In doing this, we need to ensure that the output &lt;math&gt;y_t&lt;/math&gt; is separable, i.e., not influenced by any future input &lt;math&gt;x_{t^{'}}&lt;/math&gt; &lt;math&gt;(t^{'}&gt;t)&lt;/math&gt;. Thus, we concatenate the projection of &lt;math&gt;x_t&lt;/math&gt; to the top of the previous hidden state &lt;math&gt;H_{t−1}&lt;/math&gt;, then gradually shift the input information down as the temporal computation proceeds, and finally generate &lt;math&gt;y_t&lt;/math&gt; from the bottom of &lt;math&gt;H_{t+L−1}&lt;/math&gt;, where L−1 is the number of delayed time-steps for computations of depth L. <br /> <br /> An example with L = 3 is shown in the figure below.<br /> <br /> [[File:tRNN.png|160px|center||Figure 5: skewed sRNN]]<br /> <br /> [[File:ind.png|60px|center||Figure 5: skewed sRNN]]<br /> <br /> <br /> This is in fact a skewed sRNN (or a tRNN without feedback). 
However, the method does not need to change the network structure and also allows different kinds of interactions as long as the output is separable; for example, one can increase the local connections and '''use feedback''' (shown in the figure below), which can be beneficial for sRNNs (or tRNNs). <br /> <br /> [[File:tRNN_wF.png|160px|center||Figure 5: skewed sRNN with F]]<br /> <br /> [[File:ind.png|60px|center||Figure 5: skewed sRNN with F]]<br /> <br /> '''In order to share parameters, we update &lt;math&gt;H_t&lt;/math&gt; using a convolution with a learnable kernel.''' In this manner we increase the complexity of the input-to-output mapping (by delaying outputs) and limit parameter growth (by sharing transition parameters using convolutions).<br /> <br /> To examine the resulting model mathematically, let &lt;math&gt;H^{cat}_{t−1}∈R^{(P+1)×M}&lt;/math&gt; be the concatenated hidden state, and &lt;math&gt;p∈Z_+&lt;/math&gt; a location in the tensor. The channel vector &lt;math&gt;h^{cat}_{t−1, p }∈R^M&lt;/math&gt; at location p of &lt;math&gt;H^{cat}_{t−1}&lt;/math&gt; (the p-th channel of &lt;math&gt;H^{cat}_{t−1}&lt;/math&gt;) is defined as:<br /> <br /> \begin{align}<br /> h^{cat}_{t-1, p} = x_t W^x + b^x \hspace{1cm} if p = 1 \hspace{1cm} (5)<br /> \end{align}<br /> <br /> \begin{align}<br /> h^{cat}_{t-1, p} = h_{t-1, p-1} \hspace{1cm} if p &gt; 1 \hspace{1cm} (6)<br /> \end{align}<br /> <br /> where &lt;math&gt;W^x ∈ R^{R×M}&lt;/math&gt; and &lt;math&gt;b^x ∈ R^M&lt;/math&gt; (recall that the dimension of the input x is R). 
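Equations (5)–(6) amount to stacking a projection of the current input on top of the previous hidden tensor; a minimal NumPy sketch (the shapes and random weights are illustrative assumptions):

```python
import numpy as np

R, M, P = 4, 3, 5                   # input dim, channel size, tensor size
rng = np.random.default_rng(0)
Wx = rng.normal(size=(R, M)) * 0.1  # W^x in R^{R x M}
bx = np.zeros(M)

def concat_hidden(x_t, H_prev):
    """Build H^cat_{t-1} in R^{(P+1) x M} per eqs (5)-(6):
    the projected input occupies the top row (p = 1), and the
    previous hidden rows are shifted down by one (p > 1)."""
    top = (x_t @ Wx + bx)[None, :]                 # eq (5)
    return np.concatenate([top, H_prev], axis=0)   # eq (6)

H_cat = concat_hidden(rng.normal(size=R), np.zeros((P, M)))
```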
Then, the update of the tensor &lt;math&gt;H_t&lt;/math&gt; is implemented via a convolution:<br /> <br /> \begin{align}<br /> A_t = H^{cat}_{t-1} \circledast \{W^h, b^h \} \hspace{2cm} (7)<br /> \end{align}<br /> <br /> \begin{align}<br /> H_t = \Phi(A_t) \hspace{2cm} (8)<br /> \end{align}<br /> <br /> where &lt;math&gt;W^h∈R^{K×M^i×M^o}&lt;/math&gt; is the kernel weight of size K, with &lt;math&gt;M^i =M&lt;/math&gt; input channels and &lt;math&gt;M^o =M&lt;/math&gt; output channels, &lt;math&gt;b^h ∈ R^{M^o}&lt;/math&gt; is the kernel bias, &lt;math&gt;A_t ∈ R^{P×M^o}&lt;/math&gt; is the hidden activation, and &lt;math&gt;\circledast&lt;/math&gt; is the convolution operator. Since the kernel convolves across different hidden layers, we call it the cross-layer convolution. The kernel enables both bottom-up and top-down interactions across layers. Finally, we generate &lt;math&gt;y_t&lt;/math&gt; from the channel vector &lt;math&gt;h_{t+L−1,P}∈R^M&lt;/math&gt;, which is located at the bottom of &lt;math&gt;H_{t+L−1}&lt;/math&gt;:<br /> <br /> \begin{align}<br /> y_t = \varphi(h_{t+L−1,P} W^y + b^y) \hspace{2cm} (9)<br /> \end{align}<br /> <br /> where &lt;math&gt;W^y ∈R^{M×S}&lt;/math&gt; and &lt;math&gt;b^y ∈R^S&lt;/math&gt;. This guarantees that the receptive field of &lt;math&gt;y_t&lt;/math&gt; only covers the current and previous inputs &lt;math&gt;x_{1:t}&lt;/math&gt; (check the skewed sRNN again below):<br /> <br /> [[File:tRNN_wF.png|160px|center||Figure 5: skewed sRNN with F]]<br /> <br /> [[File:ind.png|60px|center||Figure 5: skewed sRNN with F]]<br /> <br /> === Quick Summary of Set of Parameters ===<br /> <br /> '''1. &lt;math&gt; W^x&lt;/math&gt; and &lt;math&gt;b^x&lt;/math&gt;''' connect the input to the first hidden node<br /> <br /> '''2. &lt;math&gt; W^h&lt;/math&gt; and &lt;math&gt;b^h&lt;/math&gt;''' convolve between layers<br /> <br /> '''3. 
&lt;math&gt; W^y&lt;/math&gt; and &lt;math&gt;b^y&lt;/math&gt;''' produce the output at each stage<br /> <br /> <br /> == Part 3: Extending to LSTMs==<br /> <br /> As with the standard RNN, to allow the tRNN (skewed sRNN) to capture long-range temporal dependencies, one can straightforwardly extend it<br /> to a tLSTM by replacing the tRNN update:<br /> <br /> \begin{align}<br /> [A^g_t, A^i_t, A^f_t, A^o_t] = H^{cat}_{t-1} \circledast \{W^h, b^h \} \hspace{2cm} (10)<br /> \end{align}<br /> <br /> \begin{align}<br /> [G_t, I_t, F_t, O_t]= [\Phi{(A^g_t)}, σ(A^i_t), σ(A^f_t), σ(A^o_t)] \hspace{2cm} (11)<br /> \end{align}<br /> <br /> These are similar to the tRNN case; the main difference lies in the memory cells of the tLSTM (&lt;math&gt;C_t&lt;/math&gt;):<br /> <br /> \begin{align}<br /> C_t= G_t \odot I_t + C_{t-1} \odot F_t \hspace{2cm} (12)<br /> \end{align}<br /> <br /> \begin{align}<br /> H_t= \Phi{(C_t )} \odot O_t \hspace{2cm} (13)<br /> \end{align}<br /> <br /> Note that since the previous memory cell &lt;math&gt;C_{t-1}&lt;/math&gt; is only gated along the temporal direction, increasing the tensor size ''P'' might result in the loss of long-range dependencies from the input to the output.<br /> <br /> Summary of the terms: <br /> <br /> 1. '''&lt;math&gt;\{W^h, b^h \}&lt;/math&gt;:''' Kernel of size K <br /> <br /> 2. '''&lt;math&gt;A^g_t, A^i_t, A^f_t, A^o_t \in \mathbb{R}^{P\times M}&lt;/math&gt;:''' Activations for the new content &lt;math&gt;G_t&lt;/math&gt; and the gates<br /> <br /> 3. '''&lt;math&gt;I_t&lt;/math&gt;:''' Input gate<br /> <br /> 4. '''&lt;math&gt;F_t&lt;/math&gt;:''' Forget gate<br /> <br /> 5. '''&lt;math&gt;O_t&lt;/math&gt;:''' Output gate<br /> <br /> 6. 
'''&lt;math&gt;C_t \in \mathbb{R}^{P\times M}&lt;/math&gt;:''' Memory cell<br /> <br /> See the graph below for an illustration:<br /> <br /> [[File:tLSTM_wo_MC.png ‎|160px|center||Figure 5: tLSTM wo MC]]<br /> <br /> [[File:ind.png|60px|center||Figure 5: tLSTM wo MC]]<br /> <br /> To further evolve the tLSTM so that it captures long-range dependencies from multiple directions, we additionally introduce a novel '''memory cell convolution''', by which the memory cells can have a larger receptive field (figure provided below). <br /> <br /> [[File:tLSTM_w_MC.png ‎|160px|center||Figure 5: tLSTM w MC]]<br /> <br /> [[File:ind.png|60px|center||Figure 5: tLSTM w MC]]<br /> <br /> One can also dynamically generate this convolution kernel so that it is both time- and location-dependent, allowing for flexible control over long-range dependencies from different directions. Mathematically, it can be represented with the following formulas:<br /> <br /> \begin{align}<br /> [A^g_t, A^i_t, A^f_t, A^o_t, A^q_t] = H^{cat}_{t-1} \circledast \{W^h, b^h \} \hspace{2cm} (14)<br /> \end{align}<br /> <br /> \begin{align}<br /> [G_t, I_t, F_t, O_t, Q_t]= [\Phi{(A^g_t)}, σ(A^i_t), σ(A^f_t), σ(A^o_t), ς(A^q_t)] \hspace{2cm} (15)<br /> \end{align}<br /> <br /> \begin{align}<br /> W_t^c(p) = reshape(q_{t,p}, [K, 1, 1]) \hspace{2cm} (16)<br /> \end{align}<br /> <br /> \begin{align}<br /> C_{t-1}^{conv}= C_{t-1} \circledast W_t^c(p) \hspace{2cm} (17)<br /> \end{align}<br /> <br /> \begin{align}<br /> C_t= G_t \odot I_t + C_{t-1}^{conv} \odot F_t \hspace{2cm} (18)<br /> \end{align}<br /> <br /> \begin{align}<br /> H_t= \Phi{(C_t )} \odot O_t \hspace{2cm} (19)<br /> \end{align}<br /> <br /> where the kernel &lt;math&gt;{W^h, b^h}&lt;/math&gt; has additional &lt;K&gt; output channels to generate the activation &lt;math&gt;A^q_t ∈ R^{P×&lt;K&gt;}&lt;/math&gt; for the dynamic kernel bank &lt;math&gt;Q_t∈R^{P × &lt;K&gt;}&lt;/math&gt;, 
&lt;math&gt;q_{t,p}∈R^{&lt;K&gt;}&lt;/math&gt; is the vectorized adaptive kernel at location p of &lt;math&gt;Q_t&lt;/math&gt;, and &lt;math&gt;W^c_t(p) ∈ R^{K×1×1}&lt;/math&gt; is the dynamic kernel of size K with a single input/output channel, which is reshaped from &lt;math&gt;q_{t,p}&lt;/math&gt;. Each channel of the previous memory cell &lt;math&gt;C_{t-1}&lt;/math&gt; is convolved with &lt;math&gt;W_t^c(p)&lt;/math&gt;, whose values vary with &lt;math&gt;p&lt;/math&gt;, to form a memory cell convolution, which produces a convolved memory cell &lt;math&gt;C_{t-1}^{conv} \in \mathbb{R}^{P\times M}&lt;/math&gt;. Note that the paper also employs a softmax function ς(·) to normalize the channel dimension of &lt;math&gt;Q_t&lt;/math&gt;, which can also stabilize the values of the memory cells and help to prevent the vanishing/exploding gradients. An illustration is provided below:<br /> <br /> [[File:MCC.png ‎|240px|center||Figure 5: MCC]]<br /> <br /> To improve training, the authors introduced a new normalization technique for ''t''LSTM termed channel normalization (adapted from layer normalization), in which the channel vectors are normalized at different locations with their own statistics. Note that layer normalization does not work well with ''t''LSTM, because lower-level information is near the input and higher-level information is near the output. Channel normalization (CN) is defined as: <br /> <br /> \begin{align}<br /> \mathrm{CN}(\mathbf{Z}; \mathbf{\Gamma}, \mathbf{B}) = \mathbf{\hat{Z}} \odot \mathbf{\Gamma} + \mathbf{B} \hspace{2cm} (20)<br /> \end{align}<br /> <br /> where &lt;math&gt;\mathbf{Z}&lt;/math&gt;, &lt;math&gt;\mathbf{\hat{Z}}&lt;/math&gt;, &lt;math&gt;\mathbf{\Gamma}&lt;/math&gt;, &lt;math&gt;\mathbf{B} \in \mathbb{R}^{P \times M^z}&lt;/math&gt; are the original tensor, normalized tensor, gain parameter and bias parameter, respectively. 
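Channel normalization in equation (20), with each location's channel vector normalized by its own statistics, can be sketched as follows; the shapes and the small epsilon added for numerical stability are illustrative assumptions.

```python
import numpy as np

def channel_norm(Z, Gamma, B, eps=1e-5):
    """Channel normalization (eq. 20): normalize each location's channel
    vector (along the M^z axis) with its own mean and standard deviation,
    then apply the gain Gamma and bias B.

    Z, Gamma, B: arrays of shape (P, Mz).
    """
    mu = Z.mean(axis=1, keepdims=True)     # z^mu in R^P
    sigma = Z.std(axis=1, keepdims=True)   # z^sigma in R^P
    Z_hat = (Z - mu) / (sigma + eps)
    return Z_hat * Gamma + B

P, Mz = 4, 6
Z = np.random.default_rng(0).normal(size=(P, Mz))
out = channel_norm(Z, np.ones((P, Mz)), np.zeros((P, Mz)))
```

With unit gain and zero bias, each location's channel vector in the output has approximately zero mean and unit variance.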
The &lt;math&gt;m^z&lt;/math&gt;-th channel of &lt;math&gt;\mathbf{Z}&lt;/math&gt; is normalized element-wise: <br /> <br /> \begin{align}<br /> \hat{z_{m^z}} = (z_{m^z} - z^\mu)/z^{\sigma} \hspace{2cm} (21)<br /> \end{align}<br /> <br /> where &lt;math&gt;z^{\mu}&lt;/math&gt;, &lt;math&gt;z^{\sigma} \in \mathbb{R}^P&lt;/math&gt; are the mean and standard deviation along the channel dimension of &lt;math&gt;\mathbf{Z}&lt;/math&gt;, and &lt;math&gt;\hat{z_{m^z}} \in \mathbb{R}^P&lt;/math&gt; is the &lt;math&gt;m^z&lt;/math&gt;-th channel of &lt;math&gt;\mathbf{\hat{Z}}&lt;/math&gt;. Channel normalization introduces very few additional parameters compared to the number of other parameters in the model.<br /> <br /> = Results and Evaluation =<br /> <br /> Summary of the tLSTM model family (may be useful later):<br /> <br /> (a) sLSTM (baseline): the implementation of sLSTM with parameters shared across all layers.<br /> <br /> (b) 2D tLSTM: the standard 2D tLSTM.<br /> <br /> (c) 2D tLSTM–M: removing (–) memory (M) cell convolutions from (b).<br /> <br /> (d) 2D tLSTM–F: removing (–) feedback (F) connections from (b).<br /> <br /> (e) 3D tLSTM: tensorizing (b) into a 3D tLSTM.<br /> <br /> (f) 3D tLSTM+LN: applying (+) Layer Normalization.<br /> <br /> (g) 3D tLSTM+CN: applying (+) Channel Normalization.<br /> <br /> === Efficiency Analysis ===<br /> <br /> '''Fundamentals:''' For each configuration, fix the parameter number and increase the tensor size to see if the performance of tLSTM can be boosted without increasing the parameter number. One can also investigate how the runtime is affected by the depth, where the runtime is measured by the average GPU milliseconds spent by a forward and backward pass over one timestep of a single example. <br /> <br /> '''Dataset:''' The Hutter Prize Wikipedia dataset consists of 100 million characters drawn from 205 different characters, including alphabets, XML markups and special symbols. 
We model the dataset at the character level, trying to predict the next character of the input sequence.<br /> <br /> All configurations are evaluated with depths L = 1, 2, 3, 4. Bits-per-character (BPC) is used to measure model performance and the results are shown in the figures below.<br /> [[File:wiki.png ‎|280px|center||Figure 5: Wiki Performance]]<br /> [[File:Wiki_Performance.png ‎|480px|center||Figure 5: Wiki Performance]]<br /> <br /> === Accuracy Analysis ===<br /> <br /> The MNIST dataset consists of 50000/10000/10000 handwritten digit images of size 28×28 for training/validation/test. Two tasks are used for evaluation on this dataset:<br /> <br /> (a) '''Sequential MNIST:''' The goal is to classify the digit after sequentially reading the pixels in scan-line order. It is therefore a 784-time-step sequence learning task where a single output is produced at the last time step; the task requires very long-range dependencies in the sequence.<br /> <br /> (b) '''Sequential Permuted MNIST:''' The original image pixels are permuted in a fixed random order, resulting in a permuted MNIST (pMNIST) problem that has even longer range dependencies across pixels and is harder.<br /> <br /> In both tasks, all configurations are evaluated with M = 100 and L = 1, 3, 5. Model performance is measured by classification accuracy and the results are shown in the figures below.<br /> <br /> [[File:MNISTperf.png ‎|480px|center]]<br /> <br /> <br /> <br /> [[File:Acc_res.png ‎|480px|center||Figure 5: MNIST]]<br /> <br /> [[File:33_mnist.PNG|center|thumb|800px| This figure displays a visualization of the means of the diagonal channels of the tLSTM memory cells per task. The columns indicate the time steps and the rows indicate the diagonal locations. 
The values are normalized between 0 and 1.]]<br /> <br /> It can be seen in the above figure that tLSTM behaves differently on different tasks:<br /> <br /> - Wikipedia: the input can be carried to the output location with less modification if it is sufficient to determine the next character, and vice versa<br /> <br /> - addition: the first integer is gradually encoded into memories and then interacts (performs addition) with the second integer, producing the sum <br /> <br /> - memorization: the network behaves like a shift register that continues to move the input symbol to the output location at the correct timestep<br /> <br /> - sequential MNIST: the network is more sensitive to pixel value changes (representing the contour, or topology, of the digit) and can gradually accumulate evidence for the final prediction <br /> <br /> - sequential pMNIST: the network is sensitive to high-value pixels (representing the foreground digit); the authors conjecture that this is because the permutation destroys the topology of the digit, making each high-value pixel potentially important.<br /> <br /> = Conclusions =<br /> <br /> The paper introduced the Tensorized LSTM, which employs tensors to share parameters and uses the temporal computation to perform deep computation for sequential tasks. The model was then validated on a variety of tasks, showing its potential over other popular approaches.<br /> <br /> = Critique (to be edited) =<br /> <br /> = References =<br /> #Zhen He, Shaobing Gao, Liang Xiao, Daxue Liu, Hangen He, and David Barber. 
''Wider and Deeper, Cheaper and Faster: Tensorized LSTMs for Sequence Learning'' (2017)<br /> #Ali Ghodsi, ''Deep Learning: STAT 946 - Winter 2018''</div> Jssambee http://wiki.math.uwaterloo.ca/statwiki/index.php?title=Dynamic_Routing_Between_Capsules_STAT946&diff=36058 Dynamic Routing Between Capsules STAT946 2018-04-03T05:51:04Z <p>Jssambee: </p> <hr /> <div>= Presented by =<br /> <br /> Yang, Tong (Richard)<br /> <br /> = Contributions =<br /> <br /> This paper introduces the concept of &quot;capsules&quot; and an approach to implementing this concept in neural networks. Capsules are groups of neurons used to represent various properties of an entity/object present in the image, such as pose, deformation, and even the existence of the entity. Instead of the obvious representation of a logistic unit for the probability of existence, the paper explores using the length of the capsule output vector to represent existence, and its orientation to represent other properties of the entity. The paper has the following major contributions:<br /> <br /> * Proposed an alternative approach to max-pooling, called routing-by-agreement.<br /> * Demonstrated a mathematical structure for capsule layers and a routing mechanism that builds a prototype architecture for capsule networks. <br /> * Presented promising results for CapsNet that confirm its value as a new direction for development in deep learning.<br /> <br /> = Hinton's Critiques on CNN =<br /> <br /> In a past talk, Hinton tried to explain why max-pooling is the biggest problem in the current convolutional network structure. Here are some highlights from his talk. <br /> <br /> == Four arguments against pooling ==<br /> <br /> * It is a bad fit to the psychology of shape perception: It does not explain why we assign intrinsic coordinate frames to objects and why they have such huge effects.<br /> <br /> * It solves the wrong problem: We want equivariance, not invariance. 
Disentangling rather than discarding.<br /> <br /> * It fails to use the underlying linear structure: It does not make use of the natural linear manifold that perfectly handles the largest source of variance in images.<br /> <br /> * Pooling is a poor way to do dynamic routing: We need to route each part of the input to the neurons that know how to deal with it. Finding the best routing is equivalent to parsing the image.<br /> <br /> ===Intuition Behind Capsules ===<br /> We try to achieve viewpoint invariance in the activities of neurons by doing max-pooling. Invariance here means that by changing the input a little, the output stays the same, where the activity is just the output signal of a neuron. In other words, when we shift the object that we want to detect in the input image by a little bit, the network's activities (outputs of neurons) will not change because of max-pooling, and the network will still detect the object. But spatial relationships are not taken care of in this approach, so capsules are used instead, because they encapsulate all important information about the state of the features they are detecting in the form of a vector. Capsules encode the probability of detection of a feature as the length of their output vector, and the state of the detected feature is encoded as the direction in which that vector points. 
So when a detected feature moves around the image or its state somehow changes, the probability stays the same (the length of the vector does not change), but its orientation changes.<br /> <br /> == Equivariance ==<br /> <br /> To deal with the invariance problem of CNNs, Hinton proposes a concept called equivariance, which is the foundation of the capsule concept.<br /> <br /> === Two types of equivariance ===<br /> <br /> ==== Place-coded equivariance ====<br /> If a low-level part moves to a very different position it will be represented by a different capsule.<br /> <br /> ==== Rate-coded equivariance ====<br /> If a part only moves a small distance it will be represented by the same capsule but the pose outputs of the capsule will change.<br /> <br /> Higher-level capsules have bigger domains, so low-level place-coded equivariance gets converted into high-level rate-coded equivariance.<br /> <br /> = Dynamic Routing =<br /> <br /> In the second section of this paper, the authors give mathematical representations for two key features of the routing algorithm in a capsule network: squashing and agreement. The general setting for this algorithm is between two arbitrary capsules i and j. Capsule j is an arbitrary capsule in a given layer of capsules, and capsule i is an arbitrary capsule in the layer below. The purpose of the routing algorithm is to generate a vector output for the routing decision between capsule j and capsule i. Furthermore, this vector output will be used in the decision for the choice of dynamic routing. 
<br /> <br /> == Routing Algorithm ==<br /> <br /> The routing algorithm is as follows:<br /> <br /> [[File:DRBC_Figure_1.png|650px|center||Source: Sabour, Frosst, Hinton, 2017]]<br /> <br /> In the following sections, each part of this algorithm is explained in detail.<br /> <br /> === Log Prior Probability ===<br /> <br /> &lt;math&gt;b_{ij}&lt;/math&gt; represents the log prior probability that capsule i should be coupled to capsule j, and is updated in each routing iteration. As line 2 suggests, the initial values of &lt;math&gt;b_{ij}&lt;/math&gt; for all possible pairs of capsules are set to 0, so in the very first routing iteration &lt;math&gt;b_{ij}&lt;/math&gt; equals zero. In each routing iteration, &lt;math&gt;b_{ij}&lt;/math&gt; is updated by the value of the agreement, which will be explained later.<br /> <br /> === Coupling Coefficient === <br /> <br /> &lt;math&gt;c_{ij}&lt;/math&gt; represents the coupling coefficient between capsule j and capsule i. It is calculated by applying the softmax function to the log prior probabilities &lt;math&gt;b_{ij}&lt;/math&gt;. The mathematical transformation is shown below (Equation 3 in the paper): <br /> <br /> \begin{align}<br /> c_{ij} = \frac{\exp(b_{ij})}{\sum_{k}\exp(b_{ik})}<br /> \end{align}<br /> <br /> &lt;math&gt;c_{ij}&lt;/math&gt; serve as weights for computing the weighted sum, and, as probabilities, they have the following properties:<br /> <br /> \begin{align}<br /> c_{ij} \geq 0, \forall i, j<br /> \end{align}<br /> <br /> and, <br /> <br /> \begin{align}<br /> \sum_{j}c_{ij} = 1, \forall i<br /> \end{align}<br /> <br /> === Predicted Output from Layer Below === <br /> <br /> &lt;math&gt;u_{i}&lt;/math&gt; is the output vector of capsule i in the lower layer, and &lt;math&gt;\hat{u}_{j|i}&lt;/math&gt; is the input vector for capsule j, i.e., the &quot;prediction vector&quot; from capsule i in the layer below. 
&lt;math&gt;\hat{u}_{j|i}&lt;/math&gt; is produced by multiplying &lt;math&gt;u_{i}&lt;/math&gt; by a weight matrix &lt;math&gt;W_{ij}&lt;/math&gt;, as follows:<br /> <br /> \begin{align}<br /> \hat{u}_{j|i} = W_{ij}u_i<br /> \end{align}<br /> <br /> where &lt;math&gt;W_{ij}&lt;/math&gt; encodes some spatial relationship between capsule j and capsule i.<br /> <br /> === Capsule ===<br /> <br /> Using the definitions from the previous sections, the total input vector for an arbitrary capsule j can be defined as:<br /> <br /> \begin{align}<br /> s_j = \sum_{i}c_{ij}\hat{u}_{j|i}<br /> \end{align}<br /> <br /> which is a weighted sum over all prediction vectors, using the coupling coefficients as weights.<br /> <br /> === Squashing ===<br /> <br /> The length of &lt;math&gt;s_j&lt;/math&gt; is arbitrary, which needs to be addressed: the next step is to squash its length to between 0 and 1, since we want the length of the output vector of a capsule to represent the probability that the entity represented by the capsule is present in the current input. The &quot;squashing&quot; process is shown below:<br /> <br /> \begin{align}<br /> v_j = \frac{||s_j||^2}{1+||s_j||^2}\frac{s_j}{||s_j||}<br /> \end{align}<br /> <br /> Notice that &quot;squashing&quot; is not just normalizing the vector to unit length. It applies an extra non-linear transformation to ensure that short vectors get shrunk to almost zero length and long vectors get shrunk to a length slightly below 1. 
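Putting the pieces above together, one routing iteration (the softmax over the logits, the weighted sum, the squashing, and the agreement update from the routing algorithm) can be sketched in NumPy. This is a minimal sketch, not the paper's implementation; the function and variable names are illustrative:<br /> <br />

```python
import numpy as np

def squash(s, eps=1e-8):
    """Shrink short vectors toward length 0 and long vectors toward length 1."""
    sq_norm = np.sum(s ** 2, axis=-1, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def routing_iteration(u_hat, b):
    """One iteration of routing-by-agreement.

    u_hat: prediction vectors u_hat_{j|i}, shape (num_i, num_j, dim_j).
    b:     routing logits b_ij, shape (num_i, num_j).
    Returns the capsule outputs v_j and the updated logits.
    """
    # c_ij = softmax of b_ij over the capsules j in the layer above
    c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)
    # s_j = sum_i c_ij * u_hat_{j|i}  (weighted sum over lower capsules)
    s = np.einsum('ij,ijd->jd', c, u_hat)
    v = squash(s)
    # agreement a_ij = v_j . u_hat_{j|i} is added to the logits
    b_new = b + np.einsum('jd,ijd->ij', v, u_hat)
    return v, b_new
```

Running this for a few iterations with `b` initialized to zeros mirrors the algorithm in Figure 1: coupling coefficients concentrate on the capsules whose predictions agree.<br /> <br />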
The reason for doing this is to make the routing decision, called &quot;routing by agreement&quot;, much easier to make between capsule layers.<br /> <br /> === Agreement ===<br /> <br /> The final step of a routing iteration is to form a routing agreement &lt;math&gt;a_{ij}&lt;/math&gt;, which is represented as a scalar product:<br /> <br /> \begin{align}<br /> a_{ij} = v_{j}\cdot\hat{u}_{j|i}<br /> \end{align}<br /> <br /> As mentioned in the &quot;squashing&quot; section, the length of &lt;math&gt;v_{j}&lt;/math&gt; is either close to 0 or close to 1, which affects the magnitude of &lt;math&gt;a_{ij}&lt;/math&gt;. Therefore, the magnitude of &lt;math&gt;a_{ij}&lt;/math&gt; indicates how strongly the routing algorithm agrees on taking the route between capsule j and capsule i. In each routing iteration, the log prior probability &lt;math&gt;b_{ij}&lt;/math&gt; is updated by adding the agreement value, which affects how the coupling coefficients are computed in the next routing iteration. Because of the &quot;squashing&quot; process, we will eventually end up with a capsule j whose &lt;math&gt;v_{j}&lt;/math&gt; is close to 1 while all other capsules have &lt;math&gt;v_{j}&lt;/math&gt; close to 0, which indicates that capsule j should be activated.<br /> <br /> = CapsNet Architecture =<br /> <br /> The second part of this paper discusses the experimental results from a 3-layer CapsNet, whose architecture can be divided into two parts: encoder and decoder. <br /> <br /> == Encoder == <br /> <br /> [[File:DRBC_Architecture.png|650px|center||Source: Sabour, Frosst, Hinton, 2017]]<br /> <br /> === How many routing iterations to use? === <br /> In Appendix A of the paper, the authors show empirical results from 500 epochs of training with different choices of routing iterations. According to their observations, more routing iterations increase the capacity of CapsNet but tend to bring additional risk of overfitting. 
Moreover, CapsNets with fewer than three routing iterations are not effective in general. As a result, the authors suggest 3 iterations of routing for all experiments.<br /> <br /> === Margin loss for digit existence ===<br /> <br /> The experiments performed include segmenting overlapping digits on the MultiMNIST data set, so the loss function has to be adjusted for the presence of multiple digits. The margin loss &lt;math&gt;L_k&lt;/math&gt; for each capsule k is calculated by:<br /> <br /> \begin{align}<br /> L_k = T_k \max(0, m^+ - ||v_k||)^2 + \lambda(1 - T_k) \max(0, ||v_k|| - m^-)^2<br /> \end{align}<br /> <br /> where &lt;math&gt;m^+ = 0.9&lt;/math&gt;, &lt;math&gt;m^- = 0.1&lt;/math&gt;, and &lt;math&gt;\lambda = 0.5&lt;/math&gt;.<br /> <br /> &lt;math&gt;T_k&lt;/math&gt; is an indicator for the presence of a digit of class k; it takes the value 1 if and only if class k is present. If class k is not present, &lt;math&gt;\lambda&lt;/math&gt; down-weights the loss for the absent class, which stops the initial learning from shrinking the lengths of the activity vectors of all the digit capsules, since we would like the top-level capsule for digit class k to have a long instantiation vector if and only if that digit class is present in the input.<br /> <br /> === Layer 1: Conv1 === <br /> <br /> The first layer of CapsNet. As in a CNN, this is just a convolutional layer that converts pixel intensities to activities of local feature detectors. <br /> <br /> * Layer Type: Convolutional Layer.<br /> * Input: &lt;math&gt;28 \times 28&lt;/math&gt; pixels.<br /> * Kernel size: &lt;math&gt;9 \times 9&lt;/math&gt;.<br /> * Number of Kernels: 256.<br /> * Activation function: ReLU.<br /> * Output: &lt;math&gt;20 \times 20 \times 256&lt;/math&gt; tensor.<br /> <br /> === Layer 2: PrimaryCapsules ===<br /> <br /> The second layer is formed by 32 primary 8D capsules. 
Here, 8D means that each primary capsule contains 8 convolutional units with a &lt;math&gt;9 \times 9&lt;/math&gt; kernel and a stride of 2. Each capsule takes the &lt;math&gt;20 \times 20 \times 256&lt;/math&gt; tensor from Conv1 and produces an output of a &lt;math&gt;6 \times 6 \times 8&lt;/math&gt; tensor.<br /> <br /> * Layer Type: Convolutional Layer<br /> * Input: &lt;math&gt;20 \times 20 \times 256&lt;/math&gt; tensor.<br /> * Number of capsules: 32.<br /> * Number of convolutional units in each capsule: 8.<br /> * Size of each convolutional unit: &lt;math&gt;6 \times 6&lt;/math&gt;.<br /> * Output: &lt;math&gt;6 \times 6 \times 8&lt;/math&gt; 8-dimensional vectors.<br /> <br /> === Layer 3: DigitsCaps ===<br /> <br /> The last layer has 10 16D capsules, one for each digit. Unlike the PrimaryCapsules layer, this layer is fully connected. Since this is the top capsule layer, the dynamic routing mechanism is applied between DigitsCaps and PrimaryCapsules. The process begins by taking a transformation of the predicted outputs from the PrimaryCapsules layer. Each output is an 8-dimensional vector, which needs to be mapped to a 16-dimensional space. Therefore, the weight matrix &lt;math&gt;W_{ij}&lt;/math&gt; is an &lt;math&gt;8 \times 16&lt;/math&gt; matrix. The next step is to acquire the coupling coefficients from the routing algorithm and to perform &quot;squashing&quot; to get the output. <br /> <br /> * Layer Type: Fully connected layer.<br /> * Input: &lt;math&gt;6 \times 6 \times 8&lt;/math&gt; 8-dimensional vectors.<br /> * Output: &lt;math&gt;16 \times 10 &lt;/math&gt; matrix.<br /> <br /> === The loss function ===<br /> <br /> The target for the loss function is a ten-dimensional one-hot encoded vector, with 9 zeros and a single one at the correct position.<br /> <br /> <br /> == Regularization Method: Reconstruction ==<br /> <br /> This is a regularization method introduced in the implementation of CapsNet. 
The method adds a reconstruction loss (scaled down by 0.0005) to the margin loss during training. The authors argue this encourages the digit capsules to encode the instantiation parameters of the input digits. During training, reconstruction is performed using the true label of the input image. The results from the experiments also confirm that adding the reconstruction regularizer enforces the pose encoding in CapsNet and thus boosts the performance of the routing procedure. <br /> <br /> === Decoder ===<br /> <br /> The decoder consists of 3 fully connected layers, which map the activity vector of a digit capsule back to pixel intensities. The number of parameters in each layer and the activation functions used are indicated in the figure below:<br /> <br /> [[File:DRBC_Decoder.png|650px|center||Source: Sabour, Frosst, Hinton, 2017]]<br /> <br /> === Result ===<br /> <br /> The authors include some results on CapsNet classification test accuracy to justify the use of reconstruction. We can see that for CapsNet with 1 routing iteration and CapsNet with 3 routing iterations, adding reconstruction shows significant improvements on both the MNIST and MultiMNIST data sets. These improvements show the importance of routing and the reconstruction regularizer. <br /> <br /> [[File:DRBC_Reconstruction.png|650px|center||Source: Sabour, Frosst, Hinton, 2017]]<br /> <br /> = Experiment Results for CapsNet = <br /> <br /> In this part, the authors demonstrate experimental results of CapsNet on different data sets, such as MNIST and several variations of MNIST (expanded MNIST, affNIST, MultiMNIST). They also briefly discuss the performance on some other popular data sets such as CIFAR10. 
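Before turning to the individual data sets, the margin loss used by the encoder above can be made concrete with a short NumPy sketch, using the constants &lt;math&gt;m^+ = 0.9&lt;/math&gt;, &lt;math&gt;m^- = 0.1&lt;/math&gt;, &lt;math&gt;\lambda = 0.5&lt;/math&gt; from the paper (the function and variable names are illustrative, not from the paper's code):<br /> <br />

```python
import numpy as np

def margin_loss(v, T, m_pos=0.9, m_neg=0.1, lam=0.5):
    """Margin loss summed over digit capsules.

    v: digit capsule outputs, shape (num_classes, dim).
    T: 0/1 indicator of which classes are present, shape (num_classes,).
    """
    lengths = np.linalg.norm(v, axis=-1)  # ||v_k||, one length per class
    # present classes are penalized if their capsule vector is shorter than m_pos
    present = T * np.maximum(0.0, m_pos - lengths) ** 2
    # absent classes are penalized (down-weighted by lam) if longer than m_neg
    absent = lam * (1 - T) * np.maximum(0.0, lengths - m_neg) ** 2
    return np.sum(present + absent)
```

With a one-hot `T`, the loss is zero exactly when the correct capsule's vector is longer than 0.9 and every other capsule's vector is shorter than 0.1, matching the intent described above.<br /> <br />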
<br /> <br /> == MNIST ==<br /> <br /> === Highlights ===<br /> <br /> * CapsNet achieves state-of-the-art performance on MNIST.<br /> * CapsNet with a shallow structure (3 layers) achieves performance previously attained only by deeper networks.<br /> <br /> === Interpretation of Each Capsule ===<br /> <br /> The authors suggest they found evidence that some dimensions of a capsule consistently capture one kind of variation of the digit, while others represent global combinations of different variations; this opens up possibilities for interpreting capsules in the future. Some results from perturbations are shown below: <br /> <br /> [[File:DRBC_Dimension.png|650px|center||Source: Sabour, Frosst, Hinton, 2017]]<br /> <br /> == affNIST == <br /> <br /> The affNIST data set contains different affine transformations of the original MNIST data set. Given the capsule concept, CapsNet should gain robustness from its equivariance nature, and the results confirm this: compared to the baseline CNN, CapsNet achieves a 13% improvement in accuracy.<br /> <br /> == MultiMNIST ==<br /> <br /> MultiMNIST is basically the overlapped version of MNIST. An important point to note is that this data set is generated by overlaying a digit on top of another digit from the same set but a different class. In other words, stacking digits from the same class is not allowed in MultiMNIST. For example, stacking a 5 on a 0 is allowed, but stacking a 5 on another 5 is not. The reason is that CapsNet suffers from the &quot;crowding&quot; effect, which is discussed in the weaknesses of CapsNet section below. <br /> <br /> == Other data sets ==<br /> <br /> CapsNet has also been used on other data sets such as CIFAR10, smallNORB and SVHN. 
The results are not comparable with state-of-the-art performance, but they are still promising, since this architecture is a first attempt while other networks have been in development for a long time.<br /> <br /> = Conclusion = <br /> <br /> This paper discusses a specific part of the capsule network: the routing-by-agreement mechanism. The authors suggest this is a promising approach to solving the current problems with max-pooling in convolutional neural networks. Moreover, as the authors mention, the approach in this paper is only one possible implementation of the capsule concept. The preliminary results from experiments using a simple shallow CapsNet also demonstrate strong performance, indicating that capsules are a direction worth exploring. <br /> <br /> = Weakness of Capsule Network =<br /> <br /> * The routing algorithm introduces internal loops for each capsule. As the number of capsules and layers increases, these internal loops may greatly expand the training time. <br /> * Capsule networks suffer from a perceptual phenomenon called &quot;crowding&quot;, which is common in human vision as well. To address this weakness, capsules have to make a very strong representational assumption: at each location of the image, there is at most one instance of the type of entity that the capsule represents. This is also the reason for not allowing overlaid digits from the same class when generating MultiMNIST.<br /> * Other criticisms include that the design of capsule networks requires domain knowledge or feature engineering, contrary to the abstraction-oriented goals of deep learning.<br /> <br /> = Implementations = <br /> 1) Tensorflow Implementation : https://github.com/naturomics/CapsNet-Tensorflow<br /> <br /> 2) Keras Implementation 
: https://github.com/XifengGuo/CapsNet-Keras</div> Jssambee http://wiki.math.uwaterloo.ca/statwiki/index.php?title=Dynamic_Routing_Between_Capsules_STAT946&diff=36055 Dynamic Routing Between Capsules STAT946 2018-04-03T05:37:53Z <p>Jssambee: /* Hinton's Critiques on CNN */</p> <hr /> <div>= Presented by =<br /> <br /> Yang, Tong(Richard)<br /> <br /> = Contributions =<br /> <br /> This paper introduces the concepts of capsule and an approach to implement its concept in neural network. It has the following major contributions:<br /> <br /> * Proposed an alternative approach to max-pooling, which is called routing-by-agreement.<br /> * Demonstrated an mathematical structure for capsule layers and routing mechanism that builds an prototype architecture for capsule network. <br /> * Presented the promising results of CapsNet that confirms its value as a new direction for development in deep learning.<br /> <br /> = Hinton's Critiques on CNN =<br /> <br /> In the past talk, Hinton tried to explained why max-pooling is the biggest problem in current convolutional network structure, here are some highlights from his talk. <br /> <br /> == Four arguments against pooling ==<br /> <br /> * It is a bad fit to the psychology of shape perception: It does not explain why we assign intrinsic coordinate frames to objects and why they have such huge effects.<br /> <br /> * It solves the wrong problem: We want equivariance, not invariance. Disentangling rather than discarding.<br /> <br /> * It fails to use the underlying linear structure: It does not make use of the natural linear manifold that perfectly handles the largest source of variance in images.<br /> <br /> * Pooling is a poor way to do dynamic routing: We need to route each part of the input to the neurons that know how to deal with it. 
Finding the best routing is equivalent to parsing the image.<br /> <br /> ===Intuition Behind Capsules ===<br /> We try to achieve viewpoint invariance in the activities of neurons by doing max-pooling. Invariance here means that by changing the input a little, the output still stays the same while the activity is just the output signal of a neuron. In other words, when in the input image we shift the object that we want to detect by a little bit, networks activities (outputs of neurons) will not change because of max pooling and the network will still detect the object. But the spacial relationships are not taken care of in this approach so instead capsules are used, because they encapsulate all important information about the state of the features they are detecting in a form of a vector. Capsules encode probability of detection of a feature as the length of their output vector. And the state of the detected feature is encoded as the direction in which that vector points to. So when detected feature moves around the image or its state somehow changes, the probability still stays the same (length of vector does not change), but its orientation changes.<br /> <br /> == Equivariance ==<br /> <br /> To deal with the invariance problem of CNN, Hinton proposes the concept called equivariance, which is the foundation of capsule concept.<br /> <br /> === Two types of equivariance ===<br /> <br /> ==== Place-coded equivariance ====<br /> If a low-level part moves to a very different position it will be represented by a different capsule.<br /> <br /> ==== Rate-coded equivariance ====<br /> If a part only moves a small distance it will be represented by the same capsule but the pose outputs of the capsule will change.<br /> <br /> Higher-level capsules have bigger domains so low-level place-coded equivariance gets converted into high-level rate-coded equivariance.<br /> <br /> = Dynamic Routing =<br /> <br /> In the second section of this paper, authors give a mathematical 
representations for two key features in routing algorithm in capsule network, which are squashing and agreement. The general setting for this algorithm is between two arbitrary capsules i and j. Capsule j is assumed to be an arbitrary capsule from the first layer of capsules, and capsule i is an arbitrary capsule from the layer below. The purpose of routing algorithm is generate a vector output for routing decision between capsule j and capsule i. Furthermore, this vector output will be used in the decision for choice of dynamic routing. <br /> <br /> == Routing Algorithm ==<br /> <br /> The routing algorithm is as the following:<br /> <br /> [[File:DRBC_Figure_1.png|650px|center||Source: Sabour, Frosst, Hinton, 2017]]<br /> <br /> In the following sections, each part of this algorithm will be explained in details.<br /> <br /> === Log Prior Probability ===<br /> <br /> &lt;math&gt;b_{ij}&lt;/math&gt; represents the log prior probabilities that capsule i should be coupled to capsule j, and updated in each routing iteration. As line 2 suggests, the initial values of &lt;math&gt;b_{ij}&lt;/math&gt; for all possible pairs of capsules are set to 0. In the very first routing iteration, &lt;math&gt;b_{ij}&lt;/math&gt; equals to zero. For each routing iteration, &lt;math&gt;b_{ij}&lt;/math&gt; gets updated by the value of agreement, which will be explained later.<br /> <br /> === Coupling Coefficient === <br /> <br /> &lt;math&gt;c_{ij}&lt;/math&gt; represents the coupling coefficient between capsule j and capsule i. It is calculated by applying the softmax function on the log prior probability &lt;math&gt;b_{ij}&lt;/math&gt;. The mathematical transformation is shown below (Equation 3 in paper): <br /> <br /> \begin{align}<br /> c_{ij} = \frac{exp(b_ij)}{\sum_{k}exp(b_ik)}<br /> \end{align}<br /> <br /> &lt;math&gt;c_{ij}&lt;/math&gt; are served as weights for computing the weighted sum and probabilities. 
Therefore, as probabilities, they have the following properties:<br /> <br /> \begin{align}<br /> c_{ij} \geq 0, \forall i, j<br /> \end{align}<br /> <br /> and, <br /> <br /> \begin{align}<br /> \sum_{i,j}c_{ij} = 1, \forall i, j<br /> \end{align}<br /> <br /> === Predicted Output from Layer Below === <br /> <br /> &lt;math&gt;u_{i}&lt;/math&gt; are the output vector from capsule i in the lower layer, and &lt;math&gt;\hat{u}_{j|i}&lt;/math&gt; are the input vector for capsule j, which are the &quot;prediction vectors&quot; from the capsules in the layer below. &lt;math&gt;\hat{u}_{j|i}&lt;/math&gt; is produced by multiplying &lt;math&gt;u_{i}&lt;/math&gt; by a weight matrix &lt;math&gt;W_{ij}&lt;/math&gt;, such as the following:<br /> <br /> \begin{align}<br /> \hat{u}_{j|i} = W_{ij}u_i<br /> \end{align}<br /> <br /> where &lt;math&gt;W_{ij}&lt;/math&gt; encodes some spatial relationship between capsule j and capsule i.<br /> <br /> === Capsule ===<br /> <br /> By using the definitions from previous sections, the total input vector for an arbitrary capsule j can be defined as:<br /> <br /> \begin{align}<br /> s_j = \sum_{i}c_{ij}\hat{u}_{j|i}<br /> \end{align}<br /> <br /> which is a weighted sum over all prediction vectors by using coupling coefficients.<br /> <br /> === Squashing ===<br /> <br /> The length of &lt;math&gt;s_j&lt;/math&gt; is arbitrary, which is needed to be addressed with. The next step is to convert its length between 0 and 1, since we want the length of the output vector of a capsule to represent the probability that the entity represented by the capsule is present in the current input. The &quot;squashing&quot; process is shown below:<br /> <br /> \begin{align}<br /> v_j = \frac{||s_j||^2}{1+||s_j||^2}\frac{s_j}{||s_j||}<br /> \end{align}<br /> <br /> Notice that &quot;squashing&quot; is not just normalizing the vector into unit length. 
In addition, it does extra non-linear transformation to ensure that short vectors get shrunk to almost zero length and long vectors get shrunk to a length slightly below 1. The reason for doing this is to make decision of routing, which is called &quot;routing by agreement&quot; much easier to make between capsule layers.<br /> <br /> === Agreement ===<br /> <br /> The final step of a routing iteration is to form an routing agreement &lt;math&gt;a_{ij}&lt;/math&gt;, which is represents as a scalar product:<br /> <br /> \begin{align}<br /> a_{ij} = v_{j}\hat{u}_{j|i}<br /> \end{align}<br /> <br /> As we mentioned in &quot;squashing&quot; section, the length of &lt;math&gt;v_{j}&lt;/math&gt; is either close to 0 or close to 1, which will effect the magnitude of &lt;math&gt;a_{ij}&lt;/math&gt; in this case. Therefore, the magnitude of &lt;math&gt;a_{ij}&lt;/math&gt; indicate the how strong the routing algorithm agrees on taking the route between capsule j and capsule i. For each routing iteration, the log prior probability, &lt;math&gt;b_{ij}&lt;/math&gt; will be updated by adding the value of its agreement value, which will effect how the coupling coefficients are computed in the next routing iteration. Because of the &quot;squashing&quot; process, we will eventually end up with a capsule j with its &lt;math&gt;v_{j}&lt;/math&gt; close to 1 while all other capsules with its &lt;math&gt;v_{j}&lt;/math&gt; close to 0, which indicates that this capsule j should be activated.<br /> <br /> = CapsNet Architecture =<br /> <br /> The second part of this paper discuss the experiment results from a 3-layer CapsNet, the architecture can be divided into two parts, encoder and decoder. <br /> <br /> == Encoder == <br /> <br /> [[File:DRBC_Architecture.png|650px|center||Source: Sabour, Frosst, Hinton, 2017]]<br /> <br /> === How many routing iteration to use? 
=== <br /> In Appendix A of the paper, the authors show empirical results from 500 epochs of training with different numbers of routing iterations. According to their observations, more routing iterations increase the capacity of CapsNet but tend to bring additional risk of overfitting. Moreover, CapsNets with fewer than three routing iterations are generally not effective. As a result, they suggest 3 iterations of routing for all experiments.<br /> <br /> === Margin loss for digit existence ===<br /> <br /> The experiments include segmenting overlapping digits on the MultiMNIST data set, so the loss function has to be adjusted for the presence of multiple digits. The margin loss &lt;math&gt;L_k&lt;/math&gt; for each capsule k is calculated as:<br /> <br /> \begin{align}<br /> L_k = T_k \max(0, m^+ - ||v_k||)^2 + \lambda(1 - T_k) \max(0, ||v_k|| - m^-)^2<br /> \end{align}<br /> <br /> where &lt;math&gt;m^+ = 0.9&lt;/math&gt;, &lt;math&gt;m^- = 0.1&lt;/math&gt;, and &lt;math&gt;\lambda = 0.5&lt;/math&gt;.<br /> <br /> &lt;math&gt;T_k&lt;/math&gt; is an indicator for the presence of a digit of class k: it takes the value 1 if and only if class k is present. If class k is absent, &lt;math&gt;\lambda&lt;/math&gt; down-weights its loss term, which stops the initial learning from shrinking the lengths of the activity vectors of all the digit capsules. This is done because we would like the top-level capsule for digit class k to have a long instantiation vector if and only if that digit class is present in the input.<br /> <br /> === Layer 1: Conv1 === <br /> <br /> The first layer of CapsNet. As in a standard CNN, this is just a convolutional layer that converts pixel intensities to activities of local feature detectors.
<br /> <br /> * Layer Type: Convolutional Layer.<br /> * Input: &lt;math&gt;28 \times 28&lt;/math&gt; pixels.<br /> * Kernel size: &lt;math&gt;9 \times 9&lt;/math&gt;.<br /> * Number of Kernels: 256.<br /> * Activation function: ReLU.<br /> * Output: &lt;math&gt;20 \times 20 \times 256&lt;/math&gt; tensor.<br /> <br /> === Layer 2: PrimaryCapsules ===<br /> <br /> The second layer is formed by 32 primary 8D capsules. By 8D, it is meant that each primary capsule contains 8 convolutional units with a &lt;math&gt;9 \times 9&lt;/math&gt; kernel and a stride of 2. Each capsule takes the &lt;math&gt;20 \times 20 \times 256&lt;/math&gt; tensor from Conv1 and produces a &lt;math&gt;6 \times 6 \times 8&lt;/math&gt; output tensor.<br /> <br /> * Layer Type: Convolutional Layer<br /> * Input: &lt;math&gt;20 \times 20 \times 256&lt;/math&gt; tensor.<br /> * Number of capsules: 32.<br /> * Number of convolutional units in each capsule: 8.<br /> * Size of each convolutional unit: &lt;math&gt;6 \times 6&lt;/math&gt;.<br /> * Output: &lt;math&gt;6 \times 6 \times 32&lt;/math&gt; 8-dimensional vectors.<br /> <br /> === Layer 3: DigitsCaps ===<br /> <br /> The last layer has 10 16D capsules, one for each digit. Unlike the PrimaryCapsules layer, this layer is fully connected. Since this is the top capsule layer, the dynamic routing mechanism is applied between DigitsCaps and PrimaryCapsules. The process begins by taking a transformation of the predicted output from the PrimaryCapsules layer. Each output is an 8-dimensional vector, which needs to be mapped to a 16-dimensional space; therefore, each weight matrix &lt;math&gt;W_{ij}&lt;/math&gt; is an &lt;math&gt;8 \times 16&lt;/math&gt; matrix. The next step is to acquire the coupling coefficients from the routing algorithm and to perform &quot;squashing&quot; to get the output.
<br /> <br /> * Layer Type: Fully connected layer.<br /> * Input: &lt;math&gt;6 \times 6 \times 32&lt;/math&gt; 8-dimensional vectors.<br /> * Output: &lt;math&gt;16 \times 10 &lt;/math&gt; matrix.<br /> <br /> === The loss function ===<br /> <br /> The target of the loss function is a ten-dimensional one-hot encoded vector, with 9 zeros and a single one at the correct position.<br /> <br /> <br /> == Regularization Method: Reconstruction ==<br /> <br /> This is a regularization method introduced in the implementation of CapsNet. The method adds a reconstruction loss (scaled down by 0.0005) to the margin loss during training. The authors argue this encourages the digit capsules to encode the instantiation parameters of the input digits. All reconstructions during training use the true labels of the input images. The experimental results also confirm that adding the reconstruction regularizer enforces the pose encoding in CapsNet and thus boosts the performance of the routing procedure. <br /> <br /> === Decoder ===<br /> <br /> The decoder consists of 3 fully connected layers that map the DigitsCaps output back to pixel intensities. The number of parameters in each layer and the activation functions used are indicated in the figure below:<br /> <br /> [[File:DRBC_Decoder.png|650px|center||Source: Sabour, Frosst, Hinton, 2017]]<br /> <br /> === Result ===<br /> <br /> The authors include some CapsNet classification test accuracy results to justify the effect of reconstruction. For CapsNet with 1 routing iteration and CapsNet with 3 routing iterations, adding reconstruction shows significant improvements on both the MNIST and MultiMNIST data sets. These improvements show the importance of routing and of the reconstruction regularizer.
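The routing-by-agreement procedure described in the sections above (coupling coefficients via softmax, weighted sum of prediction vectors, squashing, and agreement updates) can be sketched in a few lines of NumPy. This is only a toy illustration with hypothetical names and shapes, not the authors' implementation:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Non-linear squashing: short vectors shrink toward length 0,
    # long vectors toward a length slightly below 1.
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, num_iterations=3):
    # u_hat: prediction vectors u_hat_{j|i}, shape (num_lower, num_upper, dim_upper).
    num_lower, num_upper, _ = u_hat.shape
    b = np.zeros((num_lower, num_upper))        # routing logits b_ij
    for _ in range(num_iterations):
        # c_ij = softmax of b over upper-layer capsules j, so sum_j c_ij = 1 for each i
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)
        s = np.einsum('ij,ijd->jd', c, u_hat)   # s_j = sum_i c_ij * u_hat_{j|i}
        v = squash(s)                           # v_j
        b += np.einsum('ijd,jd->ij', u_hat, v)  # agreement a_ij = v_j . u_hat_{j|i}
    return v
```

In the DigitsCaps layer, u_hat would come from multiplying each PrimaryCapsules output by its weight matrix W_ij, and routing would run for the 3 iterations the authors suggest.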
<br /> <br /> [[File:DRBC_Reconstruction.png|650px|center||Source: Sabour, Frosst, Hinton, 2017]]<br /> <br /> = Experiment Results for CapsNet = <br /> <br /> In this part, the authors demonstrate experimental results of CapsNet on different data sets, such as MNIST and variations of MNIST, including expanded MNIST, affNIST, and MultiMNIST. Moreover, they also briefly discuss the performance on some other popular data sets such as CIFAR10. <br /> <br /> == MNIST ==<br /> <br /> === Highlights ===<br /> <br /> * CapsNet achieves state-of-the-art performance on MNIST.<br /> * A CapsNet with a shallow structure (3 layers) achieves performance previously reached only by deeper networks.<br /> <br /> === Interpretation of Each Capsule ===<br /> <br /> The authors suggest they found evidence that some dimensions of a capsule consistently capture a particular variation of the digit, while others represent global combinations of different variations; this opens some possibility for interpreting capsules in the future. Some results from perturbations are shown below: <br /> <br /> [[File:DRBC_Dimension.png|650px|center||Source: Sabour, Frosst, Hinton, 2017]]<br /> <br /> == affNIST == <br /> <br /> The affNIST data set contains different affine transformations of the original MNIST data set. By the concept of capsules, CapsNet should gain robustness from its equivariant nature, and the results confirm this: compared to the baseline CNN, CapsNet achieves a 13% improvement in accuracy.<br /> <br /> == MultiMNIST ==<br /> <br /> MultiMNIST is essentially an overlapped version of MNIST. An important point to notice is that this data set is generated by overlaying a digit on top of another digit from the same set but a different class. In other words, stacking digits from the same class is not allowed in MultiMNIST. For example, stacking a 5 on a 0 is allowed, but stacking a 5 on another 5 is not.
The reason is that CapsNet suffers from the &quot;crowding&quot; effect, which is discussed in the weaknesses of CapsNet section below. <br /> <br /> == Other data sets ==<br /> <br /> CapsNet has also been used on other data sets such as CIFAR10, smallNORB, and SVHN. The results are not comparable with state-of-the-art performance, but they are still promising, since this architecture is among the very first of its kind, while other networks have been in development for a long time.<br /> <br /> = Conclusion = <br /> <br /> This paper discusses a specific part of the capsule network, the routing-by-agreement mechanism. The authors suggest this is a promising approach to solving the current problem with max-pooling in convolutional neural networks. Moreover, as the authors mention, the approach presented in this paper is only one possible implementation of the capsule concept. The preliminary experimental results using a simple shallow CapsNet also demonstrate strong performance, indicating that capsules are a direction worth exploring. <br /> <br /> = Weakness of Capsule Network =<br /> <br /> * The routing algorithm introduces internal loops for each capsule. As the number of capsules and layers increases, these internal loops may dramatically expand the training time. <br /> * Capsule networks suffer from a perceptual phenomenon called &quot;crowding&quot;, which is common in human vision as well. To address this weakness, capsules have to make a very strong representational assumption: at each location of the image, there is at most one instance of the type of entity that the capsule represents.
This is also the reason why overlaying digits from the same class is not allowed in the generating process of MultiMNIST.<br /> * Other criticisms include that the design of capsule networks requires domain knowledge or feature engineering, contrary to the abstraction-oriented goals of deep learning.</div> Jssambee http://wiki.math.uwaterloo.ca/statwiki/index.php?title=MarrNet:_3D_Shape_Reconstruction_via_2.5D_Sketches&diff=35914 MarrNet: 3D Shape Reconstruction via 2.5D Sketches 2018-03-31T05:39:12Z <p>Jssambee: </p> <hr /> <div>= Introduction =<br /> Humans are able to quickly recognize 3D shapes from images, even in spite of drastic differences in object texture, material, lighting, and background.<br /> <br /> [[File:marrnet_intro_image.png|700px|thumb|center|Objects in real images. The appearance of the same shaped object varies based on colour, texture, lighting, background, etc. However, the 2.5D sketches (e.g. depth or normal maps) of the object remain constant, and can be seen as an abstraction of the object which is used to reconstruct the 3D shape.]]<br /> <br /> In this work, the authors propose a novel end-to-end trainable model that sequentially estimates 2.5D sketches and 3D object shape from images, and also enforces re-projection consistency between the 3D shape and the estimated sketches. 2.5D here refers to the construction of a 3D environment using the 2D retinal projection along with depth perception obtained from the image. The two-step approach makes the network more robust to differences in object texture, material, lighting, and background. Based on the idea from [Marr, 1982] that human 3D perception relies on recovering 2.5D sketches, which include depth maps (containing information about the distance of surfaces from a viewpoint) and surface normal maps (a technique for adding the illusion of depth detail to surfaces using an image's RGB information), the authors design an end-to-end trainable pipeline which they call MarrNet.
MarrNet first estimates depth, normal maps, and silhouette, followed by a 3D shape. MarrNet uses an encoder-decoder structure for the sub-components of the framework. <br /> <br /> The authors claim several unique advantages for their method. Single image 3D reconstruction is a highly under-constrained problem, requiring strong prior knowledge of object shapes. As well, accurate 3D object annotations for real images are not common, and many previous approaches rely on purely synthetic data. However, most of these methods suffer from domain adaptation issues due to imperfect rendering.<br /> <br /> Using 2.5D sketches can alleviate the challenges of domain transfer. It is straightforward to generate perfect object surface normals and depths using a graphics engine. Since 2.5D sketches contain only depth, surface normal, and silhouette information, the second step of recovering 3D shape can be trained purely from synthetic data. As well, the introduction of differentiable constraints between 2.5D sketches and 3D shape makes it possible to fine-tune the system, even without any annotations.<br /> <br /> The framework is evaluated on both synthetic objects from ShapeNet, and real images from PASCAL 3D+, showing good qualitative and quantitative performance in 3D shape reconstruction.<br /> <br /> = Related Work =<br /> <br /> == 2.5D Sketch Recovery ==<br /> Researchers have explored recovering 2.5D information from shading, texture, and colour images in the past. More recently, the development of depth sensors has led to the creation of large RGB-D datasets, and papers on estimating depth, surface normals, and other intrinsic images using deep networks. While this method employs 2.5D estimation, the final output is a full 3D shape of an object.<br /> <br /> [[File:2-5d_example.PNG|700px|thumb|center|Results from the paper: Learning Non-Lambertian Object Intrinsics across ShapeNet Categories.
The results show that neural networks can be trained to recover 2.5D information from an image. The top row predicts the albedo and the bottom row predicts the shading. It can be observed that the results are still blurry and the fine details are not fully recovered.]]<br /> <br /> == Single Image 3D Reconstruction ==<br /> The development of large-scale shape repositories like ShapeNet has allowed for the development of models encoding shape priors for single image 3D reconstruction. These methods normally regress voxelized 3D shapes, relying on synthetic data or 2D masks for training. A voxel is an abbreviation for volume element, the three-dimensional version of a pixel. The formulation in the paper tackles domain adaptation better, since the network can be fine-tuned on images without any annotations.<br /> <br /> == 2D-3D Consistency ==<br /> Intuitively, the 3D shape can be constrained to be consistent with 2D observations. This idea has been explored for decades, and has been widely used in 3D shape completion with the use of depths and silhouettes. A few recent papers [5,6,7,8] discussed enforcing differentiable 2D-3D constraints between shape and silhouettes to enable joint training of deep networks for the task of 3D reconstruction. In this work, this idea is exploited to develop differentiable constraints for consistency between the 2.5D sketches and 3D shape.<br /> <br /> = Approach =<br /> The 3D structure is recovered from a single RGB view using three steps, shown in the figure below. The first step estimates 2.5D sketches, including depth, surface normal, and silhouette of the object. The second step estimates a 3D voxel representation of the object. The third step uses a reprojection consistency function to enforce the 2.5D sketch and 3D structure alignment.<br /> <br /> [[File:marrnet_model_components.png|700px|thumb|center|MarrNet architecture. 2.5D sketches of normals, depths, and silhouette are first estimated. 
The sketches are then used to estimate the 3D shape. Finally, re-projection consistency is used to ensure consistency between the sketch and 3D output.]]<br /> <br /> == 2.5D Sketch Estimation ==<br /> The first step takes a 2D RGB image and predicts the 2.5D sketch, comprising the surface normal, depth, and silhouette of the object. The goal is to estimate intrinsic object properties from the image, while discarding non-essential information such as texture and lighting. An encoder-decoder architecture is used. The encoder is a ResNet-18 network, which takes a 256 x 256 RGB image and produces 512 feature maps of size 8 x 8. The decoder is four sets of 5 x 5 fully convolutional and ReLU layers, followed by four sets of 1 x 1 convolutional and ReLU layers. The output is 256 x 256 resolution depth, surface normal, and silhouette images.<br /> <br /> == 3D Shape Estimation ==<br /> The second step estimates a voxelized 3D shape using the 2.5D sketches from the first step. The focus here is for the network to learn the shape prior that can explain the input well, and it can be trained on synthetic data without suffering from the domain adaptation problem since it only takes in surface normal and depth images as input. The network architecture is inspired by the TL network and 3D-VAE-GAN, with an encoder-decoder structure. The normal and depth images, masked by the estimated silhouette, are passed into 5 sets of convolutional, ReLU, and pooling layers, followed by two fully connected layers, with a final output width of 200. The 200-dimensional vector is passed into a decoder of 5 fully convolutional and ReLU layers, outputting a 128 x 128 x 128 voxelized estimate of the input.<br /> <br /> == Re-projection Consistency ==<br /> The third step consists of a depth re-projection loss and a surface normal re-projection loss.
Here, &lt;math&gt;v_{x, y, z}&lt;/math&gt; represents the value at position &lt;math&gt;(x, y, z)&lt;/math&gt; in a 3D voxel grid, with &lt;math&gt;v_{x, y, z} \in [0, 1] \; \forall x, y, z&lt;/math&gt;. &lt;math&gt;d_{x, y}&lt;/math&gt; denotes the estimated depth at position &lt;math&gt;(x, y)&lt;/math&gt;, and &lt;math&gt;n_{x, y} = (n_a, n_b, n_c)&lt;/math&gt; denotes the estimated surface normal. Orthographic projection is used.<br /> <br /> [[File:marrnet_reprojection_consistency.png|700px|thumb|center|Reprojection consistency for voxels. Left and middle: criteria for depth and silhouettes. Right: criterion for surface normals]]<br /> <br /> === Depths ===<br /> The voxel at the estimated depth, &lt;math&gt;v_{x, y, d_{x, y}}&lt;/math&gt;, should be 1, while all voxels in front of it should be 0. This ensures the estimated 3D shape matches the estimated depth values. The projected depth loss and its gradient are defined as follows:<br /> <br /> &lt;math&gt;<br /> L_{depth}(x, y, z)=<br /> \left\{<br /> \begin{array}{ll}<br /> v^2_{x, y, z}, &amp; z &lt; d_{x, y} \\<br /> (1 - v_{x, y, z})^2, &amp; z = d_{x, y} \\<br /> 0, &amp; z &gt; d_{x, y} \\<br /> \end{array}<br /> \right.<br /> &lt;/math&gt;<br /> <br /> &lt;math&gt;<br /> \frac{\partial L_{depth}(x, y, z)}{\partial v_{x, y, z}} =<br /> \left\{<br /> \begin{array}{ll}<br /> 2v_{x, y, z}, &amp; z &lt; d_{x, y} \\<br /> 2(v_{x, y, z} - 1), &amp; z = d_{x, y} \\<br /> 0, &amp; z &gt; d_{x, y} \\<br /> \end{array}<br /> \right.<br /> &lt;/math&gt;<br /> <br /> When &lt;math&gt;d_{x, y} = \infty&lt;/math&gt; (no visible surface along the ray), all voxels along the ray should be 0.<br /> <br /> === Surface Normals ===<br /> Since the vectors &lt;math&gt;n_{x} = (0, -n_{c}, n_{b})&lt;/math&gt; and &lt;math&gt;n_{y} = (-n_{c}, 0, n_{a})&lt;/math&gt; are orthogonal to the normal vector &lt;math&gt;n_{x, y} = (n_{a}, n_{b}, n_{c})&lt;/math&gt;, they can be normalized to obtain &lt;math&gt;n'_{x} = (0, -1, n_{b}/n_{c})&lt;/math&gt; and &lt;math&gt;n'_{y} = (-1, 0, n_{a}/n_{c})&lt;/math&gt; on
the estimated surface plane at &lt;math&gt;(x, y, z)&lt;/math&gt;. The projected surface normal loss tries to guarantee that the voxels at &lt;math&gt;(x, y, z) \pm n'_{x}&lt;/math&gt; and &lt;math&gt;(x, y, z) \pm n'_{y}&lt;/math&gt; are 1, so as to match the estimated normal. These constraints are only applied when the target voxels are inside the estimated silhouette.<br /> <br /> The projected surface normal loss is defined as follows, with &lt;math&gt;z = d_{x, y}&lt;/math&gt;:<br /> <br /> &lt;math&gt;<br /> L_{normal}(x, y, z) =<br /> (1 - v_{x, y-1, z+\frac{n_b}{n_c}})^2 + (1 - v_{x, y+1, z-\frac{n_b}{n_c}})^2 + <br /> (1 - v_{x-1, y, z+\frac{n_a}{n_c}})^2 + (1 - v_{x+1, y, z-\frac{n_a}{n_c}})^2<br /> &lt;/math&gt;<br /> <br /> The gradients along x are:<br /> <br /> &lt;math&gt;<br /> \frac{\partial L_{normal}(x, y, z)}{\partial v_{x-1, y, z+\frac{n_a}{n_c}}} = 2(v_{x-1, y, z+\frac{n_a}{n_c}}-1)<br /> &lt;/math&gt;<br /> and<br /> &lt;math&gt;<br /> \frac{\partial L_{normal}(x, y, z)}{\partial v_{x+1, y, z-\frac{n_a}{n_c}}} = 2(v_{x+1, y, z-\frac{n_a}{n_c}}-1)<br /> &lt;/math&gt;<br /> <br /> The gradients along y are analogous.<br /> <br /> = Training =<br /> The 2.5D and 3D estimation components are first pre-trained separately on synthetic data from ShapeNet, and then fine-tuned on real images.<br /> <br /> For pre-training, the 2.5D sketch estimator is trained on synthetic ShapeNet depth, surface normal, and silhouette ground truth, using an L2 loss. The 3D estimator is trained with ground truth voxels using a cross-entropy loss.<br /> <br /> The reprojection consistency loss is used to fine-tune the 3D estimation on real images, using the predicted depth, normals, and silhouette. A straightforward implementation leads to shapes that explain the 2.5D sketches well but have an unrealistic 3D appearance, due to overfitting.<br /> <br /> Instead, the decoder of the 3D estimator is fixed, and only the encoder is fine-tuned.
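As a concrete view of what this fine-tuning optimizes, the depth re-projection criterion defined above can be sketched in NumPy. This is a toy sketch under assumed conventions (an occupancy grid indexed (x, y, z), a depth map quantized to voxel indices, and np.inf marking rays outside the silhouette); it is not the authors' implementation:

```python
import numpy as np

def depth_reprojection_loss(v, d):
    # v: voxel occupancy grid of shape (X, Y, Z), entries in [0, 1].
    # d: estimated depth map of shape (X, Y), quantized to voxel indices;
    #    d[x, y] = np.inf where there is no visible surface, so every voxel
    #    along that ray is pushed to 0 (the z < d branch covers all z).
    X, Y, Z = v.shape
    z = np.arange(Z)[None, None, :]                # broadcast voxel depth index
    dd = np.asarray(d, dtype=float)[:, :, None]
    loss = np.where(z < dd, v ** 2,                # in front of the surface: push to 0
           np.where(z == dd, (1.0 - v) ** 2,       # at the surface: push to 1
                    0.0))                          # behind the surface: unconstrained
    return float(loss.sum())
```

The surface-normal criterion follows the same pattern, penalizing squared deviation from 1 at the four neighbouring voxels given by n'_x and n'_y.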
The model is fine-tuned separately on each image for 40 iterations, which takes up to 10 seconds on a GPU. Without fine-tuning, testing takes around 100 milliseconds. SGD is used for optimization, with a batch size of 4, a learning rate of 0.001, and a momentum of 0.9.<br /> <br /> = Evaluation =<br /> Qualitative and quantitative results are provided for different variants of the framework. The framework is evaluated on both synthetic and real images from three datasets: ShapeNet, PASCAL 3D+, and IKEA. Intersection-over-Union (IoU) is the main measure for comparing the models. However, the authors note that models which focus on the IoU metric tend to disregard the finer details of the object they are modelling in favour of its overall shape. To counter this drawback, they poll people on which reconstruction they prefer. IoU is also computationally inefficient, since it has to be checked over all possible scales.<br /> <br /> == ShapeNet ==<br /> Synthesized images of 6,778 chairs from ShapeNet are rendered from 20 random viewpoints. The chairs are placed in front of random backgrounds from the SUN dataset, and the RGB, depth, normal, and silhouette images are rendered using the physics-based renderer Mitsuba for more realistic images.<br /> <br /> === Method ===<br /> MarrNet is trained without the final fine-tuning stage, since ground truth 3D shapes are available. A baseline is created that directly predicts the 3D shape using the same 3D shape estimator architecture, with no 2.5D sketch estimation.<br /> <br /> === Results ===<br /> The baseline output is compared to the full framework, and the figure below shows that MarrNet produces model outputs with more details and smoother surfaces than the baseline. The estimated normal and depth images are able to extract intrinsic information about object shape while leaving behind non-essential information such as textures from the original images.
Quantitatively, the full model also achieves a 0.57 intersection-over-union (IoU) score (which measures the overlap between the predicted model and the ground truth), higher than the direct prediction baseline.<br /> <br /> [[File:marrnet_shapenet_results.png|700px|thumb|center|ShapeNet results.]]<br /> <br /> == PASCAL 3D+ ==<br /> This dataset provides rough 3D models for real-life images.<br /> <br /> === Method ===<br /> Each module is pre-trained on the ShapeNet dataset, and then fine-tuned on the PASCAL 3D+ dataset. Three variants of the model are tested. The first is trained using ShapeNet data only, with no fine-tuning. The second is fine-tuned without fixing the decoder. The third is fine-tuned with a fixed decoder.<br /> <br /> === Results ===<br /> The figure below shows the results of the ablation study. The model trained only on synthetic data provides reasonable estimates. However, fine-tuning without fixing the decoder leads to impossible shapes from certain views. The third model keeps the shape prior, providing more details in the final shape.<br /> <br /> [[File:marrnet_pascal_3d_ablation.png|600px|thumb|center|Ablation studies using the PASCAL 3D+ dataset.]]<br /> <br /> Additional comparisons are made with the state of the art (DRC) on the provided ground truth shapes. MarrNet achieves 0.39 IoU, while DRC achieves 0.34. Since PASCAL 3D+ only has rough annotations, with only 10 CAD chair models for all images, computing IoU against these shapes is not very informative. Instead, human studies are conducted: MarrNet reconstructions are preferred 74% of the time over DRC, and 42% of the time over the ground truth. This shows that MarrNet produces appealing shapes, and also highlights that the ground truth shapes are not very accurate.<br /> <br /> [[File:human_studies.png|400px|thumb|center|Human preferences on chairs in PASCAL 3D+ (Xiang et al. 2014).
The numbers show the percentage of how often humans preferred the 3D shape from DRC (state-of-the-art), MarrNet, or GT.]]<br /> <br /> <br /> [[File:marrnet_pascal_3d_drc_comparison.png|600px|thumb|center|Comparison between DRC and MarrNet results.]]<br /> <br /> Several failure cases are shown in the figure below. In particular, the framework does not seem to work well on thin structures.<br /> <br /> [[File:marrnet_pascal_3d_failure_cases.png|500px|thumb|center|Failure cases on PASCAL 3D+. The algorithm cannot recover thin structures.]]<br /> <br /> == IKEA ==<br /> This dataset contains images of IKEA furniture, with accurate 3D shape and pose annotations. Objects are often heavily occluded or truncated.<br /> <br /> === Results ===<br /> Qualitative results are shown in the figure below. The model is shown to handle mild occlusions in real-life scenarios. Human studies show that MarrNet reconstructions are preferred 61% of the time over 3D-VAE-GAN.<br /> <br /> [[File:marrnet_ikea_results.png|700px|thumb|center|Results on chairs in the IKEA dataset, and comparison with 3D-VAE-GAN.]]<br /> <br /> == Other Data ==<br /> MarrNet is also applied to cars and airplanes. As shown below, smaller details such as the horizontal stabilizer and rear-view mirrors are recovered.<br /> <br /> [[File:marrnet_airplanes_and_cars.png|700px|thumb|center|Results on airplanes and cars from the PASCAL 3D+ dataset, and comparison with DRC.]]<br /> <br /> MarrNet is also jointly trained on three object categories, and successfully recovers the shapes of the different categories. Results are shown in the figure below.<br /> <br /> [[File:marrnet_multiple_categories.png|700px|thumb|center|Results when trained jointly on all three object categories (cars, airplanes, and chairs).]]<br /> <br /> = Commentary =<br /> Qualitatively, the results look quite impressive. The 2.5D sketch estimation seems to distill the useful information for more realistic-looking 3D shape estimation.
The disentanglement of the 2.5D and 3D estimation steps also allows for easier training and domain adaptation from synthetic data.<br /> <br /> As the authors mention, the IoU metric is not very descriptive, and most of the comparisons in this paper are only qualitative, mainly being human preference studies. A better quantitative evaluation metric would greatly help in making an unbiased comparison between different results.<br /> <br /> As seen in several of the results, the network does not deal well with objects that have thin structures, which is particularly noticeable with many of the chair armrests. As well, looking more carefully at some results, fine-tuning only the 3D encoder does not seem to transfer well to unseen objects, since the shape priors have already been learned by the decoder.<br /> <br /> Also, there is ambiguity about how the aforementioned self-supervision can work, as the authors claim that the model can be fine-tuned using a single image. If the parameters are constrained to a single image, the model will not generalize well, and it is not clearly explained what can be fine-tuned.<br /> <br /> = Conclusion =<br /> The proposed MarrNet employs a novel model to estimate 2.5D sketches for 3D shape reconstruction. The sketches are shown to improve the model's performance, and make it easy to adapt to images across different domains and categories. Differentiable loss functions are created such that the model can be fine-tuned end-to-end on images without ground truth. The experiments show that the model performs well, and human studies show that the results are preferred over those of other methods.<br /> <br /> = Implementation =<br /> The source code for the paper, as written by the authors, is available at: https://github.com/jiajunwu/marrnet<br /> <br /> = References =<br /> # Jiajun Wu, Yifan Wang, Tianfan Xue, Xingyuan Sun, William T. Freeman, Joshua B.
Tenenbaum. MarrNet: 3D Shape Reconstruction via 2.5D Sketches. In NIPS, 2017.<br /> # David Marr. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. W. H. Freeman and Company, 1982.<br /> # Shubham Tulsiani, Tinghui Zhou, Alexei A Efros, and Jitendra Malik. Multi-view Supervision for Single-view Reconstruction via Differentiable Ray Consistency. In CVPR, 2017.<br /> # Jiajun Wu, Chengkai Zhang, Tianfan Xue, William T Freeman, and Joshua B Tenenbaum. Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling. In NIPS, 2016b.<br /> # Wu, J. (n.d.). Jiajunwu/marrnet. Retrieved March 25, 2018, from https://github.com/jiajunwu/marrnet<br /> # Jiajun Wu, Tianfan Xue, Joseph J Lim, Yuandong Tian, Joshua B Tenenbaum, Antonio Torralba, and William T Freeman. Single Image 3D Interpreter Network. In ECCV, 2016a.<br /> # Xinchen Yan, Jimei Yang, Ersin Yumer, Yijie Guo, and Honglak Lee. Perspective Transformer Nets: Learning Single-view 3D Object Reconstruction without 3D Supervision. In NIPS, 2016.<br /> # Danilo Jimenez Rezende, SM Ali Eslami, Shakir Mohamed, Peter Battaglia, Max Jaderberg, and Nicolas Heess. Unsupervised Learning of 3D Structure from Images. In NIPS, 2016.<br /> # Rohit Girdhar, David F.
Fouhey, Mikel Rodriguez and Abhinav Gupta, Learning a Predictable and Generative Vector Representation for Objects, in ECCV 2016</div> Jssambee http://wiki.math.uwaterloo.ca/statwiki/index.php?title=MarrNet:_3D_Shape_Reconstruction_via_2.5D_Sketches&diff=35913 MarrNet: 3D Shape Reconstruction via 2.5D Sketches 2018-03-31T05:19:17Z <p>Jssambee: /* Commentary */</p> <hr /> <div>= Introduction =<br /> Humans are able to quickly recognize 3D shapes from images, even in spite of drastic differences in object texture, material, lighting, and background.<br /> <br /> [[File:marrnet_intro_image.png|700px|thumb|center|Objects in real images. The appearance of the same shaped object varies based on colour, texture, lighting, background, etc. However, the 2.5D sketches (e.g. depth or normal maps) of the object remain constant, and can be seen as an abstraction of the object which is used to reconstruct the 3D shape.]]<br /> <br /> In this work, the authors propose a novel end-to-end trainable model that sequentially estimates 2.5D sketches and 3D object shape from images and also enforce the re projection consistency between the 3D shape and the estimated sketch. 2.5D is the construction of 3D environment using the 2D retina projection along with depth perception obtained from the image. The two step approach makes the network more robust to differences in object texture, material, lighting and background. Based on the idea from [Marr, 1982] that human 3D perception relies on recovering 2.5D sketches, which include depth maps (contains information related to the distance of surfaces from a viewpoint) and surface normal maps (technique for adding the illusion of depth details to surfaces using an image's RGB information), the authors design an end-to-end trainable pipeline which they call MarrNet. MarrNet first estimates depth, normal maps, and silhouette, followed by a 3D shape. MarrNet uses an encoder-decoder structure for the sub-components of the framework. 
<br /> <br /> The authors claim several unique advantages to their method. Single image 3D reconstruction is a highly under-constrained problem, requiring strong prior knowledge of object shapes. As well, accurate 3D object annotations using real images are not common, and many previous approaches rely on purely synthetic data. However, most of these methods suffer from domain adaptation due to imperfect rendering.<br /> <br /> Using 2.5D sketches can alleviate the challenges of domain transfer. It is straightforward to generate perfect object surface normals and depths using a graphics engine. Since 2.5D sketches contain only depth, surface normal, and silhouette information, the second step of recovering 3D shape can be trained purely from synthetic data. As well, the introduction of differentiable constraints between 2.5D sketches and 3D shape makes it possible to fine-tune the system, even without any annotations.<br /> <br /> The framework is evaluated on both synthetic objects from ShapeNet, and real images from PASCAL 3D+, showing good qualitative and quantitative performance in 3D shape reconstruction.<br /> <br /> = Related Work =<br /> <br /> == 2.5D Sketch Recovery ==<br /> Researchers have explored recovering 2.5D information from shading, texture, and colour images in the past. More recently, the development of depth sensors has led to the creation of large RGB-D datasets, and papers on estimating depth, surface normals, and other intrinsic images using deep networks. While this method employs 2.5D estimation, the final output is a full 3D shape of an object.<br /> <br /> [[File:2-5d_example.PNG|700px|thumb|center|Results from the paper: Learning Non-Lambertian Object Intrinsics across ShapeNet Categories. The results show that neural networks can be trained to recover 2.5D information from an image. The top row predicts the albedo and the bottom row predicts the shading. 
It can be observed that the results are still blurry and the fine details are not fully recovered.]]<br /> <br /> == Single Image 3D Reconstruction ==<br /> The development of large-scale shape repositories like ShapeNet has allowed for the development of models encoding shape priors for single image 3D reconstruction. These methods normally regress voxelized 3D shapes, relying on synthetic data or 2D masks for training. A voxel is an abbreviation for volume element, the three-dimensional version of a pixel. The formulation in the paper tackles domain adaptation better, since the network can be fine-tuned on images without any annotations.<br /> <br /> == 2D-3D Consistency ==<br /> Intuitively, the 3D shape can be constrained to be consistent with 2D observations. This idea has been explored for decades, and has been widely used in 3D shape completion with the use of depths and silhouettes. A few recent papers [5,6,7,8] discussed enforcing differentiable 2D-3D constraints between shape and silhouettes to enable joint training of deep networks for the task of 3D reconstruction. In this work, this idea is exploited to develop differentiable constraints for consistency between the 2.5D sketches and 3D shape.<br /> <br /> = Approach =<br /> The 3D structure is recovered from a single RGB view using three steps, shown in the figure below. The first step estimates 2.5D sketches, including depth, surface normal, and silhouette of the object. The second step estimates a 3D voxel representation of the object. The third step uses a reprojection consistency function to enforce the 2.5D sketch and 3D structure alignment.<br /> <br /> [[File:marrnet_model_components.png|700px|thumb|center|MarrNet architecture. 2.5D sketches of normals, depths, and silhouette are first estimated. The sketches are then used to estimate the 3D shape. 
Finally, re-projection consistency is used to ensure consistency between the sketch and 3D output.]]<br /> <br /> == 2.5D Sketch Estimation ==<br /> The first step takes a 2D RGB image and predicts the 2.5D sketch, with surface normal, depth, and silhouette of the object. The goal is to estimate intrinsic object properties from the image, while discarding non-essential information such as texture and lighting. An encoder-decoder architecture is used. The encoder is a ResNet-18 network, which takes a 256 x 256 RGB image and produces 512 feature maps of size 8 x 8. The decoder is four sets of 5 x 5 fully convolutional and ReLU layers, followed by four sets of 1 x 1 convolutional and ReLU layers. The output is 256 x 256 resolution depth, surface normal, and silhouette images.<br /> <br /> == 3D Shape Estimation ==<br /> The second step estimates a voxelized 3D shape using the 2.5D sketches from the first step. The focus here is for the network to learn a shape prior that can explain the input well; it can be trained on synthetic data without suffering from the domain adaptation problem, since it only takes surface normal and depth images as input. The network architecture is inspired by the TL network, and 3D-VAE-GAN, with an encoder-decoder structure. The normal and depth images, masked by the estimated silhouette, are passed into 5 sets of convolutional, ReLU, and pooling layers, followed by two fully connected layers, with a final output width of 200. The 200-dimensional vector is passed into a decoder of 5 fully convolutional and ReLU layers, outputting a 128 x 128 x 128 voxelized estimate of the input.<br /> <br /> == Re-projection Consistency ==<br /> The third step consists of a depth re-projection loss and a surface normal re-projection loss. Here, &lt;math&gt;v_{x, y, z}&lt;/math&gt; represents the value at position &lt;math&gt;(x, y, z)&lt;/math&gt; in a 3D voxel grid, with &lt;math&gt;v_{x, y, z} \in [0, 1] \; \forall \; x, y, z&lt;/math&gt;.
&lt;math&gt;d_{x, y}&lt;/math&gt; denotes the estimated depth at position &lt;math&gt;(x, y)&lt;/math&gt;, and &lt;math&gt;n_{x, y} = (n_a, n_b, n_c)&lt;/math&gt; denotes the estimated surface normal. Orthographic projection is used.<br /> <br /> [[File:marrnet_reprojection_consistency.png|700px|thumb|center|Reprojection consistency for voxels. Left and middle: criteria for depth and silhouettes. Right: criterion for surface normals]]<br /> <br /> === Depths ===<br /> The voxel at the estimated depth, &lt;math&gt;v_{x, y, d_{x, y}}&lt;/math&gt;, should be 1, while all voxels in front of it should be 0. This ensures the estimated 3D shape matches the estimated depth values. The projected depth loss and its gradient are defined as follows:<br /> <br /> &lt;math&gt;<br /> L_{depth}(x, y, z)=<br /> \left\{<br /> \begin{array}{ll}<br /> v^2_{x, y, z}, &amp; z &lt; d_{x, y} \\<br /> (1 - v_{x, y, z})^2, &amp; z = d_{x, y} \\<br /> 0, &amp; z &gt; d_{x, y} \\<br /> \end{array}<br /> \right.<br /> &lt;/math&gt;<br /> <br /> &lt;math&gt;<br /> \frac{\partial L_{depth}(x, y, z)}{\partial v_{x, y, z}} =<br /> \left\{<br /> \begin{array}{ll}<br /> 2v_{x, y, z}, &amp; z &lt; d_{x, y} \\<br /> 2(v_{x, y, z} - 1), &amp; z = d_{x, y} \\<br /> 0, &amp; z &gt; d_{x, y} \\<br /> \end{array}<br /> \right.<br /> &lt;/math&gt;<br /> <br /> When &lt;math&gt;d_{x, y} = \infty&lt;/math&gt;, all voxels along the ray should be 0.<br /> <br /> === Surface Normals ===<br /> Since the vectors &lt;math&gt;n_{x} = (0, -n_{c}, n_{b})&lt;/math&gt; and &lt;math&gt;n_{y} = (-n_{c}, 0, n_{a})&lt;/math&gt; are orthogonal to the normal vector &lt;math&gt;n_{x, y} = (n_{a}, n_{b}, n_{c})&lt;/math&gt;, they can be normalized to obtain &lt;math&gt;n'_{x} = (0, -1, n_{b}/n_{c})&lt;/math&gt; and &lt;math&gt;n'_{y} = (-1, 0, n_{a}/n_{c})&lt;/math&gt; on the estimated surface plane at &lt;math&gt;(x, y, z)&lt;/math&gt;.
The projected surface normal loss encourages the voxels at &lt;math&gt;(x, y, z) \pm n'_{x}&lt;/math&gt; and &lt;math&gt;(x, y, z) \pm n'_{y}&lt;/math&gt; to be 1, so that the shape matches the estimated normal. The constraints are only applied when the target voxels are inside the estimated silhouette.<br /> <br /> The projected surface normal loss is defined as follows, with &lt;math&gt;z = d_{x, y}&lt;/math&gt;:<br /> <br /> &lt;math&gt;<br /> L_{normal}(x, y, z) =<br /> (1 - v_{x, y-1, z+\frac{n_b}{n_c}})^2 + (1 - v_{x, y+1, z-\frac{n_b}{n_c}})^2 + <br /> (1 - v_{x-1, y, z+\frac{n_a}{n_c}})^2 + (1 - v_{x+1, y, z-\frac{n_a}{n_c}})^2<br /> &lt;/math&gt;<br /> <br /> Gradients along x are:<br /> <br /> &lt;math&gt;<br /> \frac{\partial L_{normal}(x, y, z)}{\partial v_{x-1, y, z+\frac{n_a}{n_c}}} = 2(v_{x-1, y, z+\frac{n_a}{n_c}}-1)<br /> &lt;/math&gt;<br /> and<br /> &lt;math&gt;<br /> \frac{\partial L_{normal}(x, y, z)}{\partial v_{x+1, y, z-\frac{n_a}{n_c}}} = 2(v_{x+1, y, z-\frac{n_a}{n_c}}-1)<br /> &lt;/math&gt;<br /> <br /> Gradients along y are similar.<br /> <br /> = Training =<br /> The 2.5D and 3D estimation components are first pre-trained separately on synthetic data from ShapeNet, and then fine-tuned on real images.<br /> <br /> For pre-training, the 2.5D sketch estimator is trained on synthetic ShapeNet depth, surface normal, and silhouette ground truth, using an L2 loss. The 3D estimator is trained with ground truth voxels using a cross-entropy loss.<br /> <br /> The reprojection consistency loss is used to fine-tune the 3D estimation on real images, using the predicted depth, normals, and silhouette. A straightforward implementation leads to shapes that explain the 2.5D sketches well but have an unrealistic 3D appearance, due to overfitting.<br /> <br /> Instead, the decoder of the 3D estimator is fixed, and only the encoder is fine-tuned. The model is fine-tuned separately on each image for 40 iterations, which takes up to 10 seconds on the GPU.
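For concreteness, the depth re-projection loss defined above can be sketched in NumPy. This is a minimal illustration, not the authors' implementation; the function name and array layout are assumptions:

```python
import numpy as np

def depth_reprojection_loss(v, d):
    """Depth re-projection loss over a voxel grid.

    v : (X, Y, Z) array of voxel occupancies in [0, 1].
    d : (X, Y) array of estimated depths (indices along the z-axis);
        np.inf where the ray falls outside the estimated silhouette.

    For each ray (x, y): voxels in front of the estimated depth are
    penalized towards 0, and the voxel at the depth towards 1.
    Voxels behind the depth contribute nothing.
    """
    X, Y, Z = v.shape
    z = np.arange(Z)[None, None, :]   # (1, 1, Z) depth index along each ray
    dz = d[:, :, None]                # (X, Y, 1) estimated depth per ray
    loss = np.where(z < dz, v ** 2,               # z <  d_{x,y}
           np.where(z == dz, (1.0 - v) ** 2,      # z == d_{x,y}
                    0.0))                         # z >  d_{x,y}
    return loss.sum()
```

Note that with `d = np.inf` every voxel along the ray satisfies `z < d`, so all of them are pushed towards 0, matching the silhouette criterion above; the gradient given earlier is just the elementwise derivative of these squared terms.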
Without fine-tuning, testing time takes around 100 milliseconds. SGD is used for optimization with a batch size of 4, learning rate of 0.001, and momentum of 0.9.<br /> <br /> = Evaluation =<br /> Qualitative and quantitative results are provided using different variants of the framework. The framework is evaluated on both synthetic and real images on three datasets: ShapeNet, PASCAL 3D+, and IKEA. Intersection-over-Union (IoU) is the main measurement of comparison between the models. However, the authors note that models which optimize for the IoU metric tend to disregard fine details of the object in favour of the overall shape. To counter this drawback, they poll people on which reconstruction is preferred. IoU is also computationally inefficient since it has to check over all possible scales.<br /> <br /> == ShapeNet ==<br /> Synthesized images of 6,778 chairs from ShapeNet are rendered from 20 random viewpoints. The chairs are placed in front of random backgrounds from the SUN dataset, and the RGB, depth, normal, and silhouette images are rendered using the physics-based renderer Mitsuba for more realistic images.<br /> <br /> === Method ===<br /> MarrNet is trained without the final fine-tuning stage, since 3D shapes are available. A baseline is created that directly predicts the 3D shape using the same 3D shape estimator architecture with no 2.5D sketch estimation.<br /> <br /> === Results ===<br /> The baseline output is compared to the full framework, and the figure below shows that MarrNet provides model outputs with more details and smoother surfaces than the baseline. The estimated normal and depth images are able to extract intrinsic information about object shape while leaving behind non-essential information such as textures from the original images.
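Intersection-over-Union, the main quantitative metric used in the evaluation, can be computed between voxel grids as follows. This is a simple sketch: the function name and thresholding are assumptions, and the search over scales that makes the metric expensive is omitted:

```python
import numpy as np

def voxel_iou(pred, gt, threshold=0.5):
    """IoU between a predicted voxel grid with values in [0, 1] and a
    binary ground-truth grid, after thresholding the prediction."""
    p = pred >= threshold
    g = gt.astype(bool)
    union = np.logical_or(p, g).sum()
    if union == 0:
        return 1.0  # both grids empty: treat as perfect agreement
    return np.logical_and(p, g).sum() / union
```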
Quantitatively, the full model also achieves a 0.57 intersection-over-union (IoU) score (measuring the overlap between the predicted model and the ground truth), which is higher than the direct prediction baseline.<br /> <br /> [[File:marrnet_shapenet_results.png|700px|thumb|center|ShapeNet results.]]<br /> <br /> == PASCAL 3D+ ==<br /> This dataset provides rough 3D models for real-life images.<br /> <br /> === Method ===<br /> Each module is pre-trained on the ShapeNet dataset, and then fine-tuned on the PASCAL 3D+ dataset. Three variants of the model are tested. The first is trained using ShapeNet data only with no fine-tuning. The second is fine-tuned without fixing the decoder. The third is fine-tuned with a fixed decoder.<br /> <br /> === Results ===<br /> The figure below shows the results of the ablation study. The model trained only on synthetic data provides reasonable estimates. However, fine-tuning without fixing the decoder leads to impossible shapes from certain views. The third model keeps the shape prior, providing more details in the final shape.<br /> <br /> [[File:marrnet_pascal_3d_ablation.png|600px|thumb|center|Ablation studies using the PASCAL 3D+ dataset.]]<br /> <br /> Additional comparisons are made with the state-of-the-art (DRC) on the provided ground truth shapes. MarrNet achieves 0.39 IoU, while DRC achieves 0.34. Since PASCAL 3D+ only has rough annotations, with only 10 CAD chair models for all images, computing IoU with these shapes is not very informative. Instead, human studies are conducted: MarrNet reconstructions are preferred 74% of the time over DRC, and 42% of the time over ground truth. This suggests that MarrNet produces convincing shapes, and also highlights how rough the ground truth shapes are.<br /> <br /> [[File:human_studies.png|400px|thumb|center|Human preferences on chairs in PASCAL 3D+ (Xiang et al. 2014).
The numbers show how often humans preferred the 3D shape from DRC (state-of-the-art), MarrNet, or GT.]]<br /> <br /> <br /> [[File:marrnet_pascal_3d_drc_comparison.png|600px|thumb|center|Comparison between DRC and MarrNet results.]]<br /> <br /> Several failure cases are shown in the figure below. Specifically, the framework does not seem to work well on thin structures.<br /> <br /> [[File:marrnet_pascal_3d_failure_cases.png|500px|thumb|center|Failure cases on PASCAL 3D+. The algorithm cannot recover thin structures.]]<br /> <br /> == IKEA ==<br /> This dataset contains images of IKEA furniture, with accurate 3D shape and pose annotations. Objects are often heavily occluded or truncated.<br /> <br /> === Results ===<br /> Qualitative results are shown in the figure below. The model is shown to deal with mild occlusions in real-life scenarios. Human studies show that MarrNet reconstructions are preferred 61% of the time over 3D-VAE-GAN.<br /> <br /> [[File:marrnet_ikea_results.png|700px|thumb|center|Results on chairs in the IKEA dataset, and comparison with 3D-VAE-GAN.]]<br /> <br /> == Other Data ==<br /> MarrNet is also applied on cars and airplanes. Shown below, smaller details such as the horizontal stabilizer and rear-view mirrors are recovered.<br /> <br /> [[File:marrnet_airplanes_and_cars.png|700px|thumb|center|Results on airplanes and cars from the PASCAL 3D+ dataset, and comparison with DRC.]]<br /> <br /> MarrNet is also jointly trained on three object categories, and successfully recovers the shapes of different categories. Results are shown in the figure below.<br /> <br /> [[File:marrnet_multiple_categories.png|700px|thumb|center|Results when trained jointly on all three object categories (cars, airplanes, and chairs).]]<br /> <br /> = Commentary =<br /> Qualitatively, the results look quite impressive. The 2.5D sketch estimation seems to distill the useful information for more realistic looking 3D shape estimation.
The disentanglement of the 2.5D and 3D estimation steps also allows for easier training and domain adaptation from synthetic data.<br /> <br /> As the authors mention, the IoU metric is not very descriptive, and most of the comparisons in this paper are only qualitative, mainly being human preference studies. A better quantitative evaluation metric would greatly help in making an unbiased comparison between different results.<br /> <br /> As seen in several of the results, the network does not deal well with objects that have thin structures, which is particularly noticeable with many of the chair arm rests. As well, looking more carefully at some results, fine-tuning only the 3D encoder does not appear to transfer well to unseen objects, since shape priors have already been learned by the decoder.<br /> <br /> There is also some ambiguity about how the aforementioned self-supervision works: the authors claim the model can be fine-tuned using a single image, but parameters adapted to a single image are unlikely to generalize well, and it is not clearly explained what exactly is fine-tuned.<br /> <br /> = Conclusion =<br /> The proposed MarrNet employs a novel model to estimate 2.5D sketches for 3D shape reconstruction. The sketches are shown to improve the model’s performance, and make it easy to adapt to images across different domains and categories. Differentiable loss functions are created such that the model can be fine-tuned end-to-end on images without ground truth. The experiments show that the model performs well, and human studies show that the results are preferred over other methods.<br /> <br /> = Implementation =<br /> The following repository provides the source code for the paper, as written by the authors: https://github.com/jiajunwu/marrnet<br /> <br /> = References =<br /> # Jiajun Wu, Yifan Wang, Tianfan Xue, Xingyuan Sun, William T. Freeman, Joshua B.
Tenenbaum. MarrNet: 3D Shape Reconstruction via 2.5D Sketches, 2017<br /> # David Marr. Vision: A computational investigation into the human representation and processing of visual information. W. H. Freeman and Company, 1982.<br /> # Shubham Tulsiani, Tinghui Zhou, Alexei A Efros, and Jitendra Malik. Multi-view supervision for single-view reconstruction via differentiable ray consistency. In CVPR, 2017.<br /> # Jiajun Wu, Chengkai Zhang, Tianfan Xue, William T Freeman, and Joshua B Tenenbaum. Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling. In NIPS, 2016b.<br /> # Wu, J. (n.d.). Jiajunwu/marrnet. Retrieved March 25, 2018, from https://github.com/jiajunwu/marrnet<br /> # Jiajun Wu, Tianfan Xue, Joseph J Lim, Yuandong Tian, Joshua B Tenenbaum, Antonio Torralba, and William T Freeman. Single image 3d interpreter network. In ECCV, 2016a.<br /> # Xinchen Yan, Jimei Yang, Ersin Yumer, Yijie Guo, and Honglak Lee. Perspective transformer nets: Learning single-view 3d object reconstruction without 3d supervision. In NIPS, 2016.<br /> # Danilo Jimenez Rezende, SM Ali Eslami, Shakir Mohamed, Peter Battaglia, Max Jaderberg, and Nicolas Heess. Unsupervised learning of 3d structure from images. In NIPS, 2016.</div> Jssambee http://wiki.math.uwaterloo.ca/statwiki/index.php?title=Multi-scale_Dense_Networks_for_Resource_Efficient_Image_Classification&diff=35912 Multi-scale Dense Networks for Resource Efficient Image Classification 2018-03-31T04:49:49Z <p>Jssambee: /* Architecture */</p> <hr /> <div>= Introduction = <br /> <br /> Multi-Scale Dense Networks, MSDNets, are designed to address the growing demand for efficient object recognition.
The issue with existing recognition networks is that they are either:<br /> * efficient networks that do not do well on hard examples, or<br /> * large networks that do well on all examples but require a large amount of resources.<br /> <br /> In order to be efficient across all difficulty levels, MSDNets propose a structure that can accurately output classifications for varying levels of computational requirements. The two settings used to evaluate the network are:<br /> * Anytime prediction: what is the best prediction the network can provide when suddenly prompted?<br /> * Budgeted batch prediction: given a maximum amount of computational resources, how well does the network do on the batch?<br /> <br /> = Related Networks =<br /> <br /> == Computationally Efficient Networks ==<br /> <br /> Much of the existing work on convolutional networks that are computationally efficient at test time focuses on reducing model size after training. Existing methods for refining an accurate network to be more efficient include weight pruning [3,4,5], quantization of weights [6,7] (during or after training), and knowledge distillation [8,9], which trains smaller student networks to reproduce the output of a much larger teacher network.
The proposed work differs from these approaches as it trains a single model which trades computational efficiency for accuracy at test time, without re-training or fine-tuning.<br /> <br /> == Resource Efficient Networks == <br /> <br /> Unlike the above, resource-efficient approaches consider limited resources as a part of the structure/loss.<br /> Examples of work in this area include: <br /> * Efficient variants of existing state-of-the-art networks<br /> * Gradient boosted decision trees, which incorporate computational limitations into the training<br /> * Fractal nets<br /> * The adaptive computation time method<br /> <br /> == Related architectures ==<br /> <br /> MSDNets draw on concepts from a number of existing networks:<br /> * Neural fabrics and related work are used to quickly establish a low-resolution feature map, which is integral for classification.<br /> * Deeply supervised nets introduced the incorporation of multiple classifiers throughout the network.<br /> * The feature concatenation method from DenseNets allows the later classifiers not to be disrupted by the weight updates from earlier classifiers.<br /> <br /> = Problem Setup =<br /> The authors consider two settings that impose computational constraints at prediction time.<br /> <br /> == Anytime Prediction ==<br /> In the anytime prediction setting (Grubb &amp; Bagnell, 2012), there is a finite computational budget &lt;math&gt;B &gt; 0&lt;/math&gt; available for each test example &lt;math&gt;x&lt;/math&gt;. Once the budget is exhausted, the prediction for the class is output using an early exit. The budget is nondeterministic and varies per test instance.<br /> <br /> == Budgeted Batch Classification ==<br /> In the budgeted batch classification setting, the model needs to classify a set of examples &lt;math&gt;\mathcal{D}_{test} = \{x_1, \ldots, x_M\}&lt;/math&gt; within a finite computational budget &lt;math&gt;B &gt; 0&lt;/math&gt; that is known in advance.<br /> <br /> = Multi-Scale Dense Networks =<br /> <br /> == Integral Contributions ==<br /> <br /> The way MSDNets aim to provide efficient classification with varying computational costs is to create one network that outputs results at multiple depths. While this may seem trivial, as intermediate classifiers can be inserted into any existing network, two major problems arise.<br /> <br /> === Coarse Level Features Needed For Classification ===<br /> <br /> [[File:paper29 fig3.png | 700px|thumb|center]]<br /> <br /> The term coarse-level feature refers to a set of filters in a CNN with low resolution. There are several ways to create such features, typically referred to as downsampling. Some examples of layers that perform this function are max pooling, average pooling, and convolution with strides. In this architecture, convolution with strides is used to create coarse features. <br /> <br /> Coarse-level features are needed to gain context of the scene. In typical CNN-based networks, the features propagate from fine to coarse. Classifiers added to the early, fine-featured layers do not output accurate predictions due to the lack of context.<br /> <br /> Figure 3 depicts relative accuracies of the intermediate classifiers and shows that the accuracy of a classifier is highly correlated with its position in the network. It is easy to see, specifically in the case of ResNet, that the classifiers improve in a staircase pattern. All of the experiments were performed on the CIFAR-100 dataset, and it can be seen that the intermediate classifiers perform worse than the final classifiers, highlighting the problem with the lack of coarse-level features early on.<br /> <br /> To address this issue, MSDNets propose an architecture which uses multi-scale feature maps.
The feature maps at a particular layer and scale are computed by concatenating results from up to two convolutions: a standard convolution is first applied to same-scale features from the previous layer to pass on high-resolution information that subsequent layers can use to construct better coarse features, and if possible, a strided convolution is also applied on the finer-scale feature map from the previous layer to produce coarser features amenable to classification. The network thus maintains a fixed number of scales ranging from fine to coarse. These scales are propagated throughout, so that for the length of the network there are always coarse-level features for classification and fine features for learning more difficult representations.<br /> <br /> === Training of Early Classifiers Interferes with Later Classifiers ===<br /> <br /> When training a network containing intermediate classifiers, the training of early classifiers will cause the early layers to focus on features for those classifiers. These learned features may not be as useful to the later classifiers and degrade their accuracy.<br /> <br /> MSDNets use dense connectivity to avoid this issue. By concatenating the outputs of all prior layers as input to later layers, gradient propagation is spread across the available features. This means later layers are not reliant on any single earlier layer, providing opportunities to learn new features that earlier layers have ignored.<br /> <br /> == Architecture ==<br /> <br /> [[File:MSDNet_arch.png | 700px|thumb|center|Left: the MSDNet architecture. Right: example calculations for each output given 3 scales and 4 layers.]]<br /> <br /> The architecture of MSDNet is a structure of convolutions with a set number of layers and a set number of scales.
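The per-layer connectivity just described can be sketched with the convolutions replaced by cheap stand-ins (identity for the same-scale path, 2x2 average pooling for the strided path). This only illustrates the two-path concatenation at one layer; the learned filters and the dense connections to all earlier layers are omitted:

```python
import numpy as np

def downsample(x):
    """Stand-in for a strided convolution: 2x2 average pooling, stride 2."""
    H, W, C = x.shape
    return x.reshape(H // 2, 2, W // 2, 2, C).mean(axis=(1, 3))

def msd_layer(prev):
    """One MSDNet layer over a list of feature maps, one per scale
    (fine -> coarse). Scale s concatenates along channels:
      * a same-scale transform of prev[s], and
      * a strided transform of the finer map prev[s-1] (if s > 0).
    """
    out = []
    for s, f in enumerate(prev):
        parts = [f]                                # same-scale path
        if s > 0:
            parts.append(downsample(prev[s - 1]))  # finer -> coarser path
        out.append(np.concatenate(parts, axis=-1))
    return out

# three scales: 32x32, 16x16, 8x8, each starting with 4 channels
feats = [np.zeros((32 // 2**s, 32 // 2**s, 4)) for s in range(3)]
feats = msd_layer(feats)
print([f.shape for f in feats])  # [(32, 32, 4), (16, 16, 8), (8, 8, 8)]
```

The coarser scales accumulate channels from the finer scales, which is how coarse, classification-ready features stay available at every layer.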
Layers allow the network to build on previous information to generate more accurate predictions, while the scales allow the network to maintain coarse-level features throughout.<br /> <br /> The first layer is a special mini-CNN network that quickly fills all required scales with features. The following layers are generated through the convolutions of the previous layers and scales.<br /> <br /> Each output at a given scale &lt;math&gt;s&lt;/math&gt; is given by the convolution of all prior outputs of the same scale, and the strided convolution of all prior outputs from the previous scale. <br /> <br /> Each classifier consists of two convolutional layers, an average pooling layer, and a linear layer, and is run on the concatenation of all of the coarsest outputs from the preceding layers.<br /> <br /> === Loss Function ===<br /> <br /> The loss is calculated as a weighted sum of each classifier's logistic loss: <br /> <br /> &lt;math&gt;\frac{1}{|\mathcal{D}|} \sum_{x,y \in \mathcal{D}} \sum_{k}w_k L(f_k) &lt;/math&gt;<br /> <br /> Here &lt;math&gt;w_k&lt;/math&gt; represents the weights and &lt;math&gt;L(f_k)&lt;/math&gt; represents the logistic loss of each classifier. The weighted loss is taken as an average over the set of training samples. The weights can be determined from a budget of computational power, but results show that simply setting all weights to 1 is also acceptable.<br /> <br /> === Computational Limit Inclusion ===<br /> <br /> When running in a budgeted batch scenario, the network attempts to provide the best overall accuracy. To do this with a set limit on computational resources, it spends less of the budget on easy examples in order to allow more time to be spent on hard ones. <br /> In order to facilitate this, the classifiers are designed to exit when the confidence of the classification exceeds a preset threshold. To determine the threshold for each classifier, &lt;math&gt;|D_{test}|\sum_{k}(q_k C_k) \leq B &lt;/math&gt; must be true.
Here &lt;math&gt;|D_{test}|&lt;/math&gt; is the total number of test samples, &lt;math&gt;C_k&lt;/math&gt; is the computational requirement to get an output from the &lt;math&gt;k&lt;/math&gt;th classifier, and &lt;math&gt;q_k&lt;/math&gt; is the probability that a sample exits at the &lt;math&gt;k&lt;/math&gt;th classifier. Assuming that all classifiers have the same base exit probability &lt;math&gt;q&lt;/math&gt;, the &lt;math&gt;q_k&lt;/math&gt; can be solved for and used to set the thresholds.<br /> <br /> === Network Reduction and Lazy Evaluation ===<br /> There are two ways to reduce the computational needs of MSDNets:<br /> <br /> # Reduce the size of the network by splitting it into &lt;math&gt;S&lt;/math&gt; blocks along the depth dimension and keeping the &lt;math&gt;(S-i+1)&lt;/math&gt; scales in the &lt;math&gt;i^{\text{th}}&lt;/math&gt; block. Whenever a scale is removed, a transition layer merges the concatenated features using a 1x1 convolution and feeds the fine-grained features to coarser scales.<br /> # Remove unnecessary computations: group the computation in &quot;diagonal blocks&quot;; this propagates the example only along paths that are required for the evaluation of the next classifier.<br /> <br /> The strategy of minimizing unnecessary computations when the computational budget is exhausted is known as ''lazy evaluation''.<br /> <br /> = Experiments = <br /> <br /> When evaluating on CIFAR-10 and CIFAR-100, ensembles and multi-classifier versions of ResNets and DenseNets, as well as FractalNet, are used for comparison with MSDNet. <br /> <br /> When evaluating on ImageNet, ensembles and individual versions of ResNets and DenseNets are compared with MSDNets.<br /> <br /> == Anytime Prediction ==<br /> <br /> In anytime prediction, MSDNets are shown to be highly accurate with very little budget, and remain above the alternate methods as the budget increases.
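The budget constraint from the Computational Limit Inclusion section can be made concrete. One natural reading of the fixed base exit probability &lt;math&gt;q&lt;/math&gt; is &lt;math&gt;q_k \propto q(1-q)^{k-1}&lt;/math&gt; (a sample exits at the first classifier that is confident enough); the sketch below (function name, cost model, and binary search are illustrative, and classifier costs are assumed to increase with depth) finds the smallest &lt;math&gt;q&lt;/math&gt; whose expected cost fits the budget, so as much of the budget as possible is spent on later, more accurate classifiers:

```python
import numpy as np

def exit_probability(costs, budget, n_test):
    """Smallest base exit probability q with expected cost within budget,
    assuming q_k = z * q * (1 - q)**(k - 1) with normalizer z
    (so the q_k sum to 1), and costs C_k increasing in k."""
    costs = np.asarray(costs, dtype=float)
    k = np.arange(1, len(costs) + 1)

    def expected_cost(q):
        p = q * (1 - q) ** (k - 1)
        p = p / p.sum()                      # normalize exit probabilities
        return n_test * (p * costs).sum()    # |D_test| * sum_k q_k C_k

    lo, hi = 1e-6, 1.0 - 1e-6                # larger q => earlier exits => cheaper
    for _ in range(60):                      # binary search on the budget boundary
        mid = 0.5 * (lo + hi)
        if expected_cost(mid) > budget:
            lo = mid                         # over budget: exit earlier (raise q)
        else:
            hi = mid                         # within budget: try a smaller q
    return hi
```

In practice the actual confidence thresholds are then calibrated on a validation set so that roughly a fraction &lt;math&gt;q_k&lt;/math&gt; of samples exits at classifier &lt;math&gt;k&lt;/math&gt;.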
The authors attributed this to the fact that MSDNets are able to produce low-resolution feature maps well-suited for classification after just a few layers, in contrast to the high-resolution feature maps in the early layers of ResNets or DenseNets. Ensemble networks repeatedly compute similar low-level features whenever a new model needs to be evaluated, so their accuracy does not increase as quickly as the computational budget grows. <br /> <br /> [[File:MSDNet_anytime.png | 700px|thumb|center|Accuracy of the anytime classification models.]] [[File:cifar10msdnet.png | 700px|thumb|center|CIFAR-10 results.]]<br /> <br /> == Budget Batch ==<br /> <br /> For budgeted batch classification, three MSDNets are designed, with classifiers set up for varying ranges of budget constraints. On both datasets the MSDNets exceed all alternate methods with a fraction of the budget required.<br /> <br /> [[File:MSDNet_budgetbatch.png | 700px|thumb|center|Accuracy of the budget batch classification models.]]<br /> <br /> The following figure shows examples of what the network deemed &quot;easy&quot; and &quot;hard&quot; examples. The top row contains images of either red wine or volcanos that were easily classified, thus exiting the network early and reducing required computations. The bottom row contains examples of &quot;hard&quot; images that were incorrectly classified by the first classifier but were correctly classified by the last layer.<br /> <br /> [[File:MSDNet_visualizingearlyclassifying.png | 700px|thumb|center|Examples of &quot;hard&quot;/&quot;easy&quot; classification]]<br /> <br /> = Ablation study =<br /> Additional experiments were performed to shed light on multi-scale feature maps, dense connectivity, and intermediate classifiers. This experiment started with an MSDNet with six intermediate classifiers, and each of these components was removed, one at a time.
To make the comparisons fair, the computational costs of the full networks were kept similar by adapting the network width. After removing all three components, a VGG-like convolutional network is obtained. The classification accuracy of all classifiers is shown in the image below.<br /> <br /> [[File:Screenshot_from_2018-03-29_14-58-03.png]]<br /> <br /> = Critique = <br /> <br /> The problem setup and evaluation scenarios were well formulated, and according to independent reviews, the results were reproducible. Where the paper could improve is in explaining how to implement the threshold; it is not well explained how a validation set can be used to set the threshold value.<br /> <br /> = Implementation =<br /> The following repository provides the source code for the paper, written by the authors: https://github.com/gaohuang/MSDNet<br /> <br /> = Sources =<br /> # Huang, G., Chen, D., Li, T., Wu, F., Maaten, L., &amp; Weinberger, K. Q. (n.d.). Multi-Scale Dense Networks for Resource Efficient Image Classification. ICLR 2018. doi:1703.09844 <br /> # Huang, G. (n.d.). Gaohuang/MSDNet. Retrieved March 25, 2018, from https://github.com/gaohuang/MSDNet<br /> # LeCun, Yann, John S. Denker, and Sara A. Solla. &quot;Optimal brain damage.&quot; Advances in neural information processing systems. 1990.<br /> # Hassibi, Babak, David G. Stork, and Gregory J. Wolff. &quot;Optimal brain surgeon and general network pruning.&quot; Neural Networks, 1993., IEEE International Conference on. IEEE, 1993.<br /> # Li, Hao, et al. &quot;Pruning filters for efficient convnets.&quot; arXiv preprint arXiv:1608.08710 (2016).<br /> # Hubara, Itay, et al. &quot;Binarized neural networks.&quot; Advances in neural information processing systems. 2016.<br /> # Rastegari, Mohammad, et al. &quot;Xnor-net: Imagenet classification using binary convolutional neural networks.&quot; European Conference on Computer Vision.
Springer, Cham, 2016.<br /> # Cristian Bucilua, Rich Caruana, and Alexandru Niculescu-Mizil. Model compression. In ACM SIGKDD, pp. 535–541. ACM, 2006.<br /> # Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. In NIPS Deep Learning Workshop, 2014.</div> Jssambee http://wiki.math.uwaterloo.ca/statwiki/index.php?title=Do_Deep_Neural_Networks_Suffer_from_Crowding&diff=35717 Do Deep Neural Networks Suffer from Crowding 2018-03-27T17:28:00Z <p>Jssambee: </p> <hr /> <div>= Introduction =<br /> Since the increase in popularity of Deep Neural Networks (DNNs), there has been much research into making machines capable of recognizing objects the same way humans do. Humans can recognize objects in a way that is invariant to scale, translation, and clutter. Crowding is another visual effect suffered by humans, in which an object that can be recognized in isolation can no longer be recognized when other objects, called flankers, are placed close to it. This paper focuses on studying the impact of crowding on DNNs trained for object recognition by adding clutter to the images and then analyzing which models and settings suffer less from such effects. <br /> <br /> [[File:paper25_fig_crowding_ex.png|center|600px]]<br /> The figure shows a visual example of crowding. Keeping your eyes still on the dot in the center, try to identify the &quot;A&quot; in the two circles. You should see that it is much easier to make out the &quot;A&quot; in the right circle than in the left one. The same &quot;A&quot; exists in both circles; however, the left circle contains flankers (the line segments).<br /> <br /> Another common example illustrating the same effect:<br /> [[File:crowding-tigger.jpg|center|600px]]<br /> <br /> <br /> ===What is the problem in CNNs?===<br /> CNNs fall short in explaining human perceptual invariance. First, CNNs typically take input at a single uniform resolution.
Biological measurements suggest that resolution is not uniform across the human visual field, but rather decays with eccentricity, i.e. distance from the center of focus. Even more importantly, CNNs rely not only on weight-sharing but also on data augmentation to achieve transformation invariance, which requires a substantial amount of extra processing.<br /> <br /> The paper investigates two types of DNNs for crowding: traditional deep convolutional neural networks (DCNNs) and a multi-scale eccentricity-dependent model, an extension of the DCNN inspired by the retina, in which the receptive field size of the convolutional filters grows with increasing distance from the center of the image (the eccentricity); this model is explained below. The authors focus on the dependence of crowding on image factors such as flanker configuration, target-flanker similarity, target eccentricity and, in particular, premature pooling. Along with that, there is a major emphasis on reducing the training time of the networks, since the motive is to have a simple network capable of learning space-invariant features.<br /> <br /> = Models =<br /> <br /> == Deep Convolutional Neural Networks ==<br /> The DCNN is a basic architecture with 3 convolutional layers, spatial 3x3 max-pooling with varying strides and a fully connected layer for classification, as shown in the figure below. <br /> [[File:DCNN.png|800px|center]]<br /> <br /> The network is fed with images resized to 60x60, with mini-batches of 128 images, 32 feature channels for all convolutional layers, and convolutional filters of size 5x5 and stride 1.<br /> <br /> As highlighted earlier, the effect of pooling is a main consideration, and hence three different configurations have been investigated: <br /> <br /> # '''No total pooling''' Feature map sizes decrease only due to boundary effects, as the 3x3 max pooling has stride 1. 
The square feature map sizes after each pool layer are 60-54-48-42.<br /> # '''Progressive pooling''' 3x3 pooling with a stride of 2 halves the square size of the feature maps, until we pool over what remains in the final layer, getting rid of any spatial information before the fully connected layer. (60-27-11-1).<br /> # '''At end pooling''' Same as no total pooling, but before the fully connected layer, max-pool over the entire feature map. (60-54-48-1).<br /> <br /> ==Eccentricity-dependent Model==<br /> To handle variation in scale in the input image, the eccentricity-dependent DNN is used. This was proposed as a model of the human visual cortex by [https://arxiv.org/pdf/1406.1770.pdf Poggio et al.] and was later studied further in [2]. The main intuition behind this architecture is that as eccentricity increases, the receptive fields also increase, and hence the model becomes invariant to changing input scales. The authors note that the width of each scale is roughly related to the amount of translation invariance for objects at that scale, simply because once the object is outside that window, the filter no longer observes it. Therefore, the authors say that the architecture emphasizes scale invariance over translation invariance, in contrast to traditional DCNNs. From a biological perspective, eye movement can compensate for the limitations of translation invariance, but compensating for scale invariance requires changing distance from the object. In this model, the input image is cropped at varying scales (11 crops, each larger than the previous by a factor of &lt;math&gt;\sqrt{2}&lt;/math&gt;, all then resized to 60x60 pixels), which are then fed to the network. Exponentially spaced crops are used instead of linearly spaced crops since they produce fewer boundary effects while qualitatively maintaining the same behavior. 
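The crop pyramid just described can be sketched numerically (a minimal sketch: the smallest crop side of 60 px is our assumption, chosen so that 11 crops growing by &lt;math&gt;\sqrt{2}&lt;/math&gt; span a 1920-pixel image; in the model every crop is subsequently resized to 60x60):

```python
import math

def crop_sizes(smallest=60, n_crops=11, factor=math.sqrt(2)):
    """Side lengths (px) of the exponentially spaced crops, centred on the image.
    The smallest side (60) is a hypothetical choice, not stated in the paper."""
    return [round(smallest * factor ** i) for i in range(n_crops)]

sizes = crop_sizes()
# 11 exponentially growing crops; after resizing each crop to 60x60, the
# small crops keep high resolution and the large crops are sampled coarsely.
```

Since &lt;math&gt;(\sqrt{2})^{10} = 32&lt;/math&gt;, the largest crop side is 32 times the smallest, so the pyramid covers a wide field of view while each scale channel still receives a fixed 60x60 input.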
The model computes an invariant representation of the input by sampling the inverted pyramid at a discrete set of scales with the same number of filters at each scale. Since the same number of filters is used for each scale, the smaller crops are sampled at a high resolution while the larger crops are sampled at a low resolution. Each scale is fed into the network as an input channel to the convolutional layers, and the weights are shared across scale and space. Due to the downsampling of the input image, this is equivalent to having receptive fields of varying sizes. Intuitively, this means that the network generalizes what it learns across scales; this is enforced during back-propagation by averaging the error derivatives over all scale channels and then using the averages to compute the weight adjustments. The same set of weight adjustments is applied to the convolutional units across the different scale channels.<br /> [[File:EDM.png|2000x450px|center]]<br /> <br /> <br /> The architecture of this model is the same as the previous DCNN model, with the only change being the extra filters added for each of the scales; since the weights are shared across scales, the number of parameters remains the same as in the DCNN models. The authors perform both spatial pooling (the aforementioned ''At end pooling'' is used here) and scale pooling, which reduces the number of scales by taking the maximum value at corresponding locations in the feature maps across multiple scales. 
It has three configurations: (1) at the beginning, in which all the different scales are pooled together after the first layer (11-1-1-1-1); (2) progressively (11-7-5-3-1); and (3) at the end, in which all 11 scales are pooled together at the last layer (11-11-11-11-1).<br /> <br /> ===Contrast Normalization===<br /> Since there are multiple scales of an input image, in some experiments normalization is performed such that the sum of the pixel intensities in each scale lies in the same range [0,1] (this is to prevent smaller crops, which have more non-black pixels, from disproportionately dominating max-pooling across scales). The normalized pixel intensities are then divided by a factor proportional to the crop area [[File:sqrtf.png|60px]], where i=1 is the smallest crop.<br /> <br /> =Experiments=<br /> Targets are the set of objects to be recognized, and flankers are the set of objects the model has not been trained to recognize, which act as clutter with respect to the target objects. The target objects are the even MNIST digits with translational variance (shifted to different locations of the image along the horizontal axis), while flankers are drawn from the odd MNIST digits, the notMNIST dataset (which contains letters) and the Omniglot dataset (which contains characters). Examples of the target and flanker configurations are shown below: <br /> [[File:eximages.png|800px|center]]<br /> <br /> The target and the flanker are referred to as ''a'' and ''x'' respectively, with the below four configurations: <br /> # No flankers. Only the target object. (a in the plots) <br /> # One central flanker closer to the center of the image than the target. (xa) <br /> # One peripheral flanker closer to the boundary of the image than the target. 
(ax) <br /> # Two flankers spaced equally around the target, both being the same object (xax).<br /> <br /> Training is done using backpropagation on images of size &lt;math&gt;1920 px^2&lt;/math&gt; with embedded target objects and flankers of size &lt;math&gt;120 px^2&lt;/math&gt;. The training and test images are divided as per the usual MNIST configuration. To determine whether there is a difference between peripheral and central flankers, all the tests are performed in the right half of the image plane.<br /> <br /> ==DNNs trained with Target and Flankers==<br /> This is a constant spacing training setup where identical flankers are placed at a distance of 120 pixels on either side of the target (xax), with the target having translational variance. The tests are evaluated on (i) the DCNN with at end pooling, and (ii) the eccentricity-dependent model with 11-11-11-11-1 scale pooling, at end spatial pooling and contrast normalization. The test data has different flanker configurations, as described above.<br /> [[File:result1.png|x450px|center]]<br /> <br /> ===Observations===<br /> * With the same flanker configuration as in training, models are better at recognizing objects in clutter than isolated objects, for all image locations.<br /> * If the target-flanker spacing is changed, then the models perform worse.<br /> * The eccentricity model is much better at recognizing objects in isolation than the DCNN, because the multi-scale crops divide the image into discrete regions, letting the model learn from image parts as well as the whole image.<br /> * Only the eccentricity-dependent model is robust to flanker configurations not included in training, and only when the target is centered.<br /> <br /> ==DNNs trained with Images with the Target in Isolation==<br /> Here the target objects are in isolation and with translational variance, while the test set uses the same set of flanker configurations as before. 
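The flanker configurations used throughout these tests (a, xa, ax, xax) can be expressed as horizontal centre offsets relative to the target (a minimal sketch; whether the 120-pixel spacing used in the experiments is measured centre-to-centre or edge-to-edge is our assumption):

```python
# Offsets per configuration: -1 = central side (toward the image centre),
# +1 = peripheral side (toward the image boundary).
OFFSETS = {"a": [], "xa": [-1], "ax": [+1], "xax": [-1, +1]}

def stimulus_centers(target_ecc, config, spacing=120):
    """Return (target_centre, flanker_centres) in pixels from the image
    centre along the horizontal axis (the tests use the right half-plane)."""
    flankers = [target_ecc + s * spacing for s in OFFSETS[config]]
    return target_ecc, flankers

# e.g. the xax setup with the target at 240 px eccentricity puts the
# flankers at 120 px (central) and 360 px (peripheral).
```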
The constant spacing and constant eccentricity effects have been evaluated.<br /> [[File:result2.png|750x400px|center]]<br /> In addition to the evaluation of DCNNs at a constant target eccentricity of 240 pixels, they are also tested here with images in which the target is fixed at 720 pixels from the center of the image, as shown in Fig 3. Since the target is already at the edge of the visual field, a flanker cannot be more peripheral in the image than the target. The same conclusions as for the 240-pixel target eccentricity can be drawn: the closer the flanker is to the target, the more the accuracy decreases. It can also be seen that when the target is close to the image boundary, recognition is poor because boundary effects erode information about the target.<br /> [[File:paper25_supplemental1.png|800px|center]]<br /> <br /> ===DCNN Observations===<br /> * Recognition gets worse as the number of flankers increases.<br /> * Convolutional networks are capable of being invariant to translations.<br /> * In the constant target eccentricity setup, where the target is fixed at the center of the image with varying target-flanker spacing, we observe that recognition improves as the distance between target and flankers increases.<br /> * Spatial pooling helps the network learn invariance.<br /> * Flankers similar to the target object interfere more with recognition, since they activate the convolutional filters more.<br /> * notMNIST flankers lead to more crowding, since they have many more edges and white pixels, which activate the convolutional layers more.<br /> <br /> ===Eccentric Model===<br /> The set-up is the same as explained earlier.<br /> [[File:result3.png|750x400px|center]]<br /> <br /> ====Observations====<br /> * The recognition accuracy depends on the eccentricity of the target object.<br /> * If the target is placed at the center and no contrast normalization is done, then the recognition accuracy is high, since this model concentrates 
the most on the central region of the image.<br /> * If contrast normalization is done, then all the scales contribute an equal amount, and hence the eccentricity dependence is removed.<br /> * Early pooling is harmful because it can discard information that would have been useful to later layers.<br /> <br /> Without contrast normalization, the middle portion of the image is covered at high resolution, so a target at the center performs well in that case. If normalization is done, then all segments of the image contribute to the classification; the overall accuracy is not as high, but the system becomes robust to changes in eccentricity.<br /> <br /> ==Complex Clutter==<br /> Here, the targets are randomly embedded into images of the Places dataset and shifted horizontally, in order to investigate model robustness when the target is not at the image center. Tests are performed on the DCNN and the eccentricity model, with and without contrast normalization, using at end pooling. The results are shown in Figure 9 below. <br /> <br /> [[File:result4.png|750x400px|center]]<br /> <br /> ====Observations====<br /> * Only the eccentricity model without contrast normalization can recognize the target, and only when the target is close to the image center.<br /> * The eccentricity model does not need to be trained on different types of clutter to become robust to them, but it needs to fixate on the relevant part of the image to recognize the target.<br /> <br /> =Conclusions=<br /> One might expect that training the network on data similar to the test data would also achieve good results in a general scenario, but that is not the case: the model trained with flankers did not give ideal results on the target objects.<br /> *'''Flanker Configuration''': When models are trained with images of objects in isolation, adding flankers harms recognition. 
Adding two flankers is the same as or worse than adding just one, and the smaller the spacing between flanker and target, the more crowding occurs. This is because the pooling operation merges nearby responses, such as those of the target and flankers when they are close.<br /> *'''Similarity between target and flanker''': Flankers more similar to targets cause more crowding, because of the selectivity property of the learned DNN filters.<br /> *'''Dependence on target location and contrast normalization''': In DCNNs and eccentricity-dependent models with contrast normalization, recognition accuracy is the same across all eccentricities. In eccentricity-dependent networks without contrast normalization, recognition does not decrease despite the presence of clutter when the target is at the center of the image.<br /> *'''Effect of pooling''': Adding pooling leads to better recognition accuracy of the models. Yet, in the eccentricity model, pooling across the scales too early in the hierarchy leads to lower accuracy.<br /> * The eccentricity-dependent models can be used for modeling the feedforward path of the primate visual cortex. <br /> * If target locations are proposed, then the system can become even more robust; a simple network can thus become robust to clutter while also reducing the amount of training data and time needed.<br /> <br /> =Critique=<br /> This paper only examines the impact of flankers on targets, i.e. how crowding can affect recognition; it does not propose anything novel in terms of architecture to deal with this type of crowding. The paper only shows that the eccentricity-based model does better than the plain DCNN model when the target is placed at the center of the image; windowing over the frames the same way that a convolutional model passes a filter over an image, instead of taking crops starting from the middle, might help.<br /> <br /> This paper focuses on image classification. 
For a stronger argument, their model could be applied to the task of object detection. Perhaps crowding does not have as large an impact when the objects of interest are localized by a region proposal network.<br /> <br /> This paper does not provide a convincing argument that the problem of crowding as experienced by humans shares a mechanism with the drop in DNN accuracy when there is more clutter in the scene. The multi-scale architecture does not seem all that close to the distribution of rods and cones in the retina [https://www.ncbi.nlm.nih.gov/books/NBK10848/figure/A763/?report=objectonly]. It might be that the eccentric model does well when the target is centered because the center is sampled by more scales, not because it is similar to a primate visual cortex, and primates are able to recognize an object in clutter when looking directly at it.<br /> <br /> =References=<br /> # Volokitin A, Roig G, Poggio T: &quot;Do Deep Neural Networks Suffer from Crowding?&quot; Conference on Neural Information Processing Systems (NIPS). 2017<br /> # Francis X. Chen, Gemma Roig, Leyla Isik, Xavier Boix and Tomaso Poggio: &quot;Eccentricity Dependent Deep Neural Networks for Modeling Human Vision&quot; Journal of Vision. 17. 808. 10.1167/17.10.808.<br /> # Harrison, W. J., Remington, R. W., &amp; Mattingley, J. (2014). Visual crowding is anisotropic along the horizontal meridian during smooth pursuit. Journal of Vision. 14. 10.1167/14.1.21. 
http://willjharrison.com/2014/01/new-paper-visual-crowding-is-anisotropic-along-the-horizontal-meridian-during-smooth-pursuit/</div> Jssambee http://wiki.math.uwaterloo.ca/statwiki/index.php?title=File:crowding-tigger.jpg&diff=35667 File:crowding-tigger.jpg 2018-03-27T07:01:24Z <p>Jssambee: </p> <hr /> <div></div> Jssambee http://wiki.math.uwaterloo.ca/statwiki/index.php?title=Do_Deep_Neural_Networks_Suffer_from_Crowding&diff=35666 Do Deep Neural Networks Suffer from Crowding 2018-03-27T06:51:00Z <p>Jssambee: </p> <hr /> <div>= Introduction =<br /> Since the increase in popularity of Deep Neural Networks (DNNs), there has been lots of research in making machines capable of recognizing objects the same way humans do. Humans can recognize objects in a way that is invariant to scale, translation, and clutter. Crowding is another visual effect suffered by humans, in which an object that can be recognized in isolation can no longer be recognized when other objects, called flankers, are placed close to it. This paper focuses on studying the impact of crowding on DNNs trained for object recognition by adding clutter to the images and then analyzing which models and settings suffer less from such effects. <br /> <br /> [[File:paper25_fig_crowding_ex.png|center|600px]]<br /> The figure shows a visual example of crowding . Keep your eyes still and look at the dot in the center and try to identify the &quot;A&quot; in the two circles. You should see that it is much easier to make out the &quot;A&quot; in the right than in the left circle. The same &quot;A&quot; exists in both circles, however, the left circle contains flankers which are those line segments.<br /> <br /> ===What is the problem in CNNs?===<br /> CNNs fall short in explaining human perceptual invariance. First, CNNs typically take input at a single uniform resolution. Biological measurements suggest that resolution is not uniform across the human visual field, but rather decays with eccentricity, i.e. 
distance from the center of focus. Even more importantly, CNNs rely not only on weight-sharing but also on data augmentation to achieve transformation invariance and so obviously a lot of processing is needed for CNNs.<br /> <br /> The paper investigates two types of DNNs for crowding: traditional deep convolutional neural networks (DCNN) and a multi-scale eccentricity-dependent model which is an extension of the DCNNs and inspired by the retina where the receptive field size of the convolutional filters in the model grows with increasing distance from the center of the image, called the eccentricity and is explained below. The authors focus on the dependence of crowding on image factors, such as flanker configuration, target-flanker similarity, target eccentricity and premature pooling in particular. Along with that, there is major emphasis on reducing the training time of the networks since the motive is to have a simple network capable of learning space-invariant features.<br /> <br /> = Models =<br /> <br /> == Deep Convolutional Neural Networks ==<br /> The DCNN is a basic architecture with 3 convolutional layers, spatial 3x3 max-pooling with varying strides and a fully connected layer for classification as shown in the below figure. <br /> [[File:DCNN.png|800px|center]]<br /> <br /> The network is fed with images resized to 60x60, with mini-batches of 128 images, 32 feature channels for all convolutional layers, and convolutional filters of size 5x5 and stride 1.<br /> <br /> As highlighted earlier, the effect of pooling is into main consideration and hence three different configurations have been investigated as below: <br /> <br /> # '''No total pooling''' Feature maps sizes decrease only due to boundary effects, as the 3x3 max pooling has stride 1. 
The square feature maps sizes after each pool layer are 60-54-48-42.<br /> # '''Progressive pooling''' 3x3 pooling with a stride of 2 halves the square size of the feature maps, until we pool over what remains in the final layer, getting rid of any spatial information before the fully connected layer. (60-27-11-1).<br /> # '''At end pooling''' Same as no total pooling, but before the fully connected layer, max-pool over the entire feature map. (60-54-48-1).<br /> <br /> ==Eccentricity-dependent Model==<br /> In order to take care of the scale invariance in the input image, the eccentricity dependent DNN is utilized. This was proposed as a model of the human visual cortex by [https://arxiv.org/pdf/1406.1770.pdf, Poggio et al] and later further studied in . The main intuition behind this architecture is that as we increase eccentricity, the receptive fields also increase and hence the model will become invariant to changing input scales. The authors note that the width of each scale is roughly related to the amount of translation invariance for objects at that scale, simply because once the object is outside that window, the filter no longer observes it. Therefore, the authors say that the architecture emphasizes scale invariance over translation invariance, in contrast to traditional DCNNs. From a biological perspective, eye movement can compensate for the limitations of translation invariance, but compensating for scale invariance requires changing distance from the object. In this model, the input image is cropped into varying scales (11 crops increasing by a factor of &lt;math&gt;\sqrt{2}&lt;/math&gt; which are then resized to 60x60 pixels) and then fed to the network. Exponentially interpolated crops are used over linearly interpolated crops since they produce fewer boundary effects while maintaining the same behavior qualitatively. 
The model computes an invariant representation of the input by sampling the inverted pyramid at a discrete set of scales with the same number of filters at each scale. Since the same number of filters are used for each scale, the smaller crops will be sampled at a high resolution while the larger crops will be sampled with a low resolution. These scales are fed into the network as an input channel to the convolutional layers and share the weights across scale and space. Due to the downsampling of the input image, this is equivalent to having receptive fields of varying sizes. Intuitively, this means that the network generalizes learnings across scales and is guaranteed by during back-propagation by averaging the error derivatives over all scale channels, then using the averages to compute weight adjustments. The same set of weight adjustments to the convolutional units across different scale channels is applied.<br /> [[File:EDM.png|2000x450px|center]]<br /> <br /> <br /> The architecture of this model is the same as the previous DCNN model with the only change being the extra filters added for each of the scales, so the number of parameters remains the same as DCNN models. The authors perform spatial pooling, the aforementioned ''At end pooling'' is used here, and scale pooling which helps in reducing the number of scales by taking the maximum value of corresponding locations in the feature maps across multiple scales. 
It has three configurations: (1) at the beginning, in which all the different scales are pooled together after the first layer, 11-1-1-1-1 (2) progressively, 11-7-5-3-1 and (3) at the end, 11-11-11-11-1, in which all 11 scales are pooled together at the last layer.<br /> <br /> ===Contrast Normalization===<br /> Since there are multiple scales of an input image, in some experiments, normalization is performed such that the sum of the pixel intensities in each scale is in the same range [0,1] (this is to prevent smaller crops, which have more non-black pixels, from disproportionately dominating max-pooling across scales). The normalized pixel intensities are then divided by a factor proportional to the crop area [[File:sqrtf.png|60px]] where i=1 is the smallest crop.<br /> <br /> =Experiments=<br /> Targets are the set of objects to be recognized and flankers are the set of objects the model has not been trained to recognize, which act as clutter with respect to these target objects. The target objects are the even MNIST numbers having translational variance (shifted at different locations of the image along the horizontal axis), while flankers are from odd MNIST numbers, not MNIST dataset (contains alphabet letters) and Omniglot dataset (contains characters). Examples of the target and flanker configurations are shown below: <br /> [[File:eximages.png|800px|center]]<br /> <br /> The target and the object are referred to as ''a'' and ''x'' respectively with the below four configurations: <br /> # No flankers. Only the target object. (a in the plots) <br /> # One central flanker closer to the center of the image than the target. (xa) <br /> # One peripheral flanker closer to the boundary of the image that the target. 
(ax) <br /> # Two flankers spaced equally around the target, being both the same object (xax).<br /> <br /> Training is done using backpropagation with images of size &lt;math&gt;1920 px^2&lt;/math&gt; with embedded targets objects and flankers of size of &lt;math&gt;120 px^2&lt;/math&gt;. The training and test images are divided as per the usual MNIST configuration. To determine if there is a difference between the peripheral flankers and the central flankers, all the tests are performed in the right half image plane.<br /> <br /> ==DNNs trained with Target and Flankers==<br /> This is a constant spacing training setup where identical flankers are placed at a distance of 120 pixels either side of the target(xax) with the target having translational variance. The tests are evaluated on (i) DCNN with at the end pooling, and (ii) eccentricity-dependent model with 11-11-11-11-1 scale pooling, at the end spatial pooling and contrast normalization. The test data has different flanker configurations as described above.<br /> [[File:result1.png|x450px|center]]<br /> <br /> ===Observations===<br /> * With the flanker configuration same as the training one, models are better at recognizing objects in clutter rather than isolated objects for all image locations<br /> * If the target-flanker spacing is changed, then models perform worse<br /> * the eccentricity model is much better at recognizing objects in isolation than the DCNN because the multi-scale crops divide the image into discrete regions, letting the model learn from image parts as well as the whole image<br /> * Only the eccentricity-dependent model is robust to different flanker configurations not included in training when the target is centered.<br /> <br /> ==DNNs trained with Images with the Target in Isolation==<br /> Here the target objects are in isolation and with translational variance while the test-set is the same set of flanker configurations as used before.<br /> 
[[File:result2.png|750x400px|center]]<br /> In addition to the evaluation of DCNNs in constant target eccentricity at 240 pixels, here they are tested with images in which the target is fixed at 720 pixels from the center of the image, as shown in Fig 3. Since the target is already at the edge of the visual field, a flanker cannot be more peripheral in the image than the target. Same results as for the 240 pixels target eccentricity can be extracted. The closer the flanker is to the target, the more accuracy decreases. Also, it can be seen that when the target is close to the image boundary, recognition is poor because of boundary effects eroding away information about the target<br /> [[File:paper25_supplemental1.png|800px|center]]<br /> <br /> ===DCNN Observations===<br /> * The recognition gets worse with the increase in the number of flankers.<br /> * Convolutional networks are capable of being invariant to translations.<br /> * In the constant target eccentricity setup, where the target is fixed at the center of the image with varying target-flanker spacing, we observe that as the distance between target and flankers increase, recognition gets better.<br /> * Spatial pooling helps the network in learning invariance.<br /> * Flankers similar to the target object helps in recognition since they activate the convolutional filter more.<br /> * notMNIST data affects leads to more crowding since they have many more edges and white image pixels which activate the convolutional layers more.<br /> <br /> ===Eccentric Model===<br /> The set-up is the same as explained earlier.<br /> [[File:result3.png|750x400px|center]]<br /> <br /> ====Observations====<br /> * The recognition accuracy is dependent on the eccentricity of the target object.<br /> * If the target is placed at the center and no contrast normalization is done, then the recognition accuracy is high since this model concentrates the most on the central region of the image.<br /> * If contrast normalization is 
done, then all the scales will contribute equal amount and hence the eccentricity dependence is removed.<br /> * Early pooling is harmful since it might take away the useful information very early which might be useful to the network.<br /> <br /> ==Complex Clutter==<br /> Here, the targets are randomly embedded into images of the Places dataset and shifted along horizontally in order to investigate model robustness when the target is not at the image center. Tests are performed on DCNN and the eccentricity model with and without contrast normalization using at end pooling. The results are shown in Figure 9 below. <br /> <br /> [[File:result4.png|750x400px|center]]<br /> <br /> ====Observations====<br /> * Only eccentricity model without contrast normalization can recognize the target and only when the target is close to the image center.<br /> * The eccentricity model does not need to be trained on different types of clutter to become robust to those types of clutter, but it needs to fixate on the relevant part of the image to recognize the target.<br /> <br /> =Conclusions=<br /> We often think that just training the network with data similar to the test data would achieve good results in a general scenario too but that's not the case as we trained the model with flankers and it did not give us the ideal results for the target objects.<br /> *'''Flanker Configuration''': When models are trained with images of objects in isolation, adding flankers harms recognition. Adding two flankers is the same or worse than adding just one and the smaller the spacing between flanker and target, the more crowding occurs. 
This is because the pooling operation merges nearby responses, such as the target and flankers if they are close.<br /> *'''Similarity between target and flanker''': Flankers more similar to targets cause more crowding, because of the selectivity property of the learned DNN filters.<br /> *'''Dependence on target location and contrast normalization''': In DCNNs and eccentricity-dependent models with contrast normalization, recognition accuracy is the same across all eccentricities. In eccentricity-dependent networks without contrast normalization, recognition does not decrease despite the presence of clutter when the target is at the center of the image.<br /> *'''Effect of pooling''': adding pooling leads to better recognition accuracy of the models. Yet, in the eccentricity model, pooling across the scales too early in the hierarchy leads to lower accuracy.<br /> * The Eccentricity Dependent Models can be used for modeling the feedforward path of the primate visual cortex. <br /> * If target locations are proposed, then the system can become even more robust and hence a simple network can become robust to clutter while also reducing the amount of training data and time needed<br /> <br /> =Critique=<br /> This paper just tries to check the impact of flankers on targets as to how crowding can affect recognition but it does not propose anything novel in terms of architecture to take care of such type of crowding. The paper only shows that the eccentricity based model does better (than plain DCNN model) when the target is placed at the center of the image but maybe windowing over the frames the same way that a convolutional model passes a filter over an image, instead of taking crops starting from the middle, might help.<br /> <br /> This paper focuses on image classification. For a stronger argument, their model could be applied to the task of object detection. 
Perhaps crowding does not have as large of an impact when the objects of interest are localized by a region proposal network.<br /> <br /> This paper does not provide a convincing argument that the problem of crowding as experienced by humans shares a similar mechanism with the drop in DNN accuracy when there is more clutter in the scene. The multi-scale architecture does not seem all that close to the distribution of rods and cones in the retina[https://www.ncbi.nlm.nih.gov/books/NBK10848/figure/A763/?report=objectonly]. It might be that the eccentric model does well when the target is centered because the center is sampled by more scales, not because it is similar to a primate visual cortex, and primates are able to recognize an object in clutter when looking directly at it.<br /> <br /> =References=<br /> # Volokitin A, Roig G, Poggio T: &quot;Do Deep Neural Networks Suffer from Crowding?&quot; Conference on Neural Information Processing Systems (NIPS). 2017<br /> # Chen FX, Roig G, Isik L, Boix X, Poggio T: &quot;Eccentricity Dependent Deep Neural Networks for Modeling Human Vision&quot; Journal of Vision. 17. 808. 10.1167/17.10.808.<br /> # Harrison WJ, Remington RW, Mattingley JB. (2014). Visual crowding is anisotropic along the horizontal meridian during smooth pursuit. Journal of Vision. 14. 10.1167/14.1.21. http://willjharrison.com/2014/01/new-paper-visual-crowding-is-anisotropic-along-the-horizontal-meridian-during-smooth-pursuit/</div> Jssambee http://wiki.math.uwaterloo.ca/statwiki/index.php?title=Spherical_CNNs&diff=35279 Spherical CNNs 2018-03-22T20:37:59Z <p>Jssambee: /* Conclusions */</p> <hr /> <div>= Introduction =<br /> Convolutional Neural Networks (CNNs), or network architectures involving CNNs, are the current state of the art for learning 2D image processing tasks such as semantic segmentation and object detection.
CNNs work well in large part due to the property of being translationally equivariant. This property allows a network trained to detect a certain type of object to still detect the object even if it is translated to another position in the image. However, this does not carry over to spherical signals, since projecting a spherical signal onto a plane results in distortions, as demonstrated in Figure 1. There are many different types of spherical projections onto a 2D plane, as most people know from the various types of world maps, none of which provide all the necessary properties for rotation-invariant learning. Applications of spherical CNNs include omnidirectional vision for robots, molecular regression problems, and weather/climate modelling.<br /> <br /> [[File:paper26-fig1.png|center]]<br /> <br /> Implementing a spherical CNN is challenging mainly because no perfectly symmetrical grid for the sphere exists, which makes it difficult both to define the rotation of a spherical filter by one pixel and to implement the correlation efficiently.<br /> <br /> The main contributions of this paper are the following:<br /> # The theory of spherical CNNs.<br /> # The first automatically differentiable implementation of the generalized Fourier transform for &lt;math&gt;S^2&lt;/math&gt; and SO(3). The authors' PyTorch implementation is easy to use, fast, and memory efficient.<br /> # The first empirical support for the utility of spherical CNNs for rotation-invariant learning problems.<br /> <br /> = Notation =<br /> Below are listed several important terms:<br /> * '''Unit Sphere''' &lt;math&gt;S^2&lt;/math&gt; is defined as the sphere whose points are all at distance 1 from the origin. The unit sphere can be parameterized by the spherical coordinates &lt;math&gt;\alpha ∈ [0, 2π]&lt;/math&gt; and &lt;math&gt;β ∈ [0, π]&lt;/math&gt;.
This is a two-dimensional manifold with respect to &lt;math&gt;\alpha&lt;/math&gt; and &lt;math&gt;β&lt;/math&gt;.<br /> * '''&lt;math&gt;S^2&lt;/math&gt; Sphere''' The surface of a sphere in 3D space.<br /> * '''Spherical Signals''' In the paper spherical images and filters are modeled as continuous functions &lt;math&gt;f : S^2 → \mathbb{R}^K&lt;/math&gt;, where K is the number of channels. Just as RGB images have 3 channels, a spherical signal can have numerous channels describing the data. Examples of channels which were used can be found in the experiments section.<br /> * '''Rotations - SO(3)''' The group of 3D rotations on an &lt;math&gt;S^2&lt;/math&gt; sphere, sometimes called the &quot;special orthogonal group&quot;. In this paper the ZYZ-Euler parameterization is used to represent SO(3) rotations with &lt;math&gt;\alpha, \beta&lt;/math&gt;, and &lt;math&gt;\gamma&lt;/math&gt;. Any rotation can be broken down into first a rotation (&lt;math&gt;\alpha&lt;/math&gt;) about the Z-axis, then a rotation (&lt;math&gt;\beta&lt;/math&gt;) about the new Y-axis (Y'), followed by a rotation (&lt;math&gt;\gamma&lt;/math&gt;) about the new Z axis (Z&quot;). [In the rest of this paper, to integrate functions on SO(3), the authors use a rotationally invariant probability measure on the Borel subsets of SO(3). This measure is an example of a Haar measure. Haar measures generalize the idea of rotationally invariant probability measures to general topological groups. For more on Haar measures, see (Feldman 2002).]<br /> <br /> = Related Work =<br /> The related work presented in this paper is very brief, in large part due to the novelty of spherical CNNs and the length of the rest of the paper. The authors enumerate numerous papers which attempt to exploit larger groups of symmetries, such as the translational symmetries of CNNs, but do not go into specific details for any of these attempts.
They do state that all the previous works are limited to discrete groups, with the exception of SO(2)-steerable networks.<br /> The authors also mention that previous works exist that analyze spherical images but that these do not have an equivariant architecture. They claim that Spherical CNNs are &quot;the first to achieve equivariance to a continuous, non-commutative group (SO(3))&quot;. They also claim to be the first to use the generalized Fourier transform for efficient computation of group correlation.<br /> <br /> = Correlations on the Sphere and Rotation Group =<br /> Spherical correlation is like planar correlation, except that translations are replaced by rotations. The definitions for each are provided as follows:<br /> <br /> '''Planar correlation''' The value of the output feature map at translation &lt;math&gt;\small x ∈ Z^2&lt;/math&gt; is computed as an inner product between the input feature map and a filter, shifted by &lt;math&gt;\small x&lt;/math&gt;.<br /> <br /> '''Spherical correlation''' The value of the output feature map evaluated at rotation &lt;math&gt;\small R ∈ SO(3)&lt;/math&gt; is computed as an inner product between the input feature map and a filter, rotated by &lt;math&gt;\small R&lt;/math&gt;.<br /> <br /> '''Rotation of Spherical Signals''' The paper introduces the rotation operator &lt;math&gt;L_R&lt;/math&gt;. The rotation operator simply rotates a function (which allows us to rotate the spherical filters) by &lt;math&gt;R^{-1}&lt;/math&gt;.
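Because &lt;math&gt;[L_R f](x) = f(R^{-1}x)&lt;/math&gt;, the operator composes as &lt;math&gt;L_{RR'} = L_R L_{R'}&lt;/math&gt;. This can be sanity-checked numerically; the sketch below (standard-library Python only, illustrative helper names, not the authors' code) builds ZYZ rotation matrices and compares both sides at a sample point:

```python
import math
import random

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rot_y(b):
    c, s = math.cos(b), math.sin(b)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def zyz(alpha, beta, gamma):
    # ZYZ-Euler parameterization described above: R = Rz(alpha) Ry(beta) Rz(gamma)
    return matmul(rot_z(alpha), matmul(rot_y(beta), rot_z(gamma)))

def inverse(A):
    # the inverse of a rotation matrix is its transpose
    return [[A[j][i] for j in range(3)] for i in range(3)]

def apply(A, x):
    return tuple(sum(A[i][k] * x[k] for k in range(3)) for i in range(3))

def f(x):
    # an arbitrary smooth test signal evaluated at a point of the sphere
    return x[0] + 2.0 * x[1] ** 2 - 0.5 * x[2] ** 3

def L(R, g):
    # rotation operator: [L_R g](x) = g(R^{-1} x)
    return lambda x: g(apply(inverse(R), x))

random.seed(0)
R1 = zyz(*(random.uniform(0, 2 * math.pi) for _ in range(3)))
R2 = zyz(*(random.uniform(0, 2 * math.pi) for _ in range(3)))
x = (0.0, 0.6, 0.8)  # a point on the unit sphere

lhs = L(matmul(R1, R2), f)(x)  # [L_{R R'} f](x)
rhs = L(R1, L(R2, f))(x)       # [L_R [L_{R'} f]](x)
assert abs(lhs - rhs) < 1e-9
```

Both evaluations agree up to floating-point error, since &lt;math&gt;(RR')^{-1} = R'^{-1}R^{-1}&lt;/math&gt;.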
With this definition we have the property that &lt;math&gt;L_{RR'} = L_R L_{R'}&lt;/math&gt;.<br /> <br /> '''Inner Products''' The inner product of two spherical signals is the integral over the sphere of the sum over channels of their pointwise products:<br /> <br /> &lt;math&gt;\langle\psi , f \rangle = \int_{S^2} \sum_{k=1}^K \psi_k (x)f_k (x)dx&lt;/math&gt;<br /> <br /> &lt;math&gt;dx&lt;/math&gt; here is SO(3) rotation invariant and is equivalent to &lt;math&gt;d \alpha \sin(\beta) d \beta / 4 \pi &lt;/math&gt; in spherical coordinates. This comes from the ZYZ-Euler parameterization where any rotation can be broken down into first a rotation about the Z-axis, then a rotation about the new Y-axis (Y'), followed by a rotation about the new Z axis (Z&quot;). More details are given in Appendix A of the paper.<br /> <br /> By this definition, the invariance of the inner product is guaranteed for any rotation &lt;math&gt;R ∈ SO(3)&lt;/math&gt;. In other words, when subjected to rotations, the volume under a spherical heightmap does not change. The following equations show that &lt;math&gt;L_R&lt;/math&gt; has adjoint &lt;math&gt;L_{R^{-1}}&lt;/math&gt; and that &lt;math&gt;L_R&lt;/math&gt; is unitary and thus preserves orthogonality and distances.<br /> <br /> &lt;math&gt;\langle L_R \psi \,, f \rangle = \int_{S^2} \sum_{k=1}^K \psi_k (R^{-1} x)f_k (x)dx&lt;/math&gt;<br /> <br /> ::::&lt;math&gt;= \int_{S^2} \sum_{k=1}^K \psi_k (x)f_k (Rx)dx&lt;/math&gt;<br /> <br /> ::::&lt;math&gt;= \langle \psi , L_{R^{-1}} f \rangle&lt;/math&gt;<br /> <br /> '''Spherical Correlation''' With the above definitions, the spherical correlation of two signals &lt;math&gt;f&lt;/math&gt; and &lt;math&gt;\psi&lt;/math&gt; is:<br /> <br /> &lt;math&gt;[\psi \star f](R) = \langle L_R \psi \,, f \rangle = \int_{S^2} \sum_{k=1}^K \psi_k (R^{-1} x)f_k (x)dx&lt;/math&gt;<br /> <br /> The output of the above equation is a function on SO(3).
This can be thought of as follows: for each combination of rotation angles &lt;math&gt;\alpha , \beta , \gamma &lt;/math&gt; there is a different volume under the correlation. The authors make a point of noting that previous work by Driscoll and Healy only ensures circular symmetry about the Z axis, whereas their new formulation ensures symmetry under any rotation.<br /> <br /> '''Rotation of SO(3) Signals''' The first layer of a Spherical CNN takes a function on the sphere (&lt;math&gt;S^2&lt;/math&gt;) and outputs a function on SO(3). Therefore, if a Spherical CNN with more than one layer is going to be built, there needs to be a way to find the correlation between two signals on SO(3). The authors then generalize the rotation operator (&lt;math&gt;L_R&lt;/math&gt;) to act on signals on SO(3). This new definition of &lt;math&gt;L_R&lt;/math&gt; is as follows (where &lt;math&gt;R^{-1}Q&lt;/math&gt; is a composition of rotations, i.e. multiplication of rotation matrices):<br /> <br /> &lt;math&gt;[L_Rf](Q)=f(R^{-1} Q)&lt;/math&gt;<br /> <br /> '''Rotation Group Correlation''' The correlation of two signals (&lt;math&gt;f,\psi&lt;/math&gt;) on SO(3) with K channels is defined as the following:<br /> <br /> &lt;math&gt;[\psi \star f](R) = \langle L_R \psi , f \rangle = \int_{SO(3)} \sum_{k=1}^K \psi_k (R^{-1} Q)f_k (Q)dQ&lt;/math&gt;<br /> <br /> where dQ represents the ZYZ-Euler angles &lt;math&gt;d \alpha \sin(\beta) d \beta d \gamma / 8 \pi^2 &lt;/math&gt;. A complete derivation can be found in Appendix A.<br /> <br /> '''Equivariance''' The equivariance of the rotation group correlation is similarly demonstrated.
A layer is equivariant if, for some operator &lt;math&gt;T_R&lt;/math&gt;, &lt;math&gt;\Phi \circ L_R = T_R \circ \Phi&lt;/math&gt;, and: <br /> <br /> &lt;math&gt;[\psi \star [L_Qf]](R) = \langle L_R \psi , L_Qf \rangle = \langle L_{Q^{-1} R} \psi , f \rangle = [\psi \star f](Q^{-1}R) = [L_Q[\psi \star f]](R) &lt;/math&gt;.<br /> <br /> = Implementation with GFFT =<br /> The authors leverage the Generalized Fourier Transform (GFT) and Generalized Fast Fourier Transform (GFFT) algorithms to compute the correlations outlined in the previous section. The Fast Fourier Transform (FFT) can compute correlations and convolutions efficiently by means of the Fourier theorem, which states that a continuous periodic function can be expressed as a weighted sum of sines and cosines, with weights called Fourier coefficients. The FFT can be generalized to &lt;math&gt;S^2&lt;/math&gt; and SO(3) and is then called the GFT. The GFT is a linear projection of a function onto orthogonal basis functions. The basis functions are a set of irreducible unitary representations for a group (such as for &lt;math&gt;S^2&lt;/math&gt; or SO(3)). For &lt;math&gt;S^2&lt;/math&gt; the basis functions are the spherical harmonics &lt;math&gt;Y_m^l(x)&lt;/math&gt;. For SO(3) these basis functions are called the Wigner D-functions &lt;math&gt;D_{mn}^l(R)&lt;/math&gt;. For both sets of functions the indices are restricted to &lt;math&gt;l\geq0&lt;/math&gt; and &lt;math&gt;-l \leq m,n \leq l&lt;/math&gt;. The Wigner D-functions are also orthogonal, so the Fourier coefficients can be computed by the inner product with the Wigner D-functions (see Appendix C for a complete proof). The Wigner D-functions are complete, which means that any (well-behaved) function on SO(3) can be expressed as a linear combination of the Wigner D-functions.
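The orthogonality relation in question is the standard one for the Wigner D-functions, stated here for reference with respect to the normalized Haar measure &lt;math&gt;dR&lt;/math&gt; introduced above (the paper's Appendix C contains the proof):<br /> <br /> &lt;math&gt;\int_{SO(3)} D_{mn}^l(R) \, \overline{D_{m'n'}^{l'}(R)} \, dR = \frac{1}{2l+1} \, \delta_{ll'} \, \delta_{mm'} \, \delta_{nn'}&lt;/math&gt;<br /> <br /> Combined with the inverse transform below, projecting &lt;math&gt;f&lt;/math&gt; against &lt;math&gt;\overline{D_{mn}^l}&lt;/math&gt; recovers the coefficient &lt;math&gt;\hat{f}_{mn}^l&lt;/math&gt;, with the factors of &lt;math&gt;2l+1&lt;/math&gt; cancelling.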
The GFT of a function on SO(3) is thus:<br /> <br /> &lt;math&gt;\hat{f^l} = \int_X f(x) D^l(x)dx&lt;/math&gt;<br /> <br /> where &lt;math&gt;\hat{f}&lt;/math&gt; represents the Fourier coefficients. For &lt;math&gt;S^2&lt;/math&gt; we have the same equation but with the basis functions &lt;math&gt;Y^l&lt;/math&gt;.<br /> <br /> The inverse SO(3) Fourier transform is:<br /> <br /> &lt;math&gt;f(R)=[\mathcal{F}^{-1} \hat{f}](R) = \sum_{l=0}^b (2l + 1) \sum_{m=-l}^l \sum_{n=-l}^l \hat{f_{mn}^l} D_{mn}^l(R) &lt;/math&gt;<br /> <br /> The bandwidth b represents the maximum frequency and is related to the resolution of the spatial grid; see Kostelec and Rockmore for further details.<br /> <br /> The authors give proofs (Appendix D) that the SO(3) correlation satisfies the Fourier theorem and that the &lt;math&gt;S^2&lt;/math&gt; correlation of spherical signals can be computed by the outer products of the &lt;math&gt;S^2&lt;/math&gt;-FTs (shown in Figure 2).<br /> <br /> [[File:paper26-fig2.png|center]]<br /> <br /> The GFFT algorithm details are taken from Kostelec and Rockmore. The authors claim they have the first automatically differentiable implementation of the GFT for &lt;math&gt;S^2&lt;/math&gt; and SO(3). The authors do not provide any run time comparisons for real time applications (they only mention that the FFT can be computed in &lt;math&gt;O(n\mathrm{log}n)&lt;/math&gt; time) or any comparisons of training times with/without the GFFT. However, they do provide the source code of their implementation at: https://github.com/jonas-koehler/s2cnn.<br /> <br /> = Experiments =<br /> The authors provide several experiments. The first set of experiments is designed to show the numerical stability and accuracy of the outlined methods.
The second group of experiments demonstrates how the algorithms can be applied to current problem domains.<br /> <br /> ==Equivariance Error==<br /> In this experiment the authors show experimentally that their theory of equivariance holds. Since equivariance was proven only for the continuous case, potential discretization artifacts could break it in practice; if equivariance did not hold, the weight-sharing scheme would become less effective. The experiment is set up by first testing the equivariance of the SO(3) correlation at different resolutions. 500 random rotations and feature maps (with 10 channels) are sampled. They then calculate the approximation error &lt;math&gt;\small\Delta = \dfrac{1}{n} \sum_{i=1}^n std(L_{R_i} \Phi(f_i) - \Phi(L_{R_i} f_i))/std(\Phi(f_i))&lt;/math&gt;<br /> Note: The authors do not state what the std function is, but it is likely the standard deviation ('std' is the standard-deviation command in MATLAB).<br /> &lt;math&gt;\Phi&lt;/math&gt; is a composition of SO(3) correlation layers with randomly initialized filters. The authors expected &lt;math&gt;\Delta&lt;/math&gt; to be zero in the case of perfect equivariance, since, as proven earlier, &lt;math&gt;\small L_{R_i} \Phi(f_i)&lt;/math&gt; and &lt;math&gt;\small \Phi(L_{R_i} f_i)&lt;/math&gt; are equal in the continuous case. The results are shown in Figure 3. <br /> <br /> [[File:paper26-fig3.png|center]]<br /> <br /> &lt;math&gt;\Delta&lt;/math&gt; only grows with resolution/layers when there is no activation function. With ReLU activation the error stays roughly constant at a value slightly above zero across resolutions.
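To make the metric concrete, the following toy analogue (an illustrative sketch, not the authors' experiment: cyclic shifts of a 1-D signal stand in for the rotations &lt;math&gt;R_i&lt;/math&gt;, and a circular convolution stands in for the SO(3) correlation layer &lt;math&gt;\Phi&lt;/math&gt;) computes &lt;math&gt;\Delta&lt;/math&gt; in a setting where equivariance is exact, so the error vanishes:

```python
import random
import statistics

def shift(v, r):
    # L_r: cyclic shift, the toy stand-in for a rotation
    return v[-r:] + v[:-r] if r else list(v)

def phi(v, kernel=(0.5, 0.3, 0.2)):
    # circular convolution: exactly shift-equivariant, the toy stand-in for a correlation layer
    n = len(v)
    return [sum(kernel[j] * v[(i - j) % n] for j in range(len(kernel))) for i in range(n)]

random.seed(1)
n, trials = 16, 50
errors = []
for _ in range(trials):
    f = [random.gauss(0, 1) for _ in range(n)]
    r = random.randrange(n)
    a = shift(phi(f), r)  # L_r Phi(f)
    b = phi(shift(f, r))  # Phi(L_r f)
    diff = [x - y for x, y in zip(a, b)]
    errors.append(statistics.pstdev(diff) / statistics.pstdev(phi(f)))

delta = sum(errors) / trials  # the approximation error, averaged over trials
assert delta < 1e-9
```

Replacing `phi` with a non-equivariant map (e.g. one that treats position 0 specially) makes &lt;math&gt;\Delta&lt;/math&gt; strictly positive, which is the effect the discretization artifacts have in the authors' experiment.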
The authors conclude that the error must therefore come from the feature map rotation, which is exact only for bandlimited functions.<br /> <br /> ==MNIST Data==<br /> The MNIST experiment projects MNIST digits onto a sphere using stereographic projection, producing the images seen in Figure 4.<br /> <br /> [[File:paper26-fig4.png|center]]<br /> <br /> The authors created two datasets, one with the projected digits and the other with the same projected digits which were then subjected to a random rotation. The spherical CNN architecture used was &lt;math&gt;\small S^2&lt;/math&gt;conv-ReLU-SO(3)conv-ReLU-FC-softmax with bandwidths of 30, 10, 6 and 20, 40, 10 channels for each layer respectively. This model was compared to a baseline CNN with layers conv-ReLU-conv-ReLU-FC-softmax with 5x5 filters, 32, 64, 10 channels and a stride of 3. For comparison, this leads to approximately 68K parameters for the baseline and 58K parameters for the spherical CNN. Results can be seen in Table 1. It is clear from the results that the spherical CNN architecture made the network rotationally invariant. Performance on the rotated set is almost identical to the non-rotated set. This is true even when trained on the non-rotated set and tested on the rotated set. Compare this to the non-spherical architecture, which becomes unusable when the digits are rotated.<br /> <br /> [[File:paper26-tab1.png|center]]<br /> <br /> ==SHREC17==<br /> The SHREC dataset contains 3D models from the ShapeNet dataset which are classified into categories. It consists of a regularly aligned dataset and a rotated dataset. The models from the SHREC17 dataset were projected onto a sphere by means of raycasting.
Different properties of the objects, obtained from the raycast of the original model and of its convex hull, make up the different channels which are input into the spherical CNN.<br /> <br /> <br /> [[File:paper26-fig5.png|center]]<br /> <br /> <br /> The network architecture used is an initial &lt;math&gt;\small S^2&lt;/math&gt;conv-BN-ReLU block which is followed by two SO(3)conv-BN-ReLU blocks. The output is then fed into a MaxPool-BN block, then a linear layer for final classification. The architecture for this experiment has ~1.4M parameters, far exceeding the scale of the spherical CNNs in the other experiments.<br /> <br /> This architecture achieves results close to the state of the art on the SHREC17 tasks, placing 2nd or 3rd in all categories; the model was not formally submitted as the SHREC17 competition is closed. Table 2 shows the comparison of results with the top 3 submissions in each category. In the table, P@N stands for precision, R@N stands for recall, F1@N stands for F-score, mAP stands for mean average precision, and NDCG stands for normalized discounted cumulative gain in relevance based on whether the category and subcategory labels are predicted correctly. The authors claim the results show empirical support for the usefulness of spherical CNNs. They elaborate that this is largely because most architectures in the SHREC17 competition are highly specialized, whereas their model is fairly general.<br /> <br /> <br /> [[File:paper26-tab2.png|center]]<br /> <br /> ==Molecular Atomization==<br /> In this experiment a spherical CNN is implemented with an architecture resembling that of ResNet. They use the QM7 dataset (Blum et al. 2009), which has the task of predicting the atomization energy of molecules. The QM7 dataset is a subset of GDB-13 (a database of organic molecules) composed of all molecules of up to 23 atoms. The positions and charges given in the dataset are projected onto the sphere using potential functions. This is done as follows.
First, for each atom, a sphere is defined around its position, with the radius of the sphere kept uniform across all atoms. The radius is chosen as the minimal radius such that no intersections between atoms occur in the training set. Finally, using potential functions, a T-channel spherical signal is produced for each atom in the molecule as shown in the figure below. A summary of their results is shown in Table 3 along with some of the spherical CNN architecture details. It shows the RMSE obtained by the different methods. The results from this final experiment also seem to be promising, as the network the authors present achieves the second best score. They also note that the cost of the first-place method grows exponentially with the number of atoms per molecule, so it is unlikely to scale well.<br /> <br /> [[File:paper26-tab3.png|center]]<br /> <br /> [[File:paper26-f6.png|center]]<br /> <br /> = Conclusions =<br /> This paper presents a novel architecture called Spherical CNNs and introduces a trainable representation for spherical signals that is rotationally equivariant by design. The paper defines &lt;math&gt;\small S^2&lt;/math&gt; and SO(3) cross-correlations, develops the theory behind their rotational equivariance for continuous functions, and demonstrates that the equivariance also holds approximately in the discrete case. An efficient GFFT-based algorithm was implemented and evaluated on two very different datasets with close to state-of-the-art results, demonstrating that there are practical applications of Spherical CNNs.<br /> <br /> For future work the authors believe that improvements can be obtained by generalizing the algorithms to the SE(3) group (SE(3) simply adds translations in 3D space to the SO(3) group). The authors also briefly mention their excitement for applying Spherical CNNs to omnidirectional vision such as in drones and autonomous cars.
They state that there is very little publicly available omnidirectional image data which could be why they did not conduct any experiments in this area.<br /> <br /> = Commentary =<br /> The reviews on Spherical CNNs are very positive and it is ranked in the top 1% of papers submitted to ICLR 2018. Positive points are the novelty of the architecture, the wide variety of experiments performed, and the writing. One critique of the original submission is that the related works section only lists, instead of describing, previous methods and that a description of the methods would have provided more clarity. The authors have since expanded the section however I found that it is still limited which the authors attribute to length limitations. Another critique is that the evaluation does not provide enough depth. For example, it would have been great to see an example of omnidirectional vision for spherical networks. However, this is to be expected as it is just the introduction of spherical CNNs and more work is sure to come.<br /> <br /> = Source Code =<br /> Source code is available at:<br /> https://github.com/jonas-koehler/s2cnn<br /> <br /> = Sources =<br /> * T. Cohen et al. Spherical CNNs, 2018.<br /> * J. Feldman. Haar Measure. http://www.math.ubc.ca/~feldman/m606/haar.pdf<br /> * P. Kostelec, D. Rockmore. FFTs on the Rotation Group, 2008.</div> Jssambee http://wiki.math.uwaterloo.ca/statwiki/index.php?title=Spherical_CNNs&diff=35276 Spherical CNNs 2018-03-22T20:25:53Z <p>Jssambee: /* Introduction */</p> <hr /> <div>= Introduction =<br /> Convolutional Neural Networks (CNNs), or network architectures involving CNNs, are the current state of the art for learning 2D image processing tasks such as semantic segmentation and object detection. CNNs work well in large part due to the property of being translationally equivariant. 
This property allows a network trained to detect a certain type of object to still detect the object even if it is translated to another position in the image. However, this does not correspond well to spherical signals since projecting a spherical signal onto a plane will result in distortions, as demonstrated in Figure 1. There are many different types of spherical projections onto a 2D plane, as most people know from the various types of world maps, none of which provide all the necessary properties for rotation-invariant learning. Applications where spherical CNNs can be applied include omnidirectional vision for robots, molecular regression problems, and weather/climate modelling.<br /> <br /> [[File:paper26-fig1.png|center]]<br /> <br /> The implementation of a spherical CNN is challenging mainly because no perfectly symmetrical grids for the sphere exists which makes it difficult to define the rotation of a spherical filter by one pixel and the computational efficiency of the system.<br /> <br /> The main contributions of this paper are the following:<br /> # The theory of spherical CNNs.<br /> # The first automatically differentiable implementation of the generalized Fourier transform for &lt;math&gt;S^2&lt;/math&gt; and SO(3). The provided PyTorch code by the authors is easy to use, fast, and memory efficient.<br /> # The first empirical support for the utility of spherical CNNs for rotation-invariant learning problems.<br /> <br /> = Notation =<br /> Below are listed several important terms:<br /> * '''Unit Sphere''' &lt;math&gt;S^2&lt;/math&gt; is defined as a sphere where all of its points are distance of 1 from the origin. The unit sphere can be parameterized by the spherical coordinates &lt;math&gt;\alpha ∈ [0, 2π]&lt;/math&gt; and &lt;math&gt;β ∈ [0, π]&lt;/math&gt;. 
This is a two-dimensional manifold with respect to &lt;math&gt;\alpha&lt;/math&gt; and &lt;math&gt;β&lt;/math&gt;.<br /> * '''&lt;math&gt;S^2&lt;/math&gt; Sphere''' The three dimensional surface from a 3D sphere<br /> * '''Spherical Signals''' In the paper spherical images and filters are modeled as continuous functions &lt;math&gt;f : s^2 → \mathbb{R}^K&lt;/math&gt;. K is the number of channels. Such as how RGB images have 3 channels a spherical signal can have numerous channels describing the data. Examples of channels which were used can be found in the experiments section.<br /> * '''Rotations - SO(3)''' The group of 3D rotations on an &lt;math&gt;S^2&lt;/math&gt; sphere. Sometimes called the &quot;special orthogonal group&quot;. In this paper the ZYZ-Euler parameterization is used to represent SO(3) rotations with &lt;math&gt;\alpha, \beta&lt;/math&gt;, and &lt;math&gt;\gamma&lt;/math&gt;. Any rotation can be broken down into first a rotation (&lt;math&gt;\alpha&lt;/math&gt;) about the Z-axis, then a rotation (&lt;math&gt;\beta&lt;/math&gt;) about the new Y-axis (Y'), followed by a rotation (&lt;math&gt;\gamma&lt;/math&gt;) about the new Z axis (Z&quot;). [In the rest of this paper, to integrate functions on SO(3), the authors use a rotationally invariant probability measure on the Borel subsets of SO(3). This measure is an example of a Haar measure. Haar measures generalize the idea of rotationally invariant probability measures to general topological groups. For more on Haar measures, see (Feldman 2002) ]<br /> <br /> = Related Work =<br /> The related work presented in this paper is very brief, in large part due to the novelty of spherical CNNs and the length of the rest of the paper. The authors enumerate numerous papers which attempt to exploit larger groups of symmetries such as the translational symmetries of CNNs but do not go into specific details for any of these attempts. 
They do state that all the previous works are limited to discrete groups with the exception of SO(2)-steerable networks.<br /> The authors also mention that previous works exist that analyze spherical images but that these do not have an equivariant architecture. They claim that Spherical CNNs are &quot;the first to achieve equivariance to a continuous, non-commutative group (SO(3))&quot;. They also claim to be the first to use the generalized Fourier transform for speed effective performance of group correlation.<br /> <br /> = Correlations on the Sphere and Rotation Group =<br /> Spherical correlation is like planar correlation except instead of translation, there is rotation. The definitions for each are provided as follows:<br /> <br /> '''Planar correlation''' The value of the output feature map at translation &lt;math&gt;\small x ∈ Z^2&lt;/math&gt; is computed as an inner product between the input feature map and a filter, shifted by &lt;math&gt;\small x&lt;/math&gt;.<br /> <br /> '''Spherical correlation''' The value of the output feature map evaluated at rotation &lt;math&gt;\small R ∈ SO(3)&lt;/math&gt; is computed as an inner product between the input feature map and a filter, rotated by &lt;math&gt;\small R&lt;/math&gt;.<br /> <br /> '''Rotation of Spherical Signals''' The paper introduces the rotation operator &lt;math&gt;L_R&lt;/math&gt;. The rotation operator simply rotates a function (which allows us to rotate the the spherical filters) by &lt;math&gt;R^{-1}&lt;/math&gt;. 
With this definition we have the property that &lt;math&gt;L_{RR'} = L_R L_{R'}&lt;/math&gt;.<br /> <br /> '''Inner Products''' The inner product of spherical signals is simply the integral summation on the vector space over the entire sphere.<br /> <br /> &lt;math&gt;\langle\psi , f \rangle = \int_{S^2} \sum_{k=1}^K \psi_k (x)f_k (x)dx&lt;/math&gt;<br /> <br /> &lt;math&gt;dx&lt;/math&gt; here is SO(3) rotation invariant and is equivalent to &lt;math&gt;d \alpha sin(\beta) d \beta / 4 \pi &lt;/math&gt; in spherical coordinates. This comes from the ZYZ-Euler paramaterization where any rotation can be broken down into first a rotation about the Z-axis, then a rotation about the new Y-axis (Y'), followed by a rotation about the new Z axis (Z&quot;). More details on this are given in Appendix A in the paper.<br /> <br /> By this definition, the invariance of the inner product is then guaranteed for any rotation &lt;math&gt;R ∈ SO(3)&lt;/math&gt;. In other words, when subjected to rotations, the volume under a spherical heightmap does not change. The following equations show that &lt;math&gt;L_R&lt;/math&gt; has a distinct adjoint (&lt;math&gt;L_{R^{-1}}&lt;/math&gt;) and that &lt;math&gt;L_R&lt;/math&gt; is unitary and thus preserves orthogonality and distances.<br /> <br /> &lt;math&gt;\langle L_R \psi \,, f \rangle = \int_{S^2} \sum_{k=1}^K \psi_k (R^{-1} x)f_k (x)dx&lt;/math&gt;<br /> <br /> ::::&lt;math&gt;= \int_{S^2} \sum_{k=1}^K \psi_k (x)f_k (Rx)dx&lt;/math&gt;<br /> <br /> ::::&lt;math&gt;= \langle \psi , L_{R^{-1}} f \rangle&lt;/math&gt;<br /> <br /> '''Spherical Correlation''' With the above knowledge the definition of spherical correlation of two signals &lt;math&gt;f&lt;/math&gt; and &lt;math&gt;\psi&lt;/math&gt; is:<br /> <br /> &lt;math&gt;[\psi \star f](R) = \langle L_R \psi \,, f \rangle = \int_{S^2} \sum_{k=1}^K \psi_k (R^{-1} x)f_k (x)dx&lt;/math&gt;<br /> <br /> The output of the above equation is a function on SO(3). 
This can be thought of as for each rotation combination of &lt;math&gt;\alpha , \beta , \gamma &lt;/math&gt; there is a different volume under the correlation. The authors make a point of noting that previous work by Driscoll and Healey only ensures circular symmetries about the Z axis and their new formulation ensures symmetry about any rotation.<br /> <br /> '''Rotation of SO(3) Signals''' The first layer of Spherical CNNs take a function on the sphere (&lt;math&gt;S^2&lt;/math&gt;) and output a function on SO(3). Therefore, if a Spherical CNN with more than one layer is going to be built there needs to be a way to find the correlation between two signals on SO(3). The authors then generalize the rotation operator (&lt;math&gt;L_R&lt;/math&gt;) to encompass acting on signals from SO(3). This new definition of &lt;math&gt;L_R&lt;/math&gt; is as follows: (where &lt;math&gt;R^{-1}Q&lt;/math&gt; is a composition of rotations, i.e. multiplication of rotation matrices)<br /> <br /> &lt;math&gt;[L_Rf](Q)=f(R^{-1} Q)&lt;/math&gt;<br /> <br /> '''Rotation Group Correlation''' The correlation of two signals (&lt;math&gt;f,\psi&lt;/math&gt;) on SO(3) with K channels is defined as the following:<br /> <br /> &lt;math&gt;[\psi \star f](R) = \langle L_R \psi , f \rangle = \int_{SO(3)} \sum_{k=1}^K \psi_k (R^{-1} Q)f_k (Q)dQ&lt;/math&gt;<br /> <br /> where dQ represents the ZYZ-Euler angles &lt;math&gt;d \alpha sin(\beta) d \beta d \gamma / 8 \pi^2 &lt;/math&gt;. A complete derivation of this can be found in Appendix A.<br /> <br /> '''Equivariance''' The equivariance for the rotation group correlation is similarly demonstrated. 
A layer &lt;math&gt;\Phi&lt;/math&gt; is equivariant if, for some operator &lt;math&gt;T_R&lt;/math&gt;, &lt;math&gt;\Phi \circ L_R = T_R \circ \Phi&lt;/math&gt;, and: <br /> <br /> &lt;math&gt;[\psi \star [L_Qf]](R) = \langle L_R \psi , L_Qf \rangle = \langle L_{Q^{-1} R} \psi , f \rangle = [\psi \star f](Q^{-1}R) = [L_Q[\psi \star f]](R) &lt;/math&gt;.<br /> <br /> = Implementation with GFFT =<br /> The authors leverage the Generalized Fourier Transform (GFT) and its fast implementation, the Generalized Fast Fourier Transform (GFFT), to compute the correlations outlined in the previous section. The Fast Fourier Transform (FFT) can compute correlations and convolutions efficiently by means of the Fourier theorem. The Fourier theorem states that a continuous periodic function can be expressed as a sum of sine and cosine terms, whose weights are called the Fourier coefficients. The Fourier transform can be generalized to &lt;math&gt;S^2&lt;/math&gt; and SO(3); this generalization is called the GFT. The GFT is a linear projection of a function onto orthogonal basis functions. The basis functions are a set of irreducible unitary representations for a group (such as for &lt;math&gt;S^2&lt;/math&gt; or SO(3)). For &lt;math&gt;S^2&lt;/math&gt; the basis functions are the spherical harmonics &lt;math&gt;Y_m^l(x)&lt;/math&gt;. For SO(3) these basis functions are called the Wigner D-functions &lt;math&gt;D_{mn}^l(R)&lt;/math&gt;. For both sets of functions the indices are restricted to &lt;math&gt;l\geq0&lt;/math&gt; and &lt;math&gt;-l \leq m,n \leq l&lt;/math&gt;. The Wigner D-functions are orthogonal, so the Fourier coefficients can be computed by inner products with the Wigner D-functions (see Appendix C for the complete proof). The Wigner D-functions are also complete, which means that any (well-behaved) function on SO(3) can be expressed as a linear combination of the Wigner D-functions. 
The GFT of a function on SO(3) is thus:<br /> <br /> &lt;math&gt;\hat{f}^l = \int_X f(x) D^l(x)dx&lt;/math&gt;<br /> <br /> where &lt;math&gt;\hat{f}&lt;/math&gt; represents the Fourier coefficients. For &lt;math&gt;S^2&lt;/math&gt; we have the same equation but with the basis functions &lt;math&gt;Y^l&lt;/math&gt;.<br /> <br /> The inverse SO(3) Fourier transform is:<br /> <br /> &lt;math&gt;f(R)=[\mathcal{F}^{-1} \hat{f}](R) = \sum_{l=0}^b (2l + 1) \sum_{m=-l}^l \sum_{n=-l}^l \hat{f}_{mn}^l D_{mn}^l(R) &lt;/math&gt;<br /> <br /> The bandwidth &lt;math&gt;b&lt;/math&gt; represents the maximum frequency and is related to the resolution of the spatial grid; the authors refer readers to Kostelec and Rockmore for further details.<br /> <br /> The authors give proofs (Appendix D) that the SO(3) correlation satisfies the Fourier theorem and that the &lt;math&gt;S^2&lt;/math&gt; correlation of spherical signals can be computed from the outer products of their &lt;math&gt;S^2&lt;/math&gt;-FTs (shown in Figure 2).<br /> <br /> [[File:paper26-fig2.png|center]]<br /> <br /> The GFFT algorithm details are taken from Kostelec and Rockmore. The authors claim to have the first automatically differentiable implementation of the GFT for &lt;math&gt;S^2&lt;/math&gt; and SO(3). They do not provide any run-time comparisons for real-time applications (they only mention that the FFT can be computed in &lt;math&gt;O(n \log n)&lt;/math&gt; time) or any comparisons of training times with/without the GFFT. However, they do provide the source code of their implementation at: https://github.com/jonas-koehler/s2cnn.<br /> <br /> = Experiments =<br /> The authors provide several experiments. The first set of experiments is designed to show the numerical stability and accuracy of the outlined methods. 
The second group of experiments demonstrates how the algorithms can be applied to current problem domains.<br /> <br /> ==Equivariance Error==<br /> In this experiment the authors verify experimentally that their equivariance theory holds. Since equivariance was proven only for the continuous case, they had doubts about equivariance in practice due to potential discretization artifacts; if equivariance failed to hold, the weight-sharing scheme would become less effective. The experiment first tests the equivariance of the SO(3) correlation at different resolutions: 500 random rotations and feature maps (with 10 channels) are sampled, and the approximation error &lt;math&gt;\small\Delta = \dfrac{1}{n} \sum_{i=1}^n std(L_{R_i} \Phi(f_i) - \Phi(L_{R_i} f_i))/std(\Phi(f_i))&lt;/math&gt; is calculated.<br /> Note: the authors do not say what the std function is, but it is likely the standard deviation, as 'std' is the standard-deviation command in MATLAB.<br /> &lt;math&gt;\Phi&lt;/math&gt; is a composition of SO(3) correlation layers with randomly initialized filters. The authors expected &lt;math&gt;\Delta&lt;/math&gt; to be zero in the case of perfect equivariance, since, as proven earlier, &lt;math&gt;\small L_{R_i} \Phi(f_i)&lt;/math&gt; and &lt;math&gt;\small \Phi(L_{R_i} f_i)&lt;/math&gt; are equal in the continuous case. The results are shown in Figure 3. <br /> <br /> [[File:paper26-fig3.png|center]]<br /> <br /> &lt;math&gt;\Delta&lt;/math&gt; grows with resolution and the number of layers only when there is no activation function. With ReLU activation the error stays roughly constant, slightly above zero, across resolutions. 
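To make the error metric concrete, here is a toy computation of &lt;math&gt;\Delta&lt;/math&gt; (a hypothetical sketch, not the authors' code): std is interpreted as the standard deviation, circular shifts on a 1-D periodic grid stand in for the rotations &lt;math&gt;L_{R_i}&lt;/math&gt;, and &lt;math&gt;\Phi&lt;/math&gt; is a pointwise ReLU, which commutes exactly with shifts, so &lt;math&gt;\Delta&lt;/math&gt; comes out as zero:

```python
import numpy as np

rng = np.random.default_rng(0)

def delta(phi, feats, rots):
    """Equivariance error: mean over samples of
    std(L_R phi(f) - phi(L_R f)) / std(phi(f))."""
    errs = [np.std(L_R(phi(f)) - phi(L_R(f))) / np.std(phi(f))
            for f, L_R in zip(feats, rots)]
    return np.mean(errs)

# Toy stand-in for the paper's setup: signals on a 1-D periodic grid,
# "rotations" are circular shifts, phi is a pointwise ReLU.
feats = [rng.standard_normal(64) for _ in range(10)]
shifts = [int(rng.integers(64)) for _ in range(10)]
rots = [lambda g, s=s: np.roll(g, s) for s in shifts]
relu = lambda g: np.maximum(g, 0.0)

# Pointwise nonlinearities commute exactly with shifts, so Delta is 0 here;
# in the paper, discretized rotations on S^2/SO(3) make Delta small but nonzero.
print(delta(relu, feats, rots))  # 0.0
```

In the paper's setting the feature-map rotation is only exact for bandlimited functions, which is why the measured &lt;math&gt;\Delta&lt;/math&gt; is small but not exactly zero.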
The authors therefore attribute the remaining error to the rotation of the feature maps, which is exact only for bandlimited functions.<br /> <br /> ==MNIST Data==<br /> The MNIST experiment was created by projecting MNIST digits onto a sphere using stereographic projection, producing the images seen in Figure 4.<br /> <br /> [[File:paper26-fig4.png|center]]<br /> <br /> The authors created two datasets, one with the projected digits and the other with the same projected digits subjected to a random rotation. The spherical CNN architecture used was &lt;math&gt;\small S^2&lt;/math&gt;conv-ReLU-SO(3)conv-ReLU-FC-softmax, with bandwidths of 30, 10, 6 and 20, 40, 10 channels for each layer respectively. This model was compared to a baseline CNN with layers conv-ReLU-conv-ReLU-FC-softmax with 5x5 filters, 32, 64, 10 channels and a stride of 3. This gives approximately 68K parameters for the baseline and 58K parameters for the spherical CNN. Results can be seen in Table 1. It is clear from the results that the spherical CNN architecture made the network rotationally invariant: performance on the rotated set is almost identical to the non-rotated set, even when trained on the non-rotated set and tested on the rotated set. Compare this to the non-spherical architecture, which becomes unusable when the digits are rotated.<br /> <br /> ==SHREC17==<br /> The SHREC dataset contains 3D models from the ShapeNet dataset which are classified into categories. It consists of a regularly aligned dataset and a rotated dataset. The models from the SHREC17 dataset were projected onto a sphere by means of raycasting. 
Different properties of the objects obtained from the raycast of the original model and the convex hull of the model make up the different channels which are input into the spherical CNN.<br /> <br /> <br /> [[File:paper26-fig5.png|center]]<br /> <br /> <br /> The network architecture used is an initial &lt;math&gt;\small S^2&lt;/math&gt;conv-BN-ReLU block which is followed by two SO(3)conv-BN-ReLU blocks. The output is then fed into a MaxPool-BN block then a linear layer to the output for final classification. The architecture for this experiment has ~1.4M parameters, far exceeding the scale of the spherical CNNs in the other experiments.<br /> <br /> This architecture achieves state of the art results on the SHREC17 tasks. The model places 2nd or 3rd in all categories but was not submitted as the SHREC17 task is closed. Table 2 shows the comparison of results with the top 3 submissions in each category. In the table, P@N stands for precision, R@N stands for recall, F1@N stands for F-score, mAP stands for mean average precision, and NDCG stands for normalized discounted cumulative gain in relevance based on whether the category and subcategory labels are predicted correctly. The authors claim the results show empirical proof of the usefulness of spherical CNNs. They elaborate that this is largely due to the fact that most architectures on the SHREC17 competition are highly specialized whereas their model is fairly general.<br /> <br /> <br /> [[File:paper26-tab2.png|center]]<br /> <br /> ==Molecular Atomization==<br /> In this experiment a spherical CNN is implemented with an architecture resembling that of ResNet. They use the QM7 dataset (Blum et al. 2009) which has the task of predicting atomization energy of molecules. The QM7 dataset is a subset of GDB-13 (database of organic molecules) composed of all molecules up to 23 atoms. The positions and charges given in the dataset are projected onto the sphere using potential functions. This is done as follows. 
First, for each atom, a sphere is defined around its position with the radius of the sphere kept uniform across all atoms. The radius is chosen as the minimal radius so no intersections between atoms occur in the training set. Finally, using potential functions, a T channel spherical signal is produced for each atom in the molecule as shown in the figure below. A summary of their results is shown in Table 3 along with some of the spherical CNN architecture details. It shows the different RMSE obtained from different methods. The results from this final experiment also seem to be promising as the network the authors present achieves the second best score. They also note that the first place method grows exponentially with the number of atoms per molecule so is unlikely to scale well.<br /> <br /> [[File:paper26-tab3.png|center]]<br /> <br /> [[File:paper26-f6.png|center]]<br /> <br /> = Conclusions =<br /> This paper presents a novel architecture called Spherical CNNs. The paper defines &lt;math&gt;\small S^2&lt;/math&gt; and SO(3) cross correlations, shows the theory behind their rotational invariance for continuous functions, and demonstrates that the invariance also applies to the discrete case. An effective GFFT algorithm was implemented and evaluated on two very different datasets with close to state of the art results, demonstrating that there are practical applications to Spherical CNNs.<br /> <br /> For future work the authors believe that improvements can be obtained by generalizing the algorithms to the SE(3) group (SE(3) simply adds translations in 3D space to the SO(3) group). The authors also briefly mention their excitement for applying Spherical CNNs to omnidirectional vision such as in drones and autonomous cars. 
They state that there is very little publicly available omnidirectional image data, which could be why they did not conduct any experiments in this area.<br /> <br /> = Commentary =<br /> The reviews on Spherical CNNs are very positive and it is ranked in the top 1% of papers submitted to ICLR 2018. Positive points are the novelty of the architecture, the wide variety of experiments performed, and the writing. One critique of the original submission is that the related works section only lists, instead of describing, previous methods and that a description of the methods would have provided more clarity. The authors have since expanded the section; however, it remains limited, which they attribute to length restrictions. Another critique is that the evaluation does not provide enough depth. For example, it would have been great to see an example of omnidirectional vision for spherical networks. However, this is to be expected as it is just the introduction of spherical CNNs and more work is sure to come.<br /> <br /> = Source Code =<br /> Source code is available at:<br /> https://github.com/jonas-koehler/s2cnn<br /> <br /> = Sources =<br /> * T. Cohen et al. Spherical CNNs, 2018.<br /> * J. Feldman. Haar Measure. http://www.math.ubc.ca/~feldman/m606/haar.pdf<br /> * P. Kostelec, D. Rockmore. FFTs on the Rotation Group, 2008.</div> Jssambee http://wiki.math.uwaterloo.ca/statwiki/index.php?title=Do_Deep_Neural_Networks_Suffer_from_Crowding&diff=35003 Do Deep Neural Networks Suffer from Crowding 2018-03-21T06:37:22Z <p>Jssambee: </p> <hr /> <div>= Introduction =<br /> Ever since the evolution of Deep Networks, there has been a tremendous amount of research and effort put into making machines capable of recognizing objects the same way as humans do. Humans can recognize objects in a way that is invariant to scale, translation, and clutter. 
Crowding is another visual effect suffered by humans, in which an object that can be recognized in isolation can no longer be recognized when other objects, called flankers, are placed close to it; this is a very common real-life experience. This paper focuses on studying the impact of crowding on Deep Neural Networks (DNNs) by adding clutter to the images and then analyzing which models and settings suffer less from such effects. <br /> <br /> The paper investigates two types of DNNs for crowding: traditional deep convolutional neural networks (DCNNs) and a multi-scale eccentricity-dependent model, an extension of the DCNN inspired by the retina, in which the receptive field size of the convolutional filters grows with increasing distance from the center of the image (the eccentricity); the model is explained below. The authors focus in particular on the dependence of crowding on image factors such as flanker configuration, target-flanker similarity, target eccentricity and premature pooling.<br /> <br /> = Models =<br /> == Deep Convolutional Neural Networks ==<br /> The DCNN is a basic architecture with 3 convolutional layers, spatial 3x3 max-pooling with varying strides and a fully connected layer for classification, as shown in the figure below. <br /> [[File:DCNN.png|800px|center]]<br /> <br /> The network is fed with images resized to 60x60, with minibatches of 128 images, 32 feature channels for all convolutional layers, and convolutional filters of size 5x5 and stride 1.<br /> <br /> As highlighted earlier, the effect of pooling is a main consideration, and hence three different configurations have been investigated: <br /> <br /> 1. '''No total pooling''' Feature map sizes decrease only due to boundary effects, as the 3x3 max pooling has stride 1. The square feature map sizes after each pool layer are 60-54-48-42.<br /> <br /> 2. 
'''Progressive pooling''' 3x3 pooling with a stride of 2 halves the square size of the feature maps, until we pool over what remains in the final layer, getting rid of any spatial information before the fully connected layer. (60-27-11-1).<br /> <br /> 3. '''At end pooling''' Same as no total pooling, but before the fully connected layer, max-pool over the entire feature map. (60-54-48-1).<br /> <br /> ===What is the problem in CNNs?===<br /> CNNs fall short in explaining human perceptual invariance. First, CNNs typically take input at a single uniform resolution. Biological measurements suggest that resolution is not uniform across the human visual field, but rather decays with eccentricity, i.e. distance from the center of focus. Even more importantly, CNNs rely on data augmentation to achieve transformation-invariance, which requires substantial additional processing.<br /> <br /> ==Eccentricity-dependent Model==<br /> In order to take care of scale invariance in the input image, the eccentricity-dependent DNN is utilized. The main intuition behind this architecture is that as eccentricity increases, the receptive fields also increase, and hence the model becomes invariant to changing input scales. In this model the input image is cropped into varying scales (11 crops, each increasing in size by a factor of &lt;math&gt;\sqrt{2}&lt;/math&gt;, which are then resized to 60x60 pixels) and then fed to the network. The model computes an invariant representation of the input by sampling the inverted pyramid at a discrete set of scales with the same number of filters at each scale. Since the same number of filters is used for each scale, the smaller crops will be sampled at a high resolution while the larger crops will be sampled at a low resolution. 
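The inverted-pyramid cropping described above can be sketched as follows (a hypothetical illustration; the exact crop sizes and the interpolation used by the authors may differ, and nearest-neighbour resizing is used here for simplicity):

```python
import numpy as np

def multiscale_crops(img, n_scales=11, out=60):
    """Center crops whose side length grows by sqrt(2) per scale,
    each resized (nearest-neighbour) to out x out pixels."""
    h = img.shape[0]
    smallest = h / (np.sqrt(2) ** (n_scales - 1))  # side of the smallest crop
    crops = []
    for i in range(n_scales):
        side = min(int(round(smallest * np.sqrt(2) ** i)), h)
        a = (h - side) // 2
        crop = img[a:a + side, a:a + side]
        idx = np.arange(out) * side // out
        # Small crops end up sampled at high resolution, large crops at low resolution.
        crops.append(crop[np.ix_(idx, idx)])
    return np.stack(crops)  # (n_scales, out, out): one input channel per scale

img = np.random.rand(480, 480)
print(multiscale_crops(img).shape)  # (11, 60, 60)
```

Stacking the crops scale-wise matches the description of feeding the scales to the network as input channels that share convolutional weights.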
These scales are fed into the network as input channels to the convolutional layers, which share their weights across scale and space.<br /> [[File:EDM.png|2000x450px|center]]<br /> <br /> The architecture of this model is the same as the previous DCNN model, the only change being the extra filters added for each of the scales. The authors perform spatial pooling (the aforementioned ''At end pooling'' is used here) and scale pooling, which reduces the number of scales by taking the maximum value of corresponding locations in the feature maps across multiple scales. It has three configurations: (1) at the beginning, in which all the different scales are pooled together after the first layer, 11-1-1-1-1; (2) progressively, 11-7-5-3-1; and (3) at the end, 11-11-11-11-1, in which all 11 scales are pooled together at the last layer.<br /> <br /> ===Contrast Normalization===<br /> Since we have multiple scales of the input image, we normalize so that the sum of the pixel intensities in each scale lies in the same range [0,1]. These are then divided by a factor proportional to the crop area [[File:sqrtf.png|60px]] where i=1 is the smallest crop.<br /> <br /> =Experiments and Set-Up =<br /> Targets are the set of objects to be recognized and flankers act as clutter with respect to these target objects. The target objects are the even MNIST numbers with translational variance (shifted to different locations of the image along the horizontal axis). Examples of the target and flanker configurations are shown below: <br /> [[File:eximages.png|800px|center]]<br /> <br /> The target and the flanker are referred to as ''a'' and ''x'' respectively, with the four configurations below: (1) No flankers. Only the target object. (a in the plots) (2) One central flanker closer to the center of the image than the target. (xa) (3) One peripheral flanker closer to the boundary of the image than the target. 
(ax) (4) Two flankers spaced equally around the target, both being the same object (xax).<br /> <br /> ==DNNs trained with Target and Flankers==<br /> This is a constant-spacing training setup where identical flankers are placed at a distance of 120 pixels on either side of the target (xax), with the target having translational variance. The tests are evaluated on (i) the DCNN with at-the-end pooling, and (ii) the eccentricity-dependent model with 11-11-11-11-1 scale pooling, at-the-end spatial pooling and contrast normalization. The test data has different flanker configurations as described above.<br /> [[File:result1.png|x450px|center]]<br /> <br /> ===Observations===<br /> * With the same flanker configuration as in training, models are better at recognizing objects in clutter than isolated objects, for all image locations.<br /> <br /> * If the target-flanker spacing is changed, the models perform worse.<br /> <br /> * The eccentricity model is much better at recognizing objects in isolation than the DCNN, because the multi-scale crops divide the image into discrete regions, letting the model learn from image parts as well as the whole image.<br /> <br /> * Only the eccentricity-dependent model is robust to flanker configurations not included in training, when the target is centered.<br /> <br /> ==DNNs trained with Images with the Target in Isolation==<br /> Here the target objects are in isolation and with translational variance, while the test set uses the same flanker configurations as before.<br /> [[File:result2.png|750x400px|center]]<br /> ===DCNN Observations===<br /> * Recognition gets worse as the number of flankers increases.<br /> <br /> * Convolutional networks are capable of being invariant to translations.<br /> <br /> * In the constant target eccentricity setup, where the target is fixed at the center of the image with varying target-flanker spacing, we observe that as the distance between target and flankers increases, 
recognition gets better.<br /> <br /> * Spatial pooling helps in learning invariance.<br /> <br /> * Flankers similar to the target object help recognition, since they do not activate the convolutional filters as strongly.<br /> <br /> * notMNIST flankers lead to more crowding, since they have many more edges and white image pixels, which activate the convolutional layers more.<br /> ===Eccentric Model===<br /> The set-up is the same as explained earlier.<br /> [[File:result3.png|750x400px|center]]<br /> ====Observations====<br /> * If the target is placed at the center and no contrast normalization is done, recognition accuracy is high, since this model concentrates most on the central region of the image.<br /> <br /> * If contrast normalization is done, all the scales contribute an equal amount and hence the eccentricity dependence is removed.<br /> <br /> * Early pooling is harmful since it may discard, too early, information that is useful to the network.<br /> <br /> ==Complex Clutter==<br /> Here the targets are embedded into images (from the Places dataset) and then tests are performed.<br /> [[File:result4.png|750x400px|center]]<br /> <br /> ====Observations====<br /> * Only the eccentricity model without contrast normalization can recognize the target, and only when the target is close to the image center.<br /> =Conclusions=<br /> One might expect that training the network on data similar to the test data would also achieve good results in a general setting, but that is not the case: the model trained with flankers did not give ideal results for the target objects.<br /> *'''Flanker Configuration''': When models are trained with images of objects in isolation, adding flankers harms recognition. Adding two flankers is the same or worse than adding just one, and the smaller the spacing between flanker and target, the more crowding occurs. 
This is because the pooling operation merges nearby responses, such as the target and flankers if they are close.<br /> <br /> *'''Similarity between target and flanker''': Flankers more similar to targets cause more crowding, because of the selectivity property of the learned DNN filters.<br /> <br /> *'''Dependence on target location and contrast normalization''': In DCNNs and eccentricity-dependent models with contrast normalization, recognition accuracy is the same across all eccentricities. 
10.1167/17.10.808.</div> Jssambee http://wiki.math.uwaterloo.ca/statwiki/index.php?title=File:sqrtf.png&diff=35002 File:sqrtf.png 2018-03-21T06:35:38Z <p>Jssambee: </p> <hr /> <div></div> Jssambee http://wiki.math.uwaterloo.ca/statwiki/index.php?title=Do_Deep_Neural_Networks_Suffer_from_Crowding&diff=35001 Do Deep Neural Networks Suffer from Crowding 2018-03-21T06:18:01Z <p>Jssambee: </p> <hr /> <div>= Introduction =<br /> Ever since the evolution of Deep Networks, there has been tremendous amount of research and effort that has been put into making machines capable of recognizing objects the same way as humans do. Humans can recognize objects in a way that is invariant to scale, translation, and clutter. Crowding is another visual effect suffered by humans, in which an object that can be recognized in isolation can no longer be recognized when other objects, called<br /> flankers, are placed close to it and this is a very common real-life experience. This paper focuses on studying the impact of crowding on Deep Neural Networks (DNNs) by adding clutter to the images and then analyzing which models and settings suffer less from such effects. <br /> <br /> The paper investigates two types of DNNs for crowding: traditional deep convolutional neural networks(DCNN) and a multi-scale eccentricity-dependent model which is an extension of the DCNNs and inspired by the retina where the receptive field size of the convolutional filters in the model grows with increasing distance from the center of the image, called the eccentricity and will be explained below. 
The authors focus on the dependence of crowding on image factors, such as flanker configuration, target-flanker similarity, target eccentricity and premature pooling in particular.<br /> <br /> = Models =<br /> == Deep Convolutional Neural Networks ==<br /> The DCNN is a basic architecture with 3 convolutional layers, spatial 3x3 max-pooling with varying strides and a fully connected layer for classification as shown in the below figure. <br /> [[File:DCNN.png|800px|center]]<br /> <br /> The network is fed with images resized to 60x60, with minibatches of 128 images, 32 feature channels for all convolutional layers, and convolutional filters of size 5x5 and stride 1.<br /> <br /> As highlighted earlier, the effect of pooling is into main consideration and hence three different configurations have been investigated as below: <br /> <br /> 1. '''No total pooling''' Feature maps sizes decrease only due to boundary effects, as the 3x3 max pooling has stride 1. The square feature maps sizes after each pool layer are 60-54-48-42.<br /> <br /> 2. '''Progressive pooling''' 3x3 pooling with a stride of 2 halves the square size of the feature maps, until we pool over what remains in the final layer, getting rid of any spatial information before the fully connected layer. (60-27-11-1).<br /> <br /> 3. '''At end pooling''' Same as no total pooling, but before the fully connected layer, max-pool over the entire feature map. (60-54-48-1).<br /> <br /> ===What is the problem in CNNs?===<br /> CNNs fall short in explaining human perceptual invariance. First, CNNs typically take input at a single uniform resolution. Biological measurements suggest that resolution is not uniform across the human visual field, but rather decays with eccentricity, i.e. 
distance from the center of focus Even more importantly, CNNs rely on data augmentation to achieve transformation-invariance and obviously a lot of processing is needed for CNNs.<br /> <br /> ==Eccentricity-dependent Model==<br /> In order to take care of the scale invariance in the input image, the eccentricity dependent DNN is utilized. The main intuition behind this architecture is that as we increase eccentricity, the receptive fields also increase and hence the model will become invariant to changing input scales. In this model the input image is cropped into varying scales(11 crops increasing by a factor of ........... which are then resized to 60x60 pixels) and then fed to the network. The model computes an invariant representation of the input by sampling the inverted pyramid at a discrete set of scales with the same number of filters at each scale. Since the same number of filters are used for each scale, the smaller crops will be sampled at a high resolution while the larger crops will be sampled with a low resolution. These scales are fed into the network as an input channel to the convolutional layers and share the weights across scale and space.<br /> [[File:EDM.png|2000x450px|center]]<br /> <br /> The architecture of this model is the same as the previous DCNN model with the only change being the extra filters added for each of the scales. The authors perform spatial pooling, the aforementioned ''At end pooling'' is used here, and scale pooling which helps in reducing number of scales by taking the maximum value of corresponding locations in the feature maps across multiple scales. 
It has three configurations: (1) at the beginning, in which all the different scales are pooled together after the first layer, 11-1-1-1-1 (2) progressively, 11-7-5-3-1 and (3) at the end, 11-11-11-11-1, in which all 11 scales are pooled together at the last layer.<br /> <br /> ===Contrast Normalization===<br /> Since we have multiple scales of input image, we perform normalization such that the sum of the pixel intensities in each scale is in the same range [0,1] followed by dividing them by a factor proportional to the crop area.<br /> <br /> =Experiments and its Set-Up =<br /> Targets are the set of objects to be recognized and flankers act as clutter with respect to these target objects. The target objects are the even MNIST numbers having translational variance (shifted at different locations of the image along the horizontal axis). Examples of the target and flanker configurations is shown below: <br /> [[File:eximages.png|800px|center]]<br /> <br /> The target and the object are referred to as ''a'' and ''x'' respectively with the below four conifgurations: (1) No flankers. Only the target object. (a in the plots) (2) One central flanker closer to the center of the image than the target. (xa) (3) One peripheral flanker closer to the boundary of the image that the target. (ax) (4) Two flankers spaced equally around the target, being both the same object (xax).<br /> <br /> ==DNNs trained with Target and Flankers==<br /> This is a constant spacing training setup where identical flankers are placed at a distance of 120 pixels either side of the target(xax) with the target having translational variance. THe tests are evaluated on (i) DCNN with at the end pooling, and (ii) eccentricity-dependent model with 11-11-11-11-1 scale pooling, at the end spatial pooling and contrast normalization. 
The test data has different flanker configurations as described above.<br /> [[File:result1.png|x450px|center]]<br /> <br /> ===Observations===<br /> - With the flanker configuration same as the training one, models are better at recognizing objects in clutter rather than isolated objects for all image locations<br /> -If the target-flanker spacing is changed, then models perform worse<br /> -the eccentricity model is much better at recognizing objects in isolation than the DCNN because the multi-scale crops divide the image into discrete regions, letting the model learn from image parts as well as the whole image<br /> -Only the eccentricity-dependent model is robust to different flanker configurations not included in training, when the target is centered.<br /> <br /> ==DNNs trained with Images with the Target in Isolation==<br /> Here the target objects are in isolation and with translational variance while the test-set is the same set of flanker configurations as used before.<br /> [[File:result2.png|750x400px|center]]<br /> ===DCNN Observations===<br /> - The recognition gets worse with the increase in the number of flankers.<br /> - Convolutional networks are capable of being invariant to translations.<br /> - In the constant target eccentricity setup, where target is fixed at the center of the image with varying target-flanker spacing, we observe that as the distance between target and flankers increase, recognition gets better.<br /> - Spatial pooling helps in learning invariance.<br /> -Flankers similar to the target object helps in recognition since they dont activate the convolutional filter more.<br /> - notMNIST data affects leads to more crowding since they have many more edges and white image pixels which activate the convolutional layers more.<br /> ===Eccentric Model===<br /> The set-up is the same as explained earlier.<br /> [[File:result3.png|750x400px|center]]<br /> ===Observations===<br /> - If target is placed at the center and no contrast 
normalization is done, then the recognition accuracy is high, since this model concentrates most on the central region of the image.<br /> - If contrast normalization is done, then all the scales contribute an equal amount, and the eccentricity dependence is removed.<br /> - Early pooling is harmful, since it may discard information early that would be useful to the network.<br /> <br /> ==Complex Clutter==<br /> Here the targets are embedded into images (from the Places dataset) and the tests are then performed.<br /> [[File:result4.png|750x400px|center]]<br /> <br /> ===Observations===<br /> - The eccentricity model without contrast normalization is the only one that can recognize the target, and only when the target is close to the image center.<br /> <br /> =Conclusions=<br /> One might expect that training the network with data similar to the test data would also achieve good results in a general scenario, but that is not the case: the models trained with flankers did not give ideal results for the target objects.<br /> *'''Flanker Configuration''': When models are trained with images of objects in isolation, adding flankers harms recognition. Adding two flankers is the same or worse than adding just one, and the smaller the spacing between flanker and target, the more crowding occurs. This is because the pooling operation merges nearby responses, such as those of the target and flankers if they are close.<br /> <br /> *'''Similarity between target and flanker''': Flankers more similar to targets cause more crowding, because of the selectivity property of the learned DNN filters.<br /> <br /> *'''Dependence on target location and contrast normalization''': In DCNNs and eccentricity-dependent models with contrast normalization, recognition accuracy is the same across all eccentricities. 
In eccentricity-dependent networks without contrast normalization, recognition does not decrease despite the presence of clutter when the target is at the center of the image.<br /> <br /> *'''Effect of pooling''': Adding pooling leads to better recognition accuracy of the models. Yet, in the eccentricity model, pooling across the scales too early in the hierarchy leads to lower accuracy.<br /> <br /> =Critique=<br /> This paper examines the impact of flankers on targets, i.e. how crowding can affect recognition, but it does not propose anything novel in terms of architecture to deal with this type of crowding. The eccentricity-based model does well only when the target is placed at the center of the image, but windowing over the frames instead of taking crops starting from the middle might help.<br /> <br /> =References=<br /> 1) Volokitin A, Roig G, Poggio T: &quot;Do Deep Neural Networks Suffer from Crowding?&quot; Conference on Neural Information Processing Systems (NIPS). 2017<br /> <br /> 2) Francis X. Chen, Gemma Roig, Leyla Isik, Xavier Boix and Tomaso Poggio: &quot;Eccentricity Dependent Deep Neural Networks for Modeling Human Vision&quot; Journal of Vision. 17. 808. 10.1167/17.10.808.</div> Jssambee http://wiki.math.uwaterloo.ca/statwiki/index.php?title=File:result4.png&diff=35000 File:result4.png 2018-03-21T06:15:30Z <p>Jssambee: </p> <hr /> <div></div> Jssambee http://wiki.math.uwaterloo.ca/statwiki/index.php?title=Do_Deep_Neural_Networks_Suffer_from_Crowding&diff=34997 Do Deep Neural Networks Suffer from Crowding 2018-03-21T05:53:42Z <p>Jssambee: </p> <hr /> <div>= Introduction =<br /> Since the advent of deep networks, a tremendous amount of research and effort has been put into making machines capable of recognizing objects the same way humans do. Humans can recognize objects in a way that is invariant to scale, translation, and clutter. 
Crowding is another visual effect suffered by humans, in which an object that can be recognized in isolation can no longer be recognized when other objects, called flankers, are placed close to it; this is a very common real-life experience. This paper focuses on studying the impact of crowding on Deep Neural Networks (DNNs) by adding clutter to the images and then analyzing which models and settings suffer less from such effects. <br /> <br /> The paper investigates two types of DNNs for crowding: traditional deep convolutional neural networks (DCNNs) and a multi-scale eccentricity-dependent model, an extension of the DCNN inspired by the retina, in which the receptive field size of the convolutional filters grows with increasing distance from the center of the image, called the eccentricity; this model is explained below. The authors focus in particular on the dependence of crowding on image factors such as flanker configuration, target-flanker similarity, target eccentricity, and premature pooling.<br /> <br /> = Models =<br /> == Deep Convolutional Neural Networks ==<br /> The DCNN is a basic architecture with 3 convolutional layers, spatial 3x3 max-pooling with varying strides, and a fully connected layer for classification, as shown in the figure below. <br /> [[File:DCNN.png|800px|center]]<br /> <br /> The network is fed with images resized to 60x60, with minibatches of 128 images, 32 feature channels for all convolutional layers, and convolutional filters of size 5x5 and stride 1.<br /> <br /> As highlighted earlier, the effect of pooling is a main consideration, and hence three different configurations have been investigated, as below: <br /> <br /> 1. '''No total pooling''' Feature map sizes decrease only due to boundary effects, as the 3x3 max pooling has stride 1. The square feature map sizes after each pool layer are 60-54-48-42.<br /> <br /> 2. 
'''Progressive pooling''' 3x3 pooling with a stride of 2 halves the square size of the feature maps, until we pool over what remains in the final layer, getting rid of any spatial information before the fully connected layer. (60-27-11-1).<br /> <br /> 3. '''At end pooling''' Same as no total pooling, but before the fully connected layer, max-pool over the entire feature map. (60-54-48-1).<br /> <br /> ===What is the problem in CNNs?===<br /> CNNs fall short in explaining human perceptual invariance. First, CNNs typically take input at a single uniform resolution. Biological measurements suggest that resolution is not uniform across the human visual field, but rather decays with eccentricity, i.e. distance from the center of focus. Even more importantly, CNNs rely on data augmentation to achieve transformation invariance, which requires a lot of additional processing.<br /> <br /> ==Eccentricity-dependent Model==<br /> To handle scale invariance in the input image, the eccentricity-dependent DNN is used. The main intuition behind this architecture is that as eccentricity increases, the receptive fields also increase, and hence the model becomes invariant to changing input scales. In this model the input image is cropped at varying scales (11 crops, increasing by a factor of ..., which are then resized to 60x60 pixels) and then fed to the network. The model computes an invariant representation of the input by sampling the inverted pyramid at a discrete set of scales, with the same number of filters at each scale. Since the same number of filters is used for each scale, the smaller crops will be sampled at a high resolution while the larger crops will be sampled at a low resolution. 
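The crop-and-resize scheme just described can be sketched as follows. Note that the number of crops (11) and the 60x60 output size come from the text, but the growth factor used here is purely an illustrative assumption, since the exact factor is elided above:

```python
# Illustrative sketch of the inverted-pyramid sampling: 11 centered crops
# of increasing size, each resized to the same 60x60 output. The growth
# factor is an assumed value for illustration only.
OUT = 60                      # output resolution of every crop
GROWTH = 1.3                  # hypothetical per-crop size multiplier
crop_sizes = [round(OUT * GROWTH ** i) for i in range(11)]

# Original-image pixels represented by one output pixel: since every crop
# is resized to the same output, larger crops are sampled more coarsely.
pixels_per_output_pixel = [c / OUT for c in crop_sizes]
```

The monotone increase of `pixels_per_output_pixel` is the point: high resolution for the small central crops, low resolution for the large peripheral ones.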
These scales are fed into the network as input channels to the convolutional layers, which share their weights across scale and space.<br /> [[File:EDM.png|2000x450px|center]]<br /> <br /> The architecture of this model is the same as the previous DCNN model, with the only change being the extra filters added for each of the scales. The authors perform spatial pooling (the aforementioned ''at end pooling'' is used here) and scale pooling, which reduces the number of scales by taking the maximum value of corresponding locations in the feature maps across multiple scales. It has three configurations: (1) at the beginning, in which all the different scales are pooled together after the first layer, 11-1-1-1-1, (2) progressively, 11-7-5-3-1, and (3) at the end, 11-11-11-11-1, in which all 11 scales are pooled together at the last layer.<br /> <br /> ===Contrast Normalization===<br /> Since the input image is provided at multiple scales, each scale is normalized so that the sum of its pixel intensities lies in the same range [0,1], and is then divided by a factor proportional to the crop area.<br /> <br /> =Experiments and Set-Up =<br /> Targets are the set of objects to be recognized, and flankers act as clutter with respect to these target objects. The target objects are the even MNIST numbers with translational variance (shifted to different locations of the image along the horizontal axis). Examples of the target and flanker configurations are shown below: <br /> [[File:eximages.png|800px|center]]<br /> <br /> The target and the flanker are referred to as ''a'' and ''x'' respectively, with the four configurations below: (1) No flankers; only the target object. (a in the plots) (2) One central flanker closer to the center of the image than the target. (xa) (3) One peripheral flanker closer to the boundary of the image than the target. 
(ax) (4) Two flankers spaced equally around the target, both being the same object. (xax)<br /> <br /> ==DNNs trained with Target and Flankers==<br /> This is a constant-spacing training setup where identical flankers are placed at a distance of 120 pixels on either side of the target (xax), with the target having translational variance. The tests are evaluated on (i) the DCNN with ''at end'' pooling, and (ii) the eccentricity-dependent model with 11-11-11-11-1 scale pooling, ''at end'' spatial pooling, and contrast normalization. The test data has different flanker configurations as described above.<br /> [[File:result1.png|x450px|center]]<br /> <br /> ===Observations===<br /> - With the same flanker configuration as in training, the models are better at recognizing objects in clutter than isolated objects, for all image locations.<br /> - If the target-flanker spacing is changed, the models perform worse.<br /> - The eccentricity model is much better at recognizing objects in isolation than the DCNN, because the multi-scale crops divide the image into discrete regions, letting the model learn from image parts as well as the whole image.<br /> - Only the eccentricity-dependent model is robust to flanker configurations not included in training, when the target is centered.<br /> <br /> ==DNNs trained with Images with the Target in Isolation==<br /> Here the target objects are presented in isolation, with translational variance, while the test set uses the same flanker configurations as before.<br /> [[File:result2.png|750x400px|center]]<br /> ===DCNN Observations===<br /> - Recognition gets worse as the number of flankers increases.<br /> - Convolutional networks are capable of being invariant to translations.<br /> - In the constant target eccentricity setup, where the target is fixed at the center of the image with varying target-flanker spacing, recognition improves as the distance between target and flankers increases.<br /> - Spatial 
pooling helps in learning invariance.<br /> - Flankers similar to the target object hurt recognition less, since they do not activate the convolutional filters as strongly.<br /> - notMNIST flankers lead to more crowding, since they have many more edges and white pixels, which activate the convolutional layers more.<br /> ===Eccentricity-dependent Model===<br /> [[File:result3.png|750x400px|center]]<br /> <br /> =Conclusions=<br /> One might expect that training the network with data similar to the test data would also achieve good results in a general scenario, but that is not the case: the models trained with flankers did not give ideal results for the target objects.<br /> *'''Flanker Configuration''': When models are trained with images of objects in isolation, adding flankers harms recognition. Adding two flankers is the same or worse than adding just one, and the smaller the spacing between flanker and target, the more crowding occurs. This is because the pooling operation merges nearby responses, such as those of the target and flankers if they are close.<br /> <br /> *'''Similarity between target and flanker''': Flankers more similar to targets cause more crowding, because of the selectivity property of the learned DNN filters.<br /> <br /> *'''Dependence on target location and contrast normalization''': In DCNNs and eccentricity-dependent models with contrast normalization, recognition accuracy is the same across all eccentricities. In eccentricity-dependent networks without contrast normalization, recognition does not decrease despite the presence of clutter when the target is at the center of the image.<br /> <br /> *'''Effect of pooling''': Adding pooling leads to better recognition accuracy of the models. 
Yet, in the eccentricity model, pooling across the scales too early in the hierarchy leads to lower accuracy.<br /> <br /> =Critique=<br /> This paper examines the impact of flankers on targets, i.e. how crowding can affect recognition, but it does not propose anything novel in terms of architecture to deal with this type of crowding. The eccentricity-based model does well only when the target is placed at the center of the image, but windowing over the frames instead of taking crops starting from the middle might help.<br /> <br /> =References=<br /> 1) Volokitin A, Roig G, Poggio T: &quot;Do Deep Neural Networks Suffer from Crowding?&quot; Conference on Neural Information Processing Systems (NIPS). 2017<br /> <br /> 2) Francis X. Chen, Gemma Roig, Leyla Isik, Xavier Boix and Tomaso Poggio: &quot;Eccentricity Dependent Deep Neural Networks for Modeling Human Vision&quot; Journal of Vision. 17. 808. 10.1167/17.10.808.</div> Jssambee http://wiki.math.uwaterloo.ca/statwiki/index.php?title=End-to-End_Differentiable_Adversarial_Imitation_Learning&diff=34866 End-to-End Differentiable Adversarial Imitation Learning 2018-03-20T23:35:40Z <p>Jssambee: /* Introduction */</p> <hr /> <div>= Introduction =<br /> The ability to imitate an expert policy is very beneficial for automating human-demonstrated tasks. Assuming that a sequence of state-action pairs (trajectories) from an expert policy is available, a new policy can be trained that imitates the expert without having access to the original reward signal used by the expert. There are two main approaches to the problem of imitating a policy: Behavioural Cloning (BC) and Inverse Reinforcement Learning (IRL). BC directly learns the conditional distribution of actions over states in a supervised fashion by training on single time-step state-action pairs. The disadvantage of BC is that the training requires large amounts of expert data, which is hard to obtain. 
In addition, an agent trained using BC is unaware of how its actions can affect the future state distribution. The second method, IRL, involves recovering a reward signal under which the expert is uniquely optimal; its main disadvantage is that it is an ill-posed problem.<br /> <br /> To address the problem of imitating an expert policy, techniques based on Generative Adversarial Networks (GANs) have been proposed in recent years. GANs use a discriminator to guide the generative model towards producing patterns like those of the expert. This idea was used by Ho &amp; Ermon (2016) in their work titled Generative Adversarial Imitation Learning (GAIL) to imitate an expert policy in a model-free setup. A model-free setup is one where the agent cannot make predictions about what the next state and reward will be before it takes each action, since the transition function from state A to state B is not learned. The disadvantage of GAIL's model-free approach is that backpropagation requires gradient estimation, which tends to suffer from high variance, resulting in the need for large sample sizes and variance-reduction methods. This paper proposes a model-based method (MGAIL) to address these issues by training a policy using the information propagated from the discriminator to the generator.<br /> <br /> = Background =<br /> == Imitation Learning ==<br /> A common technique for performing imitation learning is to train a policy &lt;math&gt; \pi &lt;/math&gt; that minimizes some loss function &lt;math&gt; l(s, \pi(s)) &lt;/math&gt; with respect to a discounted state distribution encountered by the expert: &lt;math&gt; d_\pi(s) = (1-\gamma)\sum_{t=0}^{\infty}\gamma^t p(s_t) &lt;/math&gt;. This can be obtained using any supervised learning (SL) algorithm, but the policy's prediction affects future state distributions; this violates the independent and identically distributed (i.i.d.) assumption made by most SL algorithms. 
This process is susceptible to compounding errors, since a slight deviation in the learner's behavior can lead to state distributions not encountered by the expert policy. <br /> <br /> This issue was overcome by the Forward Training (FT) algorithm, which trains a non-stationary policy iteratively over time. At each time step a new policy is trained on the state distribution induced by the previously trained policies. This continues until the end of the time horizon, yielding a policy that can mimic the expert policy. The requirement to train a policy at every time step makes the FT algorithm impractical when the time horizon is very large or undefined. This shortcoming is resolved by the Stochastic Mixing Iterative Learning (SMILe) algorithm. SMILe trains a stochastic stationary policy over several iterations under the trajectory distribution induced by the previously trained policy.<br /> <br /> == Generative Adversarial Networks ==<br /> GANs learn a generative model that can fool the discriminator by using a two-player zero-sum game:<br /> <br /> \begin{align} <br /> \underset{G}{\operatorname{argmin}}\; \underset{D\in (0,1)}{\operatorname{argmax}}\; \mathbb{E}_{x\sim p_E}[log(D(x))]\ +\ \mathbb{E}_{z\sim p_z}[log(1 - D(G(z)))]<br /> \end{align}<br /> <br /> In the above equation, &lt;math&gt; p_E &lt;/math&gt; represents the expert distribution and &lt;math&gt; p_z &lt;/math&gt; represents the input noise distribution from which the input to the generator is sampled. The generator produces patterns, and the discriminator judges whether a pattern was generated or came from the expert data. When the discriminator cannot distinguish between the two distributions, the game ends and the generator has learned to mimic the expert. 
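As a concrete illustration of the value of this zero-sum game, the following minimal sketch (illustrative code, not from the paper; `discriminator_value` is a hypothetical helper) evaluates the objective for given discriminator outputs:

```python
import numpy as np

def discriminator_value(d_expert, d_generated):
    """Value of the zero-sum game: E[log D(x)] over expert samples
    plus E[log(1 - D(G(z)))] over generated samples."""
    return np.mean(np.log(d_expert)) + np.mean(np.log(1.0 - d_generated))

# A discriminator that separates the two sets: expert samples near 1,
# generated samples near 0.
good = discriminator_value(np.array([0.9, 0.95]), np.array([0.05, 0.1]))

# A completely fooled discriminator outputs 0.5 everywhere, giving the
# value 2 * log(0.5) -- the equilibrium of the game.
fooled = discriminator_value(np.array([0.5, 0.5]), np.array([0.5, 0.5]))
```

A discriminating D attains a higher value than a fooled one; the generator's goal is to drive the game toward the fooled case.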
GANs rely on basic ideas such as binary classification, and on algorithms such as backpropagation, in order to learn the expert distribution.<br /> <br /> GAIL applies GANs to the task of imitating an expert policy in a model-free approach. GAIL uses an objective function similar to that of GANs, but the expert distribution in GAIL represents the joint distribution over state-action tuples:<br /> <br /> \begin{align} <br /> \underset{\pi}{\operatorname{argmin}}\; \underset{D\in (0,1)}{\operatorname{argmax}}\; \mathbb{E}_{\pi}[log(D(s,a))]\ +\ \mathbb{E}_{\pi_E}[log(1 - D(s,a))] - \lambda H(\pi)<br /> \end{align}<br /> <br /> where &lt;math&gt; H(\pi) \triangleq \mathbb{E}_{\pi}[-log\: \pi(a|s)]&lt;/math&gt; is the entropy.<br /> <br /> This problem cannot be solved using the standard methods described for GANs, because the generator in GAIL represents a stochastic policy. The exact form of the first term in the above equation is given by: &lt;math&gt; \mathbb{E}_{s\sim \rho_\pi(s)}\mathbb{E}_{a\sim \pi(\cdot |s)} [log(D(s,a))] &lt;/math&gt;.<br /> <br /> The two-player game now depends on the parameters &lt;math&gt; \theta &lt;/math&gt; of the stochastic policy, and it is unclear how to differentiate the above equation with respect to &lt;math&gt; \theta &lt;/math&gt;. This problem can be overcome using score-function methods such as REINFORCE to obtain an unbiased gradient estimate:<br /> <br /> \begin{align}<br /> \nabla_\theta\mathbb{E}_{\pi} [log\; D(s,a)] \cong \hat{\mathbb{E}}_{\tau_i}[\nabla_\theta\; log\; \pi_\theta(a|s)Q(s,a)]<br /> \end{align}<br /> <br /> where the weighting term &lt;math&gt; Q(\hat{s},\hat{a}) &lt;/math&gt; is given by:<br /> <br /> \begin{align}<br /> Q(\hat{s},\hat{a}) = \hat{\mathbb{E}}_{\tau_i}[log\; D(s,a) | s_0 = \hat{s}, a_0 = \hat{a}]<br /> \end{align}<br /> <br /> <br /> REINFORCE gradients suffer from high variance, which makes them difficult to work with even after applying variance reduction techniques. 
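To make the variance problem concrete, here is a small numerical sketch (illustrative, not from the paper): a two-action softmax policy, with a fixed per-action value `f` standing in for log D(s,a), comparing the score-function Monte-Carlo estimate against the analytic gradient:

```python
import numpy as np

rng = np.random.default_rng(0)

theta = np.array([0.2, -0.4])      # policy parameters (logits)
f = np.array([1.0, 3.0])           # per-action value, stand-in for log D(s,a)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

p = softmax(theta)

# Analytic gradient of E_{a~pi}[f(a)] w.r.t. theta:
# the softmax Jacobian is diag(p) - p p^T.
analytic = f @ (np.diag(p) - np.outer(p, p))

# REINFORCE / score-function Monte-Carlo estimate:
#   grad ≈ mean over samples of f(a) * grad_theta log pi(a),
# where grad_theta log pi(a) = onehot(a) - p for a softmax policy.
n = 200_000
actions = rng.choice(2, size=n, p=p)
onehot = np.eye(2)[actions]
estimate = np.mean(f[actions][:, None] * (onehot - p), axis=0)
```

Even with 200,000 samples the Monte-Carlo estimate typically matches the analytic gradient only to about two decimal places, which illustrates why variance-reduction methods (or an entirely different gradient path) are desirable.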
In order to better understand the changes required to fool the discriminator, we need access to the gradients of the discriminator network, which can be obtained from the Jacobian of the discriminator. This paper demonstrates the use of a forward model along with the Jacobian of the discriminator to train a policy, without using high-variance gradient estimations.<br /> <br /> = Algorithm =<br /> This section first analyzes the characteristics of the discriminator network, then describes how a forward model can enable policy imitation through GANs. Lastly, the model-based adversarial imitation learning algorithm is presented.<br /> <br /> == The discriminator network ==<br /> The discriminator network is trained to predict the conditional distribution: &lt;math&gt; D(s,a) = p(y|s,a) &lt;/math&gt; where &lt;math&gt; y \in \{\pi_E, \pi\} &lt;/math&gt;.<br /> <br /> The discriminator is trained on an even distribution of expert and generated examples; hence &lt;math&gt; p(\pi) = p(\pi_E) = \frac{1}{2} &lt;/math&gt;. 
Given this, we can rearrange and factor &lt;math&gt; D(s,a) &lt;/math&gt; to obtain:<br /> <br /> \begin{aligned}<br /> D(s,a) &amp;= p(\pi|s,a) \\<br /> &amp; = \frac{p(s,a|\pi)p(\pi)}{p(s,a|\pi)p(\pi) + p(s,a|\pi_E)p(\pi_E)} \\<br /> &amp; = \frac{p(s,a|\pi)}{p(s,a|\pi) + p(s,a|\pi_E)} \\<br /> &amp; = \frac{1}{1 + \frac{p(s,a|\pi_E)}{p(s,a|\pi)}} \\<br /> &amp; = \frac{1}{1 + \frac{p(a|s,\pi_E)}{p(a|s,\pi)} \cdot \frac{p(s|\pi_E)}{p(s|\pi)}} \\<br /> \end{aligned}<br /> <br /> Define &lt;math&gt; \varphi(s,a) &lt;/math&gt; and &lt;math&gt; \psi(s) &lt;/math&gt; to be:<br /> <br /> \begin{aligned}<br /> \varphi(s,a) = \frac{p(a|s,\pi_E)}{p(a|s,\pi)}, \quad \psi(s) = \frac{p(s|\pi_E)}{p(s|\pi)}<br /> \end{aligned}<br /> <br /> to get the final expression for &lt;math&gt; D(s,a) &lt;/math&gt;:<br /> \begin{aligned}<br /> D(s,a) = \frac{1}{1 + \varphi(s,a)\cdot \psi(s)}<br /> \end{aligned}<br /> <br /> &lt;math&gt; \varphi(s,a) &lt;/math&gt; represents a policy likelihood ratio, and &lt;math&gt; \psi(s) &lt;/math&gt; represents a state distribution likelihood ratio. Based on these expressions, the paper states that the discriminator makes its decisions by answering two questions. The first question relates to the state distribution: what is the likelihood of encountering state &lt;math&gt; s &lt;/math&gt; under the distribution induced by &lt;math&gt; \pi_E &lt;/math&gt; vs &lt;math&gt; \pi &lt;/math&gt;? The second question is about behavior: given a state &lt;math&gt; s &lt;/math&gt;, how likely is action &lt;math&gt; a &lt;/math&gt; under &lt;math&gt; \pi_E &lt;/math&gt; vs &lt;math&gt; \pi &lt;/math&gt;? 
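The factorization of D(s,a) derived above is easy to sanity-check numerically; the density values below are arbitrary illustrative numbers, not from the paper:

```python
# Check that D(s,a) = p(s,a|pi) / (p(s,a|pi) + p(s,a|pi_E))
# equals 1 / (1 + phi * psi) for the likelihood ratios defined above.
p_s_pi, p_s_piE = 0.2, 0.5      # state densities p(s|pi), p(s|pi_E)
p_a_pi, p_a_piE = 0.7, 0.1      # action densities p(a|s,pi), p(a|s,pi_E)

p_sa_pi = p_s_pi * p_a_pi       # p(s,a|pi)   = p(s|pi)   * p(a|s,pi)
p_sa_piE = p_s_piE * p_a_piE    # p(s,a|pi_E) = p(s|pi_E) * p(a|s,pi_E)

D_direct = p_sa_pi / (p_sa_pi + p_sa_piE)

phi = p_a_piE / p_a_pi          # policy likelihood ratio
psi = p_s_piE / p_s_pi          # state distribution likelihood ratio
D_factored = 1.0 / (1.0 + phi * psi)
```

Both routes give the same value, which is the content of the derivation: the discriminator's output separates into a behavior term and a state-distribution term.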
The desired change in state is given by &lt;math&gt; \psi_s \equiv \partial \psi / \partial s &lt;/math&gt;; this information can be obtained from the partial derivatives of &lt;math&gt; D(s,a) &lt;/math&gt;:<br /> <br /> \begin{aligned}<br /> \nabla_aD &amp;= - \frac{\varphi_a(s,a)\psi(s)}{(1 + \varphi(s,a)\psi(s))^2} \\<br /> \nabla_sD &amp;= - \frac{\varphi_s(s,a)\psi(s) + \varphi(s,a)\psi_s(s)}{(1 + \varphi(s,a)\psi(s))^2} \\<br /> \end{aligned}<br /> <br /> <br /> == Backpropagating through stochastic units ==<br /> There is interest in training stochastic policies because stochasticity encourages exploration in policy gradient methods. This is a problem for algorithms that build differentiable computation graphs, where the gradients flow from one component to another, since it is unclear how to backpropagate through stochastic units. The following subsections show how to estimate the gradients of continuous and categorical stochastic elements, for continuous and discrete action domains respectively.<br /> <br /> === Continuous Action Distributions ===<br /> In the case of continuous action policies, re-parameterization is used to enable computing the derivatives of stochastic models. Assuming that the stochastic policy has a Gaussian distribution, the policy &lt;math&gt; \pi &lt;/math&gt; can be written as &lt;math&gt; \pi_\theta(a|s) = \mu_\theta(s) + \xi \sigma_\theta(s) &lt;/math&gt;, where &lt;math&gt; \xi \sim N(0,1) &lt;/math&gt;. 
This way, the authors are able to get a Monte-Carlo estimator of the derivative of the expected value of &lt;math&gt; D(s, a) &lt;/math&gt; with respect to &lt;math&gt; \theta &lt;/math&gt;:<br /> <br /> \begin{align}<br /> \nabla_\theta\mathbb{E}_{\pi(a|s)}D(s,a) = \mathbb{E}_{\rho (\xi )}\nabla_a D(s,a) \nabla_\theta \pi_\theta(a|s) \cong \frac{1}{M}\sum_{i=1}^{M} \nabla_a D(s,a) \nabla_\theta \pi_\theta(a|s)\Bigr|_{\substack{\xi=\xi_i}}<br /> \end{align}<br /> <br /> <br /> === Categorical Action Distributions ===<br /> In the case of discrete action domains, the paper uses categorical re-parameterization with Gumbel-Softmax. This method relies on the Gumbel-Max trick, a method for drawing samples from a categorical distribution with class probabilities &lt;math&gt; \pi(a_1|s),\pi(a_2|s),...,\pi(a_N|s) &lt;/math&gt;:<br /> <br /> \begin{align}<br /> a_{argmax} = \underset{i}{argmax}[g_i + log\ \pi(a_i|s)]<br /> \end{align}<br /> <br /> <br /> Gumbel-Softmax provides a differentiable approximation of the samples obtained using the Gumbel-Max trick:<br /> <br /> \begin{align}<br /> a_{softmax} = \frac{exp[\frac{1}{\tau}(g_i + log\ \pi(a_i|s))]}{\sum_{j=1}^{k}exp[\frac{1}{\tau}(g_j + log\ \pi(a_j|s))]}<br /> \end{align}<br /> <br /> <br /> In the above equation, the hyper-parameter &lt;math&gt; \tau &lt;/math&gt; (temperature) trades bias for variance. 
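The two sampling schemes above can be sketched as follows (illustrative numpy code, not from the paper); the Gumbel noise g_i is generated as -log(-log(u)) with u uniform:

```python
import numpy as np

rng = np.random.default_rng(1)

def gumbel_softmax(log_probs, tau, g):
    """Differentiable relaxation: softmax((g + log pi) / tau)."""
    z = (g + log_probs) / tau
    z = z - z.max()                  # numerical stability
    e = np.exp(z)
    return e / e.sum()

log_probs = np.log(np.array([0.2, 0.5, 0.3]))
g = -np.log(-np.log(rng.uniform(size=3)))   # Gumbel(0,1) noise

hard = np.argmax(g + log_probs)             # Gumbel-Max sample
soft = gumbel_softmax(log_probs, tau=0.1, g=g)
```

Because softmax is a monotone transformation, the argmax of the relaxed sample always agrees with the Gumbel-Max sample for the same noise, and lowering the temperature makes the relaxation increasingly one-hot.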
When &lt;math&gt; \tau &lt;/math&gt; gets closer to zero, the softmax operator acts like argmax, resulting in low bias but high variance; vice versa when &lt;math&gt; \tau &lt;/math&gt; is large.<br /> <br /> The authors use &lt;math&gt; a_{softmax} &lt;/math&gt; to interact with the environment; argmax is applied over &lt;math&gt; a_{softmax} &lt;/math&gt; to obtain a single “pure” action, but the continuous approximation is used in the backward pass via the estimation: &lt;math&gt; \nabla_\theta\; a_{argmax} \approx \nabla_\theta\; a_{softmax} &lt;/math&gt;.<br /> <br /> == Backpropagating through a Forward model ==<br /> The above subsections presented the means for extracting the partial derivative &lt;math&gt; \nabla_aD &lt;/math&gt;. The main contribution of this paper is incorporating the use of &lt;math&gt; \nabla_sD &lt;/math&gt;. In a model-free approach the state &lt;math&gt; s &lt;/math&gt; is treated as a fixed input, and therefore &lt;math&gt; \nabla_sD &lt;/math&gt; is discarded. This is illustrated in Figure 1. This work uses a model-based approach, which makes incorporating &lt;math&gt; \nabla_sD &lt;/math&gt; more involved. In the model-based approach, a state &lt;math&gt; s_t &lt;/math&gt; can be written as a function of the previous state-action pair: &lt;math&gt; s_t = f(s_{t-1}, a_{t-1}) &lt;/math&gt;, where &lt;math&gt; f &lt;/math&gt; represents the forward model. 
Using the forward model and the law of total derivatives, we get:<br /> <br /> \begin{align}<br /> \nabla_\theta D(s_t,a_t)\Bigr|_{\substack{s=s_t, a=a_t}} &amp;= \frac{\partial D}{\partial a}\frac{\partial a}{\partial \theta}\Bigr|_{\substack{a=a_t}} + \frac{\partial D}{\partial s}\frac{\partial s}{\partial \theta}\Bigr|_{\substack{s=s_t}} \\<br /> &amp;= \frac{\partial D}{\partial a}\frac{\partial a}{\partial \theta}\Bigr|_{\substack{a=a_t}} + \frac{\partial D}{\partial s}\left (\frac{\partial f}{\partial s}\frac{\partial s}{\partial \theta}\Bigr|_{\substack{s=s_{t-1}}} + \frac{\partial f}{\partial a}\frac{\partial a}{\partial \theta}\Bigr|_{\substack{a=a_{t-1}}} \right )<br /> \end{align}<br /> <br /> <br /> Using this formula, the error regarding deviations of future states &lt;math&gt; (\psi_s) &lt;/math&gt; propagates back in time and influences the actions of policies at earlier times. This is summarized in Figure 2.<br /> <br /> [[File:modelFree_blockDiagram.PNG]]<br /> <br /> Figure 1: Block-diagram of the model-free approach: given a state &lt;math&gt; s &lt;/math&gt;, the policy outputs &lt;math&gt; \mu &lt;/math&gt; which is fed to a stochastic sampling unit. An action &lt;math&gt; a &lt;/math&gt; is sampled and, together with &lt;math&gt; s &lt;/math&gt;, is presented to the discriminator network. In the backward phase, the error message &lt;math&gt; \delta_a &lt;/math&gt; is blocked at the stochastic sampling unit; from there, a high-variance gradient estimation (&lt;math&gt; \delta_{HV} &lt;/math&gt;) is used. Meanwhile, the error message &lt;math&gt; \delta_s &lt;/math&gt; is flushed.<br /> <br /> [[File:modelBased_blockDiagram.PNG|1000px]]<br /> <br /> Figure 2: Block diagram of model-based adversarial imitation learning. This diagram describes the computation graph for training the policy (i.e. G). The discriminator network D is fixed at this stage and is trained separately. 
At time &lt;math&gt; t &lt;/math&gt; of the forward pass, &lt;math&gt; \pi &lt;/math&gt; outputs a distribution over actions: &lt;math&gt; \mu_t = \pi(s_t) &lt;/math&gt;, from which an action &lt;math&gt; a_t &lt;/math&gt; is sampled. For example, in the continuous case, this is done using the re-parametrization trick: &lt;math&gt; a_t = \mu_t + \xi \cdot \sigma &lt;/math&gt;, where &lt;math&gt; \xi \sim N(0,1) &lt;/math&gt;. The next state &lt;math&gt; s_{t+1} = f(s_t, a_t) &lt;/math&gt; is computed using the forward model (which is also trained separately), and the entire process repeats for time &lt;math&gt; t+1 &lt;/math&gt;. In the backward pass, the gradient of &lt;math&gt; \pi &lt;/math&gt; is comprised of (a) the error message &lt;math&gt; \delta_a &lt;/math&gt; (green), which propagates fluently through the differentiable approximation of the sampling process, and (b) the error message &lt;math&gt; \delta_s &lt;/math&gt; (blue) of future time-steps, which propagates back through the differentiable forward model.<br /> <br /> == MGAIL Algorithm ==<br /> Shalev-Shwartz et al. (2016) and Heess et al. (2015) built multi-step computation graphs for describing the familiar policy gradient objective; in this case it is given by:<br /> <br /> \begin{align}<br /> J(\theta) = \mathbb{E}\left [ \sum_{t=0}^{T} \gamma ^t D(s_t,a_t)|\theta\right ]<br /> \end{align}<br /> <br /> <br /> Using the results from Heess et al. 
(2015), this paper demonstrates how to differentiate &lt;math&gt; J(\theta) &lt;/math&gt; over a trajectory of &lt;math&gt;(s,a,s') &lt;/math&gt; transitions:<br /> <br /> \begin{align}<br /> J_s &amp;= \mathbb{E}_{p(a|s)}\mathbb{E}_{p(s'|s,a)}\left [ D_s + D_a \pi_s + \gamma J'_{s'}(f_s + f_a \pi_s) \right] \\<br /> J_\theta &amp;= \mathbb{E}_{p(a|s)}\mathbb{E}_{p(s'|s,a)}\left [ D_a \pi_\theta + \gamma (J'_{s'} f_a \pi_\theta + J'_\theta) \right]<br /> \end{align}<br /> <br /> The policy gradient &lt;math&gt; \nabla_\theta J &lt;/math&gt; is calculated by applying the two equations above recursively for &lt;math&gt; T &lt;/math&gt; iterations. The MGAIL algorithm is presented below.<br /> <br /> [[File:MGAIL_alg.PNG]]<br /> <br /> == Forward Model Structure ==<br /> The stability of the learning process depends on the prediction accuracy of the forward model, but learning an accurate forward model is challenging in itself. The authors propose methods for improving the performance of the forward model based on two aspects of its functionality. First, the forward model should learn to use the action as an operator over the state space. To accomplish this, the actions and states, which are sampled from different distributions, first need to be represented in a shared space. This is done by encoding the state and action with two separate neural networks and combining their outputs to form a single vector. Additionally, multiple previous states are used to predict the next state by representing the environment as an &lt;math&gt; n^{th} &lt;/math&gt; order MDP. A GRU layer is incorporated into the state encoder to enable recurrent connections from previous states. With these modifications, the model is able to achieve better and more stable results compared to the standard forward model based on a feed-forward neural network. 
The comparison is presented in Figure 3.<br /> <br /> [[File:performance_comparison.PNG]]<br /> <br /> Figure 3: Performance comparison between a basic forward model (Blue), and the advanced forward model (Green).<br /> <br /> = Experiments =<br /> The proposed algorithm is evaluated on three discrete control tasks (Cartpole, Mountain-Car, Acrobot), and five continuous control tasks (Hopper, Walker, Half-Cheetah, Ant, and Humanoid), which are modeled by the MuJoCo physics simulator (Todorov et al., 2012). Expert policies are trained using the Trust Region Policy Optimization (TRPO) algorithm (Schulman et al., 2015). Different number of trajectories are used to train the expert for each task, but all trajectories are of length 1000.<br /> The discriminator and generator (policy) networks contains two hidden layers with ReLU non-linearity and are trained using the ADAM optimizer. The total reward received over a period of &lt;math&gt; N &lt;/math&gt; steps using BC, GAIL and MGAIL is presented in Table 1. The proposed algorithm achieved the highest reward for most environments while exhibiting performance comparable to the expert over all of them.<br /> <br /> [[File:mgail_test_results.PNG]]<br /> <br /> Table 1. Policy performance, boldface indicates better results, &lt;math&gt; \pm &lt;/math&gt; represents one standard deviation.<br /> <br /> = Discussion =<br /> This paper presented a model-free algorithm for imitation learning. It demonstrated how a forward model can be used to train policies using the exact gradient of the discriminator network. A downside of this approach is the need to learn a forward model, since this could be difficult in certain domains. Learning the system dynamics directly from raw images is considered as one line of future work. Another future work is to address the violation of the fundamental assumption made by all supervised learning algorithms, which requires the data to be i.i.d. 
This problem arises because the discriminator and forward models are trained in a supervised learning fashion using data sampled from a dynamic distribution.<br /> <br /> = Source =<br /> # Baram, Nir, et al. &quot;End-to-end differentiable adversarial imitation learning.&quot; International Conference on Machine Learning. 2017.<br /> # Ho, Jonathan, and Stefano Ermon. &quot;Generative adversarial imitation learning.&quot; Advances in Neural Information Processing Systems. 2016.<br /> # Shalev-Shwartz, Shai, et al. &quot;Long-term planning by short-term prediction.&quot; arXiv preprint arXiv:1602.01580 (2016).<br /> # Heess, Nicolas, et al. &quot;Learning continuous control policies by stochastic value gradients.&quot; Advances in Neural Information Processing Systems. 2015.<br /> # Schulman, John, et al. &quot;Trust region policy optimization.&quot; International Conference on Machine Learning. 2015.</div> Jssambee http://wiki.math.uwaterloo.ca/statwiki/index.php?title=End-to-End_Differentiable_Adversarial_Imitation_Learning&diff=34804 End-to-End Differentiable Adversarial Imitation Learning 2018-03-20T19:31:11Z <p>Jssambee: /* Introduction */</p> <hr /> <div>= Introduction =<br /> The ability to imitate an expert policy is very beneficial in the case of automating human demonstrated tasks. Assuming that a sequence of state action pairs (trajectories) of an expert policy are available, a new policy can be trained that imitates the expert without having access to the original reward signal used by the expert. There are two main approaches to solve the problem of imitating a policy; they are Behavioural Cloning (BC) and Inverse Reinforcement Learning (IRL). BC directly learns the conditional distribution of actions over states in a supervised fashion by training on single time-step state-action pairs. The disadvantage of BC is that the training requires large amounts of expert data, which is hard to obtain. 
In addition, an agent trained using BC is unaware of how its actions can affect the future state distribution. The second approach, IRL, involves recovering a reward signal under which the expert is uniquely optimal; its main disadvantage is that this is an ill-posed problem.<br /> <br /> To address the problem of imitating an expert policy, techniques based on Generative Adversarial Networks (GANs) have been proposed in recent years. GANs use a discriminator to guide the generative model towards producing patterns like those of the expert. This idea was used by Ho &amp; Ermon (2016) in their work titled Generative Adversarial Imitation Learning (GAIL) to imitate an expert policy in a model-free setup. In a model-free setup the agent cannot predict the next state and reward before taking an action, since the transition function between states is not learned. The disadvantage of GAIL's model-free approach is that backpropagation requires gradient estimation, which tends to suffer from high variance; this results in the need for large sample sizes and variance reduction methods. This paper proposes a model-based method (MGAIL) to address these issues.<br /> <br /> = Background =<br /> == Imitation Learning ==<br /> A common technique for performing imitation learning is to train a policy &lt;math&gt; \pi &lt;/math&gt; that minimizes some loss function &lt;math&gt; l(s, \pi(s)) &lt;/math&gt; with respect to a discounted state distribution encountered by the expert: &lt;math&gt; d_\pi(s) = (1-\gamma)\sum_{t=0}^{\infty}\gamma^t p(s_t) &lt;/math&gt;. This can be done using any supervised learning (SL) algorithm, but the policy's predictions affect future state distributions; this violates the independent and identically distributed (i.i.d.) assumption made by most SL algorithms.
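The supervised imitation setup above can be sketched as follows; the linear policy and synthetic demonstrations are hypothetical, purely to illustrate BC as regression on single time-step state-action pairs:

```python
import numpy as np

# Minimal behavioural-cloning sketch (illustrative, not the paper's setup):
# fit a linear policy a = W s to expert state-action pairs by least squares,
# i.e. plain supervised learning on single time-step (s, a) pairs.
rng = np.random.default_rng(0)

d_s, d_a, n = 4, 2, 500
W_expert = rng.normal(size=(d_a, d_s))      # unknown expert policy (hypothetical)
states = rng.normal(size=(n, d_s))          # states visited by the expert
actions = states @ W_expert.T               # expert actions (noise-free demo)

# Behavioural cloning reduces to ordinary supervised regression on (s, a) pairs
W_bc, *_ = np.linalg.lstsq(states, actions, rcond=None)
W_bc = W_bc.T

print(np.abs(W_bc - W_expert).max())        # cloned policy vs. expert policy
```

On the demonstration distribution the cloned policy matches the expert almost exactly; the compounding-error caveat of the next paragraph concerns states *outside* this distribution.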
This process is susceptible to compounding errors, since a slight deviation in the learner's behavior can lead to state distributions not encountered by the expert policy. <br /> <br /> This issue was overcome through the Forward Training (FT) algorithm, which trains a non-stationary policy iteratively over time. At each time step a new policy is trained on the state distribution induced by the previously trained policies. This continues until the end of the time horizon, yielding a policy that can mimic the expert policy. The requirement to train a policy at every time step makes the FT algorithm impractical when the time horizon is very large or undefined. This shortcoming is resolved by the Stochastic Mixing Iterative Learning (SMILe) algorithm. SMILe trains a stochastic stationary policy over several iterations under the trajectory distribution induced by the previously trained policy.<br /> <br /> == Generative Adversarial Networks ==<br /> GANs learn a generative model that can fool the discriminator by using a two-player zero-sum game:<br /> <br /> \begin{align} <br /> \underset{G}{\operatorname{argmin}}\; \underset{D\in (0,1)}{\operatorname{argmax}}\; \mathbb{E}_{x\sim p_E}[log(D(x))]\ +\ \mathbb{E}_{z\sim p_z}[log(1 - D(G(z)))]<br /> \end{align}<br /> <br /> In the above equation, &lt;math&gt; p_E &lt;/math&gt; represents the expert distribution and &lt;math&gt; p_z &lt;/math&gt; represents the input noise distribution from which the input to the generator is sampled. The generator produces patterns and the discriminator judges whether a pattern was generated or comes from the expert data. When the discriminator cannot distinguish between the two distributions, the game ends and the generator has learned to mimic the expert.
GANs rely on basic ideas such as binary classification and algorithms such as backpropagation in order to learn the expert distribution.<br /> <br /> GAIL applies GANs to the task of imitating an expert policy in a model-free setting. GAIL uses an objective similar to that of GANs, but the expert distribution in GAIL represents the joint distribution over state-action tuples:<br /> <br /> \begin{align} <br /> \underset{\pi}{\operatorname{argmin}}\; \underset{D\in (0,1)}{\operatorname{argmax}}\; \mathbb{E}_{\pi}[log(D(s,a))]\ +\ \mathbb{E}_{\pi_E}[log(1 - D(s,a))] - \lambda H(\pi)<br /> \end{align}<br /> <br /> where &lt;math&gt; H(\pi) \triangleq \mathbb{E}_{\pi}[-log\: \pi(a|s)]&lt;/math&gt; is the entropy.<br /> <br /> This problem cannot be solved using the standard methods described for GANs because the generator in GAIL represents a stochastic policy. The exact form of the first term in the above equation is given by: &lt;math&gt; \mathbb{E}_{s\sim \rho_\pi(s)}\mathbb{E}_{a\sim \pi(\cdot |s)} [log(D(s,a))] &lt;/math&gt;.<br /> <br /> The two-player game now depends on the parameters (&lt;math&gt; \theta &lt;/math&gt;) of the stochastic policy, and it is unclear how to differentiate the above equation with respect to &lt;math&gt; \theta &lt;/math&gt;. This problem can be overcome using score-function estimators such as REINFORCE to obtain an unbiased gradient estimate:<br /> <br /> \begin{align}<br /> \nabla_\theta\mathbb{E}_{\pi} [log\; D(s,a)] \cong \hat{\mathbb{E}}_{\tau_i}[\nabla_\theta\; log\; \pi_\theta(a|s)Q(s,a)]<br /> \end{align}<br /> <br /> where &lt;math&gt; Q(\hat{s},\hat{a}) &lt;/math&gt; is given by:<br /> <br /> \begin{align}<br /> Q(\hat{s},\hat{a}) = \hat{\mathbb{E}}_{\tau_i}[log\; D(s,a) | s_0 = \hat{s}, a_0 = \hat{a}]<br /> \end{align}<br /> <br /> <br /> REINFORCE gradients suffer from high variance, which makes them difficult to work with even after applying variance reduction techniques.
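To make the variance problem concrete, here is a toy score-function (REINFORCE) estimator; the Gaussian policy and the objective f(a) = a&sup2; are hypothetical stand-ins for log D(s,a), not the paper's setup:

```python
import numpy as np

# Toy score-function (REINFORCE) estimator of d/dtheta E_{a~N(theta,1)}[f(a)]
# with f(a) = a^2. The true gradient is 2*theta, since E[a^2] = theta^2 + 1.
rng = np.random.default_rng(0)
theta = 1.5

def reinforce_grad(n):
    a = rng.normal(theta, 1.0, size=n)
    # grad_theta log N(a; theta, 1) = (a - theta); estimator: f(a) * score
    return np.mean(a ** 2 * (a - theta))

# Unbiased, but a small sample gives a noisy estimate of the true value 2*theta
print(reinforce_grad(100), reinforce_grad(1_000_000), 2 * theta)
```

The estimator converges to the true gradient only with very large sample sizes, which is exactly the cost MGAIL tries to avoid.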
In order to better understand the changes required to fool the discriminator, we need access to the gradients of the discriminator network, which can be obtained from its Jacobian. This paper demonstrates the use of a forward model along with the Jacobian of the discriminator to train a policy, without using high-variance gradient estimation.<br /> <br /> = Algorithm =<br /> This section first analyzes the characteristics of the discriminator network, then describes how a forward model can enable policy imitation through GANs. Lastly, the model-based adversarial imitation learning algorithm is presented.<br /> <br /> == The discriminator network ==<br /> The discriminator network is trained to predict the conditional distribution: &lt;math&gt; D(s,a) = p(y|s,a) &lt;/math&gt; where &lt;math&gt; y \in \{\pi, \pi_E\} &lt;/math&gt;.<br /> <br /> The discriminator is trained on an even distribution of expert and generated examples; hence &lt;math&gt; p(\pi) = p(\pi_E) = \frac{1}{2} &lt;/math&gt;.
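This balanced-classification view of the discriminator can be sketched with a toy logistic model; the 1-D Gaussian stand-ins for (s, a) pairs are hypothetical:

```python
import numpy as np

# Sketch: the discriminator is a binary classifier trained on an even mix of
# expert and generated samples, so the priors are p(pi) = p(pi_E) = 1/2.
# A tiny 1-D logistic model stands in for the discriminator network.
rng = np.random.default_rng(0)

x_expert = rng.normal(2.0, 0.5, size=200)          # stand-in for expert pairs
x_gen = rng.normal(-2.0, 0.5, size=200)            # stand-in for generated pairs
x = np.concatenate([x_expert, x_gen])
y = np.concatenate([np.ones(200), np.zeros(200)])  # 1 = expert; balanced classes

w, b = 0.0, 0.0
for _ in range(500):                               # gradient ascent on log-likelihood
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))         # D(x) = p(expert | x)
    w += 0.1 * np.mean((y - p) * x)
    b += 0.1 * np.mean(y - p)

p = 1.0 / (1.0 + np.exp(-(w * x + b)))
accuracy = np.mean((p > 0.5) == (y == 1))
print(accuracy)
```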
Given this, we can rearrange and factor &lt;math&gt; D(s,a) &lt;/math&gt; to obtain:<br /> <br /> \begin{aligned}<br /> D(s,a) &amp;= p(\pi|s,a) \\<br /> &amp; = \frac{p(s,a|\pi)p(\pi)}{p(s,a|\pi)p(\pi) + p(s,a|\pi_E)p(\pi_E)} \\<br /> &amp; = \frac{p(s,a|\pi)}{p(s,a|\pi) + p(s,a|\pi_E)} \\<br /> &amp; = \frac{1}{1 + \frac{p(s,a|\pi_E)}{p(s,a|\pi)}} \\<br /> &amp; = \frac{1}{1 + \frac{p(a|s,\pi_E)}{p(a|s,\pi)} \cdot \frac{p(s|\pi_E)}{p(s|\pi)}} \\<br /> \end{aligned}<br /> <br /> Define &lt;math&gt; \varphi(s,a) &lt;/math&gt; and &lt;math&gt; \psi(s) &lt;/math&gt; to be:<br /> <br /> \begin{aligned}<br /> \varphi(s,a) = \frac{p(a|s,\pi_E)}{p(a|s,\pi)}, \quad \psi(s) = \frac{p(s|\pi_E)}{p(s|\pi)}<br /> \end{aligned}<br /> <br /> to get the final expression for &lt;math&gt; D(s,a) &lt;/math&gt;:<br /> \begin{aligned}<br /> D(s,a) = \frac{1}{1 + \varphi(s,a)\cdot \psi(s)}<br /> \end{aligned}<br /> <br /> &lt;math&gt; \varphi(s,a) &lt;/math&gt; represents a policy likelihood ratio, and &lt;math&gt; \psi(s) &lt;/math&gt; represents a state-distribution likelihood ratio. Based on these expressions, the paper states that the discriminator makes its decisions by answering two questions. The first question relates to the state distribution: what is the likelihood of encountering state &lt;math&gt; s &lt;/math&gt; under the distribution induced by &lt;math&gt; \pi_E &lt;/math&gt; vs. &lt;math&gt; \pi &lt;/math&gt;? The second question is about behavior: given a state &lt;math&gt; s &lt;/math&gt;, how likely is action &lt;math&gt; a &lt;/math&gt; under &lt;math&gt; \pi_E &lt;/math&gt; vs. &lt;math&gt; \pi &lt;/math&gt;?
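The factorization can be verified numerically; the Gaussian stand-ins for the state and action distributions below are hypothetical:

```python
import math

# Numeric check of the factorization D(s,a) = 1 / (1 + phi(s,a) * psi(s))
# using simple Gaussian stand-ins for the two state/action distributions.
def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

s, a = 0.3, -0.7
# p(s | pi), p(s | pi_E) and the conditionals p(a | s, pi), p(a | s, pi_E)
p_s_pi, p_s_piE = normal_pdf(s, 0.0, 1.0), normal_pdf(s, 0.5, 1.0)
p_a_pi, p_a_piE = normal_pdf(a, 0.1 * s, 1.0), normal_pdf(a, -0.2 * s, 1.0)

# Direct Bayes rule with p(pi) = p(pi_E) = 1/2
joint_pi, joint_piE = p_s_pi * p_a_pi, p_s_piE * p_a_piE
D_bayes = joint_pi / (joint_pi + joint_piE)

# Likelihood-ratio form
phi = p_a_piE / p_a_pi          # policy likelihood ratio
psi = p_s_piE / p_s_pi          # state-distribution likelihood ratio
D_ratio = 1.0 / (1.0 + phi * psi)

print(D_bayes, D_ratio)         # the two forms agree
```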
The desired change in state is given by &lt;math&gt; \psi_s \equiv \partial \psi / \partial s &lt;/math&gt;; this information can be obtained from the partial derivatives of &lt;math&gt; D(s,a) &lt;/math&gt;:<br /> <br /> \begin{aligned}<br /> \nabla_aD &amp;= - \frac{\varphi_a(s,a)\psi(s)}{(1 + \varphi(s,a)\psi(s))^2} \\<br /> \nabla_sD &amp;= - \frac{\varphi_s(s,a)\psi(s) + \varphi(s,a)\psi_s(s)}{(1 + \varphi(s,a)\psi(s))^2} \\<br /> \end{aligned}<br /> <br /> <br /> == Backpropagating through stochastic units ==<br /> There is interest in training stochastic policies because stochasticity encourages exploration for policy gradient methods. This is a problem for algorithms that build differentiable computation graphs, where gradients flow from one component to another, since it is unclear how to backpropagate through stochastic units. The following subsections show how to estimate the gradients of continuous and categorical stochastic elements for continuous and discrete action domains, respectively.<br /> <br /> === Continuous Action Distributions ===<br /> In the case of continuous action policies, re-parameterization is used to enable computing the derivatives of stochastic models. Assuming that the stochastic policy has a Gaussian distribution, the policy &lt;math&gt; \pi &lt;/math&gt; can be written as &lt;math&gt; \pi_\theta(a|s) = \mu_\theta(s) + \xi \sigma_\theta(s) &lt;/math&gt;, where &lt;math&gt; \xi \sim N(0,1) &lt;/math&gt;.
This way, the authors are able to get a Monte-Carlo estimator of the derivative of the expected value of &lt;math&gt; D(s, a) &lt;/math&gt; with respect to &lt;math&gt; \theta &lt;/math&gt;:<br /> <br /> \begin{align}<br /> \nabla_\theta\mathbb{E}_{\pi(a|s)}D(s,a) = \mathbb{E}_{\rho (\xi )}\nabla_a D(a,s) \nabla_\theta \pi_\theta(a|s) \cong \frac{1}{M}\sum_{i=1}^{M} \nabla_a D(s,a) \nabla_\theta \pi_\theta(a|s)\Bigr|_{\substack{\xi=\xi_i}}<br /> \end{align}<br /> <br /> <br /> === Categorical Action Distributions ===<br /> In the case of discrete action domains, the paper uses categorical re-parameterization with Gumbel-Softmax. This method relies on the Gumbel-Max trick, a method for drawing samples from a categorical distribution with class probabilities &lt;math&gt; \pi(a_1|s),\pi(a_2|s),...,\pi(a_N|s) &lt;/math&gt;:<br /> <br /> \begin{align}<br /> a_{argmax} = \underset{i}{argmax}[g_i + log\ \pi(a_i|s)]<br /> \end{align}<br /> <br /> <br /> Gumbel-Softmax provides a differentiable approximation of the samples obtained using the Gumbel-Max trick:<br /> <br /> \begin{align}<br /> a_{softmax} = \frac{exp[\frac{1}{\tau}(g_i + log\ \pi(a_i|s))]}{\sum_{j=1}^{k}exp[\frac{1}{\tau}(g_j + log\ \pi(a_j|s))]}<br /> \end{align}<br /> <br /> <br /> In the above equation, the hyper-parameter &lt;math&gt; \tau &lt;/math&gt; (temperature) trades bias for variance.
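A small sketch of the Gumbel-Max trick and the temperature behavior of its softmax relaxation (the class probabilities and logits are illustrative):

```python
import numpy as np

# Gumbel-Max sketch: with Gumbel noise g_i = -log(-log(u_i)), u_i ~ Uniform(0,1),
# argmax_i [g_i + log pi_i] draws exact samples from the categorical pi.
rng = np.random.default_rng(0)
pi = np.array([0.2, 0.5, 0.3])                 # example class probabilities

g = -np.log(-np.log(rng.uniform(size=(100_000, 3))))
draws = np.argmax(g + np.log(pi), axis=1)      # Gumbel-Max sampling
freq = np.bincount(draws, minlength=3) / draws.size

def softmax_with_tau(logits, tau):
    """Softmax at temperature tau; in Gumbel-Softmax the logits are the
    Gumbel-perturbed log-probabilities g_i + log pi_i."""
    z = np.asarray(logits) / tau
    e = np.exp(z - z.max())
    return e / e.sum()

# Small tau: nearly one-hot (argmax-like, low bias, high variance);
# large tau: smoothed toward uniform (high bias, low variance).
y_hard = softmax_with_tau([1.0, 0.2, 0.5], tau=0.01)
y_soft = softmax_with_tau([1.0, 0.2, 0.5], tau=100.0)
print(freq, y_hard.round(3), y_soft.round(3))
```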
When &lt;math&gt; \tau &lt;/math&gt; gets closer to zero, the softmax operator acts like argmax, resulting in low bias but high variance; vice versa when &lt;math&gt; \tau &lt;/math&gt; is large.<br /> <br /> The authors use &lt;math&gt; a_{softmax} &lt;/math&gt; to interact with the environment; argmax is applied over &lt;math&gt; a_{softmax} &lt;/math&gt; to obtain a single “pure” action, while the continuous approximation is used in the backward pass via the estimation &lt;math&gt; \nabla_\theta\; a_{argmax} \approx \nabla_\theta\; a_{softmax} &lt;/math&gt;.<br /> <br /> == Backpropagating through a Forward model ==<br /> The above subsections presented the means for extracting the partial derivative &lt;math&gt; \nabla_aD &lt;/math&gt;. The main contribution of this paper is incorporating the use of &lt;math&gt; \nabla_sD &lt;/math&gt;. In a model-free approach the state &lt;math&gt; s &lt;/math&gt; is treated as a fixed input, so &lt;math&gt; \nabla_sD &lt;/math&gt; is discarded; this is illustrated in Figure 1. This work uses a model-based approach, which makes incorporating &lt;math&gt; \nabla_sD &lt;/math&gt; more involved. In the model-based approach, a state &lt;math&gt; s_t &lt;/math&gt; can be written as a function of the previous state-action pair: &lt;math&gt; s_t = f(s_{t-1}, a_{t-1}) &lt;/math&gt;, where &lt;math&gt; f &lt;/math&gt; represents the forward model.
Using the forward model and the law of total derivatives we get:<br /> <br /> \begin{align}<br /> \nabla_\theta D(s_t,a_t)\Bigr|_{\substack{s=s_t, a=a_t}} &amp;= \frac{\partial D}{\partial a}\frac{\partial a}{\partial \theta}\Bigr|_{\substack{a=a_t}} + \frac{\partial D}{\partial s}\frac{\partial s}{\partial \theta}\Bigr|_{\substack{s=s_t}} \\<br /> &amp;= \frac{\partial D}{\partial a}\frac{\partial a}{\partial \theta}\Bigr|_{\substack{a=a_t}} + \frac{\partial D}{\partial s}\left (\frac{\partial f}{\partial s}\frac{\partial s}{\partial \theta}\Bigr|_{\substack{s=s_{t-1}}} + \frac{\partial f}{\partial a}\frac{\partial a}{\partial \theta}\Bigr|_{\substack{a=a_{t-1}}} \right )<br /> \end{align}<br /> <br /> <br /> Using this formula, the error regarding deviations of future states &lt;math&gt; (\psi_s) &lt;/math&gt; propagates back in time and influences the actions of the policy at earlier times. This is summarized in Figure 2.<br /> <br /> [[File:modelFree_blockDiagram.PNG]]<br /> <br /> Figure 1: Block diagram of the model-free approach: given a state &lt;math&gt; s &lt;/math&gt;, the policy outputs &lt;math&gt; \mu &lt;/math&gt;, which is fed to a stochastic sampling unit. An action &lt;math&gt; a &lt;/math&gt; is sampled and, together with &lt;math&gt; s &lt;/math&gt;, presented to the discriminator network. In the backward phase, the error message &lt;math&gt; \delta_a &lt;/math&gt; is blocked at the stochastic sampling unit. From there, a high-variance gradient estimation is used (&lt;math&gt; \delta_{HV} &lt;/math&gt;). Meanwhile, the error message &lt;math&gt; \delta_s &lt;/math&gt; is flushed.<br /> <br /> [[File:modelBased_blockDiagram.PNG|1000px]]<br /> <br /> Figure 2: Block diagram of model-based adversarial imitation learning. This diagram describes the computation graph for training the policy (i.e. G). The discriminator network D is fixed at this stage and is trained separately.
At time &lt;math&gt; t &lt;/math&gt; of the forward pass, &lt;math&gt; \pi &lt;/math&gt; outputs a distribution over actions: &lt;math&gt; \mu_t = \pi(s_t) &lt;/math&gt;, from which an action &lt;math&gt; a_t &lt;/math&gt; is sampled. For example, in the continuous case, this is done using the re-parametrization trick: &lt;math&gt; a_t = \mu_t + \xi \cdot \sigma &lt;/math&gt;, where &lt;math&gt; \xi \sim N(0,1) &lt;/math&gt;. The next state &lt;math&gt; s_{t+1} = f(s_t, a_t) &lt;/math&gt; is computed using the forward model (which is also trained separately), and the entire process repeats for time &lt;math&gt; t+1 &lt;/math&gt;. In the backward pass, the gradient of &lt;math&gt; \pi &lt;/math&gt; is comprised of (a) the error message &lt;math&gt; \delta_a &lt;/math&gt; (green), which propagates through the differentiable approximation of the sampling process, and (b) the error message &lt;math&gt; \delta_s &lt;/math&gt; (blue) of future time steps, which propagates back through the differentiable forward model.<br /> <br /> == MGAIL Algorithm ==<br /> Shalev-Shwartz et al. (2016) and Heess et al. (2015) built a multi-step computation graph for describing the familiar policy gradient objective, which in this case is given by:<br /> <br /> \begin{align}<br /> J(\theta) = \mathbb{E}\left [ \sum_{t=0}^{T} \gamma ^t D(s_t,a_t)|\theta\right ]<br /> \end{align}<br /> <br /> <br /> Using the results from Heess et al.
(2015), this paper demonstrates how to differentiate &lt;math&gt; J(\theta) &lt;/math&gt; over a trajectory of &lt;math&gt;(s,a,s') &lt;/math&gt; transitions:<br /> <br /> \begin{align}<br /> J_s &amp;= \mathbb{E}_{p(a|s)}\mathbb{E}_{p(s'|s,a)}\left [ D_s + D_a \pi_s + \gamma J'_{s'}(f_s + f_a \pi_s) \right] \\<br /> J_\theta &amp;= \mathbb{E}_{p(a|s)}\mathbb{E}_{p(s'|s,a)}\left [ D_a \pi_\theta + \gamma (J'_{s'} f_a \pi_\theta + J'_\theta) \right]<br /> \end{align}<br /> <br /> The policy gradient &lt;math&gt; \nabla_\theta J &lt;/math&gt; is calculated by applying the two equations above recursively for &lt;math&gt; T &lt;/math&gt; iterations. The MGAIL algorithm is presented below.<br /> <br /> [[File:MGAIL_alg.PNG]]<br /> <br /> == Forward Model Structure ==<br /> The stability of the learning process depends on the prediction accuracy of the forward model, but learning an accurate forward model is challenging in itself. The authors propose methods for improving the performance of the forward model based on two aspects of its functionality. First, the forward model should learn to use the action as an operator over the state space. To accomplish this, the actions and states, which are sampled from different distributions, first need to be represented in a shared space. This is done by encoding the state and action using two separate neural networks and combining their outputs to form a single vector. Additionally, multiple previous states are used to predict the next state by representing the environment as an &lt;math&gt; n^{th} &lt;/math&gt;-order MDP. A GRU layer is incorporated into the state encoder to enable recurrent connections from previous states. With these modifications, the model achieves better and more stable results compared to a standard forward model based on a feed-forward neural network.
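A minimal sketch of this forward-model structure (all layer sizes and random weights are hypothetical; a real implementation would learn them): separate state and action encoders map into a shared space, a GRU cell carries information from previous states, and the combined vector is decoded into the next state.

```python
import numpy as np

rng = np.random.default_rng(0)
d_s, d_a, d_h = 6, 2, 8          # state, action, and hidden dimensions

W_s = rng.normal(scale=0.1, size=(d_h, d_s))    # state encoder
W_a = rng.normal(scale=0.1, size=(d_h, d_a))    # action encoder
W_z, W_r, W_c = (rng.normal(scale=0.1, size=(d_h, 2 * d_h)) for _ in range(3))
W_out = rng.normal(scale=0.1, size=(d_s, d_h))  # decoder to the next state

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(h, x):
    hx = np.concatenate([h, x])
    z = sigmoid(W_z @ hx)                          # update gate
    r = sigmoid(W_r @ hx)                          # reset gate
    c = np.tanh(W_c @ np.concatenate([r * h, x]))  # candidate hidden state
    return (1 - z) * h + z * c

def forward_model(prev_states, action):
    h = np.zeros(d_h)
    for s in prev_states:                          # recurrent state encoding
        h = gru_cell(h, W_s @ s)
    joint = h * (W_a @ action)                     # combine in the shared space
    return W_out @ joint                           # predicted next state s_{t+1}

s_next = forward_model([rng.normal(size=d_s) for _ in range(3)], rng.normal(size=d_a))
print(s_next.shape)
```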
The comparison is presented in Figure 3.<br /> <br /> [[File:performance_comparison.PNG]]<br /> <br /> Figure 3: Performance comparison between a basic forward model (blue) and the advanced forward model (green).<br /> <br /> = Experiments =<br /> The proposed algorithm is evaluated on three discrete control tasks (Cartpole, Mountain-Car, Acrobot) and five continuous control tasks (Hopper, Walker, Half-Cheetah, Ant, and Humanoid), which are modeled by the MuJoCo physics simulator (Todorov et al., 2012). Expert policies are trained using the Trust Region Policy Optimization (TRPO) algorithm (Schulman et al., 2015). Different numbers of trajectories are used to train the expert for each task, but all trajectories are of length 1000.<br /> The discriminator and generator (policy) networks contain two hidden layers with ReLU non-linearity and are trained using the ADAM optimizer. The total reward received over a period of &lt;math&gt; N &lt;/math&gt; steps using BC, GAIL and MGAIL is presented in Table 1. The proposed algorithm achieved the highest reward for most environments while exhibiting performance comparable to the expert over all of them.<br /> <br /> [[File:mgail_test_results.PNG]]<br /> <br /> Table 1. Policy performance; boldface indicates better results, &lt;math&gt; \pm &lt;/math&gt; represents one standard deviation.<br /> <br /> = Discussion =<br /> This paper presented a model-based algorithm for imitation learning. It demonstrated how a forward model can be used to train policies using the exact gradient of the discriminator network. A downside of this approach is the need to learn a forward model, which could be difficult in certain domains. Learning the system dynamics directly from raw images is considered as one line of future work. Another is to address the violation of the fundamental assumption made by all supervised learning algorithms, which requires the data to be i.i.d.
This problem arises because the discriminator and forward models are trained in a supervised learning fashion using data sampled from a dynamic distribution.<br /> <br /> = Source =<br /> # Baram, Nir, et al. &quot;End-to-end differentiable adversarial imitation learning.&quot; International Conference on Machine Learning. 2017.<br /> # Ho, Jonathan, and Stefano Ermon. &quot;Generative adversarial imitation learning.&quot; Advances in Neural Information Processing Systems. 2016.<br /> # Shalev-Shwartz, Shai, et al. &quot;Long-term planning by short-term prediction.&quot; arXiv preprint arXiv:1602.01580 (2016).<br /> # Heess, Nicolas, et al. &quot;Learning continuous control policies by stochastic value gradients.&quot; Advances in Neural Information Processing Systems. 2015.<br /> # Schulman, John, et al. &quot;Trust region policy optimization.&quot; International Conference on Machine Learning. 2015.

stat946w18/IMPROVING GANS USING OPTIMAL TRANSPORT<br /> <br /> == Introduction ==<br /> Generative Adversarial Networks (GANs) are powerful generative models. A GAN consists of a generator and a discriminator, or critic. The generator is a neural network trained to generate data whose distribution matches that of the real data. The critic is also a neural network, trained to separate the generated data from the real data. A loss function that measures the distance between the distributions of the generated data and the real data is important for training the generator.<br /> <br /> Optimal transport theory evaluates the distance between the generated-data and training-data distributions based on a transport cost, which provides another approach to generator training.
The main advantage of optimal transport theory over the distance measures used in GANs is its closed-form solution, which gives a tractable training process. However, it can also result in inconsistent statistical estimation, since the gradients are biased when the mini-batch method is applied (Bellemare et al., 2017).<br /> <br /> This paper presents a GAN variant named OT-GAN, which incorporates a discriminative metric called the 'Mini-batch Energy Distance' into its critic in order to overcome the issue of biased gradients.<br /> <br /> == GANs and Optimal Transport ==<br /> <br /> ===Generative Adversarial Nets===<br /> The original GAN is reviewed first. The objective function of the GAN: <br /> <br /> [[File:equation1.png|700px]]<br /> <br /> The goal of GANs is to train the generator g and the discriminator d to find a pair (g,d) that achieves a Nash equilibrium (such that neither can reduce its cost without changing the other's parameters). However, training can fail to converge, since the generator and the discriminator are trained using gradient descent techniques.<br /> <br /> ===Wasserstein Distance (Earth-Mover Distance)===<br /> <br /> In order to address the problem of convergence failure, Arjovsky et al. (2017) suggested the Wasserstein distance (Earth-Mover distance), based on optimal transport theory.<br /> <br /> [[File:equation2.png|600px]]<br /> <br /> where &lt;math&gt; \prod (p,g) &lt;/math&gt; is the set of all joint distributions &lt;math&gt; \gamma (x,y) &lt;/math&gt; with marginals &lt;math&gt; p(x) &lt;/math&gt; (real data) and &lt;math&gt; g(y) &lt;/math&gt; (generated data). &lt;math&gt; c(x,y) &lt;/math&gt; is a cost function; Arjovsky et al. used the Euclidean distance.
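As a quick sanity check of this definition on synthetic 1-D data: for two equal-size one-dimensional sample sets, the optimal transport plan under the Euclidean cost simply matches sorted samples, so the empirical Wasserstein-1 distance reduces to a sort.

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(0.0, 1.0, size=10_000)   # stand-in for real data p(x)
y = rng.normal(3.0, 1.0, size=10_000)   # stand-in for generated data g(y)

# Sorted matching is the optimal plan in 1-D under |x - y| cost
w1 = np.mean(np.abs(np.sort(x) - np.sort(y)))
print(w1)  # close to 3, the mean shift between the two Gaussians
```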
<br /> <br /> The Wasserstein distance can be thought of as the minimum cost of moving probability mass between the distributions &lt;math&gt; g(y) &lt;/math&gt; and &lt;math&gt; p(x) &lt;/math&gt; so that the generator distribution &lt;math&gt; g(y) &lt;/math&gt; matches the real data distribution &lt;math&gt; p(x) &lt;/math&gt;.<br /> <br /> Computing the Wasserstein distance exactly is intractable. The proposed Wasserstein GAN (W-GAN) provides an estimated solution by switching the optimal transport problem into the Kantorovich-Rubinstein dual formulation over a set of 1-Lipschitz functions. A neural network can then be used to obtain an estimate.<br /> <br /> [[File:equation3.png|600px]]<br /> <br /> W-GAN helps to stabilize the training process of the original GAN and solves the optimal transport problem approximately, but the exact problem remains intractable.<br /> <br /> ===Sinkhorn Distance===<br /> Genevay et al. (2017) proposed to apply the primal formulation of optimal transport, instead of the dual formulation, to generative modeling. They introduced the Sinkhorn distance, a smoothed generalization of the Wasserstein distance.<br /> [[File: equation4.png|600px]]<br /> <br /> It introduces an entropy restriction (&lt;math&gt; \beta &lt;/math&gt;) on the joint distribution &lt;math&gt; \prod_{\beta} (p,g) &lt;/math&gt;. This distance can be generalized to mini-batches of data &lt;math&gt; X ,Y&lt;/math&gt; with &lt;math&gt; K &lt;/math&gt; vectors of &lt;math&gt; x, y&lt;/math&gt;. The &lt;math&gt; (i, j) &lt;/math&gt;-th entry of the cost matrix &lt;math&gt; C &lt;/math&gt; can be interpreted as the cost of transporting &lt;math&gt; x_i &lt;/math&gt; in mini-batch X to &lt;math&gt; y_j &lt;/math&gt; in mini-batch &lt;math&gt;Y &lt;/math&gt;.
The resulting distance is:<br /> <br /> [[File: equation5.png|550px]]<br /> <br /> where &lt;math&gt; M &lt;/math&gt; is a &lt;math&gt; K \times K &lt;/math&gt; transport matrix with positive entries, analogous to the joint distribution &lt;math&gt; \gamma (x,y) &lt;/math&gt;; the summation of each row or column of &lt;math&gt; M &lt;/math&gt; is equal to 1. <br /> <br /> This mini-batch Sinkhorn distance is not only fully tractable but also capable of mitigating the instability problem of GANs. However, it is not a valid metric over probability distributions when taking the expectation of &lt;math&gt; \mathcal{W}_{c} &lt;/math&gt;, and the gradients are biased when the mini-batch size is fixed.<br /> <br /> ===Energy Distance (Cramer Distance)===<br /> In order to solve the above problem, Bellemare et al. proposed the energy distance:<br /> <br /> [[File: equation6.png|700px]]<br /> <br /> where &lt;math&gt; x, x' &lt;/math&gt; and &lt;math&gt; y, y'&lt;/math&gt; are independent samples from the data distribution &lt;math&gt; p &lt;/math&gt; and generator distribution &lt;math&gt; g &lt;/math&gt;, respectively. Based on the energy distance, Cramer GAN minimizes this distance metric when training the generator.<br /> <br /> ==MINI-BATCH ENERGY DISTANCE==<br /> Salimans et al. (2016) noted that mini-batch GANs, which use distributions over mini-batches &lt;math&gt; g(X), p(X) &lt;/math&gt;, are more powerful than using distributions over individual images. The distance measure is therefore defined over mini-batches.<br /> <br /> ===GENERALIZED ENERGY DISTANCE===<br /> The generalized energy distance allows the use of non-Euclidean distance functions d.
It is also valid for mini-batches and is considered better than working with individual samples.<br /> <br /> [[File: equation7.png|670px]]<br /> <br /> As in the energy distance, &lt;math&gt; X, X' &lt;/math&gt; and &lt;math&gt; Y, Y'&lt;/math&gt; are independent samples from the data distribution &lt;math&gt; p &lt;/math&gt; and the generator distribution &lt;math&gt; g &lt;/math&gt;, respectively; in the generalized energy distance, &lt;math&gt; X, X' &lt;/math&gt; and &lt;math&gt; Y, Y'&lt;/math&gt; may also be mini-batches. &lt;math&gt; D_{GED}(p,g) &lt;/math&gt; is a metric whenever &lt;math&gt; d &lt;/math&gt; is a metric. Thus, taking the triangle inequality of &lt;math&gt; d &lt;/math&gt; into account, &lt;math&gt; D(p,g) \geq 0,&lt;/math&gt; and &lt;math&gt; D(p,g)=0 &lt;/math&gt; when &lt;math&gt; p=g &lt;/math&gt;.<br /> <br /> ===MINI-BATCH ENERGY DISTANCE===<br /> Since &lt;math&gt; d &lt;/math&gt; is free to choose, the authors propose the Mini-batch Energy Distance, using the entropy-regularized Wasserstein distance as &lt;math&gt; d &lt;/math&gt;. <br /> <br /> [[File: equation8.png|650px]]<br /> <br /> where &lt;math&gt; X, X' &lt;/math&gt; and &lt;math&gt; Y, Y'&lt;/math&gt; are independently sampled mini-batches from the data distribution &lt;math&gt; p &lt;/math&gt; and the generator distribution &lt;math&gt; g &lt;/math&gt;, respectively. This distance metric combines the energy distance with the primal form of optimal transport over mini-batch distributions &lt;math&gt; g(Y) &lt;/math&gt; and &lt;math&gt; p(X) &lt;/math&gt;. Inside the generalized energy distance, the Sinkhorn distance is a valid metric between mini-batches.
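To make the energy-distance construction concrete, here is a small numeric sketch using a plain Euclidean d; the sample sizes and Gaussian distributions are illustrative, not from the paper:

```python
import numpy as np

# Energy distance estimated from samples with Euclidean d:
# D^2 = 2 E[d(X, Y)] - E[d(X, X')] - E[d(Y, Y')].
rng = np.random.default_rng(0)

def mean_pairwise(a, b):
    diff = a[:, None, :] - b[None, :, :]          # all pairs of rows
    return np.sqrt((diff ** 2).sum(axis=-1)).mean()

def energy_distance_sq(x, y):
    return 2 * mean_pairwise(x, y) - mean_pairwise(x, x) - mean_pairwise(y, y)

same = energy_distance_sq(rng.normal(size=(400, 2)), rng.normal(size=(400, 2)))
shifted = energy_distance_sq(rng.normal(size=(400, 2)), 3.0 + rng.normal(size=(400, 2)))
print(same, shifted)  # near zero when distributions match, large when they differ
```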
By adding the subtracted self-terms &lt;math&gt; - \mathcal{W}_c (X,X')&lt;/math&gt; and &lt;math&gt; - \mathcal{W}_c (Y,Y')&lt;/math&gt; to &lt;math&gt; 2\mathcal{W}_c (X,Y)&lt;/math&gt;, in the style of the energy distance, the objective becomes statistically consistent and the mini-batch gradients are unbiased.<br /> <br /> ==OPTIMAL TRANSPORT GAN (OT-GAN)==<br /> <br /> To improve statistical efficiency, the authors suggested using the cosine distance between vectors &lt;math&gt; v_\eta (x) &lt;/math&gt; and &lt;math&gt; v_\eta (y) &lt;/math&gt;, where &lt;math&gt; v_\eta &lt;/math&gt; is a deep neural network that maps the mini-batch data into a learned latent space. Euclidean distance is not used because it performs poorly in high-dimensional spaces. Here is the transportation cost:<br /> <br /> [[File: euqation9.png|370px]]<br /> <br /> where &lt;math&gt; v_\eta &lt;/math&gt; is chosen to maximize the resulting mini-batch energy distance.<br /> <br /> Unlike common practice with the original GANs, the generator is trained more often than the critic, which keeps the cost function from degenerating. The resulting generator in OT-GAN has a well-defined and statistically consistent objective throughout the training process.<br /> <br /> The algorithm is defined below. By the envelope theorem, gradients do not need to be backpropagated through the computation of the optimal transport plan. Stochastic gradient descent is used as the optimization method.
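Putting these pieces together, the critic's objective can be sketched as the mini-batch energy distance built from entropy-regularized transport with a cosine cost. This is a hedged NumPy illustration, not the paper's implementation: `cosine_cost`, `sinkhorn_cost`, the regularization strength, and the batch sizes are simplified stand-ins, and the inputs are assumed to be features already mapped by &lt;math&gt; v_\eta &lt;/math&gt;.

```python
import numpy as np

def cosine_cost(A, B):
    """Transport cost c(x, y) = 1 - cosine similarity, computed between
    rows of A and B, which stand in for critic features v_eta(.)."""
    An = A / np.linalg.norm(A, axis=1, keepdims=True)
    Bn = B / np.linalg.norm(B, axis=1, keepdims=True)
    return 1.0 - An @ Bn.T

def sinkhorn_cost(A, B, reg=1.0, n_iters=300):
    """Entropy-regularized transport cost W_c(A, B) via Sinkhorn-Knopp
    matrix scaling; rows and columns of the plan each sum to 1."""
    K = A.shape[0]
    C = cosine_cost(A, B)
    Kmat = np.exp(-C / reg)              # Gibbs kernel of the cost
    u = np.ones(K)
    for _ in range(n_iters):
        v = 1.0 / (Kmat.T @ u)
        u = 1.0 / (Kmat @ v)
    M = u[:, None] * Kmat * v[None, :]   # soft matching between batches
    return float((M * C).sum())

def minibatch_energy_distance(X, Xp, Y, Yp):
    """D^2 = 2*W_c(X, Y) - W_c(X, X') - W_c(Y, Y') for independent
    mini-batches X, X' from the data and Y, Y' from the generator."""
    return (2.0 * sinkhorn_cost(X, Y)
            - sinkhorn_cost(X, Xp)
            - sinkhorn_cost(Y, Yp))
```

The two subtracted self-terms are what make the mini-batch objective statistically consistent: matched distributions score near zero, while mismatched ones score strictly positive in expectation.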
<br /> <br /> [[File: al.png|600px]]<br /> <br /> <br /> [[File: al_figure.png|600px]]<br /> <br /> ==EXPERIMENTS==<br /> <br /> In order to demonstrate the superior performance of OT-GAN, the authors compared it with the original GAN and other popular models in four experiments: dataset recovery; a CIFAR-10 test; an ImageNet test; and a conditional image synthesis test.<br /> <br /> ===MIXTURE OF GAUSSIAN DATASET===<br /> OT-GAN has a statistically consistent objective compared with the original GAN (DC-GAN), such that the generator does not update in a wrong direction even when the signal provided by the cost function is weak. To demonstrate this advantage, the authors compared OT-GAN with the original GAN loss (DAN-S) on a simple task: recovering all 8 modes of a mixture of 8 Gaussians whose means are arranged in a circle. MLPs with ReLU activation functions were used for this task. The critic was updated for only 15K iterations, and the generator distribution was tracked for another 25K iterations. The results show that the original GAN suffers from mode collapse after the discriminator is fixed, while OT-GAN recovers all 8 modes of the Gaussian mixture.<br /> <br /> [[File: 5_1.png|600px]]<br /> <br /> ===CIFAR-10===<br /> <br /> The CIFAR-10 dataset was then used to inspect the effect of batch size on the training process and image quality. OT-GAN and four other methods were compared using the &quot;inception score&quot; as the criterion. Figure 3 shows how the inception score (y-axis) changes as the number of iterations increases, for four batch sizes (200, 800, 3200, and 8000). The results show that a larger batch size leads to a more stable model with a higher inception score, but also requires a high-performance computational environment.
Sample quality across all 5 methods is compared in Table 1, where OT-GAN has the best score.<br /> <br /> [[File: 5_2.png|600px]]<br /> <br /> ===IMAGENET DOGS===<br /> <br /> To investigate the performance of OT-GAN on higher-resolution images, the dog subset of ImageNet (128×128) was used to train the model. Figure 6 shows that OT-GAN produces fewer nonsensical images and achieves a higher inception score than DC-GAN. <br /> <br /> [[File: 5_3.png|600px]]<br /> <br /> ===CONDITIONAL GENERATION OF BIRDS===<br /> <br /> The last experiment compares OT-GAN with three popular GAN models on text-to-image generation, demonstrating its performance on conditional image synthesis. As shown in Table 2, OT-GAN achieves a higher inception score than the other three models. <br /> <br /> [[File: 5_4.png|600px]]<br /> <br /> The algorithm used to obtain these results generalizes '''Algorithm 1''' to conditional generation by including conditional information &lt;math&gt;s&lt;/math&gt;, such as a text description of an image. The modified algorithm is outlined in '''Algorithm 2'''.<br /> <br /> [[File: paper23_alg2.png|600px]]<br /> <br /> ==CONCLUSION==<br /> <br /> In this paper, OT-GAN was proposed based on optimal transport theory. A distance metric that combines the primal form of optimal transport with the energy distance was presented to realize OT-GAN. One advantage of OT-GAN over other GAN models is that it stays on the correct track with an unbiased gradient even if training of the critic is stopped or provides only a weak cost signal.
OT-GAN's performance is maintained as the batch size increases, though the computational cost has to be taken into consideration.<br /> <br /> ==CRITIQUE==<br /> <br /> The paper presents a variant of GANs by defining a new distance metric based on the primal form of optimal transport and the mini-batch energy distance. Stability was demonstrated in the four experiments comparing OT-GAN with other popular methods. However, limitations in computational efficiency were not discussed much. Furthermore, in section 2, the paper lacks an explanation of why mini-batches, rather than individual vectors, are used as input when applying the Sinkhorn distance. The explanation of the algorithm in section 4, regarding the choice of M to minimize &lt;math&gt; \mathcal{W}_c &lt;/math&gt;, is also confusing. Lastly, the paper lacks a parallel comparison with existing GAN variants; readers may feel they jump from one algorithm to another without the necessary explanations.<br /> <br /> ==Reference==<br /> Salimans, Tim, Han Zhang, Alec Radford, and Dimitris Metaxas. &quot;Improving GANs using optimal transport.&quot; (2018).</div> Jssambee http://wiki.math.uwaterloo.ca/statwiki/index.php?title=stat946w18/IMPROVING_GANS_USING_OPTIMAL_TRANSPORT&diff=34779 stat946w18/IMPROVING GANS USING OPTIMAL TRANSPORT 2018-03-20T17:27:50Z <p>Jssambee: /* Introduction */</p> <hr /> <div>== Introduction ==<br /> Generative Adversarial Networks (GANs) are powerful generative models. A GAN consists of a generator and a discriminator, or critic. The generator is a neural network trained to generate data whose distribution matches that of the real data. The critic is also a neural network, trained to separate the generated data from the real data.
A loss function that measures the distance between the distributions of the generated data and the real data is important for training the generator.<br /> <br /> Optimal transport theory evaluates the distance between the generated-data and training-data distributions based on a metric, which provides another method for training the generator. The main advantage of optimal transport theory over the distance measurement in GANs is its closed-form solution, which makes the training process tractable. However, it can also result in statistically inconsistent estimation, due to biased gradients, when the mini-batch method is applied (Bellemare et al., 2017).<br /> <br /> This paper presents a GAN variant named OT-GAN, which incorporates a discriminative metric called the 'Mini-batch Energy Distance' into its critic in order to overcome the issue of biased gradients.<br /> <br /> == GANs and Optimal Transport ==<br /> <br /> ===Generative Adversarial Nets===<br /> The original GAN is reviewed first. The objective function of the GAN: <br /> <br /> [[File:equation1.png|700px]]<br /> <br /> The goal of GANs is to train the generator g and the discriminator d to find a pair (g, d) that achieves a Nash equilibrium. However, training may fail to converge, since the generator and the discriminator are trained with gradient-descent techniques.<br /> <br /> ===Wasserstein Distance (Earth-Mover Distance)===<br /> <br /> In order to address this convergence failure, Arjovsky et al. (2017) suggested the Wasserstein distance (Earth-Mover distance), based on optimal transport theory.<br /> <br /> [[File:equation2.png|600px]]<br /> <br /> where &lt;math&gt; \prod (p,g) &lt;/math&gt; is the set of all joint distributions &lt;math&gt; \gamma (x,y) &lt;/math&gt; with marginals &lt;math&gt; p(x) &lt;/math&gt; (real data) and &lt;math&gt; g(y) &lt;/math&gt; (generated data). &lt;math&gt; c(x,y) &lt;/math&gt; is a cost function; the Euclidean distance was used by Arjovsky et
al. in their paper. <br /> <br /> The Wasserstein distance can be interpreted as moving the minimum amount of probability mass between the distributions &lt;math&gt; g(y) &lt;/math&gt; and &lt;math&gt; p(x) &lt;/math&gt; so that the generator distribution &lt;math&gt; g(y) &lt;/math&gt; matches the real data distribution &lt;math&gt; p(x) &lt;/math&gt;.<br /> <br /> Computing the Wasserstein distance exactly is intractable. The proposed Wasserstein GAN (W-GAN) provides an approximate solution by switching the optimal transport problem to its Kantorovich-Rubinstein dual formulation over the set of 1-Lipschitz functions. A neural network can then be used to obtain an estimate.<br /> <br /> [[File:equation3.png|600px]]<br /> <br /> W-GAN alleviates the unstable training of the original GAN and solves the optimal transport problem approximately, but the exact problem remains intractable.<br /> <br /> ===Sinkhorn Distance===<br /> Genevay et al. (2017) proposed using the primal formulation of optimal transport, rather than the dual formulation, for generative modeling. They introduced the Sinkhorn distance, a smoothed generalization of the Wasserstein distance.<br /> [[File: equation4.png|600px]]<br /> <br /> It introduces an entropy constraint (&lt;math&gt; \beta &lt;/math&gt;) on the joint distributions &lt;math&gt; \prod_{\beta} (p,g) &lt;/math&gt;. This distance can be generalized to mini-batches of data &lt;math&gt; X, Y&lt;/math&gt;, each consisting of &lt;math&gt; K &lt;/math&gt; vectors &lt;math&gt; x, y&lt;/math&gt;. The &lt;math&gt; (i, j) &lt;/math&gt;-th entry of the cost matrix &lt;math&gt; C &lt;/math&gt; can be interpreted as the cost of transporting &lt;math&gt; x_i &lt;/math&gt; in mini-batch &lt;math&gt;X&lt;/math&gt; to &lt;math&gt; y_j &lt;/math&gt; in mini-batch &lt;math&gt;Y &lt;/math&gt;.
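As a concrete illustration, the entropy-regularized transport between two mini-batches can be approximated with Sinkhorn-Knopp iterations. The following is a minimal NumPy sketch rather than the authors' implementation; the regularization strength and iteration count are illustrative choices, and the plan is normalized so that each row and column sums to 1.

```python
import numpy as np

def sinkhorn_distance(X, Y, reg=0.5, n_iters=500):
    """Entropy-regularized optimal transport between two mini-batches.

    X, Y: (K, d) arrays of K samples each. Returns the transported cost
    sum_ij M_ij * C_ij, where C[i, j] = ||x_i - y_j||^2 and M is the
    (approximately) doubly stochastic plan whose rows and columns sum to 1.
    """
    K = X.shape[0]
    # Pairwise squared-Euclidean cost matrix C.
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    Kmat = np.exp(-C / reg)          # Gibbs kernel of the cost
    u = np.ones(K)
    for _ in range(n_iters):         # Sinkhorn-Knopp matrix scaling
        v = 1.0 / (Kmat.T @ u)       # rescale so columns sum to 1
        u = 1.0 / (Kmat @ v)         # rescale so rows sum to 1
    M = u[:, None] * Kmat * v[None, :]
    return float((M * C).sum()), M
```

With a smaller `reg`, the plan approaches the unregularized optimal transport; with a larger one, it approaches a uniform soft matching.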
</div> Jssambee http://wiki.math.uwaterloo.ca/statwiki/index.php?title=Do_Deep_Neural_Networks_Suffer_from_Crowding&diff=34743 Do Deep Neural Networks Suffer from Crowding 2018-03-20T05:07:19Z <p>Jssambee: </p> <hr /> <div>= Still working on this. =<br /> = Introduction =<br /> Ever since the evolution of Deep Networks, there has been tremendous amount of research and effort that has been put into making machines capable of recognizing objects the same way as humans do. Humans can recognize objects in a way that is invariant to scale, translation, and clutter. Crowding is another visual effect suffered by humans, in which an object that can be recognized in isolation can no longer be recognized when other objects, called<br /> flankers, are placed close to it and this is a very common real-life experience.
This paper focuses on studying the impact of crowding on Deep Neural Networks (DNNs) by adding clutter to the images and then analyzing which models and settings suffer less from such effects. <br /> <br /> The paper investigates two types of DNNs for crowding: traditional deep convolutional neural networks (DCNNs) and a multi-scale eccentricity-dependent model, an extension of DCNNs inspired by the retina, in which the receptive field size of the convolutional filters grows with increasing distance from the center of the image, called the eccentricity, as explained below. The authors focus in particular on the dependence of crowding on image factors such as flanker configuration, target-flanker similarity, target eccentricity, and premature pooling.<br /> <br /> = Models =<br /> == Deep Convolutional Neural Networks ==<br /> The DCNN is a basic architecture with 3 convolutional layers, spatial 3x3 max-pooling with varying strides, and a fully connected layer for classification, as shown in the figure below. <br /> [[File:DCNN.png|800px|center]]<br /> <br /> The network is fed images resized to 60x60, with minibatches of 128 images, 32 feature channels for all convolutional layers, and convolutional filters of size 5x5 and stride 1.<br /> <br /> As highlighted earlier, the effect of pooling is a main consideration, and hence three different configurations are investigated: <br /> <br /> 1. '''No total pooling''' Feature map sizes decrease only due to boundary effects, as the 3x3 max pooling has stride 1. The square feature map sizes after each pool layer are 60-54-48-42.<br /> <br /> 2. '''Progressive pooling''' 3x3 pooling with a stride of 2 halves the square size of the feature maps, until we pool over what remains in the final layer, getting rid of any spatial information before the fully connected layer. (60-27-11-1).<br /> <br /> 3.
'''At end pooling''' Same as no total pooling, but before the fully connected layer, max-pool over the entire feature map. (60-54-48-1).<br /> <br /> ===What is the problem in CNNs?===<br /> CNNs fall short in explaining human perceptual invariance. First, CNNs typically take input at a single uniform resolution. Biological measurements suggest that resolution is not uniform across the human visual field, but rather decays with eccentricity, i.e., distance from the center of focus. Even more importantly, CNNs rely on data augmentation to achieve transformation invariance, which requires a lot of additional processing.<br /> <br /> ==Eccentricity-dependent Model==<br /> As per Poggio et al., receptive fields increase in size with eccentricity. The eccentricity-dependent model computes an invariant representation by sampling the inverted pyramid at a discrete set of scales, with the same number of filters at each scale. At larger scales, the receptive fields of the filters are also larger, covering a larger image area, see Fig 3(a). Thus, the model constructs a multi-scale representation of the input, where smaller sections (crops) of the image are sampled densely at a high resolution, and larger sections (crops) are sampled at a lower resolution, with each scale represented using the same number of pixels, as shown in the figure. Each scale is treated as an input channel to the network and then processed by convolutional filters, whose weights are shared across scales as well as space. Because of the downsampling of the input image, this is equivalent to having receptive fields of varying sizes. These shared parameters also allow the model to learn a scale-invariant representation of the image.<br /> [[File:EDM.png|2000x450px|center]]<br /> <br /> Scale pooling reduces the number of scales by taking the maximum value of corresponding locations in the feature maps across multiple scales.
We set the spatial pooling constant using At end pooling, as described above. The type of scale pooling is indicated by writing the number of scales remaining in each layer, e.g. 11-1-1-1-1. The three configurations tested for scale pooling are (1) at the beginning, in which all the different scales are pooled together after the first layer, 11-1-1-1-1; (2) progressively, 11-7-5-3-1; and (3) at the end, 11-11-11-11-1, in which all 11 scales are pooled together at the last layer.<br /> <br /> ===Contrast Normalization===<br /> Since the input image is represented at multiple scales, the intensities are normalized so that the sum of the pixel intensities in each scale lies in the same range [0,1], and are then divided by a factor proportional to the crop area.<br /> <br /> =Experiments and its Set-Up =<br /> The models are trained with back-propagation to recognize a set of objects, called targets; flankers act as clutter with respect to these target objects. The targets are the even MNIST digits, with translation variability (shifted to different locations of the image along the horizontal axis). Examples of the target and flanker configurations are shown below: <br /> [[File:eximages.png|800px|center]]<br /> <br /> The target and the flanker are referred to as ''a'' and ''x'' respectively, with the four configurations below: (1) No flankers. Only the target object. (a in the plots) (2) One central flanker closer to the center of the image than the target. (xa) (3) One peripheral flanker closer to the boundary of the image than the target. (ax) (4) Two flankers spaced equally around the target, both being the same object. (xax)<br /> <br /> ==DNNs trained with Target and Flankers==<br /> This is a constant-spacing training setup in which identical flankers are placed at a distance of 120 pixels on either side of the target (xax), with the target having translation variability.
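The four target-flanker configurations above can be illustrated with a small NumPy helper that pastes square crops onto a blank canvas. This is a hypothetical sketch: the canvas dimensions and the `place`/`make_image` helpers are illustrative names of our own, while the 120-pixel spacing and the a/xa/ax/xax codes follow the description above; the crops stand in for actual MNIST digits.

```python
import numpy as np

def place(canvas, crop, cx):
    """Paste a square crop onto the canvas, centered horizontally at column cx."""
    h, w = crop.shape
    top = (canvas.shape[0] - h) // 2
    left = cx - w // 2
    canvas[top:top + h, left:left + w] = crop
    return canvas

def make_image(target, flanker, config, target_x, spacing=120,
               height=128, width=512):
    """Build one stimulus for config 'a', 'xa', 'ax', or 'xax', with the
    flanker(s) placed `spacing` pixels on either side of the target.
    Assumes the target sits to the right of the image center, so the
    'central' flanker is drawn at target_x - spacing."""
    canvas = np.zeros((height, width))
    if config in ('xa', 'xax'):       # flanker on the image-center side
        place(canvas, flanker, target_x - spacing)
    if config in ('ax', 'xax'):       # flanker on the image-boundary side
        place(canvas, flanker, target_x + spacing)
    place(canvas, target, target_x)   # target drawn last
    return canvas
```

Sweeping `target_x` across the horizontal axis then reproduces the translation variability of the training setup.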
The tests are evaluated on (i) a DCNN with at-end pooling, and (ii) the eccentricity-dependent model with 11-11-11-11-1 scale pooling, at-end spatial pooling, and contrast normalization. The test data has different flanker configurations, as described above.<br /> [[File:result1.png|x450px|center]]<br /> <br /> ===Observations===<br /> - With the flanker configuration the same as in training, the models are better at recognizing objects in clutter than isolated objects, for all image locations.<br /> - If the target-flanker spacing is changed, the models perform worse.<br /> - The eccentricity-dependent model is much better at recognizing objects in isolation than the DCNN, because the multi-scale crops divide the image into discrete regions, letting the model learn from image parts as well as the whole image.<br /> - Only the eccentricity-dependent model is robust to flanker configurations not included in training, when the target is centered.<br /> <br /> ==DNNs trained with Images with the Target in Isolation==<br /> [[File:result2.png|750x400px|center]]<br /> <br /> ===Eccentric Model===<br /> [[File:result3.png|750x400px|center]]<br /> <br /> =Conclusions=<br /> We often assume that training the network on data similar to the test data would achieve good results in a general scenario too, but that is not the case: the models trained with flankers did not give ideal results for the target objects.<br /> *'''Flanker Configuration''': When models are trained with images of objects in isolation, adding flankers harms recognition. Adding two flankers is the same or worse than adding just one, and the smaller the spacing between flanker and target, the more crowding occurs.
This is because the pooling operation merges nearby responses, such as those of the target and flankers, when they are close.<br /> <br /> *'''Similarity between target and flanker''': Flankers more similar to targets cause more crowding, because of the selectivity property of the learned DNN filters.<br /> <br /> *'''Dependence on target location and contrast normalization''': In DCNNs and eccentricity-dependent models with contrast normalization, recognition accuracy is the same across all eccentricities. In eccentricity-dependent networks without contrast normalization, recognition does not decrease despite the presence of clutter when the target is at the center of the image.<br /> <br /> *'''Effect of pooling''': Adding pooling leads to better recognition accuracy. Yet, in the eccentricity model, pooling across scales too early in the hierarchy leads to lower accuracy.</div> Jssambee http://wiki.math.uwaterloo.ca/statwiki/index.php?title=File:result3.png&diff=34742 File:result3.png 2018-03-20T05:05:35Z <p>Jssambee: </p> <hr /> <div></div> Jssambee http://wiki.math.uwaterloo.ca/statwiki/index.php?title=File:result2.png&diff=34741 File:result2.png 2018-03-20T04:40:04Z <p>Jssambee: </p> <hr /> <div></div> Jssambee http://wiki.math.uwaterloo.ca/statwiki/index.php?title=File:result1.png&diff=34740 File:result1.png 2018-03-20T04:27:59Z <p>Jssambee: </p> <hr /> <div></div> Jssambee http://wiki.math.uwaterloo.ca/statwiki/index.php?title=Do_Deep_Neural_Networks_Suffer_from_Crowding&diff=34690 Do Deep Neural Networks Suffer from Crowding 2018-03-20T00:13:46Z <p>Jssambee: </p> <hr /> <div>= Still working on this. =<br /> = Introduction =<br /> Ever since the evolution of Deep Networks, there has been tremendous amount of research and effort that has been put into making machines capable of recognizing objects the same way as humans do. Humans can recognize objects in a way that is invariant to scale, translation, and clutter.
Crowding is another visual effect suffered by humans, in which an object that can be recognized in isolation can no longer be recognized when other objects, called<br /> flankers, are placed close to it and this is a very common real-life experience. This paper focuses on studying the impact of crowding on Deep Neural Networks (DNNs) by adding clutter to the images and then analyzing which models and settings suffer less from such effects. <br /> <br /> The paper investigates two types of DNNs for crowding: traditional deep convolutional neural networks(DCNN) and a multi-scale eccentricity-dependent model which is an extension of the DCNNs and inspired by the retina where the receptive field size of the convolutional filters in the model grows with increasing distance from the center of the image, called the eccentricity and will be explained below. The authors focus on the dependence of crowding on image factors, such as flanker configuration, target-flanker similarity, target eccentricity and premature pooling in particular.<br /> <br /> = Models =<br /> == Deep Convolutional Neural Networks ==<br /> The DCNN is a basic architecture with 3 convolutional layers, spatial 3x3 max-pooling with varying strides and a fully connected layer for classification as shown in the below figure. <br /> [[File:DCNN.png|800px|center]]<br /> <br /> The network is fed with images resized to 60x60, with minibatches of 128 images, 32 feature channels for all convolutional layers, and convolutional filters of size 5x5 and stride 1.<br /> <br /> As highlighted earlier, the effect of pooling is into main consideration and hence three different configurations have been investigated as below: <br /> <br /> 1. '''No total pooling''' Feature maps sizes decrease only due to boundary effects, as the 3x3 max pooling has stride 1. The square feature maps sizes after each pool layer are 60-54-48-42.<br /> <br /> 2. 
'''Progressive pooling''' 3x3 pooling with a stride of 2 halves the square size of the feature maps, until we pool over what remains in the final layer, getting rid of any spatial information before the fully connected layer (60-27-11-1).<br /> <br /> 3. '''At end pooling''' Same as no total pooling, but before the fully connected layer, max-pool over the entire feature map (60-54-48-1).<br /> <br /> ===What is the problem in CNNs?===<br /> CNNs fall short in explaining human perceptual invariance. First, CNNs typically take input at a single uniform resolution. Biological measurements suggest that resolution is not uniform across the human visual field, but rather decays with eccentricity, i.e. the distance from the center of focus. Even more importantly, CNNs rely on data augmentation to achieve transformation invariance, which requires a great deal of additional processing.<br /> <br /> ==Eccentricity-dependent Model==<br /> As per Poggio et al., receptive fields increase in size with eccentricity. The eccentricity-dependent model computes an invariant representation by sampling the inverted pyramid at a discrete set of scales, with the same number of filters at each scale. At larger scales, the receptive fields of the filters are also larger, to cover a larger image area, see Fig 3(a). Thus, the model constructs a multi-scale representation of the input, where smaller sections (crops) of the image are sampled densely at a high resolution and larger sections (crops) are sampled at a lower resolution, with each scale represented using the same number of pixels, as shown in the figure below. Each scale is treated as an input channel to the network and then processed by convolutional filters, whose weights are shared across scales as well as space. Because of the downsampling of the input image, this is equivalent to having receptive fields of varying sizes. 
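As a rough illustration of this multi-scale sampling, here is a plain-Python sketch; the crop sizes, image size, and the naive strided subsampling used in place of a proper resize are all hypothetical stand-ins, not the paper's exact values:

```python
def multiscale_crops(img, num_scales=3, base=20, out=20):
    """Sample centered crops of growing size and downsample each to roughly
    the same out x out resolution, approximating eccentricity-dependent
    receptive fields: dense sampling near the center, coarse in the periphery."""
    h = len(img)
    channels = []
    for s in range(num_scales):
        size = min(h, base * (2 ** s))        # larger crop at larger scale
        lo, hi = (h - size) // 2, (h + size) // 2
        crop = [row[lo:hi] for row in img[lo:hi]]
        step = max(1, size // out)            # naive subsampling in place of resize
        channels.append([row[::step] for row in crop[::step]])
    return channels

# toy 60x60 'image' of integer intensities
img = [[(r * 60 + c) % 255 for c in range(60)] for r in range(60)]
scales = multiscale_crops(img)
# all scales end up with the same spatial size
print([(len(s), len(s[0])) for s in scales])  # → [(20, 20), (20, 20), (20, 20)]
```

Each list in `channels` plays the role of one input scale, which would then be fed to the network as a separate input channel.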
These shared parameters also allow the model to learn a scale-invariant representation of the image.<br /> [[File:EDM.png|1000x450px|center]]<br /> <br /> Scale pooling reduces the number of scales by taking the maximum value of corresponding locations in the feature maps across multiple scales. The spatial pooling is fixed to ''At end pooling'', as described above. The type of scale pooling is indicated by writing the number of scales remaining in each layer, e.g. 11-1-1-1-1. The three configurations tested for scale pooling are (1) at the beginning, in which all the different scales are pooled together after the first layer, 11-1-1-1-1, (2) progressively, 11-7-5-3-1, and (3) at the end, 11-11-11-11-1, in which all 11 scales are pooled together at the last layer.<br /> <br /> ===Contrast Normalization===<br /> Since there are multiple scales of the input image, each scale is normalized so that its pixel intensities fall in the same range [0,1], and is then divided by a factor proportional to the crop area.<br /> <br /> = Experiments and Set-Up =<br /> The models are trained with back-propagation to recognize a set of objects, called targets, while flankers act as clutter with respect to these target objects. The target objects are the even MNIST digits, with translational variance (shifted to different locations along the horizontal axis of the image). Examples of the target and flanker configurations are shown below: <br /> [[File:eximages.png|800px|center]]<br /> <br /> The target and the flanker are referred to as ''a'' and ''x'' respectively, with the four configurations below: (1) No flankers; only the target object (a in the plots). (2) One central flanker closer to the center of the image than the target (xa). (3) One peripheral flanker closer to the boundary of the image than the target. 
(ax) (4) Two flankers spaced equally around the target, both being the same object (xax).<br /> <br /> ==DNNs trained with Target and Flankers==<br /> This is a constant-spacing training setup in which identical flankers are placed 120 pixels to either side of the target (xax), with the target having translational variance. The tests are evaluated on (i) the DCNN with at-end pooling, and (ii) the eccentricity-dependent model with 11-11-11-11-1 scale pooling, at-end spatial pooling, and contrast normalization. The test data has the different flanker configurations described above.<br /> ===Observations===<br /> <br /> <br /> =Conclusions=<br /> One might expect that training the network on data similar to the test data would also achieve good results in a more general scenario, but that is not the case here: the model was trained with flankers, yet it did not give ideal results for the target objects.</div> Jssambee http://wiki.math.uwaterloo.ca/statwiki/index.php?title=stat946w18&diff=34629 stat946w18 2018-03-18T22:34:50Z <p>Jssambee: /* Paper presentation */</p> <hr /> <div>=[https://piazza.com/uwaterloo.ca/fall2017/stat946/resources List of Papers]=<br /> <br /> = Record your contributions here [https://docs.google.com/spreadsheets/d/1fU746Cld_mSqQBCD5qadvkXZW1g-j-kHvmHQ6AMeuqU/edit?usp=sharing]=<br /> <br /> Use the following notations:<br /> <br /> P: You have written a summary/critique on the paper.<br /> <br /> T: You had a technical contribution on a paper (excluding the paper that you present).<br /> <br /> E: You had an editorial contribution on a paper (excluding the paper that you present).<br /> <br /> <br /> <br /> [https://docs.google.com/forms/d/e/1FAIpQLSdcfYZu5cvpsbzf0Nlxh9TFk8k1m5vUgU1vCLHQNmJog4xSHw/viewform?usp=sf_link Your feedback on presentations]<br /> <br /> =Paper presentation=<br /> {| class=&quot;wikitable&quot; border=&quot;1&quot; cellpadding=&quot;3&quot;<br /> |-<br /> |width=&quot;60pt&quot;|Date<br /> |width=&quot;100pt&quot;|Name <br /> |width=&quot;30pt&quot;|Paper number <br /> |width=&quot;700pt&quot;|Title<br /> |width=&quot;30pt&quot;|Link to the paper<br /> |width=&quot;30pt&quot;|Link to the summary<br /> |-<br /> |Feb 15 (example)||Ri Wang || 
||Sequence to sequence learning with neural networks.||[http://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf Paper] || [http://wikicoursenote.com/wiki/Stat946f15/Sequence_to_sequence_learning_with_neural_networks#Long_Short-Term_Memory_Recurrent_Neural_Network Summary]<br /> |-<br /> |Feb 27 || || 1|| || || <br /> |-<br /> |Feb 27 || || 2|| || || <br /> |-<br /> |Feb 27 || || 3|| || || <br /> |-<br /> |Mar 1 || Peter Forsyth || 4|| Unsupervised Machine Translation Using Monolingual Corpora Only || [https://arxiv.org/pdf/1711.00043.pdf Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=stat946w18/Unsupervised_Machine_Translation_Using_Monolingual_Corpora_Only Summary]<br /> |-<br /> |Mar 1 || Wenqing Liu || 5|| Spectral Normalization for Generative Adversarial Networks || [https://openreview.net/pdf?id=B1QRgziT- Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=stat946w18/Spectral_normalization_for_generative_adversial_network Summary]<br /> |-<br /> |Mar 1 || Ilia Sucholutsky || 6|| One-Shot Imitation Learning || [https://papers.nips.cc/paper/6709-one-shot-imitation-learning.pdf Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=One-Shot_Imitation_Learning Summary]<br /> |-<br /> |Mar 6 || George (Shiyang) Wen || 7|| AmbientGAN: Generative models from lossy measurements || [https://openreview.net/pdf?id=Hy7fDog0b Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=stat946w18/AmbientGAN:_Generative_Models_from_Lossy_Measurements Summary]<br /> |-<br /> |Mar 6 || Raphael Tang || 8|| Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolutional Layers || [https://arxiv.org/pdf/1802.00124.pdf Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=stat946w18/Rethinking_the_Smaller-Norm-Less-Informative_Assumption_in_Channel_Pruning_of_Convolutional_Layers Summary]<br /> |-<br /> |Mar 6 ||Fan Xia || 9|| Word translation without 
parallel data ||[https://arxiv.org/pdf/1710.04087.pdf Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=Word_translation_without_parallel_data Summary]<br /> |-<br /> |Mar 8 || Alex (Xian) Wang || 10 || Self-Normalizing Neural Networks || [http://papers.nips.cc/paper/6698-self-normalizing-neural-networks.pdf Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=stat946w18/Self_Normalizing_Neural_Networks Summary] <br /> |-<br /> |Mar 8 || Michael Broughton || 11|| Convergence of Adam and beyond || [https://openreview.net/pdf?id=ryQu7f-RZ Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=On_The_Convergence_Of_ADAM_And_Beyond Summary] <br /> |-<br /> |Mar 8 || Wei Tao Chen || 12|| Predicting Floor-Level for 911 Calls with Neural Networks and Smartphone Sensor Data || [https://openreview.net/forum?id=ryBnUWb0b Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=stat946w18/Predicting_Floor-Level_for_911_Calls_with_Neural_Networks_and_Smartphone_Sensor_Data Summary]<br /> |-<br /> |Mar 13 || Chunshang Li || 13 || UNDERSTANDING IMAGE MOTION WITH GROUP REPRESENTATIONS || [https://openreview.net/pdf?id=SJLlmG-AZ Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=Understanding_Image_Motion_with_Group_Representations Summary] <br /> |-<br /> |Mar 13 || Saifuddin Hitawala || 14 || Robust Imitation of Diverse Behaviors || [https://papers.nips.cc/paper/7116-robust-imitation-of-diverse-behaviors.pdf Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=Robust_Imitation_of_Diverse_Behaviors Summary] <br /> |-<br /> |Mar 13 || Taylor Denouden || 15|| A neural representation of sketch drawings || [https://arxiv.org/pdf/1704.03477.pdf Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=A_Neural_Representation_of_Sketch_Drawings Summary]<br /> |-<br /> |Mar 15 || Zehao Xu || 16|| Synthetic and natural noise both break neural machine translation || 
[https://openreview.net/pdf?id=BJ8vJebC- Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=stat946w18/Synthetic_and_natural_noise_both_break_neural_machine_translation Summary]<br /> |-<br /> |Mar 15 || Prarthana Bhattacharyya || 17|| Wasserstein Auto-Encoders || [https://arxiv.org/pdf/1711.01558.pdf Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=Wasserstein_Auto-Encoders Summary] <br /> |-<br /> |Mar 15 || Changjian Li || 18|| Label-Free Supervision of Neural Networks with Physics and Domain Knowledge || [https://arxiv.org/pdf/1609.05566.pdf Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=Label-Free_Supervision_of_Neural_Networks_with_Physics_and_Domain_Knowledge Summary]<br /> |-<br /> |Mar 20 || Travis Dunn || 19|| Continuous Adaptation via Meta-Learning in Nonstationary and Competitive Environments || [https://openreview.net/pdf?id=Sk2u1g-0- Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=Continuous_Adaptation_via_Meta-Learning_in_Nonstationary_and_Competitive_Environments Summary]<br /> |-<br /> |Mar 20 || Sushrut Bhalla || 20|| MaskRNN: Instance Level Video Object Segmentation || [https://papers.nips.cc/paper/6636-maskrnn-instance-level-video-object-segmentation.pdf Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=stat946w18/MaskRNN:_Instance_Level_Video_Object_Segmentation Summary]<br /> |-<br /> |Mar 20 || Hamid Tahir || 21|| Wavelet Pooling for Convolution Neural Networks || [https://openreview.net/pdf?id=rkhlb8lCZ Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=Wavelet_Pooling_CNN Summary]<br /> |-<br /> |Mar 22 || Dongyang Yang|| 22|| Implicit Causal Models for Genome-wide Association Studies || [https://openreview.net/pdf?id=SyELrEeAb Paper] ||[https://wiki.math.uwaterloo.ca/statwiki/index.php?title=stat946w18/Implicit_Causal_Models_for_Genome-wide_Association_Studies Summary]<br /> |-<br /> |Mar 22 || Yao Li || 23||Improving GANs Using 
Optimal Transport || [https://openreview.net/pdf?id=rkQkBnJAb Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=stat946w18/IMPROVING_GANS_USING_OPTIMAL_TRANSPORT Summary]<br /> |-<br /> |Mar 22 || Sahil Pereira || 24||End-to-End Differentiable Adversarial Imitation Learning|| [http://proceedings.mlr.press/v70/baram17a/baram17a.pdf Paper] || [http://proceedings.mlr.press/v70/baram17a/baram17a.pdf Summary]<br /> |-<br /> |Mar 27 || Jaspreet Singh Sambee || 25|| Do Deep Neural Networks Suffer from Crowding? || [http://papers.nips.cc/paper/7146-do-deep-neural-networks-suffer-from-crowding.pdf Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=Do_Deep_Neural_Networks_Suffer_from_Crowding Summary]<br /> |-<br /> |Mar 27 || Braden Hurl || 26|| Spherical CNNs || [https://openreview.net/pdf?id=Hkbd5xZRb Paper] || <br /> |-<br /> |Mar 27 || Marko Ilievski || 27|| Neural Audio Synthesis of Musical Notes with WaveNet Autoencoders || [http://proceedings.mlr.press/v70/engel17a/engel17a.pdf Paper] || <br /> |-<br /> |Mar 29 || Alex Pon || 28||PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space || [https://arxiv.org/abs/1706.02413 Paper] ||<br /> |-<br /> |Mar 29 || Sean Walsh || 29||Multi-scale Dense Networks for Resource Efficient Image Classification || [https://arxiv.org/pdf/1703.09844.pdf Paper] ||<br /> |-<br /> |Mar 29 || Jason Ku || 30||MarrNet: 3D Shape Reconstruction via 2.5D Sketches ||[https://arxiv.org/pdf/1711.03129.pdf Paper] ||<br /> |-<br /> |Apr 3 || Tong Yang || 31|| Dynamic Routing Between Capsules. 
|| [http://papers.nips.cc/paper/6975-dynamic-routing-between-capsules.pdf Paper] || <br /> |-<br /> |Apr 3 || Benjamin Skikos || 32|| Training and Inference with Integers in Deep Neural Networks || [https://openreview.net/pdf?id=HJGXzmspb Paper] || <br /> |-<br /> |Apr 3 || Weishi Chen || 33|| Tensorized LSTMs for Sequence Learning || [https://arxiv.org/pdf/1711.01577.pdf Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=stat946w18/Tensorized_LSTMs&amp;action=edit&amp;redlink=1 Summary]<br /> |-<br /> |}</div> Jssambee http://wiki.math.uwaterloo.ca/statwiki/index.php?title=Wavelet_Pooling_CNN&diff=34585 Wavelet Pooling CNN 2018-03-17T23:20:22Z <p>Jssambee: /* Pooling Background */</p> <hr /> <div>== Introduction ==<br /> It is generally the case that Convolutional Neural Networks (CNNs) outperform vector-based deep learning techniques. As such, the fundamentals of CNNs are good candidates for innovation in order to improve that performance. The pooling layer is one of these fundamentals, and although various methods exist, ranging from the simple and deterministic (max pooling and average pooling) to the probabilistic (mixed pooling and stochastic pooling), all of these methods employ a neighborhood approach to the sub-sampling which, albeit fast and simple, can produce artifacts such as blurring, aliasing, and edge halos (Parker et al., 1983).<br /> <br /> This paper introduces a novel pooling method based on the discrete wavelet transform. Specifically, it uses a second-level wavelet decomposition for the sub-sampling. 
This method, instead of nearest-neighbor interpolation, uses a sub-band method that the authors claim produces fewer artifacts and represents the underlying features more accurately. Therefore, if pooling is viewed as a lossy process, the reason for employing a wavelet approach is to try to minimize this loss.<br /> <br /> == Pooling Background ==<br /> Pooling essentially means sub-sampling. After the pooling layer, the spatial dimensions of the data are reduced to some degree, the goal being to compress the data while discarding as little information as possible. Typical approaches to pooling reduce the dimensionality by combining a region of values into one value. For max pooling, this can be represented by the equation &lt;math&gt;a_{kij} = \max_{(p,q) \in R_{ij}} a_{kpq}&lt;/math&gt; where &lt;math&gt;a_{kij}&lt;/math&gt; is the output activation of the &lt;math&gt;k^{th}&lt;/math&gt; feature map at &lt;math&gt;(i,j)&lt;/math&gt;, &lt;math&gt;a_{kpq}&lt;/math&gt; is the input activation at &lt;math&gt;(p,q)&lt;/math&gt; within &lt;math&gt;R_{ij}&lt;/math&gt;, and &lt;math&gt;|R_{ij}|&lt;/math&gt; is the size of the pooling region. Mean pooling can be represented by the equation &lt;math&gt;a_{kij} = \frac{1}{|R_{ij}|} \sum_{(p,q) \in R_{ij}} a_{kpq}&lt;/math&gt; with everything defined as before. Figure 1 provides a numerical example that can be followed.<br /> <br /> [[File:WT_Fig1.PNG|650px|center|]]<br /> <br /> The paper mentions that these pooling methods, although simple and effective, have shortcomings. Max pooling can omit details from an image if the important features have lower intensity than the insignificant ones, and it also commonly overfits. Average pooling, on the other hand, can dilute important features if the data is averaged with values of significantly lower intensity. 
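The two pooling equations above can be illustrated with a short NumPy sketch; `pool2d` is an illustrative helper, and the 4x4 input values are hypothetical rather than the ones in Figure 1:

```python
import numpy as np

def pool2d(x, size=2, mode="max"):
    """Pool non-overlapping size x size regions of a 2D array.

    Implements a_{kij} = max (or mean) of a_{kpq} over each region R_{ij}.
    """
    h, w = x.shape
    # reshape so each pooling region R_{ij} becomes the last two axes
    regions = x.reshape(h // size, size, w // size, size).swapaxes(1, 2)
    if mode == "max":
        return regions.max(axis=(2, 3))
    return regions.mean(axis=(2, 3))

x = np.array([[1., 3., 2., 4.],
              [5., 7., 6., 8.],
              [4., 2., 3., 1.],
              [8., 6., 7., 5.]])
print(pool2d(x, mode="max"))   # [[7. 8.] [8. 7.]]
print(pool2d(x, mode="mean"))  # [[4. 5.] [5. 4.]]
```

Note how the mean output (all values between 4 and 5) illustrates the dilution effect described above, while max pooling keeps only the strongest responses.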
Figure 2 displays an image of this.<br /> <br /> [[File:WT_Fig2.PNG|650px|center|]]<br /> <br /> To account for the issues mentioned above, probabilistic pooling methods were introduced, namely mixed pooling and stochastic pooling. Mixed pooling is a simple method that combines max and average pooling by randomly selecting one method over the other during training. Stochastic pooling, on the other hand, randomly samples within a receptive field using the activation values as the probabilities. These are calculated by dividing each activation value by the sum of all activation values in the grid, so that the probabilities sum to 1.<br /> <br /> Figure 3 shows an example of how stochastic pooling works. On the left is a 3x3 grid filled with activations. The middle grid is the corresponding probability for each activation. The activation in the middle was randomly selected (it had a 13% chance of being selected).<br /> <br /> [[File:paper21-stochasticpooling.png|650px|center|]]<br /> <br /> == Wavelet Background ==<br /> Data or signals tend to be composed of slowly changing trends (low frequency) as well as fast-changing transients (high frequency). Similarly, images have smooth regions of intensity which are perturbed by edges or abrupt changes. These abrupt changes can represent features that are of great importance to us when we perform deep learning. Wavelets are a class of functions that are well localized in time and frequency. Compare this to the Fourier transform, which represents signals as sums of sine waves that oscillate forever (not localized in time and space). The ability of wavelets to be localized in time and space is what makes them suitable for detecting abrupt changes in an image. <br /> <br /> Essentially, a wavelet is a fast-decaying, oscillating signal with zero mean that only exists for a fixed duration and can be scaled and shifted in time. 
There are some well-defined types of wavelets, as shown in the figure below. The key characteristic of wavelets for our purposes is that they have a band-pass characteristic, and the band can be adjusted through scaling and shifting. <br /> <br /> [[File:WT_Fig3.jpg|650px|center|]]<br /> <br /> The paper uses the discrete wavelet transform, and more specifically a faster variant called the Fast Wavelet Transform (FWT), with the Haar wavelet. There also exists a continuous wavelet transform; the main difference between the two is how the scale and shift parameters are selected.<br /> <br /> == Discrete Wavelet Transform General==<br /> The discrete wavelet transform for images essentially applies a low-pass and a high-pass filter to the image, where the transfer functions of the filters are related and defined by the type of wavelet used (Haar in this paper). This is shown in the figures below, which also show the recursive nature of the transform. For an image, the per-row transform is taken first. This results in a new image whose first half is a low-frequency sub-band and whose second half is a high-frequency sub-band. This new image is then transformed again per column, resulting in four sub-bands. Generally, the low-frequency content approximates the image and the high-frequency content represents abrupt changes. Therefore, one can simply take the LL band and perform the transformation again to sub-sample even further.<br /> <br /> [[File:WT_Fig8.png|650px|center|]]<br /> <br /> [[File:WT_Fig9.png|650px|center|]]<br /> <br /> == DWT example using Haar Wavelet ==<br /> Suppose we have an image represented by the following pixels:<br /> &lt;math&gt; \begin{bmatrix} <br /> 100 &amp; 50 &amp; 60 &amp; 150 \\<br /> 20 &amp; 60 &amp; 40 &amp; 30 \\<br /> 50 &amp; 90 &amp; 70 &amp; 82 \\<br /> 74 &amp; 66 &amp; 90 &amp; 58 \\<br /> \end{bmatrix} &lt;/math&gt;<br /> <br /> For each level of the DWT using the Haar wavelet, we perform the transform on the rows first and then on the columns. 
For the row pass, we transform each row as follows:<br /> * Take row i = [i1, i2, i3, i4], and let i_t = [a1, a2, d1, d2] represent the transformed row<br /> * a1 = (i1 + i2)/2<br /> * a2 = (i3 + i4)/2<br /> * d1 = (i1 - i2)/2<br /> * d2 = (i3 - i4)/2<br /> <br /> After the row transforms, the image looks as follows:<br /> &lt;math&gt; \begin{bmatrix} <br /> 75 &amp; 105 &amp; 25 &amp; -45 \\<br /> 40 &amp; 35 &amp; -20 &amp; 5 \\<br /> 70 &amp; 76 &amp; -20 &amp; -6 \\<br /> 70 &amp; 74 &amp; 4 &amp; 16 \\<br /> \end{bmatrix} &lt;/math&gt;<br /> <br /> Now we apply the same method to the columns in exactly the same way.<br /> <br /> == Proposed Method ==<br /> The proposed method uses the sub-bands from the second-level FWT and discards the first-level sub-bands. The authors postulate that this method is more 'organic' in capturing the data compression and creates fewer artifacts that may affect the image classification.<br /> === Forward Propagation ===<br /> The FWT can be expressed by &lt;math&gt;W_\varphi[j + 1, k] = h_\varphi[-n]*W_\varphi[j,n]|_{n = 2k, k &lt;= 0}&lt;/math&gt; and &lt;math&gt;W_\psi[j + 1, k] = h_\psi[-n]*W_\psi[j,n]|_{n = 2k, k &lt;= 0}&lt;/math&gt; where &lt;math&gt;\varphi&lt;/math&gt; is the approximation function, &lt;math&gt;\psi&lt;/math&gt; is the detail function, &lt;math&gt;W_\varphi&lt;/math&gt; and &lt;math&gt;W_\psi&lt;/math&gt; are the approximation and detail coefficients, &lt;math&gt;h_\varphi[-n]&lt;/math&gt; and &lt;math&gt;h_\psi[-n]&lt;/math&gt; are the time-reversed scaling and wavelet vectors, &lt;math&gt;(n)&lt;/math&gt; represents the sample in the vector, and &lt;math&gt;j&lt;/math&gt; denotes the resolution level. To apply this to images, the FWT is first applied on the rows and then on the columns. If a low (L) and a high (H) sub-band are extracted from the rows, and similarly for the columns, then at each level there are 4 sub-bands (LL, LH, HL, and HH), where LL is further decomposed into the level-2 decomposition. 
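The row-and-column procedure from the worked example above can be sketched in plain Python; the helper names are illustrative, and the transform follows the averaging/differencing Haar convention used in this example:

```python
def haar_rows(img):
    """One Haar pass over rows: [i1, i2, i3, i4] -> [a1, a2, d1, d2],
    with averages a = (p + q)/2 and details d = (p - q)/2 per pair."""
    out = []
    for row in img:
        avgs = [(row[k] + row[k + 1]) / 2 for k in range(0, len(row), 2)]
        dets = [(row[k] - row[k + 1]) / 2 for k in range(0, len(row), 2)]
        out.append(avgs + dets)
    return out

def transpose(img):
    return [list(col) for col in zip(*img)]

def haar_level(img):
    """One 2D Haar level: transform the rows, then the columns."""
    return transpose(haar_rows(transpose(haar_rows(img))))

img = [[100, 50, 60, 150],
       [20, 60, 40, 30],
       [50, 90, 70, 82],
       [74, 66, 90, 58]]
print(haar_rows(img)[0])  # [75.0, 105.0, 25.0, -45.0], matching the matrix above
```

After `haar_level`, the top-left quadrant of the result is the LL band, which would be transformed again to obtain the level-2 decomposition.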
<br /> <br /> Using the level-2 decomposition sub-bands, the Inverse Fast Wavelet Transform (IFWT) is used to obtain the resulting image, which is sub-sampled by a factor of two. The equation for the IFWT is &lt;math&gt;W_\varphi[j, k] = h_\varphi[-n]*W_\varphi[j + 1,n] + h_\psi[-n]*W_\psi[j + 1,n]|_{n = \frac{k}{2}, k &lt;= 0}&lt;/math&gt; where the parameters are the same as previously explained. Figure 4 displays the algorithm for the forward propagation.<br /> <br /> [[File:WT_Fig6.PNG|650px|center|]]<br /> <br /> === Back Propagation ===<br /> This is simply the reverse of the forward propagation. The FWT of the image is upsampled to be used as the level-2 decomposition. The IFWT is then performed to obtain the original image, upsampled by a factor of two using wavelet methods. Figure 5 displays the algorithm.<br /> <br /> [[File:WT_Fig7.PNG|650px|center|]]<br /> <br /> == Results ==<br /> The authors test on MNIST, CIFAR-10, SVHN, and KDEF, and the paper provides comprehensive results for each. Stochastic gradient descent is used, and the Haar wavelet is chosen due to its even, square sub-bands. The network for all datasets except MNIST is loosely based on (Zeiler &amp; Fergus, 2013). The authors keep the network consistent but change the pooling method for each dataset. They also experiment with dropout and batch normalization to examine the effects of regularization on their method. All pooling methods compared use a 2x2 window. The overall results teach us that the pooling method should be chosen according to the type of data at hand: in some cases wavelet pooling may perform best, and in other cases other methods may perform better if the data is more suited to those types of pooling.<br /> <br /> === MNIST ===<br /> Figure 7 shows the network and Table 1 shows the accuracy. It can be seen that wavelet pooling achieves the best accuracy of all the pooling methods compared. 
Figure 8 shows the energy of each method per epoch.<br /> <br /> [[File:WT_Fig4.PNG|650px|center|]]<br /> <br /> [[File:paper21_fig8.png|800px|center]]<br /> <br /> [[File:WT_Tab1.PNG|650px|center|]]<br /> <br /> === CIFAR-10 ===<br /> In order to investigate the performance of different pooling methods, two networks are trained on CIFAR-10: a regular CNN, and the same network with dropout and batch normalization. Figure 9 shows the network and Tables 2 and 3 show the accuracy without and with dropout. Average pooling achieves the best accuracy, but wavelet pooling is still competitive.<br /> <br /> [[File:WT_Fig5.PNG|650px|center|]]<br /> <br /> [[File:paper21_fig10.png|800px|center]]<br /> <br /> [[File:WT_Tab2.PNG|650px|center|]]<br /> <br /> [[File:WT_Tab3.PNG|650px|center|]]<br /> <br /> === SVHN ===<br /> Figure 11 shows the network and Tables 4 and 5 show the accuracy without and with dropout. The proposed method does not perform well in this experiment. <br /> <br /> [[File: a.png|650px|center|]]<br /> <br /> [[File:paper21_fig12.png|800px|center]]<br /> <br /> [[File: b.png|650px|center|]]<br /> <br /> == Computational Complexity ==<br /> The authors explain that their paper is a proof of concept and is not meant to implement wavelet pooling in the most efficient way. The table below compares the number of mathematical operations for each method on each dataset. It can be seen that wavelet pooling requires significantly more operations.
The authors explain that with careful implementation and good coding practices, the method can prove to be viable.<br /> <br /> [[File:WT_Tab4.PNG|650px|center|]]<br /> <br /> == Criticism ==<br /> === Positive ===<br /> * Wavelet pooling achieves competitive performance with standard go-to pooling methods<br /> * Invites comparison with other discrete transformation techniques for pooling (DCT, DFT)<br /> === Negative ===<br /> * Only a 2x2 pooling window is used for comparison<br /> * Computationally expensive<br /> * Not as simple as other pooling methods<br /> * Only one wavelet used (the Haar wavelet)<br /> <br /> == References ==<br /> Travis Williams and Robert Li. Wavelet Pooling for Convolutional Neural Networks. ICLR 2018.</div> <div>= Continuous Adaptation via Meta-Learning in Nonstationary and Competitive Environments =<br /> <br /> = Introduction =<br /> <br /> Typically, the basic goal of machine learning is to train a model to perform a task. In meta-learning, the goal is to train a model to perform the task of training a model to perform a task. Hence the term &quot;Meta-Learning&quot; has exactly the meaning you would expect; the word &quot;Meta&quot; introduces a layer of abstraction.<br /> <br /> The meta-learning task can be made more concrete by a simple example. Consider the CIFAR-100 classification task that we used for our data competition. We can alter this task from being a 100-class classification problem to a collection of 100 binary classification problems. The goal of meta-learning here is to design and train a single binary classifier that will perform well on a randomly sampled task given a limited amount of training data for that specific task.
In other words, we would like to train a model to perform the following procedure:<br /> <br /> # A task is sampled. The task is &quot;Is X a dog?&quot;<br /> # A small set of labeled training data is provided to the model. The labels represent whether or not the image is a picture of a dog.<br /> # The model uses the training data to adjust itself to the specific task of checking whether or not an image is a picture of a dog.<br /> <br /> This example also highlights the intuition that the skill of sight is distinct and separable from the skill of knowing what a dog looks like.<br /> <br /> In this paper, a probabilistic framework for meta-learning is derived and then applied to tasks involving simulated robotic spiders. This framework generalizes the typical machine learning setup using Markov Decision Processes. The paper focuses on multi-agent nonstationary environments, which require reinforcement learning (RL) agents to adapt continuously. Nonstationarity breaks the standard assumptions and requires agents to continuously adapt, both at training and execution time, in order to keep earning rewards; the approach taken is to break the nonstationary environment into a sequence of stationary tasks and present it as a multi-task learning problem.<br /> <br /> = Model Agnostic Meta-Learning =<br /> <br /> An initial framework for meta-learning is given in &quot;Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks&quot; (Finn et al., 2017):<br /> <br /> &quot;In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task&quot; (Finn et al., 2017).<br /> <br /> [[File:MAML.png | 500px]]<br /> <br /> In this training algorithm, the parameter vector &lt;math&gt;\theta&lt;/math&gt; belonging to the model &lt;math&gt;f_{\theta}&lt;/math&gt; is trained such that the meta-objective
function &lt;math&gt;\mathcal{L} (\theta) = \sum_{\tau_i \sim P(\tau)} \mathcal{L}_{\tau_i} (f_{\theta_i' }) &lt;/math&gt; is minimized. The sum in the objective function is over a sampled batch of training tasks. &lt;math&gt;\mathcal{L}_{\tau_i} (f_{\theta_i'})&lt;/math&gt; is the training loss function corresponding to the &lt;math&gt;i^{th}&lt;/math&gt; task in the batch evaluated at the model &lt;math&gt;f_{\theta_i'}&lt;/math&gt;. The parameter vector &lt;math&gt;\theta_i'&lt;/math&gt; is obtained by updating the general parameter &lt;math&gt;\theta&lt;/math&gt; using the loss function &lt;math&gt;\mathcal{L}_{\tau_i}&lt;/math&gt; and a set of K training examples specific to the &lt;math&gt;i^{th}&lt;/math&gt; task. Note that in alternate versions of this algorithm, additional testing sets are sampled from &lt;math&gt;\tau_i&lt;/math&gt; and used to update &lt;math&gt;\theta&lt;/math&gt; using testing loss functions instead of training loss functions.<br /> <br /> One important difference between this algorithm and more typical fine-tuning methods is that &lt;math&gt;\theta&lt;/math&gt; is explicitly trained to be easily adjustable to many different tasks, rather than trained to perform well on one specific task and then fine-tuned as the environment changes (Sutton et al., 2007).<br /> <br /> = Probabilistic Framework for Meta-Learning =<br /> <br /> This paper puts the meta-learning problem into a Markov Decision Process (MDP) framework common to RL. Instead of training examples &lt;math&gt;\{(x, y)\}&lt;/math&gt;, we have trajectories &lt;math&gt;\tau = (x_0, a_1, x_1, R_1, x_2, ... a_H, x_H, R_H)&lt;/math&gt;. A trajectory is a sequence of states/observations &lt;math&gt;x_t&lt;/math&gt;, actions &lt;math&gt;a_t&lt;/math&gt; and rewards &lt;math&gt;R_t&lt;/math&gt; that is sampled from a task &lt;math&gt; T &lt;/math&gt; according to a policy &lt;math&gt;\pi_{\theta}&lt;/math&gt;.
Included with said task is a method for assigning loss values to trajectories, &lt;math&gt;L_T(\tau)&lt;/math&gt;, which is typically the negative cumulative reward. A policy is a deterministic function that takes in a state and returns an action. Our goal here is to train a policy &lt;math&gt;\pi_{\theta}&lt;/math&gt; with parameter vector &lt;math&gt;\theta&lt;/math&gt;. This is analogous to training a function &lt;math&gt;f_{\theta}&lt;/math&gt; that assigns labels &lt;math&gt;y&lt;/math&gt; to feature vectors &lt;math&gt;x&lt;/math&gt;. More precisely we have the following definitions:<br /> <br /> * &lt;math&gt;T :=(L_T, P_T(x_0), P_T(x_t | x_{t-1}, a_t), H )&lt;/math&gt; (A Task)<br /> * &lt;math&gt;D(T)&lt;/math&gt;: A distribution over tasks.<br /> * &lt;math&gt;L_T&lt;/math&gt;: A loss function for the task T that assigns numeric loss values to trajectories.<br /> * &lt;math&gt;P_T(x_0), P_T(x_t | x_{t-1}, a_t)&lt;/math&gt;: Probability measures specifying the Markovian dynamics of the observations &lt;math&gt;x_t&lt;/math&gt;.<br /> * &lt;math&gt;H&lt;/math&gt;: The horizon of the MDP. This is a fixed natural number specifying the length of the task's trajectories.<br /> <br /> The paper goes further to define a Markov dynamic over sequences of tasks. Thus the policy that we would like to meta-learn, &lt;math&gt;\pi_{\theta}&lt;/math&gt;, after being exposed to a sample of K trajectories &lt;math&gt;\tau_T^{1:k}&lt;/math&gt; from the task &lt;math&gt;T_i&lt;/math&gt;, should produce a new policy &lt;math&gt;\pi_{\phi}&lt;/math&gt; that will perform well on the next task &lt;math&gt;T_{i+1}&lt;/math&gt;.
Thus we seek to minimize the following expectation:<br /> <br /> &lt;math&gt;\mathrm{E}_{P(T_0), P(T_{i+1} | T_i)}\bigg(\sum_{i=1}^{l} \mathcal{L}_{T_i, T_{i+1}}(\theta)\bigg)&lt;/math&gt;<br /> <br /> where &lt;math&gt;\mathcal{L}_{T_i, T_{i+1}}(\theta) = \mathrm{E}_{\tau_i^{1:k} } \bigg( \mathrm{E}_{\tau_{i+1, \phi}}\Big( L_{T_{i+1}}(\tau_{i+1, \phi}) \Big) \bigg) &lt;/math&gt; and &lt;math&gt;l&lt;/math&gt; is the number of tasks.<br /> <br /> The meta-policy &lt;math&gt;\pi_{\theta}&lt;/math&gt; is trained and then adapted at test time using the following procedures.<br /> <br /> [[File:MAML2.png | 800px]]<br /> <br /> The mathematics of calculating loss gradients is omitted.<br /> <br /> = Training Spiders to Run with Dynamic Handicaps (Robotic Locomotion in Non-Stationary Environments) =<br /> <br /> The authors used the MuJoCo physics simulator to create a simulated environment where six-legged robotic spiders are faced with the task of running due east as quickly as possible. The robotic spider observes the location and velocity of its body, and the angles and velocities of its legs. It interacts with the environment by exerting torque on the joints of its legs. Each leg has two joints; the joint closer to the body rotates horizontally, while the joint farther from the body rotates vertically. The environment is made non-stationary by gradually paralyzing two legs of the spider across training and testing episodes.<br /> Putting this example into the above probabilistic framework yields:<br /> <br /> * &lt;math&gt;T_i&lt;/math&gt;: The task of walking east with the torques of two legs scaled by &lt;math&gt; (i-1)/6 &lt;/math&gt;<br /> * &lt;math&gt;\{T_i\}_{i=1}^{7}&lt;/math&gt;: A sequence of tasks with the same two legs handicapped in each task. Note there are 15 different ways to choose such legs, resulting in 15 sequences of tasks.
12 are used for training and 3 for testing.<br /> * A Markov Decision Process composed of<br /> ** Observations &lt;math&gt; x_t &lt;/math&gt; containing information about the state of the spider.<br /> ** Actions &lt;math&gt; a_t &lt;/math&gt; containing information about the torques to apply to the spider's legs.<br /> ** Rewards &lt;math&gt; R_t &lt;/math&gt; corresponding to the speed at which the spider is moving east.<br /> <br /> Three differently structured policy neural networks are trained in this setup using both meta-learning and three different previously developed adaptation methods.<br /> <br /> At testing time, the spiders following meta-learned policies initially perform worse than the spiders using non-adaptive policies. However, by the third episode &lt;math&gt; i=3 &lt;/math&gt; the meta-learners perform on par, and by the sixth episode, when the selected legs are mostly immobile, the meta-learners significantly outperform the alternatives. These results can be seen in the graphs below.<br /> <br /> [[File:locomotion_results.png | 800px]]<br /> <br /> = Training Spiders to Fight Each Other (Adversarial Meta-Learning) =<br /> <br /> The authors created an adversarial environment called RoboSumo in which pairs of agents with 4 legs (Ants), 6 legs (Bugs), or 8 legs (Spiders) sumo wrestle. The agents observe the location and velocity of their bodies and the bodies of their opponent, the angles and velocities of their legs, and the forces being exerted on them by their opponent (the equivalent of a tactile sense). The game is organized into episodes and rounds. Episodes are single wrestling matches with 500 time steps and win/lose/draw outcomes. Agents win by pushing their opponent out of the ring or making their opponent's body touch the ground; an episode results in a draw when neither of these things happens within the 500 time steps. Rounds are batches of episodes. Rounds have possible outcomes win, lose, and draw, decided by the majority of episodes won.
K rounds will be fought. Both agents may update their policies between rounds. The agent that wins the majority of rounds is deemed the winner of the game.<br /> <br /> == Setup ==<br /> Similar to the robotic locomotion example, this game can be phrased in terms of the RL MDP framework.<br /> <br /> * &lt;math&gt;T_i&lt;/math&gt;: The task of fighting a round.<br /> * &lt;math&gt;\{T_i\}_{i=1}^{K}&lt;/math&gt;: A sequence of rounds against the same opponent. Note that the opponent may update their policy between rounds, but the anatomy of both wrestlers will be constant across rounds.<br /> * A Markov Decision Process composed of<br /> ** A horizon &lt;math&gt;H = 500*n&lt;/math&gt; where &lt;math&gt;n&lt;/math&gt; is the number of episodes per round.<br /> ** Observations &lt;math&gt; x_t &lt;/math&gt; containing information about the state of the agent and its opponent.<br /> ** Actions &lt;math&gt; a_t &lt;/math&gt; containing information about the torques to apply to the agent's legs.<br /> ** Rewards &lt;math&gt; R_t &lt;/math&gt; given to the agent based on its wrestling performance: &lt;math&gt;R_{500*n} = &lt;/math&gt; +2000 for winning an episode, -2000 for losing, and -1000 for a draw.<br /> <br /> Note that the above reward setup is quite sparse; therefore, in order to encourage fast training, rewards are introduced at every time step for the following:<br /> * For staying close to the center of the ring.<br /> * For exerting force on the opponent's body.<br /> * For moving towards the opponent.<br /> * For the distance of the opponent to the center of the ring.<br /> <br /> This makes sense intuitively, as these are reasonable goals for agents to explore when they are learning to wrestle.<br /> <br /> == Training ==<br /> The same combinations of policy networks and adaptation methods that were used in the locomotion example are trained and tested here. A family of non-adaptive policies is first trained via self-play and saved at all stages.
Self-play simply means that the two agents in the training environment use the same policy. All policy versions are saved so that agents of various skill levels can be sampled when training meta-learners. The weights of the different insects were calibrated such that the test win rate between two insects of differing anatomy, trained for the same number of epochs via self-play, is close to 50%.<br /> <br /> [[File:weight_cal.png | 800px]]<br /> <br /> We can see in the above figure that the weight of the spider had to be increased almost fourfold in order for the agents to be evenly matched.<br /> <br /> [[File:robosumo_results.png | 800px]]<br /> <br /> The above figure shows testing results for various adaptation strategies. The agent and opponent both start with the self-trained policies. The opponent uses all of its testing experience to continue training, while the agent uses only the last 75 episodes to adapt its policy network. This shows that meta-learners need only a limited amount of experience in order to hold their own against a constantly improving opponent.<br /> <br /> = Future Work =<br /> The authors mention that their approach will likely not work well with sparse rewards, because the meta-updates, which use policy gradients, are very dependent on the reward signal. They mention that this is an issue they would like to address in the future. A potential solution they have outlined is to introduce auxiliary dense rewards, which could enable meta-learning.<br /> <br /> = Sources =<br /> # Chelsea Finn, Pieter Abbeel, Sergey Levine. &quot;Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks.&quot; arXiv preprint arXiv:1703.03400v3 (2017).<br /> # Richard S Sutton, Anna Koop, and David Silver. On the role of tracking in stationary environments. In Proceedings of the 24th International Conference on Machine Learning, pp. 871–878.
ACM, 2007.</div> <div>= stat946w18/Synthetic and natural noise both break neural machine translation =<br /> <br /> == Introduction ==<br /> * Humans have surprisingly robust language processing systems which can easily overcome typos, e.g.<br /> <br /> Aoccdrnig to a rscheearch at Cmabrigde Uinervtisy, it deosn't mttaer in waht oredr the ltteers in a wrod are, the olny iprmoetnt tihng is taht the frist and lsat ltteer be at the rghit pclae.<br /> <br /> * A person's ability to read this text comes as no surprise to the psychology literature:<br /> *# Saberi &amp; Perrott (1999) found that this robustness extends to audio as well.<br /> *# Rayner et al. (2006) found that in noisier settings reading comprehension only slowed by 11%.<br /> *# McCusker et al. (1981) found that the common case of swapping letters could often go unnoticed by the reader.<br /> *# Mayall et al. (1997) showed that we rely on word shape.<br /> *# Reicher (1969) and Pelli et al. (2003) found that we can use whole-word recognition, but the first and last letter positions are required to stay constant for comprehension.<br /> <br /> However, neural machine translation (NMT) systems are brittle. For example, the Arabic word [[File:Good_morning.PNG]] is a blessing for good morning, whereas [[File:Hunt.PNG]] means hunt or slaughter. <br /> <br /> Facebook's MT system mistakenly confused two words that only differ by one character, a situation that is challenging for a character-based NMT system.<br /> <br /> Figure 1 shows the performance translating German to English as a function of the percent of German words modified. Here we show two types of noise: (1) random permutation of the word and (2) swapping a pair of adjacent letters that does not include the first or last letter of the word.
The important thing to note is that even small amounts of noise lead to substantial drops in performance.<br /> <br /> [[File:BLEU_plot.PNG]] <br /> <br /> BLEU (bilingual evaluation understudy) is an algorithm for evaluating the quality of text which has been machine-translated from one natural language to another. Quality is considered to be the correspondence between a machine's output and that of a human: &quot;the closer a machine translation is to a professional human translation, the better it is&quot;. BLEU is between 0 and 1.<br /> <br /> This paper explores two simple strategies for increasing model robustness:<br /> # using structure-invariant representations (character CNN representation)<br /> # robust training on noisy data, a form of adversarial training.<br /> <br /> The goal of the paper is two-fold:<br /> # to initiate a conversation on robust training and modeling techniques in NMT<br /> # to promote the creation of better and more linguistically accurate artificial noise to be applied to new languages and tasks<br /> <br /> == Adversarial examples ==<br /> The growing literature on adversarial examples has demonstrated how dangerous it can be to have brittle machine learning systems being used so pervasively in the real world. Small changes to the input can lead to dramatic<br /> failures of deep learning models. This leads to a potential for malicious attacks using adversarial examples. An important distinction is often drawn between white-box attacks, where adversarial examples are generated with<br /> access to the model parameters, and black-box attacks, where examples are generated without such access.<br /> <br /> The paper devises simple methods for generating adversarial examples for NMT. 
They do not assume any access to the NMT models' gradients, instead relying on cognitively-informed and naturally occurring language errors to generate noise.<br /> <br /> == MT system ==<br /> We experiment with three different NMT systems with access to character information at different levels.<br /> # Use &lt;code&gt;char2char&lt;/code&gt;, the fully character-level model of (Lee et al., 2017). This model processes a sentence as a sequence of characters. The encoder works as follows: the characters are embedded as vectors, and then the sequence of vectors is fed to a convolutional layer. The sequence output by the convolutional layer is then shortened by max pooling in the time dimension. The output of the max-pooling layer is then fed to a four-layer highway network (Srivastava et al., 2015), and the output of the highway network is in turn fed to a bidirectional GRU, producing a sequence of hidden units. The sequence of hidden units is then processed by the decoder, a GRU with attention, to produce probabilities over sequences of output characters.<br /> # Use &lt;code&gt;Nematus&lt;/code&gt; (Sennrich et al., 2017), a popular NMT toolkit. It is another sequence-to-sequence model with several architecture modifications, notably operating on sub-word units obtained using byte-pair encoding. Byte-pair encoding (Sennrich et al., 2015; Gage, 1994) is an algorithm in which we begin with a list of characters as our symbols and repeatedly fuse common combinations to create new symbols. For example, if we begin with the letters a to z as our symbol list, and we find that &quot;th&quot; is the most common two-letter combination in a corpus, then we would add &quot;th&quot; to our symbol list in the first iteration.
After we have used this algorithm to create a symbol list of the desired size, we apply a standard encoder-decoder with attention.<br /> # Use an attentional sequence-to-sequence model with a word representation based on a character convolutional neural network (&lt;code&gt;charCNN&lt;/code&gt;). The &lt;code&gt;charCNN&lt;/code&gt; model is similar to &lt;code&gt;char2char&lt;/code&gt;, but uses a shallower highway network and, although it reads the input sentence as characters, it produces as output a probability distribution over words, not characters.<br /> <br /> == Data ==<br /> === MT Data ===<br /> We use the TED talks parallel corpus prepared for IWSLT 2016 (Cettolo et al., 2012) for testing all of the NMT systems.<br /> <br /> [[File:Table1x.PNG]]<br /> <br /> === Natural and Artificial Noise ===<br /> ==== Natural Noise ====<br /> The three languages, French, German, and Czech, each have their own frequent natural errors. The corpora of edits used for these languages are:<br /> <br /> # French : Wikipedia Correction and Paraphrase Corpus (WiCoPaCo)<br /> # German : RWSE Wikipedia Correction Dataset and The MERLIN corpus<br /> # Czech : CzeSL Grammatical Error Correction Dataset (CzeSL-GEC) which is a manually annotated dataset of essays written by both non-native learners of Czech and Czech pupils<br /> <br /> The authors harvested naturally occurring errors (typos, misspellings, etc.) corresponding to these three languages from available corpora of edits to build a look-up table of possible lexical replacements.<br /> <br /> They insert these errors into the source-side of the parallel data by replacing every word in the corpus with an error if one exists in our dataset. 
When there is more than one possible replacement, one is sampled uniformly; words for which there is no error in the look-up table are kept as is.<br /> <br /> ==== Synthetic Noise ====<br /> In addition to naturally collected sources of error, we also experiment with four types of synthetic noise: Swap, Middle Random, Fully Random, and Keyboard Typo. <br /> # &lt;code&gt;Swap&lt;/code&gt;: The first and simplest source of noise is swapping two adjacent letters (do not alter the first or last letters; only apply to words of length &gt;= 4).<br /> # &lt;code&gt;Middle Random&lt;/code&gt;: Randomize the order of all the letters in a word except for the first and last (only apply to words of length &gt;= 4).<br /> # &lt;code&gt;Fully Random&lt;/code&gt;: Completely randomize words.<br /> # &lt;code&gt;Keyboard Typo&lt;/code&gt;: Randomly replace one letter in each word with an adjacent key.<br /> <br /> [[File:Table3x.PNG]]<br /> <br /> Table 3 shows BLEU scores of models trained on clean (Vanilla) texts and tested on clean and noisy texts. All models suffer a significant drop in BLEU when evaluated on noisy texts. This is true for both natural noise and all kinds of synthetic noise. The more noise in the text, the worse the translation quality, with random scrambling producing the lowest BLEU scores.<br /> <br /> == Dealing with noise ==<br /> === Structure Invariant Representations ===<br /> The three NMT models are all sensitive to word structure. The &lt;code&gt;char2char&lt;/code&gt; and &lt;code&gt;charCNN&lt;/code&gt; models both have convolutional layers on character sequences, designed to capture character n-grams (sequences of n characters). The model in &lt;code&gt;Nematus&lt;/code&gt; is based on sub-word units obtained with byte-pair encoding (where common consecutive characters are replaced with a unique byte that does not occur in the data).
It thus relies on character order.<br /> <br /> The simplest way to make such a model invariant to character order is to take the average of the character embeddings as the word representation. This model, referred to as &lt;code&gt;meanChar&lt;/code&gt;, first generates a word representation by averaging character embeddings, and then proceeds with a word-level encoder similar to the &lt;code&gt;charCNN&lt;/code&gt; model.<br /> <br /> [[File:Table5x.PNG]]<br /> <br /> &lt;code&gt;meanChar&lt;/code&gt; performs well on the three scrambling errors (Swap, Middle Random, and Fully Random), but poorly on Keyboard errors and Natural errors.<br /> <br /> === Black-Box Adversarial Training ===<br /> <br /> &lt;code&gt;charCNN&lt;/code&gt; performance:<br /> [[File:Table6x.PNG]]<br /> <br /> Here is the result of the translation of the scrambled meme:<br /> “According to a study of Cambridge University, it doesn’t matter which technology in a word is going to get the letters in a word that is the only important thing for the first and last letter.”<br /> <br /> == Analysis ==<br /> === Learning Multiple Kinds of Noise in &lt;code&gt;charCNN&lt;/code&gt; ===<br /> <br /> As Table 6 above shows, &lt;code&gt;charCNN&lt;/code&gt; models performed quite well across different noise types on the test set when trained on a mix of noise types, which led the authors to speculate that different convolutional filters learned to be robust to different types of noise. To test this hypothesis, they analyzed the weights learned by &lt;code&gt;charCNN&lt;/code&gt; models trained on two kinds of input: completely scrambled words (Rand) without other kinds of noise, and a mix of Rand+Key+Nat noise. For each model, they computed the variance across the filter dimension for each of the 1000 filters and each of the 25 character embedding dimensions; these were then averaged across the filters to yield 25 variances.
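The variance computation described in this paragraph can be sketched as follows. This is a hedged illustration in plain Python with toy sizes: the real model has 1000 filters over 25 embedding dimensions, and the `[n_filters][n_dims][filter_width]` weight layout assumed here is for illustration only.

```python
def embedding_dim_variances(filters):
    """For each character-embedding dimension, compute the variance of
    each filter's weights along the filter-width dimension, then average
    those variances over all filters: one value per embedding dimension.
    `filters` is laid out as [n_filters][n_dims][filter_width] (assumed)."""
    n_filters = len(filters)
    n_dims = len(filters[0])
    per_dim = []
    for d in range(n_dims):
        total = 0.0
        for f in filters:
            w = f[d]
            mean = sum(w) / len(w)
            total += sum((x - mean) ** 2 for x in w) / len(w)  # population variance
        per_dim.append(total / n_filters)
    return per_dim

# Toy example: one filter, one embedding dimension, width 2.
# Weights [1, 3] have mean 2 and population variance 1.
print(embedding_dim_variances([[[1.0, 3.0]]]))  # -> [1.0]
```

A uniform set of filter weights would yield variances near zero, which is the pattern the authors report for the Rand-only model below.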
<br /> <br /> As Figure 2 below shows, the variances for the ensemble (Rand+Key+Nat) model are higher and more varied, which indicates that its filters learned different patterns and the model differentiated between character embedding dimensions. Under the random scrambling scheme, there should be no patterns for the model to learn, so it makes sense for the filter weights to stay close to uniform, hence the consistently lower variance measures.<br /> <br /> [[File:Table7x.PNG]]<br /> <br /> == Conclusion ==<br /> In this work, the authors have shown that character-based NMT models are extremely brittle and tend to break when presented with both natural and synthetic kinds of noise. After comparing the models, they found that a character-based CNN can learn to address multiple types of errors that are seen in training. For future work, the authors suggest generating more realistic synthetic noise by using phonetic and syntactic structure. They also suggest that a better NMT architecture could be designed that is robust to noise without seeing it in the training data.<br /> <br /> == Criticism ==<br /> A major critique of this paper is that the solutions presented do not adequately solve the problem. The response to the meanChar architecture has been mostly negative, and the method of noise injection has been seen as a simple start. However, the authors have acknowledged these critiques, stating that they realize their solution is just a starting point. They argue that this paper has opened the discussion on dealing with noise in machine translation, which has been mostly left untouched. Moreover, these models still do not tackle the problem of natural noise, as models trained on synthetic noise do not generalize well to natural noise.<br /> <br /> == References ==<br /> # Yonatan Belinkov and Yonatan Bisk. Synthetic and Natural Noise Both Break Neural Machine Translation.
In ''International Conference on Learning Representations (ICLR)'', 2018.<br /> # Mauro Cettolo, Christian Girardi, and Marcello Federico. WIT3: Web Inventory of Transcribed and Translated Talks. In ''Proceedings of the 16th Conference of the European Association for Machine Translation (EAMT)'', pp. 261–268, Trento, Italy, May 2012.<br /> # Jason Lee, Kyunghyun Cho, and Thomas Hofmann. Fully Character-Level Neural Machine Translation without Explicit Segmentation. ''Transactions of the Association for Computational Linguistics (TACL)'', 2017.<br /> # Rico Sennrich, Orhan Firat, Kyunghyun Cho, Alexandra Birch, Barry Haddow, Julian Hitschler, Marcin Junczys-Dowmunt, Samuel Laubli, Antonio Valerio Miceli Barone, Jozef Mokry, and Maria Nadejde. Nematus: a Toolkit for Neural Machine Translation. In ''Proceedings of the Software Demonstrations of the 15th Conference of the European Chapter of the Association for Computational Linguistics'', pp. 65–68, Valencia, Spain, April 2017. Association for Computational Linguistics. URL http://aclweb.org/anthology/E17-3017.<br /> # Aurélien Max and Guillaume Wisniewski. Mining Naturally-occurring Corrections and Paraphrases from Wikipedia's Revision History. In ''Proceedings of the Seventh Conference on International Language Resources and Evaluation (LREC'10)'', Valletta, Malta, May 2010. European Language Resources Association (ELRA). ISBN 2-9517408-6-7. URL https://wicopaco.limsi.fr.<br /> # Katrin Wisniewski, Karin Schöne, Lionel Nicolas, Chiara Vettori, Adriane Boyd, Detmar Meurers, Andrea Abel, and Jirka Hana. MERLIN: An online trilingual learner corpus empirically grounding the European Reference Levels in authentic learner data, October 2013. URL https://www.ukp.tu-darmstadt.de/data/spelling-correction/rwse-datasets.<br /> # Torsten Zesch. Measuring Contextual Fitness Using Error Contexts Extracted from the Wikipedia Revision History.
In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pp. 529–538, Avignon, France, April 2012. Association for Computational Linguistics.<br /> # Suranjana Samanta and Sameep Mehta. Towards Crafting Text Adversarial Samples. arXiv preprint arXiv:1707.02812, 2017. Karel Sebesta, Zuzanna Bedrichova, Katerina Sormov́a, Barbora Stindlov́a, Milan Hrdlicka, Tereza Hrdlickov́a, Jiŕı Hana, Vladiḿır Petkevic, Toḿas Jeĺınek, Svatava Skodov́a, Petr Janes, Katerina Lund́akov́a, Hana Skoumalov́a, Simon Sĺadek, Piotr Pierscieniak, Dagmar Toufarov́a, Milan Straka, Alexandr Rosen, Jakub Ńaplava, and Marie Poĺackova. CzeSL grammatical error correction dataset (CzeSL-GEC). Technical report, LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics, Charles University, 2017. URL https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-2143.</div> Jssambee http://wiki.math.uwaterloo.ca/statwiki/index.php?title=stat946w18/Synthetic_and_natural_noise_both_break_neural_machine_translation&diff=34580 stat946w18/Synthetic and natural noise both break neural machine translation 2018-03-17T22:25:02Z <p>Jssambee: /* Natural Noise */</p> <hr /> <div>== Introduction ==<br /> * Humans have surprisingly robust language processing systems which can easily overcome typos, e.g.<br /> <br /> Aoccdrnig to a rscheearch at Cmabrigde Uinervtisy, it deosn't mttaer in waht oredr the ltteers in a wrod are, the olny iprmoetnt tihng is taht the frist and lsat ltteer be at the rghit pclae.<br /> <br /> * A person's ability to read this text comes as no surprise to the Psychology literature<br /> *# Saberi &amp; Perrott (1999) found that this robustness extends to audio as well.<br /> *# Rayner et al. (2006) found that in noisier settings reading comprehension only slowed by 11%.<br /> *# McCusker et al. 
(1981) found that the common case of swapping letters could often go unnoticed by the reader.<br /> *# Mayall et al. (1997) showed that we rely on word shape.<br /> *# Reicher (1969) and Pelli et al. (2003) found that we can switch between whole-word recognition and letter-by-letter reading, but the first and last letter positions must stay constant for comprehension.<br /> <br /> However, neural machine translation (NMT) systems are brittle. For example, the Arabic word<br /> [[File:Good_morning.PNG]] is a blessing meaning &quot;good morning&quot;, whereas [[File:Hunt.PNG]] means &quot;hunt&quot; or &quot;slaughter&quot;. <br /> <br /> Facebook's MT system mistakenly confused these two words, which differ by only one character, a situation that is challenging for a character-based NMT system.<br /> <br /> Figure 1 shows performance on German-to-English translation as a function of the percentage of German words modified, for two types of noise: (1) random permutation of the word and (2) swapping a pair of adjacent letters that does not include the first or last letter of the word. The important point is that even small amounts of noise lead to substantial drops in performance.<br /> <br /> [[File:BLEU_plot.PNG]] <br /> <br /> BLEU (bilingual evaluation understudy) is an algorithm for evaluating the quality of text which has been machine-translated from one natural language to another. Quality is considered to be the correspondence between a machine's output and that of a human: &quot;the closer a machine translation is to a professional human translation, the better it is&quot;.
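To make the metric concrete, the core of BLEU (modified n-gram precision combined with a brevity penalty) can be sketched in a toy form. This is an illustrative sketch only: the standard algorithm uses n-grams up to length 4 and supports multiple references.

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=2):
    """Toy BLEU: geometric mean of modified n-gram precisions,
    times a brevity penalty. Illustrative only."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(tuple(candidate[i:i+n]) for i in range(len(candidate) - n + 1))
        ref = Counter(tuple(reference[i:i+n]) for i in range(len(reference) - n + 1))
        # "Modified" precision: clip each n-gram count by its count in the reference.
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        precisions.append(overlap / max(sum(cand.values()), 1))
    if min(precisions) == 0:
        return 0.0
    # Brevity penalty punishes candidates shorter than the reference.
    bp = 1.0 if len(candidate) >= len(reference) else math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

ref = "the cat sat on the mat".split()
print(bleu(ref, ref))                       # identical output scores 1.0
print(bleu("the cat on mat".split(), ref))  # noisy output scores strictly lower
```

The score lies between 0 and 1, with 1 reached only by a candidate matching the reference's n-gram statistics and length.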
BLEU is between 0 and 1.<br /> <br /> This paper explores two simple strategies for increasing model robustness:<br /> # using structure-invariant representations (character CNN representation)<br /> # robust training on noisy data, a form of adversarial training.<br /> <br /> The goal of the paper is two-fold:<br /> # to initiate a conversation on robust training and modeling techniques in NMT<br /> # to promote the creation of better and more linguistically accurate artificial noise to be applied to new languages and tasks<br /> <br /> == Adversarial examples ==<br /> The growing literature on adversarial examples has demonstrated how dangerous it can be to have brittle machine learning systems being used so pervasively in the real world. Small changes to the input can lead to dramatic<br /> failures of deep learning models. This leads to a potential for malicious attacks using adversarial examples. An important distinction is often drawn between white-box attacks, where adversarial examples are generated with<br /> access to the model parameters, and black-box attacks, where examples are generated without such access.<br /> <br /> The paper devises simple methods for generating adversarial examples for NMT. They do not assume any access to the NMT models' gradients, instead relying on cognitively-informed and naturally occurring language errors to generate noise.<br /> <br /> == MT system ==<br /> We experiment with three different NMT systems with access to character information at different levels.<br /> # Use &lt;code&gt;char2char&lt;/code&gt;, the fully character-level model of (Lee et al. 2017). This model processes a sentence as a sequence of characters. The encoder works as follows: the characters are embedded as vectors, and then the sequence of vectors is fed to a convolutional layer. The sequence output by the convolutional layer is then shortened by max pooling in the time dimension. 
The output of the max-pooling layer is then fed to a four-layer highway network (Srivastava et al., 2015), and the output of the highway network is in turn fed to a bidirectional GRU, producing a sequence of hidden units. The sequence of hidden units is then processed by the decoder, a GRU with attention, to produce probabilities over sequences of output characters.<br /> # Use &lt;code&gt;Nematus&lt;/code&gt; (Sennrich et al., 2017), a popular NMT toolkit. It is another sequence-to-sequence model with several architecture modifications, especially operating on sub-word units using byte-pair encoding. Byte-pair encoding (Sennrich et al., 2015; Gage, 1994) is an algorithm according to which we begin with a list of characters as our symbols, and repeatedly fuse common combinations to create new symbols. For example, if we begin with the letters a to z as our symbol list, and we find that &quot;th&quot; is the most common two-letter combination in a corpus, then we would add &quot;th&quot; to our symbol list in the first iteration. After we have used this algorithm to create a symbol list of the desired size, we apply a standard encoder-decoder with attention.<br /> # Use an attentional sequence-to-sequence model with a word representation based on a character convolutional neural network (&lt;code&gt;charCNN&lt;/code&gt;). The &lt;code&gt;charCNN&lt;/code&gt; model is similar to &lt;code&gt;char2char&lt;/code&gt;, but uses a shallower highway network and, although it reads the input sentence as characters, it produces as output a probability distribution over words, not characters.<br /> <br /> == Data ==<br /> === MT Data ===<br /> We use the TED talks parallel corpus prepared for IWSLT 2016 (Cettolo et al., 2012) for testing all of the NMT systems.<br /> <br /> [[File:Table1x.PNG]]<br /> <br /> === Natural and Artificial Noise ===<br /> ==== Natural Noise ====<br /> The three languages, French, German, and Czech, each have their own frequent natural errors.
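The byte-pair-encoding merge loop described for the &lt;code&gt;Nematus&lt;/code&gt; system above can be sketched as follows. This is a simplified toy version: it weights all words equally and omits the word-frequency tables and end-of-word markers used in practice.

```python
from collections import Counter

def bpe_merges(words, num_merges):
    """Greedy BPE: repeatedly fuse the most frequent adjacent
    symbol pair into a new symbol."""
    vocab = [list(w) for w in words]  # each word starts as a list of characters
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for w in vocab:
            for a, b in zip(w, w[1:]):
                pairs[(a, b)] += 1
        if not pairs:
            break
        best = max(pairs, key=pairs.get)  # most common adjacent pair
        merges.append(best)
        merged = best[0] + best[1]
        for w in vocab:  # replace the pair everywhere it occurs
            i = 0
            while i < len(w) - 1:
                if (w[i], w[i+1]) == best:
                    w[i:i+2] = [merged]
                else:
                    i += 1
    return merges, vocab

merges, segmented = bpe_merges(["this", "that", "then", "they"], 2)
print(merges)  # [('t', 'h'), ('th', 'e')]
```

On this toy corpus "th" is fused first (it appears in every word), then "the", mirroring the "add th to the symbol list" example above.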
The corpora of edits used for these languages are:<br /> <br /> # French : Wikipedia Correction and Paraphrase Corpus (WiCoPaCo)<br /> # German : RWSE Wikipedia Correction Dataset and The MERLIN corpus<br /> # Czech : CzeSL Grammatical Error Correction Dataset (CzeSL-GEC), a manually annotated dataset of essays written by both non-native learners of Czech and Czech pupils<br /> <br /> The authors harvested naturally occurring errors (typos, misspellings, etc.) corresponding to these three languages from available corpora of edits to build a look-up table of possible lexical replacements.<br /> <br /> They insert these errors into the source side of the parallel data by replacing every word in the corpus with an error if one exists in the look-up table. When there is more than one possible replacement, one is sampled uniformly; words for which no error exists are kept as is.<br /> <br /> ==== Synthetic Noise ====<br /> In addition to naturally collected sources of error, we also experiment with four types of synthetic noise: Swap, Middle Random, Fully Random, and Keyboard Typo. <br /> # &lt;code&gt;Swap&lt;/code&gt;: The first and simplest source of noise is swapping two adjacent letters (do not alter the first or last letters; only apply to words of length &gt;=4).<br /> # &lt;code&gt;Middle Random&lt;/code&gt;: Randomize the order of all the letters in a word except for the first and last (only apply to words of length &gt;=4).<br /> # &lt;code&gt;Fully Random&lt;/code&gt;: Randomize the order of all the letters in the word.<br /> # &lt;code&gt;Keyboard Typo&lt;/code&gt;: Randomly replace one letter in each word with an adjacent key.<br /> <br /> [[File:Table3x.PNG]]<br /> <br /> Table 3 shows BLEU scores of models trained on clean (Vanilla) texts and tested on clean and noisy<br /> texts. All models suffer a significant drop in BLEU when evaluated on noisy texts. This is true<br /> for both natural noise and all kinds of synthetic noise.
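The four synthetic noise operations listed above can be sketched as follows. The keyboard-adjacency map here is a tiny illustrative stand-in for a full keyboard layout, and the function names are ours, not the paper's.

```python
import random

# Tiny stand-in for a full keyboard-adjacency map (illustrative only).
ADJACENT = {"a": "qws", "e": "wrd", "l": "kop", "o": "ipl", "s": "adw", "t": "ryg"}

def swap(word, rng):
    """Swap one adjacent pair of inner letters (words of length >= 4 only)."""
    if len(word) < 4:
        return word
    i = rng.randrange(1, len(word) - 2)  # never touches the first or last letter
    return word[:i] + word[i+1] + word[i] + word[i+2:]

def middle_random(word, rng):
    """Shuffle all letters except the first and last (length >= 4 only)."""
    if len(word) < 4:
        return word
    mid = list(word[1:-1])
    rng.shuffle(mid)
    return word[0] + "".join(mid) + word[-1]

def fully_random(word, rng):
    """Shuffle every letter of the word."""
    letters = list(word)
    rng.shuffle(letters)
    return "".join(letters)

def keyboard_typo(word, rng):
    """Replace one letter with an adjacent key, if its neighbours are known."""
    positions = [i for i, c in enumerate(word) if c in ADJACENT]
    if not positions:
        return word
    i = rng.choice(positions)
    return word[:i] + rng.choice(ADJACENT[word[i]]) + word[i+1:]

rng = random.Random(0)
print(swap("noise", rng), middle_random("noise", rng), fully_random("noise", rng))
```

Applying these per word at a chosen rate reproduces the noise schedule evaluated in Table 3.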
The more noise in the text, the worse the<br /> translation quality, with random scrambling producing the lowest BLEU scores.<br /> <br /> == Dealing with noise ==<br /> === Structure Invariant Representations ===<br /> The three NMT models are all sensitive to word structure. The &lt;code&gt;char2char&lt;/code&gt; and &lt;code&gt;charCNN&lt;/code&gt; models both have convolutional layers on character sequences, designed to capture character n-grams (sequences of n characters). The model in &lt;code&gt;Nematus&lt;/code&gt; is based on sub-word units obtained with byte pair encoding (where common consecutive characters are replaced with a unique byte that does not occur in the data). It thus relies on character order.<br /> <br /> The simplest way to improve such a model is to use the average of the character embeddings as the word representation. This model, referred to as &lt;code&gt;meanChar&lt;/code&gt;, first generates a word representation by averaging character embeddings, and then proceeds with a word-level encoder similar to the &lt;code&gt;charCNN&lt;/code&gt; model.<br /> <br /> [[File:Table5x.PNG]]<br /> <br /> &lt;code&gt;meanChar&lt;/code&gt; performs well on the three scrambling errors (Swap, Middle Random, and Fully Random), but poorly on Keyboard errors and Natural errors.<br /> <br /> === Black-Box Adversarial Training ===<br /> <br /> &lt;code&gt;charCNN&lt;/code&gt; Performance<br /> [[File:Table6x.PNG]]<br /> <br /> Here is the result of the translation of the scrambled meme:<br /> “According to a study of Cambridge University, it doesn’t matter which technology in a word is going to get the letters in a word that is the only important thing for the first and last letter.”<br /> <br /> == Analysis ==<br /> === Learning Multiple Kinds of Noise in &lt;code&gt;charCNN&lt;/code&gt; ===<br /> <br /> As Table 6 above shows, &lt;code&gt;charCNN&lt;/code&gt; models performed quite well across different noise types on the test set
when they are trained on a mix of noise types, which led the authors to speculate that filters from different convolutional layers learned to be robust to different types of noise. To test this hypothesis, they analyzed the weights learned by &lt;code&gt;charCNN&lt;/code&gt; models trained on two kinds of input: completely scrambled words (Rand) without other kinds of noise, and a mix of Rand+Key+Nat kinds of noise. For each model, they computed the variance across the filter dimension for each one of the 1000 filters and for each one of the 25 character embedding dimensions, which were then averaged across the filters to yield 25 variances. <br /> <br /> As Figure 2 below shows, the variances for the ensemble model are higher and more varied, which indicates that the filters learned different patterns and the model differentiated between different character embedding dimensions. Under the random scrambling scheme, there should be no patterns for the model to learn, so it makes sense for the filter weights to stay close to uniform, hence the consistently lower variance measures.<br /> <br /> [[File:Table7x.PNG]]<br /> <br /> == Conclusion ==<br /> In this work, the authors have shown that character-based NMT models are extremely brittle and tend to break when presented with both natural and synthetic kinds of noise. After a comparison of the models, they found that a character-based CNN can learn to<br /> address multiple types of errors that are seen in training.<br /> For future work, the authors suggest generating more realistic synthetic noise using phonetic and syntactic structure. They also suggest that a better NMT architecture could be designed to be robust to noise without seeing it in the training data.<br /> <br /> == Criticism ==<br /> A major critique of this paper is that the solutions presented do not adequately solve the problem.
The response to the meanChar architecture has been mostly negative and the method of noise injection has been seen as a simple start. However, the authors have acknowledged these critiques, stating that they realize their solution is just a starting point. They argue that this paper has opened the discussion on dealing with noise in machine translation, which has been mostly left untouched.<br /> <br /> == References ==<br /> # Yonatan Belinkov and Yonatan Bisk. Synthetic and Natural Noise Both Break Neural Machine Translation. In ''International Conference on Learning Representations (ICLR)'', 2018.<br /> # Mauro Cettolo, Christian Girardi, and Marcello Federico. WIT: Web Inventory of Transcribed and Translated Talks. In ''Proceedings of the 16th Conference of the European Association for Machine Translation (EAMT)'', pp. 261–268, Trento, Italy, May 2012.<br /> # Jason Lee, Kyunghyun Cho, and Thomas Hofmann. Fully Character-Level Neural Machine Translation without Explicit Segmentation. ''Transactions of the Association for Computational Linguistics (TACL)'', 2017.<br /> # Rico Sennrich, Orhan Firat, Kyunghyun Cho, Alexandra Birch, Barry Haddow, Julian Hitschler, Marcin Junczys-Dowmunt, Samuel Läubli, Antonio Valerio Miceli Barone, Jozef Mokrý, and Maria Nădejde. Nematus: a Toolkit for Neural Machine Translation. In ''Proceedings of the Software Demonstrations of the 15th Conference of the European Chapter of the Association for Computational Linguistics'', pp. 65–68, Valencia, Spain, April 2017. Association for Computational Linguistics. URL http://aclweb.org/anthology/E17-3017.<br /> # Aurélien Max and Guillaume Wisniewski. Mining Naturally-occurring Corrections and Paraphrases from Wikipedia's Revision History. In ''Proceedings of the Seventh Conference on International Language Resources and Evaluation (LREC’10)'', Valletta, Malta, May 2010. European Language Resources Association (ELRA). ISBN 2-9517408-6-7.
URL https://wicopaco.limsi.fr.<br /> # Katrin Wisniewski, Karin Schöne, Lionel Nicolas, Chiara Vettori, Adriane Boyd, Detmar Meurers, Andrea Abel, and Jirka Hana. MERLIN: An online trilingual learner corpus empirically grounding the European Reference Levels in authentic learner data, October 2013. URL https://www.ukp.tu-darmstadt.de/data/spelling-correction/rwse-datasets.<br /> # Torsten Zesch. Measuring Contextual Fitness Using Error Contexts Extracted from the Wikipedia Revision History. In ''Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics'', pp. 529–538, Avignon, France, April 2012. Association for Computational Linguistics.<br /> # Suranjana Samanta and Sameep Mehta. Towards Crafting Text Adversarial Samples. arXiv preprint arXiv:1707.02812, 2017.<br /> # Karel Šebesta, Zuzanna Bedřichová, Kateřina Šormová, Barbora Štindlová, Milan Hrdlička, Tereza Hrdličková, Jiří Hana, Vladimír Petkevič, Tomáš Jelínek, Svatava Škodová, Petr Janeš, Kateřina Lundáková, Hana Skoumalová, Šimon Sládek, Piotr Pierscieniak, Dagmar Toufarová, Milan Straka, Alexandr Rosen, Jakub Náplava, and Marie Poláčková. CzeSL grammatical error correction dataset (CzeSL-GEC). Technical report, LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics, Charles University, 2017. 
URL https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-2143.</div> Jssambee http://wiki.math.uwaterloo.ca/statwiki/index.php?title=stat946w18&diff=34501 stat946w18 2018-03-17T14:03:50Z <p>Jssambee: /* Paper presentation */</p> <hr /> <div>=[https://piazza.com/uwaterloo.ca/fall2017/stat946/resources List of Papers]=<br /> <br /> = Record your contributions here [https://docs.google.com/spreadsheets/d/1fU746Cld_mSqQBCD5qadvkXZW1g-j-kHvmHQ6AMeuqU/edit?usp=sharing]=<br /> <br /> Use the following notations:<br /> <br /> P: You have written a summary/critique on the paper.<br /> <br /> T: You had a technical contribution on a paper (excluding the paper that you present).<br /> <br /> E: You had an editorial contribution on a paper (excluding the paper that you present).<br /> <br /> <br /> <br /> [https://docs.google.com/forms/d/e/1FAIpQLSdcfYZu5cvpsbzf0Nlxh9TFk8k1m5vUgU1vCLHQNmJog4xSHw/viewform?usp=sf_link Your feedback on presentations]<br /> <br /> =Paper presentation=<br /> {| class=&quot;wikitable&quot;<br /> <br /> {| border=&quot;1&quot; cellpadding=&quot;3&quot;<br /> |-<br /> |width=&quot;60pt&quot;|Date<br /> |width=&quot;100pt&quot;|Name <br /> |width=&quot;30pt&quot;|Paper number <br /> |width=&quot;700pt&quot;|Title<br /> |width=&quot;30pt&quot;|Link to the paper<br /> |width=&quot;30pt&quot;|Link to the summary<br /> |-<br /> |Feb 15 (example)||Ri Wang || ||Sequence to sequence learning with neural networks.||[http://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf Paper] || [http://wikicoursenote.com/wiki/Stat946f15/Sequence_to_sequence_learning_with_neural_networks#Long_Short-Term_Memory_Recurrent_Neural_Network Summary]<br /> |-<br /> |Feb 27 || || 1|| || || <br /> |-<br /> |Feb 27 || || 2|| || || <br /> |-<br /> |Feb 27 || || 3|| || || <br /> |-<br /> |Mar 1 || Peter Forsyth || 4|| Unsupervised Machine Translation Using Monolingual Corpora Only || [https://arxiv.org/pdf/1711.00043.pdf Paper] || 
[https://wiki.math.uwaterloo.ca/statwiki/index.php?title=stat946w18/Unsupervised_Machine_Translation_Using_Monolingual_Corpora_Only Summary]<br /> |-<br /> |Mar 1 || Wenqing Liu || 5|| Spectral Normalization for Generative Adversarial Networks || [https://openreview.net/pdf?id=B1QRgziT- Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=stat946w18/Spectral_normalization_for_generative_adversial_network Summary]<br /> |-<br /> |Mar 1 || Ilia Sucholutsky || 6|| One-Shot Imitation Learning || [https://papers.nips.cc/paper/6709-one-shot-imitation-learning.pdf Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=One-Shot_Imitation_Learning Summary]<br /> |-<br /> |Mar 6 || George (Shiyang) Wen || 7|| AmbientGAN: Generative models from lossy measurements || [https://openreview.net/pdf?id=Hy7fDog0b Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=stat946w18/AmbientGAN:_Generative_Models_from_Lossy_Measurements Summary]<br /> |-<br /> |Mar 6 || Raphael Tang || 8|| Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolutional Layers || [https://arxiv.org/pdf/1802.00124.pdf Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=stat946w18/Rethinking_the_Smaller-Norm-Less-Informative_Assumption_in_Channel_Pruning_of_Convolutional_Layers Summary]<br /> |-<br /> |Mar 6 ||Fan Xia || 9|| Word translation without parallel data ||[https://arxiv.org/pdf/1710.04087.pdf Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=Word_translation_without_parallel_data Summary]<br /> |-<br /> |Mar 8 || Alex (Xian) Wang || 10 || Self-Normalizing Neural Networks || [http://papers.nips.cc/paper/6698-self-normalizing-neural-networks.pdf Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=stat946w18/Self_Normalizing_Neural_Networks Summary] <br /> |-<br /> |Mar 8 || Michael Broughton || 11|| Convergence of Adam and beyond || [https://openreview.net/pdf?id=ryQu7f-RZ Paper] ||
[https://wiki.math.uwaterloo.ca/statwiki/index.php?title=On_The_Convergence_Of_ADAM_And_Beyond Summary] <br /> |-<br /> |Mar 8 || Wei Tao Chen || 12|| Predicting Floor-Level for 911 Calls with Neural Networks and Smartphone Sensor Data || [https://openreview.net/forum?id=ryBnUWb0b Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=stat946w18/Predicting_Floor-Level_for_911_Calls_with_Neural_Networks_and_Smartphone_Sensor_Data Summary]<br /> |-<br /> |Mar 13 || Chunshang Li || 13 || UNDERSTANDING IMAGE MOTION WITH GROUP REPRESENTATIONS || [https://openreview.net/pdf?id=SJLlmG-AZ Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=Understanding_Image_Motion_with_Group_Representations Summary] <br /> |-<br /> |Mar 13 || Saifuddin Hitawala || 14 || Robust Imitation of Diverse Behaviors || [https://papers.nips.cc/paper/7116-robust-imitation-of-diverse-behaviors.pdf Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=Robust_Imitation_of_Diverse_Behaviors Summary] <br /> |-<br /> |Mar 13 || Taylor Denouden || 15|| A neural representation of sketch drawings || [https://arxiv.org/pdf/1704.03477.pdf Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=A_Neural_Representation_of_Sketch_Drawings Summary]<br /> |-<br /> |Mar 15 || Zehao Xu || 16|| Synthetic and natural noise both break neural machine translation || [https://openreview.net/pdf?id=BJ8vJebC- Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=stat946w18/Synthetic_and_natural_noise_both_break_neural_machine_translation Summary]<br /> |-<br /> |Mar 15 || Prarthana Bhattacharyya || 17|| Wasserstein Auto-Encoders || [https://arxiv.org/pdf/1711.01558.pdf Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=Wasserstein_Auto-Encoders Summary] <br /> |-<br /> |Mar 15 || Changjian Li || 18|| Label-Free Supervision of Neural Networks with Physics and Domain Knowledge || [https://arxiv.org/pdf/1609.05566.pdf Paper] || 
[https://wiki.math.uwaterloo.ca/statwiki/index.php?title=Label-Free_Supervision_of_Neural_Networks_with_Physics_and_Domain_Knowledge Summary]<br /> |-<br /> |Mar 20 || Travis Dunn || 19|| Continuous Adaptation via Meta-Learning in Nonstationary and Competitive Environments || [https://openreview.net/pdf?id=Sk2u1g-0- Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=Continuous_Adaptation_via_Meta-Learning_in_Nonstationary_and_Competitive_Environments Summary]<br /> |-<br /> |Mar 20 || Sushrut Bhalla || 20|| MaskRNN: Instance Level Video Object Segmentation || [https://papers.nips.cc/paper/6636-maskrnn-instance-level-video-object-segmentation.pdf Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=stat946w18/MaskRNN:_Instance_Level_Video_Object_Segmentation Summary]<br /> |-<br /> |Mar 20 || Hamid Tahir || 21|| Wavelet Pooling for Convolution Neural Networks || [https://openreview.net/pdf?id=rkhlb8lCZ Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=Wavelet_Pooling_CNN Summary]<br /> |-<br /> |Mar 22 || Dongyang Yang|| 22|| Implicit Causal Models for Genome-wide Association Studies || [https://openreview.net/pdf?id=SyELrEeAb Paper] ||[https://wiki.math.uwaterloo.ca/statwiki/index.php?title=stat946w18/Implicit_Causal_Models_for_Genome-wide_Association_Studies Summary]<br /> |-<br /> |Mar 22 || Yao Li || 23||Improving GANs Using Optimal Transport || [https://openreview.net/pdf?id=rkQkBnJAb Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=stat946w18/IMPROVING_GANS_USING_OPTIMAL_TRANSPORT Summary]<br /> |-<br /> |Mar 22 || Sahil Pereira || 24||End-to-End Differentiable Adversarial Imitation Learning|| [http://proceedings.mlr.press/v70/baram17a/baram17a.pdf Paper] || [http://proceedings.mlr.press/v70/baram17a/baram17a.pdf Summary]<br /> |-<br /> |Mar 27 || Jaspreet Singh Sambee || 25|| Do Deep Neural Networks Suffer from Crowding? 
|| [http://papers.nips.cc/paper/7146-do-deep-neural-networks-suffer-from-crowding.pdf Paper] || <br /> |-<br /> |Mar 27 || Braden Hurl || 26|| Spherical CNNs || [https://openreview.net/pdf?id=Hkbd5xZRb Paper] || <br /> |-<br /> |Mar 27 || Marko Ilievski || 27|| Neural Audio Synthesis of Musical Notes with WaveNet Autoencoders || [http://proceedings.mlr.press/v70/engel17a/engel17a.pdf Paper] || <br /> |-<br /> |Mar 29 || Alex Pon || 28||PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space || [https://arxiv.org/abs/1706.02413 Paper] ||<br /> |-<br /> |Mar 29 || Sean Walsh || 29||Multi-scale Dense Networks for Resource Efficient Image Classification || [https://arxiv.org/pdf/1703.09844.pdf Paper] ||<br /> |-<br /> |Mar 29 || Jason Ku || 30||MarrNet: 3D Shape Reconstruction via 2.5D Sketches ||[https://arxiv.org/pdf/1711.03129.pdf Paper] ||<br /> |-<br /> |Apr 3 || Tong Yang || 31|| Dynamic Routing Between Capsules. || [http://papers.nips.cc/paper/6975-dynamic-routing-between-capsules.pdf Paper] || <br /> |-<br /> |Apr 3 || Benjamin Skikos || 32|| Training and Inference with Integers in Deep Neural Networks || [https://openreview.net/pdf?id=HJGXzmspb Paper] || <br /> |-<br /> |Apr 3 || Weishi Chen || 33|| Tensorized LSTMs for Sequence Learning || [https://arxiv.org/pdf/1711.01577.pdf Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=stat946w18/Tensorized_LSTMs&amp;action=edit&amp;redlink=1 Summary] || <br /> |-</div> Jssambee http://wiki.math.uwaterloo.ca/statwiki/index.php?title=Label-Free_Supervision_of_Neural_Networks_with_Physics_and_Domain_Knowledge&diff=34378 Label-Free Supervision of Neural Networks with Physics and Domain Knowledge 2018-03-15T19:12:01Z <p>Jssambee: /* Problem Setup */</p> <hr /> <div>== Introduction ==<br /> Applications of machine learning are often encumbered by the need for large amounts of labeled training data. 
Neural networks have made large amounts of labeled data even more crucial to success (LeCun, Bengio, and Hinton 2015). Nonetheless, humans are often able to learn without direct examples, opting instead for high-level instructions for how a task should be performed, or what it will look like when completed. This work explores whether a similar principle can be applied to teaching machines: can we supervise networks without individual examples, by instead describing only the structure of desired outputs?<br /> <br /> [[File:c433li-1.png]]<br /> <br /> Unsupervised learning methods, such as autoencoders, also aim to uncover hidden structure in the data without having access to any labels. Such systems succeed in producing highly compressed, yet informative, representations of the inputs (Kingma and Welling 2013; Le 2013). However, these representations differ from the ones considered here, as they are not explicitly constrained to have a particular meaning or semantics. This paper attempts to explicitly provide the semantics of the hidden variables we hope to discover, but still train without labels by learning from constraints that are known to hold according to prior domain knowledge. By training without direct examples of the values our hidden (output) variables take, several advantages are gained over traditional supervised learning, including:<br /> * a reduction in the amount of work spent labeling, <br /> * an increase in generality, as a single set of constraints can be applied to multiple data sets without relabeling.<br /> <br /> == Problem Setup ==<br /> In a traditional supervised learning setting, we are given a training set &lt;math&gt;D=\{(x_1, y_1), \cdots, (x_n, y_n)\}&lt;/math&gt; of &lt;math&gt;n&lt;/math&gt; training examples. Each example is a pair &lt;math&gt;(x_i,y_i)&lt;/math&gt; formed by an instance &lt;math&gt;x_i \in X&lt;/math&gt; and the corresponding output (label) &lt;math&gt;y_i \in Y&lt;/math&gt;.
The goal is to learn a function &lt;math&gt;f: X \rightarrow Y&lt;/math&gt; mapping inputs to outputs. To quantify performance, a loss function &lt;math&gt;\ell:Y \times Y \rightarrow \mathbb{R}&lt;/math&gt; is provided, and a mapping is found via <br /> <br /> ::&lt;math&gt; f^* = \text{argmin}_{f \in \mathcal{F}} \sum_{i=1}^n \ell(f(x_i),y_i) &lt;/math&gt;<br /> <br /> where the optimization is over a pre-defined class of functions &lt;math&gt;\mathcal{F}&lt;/math&gt; (hypothesis class). In our case, &lt;math&gt;\mathcal{F}&lt;/math&gt; will be (convolutional) neural networks parameterized by their weights. The loss could be for example &lt;math&gt;\ell(f(x_i),y_i) = 1[f(x_i) \neq y_i]&lt;/math&gt;. By restricting the space of possible functions specifying the hypothesis class &lt;math&gt;\mathcal{F}&lt;/math&gt;, we are leveraging prior knowledge about the specific problem we are trying to solve. Informally, the so-called No Free Lunch Theorems state that every machine learning algorithm must make such assumptions in order to work. Another common way in which a modeler incorporates prior knowledge is by specifying an a-priori preference for certain functions in &lt;math&gt;\mathcal{F}&lt;/math&gt;, incorporating a regularization term &lt;math&gt;R:\mathcal{F} \rightarrow \mathbb{R}&lt;/math&gt;, and solving for &lt;math&gt; f^* = \text{argmin}_{f \in \mathcal{F}} \sum_{i=1}^n \ell(f(x_i),y_i) + R(f)&lt;/math&gt;. 
Typically, the regularization term &lt;math&gt;R:\mathcal{F} \rightarrow \mathbb{R}&lt;/math&gt; specifies a preference for &quot;simpler&quot; functions (Occam's razor).<br /> <br /> The focus is on problems/domains where the output space has a complex structure, for example mapping an input image to the height of an object, rather than a simple binary classification problem.<br /> <br /> In this paper, prior knowledge on the structure of the outputs is modelled by providing a weighted constraint function &lt;math&gt;g:X \times Y \rightarrow \mathbb{R}&lt;/math&gt;, used to penalize “structures” that are not consistent with our prior knowledge. The paper explores whether this weak form of supervision is sufficient to learn interesting functions. While one clearly needs labels &lt;math&gt;y&lt;/math&gt; to evaluate &lt;math&gt;f^*&lt;/math&gt;, labels may not be necessary to discover &lt;math&gt;f^*&lt;/math&gt;. If prior knowledge informs us that outputs of &lt;math&gt;f^*&lt;/math&gt; have other unique properties among functions in &lt;math&gt;\mathcal{F}&lt;/math&gt;, we may use these properties for training rather than direct examples &lt;math&gt;y&lt;/math&gt;. <br /> <br /> Specifically, an unsupervised approach is considered, where the labels &lt;math&gt;y_i&lt;/math&gt; are not provided and a necessary property of the output, &lt;math&gt;g&lt;/math&gt;, is optimized instead.<br /> ::&lt;math&gt;\hat{f}^* = \text{argmin}_{f \in \mathcal{F}} \sum_{i=1}^n g(x_i,f(x_i))+ R(f) &lt;/math&gt;<br /> <br /> If optimizing the above equation is sufficient to find &lt;math&gt;\hat{f}^*&lt;/math&gt;, it can be used in place of labels. If it is not sufficient, additional regularization terms are added. 
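As a toy illustration of this label-free objective (this is our sketch, not one of the paper's experiments), a one-parameter model can be trained purely from a structural constraint: here the prior knowledge is that consecutive outputs should differ by a known step size, with no labels ever provided.

```python
def constraint_loss(w, xs, step=3.0):
    """g penalizes output sequences whose consecutive differences
    deviate from a known step size -- prior knowledge, no labels."""
    ys = [w * x for x in xs]  # a one-parameter "network": f(x) = w * x
    return sum((b - a - step) ** 2 for a, b in zip(ys, ys[1:]))

def train(xs, lr=0.01, iters=500):
    """Gradient descent on g alone, standing in for SGD on a network."""
    w = 0.0
    for _ in range(iters):
        eps = 1e-5
        # numerical gradient of g with respect to w
        grad = (constraint_loss(w + eps, xs) - constraint_loss(w - eps, xs)) / (2 * eps)
        w -= lr * grad
    return w

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
print(round(train(xs), 3))  # 3.0 -- the step constraint alone pins down w
```

Here the constraint happens to identify the model uniquely; the paper's examples show that when it does not (e.g. up to a scale or shift), extra regularization is needed.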
The idea is illustrated with three examples, as described in the next section.<br /> <br /> == Experiments ==<br /> === Tracking an object in free fall ===<br /> In the first experiment, they record videos of an object being thrown across the field of view, and aim to learn the object's height in each frame. The goal is to obtain a regression network mapping from &lt;math&gt;{R^{\text{height} \times \text{width} \times 3}} \rightarrow \mathbb{R}&lt;/math&gt;, where &lt;math&gt;\text{height}&lt;/math&gt; and &lt;math&gt;\text{width}&lt;/math&gt; are the number of vertical and horizontal pixels per frame, and each pixel has 3 color channels. This network is trained as a structured prediction problem operating on a sequence of &lt;math&gt;N&lt;/math&gt; images to produce a sequence of &lt;math&gt;N&lt;/math&gt; heights, &lt;math&gt;\left(R^{\text{height} \times \text{width} \times 3} \right)^N \rightarrow \mathbb{R}^N&lt;/math&gt;, and each piece of data &lt;math&gt;x_i&lt;/math&gt; will be a vector of images, &lt;math&gt;\mathbf{x}&lt;/math&gt;.<br /> Rather than supervising the network with direct labels, &lt;math&gt;\mathbf{y} \in \mathbb{R}^N&lt;/math&gt;, the network is instead supervised to find an object obeying the elementary physics of free falling objects. An object acting under gravity will have a fixed acceleration of &lt;math&gt;a = -9.8 m / s^2&lt;/math&gt;, and the plot of the object's height over time will form a parabola:<br /> ::&lt;math&gt;\mathbf{y}_i = y_0 + v_0(i\Delta t) + \frac{1}{2} a(i\Delta t)^2&lt;/math&gt;<br /> <br /> The idea is, given any trajectory of &lt;math&gt;N&lt;/math&gt; height predictions, &lt;math&gt;f(\mathbf{x})&lt;/math&gt;, we fit a parabola with fixed curvature to those predictions, and minimize the resulting residual. 
Formally, if we specify &lt;math&gt;\mathbf{a} = [\frac{1}{2} a\Delta t^2, \frac{1}{2} a(2 \Delta t)^2, \ldots, \frac{1}{2} a(N \Delta t)^2]&lt;/math&gt;, the prediction produced by the fitted parabola is:<br /> ::&lt;math&gt; \mathbf{\hat{y}} = \mathbf{a} + \mathbf{A} (\mathbf{A}^T\mathbf{A})^{-1} \mathbf{A}^T (f(\mathbf{x}) - \mathbf{a}) &lt;/math&gt;<br /> <br /> where<br /> ::&lt;math&gt;<br /> \mathbf{A} = <br /> \left[ {\begin{array}{*{20}c}<br /> \Delta t &amp; 1 \\<br /> 2\Delta t &amp; 1 \\<br /> 3\Delta t &amp; 1 \\<br /> \vdots &amp; \vdots \\<br /> N\Delta t &amp; 1 \\<br /> \end{array} } \right]<br /> &lt;/math&gt;<br /> <br /> The constraint loss is then defined as<br /> ::&lt;math&gt;g(\mathbf{x},f(\mathbf{x})) = g(f(\mathbf{x})) = \sum_{i=1}^{N} |\mathbf{\hat{y}}_i - f(\mathbf{x})_i|&lt;/math&gt;<br /> <br /> Note that &lt;math&gt;\hat{y}&lt;/math&gt; is not the ground truth labels. Because &lt;math&gt;g&lt;/math&gt; is differentiable almost everywhere, it can be optimized with SGD. They find that when combined with existing regularization methods for neural networks, this optimization is sufficient to recover &lt;math&gt;f^*&lt;/math&gt; up to an additive constant &lt;math&gt;C&lt;/math&gt; (specifying what object height corresponds to 0).<br /> <br /> [[File:c433li-2.png]]<br /> <br /> The data set is collected on a laptop webcam running at 10 frames per second (&lt;math&gt;\Delta t = 0.1s&lt;/math&gt;). The camera position is fixed and 65 diverse trajectories of the object in flight, totalling 602 images are recorded. For each trajectory, the network is trained on randomly selected intervals of &lt;math&gt;N=5&lt;/math&gt; contiguous frames. Images are resized to &lt;math&gt;56 \times 56&lt;/math&gt; pixels before going into a small, randomly initialized neural network with no pretraining. 
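The fitted-parabola projection and constraint loss defined above can be sketched in pure Python, solving the two-variable normal equations by hand; in the paper this quantity is differentiated through during training rather than computed standalone.

```python
def parabola_constraint_loss(preds, dt=0.1, accel=-9.8):
    """g(f(x)): project the N predictions onto the family of
    fixed-curvature parabolas and sum the absolute residuals."""
    n = len(preds)
    a_vec = [0.5 * accel * ((i + 1) * dt) ** 2 for i in range(n)]  # known gravity term
    A = [[(i + 1) * dt, 1.0] for i in range(n)]                    # columns: time, intercept
    r = [p - av for p, av in zip(preds, a_vec)]                    # f(x) - a
    # Normal equations for the 2x2 system (A^T A) c = A^T r.
    s11 = sum(row[0] * row[0] for row in A)
    s12 = sum(row[0] for row in A)
    s22 = float(n)
    b1 = sum(row[0] * ri for row, ri in zip(A, r))
    b2 = sum(r)
    det = s11 * s22 - s12 * s12
    v0 = (s22 * b1 - s12 * b2) / det   # fitted initial velocity
    y0 = (s11 * b2 - s12 * b1) / det   # fitted initial height
    y_hat = [av + v0 * row[0] + y0 for av, row in zip(a_vec, A)]
    return sum(abs(yh - p) for yh, p in zip(y_hat, preds))

# A trajectory that truly obeys free fall has (near-)zero constraint loss.
true_heights = [10 + 2 * t - 4.9 * t * t for t in (0.1, 0.2, 0.3, 0.4, 0.5)]
print(parabola_constraint_loss(true_heights))  # ~0, up to floating-point noise
```

Predictions that deviate from any fixed-curvature parabola leave a nonzero residual, which is the training signal.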
The network consists of 3 Conv/ReLU/MaxPool blocks followed by 2 Fully Connected/ReLU layers with probability 0.5 dropout and a single regression output.<br /> <br /> Since scaling &lt;math&gt;y_0&lt;/math&gt; and &lt;math&gt;v_0&lt;/math&gt; results in the same constraint loss &lt;math&gt;g&lt;/math&gt;, the authors evaluate the result by the correlation of predicted heights with ground truth pixel measurements. This metric was chosen because the distance from the object to the camera could not be accurately recorded, and that distance is required to convert heights to meters. This is not a bullet-proof evaluation, and it is discussed in further detail in the critique section. The results are compared to a supervised network trained with the labels to directly predict the height of the object in pixels. Although the supervised task is substantially easier, the table below shows that, under this evaluation criterion, the label-free approach comes close to the supervised baseline.<br /> {| class=&quot;wikitable&quot;<br /> |+ style=&quot;text-align: left;&quot; | Evaluation <br /> |-<br /> ! scope=&quot;col&quot; | Method !! scope=&quot;col&quot; | Random Uniform Output !! scope=&quot;col&quot; | Supervised with Labels !! scope=&quot;col&quot; | Approach in this Paper<br /> |-<br /> ! scope=&quot;row&quot; | Correlation <br /> | 12.1% || 94.5% || 90.1%<br /> |}<br /> <br /> === Tracking the position of a walking man ===<br /> In the second experiment, they aim to detect the horizontal position of a person walking across a frame without providing direct labels &lt;math&gt;y \in \mathbb{R}&lt;/math&gt;, by exploiting the assumption that the person will be walking at a constant velocity over short periods of time.
This is formulated as a structured prediction problem &lt;math&gt;f: \left(\mathbb{R}^{\text{height} \times \text{width} \times 3} \right)^N \rightarrow \mathbb{R}^N&lt;/math&gt;, where each training instance &lt;math&gt;x_i&lt;/math&gt; is a vector of images, &lt;math&gt;\mathbf{x}&lt;/math&gt;, mapped to a sequence of predictions, &lt;math&gt;\mathbf{y}&lt;/math&gt;. Given the similarities to the first experiment with free-falling objects, we might hope to simply remove the gravity term from the free-fall equation and retrain. However, in this case, that is not possible, as the constraint provides a necessary, but not sufficient, condition for convergence.<br /> <br /> Given any sequence of correct outputs, &lt;math&gt;(\mathbf{y}_1, \ldots, \mathbf{y}_N)&lt;/math&gt;, the modified sequence, &lt;math&gt;(\lambda * \mathbf{y}_1 + C, \ldots, \lambda * \mathbf{y}_N + C)&lt;/math&gt; (&lt;math&gt;\lambda, C \in \mathbb{R}&lt;/math&gt;) will also satisfy the constant-velocity constraint. In the worst case, when &lt;math&gt;\lambda = 0&lt;/math&gt;, &lt;math&gt;f \equiv C&lt;/math&gt;, and the network can satisfy the constraint while having no dependence on the image.
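This failure mode is easy to verify numerically. Under the constant-velocity constraint, the loss is the residual of a straight-line fit (the projection construction from the free-fall experiment with the gravity term dropped), and a constant output incurs zero loss. A minimal sketch, not the authors' code:

```python
import numpy as np

# Residual of fitting a line (constant velocity) to N position predictions:
# ||(A (A^T A)^{-1} A^T - I) preds||_1 with A having columns [i*dt, 1].
def velocity_constraint_loss(preds, dt=0.1):
    N = len(preds)
    A = np.stack([np.arange(1, N + 1) * dt, np.ones(N)], axis=1)
    proj = A @ np.linalg.inv(A.T @ A) @ A.T
    return np.abs((proj - np.eye(N)) @ preds).sum()
```

Both a genuine constant-velocity trajectory and the degenerate constant output &lt;math&gt;f \equiv C&lt;/math&gt; yield zero loss, which is why additional regularizing terms are needed.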
The trivial output is avoided by adding two additional loss terms:<br /> <br /> ::&lt;math&gt;h_1(\mathbf{x}) = -\text{std}(f(\mathbf{x}))&lt;/math&gt;<br /> which seeks to maximize the standard deviation of the output, and<br /> <br /> ::&lt;math&gt;\begin{split}<br /> h_2(\mathbf{x}) = \hphantom{'} &amp; \text{max}(\text{ReLU}(f(\mathbf{x}) - 10)) \hphantom{\text{ }}+ \\<br /> &amp; \text{max}(\text{ReLU}(0 - f(\mathbf{x})))<br /> \end{split}<br /> &lt;/math&gt;<br /> which limits the output to the fixed range &lt;math&gt;[0, 10]&lt;/math&gt;. The final loss is thus:<br /> <br /> ::&lt;math&gt;<br /> \begin{split}<br /> g(\mathbf{x}) = \hphantom{'} &amp; ||(\mathbf{A} (\mathbf{A}^T\mathbf{A})^{-1} \mathbf{A}^T - \mathbf{I}) * f(\mathbf{x})||_1 \hphantom{\text{ }}+ \\<br /> &amp; \gamma_1 * h_1(\mathbf{x}) <br /> \hphantom{\text{ }}+ \\<br /> &amp; \gamma_2 * h_2(\mathbf{x})<br /> \end{split}<br /> &lt;/math&gt;<br /> <br /> [[File:c433li-3.png]]<br /> <br /> The data set contains 11 trajectories across 6 distinct scenes, totalling 507 images resized to &lt;math&gt;56 \times 56&lt;/math&gt;. The network is trained to output linearly consistent positions on 5 strided frames from the first half of each trajectory, and is evaluated on the second half. The boundary violation penalty is set to &lt;math&gt;\gamma_2 = 0.8&lt;/math&gt; and the standard deviation bonus is set to &lt;math&gt;\gamma_1 = 0.6&lt;/math&gt;.<br /> <br /> As in the previous experiment, the result is evaluated by the correlation with the ground truth. The result is as follows:<br /> {| class=&quot;wikitable&quot;<br /> |+ style=&quot;text-align: left;&quot; | Evaluation <br /> |-<br /> ! scope=&quot;col&quot; | Method !! scope=&quot;col&quot; | Random Uniform Output !! scope=&quot;col&quot; | Supervised with Labels !! 
scope=&quot;col&quot; | Approach in this Paper<br /> |-<br /> ! scope=&quot;row&quot; | Correlation <br /> | 45.9% || 80.5% || 95.4%<br /> |}<br /> Surprisingly, the approach in this paper beats the same network trained with direct labeled supervision on the test set, which can be attributed to overfitting on the small amount of training data available (as correlation on the training data reached 99.8%).<br /> <br /> === Detecting objects with causal relationships ===<br /> In the previous experiments, the authors explored options for incorporating constraints pertaining to dynamics equations in real-world phenomena, i.e., prior knowledge derived from elementary physics. In this experiment, the authors explore the possibilities of learning from logical constraints imposed on single images. More specifically, they ask whether it is possible to learn from causal phenomena.<br /> <br /> [[File:paper18_Experiment_3.png|400px]]<br /> <br /> Here, the authors provide images containing a stochastic collection of up to four characters: Peach, Mario, Yoshi, and Bowser, with each character having small appearance changes across frames due to rotation and reflection. Example images can be seen in Fig. (4). While the existence of objects in each frame is non-deterministic, the generating distribution encodes the underlying phenomenon that Mario will always appear whenever Peach appears. The aim is to create a pair of neural networks &lt;math&gt;f_1, f_2&lt;/math&gt; for identifying Peach and Mario, respectively. The networks, &lt;math&gt;f_k : \mathbb{R}^{\text{height} \times \text{width} \times 3} \rightarrow \{0, 1\}&lt;/math&gt;, map the image to the discrete boolean variables, &lt;math&gt;y_1&lt;/math&gt; and &lt;math&gt;y_2&lt;/math&gt;. Rather than supervising with direct labels, the authors train the networks by constraining their outputs to have the logical relationship &lt;math&gt;y_1 \Rightarrow y_2&lt;/math&gt;.
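One way to make such an implication constraint trainable is to penalize the probability mass the two networks jointly assign to the only violating assignment, &lt;math&gt;(y_1, y_2) = (1, 0)&lt;/math&gt;. The sketch below is a hypothetical differentiable relaxation for illustration, not the paper's exact (indicator-based) loss:

```python
import numpy as np

# Hypothetical relaxation of the constraint y1 => y2: treating the two
# network outputs as independent per-image probabilities, the penalty is
# the probability of the violating assignment (y1, y2) = (1, 0),
# averaged over a batch.
def implication_penalty(p_peach, p_mario):
    return float(np.mean(p_peach * (1.0 - p_mario)))
```

The penalty vanishes only if, whenever Peach is predicted with nonzero probability, Mario is predicted with probability one, and it back-propagates through both networks.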
This problem is challenging because the networks must simultaneously learn to recognize the characters and select them according to logical relationships. To avoid the trivial solution &lt;math&gt;y_1 \equiv 1, y_2 \equiv 1&lt;/math&gt; on every image, three additional loss terms need to be added:<br /> <br /> ::&lt;math&gt; h_1(\mathbf{x}, k) = \frac{1}{M}\sum_i^M |Pr[f_k(\mathbf{x}_i) = 1] - Pr[f_k(\rho(\mathbf{x}_i)) = 1]|, &lt;/math&gt;<br /> <br /> where &lt;math&gt;\rho&lt;/math&gt; denotes a random rotation of the input; this term forces rotational independence of the outputs in order to encourage the network to learn the existence, rather than the location, of objects, <br /> <br /> ::&lt;math&gt; h_2(\mathbf{x}, k) = -\text{std}_{i \in [1 \dots M]}(Pr[f_k(\mathbf{x}_i) = 1]), &lt;/math&gt;<br /> <br /> which seeks high-variance outputs, and<br /> <br /> ::&lt;math&gt; h_3(\mathbf{x}, v) = \frac{1}{M}\sum_i^{M} (Pr[f(\mathbf{x}_i) = v] - \frac{1}{3} + (\frac{1}{3} - \mu_v))^2 \\<br /> \mu_{v} = \frac{1}{M}\sum_i^{M} \mathbb{1}\{v = \text{argmax}_{v' \in \{0, 1\}^2} Pr[f(\mathbf{x}_i) = v']\}, &lt;/math&gt;<br /> <br /> which seeks high-entropy outputs. The final loss function then becomes: <br /> <br /> ::&lt;math&gt; \begin{split}<br /> g(\mathbf{x}) &amp; = \mathbb{1}\{f_1(\mathbf{x}) \nRightarrow f_2(\mathbf{x})\} \hphantom{\text{ }} + \\<br /> &amp; \sum_{k \in \{1, 2\}} \gamma_1 h_1(\mathbf{x}, k) + \gamma_2 h_2(\mathbf{x}, k) + <br /> \hspace{-0.7em} \sum_{v \neq (1,0)} \hspace{-0.7em} \gamma_3 * h_3(\mathbf{x}, v)<br /> \end{split}<br /> &lt;/math&gt;<br /> <br /> '''Evaluation'''<br /> <br /> The input images, shown in Fig. (4), are 56 × 56 pixels. The authors used &lt;math&gt;\gamma_1 = 0.65, \gamma_2 = 0.65, \gamma_3 = 0.95&lt;/math&gt;, and trained for 4,000 iterations. This experiment demonstrates that networks can learn from constraints that operate over discrete sets with potentially complex logical rules. Removing any of these constraints causes learning to fail.
Thus, the experiment also shows that carefully designed sufficiency conditions can be key to success when learning from constraints.<br /> <br /> == Conclusion and Critique ==<br /> This paper has introduced a method for using physics and other domain constraints to supervise neural networks. However, the approach described in this paper is not entirely new. Similar ideas are already widely used in Q-learning, where the true Q values are not available and the network is instead supervised by a consistency constraint, as in deep Q-learning (Mnih et al. 2013).<br /> ::&lt;math&gt;Q(s,a) = R(s,a) + \gamma \mathbb{E}_{s' \sim P_{sa}}\left[\text{max}_{a'}Q(s',a')\right]&lt;/math&gt;<br /> <br /> <br /> Also, the paper has a mistake where they quote the free fall equation as<br /> ::&lt;math&gt;\mathbf{y}_i = y_0 + v_0(i\Delta t) + a(i\Delta t)^2&lt;/math&gt;<br /> which should be<br /> ::&lt;math&gt;\mathbf{y}_i = y_0 + v_0(i\Delta t) + \frac{1}{2} a(i\Delta t)^2&lt;/math&gt;<br /> although in this case it doesn't affect the result.<br /> <br /> <br /> For the evaluation of the experiments, the authors used correlation with the ground truth as the metric, to sidestep the fact that the output can be scaled without affecting the constraint loss. This is fine if the network's outputs share a common scale. However, there is no such guarantee: the network may produce outputs of varying scale, in which case we cannot say that the network has learned the correct mapping, even though its outputs may correlate highly with the ground truth. In fact, an obvious way to resolve the scaling ambiguity would be to combine the constraints introduced in this paper with some labeled training data.
It's not clear why the authors didn't experiment with a combination of these two losses.<br /> <br /> These methods essentially boil down to generating approximate labels for the training data using some knowledge of the dynamics that the labels should follow.<br /> <br /> Finally, this paper only picks examples where the constraints are easy to design, while in more common tasks such as image classification, what kind of constraints are needed is not straightforward at all.<br /> <br /> == References ==<br />  LeCun, Y.; Bengio, Y.; and Hinton, G. 2015. Deep learning. Nature 521(7553):436–444.<br /> <br />  Mnih, V.; Kavukcuoglu, K.; Silver, D.; Graves, A.; Antonoglou, I.; Wierstra, D.; and Riedmiller, M. 2013. Playing Atari with Deep Reinforcement Learning. arXiv 1312.5602.</div> Jssambee http://wiki.math.uwaterloo.ca/statwiki/index.php?title=stat946w18/MaskRNN:_Instance_Level_Video_Object_Segmentation&diff=34181 stat946w18/MaskRNN: Instance Level Video Object Segmentation 2018-03-15T03:54:42Z <p>Jssambee: /* MaskRNN: Experimental Results */</p> <hr /> <div>== Introduction ==<br /> Deep learning has produced state-of-the-art results in many computer vision tasks, such as image classification, object localization, object detection, object segmentation, semantic segmentation, and instance-level video object segmentation. Image classification labels an image based on its prominent objects. Object localization is the task of finding objects’ locations in the frame. The object segmentation task involves producing a pixel map representing the pixel-wise locations of the objects in the image. The semantic segmentation task attempts to segment the image into meaningful parts. Instance-level video object segmentation is the task of consistent object segmentation in video sequences.<br /> <br /> There are 2 different types of video object segmentation: unsupervised and semi-supervised.
In unsupervised video object segmentation, the task is to find and track the salient objects in the video without any annotation. In semi-supervised video object segmentation, the ground truth mask of the salient objects is provided for the first frame, and the task is thus simplified to tracking only the required objects. In this paper we look at a semi-supervised video object segmentation technique.<br /> <br /> == Background Papers ==<br /> Video object segmentation has been performed using spatio-temporal graphs and deep learning. The graph-based methods construct 3D spatio-temporal graphs in order to model the inter- and intra-frame relationships of pixels or superpixels in a video. Hence they are computationally slower than deep learning methods and are unable to run in real time. There are 2 main deep learning techniques for semi-supervised video object segmentation: One-Shot Video Object Segmentation (OSVOS) and Learning Video Object Segmentation from Static Images (MaskTrack). The following is a brief description of the new techniques introduced by these papers for the semi-supervised video object segmentation task.<br /> <br /> === OSVOS (One-Shot Video Object Segmentation) ===<br /> <br /> [[File:OSVOS.jpg | 1000px]]<br /> <br /> This paper introduces the technique of frame-by-frame object segmentation without any temporal information from the previous frames of the video. The paper uses a VGG-16 network with pre-trained weights from an image classification task. This network is then converted into a fully convolutional network (FCN) by removing the fully connected dense layers at the end and adding convolution layers to generate a segment mask of the input. This network is then trained on the DAVIS 2016 dataset.<br /> <br /> During testing, the trained VGG-16 FCN is fine-tuned on the first frame of the video using its ground truth mask. Because this is a semi-supervised setting, the segmentation mask (ground truth) for the first frame is available.
The first frame data is augmented by zooming/rotating/flipping the first frame and the associated segment mask.<br /> <br /> === MaskTrack (Learning Video Object Segmentation from Static Images) ===<br /> <br /> [[File:MaskTrack.jpg | 500px]]<br /> <br /> MaskTrack takes the output of the previous frame to improve its predictions when generating the segmentation mask for the next frame. Thus the input to the network is 4 channels wide (3 RGB channels from the frame at time (t) + 1 binary segmentation mask from frame (t-1)). The output of the network is the binary segmentation mask for the frame at time (t). Using the binary segmentation mask (referred to as guided object segmentation in the paper), the network is able to use some temporal information from the previous frame to improve its segmentation mask prediction for the next frame.<br /> <br /> The MaskTrack network is similar to a modified VGG-16 and is referred to as the MaskTrack ConvNet in the paper. The network is trained offline on saliency segmentation datasets: ECSSD, MSRA 10K, SOD, and PASCAL-S. The input mask for the binary segmentation mask channel is generated via non-rigid deformation and affine transformation of the ground truth segmentation mask. Similar data-augmentation techniques are also used during online training. Just like OSVOS, MaskTrack uses the first frame's ground truth (with augmented images) to fine-tune the network to improve the prediction score for the particular video sequence.<br /> <br /> A parallel ConvNet is used to generate a predicted segment mask based on the optical flow magnitude. The optical flow between 2 frames is calculated using the EpicFlow algorithm. The outputs of the two networks are combined using an averaging operation to generate the final predicted segmentation mask.<br /> <br /> == Dataset ==<br /> The three major datasets used in this paper are DAVIS-2016, DAVIS-2017 and Segtrack v2.
DAVIS-2016 provides video sequences with only one segmentation mask covering all salient objects. DAVIS-2017 improves the ground truth data by providing the segmentation mask for each salient object as a separate color segment mask. Segtrack v2 also provides multiple segmentation masks for all salient objects in the video sequence. These datasets try to recreate real-life scenarios such as occlusions, low-resolution video, background clutter, motion blur, fast motion, etc.<br /> <br /> == MaskRNN: Introduction ==<br /> Most techniques mentioned above don’t work directly on instance-level segmentation of the objects through the video sequence. The above approaches focus on image segmentation in each frame and, using additional information (mask propagation and optical flow) from the preceding frame, perform predictions for the current frame. To address the instance-level segmentation problem, MaskRNN proposes a framework where the salient objects are tracked and segmented by capturing the temporal information in the video sequence using a recurrent neural network.<br /> <br /> == MaskRNN: Overview ==<br /> In a video sequence &lt;math&gt;I = \{I_1, I_2, \ldots, I_T\}&lt;/math&gt;, the sequence of &lt;math&gt;T&lt;/math&gt; frames is given as input to the network, where the video sequence contains &lt;math&gt;N&lt;/math&gt; salient objects. The ground truth for the first frame, &lt;math&gt;y^*_1&lt;/math&gt;, is also provided for the &lt;math&gt;N&lt;/math&gt; salient objects.<br /> In this paper, the problem is formulated as a time-dependency problem: using a recurrent neural network, the prediction for the previous frame influences the prediction for the next frame. The approach also computes the optical flow between frames and uses it as an input to the neural network. The optical flow is also used to align the output of the predicted mask. “The warped prediction, the optical flow itself, and the appearance of the current frame are then used as input for N deep nets, one for each of the N objects.”[1 - MaskRNN] Each deep net is made of an object localization network and a binary segmentation network.
The binary segmentation network is used to generate the segmentation mask for an object. The object localization network is used to alleviate outliers in the predictions. The final prediction of the segmentation mask is generated by merging the predictions of the 2 networks. For N objects, there are N deep nets which predict the mask for each salient object. The predictions are then merged into a single prediction using an argmax operation at test time.<br /> <br /> == MaskRNN: Multiple Instance Level Segmentation ==<br /> <br /> [[File:2ObjectSeg.jpg | 850px]]<br /> <br /> Image segmentation requires producing a pixel-level segmentation mask, and this can become a multi-class problem. Instead, using the approach from [2- Mask R-CNN], the task is converted into multiple binary segmentation problems. A separate segmentation mask is predicted for each salient object, and thus we get a binary segmentation problem per object. The binary segments are combined using an argmax operation where each pixel is assigned to the object with the largest predicted probability.<br /> <br /> === MaskRNN: Binary Segmentation Network ===<br /> <br /> [[File:MaskRNNDeepNet.jpg | 850px]]<br /> <br /> The above picture shows a single deep net employed for predicting the segment mask for one salient object in the video frame. The deep net consists of 2 networks: a binary segmentation network and an object localization network. The binary segmentation network is split into two streams: an appearance stream and a flow stream. The input of the appearance stream is the RGB frame at time (t) and the warped prediction of the binary segmentation mask from time (t-1). The warping function uses the optical flow between frame (t-1) and frame (t) to generate a new binary segmentation mask for frame (t). The input to the flow stream is the concatenation of the optical flow magnitude between frames (t-1) to (t) and frames (t) to (t+1) and the warped prediction of the segmentation mask from frame (t-1).
The magnitude of the optical flow is replicated into an RGB format before feeding it to the flow stream. The network architecture closely resembles a VGG-16 network without the fully connected layers at the end. The fully connected layers are replaced with convolutional and bilinear-interpolation upsampling layers to generate a binary segment mask. This technique is borrowed from the fully convolutional network mentioned above. The outputs of the flow stream and the appearance stream are linearly combined, and a sigmoid function is applied to the result to generate the binary mask for the i-th object. All parts of the network are fully differentiable, and thus it can be trained end to end.<br /> <br /> === MaskRNN: Object Localization Network: ===<br /> Using a technique similar to the Faster R-CNN method of object localization, ROI pooling of the features of the region proposals (the bounding box proposals here) is performed and passed through fully connected layers to perform regression; the object localization network thereby generates a bounding box for the salient object in the frame. This bounding box is enlarged by a factor of 1.25 and combined with the output of the binary segmentation mask. Only the segment mask inside the bounding box is used for prediction, and the pixels outside of the bounding box are set to zero. MaskRNN uses the convolutional feature output of the appearance stream as the input to the ROI-pooling layer to generate the predicted bounding box.<br /> <br /> == MaskRNN: Implementation Details ==<br /> The deep net is first trained offline on a set of static images. The ground truth is randomly perturbed locally to simulate the imperfect mask from frame (t-1). Two different networks are trained offline separately for the DAVIS-2016 and DAVIS-2017 datasets for a fair evaluation of both datasets.
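The two mechanisms described above, the linear two-stream fusion with a sigmoid inside each per-object deep net and the test-time argmax merge across objects, can be sketched schematically. Here the conv nets are stood in by precomputed score maps, and the fusion weights and background threshold are illustrative assumptions, not values from the paper:

```python
import numpy as np

def fuse_streams(app_logits, flow_logits, w_app=0.5, w_flow=0.5):
    # Linearly combine appearance- and flow-stream outputs, then apply a
    # sigmoid to get a per-pixel foreground probability for one object.
    # (The combination weights here are illustrative.)
    z = w_app * app_logits + w_flow * flow_logits
    return 1.0 / (1.0 + np.exp(-z))

def merge_objects(prob_maps, threshold=0.5):
    # prob_maps: list of (H, W) per-object probability maps, one per deep net.
    # Each pixel is assigned to the object with the largest probability;
    # pixels where no object is confident enough are labeled background (0).
    probs = np.stack(prob_maps)        # (N, H, W)
    winner = probs.argmax(axis=0) + 1  # object labels 1..N
    return np.where(probs.max(axis=0) > threshold, winner, 0)
```

The thresholded background label is an assumed detail for the sketch; the source only specifies the per-pixel argmax over the objects' predicted probabilities.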
After both the object localization network and the binary segmentation network have been trained, the temporal information in the network is used to further improve the segmentation prediction results. Because of GPU memory constraints, the RNN can only backpropagate gradients through 7 frames when learning long-term temporal information. <br /> <br /> For optical flow, a pre-trained FlowNet 2.0 is used to compute the optical flow between frames. <br /> <br /> The deep nets (without the RNN) are then fine-tuned at test time by online training on the ground truth of the first frame and some augmentations of the first frame data. The learning rate is set to &lt;math&gt;10^{-5}&lt;/math&gt; for online training for 200 iterations.<br /> <br /> == MaskRNN: Experimental Results ==<br /> === Evaluation Metrics ===<br /> There are 3 different metrics for performance analysis of video object segmentation techniques:<br /> <br /> 1. Region Similarity (Jaccard Index): Region similarity, or intersection-over-union, is used to capture the precision of the area covered by the predicted segmentation mask compared to the ground truth segmentation mask.<br /> <br /> [[File:IoU.jpg | 200px]]<br /> <br /> 2. Contour Accuracy (F-score): This metric measures the accuracy of the boundary of the predicted segment mask against the ground truth segment mask using bipartite matching between the boundary pixels of the masks. <br /> <br /> [[File:Fscore.jpg | 200px]]<br /> <br /> 3. 
Temporal Stability: This estimates the degree of deformation needed to transform the segmentation masks from one frame to the next, and is measured by the dissimilarity of the sets of points on the contours of the segmentation between two adjacent frames.<br /> <br /> Region similarity measures how well the pixels of the two masks match, while contour accuracy measures the accuracy of the contours.<br /> <br /> === Ablation Study ===<br /> <br /> The ablation study summarizes how the different components contribute to the algorithm, evaluated on the DAVIS-2016 and DAVIS-2017 datasets.<br /> <br /> [[File:MaskRNNTable2.jpg | 700px]]<br /> <br /> The above table presents the contribution of each component of the network to the final prediction score. We observe that online fine-tuning improves the performance by a large margin. The addition of the RNN, the localization net, and the flow stream all positively affect the performance of the deep net.<br /> <br /> === Quantitative Evaluation ===<br /> <br /> The authors use DAVIS-2016, DAVIS-2017 and Segtrack v2 to compare the performance of the proposed approach to other methods based on foreground-background video object segmentation and multiple instance-level video object segmentation.<br /> <br /> [[File:MaskRNNTable3.jpg | 700px]]<br /> <br /> The above table shows the results for contour accuracy mean and region similarity. The MaskRNN method outperforms all previously proposed methods.
The performance gain is significant, and comes from employing a recurrent neural network to learn the temporal recurrence relationship and using an object localization network to improve the prediction results.<br /> <br /> The following table shows the improvements in the state of the art achieved by MaskRNN on the DAVIS-2017 and the SegTrack v2 dataset.<br /> <br /> [[File:MaskRNNTable4.jpg | 700px]]<br /> <br /> == Conclusion ==<br /> In this paper, a novel approach to the instance-level video object segmentation task is presented which performs better than the current state of the art. The long-term recurrence relationship is learnt using an RNN. The object localization network is added to improve the accuracy of the system. Using online fine-tuning, the network is adjusted to predict better for the current video sequence.</div> Jssambee http://wiki.math.uwaterloo.ca/statwiki/index.php?title=stat946w18/MaskRNN:_Instance_Level_Video_Object_Segmentation&diff=34180 stat946w18/MaskRNN: Instance Level Video Object Segmentation 2018-03-15T03:48:22Z <p>Jssambee: /* Evaluation Metrics */</p> <hr /> <div>== Introduction ==<br /> Deep Learning has produced state of the art results in many computer vision tasks like image classification, object localization, object detection, object segmentation, semantic segmentation and instance level video object segmentation. Image classification classify the image based on the prominent objects. Object localization is the task of finding objects’ location in the frame. Object Segmentation task involves providing a pixel map which represents the pixel wise location of the objects in the image. Semantic segmentation task attempts at segmenting the image into meaningful parts. Instance level video object segmentation is the task of consistent object segmentation in video sequences.<br /> <br /> There are 2 different types of video object segmentation: Unsupervised and Semi-supervised.
In unsupervised video object segmentation, the task is to find the salient objects and track the main objects in the video. In an unsupervised setting, the ground truth mask of the salient objects is provided for the first frame. The task is thus simplified to only track the objects required. In this paper we look at an unsupervised video object segmentation technique.<br /> <br /> == Background Papers ==<br /> Video object segmentation has been performed using spatio-temporal graphs and deep learning. The Graph based methods construct 3D spatio-temporal graphs in order to model the inter- and the intra-frame relationship of pixels or superpixels in a video.Hence they are computationally slower than deep learning methods and are unable to run at real-time. There are 2 main deep learning techniques for semi-supervised video object segmentation: One Shot Video Object Segmentation (OSVOS) and Learning Video Object Segmentation from Static Images (MaskTrack). Following a brief description of the new techniques introduced by these papers for semi-supervised video object segmentation task.<br /> <br /> === OSVOS (One-Shot Video Object Segmentation) ===<br /> <br /> [[File:OSVOS.jpg | 1000px]]<br /> <br /> This paper introduces the technique of using a frame-by-frame object segmentation without any temporal information from the previous frames of the video. The paper uses a VGG-16 network with pre-trained weights from image classification task. This network is then converted into a fully-connected network (FCN) by removing the fully connected dense layers at the end and adding convolution layers to generate a segment mask of the input. This network is then trained on the DAVIS 2016 dataset.<br /> <br /> During testing, the trained VGG-16 FCN is fine-tuned using the first frame of the video using the ground truth. Because this is a semi-supervised case, the segmented mask (ground truth) for the first frame is available. 
The first frame data is augmented by zooming/rotating/flipping the first frame and the associated segment mask.<br /> <br /> === MaskTrack (Learning Video Object Segmentation from Static Images) ===<br /> <br /> [[File:MaskTrack.jpg | 500px]]<br /> <br /> MaskTrack takes the output of the previous frame to improve its predictions to generate the segmentation mask for the next frame. Thus the input to the network is 4 channel wide (3 RGB channels from the frame at time (t) + 1 binary segmentation mask from frame (t-1)). The output of the network is the binary segmentation mask for frame at time (t). Using the binary segmentation mask (referred to as guided object segmentation in the paper), the network is able to use some temporal information from previous frame to improve its segmentation mask prediction for the next frame.<br /> <br /> The model of the MaskTrack network is similar to a modular VGG-16 and is referred to as MaskTrack ConvNet in the paper. The network is trained offline on saliency segmentation datasets: ECSSD, MSRA 10K, SOD and PASCAL-S. The input mask for the binary segmentation mask channel is generated via non-rigid deformation and affine transformation of the ground truth segmentation mask. Similar data-augmentation techniques are also used during online training. Just like OSVOS, MaskTrack uses the first frame ground truth (with augmented images) to fine-tune the network to improve prediction score for the particular video sequence.<br /> <br /> A parallel ConvNet network is used to generate predicted segment mask based on the optical flow magnitude. The optical flow between 2 frames is calculated using the EpicFlow algorithm. The output of the two networks is combined using averaging operation to generate the final predicted segmented mask.<br /> <br /> == Dataset ==<br /> The three major datasets used in this paper are DAVIS-2016, DAVIS-2017 and Segtrack v2. 
DAVIS-2016 dataset provides video sequences with only one segment mask for all salient objects. DAVIS-2017 improves the ground truth data by providing segmentation mask for each salient object as a separate color segment mask. Segtrack v2 also provides multiple segmentation mask for all salient objects in the video sequence. These datasets try to recreate real-life scenarios like occlusions, low resolution videos, background clutter, motion blur, fast motion etc.<br /> <br /> == MaskRNN: Introduction ==<br /> Most techniques mentioned above don’t work directly on instance level segmentation of the objects through the video sequence. The above approaches focus on image segmentation on each frame and using additional information (mask propagation and optical flow) from the preceding frame perform predictions for the current frame. To address the instance level segmentation problem, MaskRNN proposes a framework where the salient objects are tracked and segmented by capturing the temporal information in the video sequence using a recurrent neural network.<br /> <br /> == MaskRNN: Overview ==<br /> In a video sequence I = {I¬1, I2, …, IT}, the sequence of T frames are given as input to the network, where the video sequence contains N salient objects. The ground truth for the first frame y*1 is also provided for N salient objects.<br /> In this paper, the problem is formulated as a time dependency problem and using a recurrent neural network, the prediction of the previous frame influences the prediction of the next frame. The approach also computes the optical flow between frames and uses that as the input to the neural network. The optical flow is also used to align the output of the predicted mask. “The warped prediction, the optical flow itself, and the appearance of the current frame are then used as input for N deep nets, one for each of the N objects.”[1 - MaskRNN] Each deep net is a made of a object localization network and a binary segmentation network. 
The binary segmentation network is used to generate the segmentation mask for an object. The object localization network is used to alleviate outliers from the predictions. The final prediction of the segmentation mask is generated by merging the predictions of the 2 networks. For N objects, there are N deep nets, one predicting the mask for each salient object. The predictions are then merged into a single prediction using an argmax operation at test time.<br /> <br /> == MaskRNN: Multiple Instance Level Segmentation ==<br /> <br /> [[File:2ObjectSeg.jpg | 850px]]<br /> <br /> Image segmentation requires producing a pixel-level segmentation mask, and this can become a multi-class problem. Instead, using the approach from [2- Mask R-CNN], the task is converted into a multiple binary segmentation problem. A segmentation mask is predicted separately for each salient object, and thus we get a binary segmentation problem per object. The binary segments are combined using an argmax operation where each pixel is assigned to the object with the largest predicted probability.<br /> <br /> === MaskRNN: Binary Segmentation Network ===<br /> <br /> [[File:MaskRNNDeepNet.jpg | 850px]]<br /> <br /> The above picture shows a single deep net employed for predicting the segment mask for one salient object in the video frame. The deep net consists of 2 networks: a binary segmentation network and an object localization network. The binary segmentation network is split into two streams: an appearance stream and a flow stream. The input of the appearance stream is the RGB frame at time (t) and the warped prediction of the binary segmentation mask from time (t-1). The warping function uses the optical flow between frame (t-1) and frame (t) to generate a new binary segmentation mask for frame (t). The input to the flow stream is the concatenation of the optical flow magnitude between frames (t-1) to (t) and frames (t) to (t+1) and the warped prediction of the segmentation mask from frame (t-1).
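The argmax merging described above — each pixel assigned to the object with the largest predicted probability — can be sketched as follows. Treating unclaimed pixels as background via a fixed 0.5 score is an illustrative assumption, not a detail from the paper:

```python
import numpy as np

def merge_instances(prob_maps, bg_score=0.5):
    # prob_maps: (N, H, W) foreground probabilities, one map per object.
    # Prepend a constant background map, then take the per-pixel argmax:
    # label 0 is background, label i (1..N) is object i.
    probs = np.stack([np.full(prob_maps.shape[1:], bg_score), *prob_maps])
    return probs.argmax(axis=0)
```

For two 2x2 probability maps, a pixel where neither object exceeds the background score is labelled 0, otherwise it takes the index of the strongest object.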
The magnitude of the optical flow is replicated into an RGB format before feeding it to the flow stream. The network architecture closely resembles a VGG-16 network without the fully connected layers at the end. The fully connected layers are replaced with convolutional and bilinear interpolation upsampling layers to generate a binary segment mask. This technique is borrowed from the Fully Convolutional Network mentioned above. The outputs of the flow stream and the appearance stream are linearly combined and a sigmoid function is applied to the result to generate the binary mask for the i-th object. All parts of the network are fully differentiable, and thus the model can be trained end-to-end.<br /> <br /> === MaskRNN: Object Localization Network: ===<br /> The object localization network generates a bounding box for the salient object in the frame using a technique similar to the Faster R-CNN method of object localization: ROI pooling is performed on the features of the region proposals (the bounding box proposals here), and the pooled features are passed through fully connected layers to perform bounding box regression. The predicted bounding box is enlarged by a factor of 1.25 and combined with the output of the binary segmentation mask. Only the segment mask inside the bounding box is used for prediction, and the pixels outside of the bounding box are set to zero. MaskRNN uses the convolutional feature output of the appearance stream as the input to the ROI-pooling layer to generate the predicted bounding box.<br /> <br /> == MaskRNN: Implementation Details ==<br /> The deep net is first trained offline on a set of static images. The ground truth is randomly perturbed locally to generate the imperfect mask from frame (t-1). Two different networks are trained offline separately for the DAVIS-2016 and DAVIS-2017 datasets for a fair evaluation on both datasets.
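The bounding-box filtering described in the object localization section can be sketched as follows. The (x0, y0, x1, y1) box format, the centre-preserving enlargement, and the clamping to image bounds are illustrative assumptions:

```python
import numpy as np

def mask_outside_box(prob, box, scale=1.25):
    # prob: (H, W) predicted segmentation probabilities for one object.
    # box: (x0, y0, x1, y1) bounding box from the localization net.
    # Enlarge the box about its centre by `scale`, then zero out every
    # pixel outside the enlarged (clamped) box, as described above.
    H, W = prob.shape
    x0, y0, x1, y1 = box
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    hw, hh = (x1 - x0) * scale / 2.0, (y1 - y0) * scale / 2.0
    x0, x1 = max(int(cx - hw), 0), min(int(np.ceil(cx + hw)), W)
    y0, y1 = max(int(cy - hh), 0), min(int(np.ceil(cy + hh)), H)
    out = np.zeros_like(prob)
    out[y0:y1, x0:x1] = prob[y0:y1, x0:x1]
    return out
```

On a 10x10 all-ones probability map with box (4, 4, 6, 6), only the enlarged central window survives; everything outside is zeroed.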
After both the object localization net and the binary segmentation network have been trained, the temporal information in the network is used to further improve the segmented prediction results. Because of GPU memory constraints, the RNN is only able to backpropagate the gradients back through 7 frames, which limits the long-term temporal information it can learn. <br /> <br /> For optical flow, a pre-trained FlowNet2.0 is used to compute the optical flow between frames. <br /> <br /> The deep nets (without the RNN) are then fine-tuned during test time by online training the networks on the ground truth of the first frame and some augmentations of the first frame data. The learning rate is set to 10^-5 for online training for 200 iterations.<br /> <br /> == MaskRNN: Experimental Results ==<br /> === Evaluation Metrics ===<br /> There are 3 different metrics used for performance analysis of video object segmentation techniques:<br /> <br /> 1. Region Similarity (Jaccard Index): Region similarity or intersection-over-union is used to capture the precision of the area covered by the predicted segmentation mask compared to the ground truth segmentation mask.<br /> <br /> [[File:IoU.jpg | 200px]]<br /> <br /> 2. Contour Accuracy (F-score): This metric measures the accuracy of the boundary of the predicted segment mask against the ground truth segment mask using bipartite matching between the boundary pixels of the masks.<br /> <br /> [[File:Fscore.jpg | 200px]]<br /> <br /> 3.
Temporal Stability: This estimates the degree of deformation needed to transform the segmentation masks from one frame to the next.<br /> <br /> Region similarity measures how well the pixels of the two masks match, contour accuracy measures the accuracy of the contours, and temporal stability captures how smoothly the predicted mask evolves between frames.<br /> <br /> === Ablation Study ===<br /> <br /> The ablation study summarizes how the different components contribute to the algorithm, evaluated on the DAVIS-2016 and DAVIS-2017 datasets.<br /> <br /> [[File:MaskRNNTable2.jpg | 700px]]<br /> <br /> The above table presents the contribution of each component of the network to the final prediction score. We observe that online fine-tuning improves the performance by a large margin. Adding the RNN, the localization net, and the flow stream (FStream) all positively affect the performance of the deep net.<br /> <br /> === Quantitative Evaluation ===<br /> <br /> The authors use DAVIS-2016, DAVIS-2017 and Segtrack v2 to compare the performance of the proposed approach to other methods based on foreground-background video object segmentation and multiple instance-level video object segmentation.<br /> <br /> [[File:MaskRNNTable3.jpg | 700px]]<br /> <br /> The above table shows the results for mean contour accuracy and region similarity. The MaskRNN method outperforms all previously proposed methods. A significant performance gain is obtained by employing a recurrent neural network to learn the recurrence relationship and an object localization network to improve the prediction results.<br /> <br /> The following table shows the improvements in the state of the art achieved by MaskRNN on the DAVIS-2017 and the SegTrack v2 datasets.<br /> <br /> [[File:MaskRNNTable4.jpg | 700px]]<br /> <br /> == Conclusion ==<br /> In this paper, a novel approach to the instance-level video object segmentation task is presented which performs better than the current state of the art. The long-term recurrence relationship is learnt using an RNN.
The object localization network is added to improve the accuracy of the system. Using online fine-tuning, the network is adapted to make better predictions for the current video sequence.</div> Jssambee http://wiki.math.uwaterloo.ca/statwiki/index.php?title=stat946w18/MaskRNN:_Instance_Level_Video_Object_Segmentation&diff=34179 stat946w18/MaskRNN: Instance Level Video Object Segmentation 2018-03-15T03:37:06Z <p>Jssambee: /* MaskRNN: Object Localization Network: */</p> <hr /> <div>== Introduction ==<br /> Deep learning has produced state-of-the-art results in many computer vision tasks like image classification, object localization, object detection, object segmentation, semantic segmentation and instance-level video object segmentation. Image classification classifies the image based on its prominent objects. Object localization is the task of finding objects’ locations in the frame. The object segmentation task involves producing a pixel map which represents the pixel-wise location of the objects in the image. The semantic segmentation task attempts to segment the image into meaningful parts. Instance-level video object segmentation is the task of consistent object segmentation in video sequences.<br /> <br /> There are 2 different types of video object segmentation: unsupervised and semi-supervised. In unsupervised video object segmentation, the task is to find the salient objects and track the main objects in the video. In a semi-supervised setting, the ground truth mask of the salient objects is provided for the first frame; the task is thus simplified to tracking only the required objects. In this paper we look at a semi-supervised video object segmentation technique.<br /> <br /> == Background Papers ==<br /> Video object segmentation has been performed using spatio-temporal graphs and deep learning.
The graph-based methods construct 3D spatio-temporal graphs in order to model the inter- and intra-frame relationships of pixels or superpixels in a video. Hence they are computationally slower than deep learning methods and are unable to run in real time. There are 2 main deep learning techniques for semi-supervised video object segmentation: One Shot Video Object Segmentation (OSVOS) and Learning Video Object Segmentation from Static Images (MaskTrack). A brief description of the new techniques introduced by these papers for the semi-supervised video object segmentation task follows.<br /> <br /> === OSVOS (One-Shot Video Object Segmentation) ===<br /> <br /> [[File:OSVOS.jpg | 1000px]]<br /> <br /> This paper introduces a frame-by-frame object segmentation technique that uses no temporal information from the previous frames of the video. The paper uses a VGG-16 network with weights pre-trained on an image classification task. This network is then converted into a fully convolutional network (FCN) by removing the fully connected dense layers at the end and adding convolution layers to generate a segment mask of the input. This network is then trained on the DAVIS 2016 dataset.<br /> <br /> During testing, the trained VGG-16 FCN is fine-tuned on the first frame of the video using the ground truth. Because this is a semi-supervised case, the segmented mask (ground truth) for the first frame is available. The first frame data is augmented by zooming/rotating/flipping the first frame and the associated segment mask.</div> Jssambee http://wiki.math.uwaterloo.ca/statwiki/index.php?title=stat946w18/MaskRNN:_Instance_Level_Video_Object_Segmentation&diff=34176 stat946w18/MaskRNN: Instance Level Video Object Segmentation 2018-03-15T03:08:41Z <p>Jssambee: /* Background Papers */</p> <hr /> <div>== Introduction ==<br /> Deep Learning has produced state of the art results in many computer vision tasks like image classification, object localization, object detection, object segmentation, semantic segmentation and instance level video object segmentation.
Image classification classifies the image based on its prominent objects. Object localization is the task of finding objects’ locations in the frame. The object segmentation task involves producing a pixel map which represents the pixel-wise location of the objects in the image. The semantic segmentation task attempts to segment the image into meaningful parts. Instance-level video object segmentation is the task of consistent object segmentation in video sequences.<br /> <br /> There are 2 different types of video object segmentation: unsupervised and semi-supervised. In unsupervised video object segmentation, the task is to find the salient objects and track the main objects in the video. In a semi-supervised setting, the ground truth mask of the salient objects is provided for the first frame; the task is thus simplified to tracking only the required objects. In this paper we look at a semi-supervised video object segmentation technique.<br /> <br /> == Background Papers ==<br /> Video object segmentation has been performed using spatio-temporal graphs and deep learning. The graph-based methods construct 3D spatio-temporal graphs in order to model the inter- and intra-frame relationships of pixels or superpixels in a video. Hence they are computationally slower than deep learning methods and are unable to run in real time. There are 2 main deep learning techniques for semi-supervised video object segmentation: One Shot Video Object Segmentation (OSVOS) and Learning Video Object Segmentation from Static Images (MaskTrack). A brief description of the new techniques introduced by these papers for the semi-supervised video object segmentation task follows.<br /> <br /> === OSVOS (One-Shot Video Object Segmentation) ===<br /> <br /> [[File:OSVOS.jpg | 1000px]]<br /> <br /> This paper introduces a frame-by-frame object segmentation technique that uses no temporal information from the previous frames of the video.
The paper uses a VGG-16 network with weights pre-trained on an image classification task. This network is then converted into a fully convolutional network (FCN) by removing the fully connected dense layers at the end and adding convolution layers to generate a segment mask of the input. This network is then trained on the DAVIS 2016 dataset.<br /> <br /> During testing, the trained VGG-16 FCN is fine-tuned on the first frame of the video using the ground truth. Because this is a semi-supervised case, the segmented mask (ground truth) for the first frame is available. The first frame data is augmented by zooming/rotating/flipping the first frame and the associated segment mask.<br /> <br /> === MaskTrack (Learning Video Object Segmentation from Static Images) ===<br /> <br /> [[File:MaskTrack.jpg | 500px]]<br /> <br /> MaskTrack feeds the output from the previous frame back into the network to generate the segmentation mask for the current frame. Thus the input to the network is 4 channels wide (3 RGB channels from the frame at time (t) + 1 binary segmentation mask from frame (t-1)). The output of the network is the binary segmentation mask for the frame at time (t). Using the input binary segmentation mask (referred to as guided object segmentation in the paper), the network is able to use some temporal information from the previous frame to improve its segmentation mask prediction for the current frame.<br /> <br /> The MaskTrack network is similar to a modified VGG-16 and is referred to as the MaskTrack ConvNet in the paper. The network is trained offline on saliency segmentation datasets: ECSSD, MSRA 10K, SOD and PASCAL-S. The input mask for the binary segmentation mask channel is generated via non-rigid deformation and affine transformation of the ground truth segmentation mask. Similar data-augmentation techniques are also used during online training.
Just like OSVOS, MaskTrack uses the first frame ground truth (with augmented images) to fine-tune the network and improve the prediction score for the particular video sequence.<br /> <br /> A parallel ConvNet is used to generate a predicted segment mask based on the optical flow magnitude. The optical flow between 2 frames is calculated using the EpicFlow algorithm. The outputs of the two networks are combined using an averaging operation to generate the final predicted segmentation mask.<br /> <br /> == Dataset ==<br /> The three major datasets used in this paper are DAVIS-2016, DAVIS-2017 and Segtrack v2. The DAVIS-2016 dataset provides video sequences with only one segment mask covering all salient objects. DAVIS-2017 improves the ground truth data by providing the segmentation mask for each salient object as a separate color segment mask. Segtrack v2 also provides multiple segmentation masks for all salient objects in the video sequence. These datasets try to recreate real-life scenarios like occlusions, low-resolution video, background clutter, motion blur, fast motion, etc.<br /> <br /> == MaskRNN: Introduction ==<br /> Most techniques mentioned above don’t work directly on instance-level segmentation of the objects through the video sequence. The above approaches focus on image segmentation on each frame and use additional information (mask propagation and optical flow) from the preceding frame to make predictions for the current frame. To address the instance-level segmentation problem, MaskRNN proposes a framework where the salient objects are tracked and segmented by capturing the temporal information in the video sequence using a recurrent neural network.<br /> <br /> == MaskRNN: Overview ==<br /> In a video sequence I = {I_1, I_2, …, I_T}, the sequence of T frames is given as input to the network, where the video sequence contains N salient objects.
The ground truth y*_1 for the first frame is also provided for the N salient objects.<br /> In this paper, the problem is formulated as a time-dependency problem: using a recurrent neural network, the prediction of the previous frame influences the prediction of the next frame. The approach also computes the optical flow between frames and uses it as an input to the neural network. The optical flow is also used to align the output of the predicted mask. “The warped prediction, the optical flow itself, and the appearance of the current frame are then used as input for N deep nets, one for each of the N objects.”[1 - MaskRNN] Each deep net is made of an object localization network and a binary segmentation network. The binary segmentation network is used to generate the segmentation mask for an object. The object localization network is used to alleviate outliers from the predictions. The final prediction of the segmentation mask is generated by merging the predictions of the 2 networks. For N objects, there are N deep nets, one predicting the mask for each salient object. The predictions are then merged into a single prediction using an argmax operation at test time.<br /> <br /> == MaskRNN: Multiple Instance Level Segmentation ==<br /> <br /> [[File:2ObjectSeg.jpg | 850px]]<br /> <br /> Image segmentation requires producing a pixel-level segmentation mask, and this can become a multi-class problem. Instead, using the approach from [2- Mask R-CNN], the task is converted into a multiple binary segmentation problem. A segmentation mask is predicted separately for each salient object, and thus we get a binary segmentation problem per object.
The binary segments are combined using an argmax operation where each pixel is assigned to the object with the largest predicted probability.<br /> <br /> === MaskRNN: Binary Segmentation Network ===<br /> <br /> [[File:MaskRNNDeepNet.jpg | 850px]]<br /> <br /> The above picture shows a single deep net employed for predicting the segment mask for one salient object in the video frame. The deep net consists of 2 networks: a binary segmentation network and an object localization network. The binary segmentation network is split into two streams: an appearance stream and a flow stream. The input of the appearance stream is the RGB frame at time (t) and the warped prediction of the binary segmentation mask from time (t-1). The warping function uses the optical flow between frame (t-1) and frame (t) to generate a new binary segmentation mask for frame (t). The input to the flow stream is the concatenation of the optical flow magnitude between frames (t-1) to (t) and frames (t) to (t+1) and the warped prediction of the segmentation mask from frame (t-1). The magnitude of the optical flow is replicated into an RGB format before feeding it to the flow stream. The network architecture closely resembles a VGG-16 network without the fully connected layers at the end. The fully connected layers are replaced with convolutional and bilinear interpolation upsampling layers to generate a binary segment mask. This technique is borrowed from the Fully Convolutional Network mentioned above. The outputs of the flow stream and the appearance stream are linearly combined and a sigmoid function is applied to the result to generate the binary mask for the i-th object. All parts of the network are fully differentiable, and thus the model can be trained end-to-end.<br /> <br /> === MaskRNN: Object Localization Network: ===<br /> Using a technique similar to the Faster R-CNN method of object localization, the object localization network generates a bounding box for the salient object in the frame.
This bounding box is enlarged by a factor of 1.25 and combined with the output of the binary segmentation mask. Only the segment mask inside the bounding box is used for prediction, and the pixels outside of the bounding box are set to zero. MaskRNN uses the convolutional feature output of the appearance stream as the input to the ROI-pooling layer to generate the predicted bounding box.<br /> <br /> == MaskRNN: Implementation Details ==<br /> The deep net is first trained offline on a set of static images. The ground truth is randomly perturbed locally to generate the imperfect mask from frame (t-1). Two different networks are trained offline separately for the DAVIS-2016 and DAVIS-2017 datasets for a fair evaluation on both datasets. After both the object localization net and the binary segmentation network have been trained, the temporal information in the network is used to further improve the segmented prediction results. Because of GPU memory constraints, the RNN is only able to backpropagate the gradients back through 7 frames, which limits the long-term temporal information it can learn. <br /> <br /> For optical flow, a pre-trained FlowNet2.0 is used to compute the optical flow between frames. <br /> <br /> The deep nets (without the RNN) are then fine-tuned during test time by online training the networks on the ground truth of the first frame and some augmentations of the first frame data. The learning rate is set to 10^-5 for online training for 200 iterations.<br /> <br /> == MaskRNN: Experimental Results ==<br /> === Evaluation Metrics ===<br /> There are 2 different metrics used for performance analysis of video object segmentation techniques:<br /> <br /> 1. Region Similarity (Jaccard Index): Region similarity or intersection-over-union is used to capture the precision of the area covered by the predicted segmentation mask compared to the ground truth segmentation mask.<br /> <br /> [[File:IoU.jpg | 200px]]<br /> <br /> 2.
Contour Accuracy (F-score): This metric measures the accuracy in the boundary of the predicted segment mask and the ground truth segment mask using bipartite matching between the bounding pixels of the masks.<br /> <br /> [[File:Fscore.jpg | 200px]]<br /> <br /> === Ablation Study ===<br /> <br /> The ablation study summarized how the different components contributed to the algorithm evaluated on DAVIS-2016 and DAVIS-2017 datasets.<br /> <br /> [[File:MaskRNNTable2.jpg | 700px]]<br /> <br /> The above table presents the contribution of each component of the network to the final prediction score. We observe that online fine-tuning improves the performance by a large margin. Addition of RNN/Localization Net and FStream all seem to positively affect the performance of the deep net.<br /> <br /> === Quantitative Evaluation ===<br /> <br /> The authors use DAVIS-2016, DAVIS-2017 and Segtrack v2 to compare the performance of the proposed approach to other methods based on foreground-background video object segmentation and multiple instance-level video object segmentation.<br /> <br /> [[File:MaskRNNTable3.jpg | 700px]]<br /> <br /> The above table shows the results for contour accuracy mean and region similarity. The MaskRNN method seems to outperform all previously proposed methods. The performance gain is significant by employing a Recurrent Neural Network for learning recurrence relationship and using a object localization network to improve prediction results.<br /> <br /> The following table shows the improvements in the state of the art achieved by MaskRNN on the DAVIS-2017 and the SegTrack v2 dataset.<br /> <br /> [[File:MaskRNNTable4.jpg | 700px]]<br /> <br /> == Conclusion ==<br /> In this paper a novel approach to instance level video object segmentation task is presented which performs better than current state of the art. The long-term recurrence relationship is learnt using an RNN. The object localization network is added to improve accuracy of the system. 
Using online fine-tuning, the network is adjusted to better predict the current video sequence.</div> Jssambee http://wiki.math.uwaterloo.ca/statwiki/index.php?title=stat946w18/MaskRNN:_Instance_Level_Video_Object_Segmentation&diff=34175 stat946w18/MaskRNN: Instance Level Video Object Segmentation 2018-03-15T03:08:19Z <p>Jssambee: /* Background Papers */</p> <hr /> <div>== Introduction ==<br /> Deep learning has produced state-of-the-art results in many computer vision tasks, including image classification, object localization, object detection, object segmentation, semantic segmentation, and instance-level video object segmentation. Image classification labels an image based on its prominent objects. Object localization is the task of finding the objects’ locations in the frame. Object segmentation involves producing a pixel map that gives the pixel-wise location of the objects in the image. Semantic segmentation partitions the image into semantically meaningful regions. Instance-level video object segmentation is the task of segmenting each object instance consistently across a video sequence.<br /> <br /> There are two settings for video object segmentation: unsupervised and semi-supervised. In unsupervised video object segmentation, the task is to find and track the salient objects in the video without any annotation. In the semi-supervised setting, the ground truth mask of the salient objects is provided for the first frame, so the task is simplified to tracking and segmenting only those objects. In this paper we look at a semi-supervised video object segmentation technique.<br /> <br /> == Background Papers ==<br /> Video object segmentation has been performed using both spatio-temporal graphs and deep learning. Graph-based methods construct 3D spatio-temporal graphs to model the inter- and intra-frame relationships of pixels or superpixels in a video. Hence they are computationally slower than deep learning methods and cannot run in real time. 
There are two main deep learning techniques for semi-supervised video object segmentation: One-Shot Video Object Segmentation (OSVOS) and Learning Video Object Segmentation from Static Images (MaskTrack). A brief description of the techniques introduced by these papers follows.<br /> <br /> === OSVOS (One-Shot Video Object Segmentation) ===<br /> <br /> [[File:OSVOS.jpg | 1000px]]<br /> <br /> This paper introduces frame-by-frame object segmentation without any temporal information from the previous frames of the video. The paper uses a VGG-16 network pre-trained on an image classification task. This network is converted into a fully convolutional network (FCN) by removing the fully connected dense layers at the end and adding convolutional layers to generate a segment mask of the input. The network is then trained on the DAVIS 2016 dataset.<br /> <br /> During testing, the trained VGG-16 FCN is fine-tuned on the first frame of the video using its ground truth mask. Because this is a semi-supervised case, the segmentation mask (ground truth) for the first frame is available. The first frame data is augmented by zooming/rotating/flipping the first frame and the associated segment mask.<br /> <br /> === MaskTrack (Learning Video Object Segmentation from Static Images) ===<br /> <br /> [[File:MaskTrack.jpg | 500px]]<br /> <br /> MaskTrack feeds the prediction from the previous frame back into the network to generate the segmentation mask for the current frame. Thus the input to the network is four channels wide (3 RGB channels from the frame at time (t) + 1 binary segmentation mask from frame (t-1)). The output of the network is the binary segmentation mask for the frame at time (t). 
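The four-channel input described above amounts to a simple channel concatenation. The following NumPy sketch is illustrative only — the array names and shapes are assumptions, not from the paper:

```python
import numpy as np

# Hypothetical shapes: an H x W RGB frame at time t, and the binary
# segmentation mask predicted for frame t-1 (values in {0, 1}).
H, W = 480, 854
frame_t = np.random.rand(H, W, 3).astype(np.float32)       # 3 RGB channels
mask_t_minus_1 = (np.random.rand(H, W) > 0.5).astype(np.float32)

# Stack into the 4-channel input: RGB from frame t + mask from frame t-1.
net_input = np.concatenate([frame_t, mask_t_minus_1[..., None]], axis=-1)
assert net_input.shape == (H, W, 4)
```

In a real pipeline the mask channel would come from the network's previous prediction (or a deformed ground truth mask during training) rather than random data.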
Using the binary segmentation mask (referred to as guided object segmentation in the paper), the network is able to use some temporal information from the previous frame to improve its segmentation mask prediction for the next frame.<br /> <br /> The MaskTrack network is a modified VGG-16 and is referred to as the MaskTrack ConvNet in the paper. The network is trained offline on saliency segmentation datasets: ECSSD, MSRA10K, SOD, and PASCAL-S. The input for the binary segmentation mask channel is generated via non-rigid deformation and affine transformation of the ground truth segmentation mask. Similar data-augmentation techniques are also used during online training. Just like OSVOS, MaskTrack uses the first-frame ground truth (with augmented images) to fine-tune the network and improve the prediction score for the particular video sequence.<br /> <br /> A parallel ConvNet is used to generate a predicted segment mask based on the optical flow magnitude. The optical flow between two frames is calculated using the EpicFlow algorithm. The outputs of the two networks are combined by averaging to generate the final predicted segment mask.<br /> <br /> == Dataset ==<br /> The three major datasets used in this paper are DAVIS-2016, DAVIS-2017, and SegTrack v2. The DAVIS-2016 dataset provides video sequences with a single segment mask covering all salient objects. DAVIS-2017 improves the ground truth by providing a separate segmentation mask for each salient object. SegTrack v2 also provides separate segmentation masks for all salient objects in the video sequence. These datasets try to recreate real-life scenarios such as occlusion, low-resolution video, background clutter, motion blur, and fast motion.<br /> <br /> == MaskRNN: Introduction ==<br /> Most techniques mentioned above do not directly address instance-level segmentation of the objects through the video sequence. 
The above approaches segment each frame independently, using additional information (mask propagation and optical flow) from the preceding frame to inform predictions for the current frame. To address the instance-level segmentation problem, MaskRNN proposes a framework where the salient objects are tracked and segmented by capturing the temporal information in the video sequence using a recurrent neural network.<br /> <br /> == MaskRNN: Overview ==<br /> In a video sequence &lt;math&gt;I = \{I_1, I_2, \ldots, I_T\}&lt;/math&gt;, the sequence of T frames is given as input to the network, where the video sequence contains N salient objects. The ground truth &lt;math&gt;y_1^*&lt;/math&gt; for the first frame is also provided for the N salient objects.<br /> In this paper, the problem is formulated as a time-dependency problem: using a recurrent neural network, the prediction for the previous frame influences the prediction for the next frame. The approach also computes the optical flow between frames and uses it as input to the neural network. The optical flow is additionally used to align (warp) the previously predicted mask to the current frame. “The warped prediction, the optical flow itself, and the appearance of the current frame are then used as input for N deep nets, one for each of the N objects.”[1 - MaskRNN] Each deep net is made of an object localization network and a binary segmentation network. The binary segmentation network generates the segmentation mask for an object, while the object localization network is used to suppress outliers in the predictions. The final segmentation mask is generated by merging the predictions of the two networks. For N objects, there are N deep nets, each predicting the mask for one salient object. 
The predictions are then merged into a single prediction using an argmax operation at test time.<br /> <br /> == MaskRNN: Multiple Instance Level Segmentation ==<br /> <br /> [[File:2ObjectSeg.jpg | 850px]]<br /> <br /> Image segmentation requires producing a pixel-level segmentation mask, which can become a multi-class problem. Instead, following the approach from [2 - Mask R-CNN], the task is converted into multiple binary segmentation problems: a segmentation mask is predicted separately for each salient object. The binary segments are then combined using an argmax operation in which each pixel is assigned to the object with the largest predicted probability.<br /> <br /> === MaskRNN: Binary Segmentation Network ===<br /> <br /> [[File:MaskRNNDeepNet.jpg | 850px]]<br /> <br /> The above picture shows a single deep net employed to predict the segment mask for one salient object in the video frame. The network consists of two networks: a binary segmentation network and an object localization network. The binary segmentation network is split into two streams: an appearance stream and a flow stream. The input to the appearance stream is the RGB frame at time (t) and the warped prediction of the binary segmentation mask from time (t-1). The warping function uses the optical flow between frame (t-1) and frame (t) to map the previous binary segmentation mask onto frame (t). The input to the flow stream is the concatenation of the optical flow magnitude between frames (t-1) to (t) and frames (t) to (t+1) and the warped prediction of the segmentation mask from frame (t-1). The magnitude of the optical flow is replicated into an RGB format before being fed to the flow stream. The network architecture closely resembles a VGG-16 network without the fully connected layers at the end. The fully connected layers are replaced with convolutional and bilinear-interpolation upsampling layers to generate a binary segment mask. 
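The mask-warping step described above can be sketched as a nearest-neighbour backward warp. This is an illustrative sketch, not the paper's implementation: the function name and the backward-warp flow convention are assumptions, and nearest-neighbour sampling is used only so the warped mask stays binary.

```python
import numpy as np

def warp_mask(mask, flow):
    """Warp a binary mask from frame t-1 into frame t using optical flow.

    mask: (H, W) binary array predicted for frame t-1.
    flow: (H, W, 2) array; flow[y, x] = (dy, dx) is the displacement of
          the pixel that lands at (y, x) in frame t (backward-warp form).
    """
    H, W = mask.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Look up where each target pixel came from, clamped to the image.
    src_y = np.clip(np.round(ys - flow[..., 0]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs - flow[..., 1]).astype(int), 0, W - 1)
    return mask[src_y, src_x]

# A constant flow of (0, 2) shifts the mask two pixels to the right.
mask = np.zeros((4, 6)); mask[1:3, 1:3] = 1
flow = np.zeros((4, 6, 2)); flow[..., 1] = 2
warped = warp_mask(mask, flow)
```

In practice the flow field would come from an optical flow network rather than a constant, and bilinear sampling could be used if soft (probabilistic) masks are acceptable.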
This fully convolutional design is borrowed from the FCN approach mentioned above. The outputs of the flow stream and the appearance stream are linearly combined, and a sigmoid function is applied to the result to generate the binary mask for the i-th object. All parts of the network are fully differentiable, so it can be trained end-to-end.<br /> <br /> === MaskRNN: Object Localization Network ===<br /> Using a technique similar to the Faster R-CNN method of object localization, the object localization network generates a bounding box for the salient object in the frame. This bounding box is enlarged by a factor of 1.25 and combined with the output of the binary segmentation network: only the segment mask inside the bounding box is used for prediction, and pixels outside the bounding box are set to zero. MaskRNN uses the convolutional feature output of the appearance stream as the input to the ROI-pooling layer to generate the predicted bounding box.<br /> <br /> == MaskRNN: Implementation Details ==<br /> The deep net is first trained offline on a set of static images. The ground truth is randomly perturbed locally to simulate an imperfect mask from frame (t-1). Two different networks are trained offline separately on the DAVIS-2016 and DAVIS-2017 datasets for a fair evaluation on both. After both the object localization and binary segmentation networks have been trained, the temporal information in the network is used to further improve the segmentation predictions. Because of GPU memory constraints, the RNN can only backpropagate gradients through 7 frames when learning long-term temporal information.<br /> <br /> For optical flow, a pre-trained FlowNet 2.0 is used to compute the optical flow between frames.<br /> <br /> The deep nets (without the RNN) are then fine-tuned at test time by online training on the ground truth of the first frame and some augmentations of the first-frame data. 
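The bounding-box filtering described in the localization section (enlarge the box by a factor of 1.25, then zero out segmentation probabilities outside it) can be sketched as follows; the function name and the (y0, x0, y1, x1) box format are illustrative assumptions:

```python
import numpy as np

def apply_localization(mask_prob, box, scale=1.25):
    """Zero out segmentation probabilities outside an enlarged bounding box.

    mask_prob: (H, W) probabilities from the binary segmentation network.
    box: (y0, x0, y1, x1) bounding box from the localization network.
    scale: enlargement factor about the box centre (1.25 in the paper).
    """
    H, W = mask_prob.shape
    y0, x0, y1, x1 = box
    cy, cx = (y0 + y1) / 2, (x0 + x1) / 2
    hh, hw = (y1 - y0) * scale / 2, (x1 - x0) * scale / 2
    # Enlarged box, clamped to the image bounds.
    yy0, yy1 = max(0, int(cy - hh)), min(H, int(np.ceil(cy + hh)))
    xx0, xx1 = max(0, int(cx - hw)), min(W, int(np.ceil(cx + hw)))
    out = np.zeros_like(mask_prob)
    out[yy0:yy1, xx0:xx1] = mask_prob[yy0:yy1, xx0:xx1]
    return out

probs = np.ones((10, 10))
filtered = apply_localization(probs, (4, 4, 6, 6))
```

With the 2x2 box centred at (5, 5) and a 1.25 enlargement, only a small central window of the probability map survives; everything else is suppressed as a potential outlier.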
The learning rate is set to &lt;math&gt;10^{-5}&lt;/math&gt; for online training for 200 iterations.<br /> <br /> == MaskRNN: Experimental Results ==<br /> === Evaluation Metrics ===<br /> Two metrics are commonly used to evaluate the performance of video object segmentation techniques:<br /> <br /> 1. Region Similarity (Jaccard Index): Region similarity, or intersection-over-union, measures the overlap between the area covered by the predicted segmentation mask and the ground truth segmentation mask.<br /> <br /> [[File:IoU.jpg | 200px]]<br /> <br /> 2. Contour Accuracy (F-score): This metric measures how accurately the boundary of the predicted segment mask matches that of the ground truth mask, using bipartite matching between the boundary pixels of the masks.<br /> <br /> [[File:Fscore.jpg | 200px]]<br /> <br /> === Ablation Study ===<br /> <br /> The ablation study summarizes how the different components contribute to the algorithm, evaluated on the DAVIS-2016 and DAVIS-2017 datasets.<br /> <br /> [[File:MaskRNNTable2.jpg | 700px]]<br /> <br /> The above table presents the contribution of each component of the network to the final prediction score. We observe that online fine-tuning improves performance by a large margin. The addition of the RNN, the localization net, and the flow stream (FStream) all positively affect the performance of the deep net.<br /> <br /> === Quantitative Evaluation ===<br /> <br /> The authors use DAVIS-2016, DAVIS-2017, and SegTrack v2 to compare the performance of the proposed approach to other methods on foreground-background video object segmentation and multiple instance-level video object segmentation.<br /> <br /> [[File:MaskRNNTable3.jpg | 700px]]<br /> <br /> The above table shows the results for mean contour accuracy and region similarity. The MaskRNN method outperforms all previously proposed methods. 
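The region-similarity metric defined in the evaluation section is a straightforward set overlap between binary masks; a minimal sketch, assuming masks are given as NumPy arrays:

```python
import numpy as np

def jaccard(pred, gt):
    """Region similarity J: intersection-over-union of two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:          # both masks empty: define J = 1
        return 1.0
    return np.logical_and(pred, gt).sum() / union

a = np.zeros((4, 4)); a[:2, :2] = 1        # 4 foreground pixels
b = np.zeros((4, 4)); b[:2, :4] = 1        # 8 foreground pixels, contains a
j = jaccard(a, b)                          # intersection 4, union 8
```

The benchmark scores report this quantity averaged over frames (and over objects in the multi-instance setting); the contour-accuracy F-score additionally requires extracting and matching boundary pixels, which is omitted here.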
The performance gain comes from employing a recurrent neural network to learn the recurrence relationship and from using an object localization network to improve the prediction results.<br /> <br /> The following table shows the improvements in the state of the art achieved by MaskRNN on the DAVIS-2017 and the SegTrack v2 datasets.<br /> <br /> [[File:MaskRNNTable4.jpg | 700px]]<br /> <br /> == Conclusion ==<br /> In this paper, a novel approach to the instance-level video object segmentation task is presented which performs better than the current state of the art. The long-term recurrence relationship is learnt using an RNN. An object localization network is added to improve the accuracy of the system. Using online fine-tuning, the network is adjusted to better predict the current video sequence.</div> Jssambee