Search results


Page title matches

  • ...polymorphisms (SNPs), insertions, and deletions (indels). Calling SNPs and small indels is technically challenging since it requires a program to distinguish ... This paper aims to solve the problem of calling SNPs and small indels using a convolutional neural net by casting the reads as images and ...
    18 KB (2,856 words) - 04:24, 16 December 2020

Page text matches

  • \small W(p_r, p_g) = \underset{\gamma\sim\Pi(p_r, p_g)} {\inf}\pmb{\mathbb{E}}_{(x ...)</math>, and corresponding densities with lower case letters, i.e. <math>\small p(x)</math>. ...
    21 KB (3,416 words) - 22:25, 25 April 2018
  • ...intrinsic dimension of the data. Since <math>\hat{n}</math> could be very small compared to the dimension <math>n</math> of the data, this algorithm is com Since <math> \beta </math> is very small, and we want to avoid large value of <math> z </math>, we could change vari ...
    7 KB (1,209 words) - 09:46, 30 August 2017
  • ...ction that is mainly focused on modeling large dissimilarities rather than small ones. As a result of that, they do not provide good visualizations of data ...l the entropy of <math> \mathbf{ P_i} </math> is within some predetermined small tolerance of <math> \mathbf{\log_2 M } </math>. ...
    15 KB (2,530 words) - 09:45, 30 August 2017
  • So, we can take some small number of samples <math>y</math>, compute the sparse representation <math>s ...ty is now clear: when a signal has a sparse expansion, one can discard the small coefficients without much perceptual loss. Formally, consider <math>f_{S}(t ...
    13 KB (2,258 words) - 09:45, 30 August 2017
  • In this study, NN LMs are trained only on a small part of the data (which are in-domain corpora) plus some randomly subsample performance for small values of M, and even with M = 2000, ...
    9 KB (1,542 words) - 09:46, 30 August 2017
  • |Week of Nov 25 || Yuliang Shi || || Small-gan: Speeding up gan training using core-sets || [http://proceedings.mlr.pr ...
    5 KB (642 words) - 23:29, 1 December 2021
  • ...adding more convolutional layers, which is feasible due to the use of very small (3 × 3) convolution filters in all layers. As a result, they come up with s ...d through a stack of convolutional (conv.) layers with filters with a very small receptive field: 3 × 3 with a convolutional stride of 1 pixel. Spatial poo ...
    11 KB (1,680 words) - 09:46, 30 August 2017
  • ...-art Gaussian Mixture Models-Hidden Markov Model (GMM-HMM) systems in both small and large speech recognition tasks ...lores using multiple convolutional layers, and the system is tested on one small dataset and two large datasets. The results show that CNNs outperform DNNs ...
    11 KB (1,587 words) - 09:46, 30 August 2017
  • ...e a small cost for using a large <math> \mathbf q_{j|i} </math> to model a small <math> \mathbf p_{j|i} </math>. Therefore, the SNE cost function focuses m ...too far away in the two-dimensional map. In SNE, this will result in very small attractive force from datapoint <math> i </math> to these too-distant map p ...
    19 KB (3,223 words) - 09:45, 30 August 2017
  • ...within the sphere (i.e. the data points are approximately uniform in each small local region). ...>) is to let the sphere to contain sufficiently many data points, and also small enough to satisfy the assumption that <math>\,f</math> is approximately con ...
    15 KB (2,484 words) - 09:46, 30 August 2017
  • ...aordinary small (compared to usual font for math formulas). Sometimes this small font helps and sometimes it hurts! One solution to correct this is to simpl ...
    5 KB (769 words) - 22:53, 5 September 2021
  • SSR is small and hard to recognize but contains important info with 90% accuracy. SDR is ...mask M can generate adversarial artifacts. Adversarial artifacts are very small and imperceivable by people but can ruin the classifier. This phenomenon sh ...
    12 KB (1,840 words) - 14:09, 20 March 2018
  • ...refers to instances where the gradient used in backpropagation becomes too small to make discernible differences as the parameters in the model are tuned, a ...epth, it becomes more difficult to train them as gradients may become very small. The authors developed a method that trains a model to fit a residual mappin ...
    6 KB (1,020 words) - 12:01, 3 December 2021
  • ...m of the weights equal to N. Analogous to <math>\beta</math> distribution, small <math>a</math> allows the model to up- or down-scale weights <math>\boldsym ...on at z affects the statistic T(F). Thus, this corrupted z value will have small effect on the statistic T(F). ...
    9 KB (1,489 words) - 02:35, 19 November 2018
  • ...ng only the last n-1 words instead of the whole context. However, even for small n, certain sequences could still be missing. ...robability for even the rarest words, the neural network only calculates a small subset of the most common words. This way, the output vector can be signifi ...
    15 KB (2,517 words) - 09:46, 30 August 2017
  • ...ed for parameter and model selection. Second, regarding the selection of a small representative subgraph as training set, a method based on Expansion factor 6). Repeat the above procedure until the change in EF value is too small (compared to a threshold specified by the user) ...
    10 KB (1,675 words) - 09:46, 30 August 2017
  • ...It is difficult to test the true robustness of the model with a relatively small test set. If a larger data set can be found to help correctly identify othe ...PTB makes it difficult to determine the robustness of the model due to the small size of the test set. Given a larger dataset, the model could be tested to ...
    21 KB (3,373 words) - 07:19, 15 December 2020
  • ...descent with momentum and dropout, where mini-batches were constructed. A small L1 weight penalty was included in the cost function. The model’s weights we ...
    8 KB (1,353 words) - 09:46, 30 August 2017
  • unobserved ones. The small square nodes represent factors, and there is an edge between a variable ...th>x_i</math>. Moreover, Figure 1 shows the notion they use in graphs. The small squares denote potential functions, and, as usual, the shaded and unshaded ...
    17 KB (2,924 words) - 09:46, 30 August 2017
  • ...n and the original unknown matrix recovery are provably accurate even when a small amount of noise is present and corrupts the few observed entries. The error ...fty}} \leq \sqrt{\mu_B / n_2}</math>, where <math>\mu \geq 1</math> and is small. To see that this assumption guarantees dense vectors, consider the case wh ...
    14 KB (2,342 words) - 09:45, 30 August 2017
  • ...nner product between the input feature map and a filter, shifted by <math>\small x</math>. ...nner product between the input feature map and a filter, rotated by <math>\small R</math>. ...
    23 KB (3,814 words) - 22:53, 20 April 2018
  • ...s and knowledge graphs. Another work trained a label cleaning network by a small set of clean labels and used it to reduce the noise in large-scale noisy la ...instances to the peer network. <math>R(T)</math> governs the percentage of small-loss instances to be used in updating the parameters of each network. ...
    15 KB (2,318 words) - 21:02, 11 December 2018
  • ...led data than labelled data. A common situation is to have a comparatively small quantity of labelled data paired with a larger amount of unlabelled data. T ...nd drastically better for when the number of labelled data samples is very small (100 out of 50000). ...
    9 KB (1,554 words) - 09:46, 30 August 2017
  • ...with normal enumeration if we choose to have a dictionary of the words for small values of <math>\tau</math> ...
    4 KB (646 words) - 19:44, 26 October 2017
  • ...nected linear layer as the classifier. To make use of order information of small regions it uses hand-crafted n-grams as features in addition to single word ...h convolutional layers. The essence of CNN is to learn word embeddings for small size regions and each kernel of convolutional layer tries to capture a spec ...
    13 KB (2,188 words) - 12:42, 15 March 2018
  • ...sets if the data is correctly labeled. However, they can be trounced by a small number of incorrect labels, which can be quite challenging to fix. We try t ...alized by [LS], who construct a "hard" training data distribution, where a small percentage of labels is randomly flipped. This label noise then leads to a ...
    18 KB (2,846 words) - 00:18, 5 December 2020
  • ...of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In this work, we propose a meta-learning algori ...c algorithm for meta-learning that trains a model’s parameters such that a small number of gradient updates will lead to fast learning on a new task. The pa ...
    26 KB (4,205 words) - 10:18, 4 December 2017
  • ...as shown in Figure 1. After zooming into Figure 1, as shown in Figure 2, a small amount of perturbation led to misclassify a dog as a hummingbird. ...formation. Most of those transformations are sometimes very sensitive to a small change. Think of the DNN as a set of high-dimensional decision boundary. Wh ...
    17 KB (2,650 words) - 23:54, 30 March 2018
  • ...stablished PDE models exist, but where our amount of available data is too small to guarantee the robustness of convergence in neural network training. In e ...pting to answer the first of the questions above. Specifically, if given a small number of noisy measurements of the solution of the PDE ...
    23 KB (3,762 words) - 15:51, 6 December 2020
  • ...tool in statistical learning, which tries to preserve the variability by a small number of principal components. In the classical method, the principal comp ...The diagnostic plot is shown as following. Clearly, ROBPCA distinguishes a small group of bad leverage points which all three other PCA methods fails to rec ...
    15 KB (2,414 words) - 09:46, 30 August 2017
  • ...especially relevant to situations where the number of observations may be small. ...unctions <math>\,f_t</math> are related to each other, so they all share a small set of features. Formally, the hypothesis is that the functions <math>\,f_t ...
    17 KB (2,834 words) - 09:45, 30 August 2017
  • ...ons in large boxes should be of less significance than small deviations in small boxes. The author claims that predicting the square root of the bounding bo * The loss function treats errors in large bounding boxes the same as small bounding boxes to some extent, which is inconsistent with the relative cont ...
    19 KB (2,746 words) - 16:04, 20 November 2018
  • ...his similarity measure is large for the points within the same cluster and small for points in different clusters. <math>W</math> has non-negative elements ...lized cut takes a small value if the clusters <math>C_k</math> are not too small <ref> Ulrike von Luxburg, A Tutorial on Spectral Clustering, Technical Repo ...
    35 KB (5,767 words) - 09:45, 30 August 2017
  • ...ble if the above representation has just a few large coefficients and many small coefficients. We shall now briefly overview how the transform coding of sig ...<math>\,N</math> may be very large even if the desired <math>\ K</math> is small. ...
    18 KB (2,888 words) - 09:45, 30 August 2017
  • ...over set <math>\displaystyle A</math> but <math>\displaystyle g</math> is small, then <math>\displaystyle \frac{f}{g} </math> would be large and it would r ...
    6 KB (1,083 words) - 09:45, 30 August 2017
  • ...ffer from some technical problems. Most importantly, they are limited to a small vocabulary because of complexity and number of parameters that have to be t ...f computing the normalization constant, the authors proposed to use only a small subset <math>v\prime</math> of the target vocabulary at each update<ref> ...
    14 KB (2,301 words) - 09:46, 30 August 2017
  • ...an adversarial attack where a model is deceived by an attacker by adding a small noise to an input image and as a result, the prediction of the model change ...nce(x,x')=\delta, f(x)\neq f(x')</math>, where <math>\delta</math> is some small number and <math>f(\cdot)</math> is the image label. If the classifier assi ...
    15 KB (2,325 words) - 06:58, 6 December 2020
  • ...first or last letter of the word. The important thing to note is that even small amounts of noise lead to substantial drops in performance. ...ttle machine learning systems being used so pervasively in the real world. Small changes to the input can lead to dramatic ...
    17 KB (2,634 words) - 00:15, 21 April 2018
  • ...gested as a candidate is formed by combining basic building blocks to form small modules, then the same basic structures introduced on the building blocks a ...ent_Architecture_Search#Primitive_operations section 2.3] are used to form small networks defined as ''motifs'' by the authors. To combine the outputs of mu ...
    30 KB (4,568 words) - 12:53, 11 December 2018
  • ...based on the query terms appearing in each document. Stage one produces a small subset of documents where the answer might appear (high recall), and then i ...ize can be billions of documents. In stage one, a retriever would select a small set of potentially relevant documents, which then would be fed to a neural ...
    17 KB (2,691 words) - 22:57, 7 December 2020
  • One major challenge in XTMC problems is that most data fall into a small group of labels. To tackle this challenge, the authors propose partitioning ...lexity can be reduced by configuring the model so that <math>p_i</math> is small, which corresponds to a low probability of a batch entering the tail cluste ...
    15 KB (2,456 words) - 22:04, 7 December 2020
  • The paper shows that the same phenomenon occurs even in small linear models. These observations are explained by the Bayesian evidence, w The authors propose that the noise introduced by small mini-batches drives the parameters towards minima whose evidence is large. ...
    34 KB (5,220 words) - 20:32, 10 December 2018
  • <ol><li>Modularity: increase the depth of a network by simply repeating a small module and aim to achieve higher accuracy</li> ...ent with respect to <math>w_1</math> can be small due to multiplication of small numbers (a.k.a. vanishing gradient). When <math>w_3</math> and <math>w_2</m ...
    19 KB (2,963 words) - 14:42, 22 November 2018
  • ...polygons because it is a special representation of the image which can use small number of vertices instead of various pixels and makes it easy to incorpora ...objects with a closed polygon. Polygons allow annotation of objects with a small number of clicks (30 - 40) compared to other methods. This approach works a ...
    21 KB (3,323 words) - 18:41, 16 December 2018
  • ...rming a convex relaxation of the problem that is a semidefinite program. For small problems, semidefinite programs can be solved via general purpose interior- ...le (meaning it is of multiplicity 1) and <math>\rho</math> is sufficiently small, from the first equation it follows that <math>\textbf{Rank}(X)=1</math>. I ...
    13 KB (2,202 words) - 09:45, 30 August 2017
  • ...for these calculations are biased towards certain distribution types (i.e. small number of modes). The attempt is to get around this. ...t-1})</math> or <math>P(x_t, h_t|x_{t-1}, h_{t-1})</math>, which contain a small number of important modes. This leads to a simple gradient of a partition f ...
    12 KB (1,906 words) - 09:46, 30 August 2017
  • ...n was answered. They showed that this procedure can be done by measuring a small number of random linear projection of the source signal. They also provided ...\le N</math> non-zero entries. To measure this source signal we measure a small number of linear combinations of its elements, <math>\ M</math>, as follows ...
    23 KB (3,784 words) - 09:45, 30 August 2017
  • ...(SSL) to improve the generalization of few-shot learned representations on small labeled datasets. Few-shot learning refers to training a classifier on small datasets with few examples per class, contrary to the normal practice of us ...
    17 KB (2,644 words) - 01:46, 13 December 2020
  • ...states will perform poorly in generation tasks. Movement generation using small HMM model is likely to compromise the fine details of the movements. Adding ...of each node in the tree. If the distance to a child node is sufficiently small, the new motion recurses to the most similar child node. Otherwise, the mot ...
    18 KB (2,835 words) - 09:46, 30 August 2017
  • ...ipping tokens'. Skim-RNN predicts each word as important or unimportant. A small RNN is used if the word is not important, and a large RNN is used if the wo ...inference on CPUs, which makes it very useful for large-scale products and small devices. ...
    27 KB (4,321 words) - 05:09, 16 December 2020