Search results


Page title matches

  • ...polymorphisms (SNPs), insertions, and deletions (indels). Calling SNPs and small indels is technically challenging since it requires a program to distingui This paper aims to solve the problem of calling SNPs and small indels using a convolutional neural net by casting the reads as images and ...
    18 KB (2,856 words) - 04:24, 16 December 2020

Page text matches

  • \small W(p_r, p_g) = \underset{\gamma\sim\Pi(p_r, p_g)} {\inf}\pmb{\mathbb{E}}_{(x ...)</math>, and corresponding densities with lower case letters, i.e. <math>\small p(x)</math>. ...
    21 KB (3,416 words) - 22:25, 25 April 2018
  • ...intrinsic dimension of the data. Since <math>\hat{n}</math> could be very small compared to the dimension <math>n</math> of the data, this algorithm is com Since <math> \beta </math> is very small, and we want to avoid large values of <math> z </math>, we could change vari ...
    7 KB (1,209 words) - 09:46, 30 August 2017
  • ...ction that is mainly focused on modeling large dissimilarities rather than small ones. As a result of that, they do not provide good visualizations of data ...l the entropy of <math> \mathbf{ P_i} </math> is within some predetermined small tolerance of <math> \mathbf{\log_2 M } </math>. ...
    15 KB (2,530 words) - 09:45, 30 August 2017
  • So, we can take some small number of samples <math>y</math>, compute the sparse representation <math>s ...ty is now clear: when a signal has a sparse expansion, one can discard the small coefficients without much perceptual loss. Formally, consider <math>f_{S}(t ...
    13 KB (2,258 words) - 09:45, 30 August 2017
  • In this study, NN LMs are trained only on a small part of the data (which are in-domain corpora) plus some randomly subsample performance for small values of M, and even with M = 2000, ...
    9 KB (1,542 words) - 09:46, 30 August 2017
  • |Week of Nov 25 || Yuliang Shi || || Small-gan: Speeding up gan training using core-sets || [http://proceedings.mlr.pr ...
    5 KB (642 words) - 23:29, 1 December 2021
  • ...adding more convolutional layers, which is feasible due to the use of very small (3 × 3) convolution filters in all layers. As a result, they come up with s ...d through a stack of convolutional (conv.) layers with filters with a very small receptive field: 3 × 3 with a convolutional stride of 1 pixel. Spatial poo ...
    11 KB (1,680 words) - 09:46, 30 August 2017
  • ...-art Gaussian Mixture Models-Hidden Markov Model (GMM-HMM) systems in both small and large speech recognition tasks ...lores using multiple convolutional layers, and the system is tested on one small dataset and two large datasets. The results show that CNNs outperform DNNs ...
    11 KB (1,587 words) - 09:46, 30 August 2017
  • ...e a small cost for using a large <math> \mathbf q_{j|i} </math> to model a small <math> \mathbf p_{j|i} </math>. Therefore, the SNE cost function focuses m ...too far away in the two-dimensional map. In SNE, this will result in a very small attractive force from datapoint <math> i </math> to these too-distant map p ...
    19 KB (3,223 words) - 09:45, 30 August 2017
  • ...within the sphere (i.e. the data points are approximately uniform in each small local region). ...>) is to let the sphere contain sufficiently many data points, and also be small enough to satisfy the assumption that <math>\,f</math> is approximately con ...
    15 KB (2,484 words) - 09:46, 30 August 2017
  • ...aordinary small (compared to usual font for math formulas). Sometimes this small font helps and sometimes it hurts! One solution to correct this is to simpl ...
    5 KB (769 words) - 22:53, 5 September 2021
  • SSR is small and hard to recognize but contains important info with 90% accuracy. SDR is ...mask M can generate adversarial artifacts. Adversarial artifacts are very small and imperceptible to people but can ruin the classifier. This phenomenon sh ...
    12 KB (1,840 words) - 14:09, 20 March 2018
  • ...refers to instances where the gradient used in backpropagation becomes too small to make discernible differences as the parameters in the model are tuned, a ...epth, it becomes more difficult to train them as gradients may become very small. The authors developed a method that trains a model to fit a residual mappin ...
    6 KB (1,020 words) - 12:01, 3 December 2021
  • ...m of the weights equal to N. Analogous to the <math>\beta</math> distribution, small <math>a</math> allows the model to up- or down-scale weights <math>\boldsym ...on at z affects the statistic T(F). Thus, this corrupted z value will have a small effect on the statistic T(F). ...
    9 KB (1,489 words) - 02:35, 19 November 2018
  • ...ng only the last n-1 words instead of the whole context. However, even for small n, certain sequences could still be missing. ...robability for even the rarest words, the neural network only calculates a small subset of the most common words. This way, the output vector can be signifi ...
    15 KB (2,517 words) - 09:46, 30 August 2017
  • ...ed for parameter and model selection. Second, regarding the selection of a small representative subgraph as training set, a method based on Expansion factor 6). Repeat the above procedure until the change in EF value is too small (compared to a threshold specified by the user) ...
    10 KB (1,675 words) - 09:46, 30 August 2017
  • ...It is difficult to test the true robustness of the model with a relatively small test set. If a larger data set can be found to help correctly identify othe ...PTB makes it difficult to determine the robustness of the model due to the small size of the test set. Given a larger dataset, the model could be tested to ...
    21 KB (3,373 words) - 07:19, 15 December 2020
  • ...descent with momentum and dropout, where mini-batches were constructed. A small L1 weight penalty was included in the cost function. The model’s weights we ...
    8 KB (1,353 words) - 09:46, 30 August 2017
  • unobserved ones. The small square nodes represent factors, and there is an edge between a variable ...th>x_i</math>. Moreover, Figure 1 shows the notation they use in graphs. The small squares denote potential functions, and, as usual, the shaded and unshaded ...
    17 KB (2,924 words) - 09:46, 30 August 2017
  • ...n and the original unknown matrix recovery are provably accurate even when a small amount of noise is present and corrupts the few observed entries. The error ...fty}} \leq \sqrt{\mu_B / n_2}</math>, where <math>\mu \geq 1</math> and is small. To see that this assumption guarantees dense vectors, consider the case wh ...
    14 KB (2,342 words) - 09:45, 30 August 2017