Search results

Page title matches

  • ...polymorphisms (SNPs), insertions, and deletions (indels). Calling SNPs and small indels is technically challenging since it requires a program to distingui This paper aims to solve the problem of calling SNPs and small indels using a convolutional neural net by casting the reads as images and ...
    18 KB (2,856 words) - 04:24, 16 December 2020

Page text matches

  • \small W(p_r, p_g) = \underset{\gamma\sim\Pi(p_r, p_g)} {\inf}\pmb{\mathbb{E}}_{(x ...)</math>, and corresponding densities with lower case letters, i.e. <math>\small p(x)</math>. ...
    21 KB (3,416 words) - 22:25, 25 April 2018
  • ...intrinsic dimension of the data. Since <math>\hat{n}</math> could be very small compared to the dimension <math>n</math> of the data, this algorithm is com Since <math> \beta </math> is very small, and we want to avoid large values of <math> z </math>, we could change vari ...
    7 KB (1,209 words) - 09:46, 30 August 2017
  • ...ction that is mainly focused on modeling large dissimilarities rather than small ones. As a result of that, they do not provide good visualizations of data ...l the entropy of <math> \mathbf{ P_i} </math> is within some predetermined small tolerance of <math> \mathbf{\log_2 M } </math>. ...
    15 KB (2,530 words) - 09:45, 30 August 2017
  • So, we can take some small number of samples <math>y</math>, compute the sparse representation <math>s ...ty is now clear: when a signal has a sparse expansion, one can discard the small coefficients without much perceptual loss. Formally, consider <math>f_{S}(t ...
    13 KB (2,258 words) - 09:45, 30 August 2017
  • In this study, NN LMs are trained only on a small part of the data (which are in-domain corpora) plus some randomly subsample performance for small values of M, and even with M = 2000, ...
    9 KB (1,542 words) - 09:46, 30 August 2017
  • |Week of Nov 25 || Yuliang Shi || || Small-gan: Speeding up gan training using core-sets || [http://proceedings.mlr.pr ...
    5 KB (642 words) - 23:29, 1 December 2021
  • ...adding more convolutional layers, which is feasible due to the use of very small (3 × 3) convolution filters in all layers. As a result, they come up with s ...d through a stack of convolutional (conv.) layers with filters with a very small receptive field: 3 × 3 with a convolutional stride of 1 pixel. Spatial poo ...
    11 KB (1,680 words) - 09:46, 30 August 2017
  • ...-art Gaussian Mixture Models-Hidden Markov Model (GMM-HMM) systems in both small and large speech recognition tasks ...lores using multiple convolutional layers, and the system is tested on one small dataset and two large datasets. The results show that CNNs outperform DNNs ...
    11 KB (1,587 words) - 09:46, 30 August 2017
  • ...e a small cost for using a large <math> \mathbf q_{j|i} </math> to model a small <math> \mathbf p_{j|i} </math>. Therefore, the SNE cost function focuses m ...too far away in the two-dimensional map. In SNE, this will result in a very small attractive force from datapoint <math> i </math> to these too-distant map p ...
    19 KB (3,223 words) - 09:45, 30 August 2017
  • ...within the sphere (i.e. the data points are approximately uniform in each small local region). ...>) is to let the sphere contain sufficiently many data points, and also be small enough to satisfy the assumption that <math>\,f</math> is approximately con ...
    15 KB (2,484 words) - 09:46, 30 August 2017
  • ...aordinary small (compared to the usual font for math formulas). Sometimes this small font helps and sometimes it hurts! One solution to correct this is to simpl ...
    5 KB (769 words) - 22:53, 5 September 2021
  • SSR is small and hard to recognize but contains important info with 90% accuracy. SDR is ...mask M can generate adversarial artifacts. Adversarial artifacts are very small and imperceptible to people but can ruin the classifier. This phenomenon sh ...
    12 KB (1,840 words) - 14:09, 20 March 2018
  • ...refers to instances where the gradient used in backpropagation becomes too small to make discernible differences as the parameters in the model are tuned, a ...epth, it becomes more difficult to train them as gradients may become very small. The authors developed a method that trains a model to fit a residual mappin ...
    6 KB (1,020 words) - 12:01, 3 December 2021
  • ...m of the weights equal to N. Analogous to the <math>\beta</math> distribution, a small <math>a</math> allows the model to up- or down-scale weights <math>\boldsym ...on at z affects the statistic T(F). Thus, this corrupted z value will have a small effect on the statistic T(F). ...
    9 KB (1,489 words) - 02:35, 19 November 2018
  • ...ng only the last n-1 words instead of the whole context. However, even for small n, certain sequences could still be missing. ...robability for even the rarest words, the neural network only calculates a small subset of the most common words. This way, the output vector can be signifi ...
    15 KB (2,517 words) - 09:46, 30 August 2017
  • ...ed for parameter and model selection. Second, regarding the selection of a small representative subgraph as the training set, a method based on Expansion factor 6). Repeat the above procedure until the change in EF value is too small (compared to a threshold specified by the user) ...
    10 KB (1,675 words) - 09:46, 30 August 2017
  • ...It is difficult to test the true robustness of the model with a relatively small test set. If a larger data set can be found to help correctly identify othe ...PTB makes it difficult to determine the robustness of the model due to the small size of the test set. Given a larger dataset, the model could be tested to ...
    21 KB (3,373 words) - 07:19, 15 December 2020
  • ...descent with momentum and dropout, where mini-batches were constructed. A small L1 weight penalty was included in the cost function. The model’s weights we ...
    8 KB (1,353 words) - 09:46, 30 August 2017
  • unobserved ones. The small square nodes represent factors, and there is an edge between a variable ...th>x_i</math>. Moreover, Figure 1 shows the notation they use in graphs. The small squares denote potential functions, and, as usual, the shaded and unshaded ...
    17 KB (2,924 words) - 09:46, 30 August 2017
  • ...n and the original unknown matrix recovery are provably accurate even when a small amount of noise is present and corrupts the few observed entries. The error ...fty}} \leq \sqrt{\mu_B / n_2}</math>, where <math>\mu \geq 1</math> and is small. To see that this assumption guarantees dense vectors, consider the case wh ...
    14 KB (2,342 words) - 09:45, 30 August 2017
  • ...nner product between the input feature map and a filter, shifted by <math>\small x</math>. ...nner product between the input feature map and a filter, rotated by <math>\small R</math>. ...
    23 KB (3,814 words) - 22:53, 20 April 2018
  • ...s and knowledge graphs. Another work trained a label cleaning network on a small set of clean labels and used it to reduce the noise in large-scale noisy la ...instances to the peer network. <math>R(T)</math> governs the percentage of small-loss instances to be used in updating the parameters of each network. ...
    15 KB (2,318 words) - 21:02, 11 December 2018
  • ...led data than labelled data. A common situation is to have a comparatively small quantity of labelled data paired with a larger amount of unlabelled data. T ...nd drastically better when the number of labelled data samples is very small (100 out of 50000). ...
    9 KB (1,554 words) - 09:46, 30 August 2017
  • ...with normal enumeration if we choose to have a dictionary of the words for small values of <math>\tau</math> ...
    4 KB (646 words) - 19:44, 26 October 2017
  • ...nected linear layer as the classifier. To make use of order information of small regions it uses hand-crafted n-grams as features in addition to single word ...h convolutional layers. The essence of CNN is to learn word embeddings for small size regions and each kernel of convolutional layer tries to capture a spec ...
    13 KB (2,188 words) - 12:42, 15 March 2018
  • ...sets if the data is correctly labeled. However, they can be trounced by a small number of incorrect labels, which can be quite challenging to fix. We try t ...alized by [LS], who construct a "hard" training data distribution, where a small percentage of labels is randomly flipped. This label noise then leads to a ...
    18 KB (2,846 words) - 00:18, 5 December 2020
  • ...of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In this work, we propose a meta-learning algori ...c algorithm for meta-learning that trains a model’s parameters such that a small number of gradient updates will lead to fast learning on a new task. The pa ...
    26 KB (4,205 words) - 10:18, 4 December 2017
  • ...as shown in Figure 1. After zooming into Figure 1, as shown in Figure 2, a small amount of perturbation led to the misclassification of a dog as a hummingbird. ...formation. Many of those transformations are very sensitive to a small change. Think of the DNN as a set of high-dimensional decision boundaries. Wh ...
    17 KB (2,650 words) - 23:54, 30 March 2018
  • ...stablished PDE models exist, but where our amount of available data is too small to guarantee the robustness of convergence in neural network training. In e ...pting to answer the first of the questions above. Specifically, if given a small number of noisy measurements of the solution of the PDE ...
    23 KB (3,762 words) - 15:51, 6 December 2020
  • ...tool in statistical learning, which tries to preserve the variability by a small number of principal components. In the classical method, the principal comp ...The diagnostic plot is shown as follows. Clearly, ROBPCA distinguishes a small group of bad leverage points which all three other PCA methods fail to rec ...
    15 KB (2,414 words) - 09:46, 30 August 2017
  • ...especially relevant to situations where the number of observations may be small. ...unctions <math>\,f_t</math> are related to each other, so they all share a small set of features. Formally, the hypothesis is that the functions <math>\,f_t ...
    17 KB (2,834 words) - 09:45, 30 August 2017
  • ...ons in large boxes should be of less significance than small deviations in small boxes. The author claims that predicting the square root of the bounding bo * The loss function treats errors in large bounding boxes the same as small bounding boxes to some extent, which is inconsistent with the relative cont ...
    19 KB (2,746 words) - 16:04, 20 November 2018
  • ...his similarity measure is large for the points within the same cluster and small for points in different clusters. <math>W</math> has non-negative elements ...lized cut takes a small value if the clusters <math>C_k</math> are not too small <ref> Ulrike von Luxburg, A Tutorial on Spectral Clustering, Technical Repo ...
    35 KB (5,767 words) - 09:45, 30 August 2017
  • ...ble if the above representation has just a few large coefficients and many small coefficients. We shall now briefly overview how the transform coding of sig ...<math>\,N</math> may be very large even if the desired <math>\ K</math> is small. ...
    18 KB (2,888 words) - 09:45, 30 August 2017
  • ...over set <math>\displaystyle A</math> but <math>\displaystyle g</math> is small, then <math>\displaystyle \frac{f}{g} </math> would be large and it would r ...
    6 KB (1,083 words) - 09:45, 30 August 2017
  • ...ffer from some technical problems. Most importantly, they are limited to a small vocabulary because of complexity and number of parameters that have to be t ...f computing the normalization constant, the authors proposed to use only a small subset <math>v\prime</math> of the target vocabulary at each update<ref> ...
    14 KB (2,301 words) - 09:46, 30 August 2017
  • ...an adversarial attack where a model is deceived by an attacker by adding a small noise to an input image and as a result, the prediction of the model change ...nce(x,x')=\delta, f(x)\neq f(x')</math>, where <math>\delta</math> is some small number and <math>f(\cdot)</math> is the image label. If the classifier assi ...
    15 KB (2,325 words) - 06:58, 6 December 2020
  • ...first or last letter of the word. The important thing to note is that even small amounts of noise lead to substantial drops in performance. ...ttle machine learning systems being used so pervasively in the real world. Small changes to the input can lead to dramatic ...
    17 KB (2,634 words) - 00:15, 21 April 2018
  • ...gested as a candidate is formed by combining basic building blocks to form small modules, then the same basic structures introduced on the building blocks a ...ent_Architecture_Search#Primitive_operations section 2.3] are used to form small networks defined as ''motifs'' by the authors. To combine the outputs of mu ...
    30 KB (4,568 words) - 12:53, 11 December 2018
  • ...based on the query terms appearing in each document. Stage one produces a small subset of documents where the answer might appear (high recall), and then i ...ize can be billions of documents. In stage one, a retriever would select a small set of potentially relevant documents, which then would be fed to a neural ...
    17 KB (2,691 words) - 22:57, 7 December 2020
  • One major challenge in XTMC problems is that most data fall into a small group of labels. To tackle this challenge, the authors propose partitioning ...lexity can be reduced by configuring the model so that <math>p_i</math> is small, which corresponds to a low probability of a batch entering the tail cluste ...
    15 KB (2,456 words) - 22:04, 7 December 2020
  • The paper shows that the same phenomenon occurs even in small linear models. These observations are explained by the Bayesian evidence, w The authors propose that the noise introduced by small mini-batches drives the parameters towards minima whose evidence is large. ...
    34 KB (5,220 words) - 20:32, 10 December 2018
  • <ol><li>Modularity: increase the depth of a network by simply repeating a small module and aim to achieve higher accuracy</li> ...ent with respect to <math>w_1</math> can be small due to multiplication of small numbers (a.k.a. vanishing gradient). When <math>w_3</math> and <math>w_2</m ...
    19 KB (2,963 words) - 14:42, 22 November 2018
  • ...polygons because it is a special representation of the image which can use a small number of vertices instead of many pixels and makes it easy to incorpora ...objects with a closed polygon. Polygons allow annotation of objects with a small number of clicks (30 - 40) compared to other methods. This approach works a ...
    21 KB (3,323 words) - 18:41, 16 December 2018
  • ...rming a convex relaxation of the problem that is a semidefinite program. For small problems, semidefinite programs can be solved via general purpose interior- ...le (meaning it is of multiplicity 1) and <math>\rho</math> is sufficiently small, from the first equation it follows that <math>\textbf{Rank}(X)=1</math>. I ...
    13 KB (2,202 words) - 09:45, 30 August 2017
  • ...for these calculations are biased towards certain distribution types (i.e. small number of modes). The attempt is to get around this. ...t-1})</math> or <math>P(x_t, h_t|x_{t-1}, h_{t-1})</math>, which contain a small number of important modes. This leads to a simple gradient of a partition f ...
    12 KB (1,906 words) - 09:46, 30 August 2017
  • ...n was answered. They showed that this procedure can be done by measuring a small number of random linear projections of the source signal. They also provided ...\le N</math> non-zero entries. To measure this source signal we measure a small number of linear combinations of its elements, <math>\ M</math>, as follows ...
    23 KB (3,784 words) - 09:45, 30 August 2017
  • ...(SSL) to improve the generalization of few-shot learned representations on small labeled datasets. Few-shot learning refers to training a classifier on small datasets with few examples per class, contrary to the normal practice of us ...
    17 KB (2,644 words) - 01:46, 13 December 2020
  • ...states will perform poorly in generation tasks. Movement generation using a small HMM model is likely to compromise the fine details of the movements. Adding ...of each node in the tree. If the distance to a child node is sufficiently small, the new motion recurses to the most similar child node. Otherwise, the mot ...
    18 KB (2,835 words) - 09:46, 30 August 2017
  • ...ipping tokens'. Skim-RNN predicts each word as important or unimportant. A small RNN is used if the word is not important, and a large RNN is used if the wo ...inference on CPUs, which makes it very useful for large-scale products and small devices. ...
    27 KB (4,321 words) - 05:09, 16 December 2020
  • The main idea of the paper is to find only a small but critical subset of the gradient information and in each learning step, ...ay using the sparsified gradient obtained from the top layer. Since only a small subset of the weight matrix is modified, we obtain a linear reduction in th ...
    20 KB (3,272 words) - 20:40, 28 November 2017
  • .... For instance, a sizable portion of scientific research is carried out by small or medium-sized groups of participants within a trial, leading to small datasets. Similar datasets from multiple sites can be pooled to potentially ...
    23 KB (3,530 words) - 20:45, 28 November 2017
  • ...low layers of the ResNet models are only able to access local information (small area of the image), and thus learn local representations. As the image is They found that with a small portion of the data, shallow layers of the ViT were able to learn represent ...
    13 KB (2,006 words) - 00:11, 17 November 2021
  • When there is a very large number of data <math>\,n</math>, and a very small portion of them totalling <math>\,k</math> is to be sampled as landmarks fo ...ults. In the case of using SRS, sometimes the sampled landmarks comprise a small cluster in the dataset that does not represent the entire dataset well. In ...
    17 KB (2,679 words) - 09:45, 30 August 2017
  • ...the speech recognition systems. DNNs are proven to outperform GMMs in both small and large vocabulary speech recognition tasks. ...decrease. The pretraining is essential when the amount of training data is small. Restricted Boltzmann Machines (RBMs) are used for pretraining except for t ...
    24 KB (3,699 words) - 09:46, 30 August 2017
  • ...nd findings by testing their model on the classification of glioma and non-small-cell lung carcinoma cases. ...ting their model. Those classification tasks are classifying glioma and Non-Small-Cell Lung Carcinoma (NSCLC) cases into glioma and NSCLC subtypes which the ...
    16 KB (2,470 words) - 14:07, 19 November 2021
  • </ref>, completed by removing non-informative small components (less than 100 pixels). Traditionally segmentation methods use a ...e work. Training with balanced frequencies allows better discrimination of small objects, and although it tends to have lower overall pixel-wise accuracy, i ...
    12 KB (1,895 words) - 09:46, 30 August 2017
  • ...ave been shown to be susceptible to adversarial attacks. In these attacks, small humanly-imperceptible changes are made to images (that are originally corre .../math> is the true class for input x. In words, the adversarial image is a small distance from the original image, but the classifier classifies it incorrec ...
    27 KB (3,974 words) - 17:54, 6 December 2018
  • # A small set of labeled training data is provided to the model. Each label is a bool the model are explicitly trained such that a small ...
    17 KB (2,846 words) - 00:12, 21 April 2018
  • ...e = "CAE"></ref>, encourages robustness of <math>h\left(x\right)</math> to small variations in <math>x</math> by penalizing the Frobenius norm of the encode ...>J\left(x + \varepsilon\right)</math> where <math>\,\varepsilon </math> is small, as this represents the rate of change of the Jacobian. This yields the "CA ...
    22 KB (3,505 words) - 09:46, 30 August 2017
  • ...systems to such examples. In the example below (Goodfellow et. al) [17], a small perturbation is applied to the original image of a panda, changing the pred ...(Xu et. al) [5]''' performs a simple type of quantization that can remove small (adversarial) variations in pixel values from an image. During the bit redu ...
    32 KB (4,769 words) - 18:45, 16 December 2018
  • ...ayers and a larger weight for the remaining layers. The reason we choose a small weight is that it can prevent deleting too many neurons in the first few la two different weights: a relatively small one for the first few layers, and a larger weight for the ...
    24 KB (3,886 words) - 01:20, 3 December 2017
  • ...g multi-label accuracy, using more labelers, and focusing on robustness to small distribution shifts. Although the researchers had some different findings, ...n that current performance benchmarks are not addressing the robustness to small and natural distribution shifts, which are easily handled by humans. ...
    29 KB (4,464 words) - 00:08, 15 December 2020
  • ...tes the higher difficulty of the detection dataset, which can contain many small objects while the classification and localization images typically contain ...ror. The fine stride technique illustrated in Figure 3 brings a relatively small improvement in the single-scale method, but is also of importance for the m ...
    19 KB (2,961 words) - 09:46, 30 August 2017
  • 2. The learned metric can be restricted to a small dimensional basis efficiently to enable scalability to large data sets with ...
    6 KB (1,007 words) - 09:46, 30 August 2017
  • ...e network are gradually reduced from 7x7 to 5x5 and then to 3x3 to capture small interesting features. Zero-padding is introduced either to adapt to the conf ...o the shallower layers during the backward pass, it often just becomes too small to have an effect on the weights. This forces standard RNN architectures to ...
    16 KB (2,430 words) - 18:30, 16 December 2018
  • ...polymorphisms (SNPs), insertions, and deletions (indels). Calling SNPs and small indels is technically challenging since it requires a program to distingui This paper aims to solve the problem of calling SNPs and small indels using a convolutional neural net by casting the reads as images and ...
    18 KB (2,856 words) - 04:24, 16 December 2020
  • ...on (e.g. Basque), which could lead to the problem of the dataset being too small (Koehn & Knowles, 2017). ...e recently tried to address this problem using semi-supervised approaches (small set of parallel corpora). Their approaches have included pivoting or triang ...
    28 KB (4,293 words) - 00:28, 17 December 2018
  • Finally, stochastic rounding is substituted for small or real-valued updates during gradient accumulation. ...twidth may have increased) stochastic rounding is used as a substitute for small gradient accumulation. ...
    20 KB (2,998 words) - 21:23, 20 April 2018
  • ...arge and cannot be solved by common techniques which are used for solving small [http://en.wikipedia.org/wiki/Convex_optimization convex optimization] prob ...and [http://www.math.cmu.edu/~reha/sdpt3.html SDPT3] can be used to solve small [http://en.wikipedia.org/wiki/Semidefinite_programming semidefinite program ...
    20 KB (3,146 words) - 09:45, 30 August 2017
  • ...face attribute (such as moustache and glasses) classification model on our small dataset. ...
    13 KB (2,036 words) - 12:50, 16 December 2021
  • ...typically implies a flat pdf which is rather constant near zero, and very small at the two ends (e.g. uniform distribution with finite support). ...CA algorithms were developed from the early 1990s, though ICA still remained a small and narrow research area until the mid-1990s. The breakthrough happened between ...
    15 KB (2,422 words) - 09:45, 30 August 2017
  • </ref>, <math>lSDE</math> is slower than SDE, because the dataset is too small and has a particular cyclic structure, so the incremental scheme for adding ...
    7 KB (1,093 words) - 09:45, 30 August 2017
  • ...nch in a pixel-by-pixel way. Mask R-CNN is simple to train and adds only a small overhead to Faster R-CNN, running at 5 fps. Moreover, Mask R-CNN is easy to RoIPool is standard for extracting a small feature map from each RoI. However, it performs quantization before subdivi ...
    20 KB (3,056 words) - 22:37, 7 December 2020
  • ...d within networks to ensure local invariance to prevent overfitting due to small transitional shifts within an image. Despite the effectiveness of tradition The proposed pooling method uses wavelets (i.e. small waves - generally used in signal processing) to reduce the dimensions of th ...
    26 KB (3,974 words) - 20:50, 11 December 2018
  • ...U, Y is largely determined, so the conditional covariance operator should be small. It can be proved that when <math>\sum{_{YY|U}}=\sum{_{YY|X}}</math>, X and ...
    6 KB (1,132 words) - 09:46, 30 August 2017
  • ...ieve this goal, Shental ''et al.'' introduced the idea of ''chunklets'' – "small sets of data points, in which the class label is constant, but unknown" [1] ...nt learning paradigm assumes that input data is naturally partitioned into small subsets, or ''chunklets'', which are in turn subsets of equivalence classes ...
    21 KB (3,516 words) - 09:45, 30 August 2017
  • ...sis, where if the model is in the boundary range at time <math>0</math>, a small change in <math>b</math> would result in a sudden large change in <math>x_{ ...
    17 KB (2,685 words) - 09:46, 30 August 2017
  • * The additional computation needed to implement spectral normalization is small ...for which the training with our spectral normalization prefers a relatively small feature space. The figure above shows the result of our experiments. As we pred ...
    16 KB (2,645 words) - 10:31, 18 April 2018
  • ...prevents the predictions of the previous task from changing too much. A <i>small</i> step orthogonal to the gradient of a task should result in little chang ...gorithms. One of the downsides is that the learning rate must be kept very small, in order to respect the assumption that orthogonal gradients do not affect ...
    15 KB (2,322 words) - 23:30, 7 December 2020
  • ...e observed errors. Bayesian networks are data-efficient and can learn with small datasets without overfitting (Jospin, Buntine, Boussaid, Laga, & Bennamoun, ...1</math>, which means making <math>\alpha_0^{\mathbf W}(\mathbf x)</math> small can be counterproductive. ...
    29 KB (4,651 words) - 10:57, 15 December 2020
  • ...Images are resized to <math>56 \times 56</math> pixels before going into a small, randomly initialized neural network with no pretraining. The network consi ...supervision on the test set, which can be attributed to overfitting on the small amount of training data available (as correlation on training data reached ...
    21 KB (3,358 words) - 00:04, 21 April 2018
  • ...help in developing equivariant representations of our image accounting for small distortions in images. The structure with these layers would either be that ...ication (through benchmarks like ImageNet) can be achieved with relatively small kernel sizes, but deep networks, along with a uniform structure, whereas pr ...
    32 KB (5,284 words) - 22:03, 19 March 2018
  • :: The name “LightRNN” is to illustrate the small model size and fast training speed. Because of these features of the new RN '''Advantage 1: small model size''' ...
    28 KB (4,651 words) - 20:18, 28 November 2017
  • \small{\textrm{hidden state of the phrase-RNN at time step t}} \leftarrow h_t &= f \small{\text{output of the label unit}} \leftarrow l_t &= softmax(f_{phrase-label} ...
    23 KB (3,760 words) - 10:33, 4 December 2017
  • ...rd to generalize to other forms of optical flow. Its main drawback was its small size of only 194 frame pairs, which proved to be insufficient for accuratel ...d and non-adaptive, their inclusion may only be helpful in the presence of small deformations; with large transformations, pooling may help provide little t ...
    16 KB (2,542 words) - 17:26, 26 November 2018
  • ...hanged. It is seen that when the number of hidden layers is two, having a small number of neurons in the layers degrades the predictive capability of DNNs. ...magnitude of the change in coefficient of determination relative to RF is small in some data sets, on average it is better than RF. The paper recommends a se ...
    17 KB (2,705 words) - 09:46, 30 August 2017
  • ...atures without the use of a GPU. In essence, the feature vector for LBPs is small, yet powerful enough that its accuracy is comparable to that of a trained CNN. ...the convergence of the learning process, so there should be a balance between small and large $\eta$. In the paper we set $\eta = \sigma / 2$. As to learning r ...
    21 KB (3,321 words) - 15:00, 4 December 2017
  • ...econd phase learns a function for a specific task but does so using only a small number of data points by exploiting the domain-wide statistics already lear digit even for a small number of context points. Crucially, ...
    32 KB (4,970 words) - 00:26, 17 December 2018
  • ...s also worth noting that it is much more efficient to train a model with a small character-level vocabulary than it is to train a model with a word-level vo ...because the probability that a long context occurs more than once is very small. ...
    18 KB (2,926 words) - 09:46, 30 August 2017
  • ...itive Gaussian noise increased. Baselines perform better when the noise is small. As the variance increases, AmbientGAN models present a much better ...n an AmbientGAN model when we have an unknown measurement model but also a small sample of unmeasured data, or at the very least to remove the differentiabi ...
    19 KB (2,916 words) - 22:25, 20 April 2018
  • ...fferences between different methods in classification error vary only by a small amount. ...n small and large datasets. Would the performance increase be negligible in datasets with few features? ...
    23 KB (3,748 words) - 03:46, 16 December 2020
  • ...m <math>\frac{1}{2}\lambda ||\omega||^2</math> encourages the tree to have small weights.<br> #:* Choosing a small block size results in inefficient parallelization ...
    21 KB (3,313 words) - 02:21, 5 December 2021
  • ...layer’s activation is flattened to form a vector which is then fed into a small number of fully-connected layers followed by the classification layer. ...ent layer can efficiently capture long-term dependencies, requiring only a small number of convolution layers. However, the recurrent layer is computational ...
    32 KB (5,160 words) - 22:32, 27 March 2018
  • An overly small block size results in a small workload and inefficient parallelization ...
    15 KB (2,406 words) - 18:07, 28 November 2018
  • ...l by 0.002 and 0.006 in terms of AUC. Note that the difference is relatively small in offline compared to online since the labels in offline data are fixed wh ...
    8 KB (1,119 words) - 04:28, 1 December 2021
  • ...ing sample is far from an anchor point, the corresponding weight should be small. ...
    9 KB (1,589 words) - 09:46, 30 August 2017
  • ...put and updated. If the prediction matches the mean well (i.e. the distance is small), more weight will be assigned to it. After getting the new weights, a wei ...
    8 KB (1,394 words) - 19:54, 20 March 2018
  • # Small updates to <math>Q\,</math> can significantly change the policy, and thus th ...f the global parameters was selected by performing an informal search on a small subset of the 49 games. The goal is to use minimal prior knowledge and perf ...
    25 KB (4,026 words) - 09:46, 30 August 2017
  • ...oblem is that traditional convolutional neural networks (CNNs) only take a small region around each pixel into account which is often not sufficient for lab ...te boundaries for large regions (sky, road, grass, etc), but fails to spot small objects. ...
    18 KB (2,935 words) - 09:46, 30 August 2017