Search results

  • ...the previous sections, alternately updating G and D requires significant attention. We modify the way we update the generator G to improve stability and gener ...
    15 KB (2,279 words) - 22:00, 14 March 2018
  • This idea is also successfully used in attention networks[13] for tasks such as image captioning and machine translation. In this pape ...o, K., Courville, A., and Bengio, Y. Describing multimedia content using attention-based Encoder–Decoder networks. IEEE Transactions on Multimedia, 17(11): 18 ...
    29 KB (4,577 words) - 10:13, 14 December 2018
  • ...oped in the literature. The latter sub-task has received relatively little attention and is typically borrowed without justification from the PCA context. In th ...
    20 KB (3,332 words) - 09:45, 30 August 2017
  • ...ls which can do few-shot estimations of data. This can be implemented with attention mechanisms (Reed et al., 2017) or additional memory units in a VAE model ( ...their case features of samples are compared with target features using an attention kernel. At a higher level one can interpret this model as a CNP where the a ...
    32 KB (4,970 words) - 00:26, 17 December 2018
  • ...ng generative adversarial networks[8], variational autoencoders (VAE)[17], attention models[18], have shown that a deep network can learn an image distribution ...
    32 KB (4,965 words) - 15:02, 4 December 2017
  • Attention-based models: #Bahdanau et al. (2014): These are a different class of models which use attention modules (different architectures) to help the neural network focus when deciding ...
    31 KB (5,069 words) - 18:21, 16 December 2018
  • ...red solution in image recognition and computer vision problems, increasing attention has been dedicated to evolving the network architecture to further improve ...
    16 KB (2,542 words) - 17:26, 26 November 2018
  • field has attracted the attention of a wide research community, which resulted in ...
    16 KB (2,430 words) - 00:54, 7 December 2020
  • Neural networks first caught people’s attention during the 2012 ImageNet contest. A solution using a neural network achieved 8 ...
    17 KB (2,650 words) - 23:54, 30 March 2018
  • Independent component analysis has been given more attention recently. It has become a popular method for estimating the independent feat ...
    17 KB (2,679 words) - 09:45, 30 August 2017
  • ...calculated using BERT, RoBERTa, XLNet, and XLM models, which utilize self-attention and nonlinear transformations. ...
    17 KB (2,510 words) - 01:32, 13 December 2020
  • ...ieval can potentially improve QA system performance, and has received more attention. ...
    17 KB (2,691 words) - 22:57, 7 December 2020
  • ...uivalently, to the description logic ALCQ, which has received considerable attention in the knowledge representation community (Baader et al., 2003; Baader & Lu ...
    17 KB (2,786 words) - 17:02, 6 December 2020
  • ...it, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, page ...
    16 KB (2,331 words) - 16:58, 6 December 2020
  • ...n approaches for solving machine learning problems have gained significant attention. In this paper, the non-convex boosting in classification using integer pro ...
    18 KB (2,846 words) - 00:18, 5 December 2020
  • ...ts are fixed to need only 1-bit precision, it is now possible to focus our attention on the features preceding it. ...szkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. 2017. ...
    34 KB (5,105 words) - 00:39, 17 December 2018
  • ...911 calls: victims trapped in a tall building who seek immediate medical attention, locating emergency personnel such as firefighters or paramedics, or a mino ...
    18 KB (2,896 words) - 18:43, 16 December 2018
  • ...ining parameters still needs to be clarified. In addition, the results of prediction accuracy are worthy of attention and further explanation, under wha ...
    18 KB (2,856 words) - 04:24, 16 December 2020
  • * Gregor et al. (2015) used a recurrent variational autoencoder with attention mechanisms for reading and writing different portions of the image canvas. ...
    18 KB (2,781 words) - 12:35, 4 December 2017
  • Now we turn our attention back to the problem of measuring independence between two (generally mu ...
    27 KB (4,561 words) - 09:45, 30 August 2017
  • ...a result, data-intensive promotional strategies are getting more and more attention nowadays from marketing teams to further improve company returns. ...
    20 KB (2,757 words) - 14:41, 13 December 2018
  • ...zkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. CoRR, abs/1706.03762, 2017. ...
    19 KB (2,731 words) - 21:29, 20 November 2021
  • ...use of algorithms for finding optimal DNN architectures has attracted the attention of researchers who have tackled the problem through four main groups of tec ...
    30 KB (4,568 words) - 12:53, 11 December 2018
  • * At the beginning, the study mentions that the pooling method does not receive the attention it should. In the end, results show that choosing the pooling method ...
    26 KB (3,974 words) - 20:50, 11 December 2018
  • ...to be an unaccounted-for component of tuning the model, but this receives scant attention in the current paper. Several numerical comparisons should be carried out t ...
    24 KB (3,886 words) - 01:20, 3 December 2017
  • ...erent image sets and backend cloud services. Also, by taking a look at the attention maps for some of the images, we will figure out why the agent has chosen th ...
    27 KB (4,274 words) - 00:07, 8 December 2020
  • ...ory prediction algorithms using other machine learning algorithms, such as attention-aware neural networks. ...
    29 KB (4,569 words) - 23:12, 14 December 2020
  • ...ization is a key issue that has recently attracted a significant amount of attention in a wide range of applications. Navigation, vehicle tracking, Emergency Ca ...
    28 KB (4,210 words) - 09:45, 30 August 2017
  • From the above code, we should pay attention to the following aspects when comparing with the SVD method: ...ki/Vapnik Vapnik], Chervonenkis et al.; however, the ideas did not gain any attention until strong results were shown in the early 1990s. ...
    263 KB (43,685 words) - 09:45, 30 August 2017
  • ...so when we need to determine <math>\!x_{t+1}</math>, we only need to pay attention to <math>\!x_{t}</math>. ...
    139 KB (23,688 words) - 09:45, 30 August 2017
  • ...e started in the late seventies (Vapnik, 1979), it has been receiving increasing attention from researchers recently. It is such a powerful method that in the few years ...ssified points are given higher weight to ensure the classifier "pays more attention" to them and fits them better in the next iteration. The idea behind boosting is ...
    314 KB (52,298 words) - 12:30, 18 November 2020
  • ...ferent function altogether, such as a <math>\,sin(x)</math> dimension. Pay attention: we don't do QDA with LDA. If we try QDA directly on this problem the resul ... SVM was introduced after neural networks and gathered attention by outperforming neural networks in many applications, e.g. bioinformatics, ...
    451 KB (73,277 words) - 09:45, 30 August 2017
  • Attention: There is a "dot" between sqrt(d) and "*". It is because d and tet are vecto ...
    370 KB (63,356 words) - 09:46, 30 August 2017