
126 Landmark Deep Neural Network Papers


If you are seriously committed to working in deep learning and have no intention of merely dabbling in the field, then studying the papers of the masters is an unavoidable step. As a beginner, your first question is probably: "With so many papers out there, which one should I read first?"

This article attempts to answer that question. Its original title was: "From getting started to getting lost: the never-ending deep learning papers."
Everyone, get your gear ready and prepare to hit the books with the legendary diligence of tying your hair to the beam and pricking your thigh with an awl.

Just kidding.

For developers without a formal academic background, though, reading papers can indeed be a painful experience. The good news: to keep beginners from getting lost, a GitHub user nicknamed songrotek has published a deep learning roadmap that organizes the DL papers a newcomer most needs to study, and rates each paper with stars according to its importance.

So far, this DL paper roadmap has earned nearly ten thousand stars on GitHub and is hugely popular. Leiphone (雷锋网) thinks it is well worth a detailed introduction.

Without further ado, the roadmap is organized around the following four principles:

From outline to details

From classic to state-of-the-art

From generic to specific domains

Focus on the latest research breakthroughs

Author's note: many of the papers are very new but well worth reading.

1 Deep Learning History and Basics

1.0 Books

█[0] Bengio, Yoshua, Ian J. Goodfellow, and Aaron Courville. Deep learning. An MIT Press book. (2015). [pdf] (A textbook by Ian Goodfellow and other masters; the bible of deep learning. You can study this book alongside the papers below.) ★★★★★

Link:

1.1 Survey

█[1] LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature 521.7553 (2015): 436-444. [pdf] (A survey by the three giants of deep learning) ★★★★★

Link: cs.toronto.edu/~hinton/absps/NatureDeepReview.pdf

1.2 Deep Belief Networks (DBN, the milestone on the eve of deep learning)

█[2] Hinton, Geoffrey E., Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep belief nets. Neural computation 18.7 (2006): 1527-1554. [pdf] (The eve of deep learning) ★★★

Link: cs.toronto.edu/~hinton/absps/ncfast.pdf

█[3] Hinton, Geoffrey E., and Ruslan R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science 313.5786 (2006): 504-507. [pdf] (A milestone that showed the promise of deep learning) ★★★

Link: cs.toronto.edu/~hinton/science.pdf

1.3 The Evolution of ImageNet (where deep learning began to sprout)

█[4] Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems. 2012. [pdf] (AlexNet, the deep learning breakthrough) ★★★★★

Link: papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf

█[5] Simonyan, Karen, and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). [pdf] (VGGNet, neural networks get very deep) ★★★

Link: arxiv.org/pdf/1409.1556.pdf

█[6] Szegedy, Christian, et al. Going deeper with convolutions. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015. [pdf] (GoogLeNet) ★★★

Link: cv-foundation.org/openaccess/content_cvpr_2015/papers/Szegedy_Going_Deeper_With_2015_CVPR_paper.pdf

█[7] He, Kaiming, et al. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385 (2015). [pdf] (ResNet, extremely deep networks, CVPR best paper) ★★★★★

Link: arxiv.org/pdf/1512.03385.pdf
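
The key idea in [7], put roughly: instead of learning a mapping H(x) directly, each block learns a residual F(x) and outputs x + F(x), so gradients can flow through the identity shortcut. The snippet below is a minimal sketch, not taken from the paper or the roadmap; it assumes only numpy, uses fully connected layers instead of convolutions, and omits batch normalization.

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    # Residual branch F(x): two small linear transforms with a ReLU in between.
    f = relu(x @ W1)
    f = f @ W2
    # Identity shortcut: add the input back, then apply the nonlinearity.
    return relu(x + f)

# Toy usage: a batch of 4 feature vectors of width 8 (shapes chosen for illustration).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
W1 = rng.normal(size=(8, 8)) * 0.1
W2 = rng.normal(size=(8, 8)) * 0.1
y = residual_block(x, W1, W2)   # same shape as x: (4, 8)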

1.4 The Evolution of Speech Recognition

█[8] Hinton, Geoffrey, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine 29.6 (2012): 82-97. [pdf] (A breakthrough in speech recognition) ★★★★

Link: cs224d.stanford.edu/papers/maas_paper.pdf

█[9] Graves, Alex, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. 2013 IEEE international conference on acoustics, speech and signal processing. IEEE, 2013. [pdf] (RNN) ★★★

Link: arxiv.org/pdf/1303.5778.pdf

█[10] Graves, Alex, and Navdeep Jaitly. Towards End-To-End Speech Recognition with Recurrent Neural Networks. ICML. Vol. 14. 2014. [pdf] ★★★

Link: jmlr.org/proceedings/papers/v32/graves14.pdf

█[11] Sak, Haşim, et al. Fast and accurate recurrent neural network acoustic models for speech recognition. arXiv preprint arXiv:1507.06947 (2015). [pdf] (Google's speech recognition system) ★★★

Link: arxiv.org/pdf/1507.06947

█[12] Amodei, Dario, et al. Deep speech 2: End-to-end speech recognition in english and mandarin. arXiv preprint arXiv:1512.02595 (2015). [pdf] (Baidu's speech recognition system) ★★★★

Link: arxiv.org/pdf/1512.02595.pdf

█[13] W. Xiong, J. Droppo, X. Huang, F. Seide, M. Seltzer, A. Stolcke, D. Yu, G. Zweig. Achieving Human Parity in Conversational Speech Recognition. arXiv preprint arXiv:1610.05256 (2016). [pdf] (State-of-the-art speech recognition, from Microsoft) ★★★★

Link: arxiv.org/pdf/1610.05256v1

After studying the papers above, you will have a basic grasp of the history of deep learning and the fundamental model architectures (including CNN, RNN, LSTM), and understand how deep learning is applied to image and speech recognition. The papers that follow take you deeper into deep learning methods, applications in different domains, and cutting-edge techniques. I suggest reading selectively according to your interests and your work or research direction.

2 Deep Learning Methods

2.1 Models

█[14] Hinton, Geoffrey E., et al. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580 (2012). [pdf] (Dropout) ★★★

Link: arxiv.org/pdf/1207.0580.pdf

█[15] Srivastava, Nitish, et al. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15.1 (2014): 1929-1958. [pdf] ★★★

Link: jmlr.org/papers/volume15/srivastava14a.old/source/srivastava14a.pdf
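
For intuition on what [14] and [15] propose: during training each unit is zeroed with probability p and the survivors are rescaled, so the network cannot rely on any single co-adapted feature. Below is a minimal "inverted dropout" sketch; it is not from the papers or the roadmap and assumes only numpy.

import numpy as np

def dropout(x, p_drop=0.5, train=True, rng=np.random.default_rng(0)):
    # Inverted dropout: zero units with probability p_drop during training and
    # rescale the survivors by 1/(1 - p_drop), so inference needs no change.
    if not train or p_drop == 0.0:
        return x
    mask = (rng.random(x.shape) >= p_drop).astype(x.dtype)
    return x * mask / (1.0 - p_drop)

h = np.ones((2, 4))
print(dropout(h, p_drop=0.5))   # roughly half the units zeroed, survivors scaled to 2.0
print(dropout(h, train=False))  # identity at inference time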

█[16] Ioffe, Sergey, and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167 (2015). [pdf] (An outstanding piece of work from 2015) ★★★★

Link: arxiv.org/pdf/1502.03167
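
The core operation of [16], roughly: normalize each feature by its mini-batch mean and variance, then apply a learned scale gamma and shift beta. The sketch below covers only the training-time forward pass (inference would use running statistics); it is illustrative, not from the paper, and assumes only numpy.

import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # x: (batch, features). Normalize each feature over the batch, then scale and shift.
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.random.default_rng(0).normal(loc=3.0, scale=2.0, size=(32, 4))
y = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
print(y.mean(axis=0).round(6), y.std(axis=0).round(3))  # ~0 mean, ~1 std per feature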

█[17] Ba, Jimmy Lei, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. arXiv preprint arXiv:1607.06450 (2016). [pdf] (An upgrade of Batch Normalization) ★★★★

Link: arxiv.org/pdf/1607.06450.pdf

█[18] Courbariaux, Matthieu, et al. Binarized Neural Networks: Training Neural Networks with Weights and Activations Constrained to +1 or −1. [pdf] (New model, fast) ★★★

Link: pdfs.semanticscholar.org/f832/b16cb367802609d91d400085eb87d630212a.pdf

█[19] Jaderberg, Max, et al. Decoupled neural interfaces using synthetic gradients. arXiv preprint arXiv:1608.05343 (2016). [pdf] (An innovation in training methods, excellent work) ★★★★★

Link: arxiv.org/pdf/1608.05343

█[20] Chen, Tianqi, Ian Goodfellow, and Jonathon Shlens. Net2net: Accelerating learning via knowledge transfer. arXiv preprint arXiv:1511.05641 (2015). [pdf] (Modifies a previously trained network to reduce training time) ★★★

Link: arxiv.org/abs/1511.05641

█[21] Wei, Tao, et al. Network Morphism. arXiv preprint arXiv:1603.01670 (2016). [pdf] (Modifies a previously trained network to reduce training time) ★★★

Link: arxiv.org/abs/1603.01670

2.2 Optimization

█[22] Sutskever, Ilya, et al. On the importance of initialization and momentum in deep learning. ICML (3) 28 (2013): 1139-1147. [pdf] (Momentum optimizer) ★★

Link: jmlr.org/proceedings/papers/v28/sutskever13.pdf

█[23] Kingma, Diederik, and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). [pdf] (Probably the most widely used optimizer at present) ★★★

Link: arxiv.org/pdf/1412.6980
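
As a rough illustration of the update rule in [23]: Adam keeps exponential moving averages of the gradient and its square, corrects their initialization bias, and scales each parameter's step accordingly. A minimal numpy sketch, not from the paper or the roadmap; the hyperparameter values are the commonly cited defaults.

import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # One Adam update. m and v are moving averages of the gradient and its square;
    # t is the step count starting at 1 (used for bias correction).
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage: minimize f(theta) = theta^2 starting from theta = 5.
theta, m, v = np.array(5.0), 0.0, 0.0
for t in range(1, 2001):
    grad = 2 * theta
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.05)
print(theta)   # close to 0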

█[24] Andrychowicz, Marcin, et al. Learning to learn by gradient descent by gradient descent. arXiv preprint arXiv:1606.04474 (2016). [pdf] (Neural optimizer, amazing work) ★★★★★

Link: arxiv.org/pdf/1606.04474

█[25] Han, Song, Huizi Mao, and William J. Dally. Deep compression: Compressing deep neural network with pruning, trained quantization and Huffman coding. CoRR, abs/1510.00149 2 (2015). [pdf] (ICLR best paper; a new direction for making neural networks run fast; DeePhi Tech startup) ★★★★★

Link: pdfs.semanticscholar.org/5b6c/9dda1d88095fa4aac1507348e498a1f2e863.pdf

█[26] Iandola, Forrest N., et al. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size. arXiv preprint arXiv:1602.07360 (2016). [pdf] (Also a new direction for optimizing neural networks; DeePhi Tech startup) ★★★★

Link: arxiv.org/pdf/1602.07360

2.3 Unsupervised Learning / Deep Generative Models

█[27] Le, Quoc V. Building high-level features using large scale unsupervised learning. 2013 IEEE international conference on acoustics, speech and signal processing. IEEE, 2013. [pdf] (Milestone; Andrew Ng, Google Brain, the "cat" work) ★★★★

Link: arxiv.org/pdf/1112.6209.pdf

█[28] Kingma, Diederik P., and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013). [pdf] (VAE) ★★★★

Link: arxiv.org/pdf/1312.6114

█[29] Goodfellow, Ian, et al. Generative adversarial nets. Advances in Neural Information Processing Systems. 2014. [pdf] (GAN, a very cool idea) ★★★★★

Link: papers.nips.cc/paper/5423-generative-adversarial-nets.pdf
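
The essence of [29], loosely stated: a discriminator D is trained to tell real samples from generated ones, while the generator G is trained to fool it. The sketch below only computes the two losses from given discriminator outputs; the networks, the noise sampling, and the alternating gradient updates are omitted. It is not from the paper or the roadmap; the generator loss shown is the non-saturating variant (maximizing log D(G(z))) discussed in the paper as a practical choice.

import numpy as np

def gan_losses(d_real, d_fake):
    # d_real: D(x) in (0,1) on a batch of real samples; d_fake: D(G(z)) on generated samples.
    eps = 1e-8
    d_loss = -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
    g_loss = -np.mean(np.log(d_fake + eps))   # generator wants D(G(z)) -> 1
    return d_loss, g_loss

# Toy numbers: a confident discriminator versus a struggling generator.
print(gan_losses(d_real=np.array([0.9, 0.8]), d_fake=np.array([0.1, 0.2])))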

█[30] Radford, Alec, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434 (2015). [pdf] (DCGAN) ★★★★

Link: arxiv.org/pdf/1511.06434

█[31] Gregor, Karol, et al. DRAW: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623 (2015). [pdf] (VAE with attention, outstanding work) ★★★★★

Link: jmlr.org/proceedings/papers/v37/gregor15.pdf

█[32] Oord, Aaron van den, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759 (2016). [pdf] (PixelRNN) ★★★★

Link: arxiv.org/pdf/1601.06759

█[33] Oord, Aaron van den, et al. Conditional image generation with PixelCNN decoders. arXiv preprint arXiv:1606.05328 (2016). [pdf] (PixelCNN) ★★★★

Link: arxiv.org/pdf/1606.05328

2.4 Recurrent Neural Networks (RNN) / Sequence-to-Sequence Models

█[34] Graves, Alex. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850 (2013). [pdf] (LSTM; works very well and shows off what RNNs can do) ★★★★

Link: arxiv.org/pdf/1308.0850

█[35] Cho, Kyunghyun, et al. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078 (2014). [pdf] (The first sequence-to-sequence paper) ★★★★

Link: arxiv.org/pdf/1406.1078

█[36] Sutskever, Ilya, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. Advances in neural information processing systems. 2014. [pdf] (Outstanding work) ★★★★★

Link: papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf

█[37] Bahdanau, Dzmitry, KyungHyun Cho, and Yoshua Bengio. Neural Machine Translation by Jointly Learning to Align and Translate. arXiv preprint arXiv:1409.0473 (2014). [pdf] ★★★★

Link: arxiv.org/pdf/1409.0473v7.pdf

█[38] Vinyals, Oriol, and Quoc Le. A neural conversational model. arXiv preprint arXiv:1506.05869 (2015). [pdf] (Seq-to-seq chatbot) ★★★

Link: arxiv.org/pdf/1506.05869.pdf

2.5 Neural Turing Machines

█[39] Graves, Alex, Greg Wayne, and Ivo Danihelka. Neural turing machines. arXiv preprint arXiv:1410.5401 (2014). [pdf] (A basic prototype of future computers) ★★★★★

Link: arxiv.org/pdf/1410.5401.pdf

█[40] Zaremba, Wojciech, and Ilya Sutskever. Reinforcement learning neural Turing machines. arXiv preprint arXiv:1505.00521 (2015). [pdf] ★★★

Link: pdfs.semanticscholar.org/f10e/071292d593fef939e6ef4a59baf0bb3a6c2b.pdf

█[41] Weston, Jason, Sumit Chopra, and Antoine Bordes. Memory networks. arXiv preprint arXiv:1410.3916 (2014). [pdf] ★★★

Link: arxiv.org/pdf/1410.3916

█[42] Sukhbaatar, Sainbayar, Jason Weston, and Rob Fergus. End-to-end memory networks. Advances in neural information processing systems. 2015. [pdf] ★★★★

Link: papers.nips.cc/paper/5846-end-to-end-memory-networks.pdf

█[43] Vinyals, Oriol, Meire Fortunato, and Navdeep Jaitly. Pointer networks. Advances in Neural Information Processing Systems. 2015. [pdf] ★★★★

Link: papers.nips.cc/paper/5866-pointer-networks.pdf

█[44] Graves, Alex, et al. Hybrid computing using a neural network with dynamic external memory. Nature (2016). [pdf] (Milestone; combines the ideas of the papers above) ★★★★★

Link: dropbox.com/s/0a40xi702grx3dq/2016-graves.pdf

2.6 Deep Reinforcement Learning

█[45] Mnih, Volodymyr, et al. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602 (2013). [pdf] (The first paper to be named deep reinforcement learning) ★★★★

Link: arxiv.org/pdf/1312.5602.pdf

█[46] Mnih, Volodymyr, et al. Human-level control through deep reinforcement learning. Nature 518.7540 (2015): 529-533. [pdf] (Milestone) ★★★★★

Link: storage.googleapis.com/deepmind-data/assets/papers/DeepMindNature14236Paper.pdf
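
For orientation on [45] and [46]: DQN regresses Q(s, a) towards a bootstrapped Bellman target computed with a separate target network, y = r + gamma * max over a' of Q_target(s', a'), with no bootstrapping on terminal transitions. The snippet below is a minimal sketch of just this target computation, not from the papers or the roadmap; the replay memory, the networks, and the gradient step are all omitted, and it assumes only numpy.

import numpy as np

def dqn_targets(rewards, q_next, dones, gamma=0.99):
    # q_next: Q_target(s', a') for every action, shape (batch, actions).
    # dones flags terminal transitions, where no bootstrap term is added.
    return rewards + gamma * (1.0 - dones) * q_next.max(axis=1)

# Toy batch of 3 transitions with 2 actions each; the last one is terminal.
rewards = np.array([1.0, 0.0, -1.0])
q_next = np.array([[0.5, 1.5], [2.0, 0.1], [0.0, 0.0]])
dones = np.array([0.0, 0.0, 1.0])
print(dqn_targets(rewards, q_next, dones))   # [1 + 0.99*1.5, 0.99*2.0, -1.0]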

█[47] Wang, Ziyu, Nando de Freitas, and Marc Lanctot. Dueling network architectures for deep reinforcement learning. arXiv preprint arXiv:1511.06581 (2015). [pdf] (ICLR best paper, a very good idea) ★★★★

Link: arxiv.org/pdf/1511.06581

█[48] Mnih, Volodymyr, et al. Asynchronous methods for deep reinforcement learning. arXiv preprint arXiv:1602.01783 (2016). [pdf] (State-of-the-art method) ★★★★★

Link: arxiv.org/pdf/1602.01783

█[49] Lillicrap, Timothy P., et al. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971 (2015). [pdf] (DDPG) ★★★★

Link: arxiv.org/pdf/1509.02971

█[50] Gu, Shixiang, et al. Continuous Deep Q-Learning with Model-based Acceleration. arXiv preprint arXiv:1603.00748 (2016). [pdf] (NAF) ★★★★

Link: arxiv.org/pdf/1603.00748

█[51] Schulman, John, et al. Trust region policy optimization. CoRR, abs/1502.05477 (2015). [pdf] (TRPO) ★★★★

Link: jmlr.org/proceedings/papers/v37/schulman15.pdf

█[52] Silver, David, et al. Mastering the game of Go with deep neural networks and tree search. Nature 529.7587 (2016): 484-489. [pdf] (AlphaGo) ★★★★★

Link: willamette.edu/~levenick/cs448/goNature.pdf

2.7 Deep Transfer Learning / Lifelong Learning / Reinforcement Learning

█[53] Bengio, Yoshua. Deep Learning of Representations for Unsupervised and Transfer Learning. ICML Unsupervised and Transfer Learning 27 (2012): 17-36. [pdf] (This is a tutorial) ★★★

Link: jmlr.org/proceedings/papers/v27/bengio12a/bengio12a.pdf

█[54] Silver, Daniel L., Qiang Yang, and Lianghao Li. Lifelong Machine Learning Systems: Beyond Learning Algorithms. AAAI Spring Symposium: Lifelong Machine Learning. 2013. [pdf] (A brief discussion of lifelong learning) ★★★

Link: citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.696.7800&rep=rep1&type=pdf

█[55] Hinton, Geoffrey, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531 (2015). [pdf] (Work by the masters) ★★★★

Link: arxiv.org/pdf/1503.02531

█[56] Rusu, Andrei A., et al. Policy distillation. arXiv preprint arXiv:1511.06295 (2015). [pdf] (In the RL domain) ★★★

Link: arxiv.org/pdf/1511.06295

█[57] Parisotto, Emilio, Jimmy Lei Ba, and Ruslan Salakhutdinov. Actor-mimic: Deep multitask and transfer reinforcement learning. arXiv preprint arXiv:1511.06342 (2015). [pdf] (In the RL domain) ★★★

Link: arxiv.org/pdf/1511.06342

█[58] Rusu, Andrei A., et al. Progressive neural networks. arXiv preprint arXiv:1606.04671 (2016). [pdf] (Outstanding work, a novel idea) ★★★★★

Link: arxiv.org/pdf/1606.04671

2.8 One-Shot Deep Learning

█[59] Lake, Brenden M., Ruslan Salakhutdinov, and Joshua B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science 350.6266 (2015): 1332-1338. [pdf] (No deep learning, but well worth reading) ★★★★★

Link: clm.utexas.edu/compjclub/wp-content/uploads/2016/02/lake2015.pdf

█[60] Koch, Gregory, Richard Zemel, and Ruslan Salakhutdinov. Siamese Neural Networks for One-shot Image Recognition. (2015) [pdf] ★★★

Link: cs.utoronto.ca/~gkoch/files/msc-thesis.pdf

█[61] Santoro, Adam, et al. One-shot Learning with Memory-Augmented Neural Networks. arXiv preprint arXiv:1605.06065 (2016). [pdf] (A foundational step towards one-shot learning) ★★★★

Link: arxiv.org/pdf/1605.06065

█[62] Vinyals, Oriol, et al. Matching Networks for One Shot Learning. arXiv preprint arXiv:1606.04080 (2016). [pdf] ★★★

Link: arxiv.org/pdf/1606.04080

█[63] Hariharan, Bharath, and Ross Girshick. Low-shot visual object recognition. arXiv preprint arXiv:1606.02819 (2016). [pdf] (A step towards larger-scale data) ★★★★

Link: arxiv.org/pdf/1606.02819

3 Applications

3.1 Natural Language Processing (NLP)

█[1] Antoine Bordes, et al. Joint Learning of Words and Meaning Representations for Open-Text Semantic Parsing. AISTATS (2012) [pdf] ★★★★

Link: hds.utc.fr/~bordesan/dokuwiki/lib/exe/fetch.php?id=en%3Apubli&cache=cache&media=en:bordes12aistats.pdf

█[2] Mikolov, et al. Distributed representations of words and phrases and their compositionality. ANIPS (2013): 3111-3119 [pdf] (word2vec) ★★★

Link: papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf
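
A rough sketch of the word2vec objective in [2]: skip-gram with negative sampling increases the score of an observed (center, context) pair while decreasing it for k sampled "noise" words. The function below computes that loss for given embedding vectors; it is illustrative only, not from the paper or the roadmap, assumes numpy, and leaves out the noise sampling and the gradient updates.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_loss(v_center, v_context, v_negatives):
    # Positive term: push the center and observed context embeddings together.
    pos = np.log(sigmoid(v_context @ v_center))
    # Negative terms: push the center away from k sampled noise-word embeddings.
    neg = np.sum(np.log(sigmoid(-(v_negatives @ v_center))))
    return -(pos + neg)   # negative log-likelihood to minimize

rng = np.random.default_rng(0)
d, k = 50, 5
print(sgns_loss(rng.normal(size=d), rng.normal(size=d), rng.normal(size=(k, d))))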

█[3] Sutskever, et al. Sequence to sequence learning with neural networks. ANIPS (2014) [pdf] ★★★

Link: papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf

█[4] Ankit Kumar, et al. Ask Me Anything: Dynamic Memory Networks for Natural Language Processing. arXiv preprint arXiv:1506.07285 (2015) [pdf] ★★★★

Link: arxiv.org/abs/1506.07285

█[5] Yoon Kim, et al. Character-Aware Neural Language Models. NIPS (2015); arXiv preprint arXiv:1508.06615 (2015) [pdf] ★★★

Link: arxiv.org/abs/1508.06615

█[6] Jason Weston, et al. Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks. arXiv preprint arXiv:1502.05698 (2015) [pdf] (bAbI tasks) ★★★

Link: arxiv.org/abs/1502.05698

█[7] Karl Moritz Hermann, et al. Teaching Machines to Read and Comprehend. arXiv preprint arXiv:1506.03340 (2015) [pdf] (CNN/Daily Mail cloze-style questions) ★★

Link: arxiv.org/abs/1506.03340

█[8] Alexis Conneau, et al. Very Deep Convolutional Networks for Natural Language Processing. arXiv preprint arXiv:1606.01781 (2016) [pdf] (State-of-the-art in text classification) ★★★

Link: arxiv.org/abs/1606.01781

█[9] Armand Joulin, et al. Bag of Tricks for Efficient Text Classification. arXiv preprint arXiv:1607.01759 (2016) [pdf] (Slightly behind the state of the art, but much faster) ★★★

Link: arxiv.org/abs/1607.01759

3.2 Object Detection

█[1] Szegedy, Christian, Alexander Toshev, and Dumitru Erhan. Deep neural networks for object detection. Advances in Neural Information Processing Systems. 2013. [pdf] ★★★

Link: papers.nips.cc/paper/5207-deep-neural-networks-for-object-detection.pdf

█[2] Girshick, Ross, et al. Rich feature hierarchies for accurate object detection and semantic segmentation. Proceedings of the IEEE conference on computer vision and pattern recognition. 2014. [pdf] (RCNN) ★★★★★

Link: cv-foundation.org/openaccess/content_cvpr_2014/papers/Girshick_Rich_Feature_Hierarchies_2014_CVPR_paper.pdf

█[3] He, Kaiming, et al. Spatial pyramid pooling in deep convolutional networks for visual recognition. European Conference on Computer Vision. Springer International Publishing, 2014. [pdf] (SPPNet) ★★★★

Link: arxiv.org/pdf/1406.4729

█[4] Girshick, Ross. Fast r-cnn. Proceedings of the IEEE International Conference on Computer Vision. 2015. [pdf] ★★★★

Link: pdfs.semanticscholar.org/8f67/64a59f0d17081f2a2a9d06f4ed1cdea1a0ad.pdf

█[5] Ren, Shaoqing, et al. Faster R-CNN: Towards real-time object detection with region proposal networks. Advances in neural information processing systems. 2015. [pdf] ★★★★

Link: papers.nips.cc/paper/5638-faster-r-cnn-towards-real-time-object-detection-with-region-proposal-networks.pdf

█[6] Redmon, Joseph, et al. You only look once: Unified, real-time object detection. arXiv preprint arXiv:1506.02640 (2015). [pdf] (YOLO, outstanding work with high practical value) ★★★★★

Link: homes.cs.washington.edu/~ali/papers/YOLO.pdf

█[7] Liu, Wei, et al. SSD: Single Shot MultiBox Detector. arXiv preprint arXiv:1512.02325 (2015). [pdf] ★★★

Link: arxiv.org/pdf/1512.02325

█[8] Dai, Jifeng, et al. R-FCN: Object Detection via Region-based Fully Convolutional Networks. arXiv preprint arXiv:1605.06409 (2016). [pdf] ★★★★

Link: arxiv.org/abs/1605.06409

3.3 Visual Tracking

█[1] Wang, Naiyan, and Dit-Yan Yeung. Learning a deep compact image representation for visual tracking. Advances in neural information processing systems. 2013. [pdf] (The first paper to apply deep learning to visual tracking; DLT tracker) ★★★

Link: papers.nips.cc/paper/5192-learning-a-deep-compact-image-representation-for-visual-tracking.pdf

█[2] Wang, Naiyan, et al. Transferring rich feature hierarchies for robust visual tracking. arXiv preprint arXiv:1501.04587 (2015). [pdf] (SO-DLT) ★★★★

Link: arxiv.org/pdf/1501.04587

█[3] Wang, Lijun, et al. Visual tracking with fully convolutional networks. Proceedings of the IEEE International Conference on Computer Vision. 2015. [pdf] (FCNT) ★★★★

Link: cv-foundation.org/openaccess/content_iccv_2015/papers/Wang_Visual_Tracking_With_ICCV_2015_paper.pdf

█[4] Held, David, Sebastian Thrun, and Silvio Savarese. Learning to Track at 100 FPS with Deep Regression Networks. arXiv preprint arXiv:1604.01802 (2016). [pdf] (GOTURN; very fast among deep learning trackers, but still much slower than non-deep-learning methods) ★★★★

Link: arxiv.org/pdf/1604.01802

█[5] Bertinetto, Luca, et al. Fully-Convolutional Siamese Networks for Object Tracking. arXiv preprint arXiv:1606.09549 (2016). [pdf] (SiameseFC, the new state of the art in real-time object tracking) ★★★★

Link: arxiv.org/pdf/1606.09549

█[6] Martin Danelljan, Andreas Robinson, Fahad Khan, Michael Felsberg. Beyond Correlation Filters: Learning Continuous Convolution Operators for Visual Tracking. ECCV (2016) [pdf] (C-COT) ★★★★

Link: cvl.isy.liu.se/research/objrec/visualtracking/conttrack/C-COT_ECCV16.pdf

█[7] Nam, Hyeonseob, Mooyeol Baek, and Bohyung Han. Modeling and Propagating CNNs in a Tree Structure for Visual Tracking. arXiv preprint arXiv:1608.07242 (2016). [pdf] (TCNN, winner of VOT2016) ★★★★

Link: arxiv.org/pdf/1608.07242

3.4 Image Captioning

█[1] Farhadi, Ali, et al. Every picture tells a story: Generating sentences from images. In Computer Vision - ECCV 2010. Springer Berlin Heidelberg: 15-29, 2010. [pdf] ★★★

Link: cs.cmu.edu/~afarhadi/papers/sentence.pdf

█[2] Kulkarni, Girish, et al. Baby talk: Understanding and generating image descriptions. In Proceedings of the 24th CVPR, 2011. [pdf] ★★★★

Link: tamaraberg.com/papers/generation_cvpr11.pdf

█[3] Vinyals, Oriol, et al. Show and tell: A neural image caption generator. In arXiv preprint arXiv:1411.4555, 2014. [pdf] ★★★

Link: arxiv.org/pdf/1411.4555.pdf

█[4] Donahue, Jeff, et al. Long-term recurrent convolutional networks for visual recognition and description. In arXiv preprint arXiv:1411.4389, 2014. [pdf]

Link: arxiv.org/pdf/1411.4389.pdf

█[5] Karpathy, Andrej, and Li Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In arXiv preprint arXiv:1412.2306, 2014. [pdf] ★★★★★

Link: cs.stanford.edu/people/karpathy/cvpr2015.pdf

█[6] Karpathy, Andrej, Armand Joulin, and Fei Fei F. Li. Deep fragment embeddings for bidirectional image sentence mapping. In Advances in neural information processing systems, 2014. [pdf] ★★★★

Link: arxiv.org/pdf/1406.5679v1.pdf

█[7] Fang, Hao, et al. From captions to visual concepts and back. In arXiv preprint arXiv:1411.4952, 2014. [pdf] ★★★★★

Link: arxiv.org/pdf/1411.4952v3.pdf

█[8] Chen, Xinlei, and C. Lawrence Zitnick. Learning a recurrent visual representation for image caption generation. In arXiv preprint arXiv:1411.5654, 2014. [pdf] ★★★★

Link: arxiv.org/pdf/1411.5654v1.pdf

█[9] Mao, Junhua, et al. Deep captioning with multimodal recurrent neural networks (m-rnn). In arXiv preprint arXiv:1412.6632, 2014. [pdf] ★★★

Link: arxiv.org/pdf/1412.6632v5.pdf

█[10] Xu, Kelvin, et al. Show, attend and tell: Neural image caption generation with visual attention. In arXiv preprint arXiv:1502.03044, 2015. [pdf] ★★★★★

Link: arxiv.org/pdf/1502.03044v3.pdf

3.5 Machine Translation

Some of the milestone works are covered in the RNN / Seq-to-Seq section above.

█[1] Luong, Minh-Thang, et al. Addressing the rare word problem in neural machine translation. arXiv preprint arXiv:1410.8206 (2014). [pdf] ★★★★

Link: arxiv.org/pdf/1410.8206

█[2] Sennrich, et al. Neural Machine Translation of Rare Words with Subword Units. In arXiv preprint arXiv:1508.07909, 2015. [pdf] ★★★

Link: arxiv.org/pdf/1508.07909.pdf

█[3] Luong, Minh-Thang, Hieu Pham, and Christopher D. Manning. Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025 (2015). [pdf] ★★★★

Link: arxiv.org/pdf/1508.04025

█[4] Chung, et al. A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation. In arXiv preprint arXiv:1603.06147, 2016. [pdf] ★★

Link: arxiv.org/pdf/1603.06147.pdf

█[5] Lee, et al. Fully Character-Level Neural Machine Translation without Explicit Segmentation. In arXiv preprint arXiv:1610.03017, 2016. [pdf] ★★★★★

Link: arxiv.org/pdf/1610.03017.pdf

█[6] Wu, Schuster, Chen, Le, et al. Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation. In arXiv preprint arXiv:1609.08144v2, 2016. [pdf] (Milestone) ★★★★

Link: arxiv.org/pdf/1609.08144v2.pdf

3.6 Robotics

█[1] Koutník, Jan, et al. Evolving large-scale neural networks for vision-based reinforcement learning. Proceedings of the 15th annual conference on Genetic and evolutionary computation. ACM, 2013. [pdf] ★★★

Link: repository.supsi.ch/4550/1/koutnik2013gecco.pdf

█[2] Levine, Sergey, et al. End-to-end training of deep visuomotor policies. Journal of Machine Learning Research 17.39 (2016): 1-40. [pdf] ★★★★★

Link: jmlr.org/papers/volume17/15-522/15-522.pdf

█[3] Pinto, Lerrel, and Abhinav Gupta. Supersizing self-supervision: Learning to grasp from 55k tries and 700 robot hours. arXiv preprint arXiv:1509.06825 (2015). [pdf] ★★★

Link: arxiv.org/pdf/1509.06825

█[4] Levine, Sergey, et al. Learning Hand-Eye Coordination for Robotic Grasping with Deep Learning and Large-Scale Data Collection. arXiv preprint arXiv:1603.02199 (2016). [pdf] ★★★★

Link: arxiv.org/pdf/1603.02199

█[5] Zhu, Yuke, et al. Target-driven Visual Navigation in Indoor Scenes using Deep Reinforcement Learning. arXiv preprint arXiv:1609.05143 (2016). [pdf] ★★★★

Link: arxiv.org/pdf/1609.05143

█[6] Yahya, Ali, et al. Collective Robot Reinforcement Learning with Distributed Asynchronous Guided Policy Search. arXiv preprint arXiv:1610.00673 (2016). [pdf] ★★★★

Link: arxiv.org/pdf/1610.00673

█[7] Gu, Shixiang, et al. Deep Reinforcement Learning for Robotic Manipulation. arXiv preprint arXiv:1610.00633 (2016). [pdf] ★★★★

Link: arxiv.org/pdf/1610.00633

█[8] A. Rusu, M. Vecerik, Thomas Rothörl, N. Heess, R. Pascanu, R. Hadsell. Sim-to-Real Robot Learning from Pixels with Progressive Nets. arXiv preprint arXiv:1610.04286 (2016). [pdf] ★★★★

Link: arxiv.org/pdf/1610.04286.pdf

█[9] Mirowski, Piotr, et al. Learning to navigate in complex environments. arXiv preprint arXiv:1611.03673 (2016). [pdf] ★★★★

Link: arxiv.org/pdf/1611.03673

3.7 Art

█[1] Mordvintsev, Alexander; Olah, Christopher; Tyka, Mike (2015). Inceptionism: Going Deeper into Neural Networks. Google Research. [html] (Deep Dream) ★★★★

Link: research.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html

█[2] Gatys, Leon A., Alexander S. Ecker, and Matthias Bethge. A neural algorithm of artistic style. arXiv preprint arXiv:1508.06576 (2015). [pdf] (Outstanding work, the most successful method to date) ★★★★★

Link: arxiv.org/pdf/1508.06576
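
For intuition on [2] (Gatys et al.): style is represented by the Gram matrices of a layer's feature maps, and the style loss penalizes the squared difference between the Gram matrices of the generated image and the style image. The snippet below is a minimal sketch of one layer's style term only; it is not from the paper or the roadmap, assumes numpy, and omits the CNN features, the content loss, and the optimization over pixels.

import numpy as np

def gram_matrix(features):
    # features: (channels, height*width) activations of one layer.
    # The matrix of channel correlations is the style representation.
    return features @ features.T

def style_loss(feat_generated, feat_style):
    c, n = feat_generated.shape
    g_gen, g_style = gram_matrix(feat_generated), gram_matrix(feat_style)
    # Per-layer style loss with the 1/(4 c^2 n^2) normalization used by Gatys et al.
    return np.sum((g_gen - g_style) ** 2) / (4.0 * c ** 2 * n ** 2)

rng = np.random.default_rng(0)
f_gen, f_style = rng.normal(size=(16, 64)), rng.normal(size=(16, 64))
print(style_loss(f_gen, f_style))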

█[3] Zhu, Jun-Yan, et al. Generative Visual Manipulation on the Natural Image Manifold. European Conference on Computer Vision. Springer International Publishing, 2016. [pdf] (iGAN) ★★★★

Link: arxiv.org/pdf/1609.03552

█[4] Champandard, Alex J. Semantic Style Transfer and Turning Two-Bit Doodles into Fine Artworks. arXiv preprint arXiv:1603.01768 (2016). [pdf] (Neural Doodle) ★★★★

Link: arxiv.org/pdf/1603.01768

█[5] Zhang, Richard, Phillip Isola, and Alexei A. Efros. Colorful Image Colorization. arXiv preprint arXiv:1603.08511 (2016). [pdf] ★★★★

Link: arxiv.org/pdf/1603.08511

█[6] Johnson, Justin, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. arXiv preprint arXiv:1603.08155 (2016). [pdf] ★★★★

Link: arxiv.org/pdf/1603.08155.pdf

█[7] Vincent Dumoulin, Jonathon Shlens and Manjunath Kudlur. A learned representation for artistic style. arXiv preprint arXiv:1610.07629 (2016). [pdf] ★★★★

Link: arxiv.org/pdf/1610.07629

█[8] Gatys, Leon and Ecker, et al. Controlling Perceptual Factors in Neural Style Transfer. arXiv preprint arXiv:1611.07865 (2016). [pdf] (Controls style transfer over spatial location, colour information, and spatial scale) ★★★★

Link: arxiv.org/pdf/1611.07865

█[9] Ulyanov, Dmitry and Lebedev, Vadim, et al. Texture Networks: Feed-forward Synthesis of Textures and Stylized Images. arXiv preprint arXiv:1603.03417 (2016). [pdf] (Texture synthesis and style transfer) ★★★★

Link: arxiv.org/pdf/1603.03417

3.8 Object Segmentation

█[1] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015. [pdf] ★★★★★

Link: arxiv.org/pdf/1411.4038v2.pdf

█[2] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Semantic image segmentation with deep convolutional nets and fully connected crfs. In ICLR, 2015. [pdf] ★★★★★

Link: arxiv.org/pdf/1606.00915v1.pdf

█[3] Pinheiro, P.O., Collobert, R., Dollar, P. Learning to segment object candidates. In NIPS, 2015. [pdf] ★★★★

Link: arxiv.org/pdf/1506.06204v2.pdf

█[4] Dai, J., He, K., Sun, J. Instance-aware semantic segmentation via multi-task network cascades. In CVPR, 2016. [pdf] ★★★

Link: arxiv.org/pdf/1512.04412v1.pdf

█[5] Dai, J., He, K., Sun, J. Instance-sensitive Fully Convolutional Networks. arXiv preprint arXiv:1603.08678 (2016). [pdf] ★★★

Link: arxiv.org/pdf/1603.08678v1.pdf

Source: Leiphone (WeChat public account: 雷锋网), reprinted with authorization.

