Reposted from AINLP: a recommendation for an NLP-related project on GitHub: msgi/nlp-journey
Project address (the "read original" link goes straight there); contributions and stars welcome: https://github.com/msgi/nlp-journey. The project's author is 慢時光 from the AINLP discussion group. The project collects NLP-related code, covering word embeddings, named entity recognition (NER), text classification, text generation, and text similarity computation, built on Keras and TensorFlow. It also collects links to related books, papers, blog posts, algorithms, and other project resources, all carefully categorized. The following is taken from the project's introduction page.
Books:
- Deep Learning. Essential deep-learning reading.
- Speech and Language Processing (3rd ed.), Stanford. Essential NLP reading.
- Neural Networks and Deep Learning. Essential introductory reading.
- 《神經(jīng)網(wǎng)絡(luò)與深度學(xué)習(xí)》 (Neural Networks and Deep Learning), Prof. Xipeng Qiu (邱錫鵬), Fudan University.
- CS224d: Deep Learning for Natural Language Processing.
Papers:
- EDA: Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks
- A Neural Probabilistic Language Model
- Transformer
- Transformer-XL
- Convolutional Neural Networks for Sentence Classification
- Attention-Based Bidirectional Long Short-Term Memory Networks for Relation Classification
- A Question-Focused Multi-Factor Attention Network for Question Answering
- AutoCross: Automatic Feature Crossing for Tabular Data in Real-World Applications
- GloVe: Global Vectors for Word Representation
- A Deep Ensemble Model with Slot Alignment for Sequence-to-Sequence Natural Language Generation
- The Design and Implementation of XiaoIce, an Empathetic Social Chatbot
- A Knowledge-Grounded Neural Conversation Model
- Neural Generative Question Answering
- A Sensitivity Analysis of (and Practitioners' Guide to) Convolutional Neural Networks for Sentence Classification
- ImageNet Classification with Deep Convolutional Neural Networks
- Network In Network
- Long Short-term Memory
- Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation
- Get To The Point: Summarization with Pointer-Generator Networks
- Generative Adversarial Text to Image Synthesis
- Image-to-Image Translation with Conditional Adversarial Networks
- Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network
- Unsupervised Learning of Visual Structure using Predictive Generative Networks
- Learning to Rank Short Text Pairs with Convolutional Deep Neural Networks
- Event Extraction via Dynamic Multi-Pooling Convolutional Neural Networks
- Low-Memory Neural Network Training: A Technical Report
- Language Models are Unsupervised Multitask Learners
- Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
- SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient
Blog posts:
- The Illustrated Transformer
- Attention-based-model
- KL divergence
- Building Autoencoders in Keras
- Modern Deep Learning Techniques Applied to Natural Language Processing
- Node2vec embeddings for graph data
- BERT explained (Bert解讀)
- Unbelievable! LSTMs and GRUs have never been explained so clearly (animations + video)
Already implemented: word-vector construction with fastText (skip-gram + CBOW) and gensim (word2vec); data augmentation with EDA.
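As a flavor of the EDA technique from the paper listed above, here is a minimal sketch of two of its four operations (random swap and random deletion) in plain Python; the full method also includes synonym replacement and random insertion, and the function name and parameters here are illustrative, not taken from the project's code.

```python
import random

def eda_augment(tokens, p_delete=0.1, n_swaps=1, seed=None):
    """Sketch of two EDA operations: random swap, then random deletion."""
    rng = random.Random(seed)
    out = tokens[:]
    # Random swap: exchange two randomly chosen positions, n_swaps times.
    for _ in range(n_swaps):
        if len(out) >= 2:
            i, j = rng.sample(range(len(out)), 2)
            out[i], out[j] = out[j], out[i]
    # Random deletion: drop each token with probability p_delete,
    # but never return an empty sentence.
    kept = [t for t in out if rng.random() > p_delete]
    return kept if kept else [rng.choice(out)]

# Each call yields a slightly perturbed copy of the input sentence.
sentence = "the movie was surprisingly good".split()
augmented = eda_augment(sentence, seed=42)
```

In practice EDA generates several such perturbed copies per training sentence, which the paper reports is most helpful for small text-classification datasets.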
Text classification models: SVM, fastText, TextCNN, BiLSTM + Attention, RCNN, HAN.
Recommended blogs: 莫墜青云志, 彗雙智能 (Keras source-code analysis), 機器之心 (Synced), colah, ZHPMATRIX, wildml, 徐阿衡, 零基礎(chǔ)入門深度學(xué)習(xí).
Major conferences:
- ACL — Association for Computational Linguistics
- EMNLP — Empirical Methods in Natural Language Processing
- COLING — International Conference on Computational Linguistics
- NIPS — Neural Information Processing Systems
- AAAI — AAAI Conference on Artificial Intelligence
- IJCAI — International Joint Conference on Artificial Intelligence
- ICML — International Conference on Machine Learning