# Study Notes

**Repository Path**: sjyttkl/learning-notes

## Basic Information

- **Project Name**: Study Notes
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 1
- **Created**: 2022-10-25
- **Last Updated**: 2022-10-25

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# Paper Reading Plan

1. First skim all 20 papers to get a sense of their ideas (tentatively two papers per week, finishing in two months).
2. Pick the more novel ideas among them for close reading.
3. Pick the classic papers and their code for close reading, to understand the details.

# 2021.5.30

[Read paper: An Encoding Strategy Based Word-Character LSTM for Chinese NER](https://gitee.com/terry-gjt/learning-notes/blob/master/2019%20An%20Encoding%20Strategy%20Based%20Word-Character%20LSTM%20for%20Chinese%20NER.md)

Main idea:

# 2021.5.19

[Read paper: CAN-NER: Convolutional Attention Network for Chinese Named Entity Recognition](https://gitee.com/terry-gjt/learning-notes/blob/master/2019%20Convolutional%20Attention%20Network%20for%20Chinese%20Named%20Entity%20Recognition.md)

Main idea: uses a CNN to capture information from adjacent characters and the sentence context, with BiGRU-CRF as the paper's base model structure.

# 2021.5.12

[Read paper: Vanilla Transformer](https://gitee.com/terry-gjt/learning-notes/blob/master/Vanilla%20Transformer%20Character-Level%20Language%20Modeling%20with%20Deeper%20Self-Attention.md)

Main idea: a deeper Transformer (64 layers); at every layer, the previous layer's output is summed with the positional embedding to form the next layer's input.

# 2021.5.8

[Read paper: FLAT: Chinese NER Using Flat-Lattice Transformer](https://gitee.com/terry-gjt/learning-notes/blob/master/FLAT%20Chinese%20NER%20Using%20Flat-Lattice%20Transformer.md)

Main idea: replaces the LSTM in the lattice-LSTM structure with a Transformer to support parallelism and speed up computation. Builds on the Vanilla Transformer.

# 2021.4.27

[Read paper: 2020 ACL - Simplify the Usage of Lexicon in Chinese NER](https://gitee.com/terry-gjt/learning-notes/blob/master/2020ACL---Simplify%20the%20Usage%20of%20Lexicon%20in%20Chinese%20NER.md)

Main idea: simplifies the lattice-LSTM structure to speed up computation while preserving its original advantages as much as possible.

# 2021.4.21

[Read article: Model Distillation](https://zhuanlan.zhihu.com/p/71986772)

Main idea: a student model learns the generalization ability of a teacher model in order to improve its accuracy.

# 2021.4.14

[Read paper: Transformers: Attention Is All You Need](https://wp.recgroup.cn/?p=2577)
Main idea: based entirely on the attention mechanism, dispensing with recurrence and convolutions altogether, and solving the long-range dependency problem of LSTMs.

# 2021.4.11

[Read paper: Entity Enhanced BERT Pre-training for Chinese NER](https://wp.recgroup.cn/?p=2562)

Main idea: adds domain-specific lexicon embeddings into BERT for pre-training, since lexicon information helps named entity recognition; the lexicon is generated by an information-entropy-based new-word discovery method.

# 2021.4.1

[Read paper: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://wp.recgroup.cn/?p=2501)

Main idea: unsupervised pre-training via the Masked LM task, followed by supervised fine-tuning on different downstream tasks.

# 2021.3.17

[Read paper: Attention-Based Bidirectional Long Short-Term Memory Networks for Relation Classification](https://wp.recgroup.cn/?p=2498)

Main idea: a bidirectional LSTM with an attention layer on top.

# 2021.3.8

[Updated the list of 20 papers to read](https://gitee.com/terry-gjt/learning-notes/blob/master)

# 2021.2.25

[Updated my understanding of the LGN paper's metric evaluation](https://gitee.com/terry-gjt/learning-notes/blob/master)

Main idea: uses a global relay node to overcome the long-term dependency problem of LSTMs.

# 2021.2.8

[Drew the structure diagram for the LGN paper (in the paper)](https://wp.recgroup.cn/?p=1883)

# 2021.2.1

[Studied the PyTorch material used in the LGN paper](https://gitee.com/terry-gjt/learning-notes/blob/master)

# 2021.1.27

[Studied word-embedding fundamentals](https://gitee.com/terry-gjt/learning-notes/blob/master/%E8%AF%8D%E5%90%91%E9%87%8F%E7%9B%B8%E5%85%B3%E7%9F%A5%E8%AF%86.md)

# 2021.1.21

[Updated my combined understanding of the LGN paper's code and equations](https://wp.recgroup.cn/?p=1883)
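Several of the entries above (the Transformer, FLAT, Att-BLSTM) revolve around the attention mechanism. As a quick self-contained reference, here is a minimal NumPy sketch of scaled dot-product attention; the shapes and variable names are illustrative only and are not taken from any of the linked papers:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # (n_q, n_k) similarity scores
    weights = softmax(scores, axis=-1)   # each query's weights sum to 1
    return weights @ V, weights

# toy example: 2 queries attending over 3 key/value pairs
rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)                         # (2, 4)
print(np.allclose(w.sum(axis=-1), 1.0))  # True
```

The `1/sqrt(d_k)` scaling keeps the dot products from growing with the key dimension, which would otherwise push the softmax into regions with vanishing gradients.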
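The 2021.4.21 distillation note can likewise be made concrete. One common formulation (standard in the distillation literature, but not taken from the linked article; the temperature `T` and mixing weight `alpha` are illustrative choices) trains the student on a mix of the hard-label loss and a temperature-softened cross-entropy against the teacher's outputs:

```python
import numpy as np

def softmax(z, T=1.0):
    # temperature-softened softmax; higher T gives a softer distribution
    e = np.exp((z - z.max(axis=-1, keepdims=True)) / T)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """alpha * soft-target cross-entropy (scaled by T^2) + (1 - alpha) * hard CE."""
    p_teacher = softmax(teacher_logits, T)                # soft targets from teacher
    log_p_student_T = np.log(softmax(student_logits, T))
    soft_loss = -(p_teacher * log_p_student_T).sum(axis=-1).mean() * T * T
    log_p_student = np.log(softmax(student_logits))
    hard_loss = -log_p_student[np.arange(len(labels)), labels].mean()
    return alpha * soft_loss + (1 - alpha) * hard_loss

# toy batch: 4 examples, 3 classes
rng = np.random.default_rng(1)
s = rng.normal(size=(4, 3))   # student logits
t = rng.normal(size=(4, 3))   # teacher logits
y = np.array([0, 2, 1, 0])    # gold labels
loss = distillation_loss(s, t, y)
print(loss > 0)  # True
```

The `T * T` factor compensates for the `1/T` scaling of the logits so that the soft-loss gradients stay on the same scale as the hard-loss gradients when `T` changes.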