# readpaper

**Repository Path**: nonday/readpaper

## Basic Information

- **Project Name**: readpaper
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2021-10-26
- **Last Updated**: 2024-05-11

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# ReadPaper

## Visualization

### t-SNE

* https://github.com/lvdmaaten/bhtsne

### CAM

* Grad-CAM

## Knowledge Distillation

* https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/advanced_tutorials/knowledge_distillation.md
* Beyond Self-Supervision: A Simple Yet Effective Network Distillation Alternative to Improve Backbones

## Lightweight-Network

### MicroNet

* MicroNet: Improving Image Recognition with Extremely Low FLOPs

### MobileOne

* MobileOne: An Improved One Millisecond Mobile Backbone

### PP-LCNet

* PP-LCNet: A Lightweight CPU Convolutional Neural Network
* PP-LCNetV2: https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/PP-LCNetV2.md

### ResRep

* ResRep: Lossless CNN Pruning via Decoupling Remembering and Forgetting

### RepMLP

* RepMLP: Re-parameterizing Convolutions into Fully-connected Layers for Image Recognition

### RepVGG

* RepVGG: Making VGG-style ConvNets Great Again
* RepGhost: A Hardware-Efficient Ghost Module via Re-parameterization

### ACNet

* ACNet: Strengthening the Kernel Skeletons for Powerful CNN via Asymmetric Convolution Blocks
* Diverse Branch Block: Building a Convolution as an Inception-like Unit

### MixNet

* MixConv: Mixed Depthwise Convolutional Kernels

### EfficientNet

* EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks
* EfficientNetV2: Smaller Models and Faster Training

### EfficientNet-lite

* https://blog.tensorflow.org/2020/03/higher-accuracy-on-vision-models-with-efficientnet-lite.html
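EfficientNet's compound scaling grows depth, width, and input resolution together from a single coefficient phi. A minimal sketch of the rule, using the base coefficients reported in the paper (alpha=1.2, beta=1.1, gamma=1.15, chosen so that alpha * beta^2 * gamma^2 is close to 2, i.e. each increment of phi roughly doubles FLOPs):

```python
# Sketch of EfficientNet's compound scaling rule. The base coefficients are
# the ones found by the paper's grid search on B0; everything else here is
# illustrative arithmetic, not an actual model definition.
alpha, beta, gamma = 1.2, 1.1, 1.15

def scale(phi):
    # depth, width, and resolution multipliers for a given compound coefficient
    return alpha ** phi, beta ** phi, gamma ** phi

d, w, r = scale(2)
# FLOPs of a conv net scale roughly as depth * width^2 * resolution^2
flops_factor = d * w ** 2 * r ** 2
print(round(flops_factor, 2))  # 3.69, i.e. roughly two doublings for phi = 2
```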
### GhostNet

* GhostNet: More Features From Cheap Operations
* GhostNets on Heterogeneous Devices via Cheap Operations
* GhostNetV2: Enhance Cheap Operation with Long-Range Attention

### SqueezeNet

* SqueezeNet: AlexNet-Level Accuracy with 50x Fewer Parameters and <0.5MB Model Size

### ShuffleNet

* ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices
  - Replaces the plain 1x1 pointwise convolution with a 1x1 grouped convolution plus channel shuffle to reduce FLOPs
  - `MobileNet uses depthwise + 1x1 pointwise; the computation is concentrated in the 1x1 pointwise conv`
* ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design
  - For a 1x1 conv, MAC (memory access cost) is minimized when input and output channel counts are equal
  - Use group convolution with caution: more groups means higher MAC
  - Avoid network fragmentation (multi-branch structures); it slows inference
  - Reduce element-wise operations such as ReLU and add: their FLOPs are small but their MAC is large

  ```
  The guidelines above compare MAC at fixed FLOPs; reducing FLOPs also reduces MAC.
  Speed-up strategy: reduce MAC and FLOPs together.
  ```

### MobileNet

* MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
  - Replaces conv3x3 with depthwise conv3x3 + pointwise conv1x1, leaving input and output channel counts unchanged
  - Introduces two hyperparameters: a Width Multiplier (channels) and a Resolution Multiplier (input resolution)

  ```
  A conv3x3+bn+relu becomes (depthwise conv3x3+bn+relu) + (pointwise conv1x1+bn+relu).
  Whatever follows the original conv also follows the depthwise and pointwise convs:
  conv_nxn+ops <=> (depthwise conv_nxn+ops) + (pointwise conv1x1+ops)
  Any standard convolution can be replaced this way, not only conv3x3.
  ```

* MobileNetV2: Inverted Residuals and Linear Bottlenecks
  - Inverted residuals: conv => pointwise conv1x1 (expand channels) + depthwise conv + pointwise conv1x1 (compress channels)
  - Inverted residuals add a shortcut connection

  ```
  As in V1, conv_nxn+ops <=> (depthwise conv_nxn+ops) + (pointwise conv1x1+ops).
  In V2 the final pointwise conv1x1 uses no activation function (the paper calls this a linear bottleneck).
  ```

* Searching for MobileNetV3
  - Replaces ReLU with h-swish
  - Adds a Squeeze-and-Excitation block inside the bottleneck, placed after the depthwise filter
* MobileNetV4: Universal Models for the Mobile Ecosystem

### Pelee

* Pelee: A Real-Time Object Detection System on Mobile Devices

### VoVNet

* An Energy and GPU-Computation Efficient Backbone Network for Real-Time Object Detection
* CenterMask: Real-Time Anchor-Free Instance Segmentation
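The MobileNet substitution above (conv3x3 replaced by depthwise conv3x3 + pointwise conv1x1) is easy to sanity-check with a parameter count. A minimal pure-Python sketch; the channel sizes are illustrative, not taken from any paper:

```python
# Parameter counts for MobileNet v1's depthwise-separable substitution:
# a standard kxk conv has k*k*cin*cout weights, while depthwise kxk +
# pointwise 1x1 has k*k*cin + cin*cout (biases and BN omitted).
def conv_params(cin, cout, k):
    return k * k * cin * cout

def dw_separable_params(cin, cout, k):
    # depthwise: one kxk filter per input channel; pointwise: 1x1 channel mixing
    return k * k * cin + cin * cout

print(conv_params(16, 32, 3))          # 4608
print(dw_separable_params(16, 32, 3))  # 656
```

The ratio 656/4608 is about 0.14, matching the reduction factor 1/cout + 1/k^2 given in the MobileNet paper.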
## Other-Network

### SENet

* Squeeze-and-Excitation Networks

### Batch Normalization

* Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
* High-Performance Large-Scale Image Recognition Without Normalization

## Code

* https://github.com/rwightman/pytorch-image-models
* https://github.com/viperit/FDDA/tree/main/pytorchcv/models
* https://github.com/pytorch/vision/tree/d367a01a18a3ae6bee13d8be3b63fd6a581ea46f/torchvision/models
* https://github.com/PaddlePaddle/PaddleSleeve/tree/18cc4b83ae311365b8d132ea4619d60abf3945bf/AdvBox/examples/objectdetector/ppdet/modeling/backbones
* https://github.com/PaddlePaddle/PaddleClas/tree/9312a29bc5dd34fbce219961c16e012a1baed7f5/ppcls/arch/backbone/model_zoo

## Useful Networks

* https://github.com/shanglianlm0525/PyTorch-Networks
* https://github.com/shanglianlm0525/CvPytorch

## Fuse-BN

* https://github.com/pytorch/pytorch/blob/master/torch/nn/utils/fusion.py
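The fusion utility linked above folds an eval-mode BatchNorm into the preceding convolution: BN's per-channel scale and shift are absorbed into the conv weight and bias, so inference needs only one layer. The arithmetic can be sketched with scalars (values below are made up; the real utility applies the same formula per output channel over the full kernel):

```python
import math

# Conv output (no bias): y = w * x. Eval-mode BN then computes
#   gamma * (y - mean) / sqrt(var + eps) + beta
# which folds into a single affine conv:
#   w' = w * gamma / sqrt(var + eps),  b' = beta - mean * gamma / sqrt(var + eps)
def fuse(w, gamma, beta, mean, var, eps=1e-5):
    scale = gamma / math.sqrt(var + eps)
    return w * scale, beta - mean * scale

w, gamma, beta, mean, var = 0.8, 1.2, 0.1, 0.3, 0.9  # illustrative values
x = 2.0
y_bn = gamma * (w * x - mean) / math.sqrt(var + 1e-5) + beta
w_f, b_f = fuse(w, gamma, beta, mean, var)
print(abs(y_bn - (w_f * x + b_f)) < 1e-9)  # True: fused conv matches conv+BN
```

Note this only holds with BN in eval mode (running statistics); in train mode BN uses batch statistics and cannot be folded into fixed weights.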