
DL Series | DenseNet: Introduction to the Algorithm (Paper Overview), Architecture Details, Example Applications, and More (A Detailed Illustrated Guide)

 處女座的程序猿 2021-09-28



相關(guān)文章
DL之DenseNet:DenseNet算法的簡(jiǎn)介(論文介紹)、架構(gòu)詳解、案例應(yīng)用等配圖集合之詳細(xì)攻略
DL之DenseNet:DenseNet算法的架構(gòu)詳解

DenseNet算法的簡(jiǎn)介(論文介紹)

        DenseNet (Densely Connected Convolutional Networks) builds to some extent on ideas from ResNet; the paper received the CVPR 2017 Best Paper Award.

Abstract
      Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections—one between each layer and its subsequent layer—our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance. Code and pre-trained models are available at https://github.com/liuzhuang13/DenseNet.

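As a quick check of the connection count quoted in the abstract, here is a small sketch (plain Python; the function name is illustrative, not from the paper's code):

```python
def dense_connections(num_layers: int) -> int:
    """Number of direct connections in an L-layer densely connected block.

    Every layer feeds all subsequent layers, so the total is L*(L+1)/2,
    versus only L connections in a traditional chain of layers.
    """
    return num_layers * (num_layers + 1) // 2

print(dense_connections(4))    # 10 direct connections for L = 4
print(dense_connections(100))  # 5050: dense connectivity grows quadratically
```

The quadratic growth is why DenseNet applies this pattern inside blocks of bounded depth rather than across the whole network.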

Conclusion
      We proposed a new convolutional network architecture, which we refer to as Dense Convolutional Network (DenseNet). It introduces direct connections between any two layers with the same feature-map size. We showed that DenseNets scale naturally to hundreds of layers, while exhibiting no optimization difficulties. In our experiments, DenseNets tend to yield consistent improvement in accuracy with growing number of parameters, without any signs of performance degradation or overfitting. Under multiple settings, it achieved state-of-the-art results across several highly competitive datasets. Moreover, DenseNets require substantially fewer parameters and less computation to achieve state-of-the-art performances. Because we adopted hyperparameter settings optimized for residual networks in our study, we believe that further gains in accuracy of DenseNets may be obtained by more detailed tuning of hyperparameters and learning rate schedules.
      Whilst following a simple connectivity rule, DenseNets naturally integrate the properties of identity mappings, deep supervision, and diversified depth. They allow feature reuse throughout the networks and can consequently learn more compact and, according to our experiments, more accurate models. Because of their compact internal representations and reduced feature redundancy, DenseNets may be good feature extractors for various computer vision tasks that build on convolutional features, e.g., [4, 5]. We plan to study such feature transfer with DenseNets in future work.
結(jié)論
? ? ? ?我們提出了一種新的卷積網(wǎng)絡(luò)結(jié)構(gòu),我們稱之為密集卷積網(wǎng)絡(luò)(DenseNet)。它引入了任何兩層之間具有相同feature-map大小的直接連接。我們發(fā)現(xiàn)?DenseNets可以自然地?cái)U(kuò)展到數(shù)百層,但不存在優(yōu)化困難。在我們的實(shí)驗(yàn)中,隨著參數(shù)數(shù)量的增加,?DenseNets的精確度會(huì)持續(xù)提高,而不會(huì)出現(xiàn)性能下降或過度擬合的跡象。在多個(gè)設(shè)置下,它在多個(gè)高度競(jìng)爭(zhēng)的數(shù)據(jù)集中實(shí)現(xiàn)了最先進(jìn)的結(jié)果。此外,?DenseNets需要更少的參數(shù)和更少的計(jì)算來實(shí)現(xiàn)最先進(jìn)的性能。因?yàn)槲覀冊(cè)谘芯恐胁捎昧酸槍?duì)剩余網(wǎng)絡(luò)進(jìn)行優(yōu)化的超參數(shù)設(shè)置,我們相信通過更詳細(xì)地調(diào)整超參數(shù)和學(xué)習(xí)速率時(shí)間表,可以進(jìn)一步提高?DenseNets的精度。
? ? ? ?在遵循簡(jiǎn)單連接規(guī)則的同時(shí),?DenseNets自然地整合了身份映射、深度監(jiān)督和多樣化深度的屬性。它們?cè)试S在整個(gè)網(wǎng)絡(luò)中重復(fù)使用功能,因此可以學(xué)習(xí)更緊湊的,根據(jù)我們的實(shí)驗(yàn),更精確的模型。由于其緊湊的內(nèi)部表示和減少的特征冗余,DenseNets可能是各種計(jì)算機(jī)視覺任務(wù)的很好的特征提取器,這些任務(wù)基于卷積特征,例如[4,5]。我們計(jì)劃在未來的工作中與DenseNets一起研究這種特征轉(zhuǎn)移。

Paper
Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Weinberger.
Densely connected convolutional networks. CVPR. 2017 (CVPR Best Paper Award)
https:///pdf/1608.06993.pdf

GitHub
https://github.com/liuzhuang13/DenseNet
        DenseNet is a network architecture where each layer is directly connected to every other layer in a feed-forward fashion (within each dense block). For each layer, the feature maps of all preceding layers are treated as separate inputs whereas its own feature maps are passed on as inputs to all subsequent layers. This connectivity pattern yields state-of-the-art accuracies on CIFAR10/100 (with or without data augmentation) and SVHN. On the large scale ILSVRC 2012 (ImageNet) dataset, DenseNet achieves a similar accuracy as ResNet, but using less than half the amount of parameters and roughly half the number of FLOPs.
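To make the connectivity pattern concrete, here is a minimal pure-Python sketch (not the repository's implementation): feature maps are modeled as flat lists of channel values, layers as plain functions, and channel-wise concatenation as list concatenation.

```python
from typing import Callable, List

Feature = List[float]  # a "feature map" reduced to a list of channel values

def dense_block(x: Feature, layers: List[Callable[[Feature], Feature]]) -> Feature:
    # Each layer receives the concatenation of the block input and the outputs
    # of ALL preceding layers, then contributes its own output to the pool.
    features = [x]
    for layer in layers:
        out = layer(sum(features, []))  # concatenate along the "channel" axis
        features.append(out)
    return sum(features, [])  # block output: every feature map, concatenated

# A toy layer that always emits 2 "channels" (a growth rate of k = 2);
# its first channel records how many channels it received as input.
def make_layer() -> Callable[[Feature], Feature]:
    return lambda feats: [float(len(feats)), 0.0]

out = dense_block([1.0] * 4, [make_layer() for _ in range(3)])
print(len(out))  # 10 channels: k0 + 3*k = 4 + 3*2
```

Note how the layers saw 4, 6, and 8 input channels respectively: the input width grows by k at every step, which is exactly the feature-reuse mechanism described above.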

DenseNet算法的架構(gòu)詳解


3. DenseNet architectures for ImageNet

The growth rate for all the networks is k = 32. Note that each “conv” layer shown in the table corresponds to the sequence BN-ReLU-Conv.
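Because every layer adds k = 32 new feature maps, the number of channels a layer receives grows linearly with its position inside the block. A small helper illustrates the arithmetic (the 64-channel stem and 6-layer first block assume the DenseNet-121 configuration from the paper's table):

```python
def input_channels(k0: int, k: int, layer_index: int) -> int:
    # The l-th layer of a dense block (0-indexed) receives the block input
    # (k0 channels) plus the k channels added by each of the l earlier layers.
    return k0 + layer_index * k

# DenseNet-121: the stem emits k0 = 64 channels and the first block has 6
# layers with growth rate k = 32, so 64 + 6*32 = 256 channels leave the block.
print(input_channels(64, 32, 6))  # 256
```

The 1x1 "transition" convolutions between blocks exist precisely to compress this linearly growing width back down before the next block.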

4、實(shí)驗(yàn)結(jié)果

1、CIFAR-10上的結(jié)果

2、ImageNet上的結(jié)果

The top-1 and top-5 error rates on the ImageNet validation set, with single-crop / 10-crop testing

ImageNet上的結(jié)果:基于DenseNet的分類器只需要ResNet一半的參數(shù)量,就可在ImageNet上達(dá)到相同分類精度


DenseNet算法的案例應(yīng)用

后期更新……

    轉(zhuǎn)藏 分享 獻(xiàn)花(0

    0條評(píng)論

    發(fā)表

    請(qǐng)遵守用戶 評(píng)論公約

    類似文章 更多

    欧美国产日产综合精品| 最新国产欧美精品91| 欧美黑人黄色一区二区| 最新午夜福利视频偷拍| 国产亚洲不卡一区二区| 黄色国产自拍在线观看| 91精品国产综合久久精品 | 欧美av人人妻av人人爽蜜桃| 亚洲伊人久久精品国产| 国产又粗又长又大高潮视频| 国产高清在线不卡一区| 日韩一区二区三区久久| 中国美女偷拍福利视频| 又黄又色又爽又免费的视频| 人妻少妇系列中文字幕| 国产老女人性生活视频| 国产欧美韩日一区二区三区| 精品一区二区三区不卡少妇av| 久久综合狠狠综合久久综合| 亚洲欧美中文日韩综合| 日韩精品一区二区不卡| 五月综合激情婷婷丁香| 国产一区欧美午夜福利| 黄片美女在线免费观看| 国产中文另类天堂二区| 国产麻豆成人精品区在线观看| 深夜视频在线观看免费你懂| 亚洲黄色在线观看免费高清| 中文字幕精品少妇人妻| 国产福利一区二区久久| 日本午夜一本久久久综合| 大尺度剧情国产在线视频| 开心激情网 激情五月天| 插进她的身体里在线观看骚| 国产精品成人免费精品自在线观看| 日本黄色高清视频久久| 小黄片大全欧美一区二区| 太香蕉久久国产精品视频| 婷婷激情四射在线观看视频| 亚洲精品国产福利在线| 精品久久av一二三区|