Deep Learning with mxnet
Tokyo.R#57 2016-09-24 @kashitan
• A Deep Learning framework developed by DMLC, the project behind xgboost
• Interfaces for many languages, including C++, Scala, Python, R, and Julia
• Easy to switch between CPU and GPU
• Supports distributed training across multiple cores and nodes
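The CPU/GPU point can be made concrete with a minimal sketch (assuming the mxnet R package is installed; the toy data and network here are illustrative only): the only thing that changes between devices is the `ctx` argument passed to training.

```r
library(mxnet)
mx.set.seed(0)

# Toy 3-class problem built from the bundled iris data (illustration only)
x <- data.matrix(iris[, 1:4])
y <- as.numeric(iris$Species) - 1

# A minimal network: one fully connected layer with a softmax output
data <- mx.symbol.Variable("data")
fc   <- mx.symbol.FullyConnected(data, num_hidden = 3)
net  <- mx.symbol.SoftmaxOutput(fc)

# Switching device is one argument: mx.cpu() here, mx.gpu() on a CUDA build,
# or a list such as list(mx.gpu(0), mx.gpu(1)) for multiple GPUs
devices <- mx.cpu()

model <- mx.model.FeedForward.create(
  net, X = x, y = y,
  ctx = devices,
  num.round = 5, array.batch.size = 30,
  learning.rate = 0.1,
  eval.metric = mx.metric.accuracy)
```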
• Multilayer perceptron
• Convolutional neural network
Multilayer perceptron
• A network that arranges perceptrons in multiple layers and propagates the values given to the input layer forward through the network

[Figure: input layer (x1 … xn) → hidden layer → output layer (y1, y2)]
Multilayer perceptron
• Trying it on the iris data

Sepal.Length  Sepal.Width  Petal.Length  Petal.Width  Species
         5.1          3.5           1.4          0.2   setosa
         4.9          3.0           1.4          0.2   setosa
         4.7          3.2           1.3          0.2   setosa
         4.6          3.1           1.5          0.2   setosa

A data set with 50 rows each of the species setosa, versicolor, and virginica
Multilayer perceptron
• Split the data into training and test sets

> data(iris)
> # Odd rows become training data, even rows become test data
> train.ind <- seq(1, nrow(iris), 2)
>
> train.x <- data.matrix(iris[train.ind, 1:4])
> train.y <- as.numeric(iris[train.ind, 5]) - 1
> test.x <- data.matrix(iris[-train.ind, 1:4])
> test.y <- as.numeric(iris[-train.ind, 5]) - 1
> table(train.y)
train.y
 0  1  2
25 25 25
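The odd/even split only yields balanced classes because iris happens to be sorted by species. As a hedged alternative sketch in plain base R (the seed value is arbitrary), a random split stratified by species:

```r
data(iris)
set.seed(42)  # arbitrary seed, for reproducibility only

# Sample half of the row indices within each species
train.ind <- unlist(lapply(split(seq_len(nrow(iris)), iris$Species),
                           function(idx) sample(idx, length(idx) / 2)))

train.x <- data.matrix(iris[train.ind, 1:4])
train.y <- as.numeric(iris[train.ind, 5]) - 1  # mxnet labels start at 0
test.x  <- data.matrix(iris[-train.ind, 1:4])
test.y  <- as.numeric(iris[-train.ind, 5]) - 1

table(train.y)  # 25 of each class, like the odd/even split
```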
Multilayer perceptron
• Load the package and set up logging

> library(mxnet)
>
> # Function that controls mxnet's random processes
> mx.set.seed(0)
>
> # Reference class that stores the training and validation metrics for later use
> logger <- mx.metric.logger$new()
Multilayer perceptron
• Build the model

> model <- mx.mlp(
+   train.x, train.y,
+   eval.data = list(data = test.x, label = test.y),
+   hidden_node = 6,
+   out_node = 3,
+   activation = "relu",
+   out_activation = "softmax",
+   num.round = 200,
+   array.batch.size = 25,
+   learning.rate = 0.07,
+   momentum = 0.9,
+   eval.metric = mx.metric.accuracy,
+   epoch.end.callback = mx.callback.log.train.metric(5, logger)
+ )

Argument notes:
• hidden_node: number of units in the hidden layer
• out_node: number of units in the output layer; 3 here because this is a 3-class problem
• activation: activation function; ReLU in this case
• out_activation: output activation / error function
• num.round: number of training iterations
• array.batch.size: number of training samples used per update
• learning.rate: learning rate
• momentum: momentum
• eval.metric: evaluation metric; for regression, use RMSE or similar
• epoch.end.callback: function executed after each epoch
Multilayer perceptron
• Training result

Start training with 1 devices
[1] Train-accuracy=0.38
[1] Validation-accuracy=0.333333333333333
[2] Train-accuracy=0.333333333333333
[2] Validation-accuracy=0.333333333333333
[3] Train-accuracy=0.333333333333333
[3] Validation-accuracy=0.333333333333333
(omitted)
[199] Train-accuracy=0.96
[199] Validation-accuracy=0.96
[200] Train-accuracy=0.986666666666667
[200] Validation-accuracy=0.96
Multilayer perceptron
• Plot the accuracy per epoch

> library(dplyr)
> library(plotly)
> data.frame(epoch = seq(1, 200, 1),
+            train = logger$train, test = logger$eval) %>%
+   plot_ly(
+     x = epoch, y = train,
+     type = "scatter", mode = "lines", name = "train"
+   ) %>%
+   add_trace(
+     x = epoch, y = test,
+     type = "scatter", mode = "lines", name = "test"
+   ) %>%
+   layout(yaxis = list(title = "accuracy"))
Multilayer perceptron
• Plot the accuracy per epoch

[Plot: training and test accuracy over 200 epochs]
Multilayer perceptron
• Apply the model to the test data

> # Predict on the test data
> preds <- predict(model, test.x)
> head(t(preds))
          [,1]         [,2]         [,3]
[1,] 0.9999290 7.107238e-05 1.109092e-12
[2,] 0.9997237 2.762006e-04 1.440176e-11
[3,] 0.9999521 4.790557e-05 6.202604e-13
[4,] 0.9999722 2.779509e-05 1.976960e-13
[5,] 0.9999521 4.791964e-05 5.282145e-13
[6,] 0.9998796 1.204296e-04 3.145415e-12
Multilayer perceptron
• Check the confusion matrix and accuracy

> # Take the column with the highest probability as the predicted species
> pred.label <- max.col(t(preds)) - 1
> # Show the confusion matrix
> library(caret)
> confusionMatrix(pred.label, test.y)
Confusion Matrix and Statistics

          Reference
Prediction  0  1  2
         0 25  0  0
         1  0 24  2
         2  0  1 23
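The accuracy can be read off the matrix by hand: correct predictions sit on the diagonal. A quick base-R check using the counts from the output above:

```r
# The confusion matrix from the output above (rows = Prediction, cols = Reference)
cm <- matrix(c(25,  0,  0,
                0, 24,  2,
                0,  1, 23),
             nrow = 3, byrow = TRUE,
             dimnames = list(Prediction = 0:2, Reference = 0:2))

accuracy <- sum(diag(cm)) / sum(cm)
accuracy  # 72 / 75 = 0.96, matching the validation accuracy at epoch 200
```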
Multilayer perceptron
• Draw the computation graph

> graph.viz(model$symbol$as.json())

[Graph: input layer → hidden layer → activation function → output layer → likelihood function]
Multilayer perceptron
• Trying two hidden layers

> model <- mx.mlp(
+   train.x, train.y,
+   eval.data = list(data = test.x, label = test.y),
+   hidden_node = c(6, 6),
+   out_node = 3,
+   activation = "relu",
+   out_activation = "softmax",
+   num.round = 200,
+   array.batch.size = 25,
+   learning.rate = 0.07,
+   momentum = 0.9,
+   eval.metric = mx.metric.accuracy,
+   epoch.end.callback = mx.callback.log.train.metric(5, logger)
+ )
Multilayer perceptron
• Draw the computation graph

> graph.viz(model$symbol$as.json())

The graph now shows two hidden layers.
Multilayer perceptron
• Plot the accuracy per epoch

[Plot: training and test accuracy over 200 epochs]
Multilayer perceptron
• Check the confusion matrix and accuracy

> confusionMatrix(pred.label, test.y)
Confusion Matrix and Statistics

          Reference
Prediction  0  1  2
         0 25  0  0
         1  0 25 25
         2  0  0  0

Overall Statistics
               Accuracy : 0.6667
                 95% CI : (0.5483, 0.7714)
Multilayer perceptron
• Occam's razor

"Pluralitas non est ponenda sine neccesitate. Frustra fit per plura quod potest fieri per pauciora."

Plurality should not be posited without necessity. It is pointless to do with more what can be done with fewer.
• Multilayer perceptron
• Convolutional neural network
Convolutional neural network
• Models the receptive fields of the visual cortex
• Composed of the following layers:
  • Convolution layers
  • Pooling layers
  • Fully connected layers
  • Output layer
Convolutional neural network
• The convolution and pooling layers are repeated several times
• The fully connected layers are also stacked to form a deep network

Excerpted from http://yann.lecun.com/exdb/publis/pdf/lecun-98.pdf
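The repeated shrinking can be traced with the standard size arithmetic: an n×n map passed through a k×k valid convolution (stride 1) becomes n−k+1, and non-overlapping 2×2 pooling halves it. A base-R sketch following a 28×28 MNIST image through the LeNet-style layers used later in this deck (assuming stride-2 pooling):

```r
conv_out <- function(n, k) n - k + 1    # valid convolution, stride 1
pool_out <- function(n, k = 2) n %/% k  # non-overlapping k x k pooling

n <- 28              # MNIST input
n <- conv_out(n, 5)  # C1: 5x5 filters -> 24
n <- pool_out(n)     # S2: 2x2 pooling -> 12
n <- conv_out(n, 5)  # C3: 5x5 filters -> 8
n <- pool_out(n)     # S4: 2x2 pooling -> 4
n                    # 4x4 feature maps feed the fully connected layers
```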
Convolutional neural network
• For details, see this excellent slide deck:

http://www.slideshare.net/matsukenbook/ss-50545587
Convolutional neural network
• Trying it on the MNIST data
  • Handwritten digits from 0 to 9
  • 60,000 training samples
  • 10,000 test samples
Convolutional neural network
• Reproduce LeNet-5 as faithfully as possible
Convolutional neural network
• Download the MNIST data

http://yann.lecun.com/exdb/mnist/
Convolutional neural network
• Load a function that reads the binary data

https://gist.github.com/brendano/39760
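For reference, a hedged sketch of what such a reader does. The MNIST files use the IDX format: a big-endian int32 magic number and dimensions, followed by raw unsigned bytes. The function name and the assumption that the files have been gunzipped are mine, not the gist's:

```r
# Sketch of an IDX image reader for the files from http://yann.lecun.com/exdb/mnist/
read_idx_images <- function(path) {
  f <- file(path, "rb")
  on.exit(close(f))
  magic <- readBin(f, integer(), n = 1, endian = "big")  # 2051 for image files
  n     <- readBin(f, integer(), n = 1, endian = "big")  # number of images
  nr    <- readBin(f, integer(), n = 1, endian = "big")  # rows per image (28)
  nc    <- readBin(f, integer(), n = 1, endian = "big")  # cols per image (28)
  px    <- readBin(f, integer(), n = n * nr * nc, size = 1, signed = FALSE)
  matrix(px, nrow = n, ncol = nr * nc, byrow = TRUE)     # one image per row
}

# e.g. train_x <- read_idx_images("train-images-idx3-ubyte")
```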
Convolutional neural network
• Read the data

> # Read the MNIST data
> load_mnist()
>
> # 60000 records of 28x28 (= 784) pixels
> dim(train$x)
[1] 60000   784
Convolutional neural network
• Reshape the dimensions and plot

> # Convert into the 4-D array that mx.symbol.Convolution can process
> train.x <- t(train$x)
> dim(train.x) <- c(28, 28, 1, 60000)
>
> test.x <- t(test$x)
> dim(test.x) <- c(28, 28, 1, 10000)
>
> # Plot the first digit
> library(plotly)
> plot_ly(z = train.x[, , 1, 1], colorscale = "Greys", type = "heatmap")
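What the `dim<-` assignment does can be checked in plain base R with fake data (no mxnet needed). The layout convention is width × height × channels × samples:

```r
# Fake "MNIST": 10 flattened 28x28 images, one per row (like train$x)
fake <- matrix(runif(10 * 784), nrow = 10, ncol = 784)

# Transpose so each column holds one image, then fold into a 4-D array
x <- t(fake)
dim(x) <- c(28, 28, 1, 10)

dim(x)  # 28 28 1 10
# Image 3 survives the reshape intact:
all(x[, , 1, 3] == matrix(fake[3, ], 28, 28))  # TRUE
```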
Convolutional neural network
• Reshape the dimensions and plot

[Plot: heatmap of the first training digit]
Convolutional neural network
• Define the input layer, C1, and S2

> library(mxnet)
> input <- mx.symbol.Variable('data')
>
> # C1
> c1 <- mx.symbol.Convolution(
+   data = input, kernel = c(5, 5), num_filter = 6)
> a1 <- mx.symbol.Activation(data = c1, act_type = "tanh")
>
> # S2
> s2 <- mx.symbol.Pooling(
+   data = a1, pool_type = "max", kernel = c(2, 2))

Argument notes:
• kernel (Convolution): filter size
• num_filter: number of filters
• act_type: activation function
• pool_type: pooling type
• kernel (Pooling): pooling window size
Convolutional neural network
• Define C3, S4, and C5

> # C3
> c3 <- mx.symbol.Convolution(
+   data = s2, kernel = c(5, 5), num_filter = 16)
> a2 <- mx.symbol.Activation(data = c3, act_type = "tanh")
>
> # S4
> s4 <- mx.symbol.Pooling(
+   data = a2, pool_type = "max", kernel = c(2, 2))
>
> # C5
> c5 <- mx.symbol.Flatten(data = s4)
Convolutional neural network
• Define F6, OUTPUT, and the loss function

> # F6
> f6 <- mx.symbol.FullyConnected(data = c5, num_hidden = 84)
> a3 <- mx.symbol.Activation(data = f6, act_type = "tanh")
>
> # Output
> output <- mx.symbol.FullyConnected(data = a3, num_hidden = 10)
>
> # loss
> lenet <- mx.symbol.SoftmaxOutput(data = output)
Convolutional neural network
• Train the network model

> model <- mx.model.FeedForward.create(
+   lenet,
+   X = train.x,
+   y = train$y,
+   ctx = mx.gpu(),
+   num.round = 20,
+   array.batch.size = 1000,
+   learning.rate = 0.05,
+   momentum = 0.9,
+   wd = 0.00001,
+   eval.metric = mx.metric.accuracy,
+   epoch.end.callback = mx.callback.log.train.metric(100)
+ )
Convolutional neural network
• Plot the accuracy per epoch

> data.frame(epoch = 1:20,
+            train = logger$train,
+            test = logger$eval) %>%
+   plot_ly(
+     x = epoch, y = train,
+     type = "scatter", mode = "lines", name = "train") %>%
+   add_trace(
+     x = epoch, y = test,
+     type = "scatter", mode = "lines", name = "test") %>%
+   layout(yaxis = list(title = "accuracy"))
Convolutional neural network
• Plot the accuracy per epoch

[Plot: training and test accuracy over 20 epochs]
Convolutional neural network
• Check the confusion matrix and accuracy

> confusionMatrix(pred.label, test$y)
Confusion Matrix and Statistics

          Reference
Prediction    0    1    2    3    4    5    6    7    8    9
         0  970    0    1    0    0    2    5    0    5    3
         1    0 1126    2    0    0    0    2    1    0    2
         2    3    1 1020    1    1    0    1    4    3    0
         3    0    1    0  997    0    7    1    2    3    4
         4    1    0    3    0  971    0    2    0    2   13
         5    0    0    0    5    0  876    2    0    2    2
         6    2    2    1    0    2    4  943    0    1    0
         7    1    2    4    5    3    1    0 1019    6   10
         8    3    3    1    2    0    1    2    0  951    2
Convolutional neural network
• Draw the computation graph

> graph.viz(model$symbol$as.json())
Summary
Even with R, deep learning goes smoothly!
References
• MXNet R Package - mxnet 0.7.0 documentation
• Deep Learningライブラリ{mxnet}のR版でConvolutional Neural Networkをサクッと試してみた(追記3件あり)
• Deep Learningライブラリ「MXNet」のR版をKaggle Otto Challengeで実践してみた
• Mxnetで回帰 #TokyoR 53th