neural - wordpress.com · 2019. 10. 1.
Week 2 Notes

ReLU: Rectified Linear Unit, a type of activation function.
Neural Networks
input layer -> hidden layer -> output layer
Types of NN
- Standard NN
- Convolutional NN: used for image applications
- Recurrent NN: sequence data played out over 1D time, e.g. audio and language
- Complex/Hybrid NN: e.g. autonomous driving
Supervised Learning
Structured data Unstructured dataeach ofthe features has images audio texta defined meaning
Note: m = number of training examples.
Activation functions: sigmoid, ReLU.
Week 2
Binary Classification
Yes/No classification.
Image -> red pixel values, green pixel values, blue pixel values, stacked into one feature vector x.
n_x = 64 * 64 * 3 = 12288 = length of the feature vector.
One example: (image, label) = (x, y), e.g. y = 1 for "yes".
Notation
(x, y): x in R^{n_x}, y in {0, 1}
m training examples: {(x^(1), y^(1)), (x^(2), y^(2)), ..., (x^(m), y^(m))} = training set
m = number of training samples
Matrix X = [x^(1) x^(2) ... x^(m)], each example a column; X in R^{n_x x m}, X.shape = (n_x, m)
Matrix Y = [y^(1) y^(2) ... y^(m)], Y in R^{1 x m}
Logistic Regression
Given x want ya P y 11 x goal yn yT
prediction
estimateof y sigmoid r
Ii TG
Parameters w E R b f R t1
Output l o Emy r wtx t b l r L
I I te Zl
I 2 is large r G I 1I
2 WTxD b z is small r G TO
I
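The sigmoid above can be written directly in NumPy; a minimal sketch (the function name is mine):

```python
import numpy as np

def sigmoid(z):
    # sigma(z) = 1 / (1 + e^(-z)); works on scalars and arrays alike
    return 1.0 / (1.0 + np.exp(-z))
```

For large z the output approaches 1, for large negative z it approaches 0, matching the note above.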
Loss (error) function: measures performance on one single training sample; want it as small as possible.
L(y_hat, y) = -(y log y_hat + (1 - y) log(1 - y_hat))
If y = 1: L = -log y_hat -> want log y_hat large -> want y_hat large.
If y = 0: L = -log(1 - y_hat) -> want log(1 - y_hat) large -> want y_hat small.
Cost function: applied to the whole training set.
J(w, b) = (1/m) * sum_{i=1}^{m} L(y_hat^(i), y^(i))
        = -(1/m) * sum_{i=1}^{m} [y^(i) log y_hat^(i) + (1 - y^(i)) log(1 - y_hat^(i))]
(The loss is per example; the cost averages the loss over the training set.)
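A minimal sketch of the cost computation, assuming A (the activations) and Y are 1 x m NumPy arrays as in the notation above:

```python
import numpy as np

def cost(A, Y):
    # J = -(1/m) * sum(y*log(a) + (1-y)*log(1-a)) over the m examples
    m = Y.shape[1]
    return -np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) / m
```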
Gradient Descent
Want to find w, b that minimize J(w, b).
Repeat {
  w := w - alpha * dJ(w, b)/dw
  b := b - alpha * dJ(w, b)/db
}
alpha = learning rate; the derivative gives the change in J per change in the parameter.
In code, dw stands for dJ/dw and db for dJ/db.
Computation graph, one example with two features:
x1, x2, w1, w2, b -> z = w1*x1 + w2*x2 + b -> a = sigma(z) -> L(a, y)
Backward pass:
da = dL/da = -y/a + (1 - y)/(1 - a)
dz = dL/dz = a - y
dw1 = dL/dw1 = x1 * dz
dw2 = dL/dw2 = x2 * dz
db = dL/db = dz
Cost over the whole set: J(w, b) = (1/m) * sum_i L(a^(i), y^(i)); update w := w - alpha * dw.
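The key step dz = a - y can be verified with a finite-difference check; a small sketch (the parameter and input values are made up for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(a, y):
    return -(y * np.log(a) + (1 - y) * np.log(1 - a))

# one example, two features, as in the notes
w, b = np.array([0.5, -0.3]), 0.1
x, y = np.array([1.0, 2.0]), 1.0
z = w[0] * x[0] + w[1] * x[1] + b
a = sigmoid(z)
dz = a - y                          # analytic dL/dz
eps = 1e-6                          # numeric check: central difference in z
dz_num = (loss(sigmoid(z + eps), y) - loss(sigmoid(z - eps), y)) / (2 * eps)
```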
Logistic Regression on m examples
(next goal: getting rid of the for loop via vectorization)
J = 0; dw1 = 0; dw2 = 0; db = 0
For i = 1 to m:
  z^(i) = w^T x^(i) + b
  a^(i) = sigma(z^(i))
  J += -[y^(i) log a^(i) + (1 - y^(i)) log(1 - a^(i))]
  dz^(i) = a^(i) - y^(i)
  dw1 += x1^(i) * dz^(i)
  dw2 += x2^(i) * dz^(i)
  db += dz^(i)
J /= m; dw1 /= m; dw2 /= m; db /= m
Then update: w1 := w1 - alpha * dw1, w2 := w2 - alpha * dw2, b := b - alpha * db.
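The loop above, written out as runnable NumPy for the two-feature case (a sketch; the function name is mine):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grads_two_features(w1, w2, b, X, Y):
    # X: (2, m), Y: (1, m); the explicit dw1/dw2 bookkeeping mirrors
    # the notes' pre-vectorization version
    m = X.shape[1]
    J = dw1 = dw2 = db = 0.0
    for i in range(m):
        z = w1 * X[0, i] + w2 * X[1, i] + b
        a = sigmoid(z)
        J += -(Y[0, i] * np.log(a) + (1 - Y[0, i]) * np.log(1 - a))
        dz = a - Y[0, i]
        dw1 += X[0, i] * dz
        dw2 += X[1, i] * dz
        db += dz
    return J / m, dw1 / m, dw2 / m, db / m
```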
Vectorization
z = w^T x + b
Non-vectorized:
  z = 0
  for i in range(n_x): z += w[i] * x[i]
  z += b
Vectorized:
  z = np.dot(w, x) + b
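Both forms side by side, to confirm they agree (a sketch; the sizes and random inputs are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n_x = 1000
w, x, b = rng.random(n_x), rng.random(n_x), 0.5

z_loop = 0.0                 # non-vectorized: explicit Python loop
for i in range(n_x):
    z_loop += w[i] * x[i]
z_loop += b

z_vec = np.dot(w, x) + b     # vectorized: one call, no Python loop
```

The vectorized form is both shorter and much faster, since the summation runs in optimized native code inside NumPy rather than in the Python interpreter.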
Using NumPy to avoid for loops over features: replace dw1, dw2, ... with one vector dw = np.zeros((n_x, 1)).
J = 0; dw = np.zeros((n_x, 1)); db = 0
For i = 1 to m:
  z^(i) = w^T x^(i) + b
  a^(i) = sigma(z^(i))
  J += -[y^(i) log a^(i) + (1 - y^(i)) log(1 - a^(i))]
  dz^(i) = a^(i) - y^(i)
  dw += x^(i) * dz^(i)
  db += dz^(i)
J /= m; dw /= m; db /= m   (using NumPy)
Vectorizing Logistic Regression
z^(i) = w^T x^(i) + b, a^(i) = sigma(z^(i))
X = [x^(1) x^(2) ... x^(m)] in R^{n_x x m}
Z = [z^(1) z^(2) ... z^(m)] = w^T X + [b b ... b] = [w^T x^(1) + b ... w^T x^(m) + b] in R^{1 x m}
Z = np.dot(w.T, X) + b   (broadcasting: the scalar b adjusts automatically to a 1 x m row)
A = [a^(1) a^(2) ... a^(m)] = sigma(Z); Y = [y^(1) y^(2) ... y^(m)]
dZ = A - Y
db = (1/m) * sum_{i=1}^{m} dz^(i) = (1/m) * np.sum(dZ)
dw = (1/m) * X * dZ^T, in R^{n_x x 1}
One iteration of gradient descent, vectorized without for loops:
Z = np.dot(w.T, X) + b
A = sigma(Z)
dZ = A - Y
dw = (1/m) * X * dZ^T
db = (1/m) * np.sum(dZ)
w := w - alpha * dw
b := b - alpha * db
For multiple iterations of gradient descent, an outer for loop over the iterations is still needed.
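The single-iteration recipe above, wrapped in the one remaining for loop over iterations; a minimal sketch (function name and hyperparameter defaults are mine):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, Y, alpha=0.5, iterations=1000):
    # X: (n_x, m), Y: (1, m); each pass is the vectorized step from the notes
    n_x, m = X.shape
    w, b = np.zeros((n_x, 1)), 0.0
    for _ in range(iterations):
        Z = np.dot(w.T, X) + b      # (1, m)
        A = sigmoid(Z)
        dZ = A - Y
        dw = np.dot(X, dZ.T) / m    # (n_x, 1)
        db = np.sum(dZ) / m
        w = w - alpha * dw
        b = b - alpha * db
    return w, b
```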
Explanation of Logistic Regression Cost Function
y_hat = sigma(w^T x + b), sigma(z) = 1 / (1 + e^{-z})
Interpretation: y_hat = P(y = 1 | x).
If y = 1: p(y | x) = y_hat
If y = 0: p(y | x) = 1 - y_hat
Both cases combine into one expression:
p(y | x) = y_hat^y * (1 - y_hat)^{1-y}
(check: y = 1 gives y_hat, y = 0 gives 1 - y_hat)
Take logs (log is monotonically increasing, so maximizing p is the same as maximizing log p):
log p(y | x) = y log y_hat + (1 - y) log(1 - y_hat) = -L(y_hat, y)
Maximizing the probability <=> minimizing the loss.
Loss on m examples:
p(labels in training set) = prod_{i=1}^{m} p(y^(i) | x^(i))   (examples assumed independent)
log p = sum_{i=1}^{m} log p(y^(i) | x^(i)) = -sum_{i=1}^{m} L(y_hat^(i), y^(i))
Maximum likelihood estimation -> minimize J(w, b) = (1/m) * sum_{i=1}^{m} L(y_hat^(i), y^(i)).
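The compact form p(y|x) = y_hat^y * (1 - y_hat)^(1-y) and its link to the loss can be checked numerically; a tiny sketch (y_hat = 0.8 is arbitrary):

```python
import numpy as np

y_hat = 0.8
# the single expression collapses to the two cases:
p_if_y1 = y_hat**1 * (1 - y_hat)**0    # y = 1 -> y_hat
p_if_y0 = y_hat**0 * (1 - y_hat)**1    # y = 0 -> 1 - y_hat
# and the loss is exactly the negative log-probability:
loss_y1 = -(1 * np.log(y_hat) + (1 - 1) * np.log(1 - y_hat))
```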
Numpy notes
np.exp(x): e^x, elementwise
x.shape: get the dimensions of the array
x.reshape(...): reshape x to the given dimensions
If x.shape == (a, b, c): x.shape[0] = a, x.shape[1] = b, x.shape[2] = c; flatten with x.reshape(a * b * c, 1).
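A quick sketch of the shape/reshape behavior described above (the example sizes are arbitrary):

```python
import numpy as np

x = np.arange(24).reshape(2, 3, 4)   # some (a, b, c) array
a, b, c = x.shape                    # x.shape returns the dimensions
flat = x.reshape(a * b * c, 1)       # flatten into one column vector
```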
PA (programming assignment)
Training set: m_train labeled images; test set: to be labeled.
Image shape: (num_px, num_px, 3)
Shapes of matrices:
initial training set x: (209, 64, 64, 3); training set y: (1, 209)
test set x: (50, 64, 64, 3); test set y: (1, 50)
flattened:
training set x: (64 * 64 * 3, 209); training set y: (1, 209)
test set x: (64 * 64 * 3, 50); test set y: (1, 50)
train_x, train_y: each of the 209 columns of train_x is one flattened image x^(i) of length 64 * 64 * 3 = 12288, with label y^(i) in train_y.
test_x, test_y: same layout with 50 columns.
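The flattening step from (m, 64, 64, 3) to (64*64*3, m) can be sketched as follows (zeros stand in for the assignment's actual image data):

```python
import numpy as np

train_x_orig = np.zeros((209, 64, 64, 3))   # placeholder "images"
# flatten each image into a row, then transpose so each becomes a column:
# (209, 64, 64, 3) -> (209, 12288) -> (12288, 209)
train_x = train_x_orig.reshape(train_x_orig.shape[0], -1).T
```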
Visualizing the PA
sigmoid(z), z = w^T x + b
propagate:
A = sigma(w^T X + b)
cost J = -(1/m) * sum_{i=1}^{m} [y^(i) log a^(i) + (1 - y^(i)) log(1 - a^(i))]
dw = dJ/dw
db = dJ/db
optimize:
loop over iterations: run propagate, then
w := w - alpha * dw
b := b - alpha * db
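The propagate/optimize structure above as a runnable sketch; the function names follow the notes' outline, but the actual assignment's signatures may differ:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def propagate(w, b, X, Y):
    # forward pass: activations and cost; backward pass: dw, db
    m = X.shape[1]
    A = sigmoid(np.dot(w.T, X) + b)
    cost = -np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) / m
    dw = np.dot(X, (A - Y).T) / m
    db = np.sum(A - Y) / m
    return cost, dw, db

def optimize(w, b, X, Y, num_iterations, learning_rate):
    # repeat propagate + gradient-descent parameter update
    for _ in range(num_iterations):
        cost, dw, db = propagate(w, b, X, Y)
        w = w - learning_rate * dw
        b = b - learning_rate * db
    return w, b, cost
```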