Introduction to Machine Learning with Python
2. Supervised Learning (3)
Hongdae Machine Learning Study, Epoch #2
Contacts
Haesun Park
Email: [email protected]
Meetup: https://www.meetup.com/Hongdae-Machine-Learning-Study/
Facebook: https://facebook.com/haesunrpark
Blog: https://tensorflow.blog
Book
파이썬 라이브러리를 활용한 머신러닝, translated by Haesun Park
(the Korean translation of Introduction to Machine Learning with Python
by Andreas Müller & Sarah Guido).
Chapters 1 and 2 of the translation can be read for free on the blog.
A preview of the original book is available online.
GitHub:
https://github.com/rickiepark/introduction_to_ml_with_python/
Kernel Support Vector Machines
Linear SVM
Minimize w to maximize the margin by reducing the slope of the decision plane.
Minimize the slack variables that allow margin violations.
At the same epsilon height, minimize w to maximize the margin.
Nonlinear Features
Kernel trick
It is unclear which features to add, and adding many features makes computation expensive.
A special function called a kernel computes the distance between data points, which gives
the same effect as mapping the features into a higher-dimensional space.
Polynomial kernel
For $x_a = (a_1, a_2)$ and $x_b = (b_1, b_2)$:
$(x_a \cdot x_b)^2 = (a_1 b_1 + a_2 b_2)^2 = a_1^2 b_1^2 + 2 a_1 b_1 a_2 b_2 + a_2^2 b_2^2 = (a_1^2, \sqrt{2}\,a_1 a_2, a_2^2) \cdot (b_1^2, \sqrt{2}\,b_1 b_2, b_2^2)$
RBF (radial basis function) kernel
$\exp(-\gamma \lVert x_a - x_b \rVert^2)$; Taylor series of $e^x$: $C \sum_{n=0}^{\infty} \frac{(x_a \cdot x_b)^n}{n!}$
The effect is a mapping into an infinite-dimensional feature space; the higher the order of a term, the less the feature matters.
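The polynomial-kernel identity above can be checked numerically; a small NumPy sketch with made-up 2-D points:

```python
import numpy as np

xa = np.array([1.0, 2.0])
xb = np.array([3.0, 0.5])

# Left side: kernel value computed directly in the original 2-D space.
kernel_value = xa.dot(xb) ** 2

# Right side: explicit mapping phi(x) = (x1^2, sqrt(2)*x1*x2, x2^2).
def phi(x):
    return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

mapped_value = phi(xa).dot(phi(xb))
print(kernel_value, mapped_value)  # both 16.0
```

The two values agree without ever computing the 3-D mapping on the left side; that is the point of the trick.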
Understanding SVMs
An SVM with the kernel trick applied is called a 'kernel SVM' or simply an 'SVM'.
Scikit-Learn provides the SVC and SVR classes.
The data points that lie on the class boundary are called support vectors.
The RBF kernel is also called the Gaussian kernel:
$\exp(-\gamma \lVert x_a - x_b \rVert^2)$, where $\lVert x_a - x_b \rVert$ is the Euclidean distance.
Kernel width (gamma): the kernel value ranges over $e^0 \sim e^{-\infty} = 1 \sim 0$.
The smaller $\gamma$ is, the larger each sample's range of influence.
When $\gamma = \frac{1}{2\sigma^2}$, $\sigma$ is called the width of the kernel.
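A minimal sketch of how gamma controls the reach of the RBF kernel (the points and gamma values are illustrative):

```python
import numpy as np

def rbf_kernel(xa, xb, gamma):
    # exp(-gamma * ||xa - xb||^2), as in the formula above
    return np.exp(-gamma * np.sum((xa - xb) ** 2))

xa = np.array([0.0, 0.0])
xb = np.array([2.0, 0.0])               # squared distance 4

near = rbf_kernel(xa, xb, gamma=0.1)    # small gamma: wide influence, value near 1
far = rbf_kernel(xa, xb, gamma=10.0)    # large gamma: narrow influence, value near 0
print(near, far)
```

For the same pair of points, small gamma keeps the kernel value high (the sample still "reaches" its neighbor) while large gamma drives it toward 0.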
SVC + forge dataset
Support vectors
Parameter tuning
small gamma: less complex
large gamma: more complex
small C: strong constraint, underfitting
large C: weak constraint, overfitting
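The effect of C and gamma can be seen by fitting SVC over a small grid; make_moons stands in for the book's plots, and the grid values are illustrative:

```python
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.25, random_state=0)

# Training accuracy rises with C and gamma: more complex boundaries fit
# the training data more closely (and eventually overfit).
for C in [0.1, 1, 1000]:
    for gamma in [0.1, 1, 10]:
        svc = SVC(kernel="rbf", C=C, gamma=gamma).fit(X, y)
        print(f"C={C:<6} gamma={gamma:<4} train acc={svc.score(X, y):.2f}")
```

Comparing against a held-out set (not shown) is what reveals where "more complex" turns into overfitting.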
SVC + cancer dataset
C=1, gamma=1/n_features
Overfitting
Feature scales of the cancer dataset
SVMs are highly sensitive to the ranges of the features.
Data preprocessing
MinMaxScaler: $\frac{X - \min(X)}{\max(X) - \min(X)}$, rescales each feature to the range 0 to 1.
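The formula can be checked against scikit-learn's implementation on made-up data:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

X = np.array([[1.0, -5.0],
              [3.0,  0.0],
              [5.0, 10.0]])

# Manual (X - min) / (max - min), computed per feature (column-wise).
manual = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
scaled = MinMaxScaler().fit_transform(X)
print(scaled)
```

Both results map each column's minimum to 0 and maximum to 1.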
SVC + preprocessed data
After preprocessing, the model actually underfits.
Relax the constraint (increase C).
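A sketch combining MinMaxScaler with a larger C, roughly following the book's experiment (the C values are illustrative):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    cancer.data, cancer.target, random_state=0)

scaler = MinMaxScaler().fit(X_train)        # fit on the training data only
X_train_s = scaler.transform(X_train)
X_test_s = scaler.transform(X_test)

# C=1 underfits slightly on scaled data; a larger C relaxes the constraint.
for C in [1, 1000]:
    svc = SVC(kernel="rbf", C=C, gamma="auto").fit(X_train_s, y_train)
    print(f"C={C}: train={svc.score(X_train_s, y_train):.3f} "
          f"test={svc.score(X_test_s, y_test):.3f}")
```

Note that the scaler is fit on the training split only and then applied to both splits, so no test-set information leaks into preprocessing.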
Strengths, weaknesses, and parameters
Strengths
Powerful and applicable to many kinds of datasets.
Builds complex decision boundaries even with few features (kernel trick).
Works well even with many features.
SVC/SVR (libsvm), LinearSVC/LinearSVR (liblinear)
Weaknesses
Training is slow and memory-hungry with many samples (>100,000).
Sensitive to data preprocessing and parameters (consider random forests or gradient boosting instead).
Hard to analyze and hard to explain to non-experts.
Parameters
C, gamma, coef0 (poly, sigmoid), degree (poly)
$\text{linear}: x_1 \cdot x_2,\quad \text{poly}: (\gamma\, x_1 \cdot x_2 + c)^d,\quad \text{sigmoid}: \tanh(\gamma\, x_1 \cdot x_2 + c)$
Neural Networks
Perceptron
Published by Frank Rosenblatt in 1957.
Neural networks are often called multilayer perceptrons.
Scikit-Learn has long provided the sklearn.linear_model.Perceptron class;
MLPClassifier and MLPRegressor were added in version 0.18.
Input layer / output layer
Generalizing the linear model
$\hat{y} = w[0] \times x[0] + w[1] \times x[1] + \cdots + w[p] \times x[p] + b$
Weight
Unit
Neural network diagrams often leave the bias out.
Multilayer perceptron
$h[0] = w[0,0] \times x[0] + w[1,0] \times x[1] + w[2,0] \times x[2] + w[3,0] \times x[3] + b[0]$
$h[1] = w[0,1] \times x[0] + w[1,1] \times x[1] + w[2,1] \times x[2] + w[3,1] \times x[3] + b[1]$
$h[2] = w[0,2] \times x[0] + w[1,2] \times x[1] + w[2,2] \times x[2] + w[3,2] \times x[3] + b[2]$
$\hat{y} = v[0] \times h[0] + v[1] \times h[1] + v[2] \times h[2] + b$
Naming
Fully Connected Neural Network
Dense Network
Multi-Layer Perceptron
Feed-Forward Neural Network
Deep Learning
The nodes are called neurons or units
(they have nothing to do with animal brains).
Multilayer perceptron
$h[0] = \tanh(w[0,0] \times x[0] + w[1,0] \times x[1] + w[2,0] \times x[2] + w[3,0] \times x[3] + b[0])$
$h[1] = \tanh(w[0,1] \times x[0] + w[1,1] \times x[1] + w[2,1] \times x[2] + w[3,1] \times x[3] + b[1])$
$h[2] = \tanh(w[0,2] \times x[0] + w[1,2] \times x[1] + w[2,2] \times x[2] + w[3,2] \times x[3] + b[2])$
$\hat{y} = v[0] \times h[0] + v[1] \times h[1] + v[2] \times h[2] + b$
tanh is a nonlinear activation function.
For binary classification the output uses a sigmoid; for multiclass classification, a softmax.
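The four equations can be sketched directly in NumPy; the weights here are random placeholders, not trained values:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)        # one sample with 4 features, as in the diagram
W = rng.normal(size=(4, 3))   # w[i, j]: input i -> hidden unit j
b = rng.normal(size=3)        # hidden biases b[j]
v = rng.normal(size=3)        # v[j]: hidden unit j -> output
b_out = rng.normal()          # output bias

h = np.tanh(x @ W + b)        # h[j] = tanh(sum_i w[i, j] * x[i] + b[j])
y_hat = h @ v + b_out         # y = sum_j v[j] * h[j] + b
print(h, y_hat)
```

The matrix product computes all three hidden-unit sums at once; tanh then squashes each into (-1, 1).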
Cost function
As in logistic regression, the logistic cost function (binary classification) or the
cross-entropy cost function (multiclass classification) is used:
$-\sum_{i=1}^{n} y \log \hat{y}$
Because the model function is nonlinear, the optimum cannot be computed analytically,
so gradient descent-style algorithms are commonly used.
For each weight w, differentiate the cost function and move a small step (learning_rate)
down the slope.
Instead of differentiating the cost function with respect to w directly, the chain rule
is used to accumulate the gradients by multiplication (backpropagation):
$\frac{d\hat{y}}{dw} = \frac{d\hat{y}}{du} \cdot \frac{du}{dw}$
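A minimal numeric check of the chain rule on a single sigmoid unit (the inputs and weights are made up):

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

x, w, b = 1.5, 0.8, -0.2
u = w * x + b                  # pre-activation u
y = sigmoid(u)                 # prediction y_hat

# Chain rule: dy/dw = (dy/du) * (du/dw) = sigmoid'(u) * x
analytic = y * (1.0 - y) * x

# Finite-difference check of the same derivative
eps = 1e-6
numeric = (sigmoid((w + eps) * x + b) - sigmoid((w - eps) * x + b)) / (2 * eps)
print(analytic, numeric)       # the two values agree
```

Backpropagation applies this same factor-by-factor multiplication layer by layer, reusing each intermediate gradient instead of differentiating the whole network from scratch.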
Activation functions
Inject nonlinearity into the weighted sum of a hidden unit.
ReLU, hyperbolic tangent (tanh), sigmoid.
Sigmoid and tanh: the gradient approaches 0 for large inputs, and by the chain rule
$\frac{dy}{dw} = \frac{dy}{du} \cdot \frac{du}{dw}$ the small factors multiply up.
ReLU: for x > 0 the gradient is 1; at x = 0 the (sub)gradient is 0; for x < 0 the
gradient is 0 (many variants have appeared).
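The ReLU gradients listed above can be sketched in a few lines (relu and relu_grad are hypothetical helper names):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def relu_grad(x):
    # gradient 1 for x > 0, 0 for x < 0, (sub)gradient 0 at x == 0
    return (x > 0).astype(float)

xs = np.array([-2.0, 0.0, 3.0])
print(relu(xs))                        # [0. 0. 3.]
print(relu_grad(xs))                   # [0. 0. 1.]

# tanh gradient 1 - tanh(x)^2 shrinks toward 0 for large |x|
print(1.0 - np.tanh(np.array([0.0, 5.0])) ** 2)
```

The last line shows why deep tanh networks suffer from vanishing gradients: at x = 5 the gradient is already tiny.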
MLPClassifier + two_moons
hidden_layer_sizes=[100],
activation='relu'
solver='lbfgs': limited-memory BFGS, a quasi-Newton method.
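A sketch of the setup on this slide, assuming solver='lbfgs' as in the book's examples (the split seeds are illustrative):

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=100, noise=0.25, random_state=3)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=42)

# One hidden layer of 100 units with ReLU activation, trained with L-BFGS.
mlp = MLPClassifier(solver="lbfgs", activation="relu",
                    hidden_layer_sizes=[100], random_state=0)
mlp.fit(X_train, y_train)
print("test accuracy:", mlp.score(X_test, y_test))
```

mlp.coefs_ holds two weight matrices (input-to-hidden and hidden-to-output), matching the one-hidden-layer architecture.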
Number of hidden units = 10
Adding a layer; tanh activation function
L2 regularization parameter alpha
Default value
Low regularization / high regularization
Complexity of neural networks
The more hidden layers,
the more units per hidden layer,
and the lower the regularization, the higher the complexity.
Dropout, a leading way to prevent overfitting in neural networks, randomly deactivates
some of the hidden units for each training sample, which works like ensembling many
different networks; it is expected to be added to scikit-learn.
Random initialization
The weights are initialized randomly when a model is trained.
This can affect the result when the model's size and complexity are low.
Weight initialization
Scikit-Learn initializes both the weights and the biases with the Glorot scheme:
sigmoid: uniform distribution over $\pm\sqrt{\frac{2}{n_{inputs}+n_{outputs}}}$
tanh, relu: uniform distribution over $\pm\sqrt{\frac{6}{n_{inputs}+n_{outputs}}}$
* Xavier initialization:
sigmoid: $\pm\sqrt{\frac{6}{n_{inputs}+n_{outputs}}}$, tanh: $\pm 4\sqrt{\frac{6}{n_{inputs}+n_{outputs}}}$, relu: $\pm\sqrt{2}\sqrt{\frac{6}{n_{inputs}+n_{outputs}}}$
MLPClassifier + cancer dataset
Neural networks are also sensitive to the scale of the data.
StandardScaler: standard normal distribution with mean 0 and variance 1
(standard score, or z-score).
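A sketch of standardizing the cancer data before training MLPClassifier; max_iter=1000 is an illustrative value that avoids hitting the default iteration limit mentioned on the next slide:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    cancer.data, cancer.target, random_state=0)

# z-score each feature: subtract the mean, divide by the standard deviation.
scaler = StandardScaler().fit(X_train)
mlp = MLPClassifier(random_state=0, max_iter=1000).fit(
    scaler.transform(X_train), y_train)
print("test accuracy:", mlp.score(scaler.transform(X_test), y_test))
```

As with the SVM, the scaler is fit on the training split only and reused for the test split.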
MLPClassifier + adam
Maximum number of iterations reached (default 200).
solver='adam' (Adaptive Moment Estimation)
Inspecting the weights of a neural network
Weights between the input and the hidden layer; brighter means a larger value.
Features with small weights can be judged less important.
The weights between the hidden and the output layers are harder to interpret.
DL framework landscape
[Chart: deep learning framework landscape (KDnuggets, Mar. 2017) — Python frameworks
including PyTorch, Caffe2 (Python), and Lasagne; some libraries are wrappers.]
Strengths, weaknesses, and parameters
Strengths
Given enough time and data, very complex models can be built.
Often outperforms other algorithms (speech recognition, image classification, translation, etc.).
Weaknesses
Sensitive to data preprocessing (see batch normalization).
Tree-based models may work better on heterogeneous data types.
Parameter tuning is very difficult (reusing trained models helps).
Scikit-Learn does not provide convolutional or recurrent neural networks.
Parameters
solver=['adam', 'sgd', 'lbfgs'];
with sgd: momentum + nesterovs_momentum
alpha (L2 regularization): $\alpha \uparrow \rightarrow \text{regularization} \uparrow,\ w \downarrow$; $\alpha \downarrow \rightarrow \text{regularization} \downarrow,\ w \uparrow$
Complexity of neural networks and recommended settings
100 features, 100 hidden units, and 1 output unit = 10,100 weights.
Adding another hidden layer of 100 units adds 10,000 weights.
With hidden layers of 1,000 units: 101,000 + 1,000,000 = 1,101,000 weights.
The default solver is 'adam', which is somewhat sensitive to data scale. ("The Marginal
Value of Adaptive Gradient Methods in Machine Learning," A. C. Wilson et al. (2017),
reports skeptical results for Adam, RMSProp, etc., so momentum methods should also be
considered.)
'lbfgs' is stable but takes a long time on large models or large datasets.
'sgd' is used together with the momentum and nesterovs_momentum options.
Momentum
The momentum algorithm treats the previous gradients like an acceleration, making the
parameters converge faster toward the global optimum.
The momentum parameter acts as a kind of damper.
$v_{t+1} = \mu v_t - \epsilon\, g(\theta_t)$
$\theta_{t+1} = \theta_t + v_{t+1}$
($\epsilon$: learning_rate, $\mu$: momentum)
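The update rule can be sketched on a toy quadratic $f(\theta) = \theta^2$; the step sizes are illustrative:

```python
# Gradient of the toy cost f(theta) = theta**2
def grad(theta):
    return 2.0 * theta

theta, v = 5.0, 0.0
mu, eps = 0.9, 0.1                     # momentum and learning_rate
for _ in range(200):
    v = mu * v - eps * grad(theta)     # v_{t+1} = mu*v_t - eps*g(theta_t)
    theta = theta + v                  # theta_{t+1} = theta_t + v_{t+1}
print(theta)                           # close to the optimum at 0
```

The velocity v accumulates past gradients, so the iterate overshoots and oscillates a little before settling at the minimum — the "damper" behavior described above.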
Nesterov momentum
Nesterov momentum computes the gradient after taking the momentum step and applies it
at the current position.
$v_{t+1} = \mu v_t - \epsilon\, g(\theta_t + \mu v_t)$
$\theta_{t+1} = \theta_t + v_{t+1}$
In practice, implementations approximate Nesterov momentum by applying the momentum
step twice instead of computing the look-ahead gradient
(see https://tensorflow.blog/2017/03/22/momentum-nesterov-momentum/).
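The same toy quadratic $f(\theta) = \theta^2$ with the look-ahead gradient of Nesterov momentum (again with illustrative step sizes):

```python
def grad(theta):
    return 2.0 * theta                 # gradient of f(theta) = theta**2

theta, v = 5.0, 0.0
mu, eps = 0.9, 0.1                     # momentum and learning_rate
for _ in range(200):
    # Evaluate the gradient at the look-ahead point theta + mu*v.
    v = mu * v - eps * grad(theta + mu * v)
    theta = theta + v
print(theta)                           # close to the optimum at 0
```

Evaluating the gradient where the momentum step would land lets the update correct itself one step earlier, which typically damps the oscillation of plain momentum.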
Uncertainty Estimates for Classification
Uncertainty in classification
For a test point, it can matter not only which class is predicted but also how
confident the model is that the point belongs to that class.
Most classifiers provide at least one of the decision_function and predict_proba methods.
Decision function
For binary classification, decision_function() returns an array of shape (n_samples,).
$\hat{y} = w[0] \times x[0] + w[1] \times x[1] + \cdots + w[p] \times x[p] + b$
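A sketch of the binary case with LogisticRegression on made-up blobs; any classifier exposing decision_function behaves the same way:

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

X, y = make_blobs(n_samples=100, centers=2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

scores = clf.decision_function(X)
print(scores.shape)        # (100,): one score per sample in the binary case
# A positive score means the positive class (clf.classes_[1]) is predicted.
print(scores[:3])
```

The sign of the score encodes the predicted class, and its magnitude is a (uncalibrated) measure of confidence.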
Decision boundary + decision_function
Predicted probabilities
predict_proba() returns an array of shape (n_samples, n_classes).
The class with the largest probability is chosen as the prediction.
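A sketch of predict_proba on a made-up 3-class problem:

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

X, y = make_blobs(n_samples=100, centers=3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

proba = clf.predict_proba(X)
print(proba.shape)          # (100, 3): one column per class
print(proba[0].sum())       # each row sums to 1
```

The argmax over each row of proba is exactly what predict() returns.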
Decision boundary + predict_proba
Multiclass classification + decision_function
The returned array has shape (n_samples, n_classes).
The class with the largest value becomes the prediction.
Multiclass classification + predict_proba
The returned array has shape (n_samples, n_classes).
Summary and Outlook
Algorithm summary
Nearest neighbors
Good for small datasets; a good baseline model; easy to explain.
Linear models
The first algorithm to try. Works for very large datasets and for high-dimensional data.
Naive Bayes
Classification only. Much faster than linear models. Works for large datasets and
high-dimensional data. Less accurate than linear models.
Decision trees
Very fast. No data scaling needed. Easy to visualize and explain.
Random forests
Almost always better than a single decision tree. Very robust and powerful.
No data scaling needed. Not well suited to high-dimensional sparse data.
Gradient boosting
Slightly better performance than random forests. Slower to train than random forests,
but faster to predict and uses less memory. Needs more parameter tuning than random forests.
Support vector machines
Works well on medium-sized datasets whose features have similar meanings.
Requires data scaling. Sensitive to parameters.
Neural networks
Can build very complex models, particularly for large datasets.
Sensitive to parameter choices and data scaling. Large models take a long time to train.