Introduction to
Machine Learning with Python
2. Supervised Learning(1)
Honedae Machine Learning Study Epoch #2
Contacts
Haesun Park
Email : [email protected]
Meetup: https://www.meetup.com/Hongdae-Machine-Learning-Study/
Facebook : https://facebook.com/haesunrpark
Blog : https://tensorflow.blog
Book
파이썬 라이브러리를 활용한 머신러닝, translated by Haesun Park.
(The Korean translation of Introduction to Machine Learning with Python
by Andreas Müller & Sarah Guido.)
Chapters 1 and 2 of the translation can be read for free on the blog.
A preview of the original book is available online.
Github:
https://github.com/rickiepark/introduction_to_ml_with_python/
Supervised Learning
Supervised learning
Given sample pairs of inputs and outputs, predict the output for new inputs.
Training (learning)
Prediction (inference)
Building the dataset beforehand takes effort (automation is needed).
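The train-then-predict workflow above maps directly onto scikit-learn's estimator API (fit/predict), which the rest of these slides use. A minimal sketch on synthetic data (the dataset and classifier here are illustrative stand-ins, not the book's examples):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# sample input/output pairs: X holds the inputs, y the known outputs
X, y = make_classification(n_samples=100, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)          # training (learning)
predictions = model.predict(X_test)  # prediction (inference) on new inputs
```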
Classification and regression
Classification predicts one label out of a set of possible classes.
Binary classification example: two classes (spam; 0 = negative class, 1 = positive class)
Multiclass classification example: three or more classes (iris species)
Classifying the language of a text (there is no language in between Korean and French)
Most machine learning problems are classification problems.
Regression predicts a continuous number (a real value).
Examples: predicting annual income from education, age, and place of residence;
predicting this year's harvest from last year's yield, the weather, and the number of workers.
Small differences in the predicted value are not critical (for income prediction,
40,000,001 won vs. 39,999,999 won makes no real difference).
Regression has been studied extensively in statistics.
Regression
“regression toward the mean”
Francis Galton
Generalization, Overfitting, and Underfitting
Generalization
Applying a model learned on the training set to the test set.
(e.g., "the model generalizes well to the test set")
Overfitting
The model is fit too closely to the training set, so performance on the test set suffers.
(e.g., "the model is overfit to the training set")
Underfitting
The model fails to capture the training set, so performance suffers on both the
training set and the test set. (e.g., "the model is too simple and underfits")
An overly detailed model: overfitting
"Customers older than 45 with fewer than three children who are not divorced will buy a boat."
We want to use a yacht company's customer data to predict who will buy a boat.
What if all 10,000 records satisfy exactly this condition?
An overly simple model: underfitting
"Every customer who owns a house will buy a boat."
Model complexity curve
Variance: sensitivity to the data. Bias: the algorithm's own error (not the b in y = ax + b).
This is also called the bias-variance tradeoff.
Relationship between dataset size and complexity
With more data there is more variety, so more complex models become possible.
If, out of 10,000 customer records, customers older than 45 with fewer than three
children who are not divorced want to buy a boat, that model is far more trustworthy than before.
[Figure: training and test scores as model complexity increases from less complex to more complex]
Datasets
The forge dataset
A synthetic binary classification dataset with 26 data points and 2 features
Datasets
The wave dataset
A synthetic regression dataset with 40 data points and 1 feature
Datasets
The Wisconsin breast cancer dataset
Binary classification of malignant (0) vs. benign (1) tumors, load_breast_cancer(),
569 data points, 30 features
The Boston housing dataset
Predicting the median home value around Boston in the 1970s, a regression dataset,
load_boston(), 506 data points, 13 features
The extended Boston housing dataset
New features created by multiplying features together (feature engineering, chapter 4,
PolynomialFeatures), mglearn.datasets.load_extended_boston(), 506 data points, 104 features
The formula for combinations with repetition is $\binom{n+k-1}{k}$, so
$\binom{13+2-1}{2} = \binom{14}{2} = \frac{14!}{2!\,12!} = 91$.
Note: positive (양성) and negative (음성) here are classification terms, distinct from benign/malignant.
Squares of the features are created as well.
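The feature count can be checked with PolynomialFeatures: the 91 degree-2 product terms plus the 13 original features give 104. A sketch on random data (the actual extended dataset comes from mglearn.datasets.load_extended_boston()):

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

# 5 random samples with 13 features, like the Boston dataset
X = np.random.RandomState(0).rand(5, 13)

# degree-2 expansion without the bias column:
# 13 original features + 91 products/squares = 104 features
poly = PolynomialFeatures(degree=2, include_bias=False)
X_poly = poly.fit_transform(X)
print(X_poly.shape)  # (5, 104)
```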
load_breast_cancer()
[Code screenshot: the (samples, features) shape, the keys of the Bunch object, and the target values]
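The screenshot's output can be reproduced directly:

```python
from sklearn.datasets import load_breast_cancer

cancer = load_breast_cancer()
print(cancer.keys())        # keys of the Bunch object: data, target, target_names, ...
print(cancer.data.shape)    # (569, 30) -> (samples, features)
print(cancer.target_names)  # ['malignant' 'benign']
print(cancer.target[:5])    # the target: 0 = malignant, 1 = benign
```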
load_boston()
Originally 13 features
The extended dataset with 104 features was created for this book's examples
k-Nearest Neighbors Classification
k-Nearest Neighbors Classification
Predicts the majority class (majority voting) among the neighbors closest to the new data point.
Applying 1-nearest and 3-nearest neighbors to the forge dataset looks like this:
[Figure annotation: the prediction changes]
The k-NN classifier
[Code screenshot: split into training and test sets, create the model object, train the model, evaluate the model]
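The four annotated steps (split, create, train, evaluate) can be written out as runnable code; make_blobs stands in here for the mglearn forge dataset, which may not be installed:

```python
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# a small two-class dataset standing in for forge
X, y = make_blobs(n_samples=26, centers=2, random_state=4)

# split into a training set and a test set
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = KNeighborsClassifier(n_neighbors=3)  # create the model object
clf.fit(X_train, y_train)                  # train the model
print(clf.score(X_test, y_test))           # evaluate the model (accuracy)
```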
Analyzing KNeighborsClassifier
[Figure: decision boundaries for increasing n_neighbors; fewer neighbors: more complex (overfitting), more neighbors: less complex (underfitting)]
The decision boundary is drawn from the predictions for points generated on a regular grid.
Model complexity of the k-NN classifier
[Figure: training and test accuracy vs. n_neighbors; more complex (overfitting) on the left, less complex (underfitting) on the right, with a sweet spot in between]
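The complexity curve can be reproduced by sweeping n_neighbors on the breast cancer dataset and recording training and test accuracy, as in the book's experiment (a sketch; the split parameters are the book's):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    cancer.data, cancer.target, stratify=cancer.target, random_state=66)

training_accuracy, test_accuracy = [], []
for n_neighbors in range(1, 11):
    clf = KNeighborsClassifier(n_neighbors=n_neighbors).fit(X_train, y_train)
    training_accuracy.append(clf.score(X_train, y_train))  # falls as k grows
    test_accuracy.append(clf.score(X_test, y_test))

# n_neighbors=1 memorizes the training set perfectly (the most complex model)
print(training_accuracy[0])  # 1.0
```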
k-Nearest Neighbors Regression
k-Nearest Neighbors Regression
The prediction for a new data point is the mean of the outputs of its nearest neighbors.
Applying 1-nearest and 3-nearest neighbors to the (one-dimensional) wave dataset looks like this:
[Figure annotations: test data, predictions]
The k-NN regressor
[Code screenshot: split into training and test sets, create the model object, train the model, evaluate the model]
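The same four steps for regression, with a synthetic one-feature dataset standing in for wave (the data here is illustrative, not the book's exact numbers):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor

# a one-feature regression dataset standing in for wave
rng = np.random.RandomState(42)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(4 * X[:, 0]) + X[:, 0] + rng.normal(size=40) * 0.5

# split into a training set and a test set
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

reg = KNeighborsRegressor(n_neighbors=3)  # create the model object
reg.fit(X_train, y_train)                 # train the model
print(reg.score(X_test, y_test))          # evaluate: the R^2 score
```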
Evaluating regression models
The score() method of a regression model returns the coefficient of determination $R^2$:
$$R^2 = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2}$$
where $y$ is the target, $\bar{y}$ is the mean of the targets, and $\hat{y}$ is the model's prediction.
Perfect prediction: target == prediction, so the numerator is 0 and $R^2 = 1$.
Predicting roughly the mean of the targets: numerator ≈ denominator, so $R^2 \approx 0$.
$R^2$ can be negative when the predictions are worse than the mean.
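The definition can be checked against scikit-learn's r2_score on small hand-picked numbers:

```python
import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])

# R^2 = 1 - sum((y - y_hat)^2) / sum((y - y_mean)^2)
ss_res = np.sum((y_true - y_pred) ** 2)             # 0.10
ss_tot = np.sum((y_true - y_true.mean()) ** 2)      # 5.00
manual = 1 - ss_res / ss_tot                        # 0.98

assert np.isclose(manual, r2_score(y_true, y_pred))

# predicting the mean of the targets gives R^2 = 0
print(r2_score(y_true, np.full(4, y_true.mean())))  # 0.0
```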
Analyzing KNeighborsRegressor
[Figure: predictions for increasing n_neighbors; fewer neighbors: more complex (overfitting), more neighbors: less complex (underfitting)]
Strengths, weaknesses, and parameters
Key parameter: how the distance between points is measured. With the metric parameter
at its default 'minkowski' and the p parameter at its default 2, this is the Euclidean
distance $\sqrt{\sum_{i=1}^{m} \bigl(x_1^{(i)} - x_2^{(i)}\bigr)^2}$.
Number of neighbors (n_neighbors): 3 or 5 (the default) is common.
Strengths: easy to understand, works well without much tuning, a good first model to try,
relatively fast to build.
Weaknesses: prediction is slow when the number of features or samples is large;
data preprocessing matters (normalization is needed so small-scale features keep their influence);
does not work well on datasets with many features (hundreds or more);
does not work well on sparse datasets.
Linear Models: Regression
Linear models for regression
Predictions are made with a linear function:
$$\hat{y} = w_0 x_0 + w_1 x_1 + \dots + w_p x_p + b$$
Features: $x_0 \dots x_p$; number of features: $p + 1$
Model parameters: $w_0 \dots w_p$ and $b$ (learned from data, as opposed to parameters
the user specifies)
$w$: the weights or coefficients (also written $\theta$ or $\beta$), e.g. model.coef_
$b$: the intercept or bias, e.g. model.intercept_
Hyperparameters: not learned, set directly by the user,
e.g. n_neighbors of KNeighborsRegressor
Comparison on the wave dataset (1 feature)
[Figure panels: linear model vs. k-nearest neighbors]
$\hat{y} = w_0 x_0 + b$
It looks simpler than k-NN, but with many features a linear model can actually overfit more easily.
With two features a linear model is a plane; with three or more it is a hyperplane.
Ordinary least squares (OLS)
Minimizes the mean squared error $\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\bigl(y_i - \hat{y}_i\bigr)^2$.
LinearRegression finds w and b using the normal equation $\beta = (X^T X)^{-1} X^T y$.
[Code screenshot: split into training and test sets, create & fit the model, evaluate;
underfitting (as expected for a one-dimensional dataset); one weight per feature]
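The annotated steps can be sketched as follows, again with a synthetic one-feature stand-in for wave (illustrative data, not the book's numbers):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# one-feature dataset standing in for wave
rng = np.random.RandomState(42)
X = rng.uniform(-3, 3, size=(60, 1))
y = 0.5 * X[:, 0] + rng.normal(size=60) * 0.3

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

lr = LinearRegression().fit(X_train, y_train)  # create & fit the model
print(lr.coef_)                  # one weight per feature (here: just one)
print(lr.intercept_)             # the learned intercept b
print(lr.score(X_test, y_test))  # R^2 on the test set
```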
Ordinary least squares (OLS)
The Boston housing dataset: 506 samples, 104 features
The gap between the training-set and test-set $R^2$ scores is large: with this many
features (104 dimensions) the weights are expressive enough to overfit.
Trained with the same default parameters.
Ridge
A linear model (minimizing the MSE) plus weight shrinkage (keep the weights as close to 0 as possible).
L2 regularization: the squared L2 norm $\|w\|_2^2 = \sum_{j=1}^{m} w_j^2$
Cost function: $\mathrm{MSE} + \alpha \sum_{j=1}^{m} w_j^2$
(also called the loss function or objective function)
A large $\alpha$ means a large penalty, forcing the $w_j$ to shrink (prevents overfitting).
The $w_j$ get close to 0 but never become exactly 0.
[Figure: the optimum balances the fit against the penalty; $\alpha\uparrow$ vs. $\alpha\downarrow$]
The Ridge class
[Code screenshot: with the weights regularized, overfitting decreases and the test-set
score goes up (default alpha=1.0); a much larger alpha constrains too much and underfits;
alpha=0.00001 behaves almost like ordinary least squares]
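The effect of alpha is easy to observe: larger alpha shrinks the L2 norm of the weights toward 0. A sketch on synthetic many-feature data (the book's numbers come from the extended Boston dataset instead):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.RandomState(0)
X = rng.randn(50, 30)                       # many features for only 50 samples
y = X[:, 0] - 2 * X[:, 1] + rng.randn(50)   # only two features truly matter

norms = {}
for alpha in [0.00001, 1.0, 10.0]:
    ridge = Ridge(alpha=alpha).fit(X, y)
    norms[alpha] = np.sqrt(np.sum(ridge.coef_ ** 2))  # L2 norm of the weights

# a larger penalty gives smaller weights
print(norms[0.00001] > norms[1.0] > norms[10.0])  # True
```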
Ridge.coef_
How the coef_ values change with alpha
Regularization and the amount of training data
Learning curves on the Boston housing dataset: LinearRegression vs. Ridge(alpha=1)
The training-set score is lower for ridge,
but the test-set score is higher for ridge.
With little data, LinearRegression learns nothing useful ($R^2 < 0$)
(roughly 4-10 samples per weight).
With more data the effect of regularization fades (more complex models become possible),
and overfitting shrinks as the dataset grows.
Lasso
A linear model (minimizing the MSE) plus weight shrinkage (toward exactly 0 where possible).
L1 regularization: the L1 norm $\|w\|_1 = \sum_{j=1}^{m} |w_j|$
Cost function: $\mathrm{MSE} + \alpha \sum_{j=1}^{m} |w_j|$
A large $\alpha$ means a large penalty, forcing the $w_j$ even smaller.
The $w_j$ can become exactly 0 (a feature-selection effect).
When some coefficients are 0, the model is easier to understand
and the important features stand out.
[Figure: the optimum balances the fit against the penalty; $\alpha\uparrow$ vs. $\alpha\downarrow$]
Lasso
[Code screenshot: with alpha=1.0, max_iter=1000 the model underfits (too much
regularization) and only 4 features are used; lowering the regularization too far gives
results similar to LinearRegression (ordinary least squares); Lasso is fit with
coordinate descent; with a moderate alpha the result is similar to ridge]
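The feature-selection effect can be demonstrated on synthetic data (illustrative; the book uses the extended Boston dataset): a strong L1 penalty zeroes out most coefficients, a weak one keeps more of them.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.RandomState(0)
X = rng.randn(100, 30)
y = X[:, 0] - 2 * X[:, 1] + 0.1 * rng.randn(100)  # only 2 features matter

strong = Lasso(alpha=0.5, max_iter=1000).fit(X, y)    # heavy regularization
weak = Lasso(alpha=0.01, max_iter=100000).fit(X, y)   # light regularization

# L1 regularization drives coefficients exactly to 0 (feature selection)
print(np.sum(strong.coef_ != 0))  # few features survive
print(np.sum(weak.coef_ != 0))    # more features survive
```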
Lasso.coef_
How the coef_ values change with alpha
Ridge vs. Lasso
Ridge is usually preferred over lasso.
The L2 penalty is preferred over L1 (stronger convergence with SGD).
Use lasso when you expect only a few of many features to matter,
or when you want a model that is easy to analyze and understand.
ElasticNet
Combines the ridge and lasso penalties (cf. glmnet in R).
The alpha and l1_ratio parameters control the amounts of L1 and L2 regularization:
$$\mathrm{MSE} + \alpha \cdot l1\_ratio \sum_{j=1}^{m} |w_j| + \frac{1}{2}\,\alpha\,(1 - l1\_ratio) \sum_{j=1}^{m} w_j^2$$
With $l1 = \alpha \cdot l1\_ratio$ and $l2 = \alpha\,(1 - l1\_ratio)$, equivalently
$\alpha = l1 + l2$ and $l1\_ratio = \frac{l1}{l1 + l2}$: choose $\alpha$ and $l1\_ratio$
to match the desired $l1$ and $l2$.
In fact, Lasso is identical to ElasticNet(l1_ratio=1.0).
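The last claim can be verified directly: with l1_ratio=1.0 the L2 term vanishes and ElasticNet solves exactly the Lasso objective (synthetic data, chosen only for illustration):

```python
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso

rng = np.random.RandomState(0)
X = rng.randn(50, 10)
y = X[:, 0] + rng.randn(50) * 0.1

# ElasticNet with l1_ratio=1.0 is pure L1 regularization, i.e. the Lasso
enet = ElasticNet(alpha=0.1, l1_ratio=1.0, max_iter=100000).fit(X, y)
lasso = Lasso(alpha=0.1, max_iter=100000).fit(X, y)

print(np.allclose(enet.coef_, lasso.coef_))  # True
```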