Overview
"Just as traveling far must begin from what is near, so climbing high must begin from what is low."
The code in this article was run in a Jupyter notebook.
Strictly speaking, gradient descent is not itself a machine learning algorithm, but it is used throughout machine learning to solve problems, and mathematically it gives us a general way to attack an otherwise complex optimization problem.
Recall the loss function of the linear regression model from the previous article: we can minimize it with gradient descent. Just as we would pick the most direct route when climbing a mountain, the "gradient" is simply the steepest direction on the "mountain".
Each step moves the parameter in the direction of the negative gradient: θ := θ − η·dJ/dθ. The hyperparameter η is called the learning rate; its value determines how quickly we approach the optimum, and an unsuitable value may prevent us from reaching it at all.
If η is too small, convergence is slow, as shown in the figure.
If η is too large, the iteration may fail to converge at all, as shown in the figure.
However, not every function has a unique minimum, as illustrated below. In that case we can run the algorithm several times from randomized starting points, so the initial point of gradient descent is itself another hyperparameter.
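As a small illustration of this restart idea (this sketch is not from the original notebook), assuming a one-dimensional loss J and its derivative dJ like the ones defined in section 01 below, one could run gradient descent from several random starting points and keep the best result:

import numpy as np

def gd_from(J, dJ, initial_theta, eta=0.01, n_iters=10000, epsilon=1e-8):
    # plain gradient descent from a single starting point
    theta = initial_theta
    for _ in range(int(n_iters)):
        last_theta = theta
        theta = theta - eta * dJ(theta)
        if abs(J(theta) - J(last_theta)) < epsilon:
            break
    return theta

def gd_with_restarts(J, dJ, n_restarts=10, low=-1.0, high=6.0):
    # try several random initial points and keep the theta with the smallest loss
    best_theta = None
    for _ in range(n_restarts):
        theta0 = np.random.uniform(low, high)
        theta = gd_from(J, dJ, theta0)
        if best_theta is None or J(theta) < J(best_theta):
            best_theta = theta
    return best_theta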
01 Simulating Gradient Descent
import numpy as np
import matplotlib.pyplot as plt
plot_x = np.linspace(-1, 6, 141)  # 141 points from -1 to 6 inclusive (140 equal intervals)
plot_x
"""
Out[2]:
array([-1. , -0.95, -0.9 , -0.85, -0.8 , -0.75, -0.7 , -0.65, -0.6 , -0.55, -0.5 ,
......
5.65, 5.7 , 5.75, 5.8 , 5.85, 5.9 , 5.95, 6. ])
"""
plot_y = (plot_x-2.5)**2-1  # a quadratic curve
plt.plot(plot_x, plot_y)
plt.show()
def dJ(theta):  # derivative of J
    return 2*(theta-2.5)

def J(theta):
    return (theta-2.5)**2-1

eta = 0.1        # learning rate
epsilon = 1e-8   # stop when the change in J between two steps is smaller than this
theta = 0.0
while True:
    gradient = dJ(theta)  # current gradient
    last_theta = theta
    theta = theta - eta * gradient
    if abs(J(theta) - J(last_theta)) < epsilon:
        break

print(theta)
print(J(theta))
# 2.499891109642585
# -0.99999998814289
eta = 0.1  # learning rate
theta = 0.0
theta_history = [theta]

while True:
    gradient = dJ(theta)
    last_theta = theta
    theta = theta - eta * gradient
    theta_history.append(theta)
    if abs(J(theta) - J(last_theta)) < epsilon:
        break

plt.plot(plot_x, J(plot_x), color='b')
plt.plot(np.array(theta_history), J(np.array(theta_history)), color='r', marker='+')
plt.show()
len(theta_history)
# Out[11]:
# 46
# wrap the procedure in a function
def gradient_descent(initial_theta, eta, epsilon=1e-8):
    theta = initial_theta
    theta_history.append(initial_theta)
    while True:
        gradient = dJ(theta)
        last_theta = theta
        theta = theta - eta * gradient
        theta_history.append(theta)
        if abs(J(theta) - J(last_theta)) < epsilon:
            break

def plot_theta_history():
    plt.plot(plot_x, J(plot_x))
    plt.plot(np.array(theta_history), J(np.array(theta_history)), color='r', marker="+")
    plt.show()
eta = 0.01
theta_history = []
gradient_descent(0., eta)
plot_theta_history()
len(theta_history)
# Out[16]:
# 424
eta = 0.001  # too small
theta_history = []
gradient_descent(0., eta)
plot_theta_history()
eta = 0.8  # fairly large
theta_history = []
gradient_descent(0., eta)
plot_theta_history()
eta = 1.1  # too large: the iteration diverges and no minimum is found
theta_history = []
gradient_descent(0., eta)
plot_theta_history()
"""
---------------------------------------------------------------------------
OverflowError Traceback (most recent call last)
<ipython-input-19-a0a1b21268a2> in <module>()
? 1 eta = 1.1
? 2 theta_history = []
? ----> 3 gradient_descent(0., eta)
? 4 plot_theta_history()
<ipython-input-14-d4bbfa921317> in gradient_descent(initial_theta, eta, epsilon)
? 9 theta_history.append(theta)
? 10
---> 11 if(abs(J(theta) - J(last_theta)) < epsilon):
? 12 break
? 13
<ipython-input-6-ae1577092099> in J(theta)
? 1 def J(theta):
----> 2 return (theta-2.5)**2-1
OverflowError: (34, 'Result too large')
"""
def J(theta):
    try:  # if the step is too large, the computed value overflows and raises an exception
        return (theta-2.5)**2 - 1.
    except:
        return float('inf')  # treat an overflow as an infinitely large loss

# cap the maximum number of iterations
def gradient_descent(initial_theta, eta, n_iters=1e4, epsilon=1e-8):
    theta = initial_theta
    theta_history.append(initial_theta)
    i_iter = 0
    while i_iter < n_iters:
        gradient = dJ(theta)
        last_theta = theta
        theta = theta - eta * gradient
        theta_history.append(theta)
        if abs(J(theta) - J(last_theta)) < epsilon:
            break
        i_iter += 1
eta = 1.1  # too large: the iteration diverges and no minimum is found
theta_history = []
gradient_descent(0., eta)
len(theta_history)
# Out[23]:
# 10001
theta_history[-1]
# Out[24]:
# nan
# plot to see what happens when eta is too large
eta = 1.1  # too large to find the minimum
theta_history = []
gradient_descent(0., eta, n_iters=10)
plot_theta_history()
02 Using Gradient Descent in the Linear Regression Model
Following the previous article on linear regression, we can apply gradient descent to find the minimum of the model's error and solve for the corresponding parameters θ.
The derivation is as follows. The loss is the sum of squared errors,
$$\sum_{i=1}^{m}\left(y^{(i)} - X_b^{(i)}\theta\right)^2,$$
and differentiating with respect to each component of θ gives the gradient
$$\nabla J(\theta) = 2\begin{pmatrix}\sum_{i=1}^{m}\left(X_b^{(i)}\theta - y^{(i)}\right)\\ \sum_{i=1}^{m}\left(X_b^{(i)}\theta - y^{(i)}\right)X_1^{(i)}\\ \vdots\\ \sum_{i=1}^{m}\left(X_b^{(i)}\theta - y^{(i)}\right)X_n^{(i)}\end{pmatrix}.$$
To make the result independent of the number of samples m, we divide by m, i.e. we use the mean over the m samples:
$$J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\left(y^{(i)} - X_b^{(i)}\theta\right)^2, \qquad \nabla J(\theta) = \frac{2}{m}\begin{pmatrix}\sum_{i=1}^{m}\left(X_b^{(i)}\theta - y^{(i)}\right)\\ \sum_{i=1}^{m}\left(X_b^{(i)}\theta - y^{(i)}\right)X_1^{(i)}\\ \vdots\\ \sum_{i=1}^{m}\left(X_b^{(i)}\theta - y^{(i)}\right)X_n^{(i)}\end{pmatrix}.$$
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(666)
x = 2 * np.random.random(size=100)  # one-dimensional data, to make visualization easy
y = x * 3. + 4. + np.random.normal(size=100)
X = x.reshape(-1, 1)  # reshape to the general multi-feature form: 100 samples, one feature each
X.shape
# Out[4]:
# (100, 1)
y.shape
# Out[5]:
# (100,)
plt.scatter(x, y)
plt.show()
Train with gradient descent
def J(theta, X_b, y):
    try:
        return np.sum((y - X_b.dot(theta))**2) / len(X_b)
    except:
        return float('inf')

def dJ(theta, X_b, y):
    res = np.empty(len(theta))
    res[0] = np.sum(X_b.dot(theta) - y)
    for i in range(1, len(theta)):
        res[i] = (X_b.dot(theta) - y).dot(X_b[:,i])
    return res * 2 / len(X_b)  # for a 2-D array, len() returns the number of rows
def gradient_descent(X_b, y, initial_theta, eta, n_iters=1e4, epsilon=1e-8):
    theta = initial_theta
    i_iter = 0
    while i_iter < n_iters:
        gradient = dJ(theta, X_b, y)
        last_theta = theta
        theta = theta - eta * gradient
        if abs(J(theta, X_b, y) - J(last_theta, X_b, y)) < epsilon:
            break
        i_iter += 1
    return theta
X_b = np.hstack([np.ones((len(x), 1)) , x.reshape(-1, 1)])
initial_theta = np.zeros(X_b.shape[1])  # one theta per column of X_b (the extra column of ones corresponds to theta0)
eta = 0.01

theta = gradient_descent(X_b, y, initial_theta, eta)
theta  # the result: intercept followed by the slope
# Out[11]:
# array([ 4.02145786, 3.00706277])
Wrapping it into our linear regression class
from playML.LinearRegression import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit_gd(X, y)
# Out[12]:
# LinearRegression()
lin_reg.coef_
# Out[13]:
# array([ 3.00574511])
lin_reg.intercept_
# Out[14]:
# 4.023020112808255
03 Vectorizing Gradient Descent
Each component of the gradient is a sum of the form $\sum_{i=1}^{m}\left(X_b^{(i)}\theta - y^{(i)}\right)X_j^{(i)}$, so the whole gradient can be written as the product of the 1×m row vector $(X_b\theta - y)^T$ (the part in parentheses) with the m×(n+1) matrix $X_b$. That product is a row vector, but the gradient is a column vector, so we transpose it, giving
$$\nabla J(\theta) = \frac{2}{m}\, X_b^T\,(X_b\theta - y),$$
which is a column vector. The for loop in the earlier code can therefore be replaced by a single matrix-vector product.
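For reference, here is a minimal vectorized dJ written in this form; the function name dJ_vectorized is my own, but the body matches the vectorized gradient used later in sections 04 and 06 and in the attached LinearRegression.py:

import numpy as np

def dJ_vectorized(theta, X_b, y):
    # (2/m) * X_b^T * (X_b·theta - y): the whole gradient in one matrix-vector
    # product, replacing the component-by-component for loop used earlier
    return X_b.T.dot(X_b.dot(theta) - y) * 2. / len(y)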
import numpy as np
from sklearn import datasets
boston = datasets.load_boston()
X = boston.data
y = boston.target
X = X[y < 50.0]
y = y[y < 50.0]
from playML.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, seed=666)
from playML.LinearRegression import LinearRegression
lin_reg1 = LinearRegression()
%time lin_reg1.fit_normal(X_train, y_train)
CPU times: user 130 ms, sys: 10.6 ms, total: 141 ms
Wall time: 147 ms
lin_reg1.score(X_test, y_test)
# 0.81298026026584913
Using gradient descent
lin_reg2 = LinearRegression()
lin_reg2.fit_gd(X_train, y_train)
# Out[5]:
# LinearRegression()
lin_reg2.coef_ # in a real dataset the features live on very different scales, so even this step size can be too large
# Out[6]:
# array([ nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
# nan, nan])
lin_reg2.fit_gd(X_train, y_train, eta=0.000001)
# Out[7]:
# LinearRegression()
lin_reg2.score(X_test, y_test) # poor score: with such a small eta we probably need many more iterations
# Out[8]:
# 0.27556634853389195
%time lin_reg2.fit_gd(X_train, y_train, eta=0.000001, n_iters=1e6)
CPU times: user 37 s, sys: 107 ms, total: 37.1 s
Wall time: 37.6 s
# Out[9]:
# LinearRegression()
lin_reg2.score(X_test, y_test) # still not satisfactory
# the fix for features on very different scales is to normalize the data first
# Out[10]:
# 0.75418523539807636
Normalize the data before running gradient descent
from sklearn.preprocessing import StandardScaler
standardScaler = StandardScaler()
standardScaler.fit(X_train)
# Out[12]:
# StandardScaler(copy=True, with_mean=True, with_std=True)
X_train_standard = standardScaler.transform(X_train)
lin_reg3 = LinearRegression()
%time lin_reg3.fit_gd(X_train_standard, y_train)
CPU times: user 194 ms, sys: 3.7 ms, total: 198 ms
Wall time: 199 ms
# Out[14]:
# LinearRegression()
X_test_standard = standardScaler.transform(X_test)
lin_reg3.score(X_test_standard, y_test)
# Out[16]:
# 0.81298806201222351
The advantage of gradient descent: on large datasets it takes less time than the normal-equation solution
m = 1000
n = 5000
big_X = np.random.normal(size=(m, n))

# generate n+1 = 5001 random values uniformly distributed between 0 and 100
true_theta = np.random.uniform(0.0, 100.0, size=n+1)
big_y = big_X.dot(true_theta[1:]) + true_theta[0] + np.random.normal(0., 10., size=m)
big_reg1 = LinearRegression()
%time big_reg1.fit_normal(big_X, big_y)
CPU times: user 20.8 s, sys: 658 ms, total: 21.4 s
Wall time: 9.67 s
# Out[18]:
# LinearRegression()
big_reg2 = LinearRegression()
%time big_reg2.fit_gd(big_X, big_y)
CPU times: user 10.6 s, sys: 100 ms, total: 10.7 s
Wall time: 4.59 s
# Out[19]:
# LinearRegression()
04 Stochastic Gradient Descent
Instead of using all m samples, we treat the sample index i as a random variable: at each step we pick one sample at random and descend along the gradient contributed by that single sample,
$$2\,\left(X_b^{(i)}\right)^T\left(X_b^{(i)}\theta - y^{(i)}\right).$$
The learning rate should decay slowly as the number of iterations grows, so that later steps do not jump back out of the neighbourhood of the minimum:
$$\eta = \frac{a}{t + b},$$
where t is the number of iterations so far. Empirically a = 5 and b = 50 work well (the idea behind simulated annealing).
import numpy as np
import matplotlib.pyplot as plt
m = 100000

x = np.random.normal(size=m)
X = x.reshape(-1, 1)  # only one feature

y = 4. * x + 3. + np.random.normal(0, 3, size=m)
def J(theta, X_b, y):
    try:
        return np.sum((y - X_b.dot(theta))**2) / len(X_b)
    except:
        return float('inf')

def dJ(theta, X_b, y):
    return X_b.T.dot(X_b.dot(theta) - y) * 2. / len(y)

def gradient_descent(X_b, y, initial_theta, eta, n_iters=1e4, epsilon=1e-8):
    theta = initial_theta
    i_iter = 0
    while i_iter < n_iters:
        gradient = dJ(theta, X_b, y)
        last_theta = theta
        theta = theta - eta * gradient
        if abs(J(theta, X_b, y) - J(last_theta, X_b, y)) < epsilon:
            break
        i_iter += 1
    return theta
%%time
X_b = np.hstack([np.ones((len(X), 1)), X])
initial_theta = np.zeros(X_b.shape[1])
eta = 0.01
theta = gradient_descent(X_b, y, initial_theta, eta)
CPU times: user 864 ms, sys: 68 ms, total: 932 ms
Wall time: 646 ms
theta
# Out[5]:
# array([ 2.98747839, 4.00185783])
Stochastic gradient descent
def dJ_sgd(theta, X_b_i, y_i):  # X_b_i and y_i are a single row of X_b and the corresponding entry of y
    return X_b_i.T.dot(X_b_i.dot(theta) - y_i) * 2.

# stochastic gradient descent with a continuously decaying learning rate
def sgd(X_b, y, initial_theta, n_iters):
    t0 = 5
    t1 = 50

    def learning_rate(t):
        return t0 / (t + t1)

    # because the gradient is stochastic, we cannot stop based on how much
    # the loss changes between two consecutive steps
    theta = initial_theta
    for cur_iter in range(n_iters):
        rand_i = np.random.randint(len(X_b))
        gradient = dJ_sgd(theta, X_b[rand_i], y[rand_i])
        theta = theta - learning_rate(cur_iter) * gradient
    return theta
%%time
X_b = np.hstack([np.ones((len(X), 1)), X])
initial_theta = np.zeros(X_b.shape[1])
theta = sgd(X_b, y, initial_theta, n_iters=len(X_b)//3) # number of iterations set to one third of the sample size
CPU times: user 247 ms, sys: 4.24 ms, total: 252 ms
Wall time: 251 ms
theta
# Out[9]:
# array([ 2.95001259, 3.88694308])
05 Using Our Own SGD
import numpy as np
import matplotlib.pyplot as plt
m = 10000
x = np.random.normal(size=m)
X = x.reshape(-1, 1)
y = 4.*x + 3. + np.random.normal(0, 3, size=m)
from playML.LinearRegression import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit_sgd(X, y, n_iters=2)
# Out[3]:
# LinearRegression()
lin_reg.coef_
# Out[4]:
# array([ 4.02866416])
lin_reg.intercept_
# Out[5]:
# 3.0302884363039437
Using our own SGD on real data
from sklearn import datasets
boston = datasets.load_boston()
X = boston.data
y = boston.target
X = X[y < 50.0]
y = y[y < 50.0]
from playML.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, seed=666)
from sklearn.preprocessing import StandardScaler
standardScaler = StandardScaler()
standardScaler.fit(X_train)
X_train_standard = standardScaler.transform(X_train)
X_test_standard = standardScaler.transform(X_test)
from playML.LinearRegression import LinearRegression
lin_reg = LinearRegression()
%time lin_reg.fit_sgd(X_train_standard, y_train, n_iters=2)
lin_reg.score(X_test_standard, y_test)
# not yet as good as the best score so far (0.8129); we can increase n_iters
CPU times: user 8.04 ms, sys: 2 ms, total: 10 ms
Wall time: 9.77 ms
# Out[9]:
# 0.79233295554251493
%time lin_reg.fit_sgd(X_train_standard, y_train, n_iters=50)
lin_reg.score(X_test_standard, y_test)
CPU times: user 110 ms, sys: 2.17 ms, total: 112 ms
Wall time: 113 ms
# Out[10]:
# 0.81324404894409674
%time lin_reg.fit_sgd(X_train_standard, y_train, n_iters=100)
lin_reg.score(X_test_standard, y_test)
CPU times: user 205 ms, sys: 4.12 ms, total: 209 ms
Wall time: 212 ms
# Out[11]:
# 0.81316850059297174
SGD in scikit-learn
from sklearn.linear_model import SGDRegressor # lives in the linear_model package; it only handles linear regression
sgd_reg = SGDRegressor()
%time sgd_reg.fit(X_train_standard, y_train)
sgd_reg.score(X_test_standard, y_test)
CPU times: user 881 μs, sys: 48 μs, total: 929 μs
Wall time: 1.04 ms
# Out[14]:
# 0.80584845142813721
sgd_reg = SGDRegressor(n_iter=100) # make 100 passes over the whole training set (newer scikit-learn versions call this parameter max_iter)
%time sgd_reg.fit(X_train_standard, y_train)
sgd_reg.score(X_test_standard, y_test)
CPU times: user 7.05 ms, sys: 2.22 ms, total: 9.28 ms
Wall time: 6.44 ms
# Out[15]:
# 0.81312163515220082
06 How to Debug the Gradient
To approximate the gradient of a curve at some point (shown in red), we can take two nearby points, one slightly before and one slightly after it (shown in blue), and connect them: the slope of that chord approximates the gradient we want. Recall from calculus that letting the blue points approach the red point gives exactly the definition of the derivative.
For the multi-dimensional case, each partial derivative is approximated in the same way:
$$\frac{\partial J}{\partial \theta_i} \approx \frac{J(\theta_1, \dots, \theta_i + \epsilon, \dots, \theta_n) - J(\theta_1, \dots, \theta_i - \epsilon, \dots, \theta_n)}{2\epsilon}$$
Prepare the data
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(666)
X = np.random.random(size=(1000, 10)) # 1000 samples, 10 features each
true_theta = np.arange(1, 12, dtype=float) # so there should be 11 theta values (including the intercept)
X_b = np.hstack([np.ones((len(X), 1)), X]) # samples with a column of ones prepended
y = X_b.dot(true_theta) + np.random.normal(size=1000) # labels
X.shape
# Out[5]:
# (1000, 10)
y.shape
# Out[6]:
# (1000,)
true_theta
# Out[7]:
# array([ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11.])
def J(theta, X_b, y):
    try:
        return np.sum((y - X_b.dot(theta))**2) / len(X_b)
    except:
        return float('inf')

def dJ_math(theta, X_b, y):
    return X_b.T.dot(X_b.dot(theta) - y) * 2. / len(y)

def dJ_debug(theta, X_b, y, epsilon=0.01):
    res = np.empty(len(theta))
    for i in range(len(theta)):
        theta_1 = theta.copy()
        theta_1[i] += epsilon
        theta_2 = theta.copy()
        theta_2[i] -= epsilon
        res[i] = (J(theta_1, X_b, y) - J(theta_2, X_b, y)) / (2*epsilon)
    return res

def gradient_descent(dJ, X_b, y, initial_theta, eta, n_iters=1e4, epsilon=1e-8):
    theta = initial_theta
    i_iter = 0
    while i_iter < n_iters:
        gradient = dJ(theta, X_b, y)
        last_theta = theta
        theta = theta - eta * gradient
        if abs(J(theta, X_b, y) - J(last_theta, X_b, y)) < epsilon:
            break
        i_iter += 1
    return theta
X_b = np.hstack([np.ones((len(X), 1)), X])
initial_theta = np.zeros(X_b.shape[1])
eta = 0.01
%time theta = gradient_descent(dJ_debug, X_b, y, initial_theta, eta)
theta
CPU times: user 4.09 s, sys: 12.4 ms, total: 4.11 s
Wall time: 4.13 s
"""
Out[12]:
array([ 1.1251597 , 2.05312521, 2.91522497, 4.11895968,
5.05002117, 5.90494046, 6.97383745, 8.00088367,
8.86213468, 9.98608331, 10.90529198])
"""
%time theta = gradient_descent(dJ_math, X_b, y, initial_theta, eta)
theta
CPU times: user 562 ms, sys: 6.68 ms, total: 568 ms
Wall time: 577 ms
"""
Out[13]:
array([ 1.1251597 , 2.05312521, 2.91522497, 4.11895968,
5.05002117, 5.90494046, 6.97383745, 8.00088367,
8.86213468, 9.98608331, 10.90529198])
"""
Closing Remarks
This article has not yet touched on mini-batch gradient descent; I will write that up in due course.
Finally, one more word about "randomness": the benefit of not following a fixed pattern is that we can escape local optima, and it usually also means faster computation. Randomness shows up everywhere in machine learning, for example in random search and random forests.
Attachment:
LinearRegression.py
import numpy as np
from .metrics import r2_score


class LinearRegression:

    def __init__(self):
        """Initialize the Linear Regression model"""
        self.coef_ = None        # coefficients (theta_1 ... theta_n)
        self.intercept_ = None   # intercept (theta_0)
        self._theta = None       # the full private parameter vector

    def fit_normal(self, X_train, y_train):
        """Train the Linear Regression model on X_train, y_train using the normal equation"""
        # there must be exactly one label per training sample
        assert X_train.shape[0] == y_train.shape[0], "the size of X_train must be equal to the size of y_train"
        # X_b is X_train with a column of ones prepended
        X_b = np.hstack([np.ones((len(X_train), 1)), X_train])
        self._theta = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y_train)

        self.intercept_ = self._theta[0]
        self.coef_ = self._theta[1:]
        return self

    def fit_gd(self, X_train, y_train, eta=0.01, n_iters=1e4):
        """Train the Linear Regression model on X_train, y_train using (batch) gradient descent"""
        assert X_train.shape[0] == y_train.shape[0], "the size of X_train must be equal to the size of y_train"

        def J(theta, X_b, y):
            try:
                return np.sum((y - X_b.dot(theta)) ** 2) / len(y)
            except:
                return float('inf')

        def dJ(theta, X_b, y):
            # vectorized form of the component-wise loop:
            # res = np.empty(len(theta))
            # res[0] = np.sum(X_b.dot(theta) - y)
            # for i in range(1, len(theta)):
            #     res[i] = (X_b.dot(theta) - y).dot(X_b[:, i])
            # return res * 2 / len(X_b)
            return X_b.T.dot(X_b.dot(theta) - y) * 2. / len(X_b)

        def gradient_descent(X_b, y, initial_theta, eta, n_iters=1e4, epsilon=1e-8):
            theta = initial_theta
            cur_iter = 0
            while cur_iter < n_iters:
                gradient = dJ(theta, X_b, y)
                last_theta = theta
                theta = theta - eta * gradient
                if abs(J(theta, X_b, y) - J(last_theta, X_b, y)) < epsilon:
                    break
                cur_iter += 1
            return theta

        X_b = np.hstack([np.ones((len(X_train), 1)), X_train])
        initial_theta = np.zeros(X_b.shape[1])
        self._theta = gradient_descent(X_b, y_train, initial_theta, eta, n_iters)

        self.intercept_ = self._theta[0]
        self.coef_ = self._theta[1:]
        return self

    def fit_sgd(self, X_train, y_train, n_iters=5, t0=5, t1=50):
        """Train the Linear Regression model on X_train, y_train using stochastic gradient descent"""
        # there must be exactly one label per training sample
        assert X_train.shape[0] == y_train.shape[0], "the size of X_train must be equal to the size of y_train"
        assert n_iters >= 1

        def dJ_sgd(theta, X_b_i, y_i):
            return X_b_i * (X_b_i.dot(theta) - y_i) * 2.

        def sgd(X_b, y, initial_theta, n_iters, t0=5, t1=50):

            def learning_rate(t):
                return t0 / (t + t1)

            theta = initial_theta
            m = len(X_b)
            # n_iters is the number of passes over the whole training set
            for cur_iter in range(n_iters):
                # shuffle the indexes so each pass visits the samples in a new random order
                indexes = np.random.permutation(m)
                X_b_new = X_b[indexes]
                y_new = y[indexes]
                for i in range(m):
                    gradient = dJ_sgd(theta, X_b_new[i], y_new[i])
                    # the total number of steps taken so far is cur_iter * m + i
                    theta = theta - learning_rate(cur_iter * m + i) * gradient
            return theta

        X_b = np.hstack([np.ones((len(X_train), 1)), X_train])
        initial_theta = np.zeros(X_b.shape[1])
        self._theta = sgd(X_b, y_train, initial_theta, n_iters, t0, t1)

        self.intercept_ = self._theta[0]
        self.coef_ = self._theta[1:]
        return self

    def predict(self, X_predict):
        """Return the prediction vector for the data set X_predict"""
        assert self.intercept_ is not None and self.coef_ is not None, "must fit before predict!"
        # the number of features must equal the number of coefficients, one coefficient per feature
        assert X_predict.shape[1] == len(self.coef_), "the feature number of X_predict must be equal to X_train"
        X_b = np.hstack([np.ones((len(X_predict), 1)), X_predict])
        return X_b.dot(self._theta)

    def score(self, X_test, y_test):
        """Compute the R^2 score of the current model on the test set X_test, y_test"""
        y_predict = self.predict(X_test)
        return r2_score(y_test, y_predict)

    def __repr__(self):
        return "LinearRegression()"
Finally, readers are welcome to discuss and exchange ideas. Best wishes.