Table of Contents
1. Textbook Exercises
2. Video Exercises
(1) From-scratch implementation
(2) sklearn version
(3) The perceptron model
1. Textbook Exercises
2. Video Exercises
(1) From-scratch implementation
import numpy as np
import matplotlib.pyplot as plt


class Myperceptron():
    def __init__(self):
        self.w = None   # weight vector, initialized in fit()
        self.b = 0      # bias
        self.η = 1      # learning rate

    def fit(self, x_train, y):
        # Original form of the perceptron learning algorithm:
        # find a misclassified point, update w and b, then rescan from the start.
        m, n = np.shape(x_train)
        self.w = np.zeros((1, n))
        i = 0
        while i < m:
            xi = x_train[i]
            yi = y[i]
            # yi * (w·xi + b) <= 0, i.e. -yi * (w·xi + b) >= 0, means xi is misclassified
            if -1 * yi * (np.dot(self.w, xi.T) + self.b) >= 0:
                self.w = self.w + self.η * np.dot(yi, xi)
                self.b = self.b + self.η * yi
                i = 0       # restart the scan after every update
            else:
                i += 1


def draw(x, w, b):
    # Plot the training points and the learned separating line w·x + b = 0
    x_new = np.array([[0], [6]])
    y_predict = -(b + w[0][0] * x_new) / w[0][1]
    plt.plot(x[:2, 0], x[:2, 1], 'b*', label='1')    # positive samples
    plt.plot(x[-1, 0], x[-1, 1], 'rx', label='-1')   # negative sample
    plt.plot(x_new, y_predict, "b-")
    plt.axis([0, 6, 0, 6])
    plt.xlabel('x1')
    plt.ylabel('x2')
    plt.legend()
    plt.show()


def main():
    x_train = np.array([[3, 3], [4, 3], [1, 1]])
    y = np.array([1, 1, -1])
    perceptron = Myperceptron()
    perceptron.fit(x_train, y)
    print(perceptron.w)
    draw(x_train, perceptron.w, perceptron.b)


if __name__ == '__main__':
    main()
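With the learning rate η = 1 and this scan order, the loop should stop at w = [[1. 1.]] and b = -3, the result of the classic worked example from the textbook; the plot then shows the line x1 + x2 - 3 = 0 separating the two positive points from the negative one.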
(2) sklearn version
L1 regularization: makes the learned weights sparser (more of them become exactly zero).
L2 regularization: keeps the weights smaller and more evenly distributed.
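As a quick illustration (a minimal sketch; the choice of penalties and the alpha value are mine, not from the original notes), sklearn's Perceptron exposes a penalty argument that switches on L1 or L2 regularization:

import numpy as np
from sklearn.linear_model import Perceptron

x_train = np.array([[3, 3], [4, 3], [1, 1]])
y = np.array([1, 1, -1])

# penalty=None (the default) means no regularization;
# 'l1' pushes coefficients toward zero, 'l2' shrinks them more evenly.
for penalty in (None, 'l1', 'l2'):
    clf = Perceptron(penalty=penalty, alpha=0.01)
    clf.fit(x_train, y)
    print(penalty, clf.coef_, clf.intercept_)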
When w and b are initialized to 0, every intermediate w and b is an integer multiple of eta0 (the learning rate). The misclassification test is yi(w·xi + b) <= 0, and its sign does not change when w and b are both scaled by eta0, so eta0 has no effect in this case: different learning rates give the same sequence of updates (up to scale), hence the same training speed and the same separating hyperplane.
When w and b are initialized to non-zero values, the learning rate does affect the result.
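A minimal sketch to check this claim (the eta0 values are arbitrary): with the default zero initialization, each eta0 should yield coefficients and intercepts that are scaled copies of one another, i.e. the same decision boundary.

import numpy as np
from sklearn.linear_model import Perceptron

x_train = np.array([[3, 3], [4, 3], [1, 1]])
y = np.array([1, 1, -1])

# w and b scale linearly with eta0, so the sign of w·x + b
# (and hence the learned boundary) is unchanged.
for eta0 in (0.5, 1.0, 2.0):
    clf = Perceptron(eta0=eta0, random_state=0)  # fix shuffling so runs are comparable
    clf.fit(x_train, y)
    print(eta0, clf.coef_, clf.intercept_, clf.n_iter_)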
import numpy as np
from sklearn.linear_model import Perceptron

x_train = np.array([[3, 3], [4, 3], [1, 1]])
y = np.array([1, 1, -1])

perceptron = Perceptron()              # default settings: eta0=1.0, no penalty
perceptron.fit(x_train, y)
print('w:', perceptron.coef_)          # learned weight vector
print('b:', perceptron.intercept_)     # learned bias
print('n_iter:', perceptron.n_iter_)   # number of passes over the data

res = perceptron.score(x_train, y)     # mean accuracy on the training set
print('correct rate : %d' % res)
w: [[1. 0.]]
b: [-2.]
n_iter: 9
correct rate : 1
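In other words, sklearn's solution is the vertical line x1 = 2, which also separates (3,3) and (4,3) from (1,1). It differs from the hand-coded result because the update order and stopping criterion differ; by default sklearn's Perceptron shuffles the samples between epochs.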
(3) The perceptron model
The hypothesis space of the perceptron model is the set of separating hyperplanes w·x + b = 0, i.e. all linear classifiers defined on the feature space.
The complexity of the model is determined mainly by the number of features of x, that is, the input dimension d.
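Written out in the standard textbook form, a perceptron is the function

f(x) = \operatorname{sign}(w \cdot x + b), \qquad
\mathcal{F} = \{\, f \mid f(x) = \operatorname{sign}(w \cdot x + b),\ w \in \mathbb{R}^{d},\ b \in \mathbb{R} \,\},

so each hypothesis is determined by a d-dimensional weight vector and a scalar bias.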