Apply simple linear regression to the Boston housing dataset to predict the house price given in the last column. The Boston housing dataset can be obtained from http://lib.stat.cmu.edu/datasets/boston.
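Note that the load_boston loader used in the code below has been removed from recent versions of scikit-learn. As an alternative, the data can be read directly from the URL above; the following is only a minimal sketch, assuming pandas is installed and that the file keeps its original layout, in which each record spans two physical lines after a 22-line header:

import numpy as np
import pandas as pd

# Each record spans two physical lines, so the feature columns are re-assembled with hstack
raw_df = pd.read_csv("http://lib.stat.cmu.edu/datasets/boston",
                     sep=r"\s+", skiprows=22, header=None)
data = np.hstack([raw_df.values[::2, :], raw_df.values[1::2, :2]])  # 13 feature columns
target = raw_df.values[1::2, 2]                                     # MEDV, the house price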
Steps to implement simple linear regression:
1: Import the required packages.
2: In a neural network all the inputs are combined linearly; for training to be effective the inputs should be normalized, so define a function to normalize the input data.
3: Load the Boston housing dataset, split it into X_train and Y_train, and normalize the data.
4: Declare TensorFlow placeholders for the training data.
5: Create TensorFlow weight and bias variables, initialized to 0.
6: Define the linear regression model used for prediction.
7: Define the loss function.
8: Choose the gradient descent optimizer.
9: Declare the initialization operator.
10: Launch the computation graph.
11: Inspect the results.
The code is as follows:
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston

tf.compat.v1.disable_eager_execution()

def normalize(X):
    '''Normalizes the array X'''
    mean = np.mean(X)
    std = np.std(X)
    X = (X - mean) / std
    return X

# Load the Boston housing data; use the average number of rooms (column 5) as the single feature
boston = load_boston()
X_train, Y_train = boston.data[:, 5], boston.target
X_train = normalize(X_train)
n_samples = len(X_train)

# Placeholders for the training data
X = tf.compat.v1.placeholder(tf.float32, name='X')
Y = tf.compat.v1.placeholder(tf.float32, name='Y')

# Weight and bias variables, initialized to 0
b = tf.compat.v1.Variable(0.0)
w = tf.compat.v1.Variable(0.0)

# Linear regression model used for prediction
Y_hat = X * w + b

# Squared-error loss
loss = tf.square(Y - Y_hat, name='loss')

# Gradient descent optimizer
optimizer = tf.compat.v1.train.GradientDescentOptimizer(learning_rate=0.01).minimize(loss)

# Variable initialization operator
init_op = tf.compat.v1.global_variables_initializer()
total = []

# Launch the computation graph
with tf.compat.v1.Session() as sess:
    sess.run(init_op)
    writer = tf.compat.v1.summary.FileWriter('graphs', sess.graph)
    for i in range(100):
        total_loss = 0
        for x, y in zip(X_train, Y_train):
            _, l = sess.run([optimizer, loss], feed_dict={X: x, Y: y})
            total_loss += l
        total.append(total_loss / n_samples)
        print('Epoch {0}: Loss {1}'.format(i, total_loss / n_samples))
    writer.close()
    b_value, w_value = sess.run([b, w])

# Evaluate the fitted line and visualize the result
Y_pred = X_train * w_value + b_value
print('Done')
plt.plot(X_train, Y_train, 'bo', label='Real Data')
plt.plot(X_train, Y_pred, 'r', label='Predicted Data')
plt.legend()
plt.show()
plt.plot(total)
plt.show()
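Because simple linear regression has a closed-form solution, the slope and bias found by gradient descent can be sanity-checked against NumPy's least-squares fit. The following is a minimal sketch that reuses X_train, Y_train, w_value and b_value from the script above (w_ls and b_ls are just illustrative names):

import numpy as np

# Closed-form least squares for Y ≈ w*X + b
A = np.vstack([X_train, np.ones_like(X_train)]).T
w_ls, b_ls = np.linalg.lstsq(A, Y_train, rcond=None)[0]
print('closed-form:      w = {:.4f}, b = {:.4f}'.format(w_ls, b_ls))
print('gradient descent: w = {:.4f}, b = {:.4f}'.format(w_value, b_value))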
Final results: