Environment
Windows 10, Python 3.7.5, TensorFlow 2.0 (GPU), CUDA 10.0, cuDNN 7.6.5; CPU: i5-9400F, GPU: RTX 2060
Relevant code
for epoch_1 in range(epochs):
    index = 0
    for ind_t in range(trainsize - input_length):
        input_model_1 = operation.make_inputdata(
            threshold, y1_input[ind_t], y2_input[ind_t], y3_input[ind_t], ind_t)
        with tf.GradientTape() as tape:
            pre_temp = model_1(input_model_1)
            loss = tf.losses.MSE(y_out[ind_t], y_model_1_out[ind_t])
            loss = tf.reduce_mean(loss)
        if index % 200 == 0:
            print('epochs:', epoch_1, '/', epochs,
                  'ind:', ind_t, '/', trainsize - input_length,
                  'loss:', loss.numpy())
        grads1 = tape.gradient(loss, model_1.trainable_variables)
        optimizer_1.apply_gradients(zip(grads1, model_1.trainable_variables))
        index = index + 1
Problem description
Running this raises "No gradients provided for any variable", which means that none of the model's trainable variables received a gradient from the tape.
Solution
In my program, the loss was computed as:
loss = tf.losses.MSE(y_out[ind_t], y_model_1_out[ind_t])
The problem is that both y_out[ind_t] and y_model_1_out[ind_t] are constants: neither depends on the model's trainable variables, so the tape records no differentiable path from the loss back to the model. Replacing y_model_1_out[ind_t] with the model's output (the prediction) pre_temp, which was computed inside the tape, solves the problem. In the end it was simply a careless coding mistake.
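The corrected training step can be sketched as follows. This is a minimal, self-contained toy version (the model, optimizer, and random stand-in data here are assumptions, not the author's actual setup); the essential point is that the loss is built from pre_temp, the tensor produced inside the tape:

```python
import tensorflow as tf

# Hypothetical stand-ins for the author's model_1 / optimizer_1 / data.
model_1 = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
optimizer_1 = tf.keras.optimizers.Adam()

x = tf.random.normal((8, 3))      # stand-in input batch
y_out = tf.random.normal((8, 1))  # stand-in targets

with tf.GradientTape() as tape:
    pre_temp = model_1(x)  # model output; the tape records these ops
    # Loss uses pre_temp (depends on trainable variables),
    # not a precomputed constant such as y_model_1_out[ind_t].
    loss = tf.reduce_mean(tf.losses.MSE(y_out, pre_temp))

grads1 = tape.gradient(loss, model_1.trainable_variables)
optimizer_1.apply_gradients(zip(grads1, model_1.trainable_variables))
```

With the loss connected to the model through pre_temp, tape.gradient returns a real gradient for every trainable variable instead of None, and apply_gradients no longer raises the error.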