1. There are two output forms because, when building a recurrent neural network, several recurrent layers are usually stacked. The output of the current recurrent layer then becomes the input of the next layer, and a recurrent layer expects input of shape (samples, timesteps, input_dim), so intermediate recurrent layers must keep the same three-dimensional output shape.
If return_sequences is not specified (it defaults to False), the LSTM returns only its last output:

model = Sequential()
model.add(LSTM(32, input_shape=(10, 64)))
print(model.summary())

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
lstm (LSTM)                  (None, 32)                12416
=================================================================
Total params: 12,416
Trainable params: 12,416
Non-trainable params: 0
_________________________________________________________________
None

With return_sequences=True, the LSTM returns the full output sequence:

model = Sequential()
model.add(LSTM(32, input_shape=(10, 64), return_sequences=True))
print(model.summary())

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
lstm (LSTM)                  (None, 10, 32)            12416
=================================================================
Total params: 12,416
Trainable params: 12,416
Non-trainable params: 0
_________________________________________________________________
None

When recurrent layers are stacked, every intermediate layer must return sequences; only the last one returns a single vector:

model = Sequential()
model.add(LSTM(32, input_shape=(10, 64), return_sequences=True))
model.add(LSTM(10, return_sequences=True))
model.add(LSTM(3))
print(model.summary())

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
lstm (LSTM)                  (None, 10, 32)            12416
_________________________________________________________________
lstm_1 (LSTM)                (None, 10, 10)            1720
_________________________________________________________________
lstm_2 (LSTM)                (None, 3)                 168
=================================================================
Total params: 14,304
Trainable params: 14,304
Non-trainable params: 0
_________________________________________________________________
None

2. Embedding layer: word embeddings

Word embedding takes two steps: first the text is converted to numbers (integer indices), then the embedding maps each word to a vector, which makes it convenient to compute distances between words.
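The first step, turning text into integer indices, is not shown in this section. As a minimal sketch (the toy corpus below is made up purely for illustration), Keras's Tokenizer utility can build the word index:

from keras.preprocessing.text import Tokenizer

# Hypothetical toy corpus, only for illustration.
texts = ['the movie was great', 'the movie was terrible']

tokenizer = Tokenizer(num_words=1000)   # keep at most the 1000 most frequent words
tokenizer.fit_on_texts(texts)           # build the word -> index mapping
sequences = tokenizer.texts_to_sequences(texts)
print(sequences)                        # e.g. [[1, 2, 3, 4], [1, 2, 3, 5]]

These integer sequences are exactly what the Embedding layer below consumes.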
# The embedding layer has a vocabulary of 1000 words and an embedding
# dimension of 64; the input words are packed into sequences, one sequence
# per sample, each containing 10 words.
from keras.layers import Embedding

model = Sequential()
model.add(Embedding(1000, 64, input_length=10))
print(model.summary())

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
embedding (Embedding)        (None, 10, 64)            64000
=================================================================
Total params: 64,000
Trainable params: 64,000
Non-trainable params: 0
_________________________________________________________________
None

Reference notes on the recurrent layers:

from keras.models import Sequential
# import the layers for the simple recurrent network, the long short-term
# memory network and the gated recurrent unit respectively
from keras.layers import SimpleRNN, LSTM, GRU
'''
These three layer types are used in almost exactly the same way: Keras treats
all recurrent layers as one abstract class, so every recurrent layer shares
the same properties and accepts the same keyword arguments.
Apart from the parameters tied to each network architecture, which differ
slightly per layer, the common parameters are identical, and the input and
output formats are the same for every recurrent layer.

Input: a 3-D tensor (batch_size, timesteps, input_dim):
batch size, number of time steps, number of input features.
For example, feeding 3200 one-dimensional values for training, split into
100 batches, gives batch_size = 32 per batch; if the previous 3 values are
used to predict the 4th, then timesteps = 3; the data is one-dimensional,
so input_dim = 1. The input shape of one batch is therefore (32, 3, 1).
'''
'''
When a recurrent layer is the first layer of the model (the input layer),
batch_size does not need to be set in advance, but these must be defined:
input dimension    input_dim    = 1
output dimension   output_dim   = 6
window length      input_length = 3   (the timesteps above)
'''
# Ways to write the parameters when LSTM is the first layer of a network:
# model = Sequential()
# model.add(LSTM(input_dim=1, output_dim=6, input_length=3))  # legacy style
# model.add(LSTM(6, input_dim=1, input_length=3))             # legacy style
# model.add(LSTM(6, input_shape=(3, 1)))                      # recommended
'''
Output of a recurrent layer:
return_sequences=True  -> a sequence: a 3-D tensor (samples, timesteps, output_dim)
return_sequences=False -> a single value: a 2-D tensor (samples, output_dim)
The default is False.
'''
# model.add(LSTM(32, input_shape=(10, 64), return_sequences=True))
# print(model.summary())

# model = Sequential()
# model.add(LSTM(32, input_shape=(10, 64), return_sequences=True))
# model.add(LSTM(10, return_sequences=True))
# model.add(LSTM(3))
# print(model.summary())
'''
Embedding layer (for text problems):
word embedding (turning words into vectors) is the first step of any text
task, so in Keras the Embedding layer can only be used as the first layer
of a model.
Word embedding: first convert the text to numbers, then vectorize the words
so that distances between words can be computed.
Parameters:
input_dim     size of the vocabulary entering the embedding layer
              (largest word index + 1)
output_dim    dimension of the word vectors, i.e. the size of the embedding
              space the words are mapped into
input_length  length of the input sequences, a fixed integer; required if a
              Flatten or Dense layer follows the embedding
Output: a 3-D tensor of shape (batch_size, input_length, output_dim)
'''
# Embedding example: vocabulary of 1000 words, embedding dimension 64,
# input packed into sequences of 10 words each.
# from keras.layers import Embedding
# model = Sequential()
# model.add(Embedding(1000, 64, input_length=10))
# print(model.summary())
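To make the (batch_size, timesteps, input_dim) convention concrete, here is a minimal sketch (the toy series and window length are made up) that slices a 1-D series into windows of 3 past values predicting the 4th, matching the (32, 3, 1) example in the notes above:

import numpy as np

series = np.arange(100, dtype='float32')   # toy 1-D series
timesteps = 3                              # use the previous 3 values

# Build (samples, timesteps) windows plus the value following each window.
X = np.array([series[i:i + timesteps] for i in range(len(series) - timesteps)])
y = series[timesteps:]

# Add the feature axis: (samples, timesteps, input_dim) with input_dim = 1.
X = X.reshape((X.shape[0], timesteps, 1))
print(X.shape, y.shape)                    # (97, 3, 1) (97,)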
1. Fully connected network

# IMDB text data: movie review sentiment analysis, positive/negative
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Embedding, Flatten
from keras.datasets import imdb

max_features = 20000
maxlen = 80
batch_size = 32

import numpy as np
data = np.load('imdb.npz', allow_pickle=True)
print(data.files)
print('loading data...')
# keep only the 20000 most frequent words in the dataset
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
# x_train = data['x_train']
# y_train = data['y_train']
# x_test = data['x_test']
# y_test = data['y_test']
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')

# Truncate and pad the sequences: pad_sequences can cut or pad a batch of
# sequences to a common length.
print('pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)

# Fully connected network
print('build model...')
model = Sequential()
model.add(Embedding(max_features, 128, input_length=maxlen))  # 20000 * 128 = 2,560,000 weights
model.add(Flatten())                                          # 80 * 128 = 10240 values per sample
model.add(Dense(250, activation='relu'))
model.add(Dense(1, activation='sigmoid'))  # one neuron suffices for the binary sentiment output
print(model.summary())

print('train model ...')
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=batch_size, epochs=15, validation_data=(x_test, y_test))

print('evaluate model')
score, acc = model.evaluate(x_test, y_test, batch_size=batch_size)
print('test score:', score)
print('test accuracy:', acc)

['x_test', 'x_train', 'y_train', 'y_test']
loading data...
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/imdb.npz
17465344/17464789 [==============================] - 18s 1us/step
25000 train sequences
25000 test sequences
pad sequences (samples x time)
x_train shape: (25000, 80)
x_test shape: (25000, 80)
build model...
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
embedding (Embedding)        (None, 80, 128)           2560000
_________________________________________________________________
flatten (Flatten)            (None, 10240)             0
_________________________________________________________________
dense (Dense)                (None, 250)               2560250
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 251
=================================================================
Total params: 5,120,501
Trainable params: 5,120,501
Non-trainable params: 0
_________________________________________________________________
None
train model ...
Epoch 1/15
782/782 [==============================] - 40s 51ms/step - loss: 0.4333 - accuracy: 0.7892 - val_loss: 0.3613 - val_accuracy: 0.8396
Epoch 2/15
782/782 [==============================] - 38s 49ms/step - loss: 0.0689 - accuracy: 0.9767 - val_loss: 0.6667 - val_accuracy: 0.8026
Epoch 3/15
782/782 [==============================] - 39s 50ms/step - loss: 0.0117 - accuracy: 0.9958 - val_loss: 0.9379 - val_accuracy: 0.8076
Epoch 4/15
782/782 [==============================] - 38s 49ms/step - loss: 0.0070 - accuracy: 0.9979 - val_loss: 1.1040 - val_accuracy: 0.7960
Epoch 5/15
782/782 [==============================] - 39s 50ms/step - loss: 0.0194 - accuracy: 0.9928 - val_loss: 0.9399 - val_accuracy: 0.8030
Epoch 6/15
782/782 [==============================] - 39s 50ms/step - loss: 0.0077 - accuracy: 0.9973 - val_loss: 1.1719 - val_accuracy: 0.8010
Epoch 7/15
782/782 [==============================] - 40s 52ms/step - loss: 0.0022 - accuracy: 0.9993 - val_loss: 1.4221 - val_accuracy: 0.8028
Epoch 8/15
782/782 [==============================] - 44s 56ms/step - loss: 0.0021 - accuracy: 0.9992 - val_loss: 1.6696 - val_accuracy: 0.7930
Epoch 9/15
782/782 [==============================] - 41s 52ms/step - loss: 0.0170 - accuracy: 0.9942 - val_loss: 1.1140 - val_accuracy: 0.7965
Epoch 10/15
782/782 [==============================] - 39s 50ms/step - loss: 0.0040 - accuracy: 0.9988 - val_loss: 1.3162 - val_accuracy: 0.8000
Epoch 11/15
782/782 [==============================] - 40s 51ms/step - loss: 1.7681e-04 - accuracy: 1.0000 - val_loss: 1.4336 - val_accuracy: 0.8008
Epoch 12/15
782/782 [==============================] - 39s 50ms/step - loss: 1.0483e-05 - accuracy: 1.0000 - val_loss: 1.4591 - val_accuracy: 0.8007
Epoch 13/15
782/782 [==============================] - 39s 50ms/step - loss: 4.8796e-06 - accuracy: 1.0000 - val_loss: 1.4801 - val_accuracy: 0.8010
Epoch 14/15
782/782 [==============================] - 40s 51ms/step - loss: 3.2985e-06 - accuracy: 1.0000 - val_loss: 1.5021 - val_accuracy: 0.8010
Epoch 15/15
782/782 [==============================] - 40s 52ms/step - loss: 2.2740e-06 - accuracy: 1.0000 - val_loss: 1.5252 - val_accuracy: 0.8012
evaluate model
782/782 [==============================] - 3s 4ms/step - loss: 1.5252 - accuracy: 0.8012
test score: 1.525183916091919
test accuracy: 0.8012400269508362
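The log shows heavy overfitting: training accuracy reaches 1.0 while the validation loss climbs steadily after the first epoch. As a hedged aside, not part of the original run, a common counter-measure is early stopping on the validation loss, reusing the model and data defined above:

from keras.callbacks import EarlyStopping

# Stop once val_loss has not improved for 2 consecutive epochs and roll back
# to the best weights seen so far.
early_stop = EarlyStopping(monitor='val_loss', patience=2,
                           restore_best_weights=True)

model.fit(x_train, y_train, batch_size=batch_size, epochs=15,
          validation_data=(x_test, y_test), callbacks=[early_stop])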
2. Recurrent neural network (RNN)

# 2. Simple recurrent network
from keras.layers import SimpleRNN

print('build model...')
model = Sequential()
model.add(Embedding(max_features, 128))
model.add(SimpleRNN(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
print(model.summary())

print('train model ...')
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=batch_size, epochs=15, validation_data=(x_test, y_test))

print('evaluate model')
score, acc = model.evaluate(x_test, y_test, batch_size=batch_size)
print('test score:', score)
print('test accuracy', acc)

['x_test', 'x_train', 'y_train', 'y_test']
loading data...
25000 train sequences
25000 test sequences
pad sequences (samples x time)
x_train shape: (25000, 80)
x_test shape: (25000, 80)
build model...
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
embedding (Embedding)        (None, None, 128)         2560000
_________________________________________________________________
simple_rnn (SimpleRNN)       (None, 128)               32896
_________________________________________________________________
dense (Dense)                (None, 1)                 129
=================================================================
Total params: 2,593,025
Trainable params: 2,593,025
Non-trainable params: 0
_________________________________________________________________
None
train model ...
Epoch 1/15
782/782 [==============================] - 61s 78ms/step - loss: 0.6517 - accuracy: 0.5940 - val_loss: 0.5279 - val_accuracy: 0.7306
Epoch 2/15
782/782 [==============================] - 59s 75ms/step - loss: 0.5708 - accuracy: 0.7026 - val_loss: 0.6750 - val_accuracy: 0.5628
Epoch 3/15
782/782 [==============================] - 58s 75ms/step - loss: 0.5976 - accuracy: 0.6772 - val_loss: 0.6126 - val_accuracy: 0.6674
Epoch 4/15
782/782 [==============================] - 60s 76ms/step - loss: 0.5226 - accuracy: 0.7443 - val_loss: 0.5488 - val_accuracy: 0.7331
Epoch 5/15
782/782 [==============================] - 61s 78ms/step - loss: 0.4604 - accuracy: 0.7878 - val_loss: 0.5284 - val_accuracy: 0.7459
Epoch 6/15
782/782 [==============================] - 60s 76ms/step - loss: 0.4495 - accuracy: 0.8003 - val_loss: 0.7158 - val_accuracy: 0.6234
Epoch 7/15
782/782 [==============================] - 60s 76ms/step - loss: 0.5264 - accuracy: 0.7410 - val_loss: 0.6283 - val_accuracy: 0.6299
Epoch 8/15
782/782 [==============================] - 59s 76ms/step - loss: 0.5444 - accuracy: 0.7078 - val_loss: 0.6397 - val_accuracy: 0.6372
Epoch 9/15
782/782 [==============================] - 63s 80ms/step - loss: 0.5173 - accuracy: 0.7301 - val_loss: 0.6425 - val_accuracy: 0.6427
Epoch 10/15
782/782 [==============================] - 62s 79ms/step - loss: 0.5204 - accuracy: 0.7289 - val_loss: 0.6761 - val_accuracy: 0.5485
Epoch 11/15
782/782 [==============================] - 64s 81ms/step - loss: 0.5403 - accuracy: 0.7048 - val_loss: 0.6783 - val_accuracy: 0.6425
Epoch 12/15
782/782 [==============================] - 62s 80ms/step - loss: 0.4827 - accuracy: 0.7600 - val_loss: 0.6377 - val_accuracy: 0.6850
Epoch 13/15
782/782 [==============================] - 66s 84ms/step - loss: 0.4288 - accuracy: 0.8037 - val_loss: 0.6632 - val_accuracy: 0.6998
Epoch 14/15
782/782 [==============================] - 58s 75ms/step - loss: 0.4031 - accuracy: 0.8217 - val_loss: 0.6253 - val_accuracy: 0.7195
Epoch 15/15
782/782 [==============================] - 59s 76ms/step - loss: 0.3763 - accuracy: 0.8329 - val_loss: 0.6411 - val_accuracy: 0.7197
evaluate model
782/782 [==============================] - 6s 7ms/step - loss: 0.6411 - accuracy: 0.7197
test score: 0.6411298513412476
test accuracy 0.7197200059890747
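Training is visibly unstable: validation accuracy swings between roughly 0.55 and 0.75 from epoch to epoch, which is typical of plain RNNs and their exploding/vanishing gradients. As a hedged tweak that was not used in the run above, clipping the gradient norm in the optimizer often steadies such training:

from keras.optimizers import Adam

# Clip the global gradient norm to 1.0 before each update;
# everything else about the model stays the same.
model.compile(loss='binary_crossentropy',
              optimizer=Adam(clipnorm=1.0),
              metrics=['accuracy'])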
3. Long short-term memory network (LSTM)

# 3. LSTM
from keras.layers import LSTM

print('build model...')
model = Sequential()
model.add(Embedding(max_features, 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
print(model.summary())

print('train model...')
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=batch_size, epochs=15, validation_data=(x_test, y_test))

score, acc = model.evaluate(x_test, y_test, batch_size=batch_size)
print('test score:', score)
print('test accuracy:', acc)

['x_test', 'x_train', 'y_train', 'y_test']
loading data...
25000 train sequences
25000 test sequences
pad sequences (samples x time)
x_train shape: (25000, 80)
x_test shape: (25000, 80)
build model...
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
embedding (Embedding)        (None, None, 128)         2560000
_________________________________________________________________
lstm (LSTM)                  (None, 128)               131584
_________________________________________________________________
dense (Dense)                (None, 1)                 129
=================================================================
Total params: 2,691,713
Trainable params: 2,691,713
Non-trainable params: 0
_________________________________________________________________
None
train model...
Epoch 1/15
782/782 [==============================] - 143s 183ms/step - loss: 0.4282 - accuracy: 0.7948 - val_loss: 0.3906 - val_accuracy: 0.8238
Epoch 2/15
782/782 [==============================] - 163s 208ms/step - loss: 0.2539 - accuracy: 0.8988 - val_loss: 0.4069 - val_accuracy: 0.8275
Epoch 3/15
782/782 [==============================] - 157s 201ms/step - loss: 0.1675 - accuracy: 0.9362 - val_loss: 0.4504 - val_accuracy: 0.8333
Epoch 4/15
782/782 [==============================] - 139s 178ms/step - loss: 0.1095 - accuracy: 0.9588 - val_loss: 0.5762 - val_accuracy: 0.8237
Epoch 5/15
782/782 [==============================] - 141s 181ms/step - loss: 0.0687 - accuracy: 0.9750 - val_loss: 0.7429 - val_accuracy: 0.8143
Epoch 6/15
782/782 [==============================] - 143s 183ms/step - loss: 0.0487 - accuracy: 0.9838 - val_loss: 0.7675 - val_accuracy: 0.8168
Epoch 7/15
782/782 [==============================] - 145s 185ms/step - loss: 0.0452 - accuracy: 0.9856 - val_loss: 0.7087 - val_accuracy: 0.8148
Epoch 8/15
782/782 [==============================] - 143s 183ms/step - loss: 0.0282 - accuracy: 0.9907 - val_loss: 0.8704 - val_accuracy: 0.8238
Epoch 9/15
782/782 [==============================] - 145s 185ms/step - loss: 0.0243 - accuracy: 0.9923 - val_loss: 0.8190 - val_accuracy: 0.8186
Epoch 10/15
782/782 [==============================] - 141s 181ms/step - loss: 0.0192 - accuracy: 0.9938 - val_loss: 1.0608 - val_accuracy: 0.8142
Epoch 11/15
782/782 [==============================] - 141s 181ms/step - loss: 0.0203 - accuracy: 0.9935 - val_loss: 0.9003 - val_accuracy: 0.8192
Epoch 12/15
782/782 [==============================] - 142s 181ms/step - loss: 0.0146 - accuracy: 0.9950 - val_loss: 0.9895 - val_accuracy: 0.8158
Epoch 13/15
782/782 [==============================] - 141s 180ms/step - loss: 0.0110 - accuracy: 0.9967 - val_loss: 0.9612 - val_accuracy: 0.8224
Epoch 14/15
782/782 [==============================] - 140s 179ms/step - loss: 0.0081 - accuracy: 0.9976 - val_loss: 1.1151 - val_accuracy: 0.8215
Epoch 15/15
782/782 [==============================] - 172s 221ms/step - loss: 0.0096 - accuracy: 0.9965 - val_loss: 1.0569 - val_accuracy: 0.8200
782/782 [==============================] - 15s 20ms/step - loss: 1.0569 - accuracy: 0.8200
test score: 1.0568664073944092
test accuracy: 0.8199999928474426
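The imports earlier also mention GRU, the gated recurrent unit, but no GRU model is shown. Because all Keras recurrent layers share the same interface, a GRU variant is a drop-in swap; a minimal sketch (not run here, so no numbers are claimed):

from keras.layers import GRU

model = Sequential()
model.add(Embedding(max_features, 128))
# Same interface as SimpleRNN and LSTM; only the cell type changes.
model.add(GRU(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])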
4. Bidirectional recurrent network: BiLSTM

# 4. Bidirectional LSTM (BiLSTM)
from keras.layers import Bidirectional, Dropout, LSTM

print('build model...')
model = Sequential()
model.add(Embedding(max_features, 128, input_length=maxlen))
model.add(Bidirectional(LSTM(64)))  # forward and backward outputs are concatenated: 64 * 2 = 128
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
print(model.summary())

print('train model...')
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=batch_size, epochs=4, validation_data=(x_test, y_test))

score, acc = model.evaluate(x_test, y_test, batch_size=batch_size)
print('test score:', score)
print('test accuracy:', acc)

['x_test', 'x_train', 'y_train', 'y_test']
loading data...
25000 train sequences
25000 test sequences
pad sequences (samples x time)
x_train shape: (25000, 80)
x_test shape: (25000, 80)
build model...
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
embedding (Embedding)        (None, 80, 128)           2560000
_________________________________________________________________
bidirectional (Bidirectional (None, 128)               98816
_________________________________________________________________
dropout (Dropout)            (None, 128)               0
_________________________________________________________________
dense (Dense)                (None, 1)                 129
=================================================================
Total params: 2,658,945
Trainable params: 2,658,945
Non-trainable params: 0
_________________________________________________________________
None
train model...
Epoch 1/4
782/782 [==============================] - 67s 86ms/step - loss: 0.4214 - accuracy: 0.8031 - val_loss: 0.3565 - val_accuracy: 0.8408
Epoch 2/4
782/782 [==============================] - 66s 84ms/step - loss: 0.2359 - accuracy: 0.9062 - val_loss: 0.3671 - val_accuracy: 0.8400
Epoch 3/4
782/782 [==============================] - 69s 88ms/step - loss: 0.1265 - accuracy: 0.9530 - val_loss: 0.4603 - val_accuracy: 0.8272
Epoch 4/4
782/782 [==============================] - 71s 91ms/step - loss: 0.0588 - accuracy: 0.9803 - val_loss: 0.6167 - val_accuracy: 0.8301
782/782 [==============================] - 10s 13ms/step - loss: 0.6167 - accuracy: 0.8301
test score: 0.616662323474884
test accuracy: 0.8301200270652771
5. Bidirectional recurrent network: BRNN

# 5. Bidirectional SimpleRNN (BRNN)
from keras.layers import Bidirectional, Dropout, SimpleRNN

print('build model...')
model = Sequential()
model.add(Embedding(max_features, 128, input_length=maxlen))
model.add(Bidirectional(SimpleRNN(64)))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
print(model.summary())

print('train model...')
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=batch_size, epochs=4, validation_data=(x_test, y_test))

score, acc = model.evaluate(x_test, y_test, batch_size=batch_size)
print('test score:', score)
print('test accuracy:', acc)

['x_test', 'x_train', 'y_train', 'y_test']
loading data...
25000 train sequences
25000 test sequences
pad sequences (samples x time)
x_train shape: (25000, 80)
x_test shape: (25000, 80)
build model...
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
embedding (Embedding)        (None, 80, 128)           2560000
_________________________________________________________________
bidirectional (Bidirectional (None, 128)               24704
_________________________________________________________________
dropout (Dropout)            (None, 128)               0
_________________________________________________________________
dense (Dense)                (None, 1)                 129
=================================================================
Total params: 2,584,833
Trainable params: 2,584,833
Non-trainable params: 0
_________________________________________________________________
None
train model...
Epoch 1/4
782/782 [==============================] - 45s 58ms/step - loss: 0.5930 - accuracy: 0.6672 - val_loss: 0.4574 - val_accuracy: 0.7922
Epoch 2/4
782/782 [==============================] - 46s 59ms/step - loss: 0.4713 - accuracy: 0.7875 - val_loss: 0.6302 - val_accuracy: 0.6385
Epoch 3/4
782/782 [==============================] - 46s 59ms/step - loss: 0.3216 - accuracy: 0.8634 - val_loss: 0.5944 - val_accuracy: 0.7207
Epoch 4/4
782/782 [==============================] - 47s 60ms/step - loss: 0.1346 - accuracy: 0.9500 - val_loss: 0.5984 - val_accuracy: 0.7755
782/782 [==============================] - 6s 8ms/step - loss: 0.5984 - accuracy: 0.7755
test score: 0.5983877182006836
test accuracy: 0.7754799723625183
6. Recurrent network with a convolutional layer (ConvLSTM)

Despite the name used here, this model is a Conv1D layer feeding an ordinary LSTM, not Keras's ConvLSTM2D layer.

from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Embedding, Flatten
from keras.datasets import imdb

# Embedding
max_features = 20000
maxlen = 100
embedding_size = 128

# Convolution
kernel_size = 5
filters = 64
pool_size = 4

# LSTM
lstm_output_size = 70

# Training
batch_size = 30
epochs = 2

import numpy as np
data = np.load('imdb.npz', allow_pickle=True)
print(data.files)
print('loading data...')
# keep only the 20000 most frequent words in the dataset
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
# x_train = data['x_train']
# y_train = data['y_train']
# x_test = data['x_test']
# y_test = data['y_test']
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')

# Truncate and pad the sequences to a common length.
print('pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)

# 6. Recurrent network with a convolutional layer
from keras.layers import Conv1D, MaxPool1D, Dropout, LSTM

print('build model...')
model = Sequential()
model.add(Embedding(max_features, embedding_size, input_length=maxlen))
model.add(Dropout(0.25))
model.add(Conv1D(filters, kernel_size, padding='valid', activation='relu', strides=1))
model.add(MaxPool1D(pool_size=pool_size))
model.add(LSTM(lstm_output_size))
model.add(Dense(1, activation='sigmoid'))
print(model.summary())

print('train model...')
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, validation_data=(x_test, y_test))

score, acc = model.evaluate(x_test, y_test, batch_size=batch_size)
print('test score:', score)
print('test accuracy:', acc)

['x_test', 'x_train', 'y_train', 'y_test']
loading data...
25000 train sequences
25000 test sequences
pad sequences (samples x time)
x_train shape: (25000, 100)
x_test shape: (25000, 100)
build model...
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
embedding (Embedding)        (None, 100, 128)          2560000
_________________________________________________________________
dropout (Dropout)            (None, 100, 128)          0
_________________________________________________________________
conv1d (Conv1D)              (None, 96, 64)            41024
_________________________________________________________________
max_pooling1d (MaxPooling1D) (None, 24, 64)            0
_________________________________________________________________
lstm (LSTM)                  (None, 70)                37800
_________________________________________________________________
dense (Dense)                (None, 1)                 71
=================================================================
Total params: 2,638,895
Trainable params: 2,638,895
Non-trainable params: 0
_________________________________________________________________
None
train model...
Epoch 1/2
834/834 [==============================] - 52s 62ms/step - loss: 0.3899 - accuracy: 0.8119 - val_loss: 0.3155 - val_accuracy: 0.8617
Epoch 2/2
834/834 [==============================] - 51s 61ms/step - loss: 0.1965 - accuracy: 0.9246 - val_loss: 0.3520 - val_accuracy: 0.8564
834/834 [==============================] - 5s 6ms/step - loss: 0.3520 - accuracy: 0.8564
test score: 0.35197147727012634
test accuracy: 0.856440007686615
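Pulling together the evaluation lines from the six IMDB runs above (the runs differ in maxlen, batch size and epoch count, so this is only a rough comparison):

Model                      Epochs   Test accuracy
Fully connected            15       0.8012
SimpleRNN                  15       0.7197
LSTM                       15       0.8200
BiLSTM                     4        0.8301
Bidirectional SimpleRNN    4        0.7755
Conv1D + LSTM              2        0.8564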
1. Fitting a cosine function with an LSTM

from keras.models import Sequential
from keras.layers import LSTM, Dense
import numpy as np
import matplotlib.pyplot as plt

dataset = np.cos(np.arange(1000) * (20 * np.pi / 1000))
plt.plot(dataset)
plt.show()

# look_back sets how many past time steps are used to predict the future.
# look_back=1: use the value at time t to predict the value at time t+1.
def create_dataset(dataset, look_back=1):
    dataX, dataY = [], []
    for i in range(len(dataset) - look_back):
        dataX.append(dataset[i:(i + look_back)])
        dataY.append(dataset[i + look_back])
    return np.array(dataX), np.array(dataY)

# x, y = create_dataset(dataset, look_back=1)
# print(x.shape)
# print(y.shape)

look_back = 1

# Split into training and test sets.
train_size = int(len(dataset) * 0.7)
test_size = len(dataset) - train_size
train, test = dataset[:train_size], dataset[train_size:]

# Generate the time series samples.
trainX, trainY = create_dataset(train, look_back)
testX, testY = create_dataset(test, look_back)

# Reshape to (samples, timesteps, features).
trainX = np.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1]))
testX = np.reshape(testX, (testX.shape[0], 1, testX.shape[1]))
print('trainX.shape:', trainX.shape)
print('testX.shape:', testX.shape)
print('trainY.shape:', trainY.shape)
print('testY.shape:', testY.shape)

print('build model...')
model = Sequential()
model.add(LSTM(32, input_shape=(look_back, 1)))
# Note: sigmoid confines the output to (0, 1) although the cosine target lies
# in [-1, 1], and 'accuracy' is not meaningful for regression; both problems
# show in the log below, and the improved version drops them.
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='mse', optimizer='adam', metrics=['accuracy'])
model.fit(trainX, trainY, batch_size=32, epochs=10, validation_data=(testX, testY))

# Predict.
trainPredict = model.predict(trainX)
testPredict = model.predict(testX)

trainPredictPlot = np.zeros(shape=(len(dataset), 1))
trainPredictPlot[:] = np.nan
trainPredictPlot[look_back:len(trainPredict) + look_back, :] = trainPredict

testPredictPlot = np.zeros(shape=(len(dataset), 1))
testPredictPlot[:] = np.nan
testPredictPlot[len(trainPredict) + 1:len(dataset) - 1, :] = testPredict

plt.plot(dataset, label='origin')
plt.plot(trainPredictPlot, label='trainPredict')
plt.plot(testPredictPlot, label='testPredict')
plt.legend(loc='upper right')
plt.show()

trainX.shape: (699, 1, 1)
testX.shape: (299, 1, 1)
trainY.shape: (699,)
testY.shape: (299,)
build model...
Epoch 1/10
22/22 [==============================] - 1s 28ms/step - loss: 0.7289 - accuracy: 0.0086 - val_loss: 0.7152 - val_accuracy: 0.0067
Epoch 2/10
22/22 [==============================] - 0s 3ms/step - loss: 0.6999 - accuracy: 0.0086 - val_loss: 0.6856 - val_accuracy: 0.0067
Epoch 3/10
22/22 [==============================] - 0s 2ms/step - loss: 0.6695 - accuracy: 0.0086 - val_loss: 0.6518 - val_accuracy: 0.0067
Epoch 4/10
22/22 [==============================] - 0s 2ms/step - loss: 0.6337 - accuracy: 0.0086 - val_loss: 0.6145 - val_accuracy: 0.0067
Epoch 5/10
22/22 [==============================] - 0s 3ms/step - loss: 0.5942 - accuracy: 0.0086 - val_loss: 0.5724 - val_accuracy: 0.0067
Epoch 6/10
22/22 [==============================] - 0s 2ms/step - loss: 0.5508 - accuracy: 0.0086 - val_loss: 0.5289 - val_accuracy: 0.0067
Epoch 7/10
22/22 [==============================] - 0s 2ms/step - loss: 0.5073 - accuracy: 0.0086 - val_loss: 0.4866 - val_accuracy: 0.0067
Epoch 8/10
22/22 [==============================] - 0s 2ms/step - loss: 0.4674 - accuracy: 0.0086 - val_loss: 0.4476 - val_accuracy: 0.0067
Epoch 9/10
22/22 [==============================] - 0s 2ms/step - loss: 0.4320 - accuracy: 0.0086 - val_loss: 0.4160 - val_accuracy: 0.0067
Epoch 10/10
22/22 [==============================] - 0s 2ms/step - loss: 0.4031 - accuracy: 0.0086 - val_loss: 0.3905 - val_accuracy: 0.0067

The loss is still around 0.39 after 10 epochs: with look_back=1 and a sigmoid output the model cannot even reach the negative half of the cosine.
2. Improvement: use more history

from keras.models import Sequential
from keras.layers import LSTM, Dense
import numpy as np
import matplotlib.pyplot as plt

dataset = np.cos(np.arange(1000) * (20 * np.pi / 1000))
plt.plot(dataset)
plt.show()

# look_back sets how many past time steps are used to predict the future.
def create_dataset(dataset, look_back=1):
    dataX, dataY = [], []
    for i in range(len(dataset) - look_back):
        dataX.append(dataset[i:(i + look_back)])
        dataY.append(dataset[i + look_back])
    return np.array(dataX), np.array(dataY)

# x, y = create_dataset(dataset, look_back=1)
# print(x.shape)
# print(y.shape)

look_back = 3

# Split into training and test sets.
train_size = int(len(dataset) * 0.7)
test_size = len(dataset) - train_size
train, test = dataset[:train_size], dataset[train_size:]

# Generate the time series samples.
trainX, trainY = create_dataset(train, look_back)
testX, testY = create_dataset(test, look_back)

# Reshape to (samples, timesteps, features): here the 3 past values become
# 3 features of a single time step.
trainX = np.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1]))
testX = np.reshape(testX, (testX.shape[0], 1, testX.shape[1]))
print('trainX.shape:', trainX.shape)
print('testX.shape:', testX.shape)
print('trainY.shape:', trainY.shape)
print('testY.shape:', testY.shape)

print('build model...')
# print(trainX)
model = Sequential()
# return_sequences=True keeps the time axis, so predictions come out with
# shape (samples, 1, 1) and are reshaped below; the sigmoid and the accuracy
# metric from the previous version are dropped.
model.add(LSTM(32, input_shape=(1, 3), return_sequences=True))
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')
model.fit(trainX, trainY, batch_size=32, epochs=10)

# Predict.
trainPredict = model.predict(trainX)
testPredict = model.predict(testX)
trainPredict = np.reshape(trainPredict, (len(trainPredict), 1))
testPredict = np.reshape(testPredict, (len(testPredict), 1))
print(trainPredict.shape)
print(testPredict.shape)

trainPredictPlot = np.zeros(shape=(len(dataset), 1))
trainPredictPlot[:] = np.nan
trainPredictPlot[look_back:len(trainPredict) + look_back, :] = trainPredict

testPredictPlot = np.zeros(shape=(len(dataset), 1))
testPredictPlot[:] = np.nan
testPredictPlot[len(trainPredict) + 1:len(dataset) - 5, :] = testPredict

plt.plot(dataset, label='origin')
plt.plot(trainPredictPlot, label='trainPredict')
plt.plot(testPredictPlot, label='testPredict')
plt.legend(loc='upper right')
plt.show()

trainX.shape: (697, 1, 3)
testX.shape: (297, 1, 3)
trainY.shape: (697,)
testY.shape: (297,)
build model...
Epoch 1/10
22/22 [==============================] - 0s 2ms/step - loss: 0.4728
Epoch 2/10
22/22 [==============================] - 0s 1ms/step - loss: 0.3422
Epoch 3/10
22/22 [==============================] - 0s 1ms/step - loss: 0.2302
Epoch 4/10
22/22 [==============================] - 0s 2ms/step - loss: 0.1375
Epoch 5/10
22/22 [==============================] - 0s 1ms/step - loss: 0.0694
Epoch 6/10
22/22 [==============================] - 0s 1ms/step - loss: 0.0295
Epoch 7/10
22/22 [==============================] - 0s 1ms/step - loss: 0.0129
Epoch 8/10
22/22 [==============================] - 0s 2ms/step - loss: 0.0083
Epoch 9/10
22/22 [==============================] - 0s 1ms/step - loss: 0.0077
Epoch 10/10
22/22 [==============================] - 0s 1ms/step - loss: 0.0076
(697, 1)
(297, 1)

With three past values as input, the loss falls to about 0.0076 within 10 epochs.
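In the code above, the 3 past values enter the LSTM as 3 features of a single time step (input_shape=(1, 3)). An equally valid and arguably more idiomatic layout treats them as 3 time steps of 1 feature, letting the LSTM actually step through the history; a minimal sketch of just the lines that change (assuming trainX, testX and look_back from above):

# Treat look_back as the time axis: (samples, timesteps=3, features=1).
trainX = np.reshape(trainX, (trainX.shape[0], look_back, 1))
testX = np.reshape(testX, (testX.shape[0], look_back, 1))

model = Sequential()
model.add(LSTM(32, input_shape=(look_back, 1)))  # steps through the 3 past values
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')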
3. LSTM: predicting multiple time steps

from keras.models import Sequential
from keras.layers import LSTM, Dense
import numpy as np
import matplotlib.pyplot as plt

dataset = np.cos(np.arange(1000) * (20 * np.pi / 1000))
plt.plot(dataset)
plt.show()

train_size = int(len(dataset) * 0.7)
test_size = len(dataset) - train_size
train, test = dataset[:train_size], dataset[train_size:]

def to_supervised(train, n_input, n_out=3):
    data = train
    X, y = list(), list()
    in_start = 0
    for _ in range(len(data)):
        # end position of the input sequence
        in_end = in_start + n_input
        # end position of the output sequence
        out_end = in_end + n_out
        # make sure the window does not run past the end of the data
        if out_end < len(data):
            x_input = data[in_start:in_end]
            x_input = x_input.reshape((len(x_input), 1))
            X.append(x_input)
            y.append(data[in_end:out_end])
        in_start += 1
    return np.asarray(X), np.asarray(y)

train_x, train_y = to_supervised(train, n_input=7)  # 7 inputs predict 3 outputs
print(train_x.shape)
print(train_y.shape)

verbose = 2
epochs = 10
batch_size = 32
n_timesteps = train_x.shape[1]
n_features = train_x.shape[2]
n_outputs = train_y.shape[1]

model = Sequential()
model.add(LSTM(200, activation='relu', input_shape=(n_timesteps, n_features)))
model.add(Dense(100, activation='relu'))
model.add(Dense(n_outputs))
model.compile(loss='mse', optimizer='adam')
model.fit(train_x, train_y, epochs=epochs, batch_size=batch_size, verbose=verbose)

# Predicting from the last 7 test values:
# n_input = 7
# input_x = test[-n_input:]
# print(input_x)
# print(input_x.shape)
# input_x = input_x.reshape((1, len(input_x), 1))
# print(input_x.shape)
# yhat = model.predict(input_x, verbose=2)
# print(yhat)

# Predicting from the first 7 test values:
# n_input = 7
# input_x = test[0:n_input]
# print(input_x)
# print(input_x.shape)
# input_x = input_x.reshape((1, len(input_x), 1))
# print(input_x.shape)
# yhat = model.predict(input_x, verbose=2)
# print(yhat)

test_x, test_y = to_supervised(test, n_input=7)  # 7 inputs predict 3 outputs
yhat = model.predict(test_x, verbose=2)
print(yhat)

(690, 7, 1)
(690, 3)
Epoch 1/10
22/22 - 0s - loss: 0.2520
Epoch 2/10
22/22 - 0s - loss: 0.0453
Epoch 3/10
22/22 - 0s - loss: 0.0276
Epoch 4/10
22/22 - 0s - loss: 0.0174
Epoch 5/10
22/22 - 0s - loss: 0.0100
Epoch 6/10
22/22 - 0s - loss: 0.0026
Epoch 7/10
22/22 - 0s - loss: 0.0010
Epoch 8/10
22/22 - 0s - loss: 0.0017
Epoch 9/10
22/22 - 0s - loss: 0.0011
Epoch 10/10
22/22 - 0s - loss: 7.8560e-04
10/10 - 0s
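The script prints the raw predictions but never scores them. As a small follow-up sketch, the 3-step forecasts can be compared against test_y with a plain mean squared error:

# Overall MSE across all 3-step forecasts (yhat and test_y from above).
mse = np.mean((yhat - test_y) ** 2)
print('test MSE:', mse)

# Per-horizon MSE: how the error changes from step t+1 to step t+3.
for step in range(test_y.shape[1]):
    step_mse = np.mean((yhat[:, step] - test_y[:, step]) ** 2)
    print('step %d MSE:' % (step + 1), step_mse)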