You didn't include how you prepare the data, so here is one addition that makes this network learn considerably better:
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
If you normalize the data this way, your network is fine: after 5 epochs it reaches a test accuracy of 65-70%, which is a good result. Note that 5 epochs is only a start; it takes roughly 30-50 epochs to really learn the data and show results approaching the state of the art.
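The cast-then-divide order above matters: dividing the original uint8 array in place would fail (or floor everything with integer division). A minimal numpy check, using a tiny toy array in place of the real image data:

```python
import numpy as np

# Toy stand-in for image data: uint8 pixels in [0, 255],
# the same dtype and range that cifar10.load_data() returns.
x = np.array([[0, 127, 255]], dtype='uint8')

# Cast to float first, then scale into [0, 1].
x = x.astype('float32')
x /= 255

print(x.min(), x.max())  # 0.0 1.0
```

Inputs in a small, consistent range like [0, 1] keep the early gradients well-scaled, which is why this single change speeds up learning so much.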
Along the way I folded in a few minor tweaks I noticed that buy you some extra performance. Here is the final code:
from keras.datasets import cifar10
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from keras.optimizers import Adam
from keras.utils import to_categorical

(x_train, y_train), (x_test, y_test) = cifar10.load_data()
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# One-hot encode the 10 class labels
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)

model = Sequential()
model.add(Conv2D(filters=64,
                 kernel_size=(3, 3),
                 activation='relu',
                 kernel_initializer='he_normal',
                 input_shape=(32, 32, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(filters=256,
                 kernel_size=(2, 2),
                 kernel_initializer='he_normal',
                 activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.4))
model.add(Dense(10, activation='softmax'))

# Note: the optimizer class is Adam, not adam
model.compile(optimizer=Adam(),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Scale pixel values from [0, 255] into [0, 1]
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255

model.fit(x_train, y_train,
          batch_size=500,
          epochs=5,
          verbose=1,
          validation_data=(x_test, y_test))

loss, accuracy = model.evaluate(x_test, y_test)
print('loss: ', loss, '\naccuracy: ', accuracy)
Results after 5 epochs:
loss: 0.822134458447
accuracy: 0.7126
By the way, you may be interested in comparing your approach with the Keras example CIFAR-10 conv net.
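If you do run the longer 30-50 epoch training mentioned above, light data augmentation usually helps as well (the Keras CIFAR-10 example trains with random shifts and horizontal flips). A self-contained numpy sketch of just the horizontal-flip part, with hypothetical names, assuming NHWC-shaped batches like x_train:

```python
import numpy as np

def random_horizontal_flip(batch, rng):
    """Mirror each image left-right with probability 0.5.

    batch: float array of shape (N, H, W, C), e.g. a slice of x_train.
    rng:   a numpy Generator, so the augmentation is reproducible.
    """
    flip = rng.random(len(batch)) < 0.5   # one coin flip per image
    out = batch.copy()
    out[flip] = out[flip][:, :, ::-1, :]  # reverse the width axis
    return out

rng = np.random.default_rng(0)
batch = rng.random((8, 32, 32, 3)).astype('float32')
augmented = random_horizontal_flip(batch, rng)
print(augmented.shape)  # (8, 32, 32, 3): shape unchanged, orientation varies
```

Mirroring is label-preserving for CIFAR-10's object classes, so it effectively enlarges the training set for free; in practice you would apply it (or keras' ImageDataGenerator) per batch during the longer training run.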