Softmax regression generalizes the logistic regression model to multi-class problems, where the class label y can take more than two values. In machine learning, and in deep learning especially, softmax is a very common and important function, widely used in multi-class scenarios. It maps its inputs to real numbers between 0 and 1 and normalizes them so they sum to 1, which means the predicted probabilities over all classes also sum to exactly 1.
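As a minimal sketch of that mapping (the function name and the sample scores below are just for illustration), softmax can be written in a few lines of NumPy:

```python
import numpy as np

def softmax(z):
    """Map a vector of raw scores to probabilities in (0, 1) that sum to 1."""
    e = np.exp(z - np.max(z))  # subtract the max for numerical stability
    return e / e.sum()

print(softmax(np.array([2.0, 1.0, 0.1])))
# >> [0.65900114 0.24243297 0.09856589]  -- note the three values sum to 1
```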
Keras is a high-level neural network API written in Python that can run on top of TensorFlow, CNTK, or Theano. Keras makes the concept of a layer explicit, while the parameters between layers are objects the user does not need to manage, so for ordinary developers Keras is easier to pick up than raw TensorFlow when building neural networks. It is also the API that TensorFlow has officially and strongly recommended since TensorFlow 2.0.
Below we use tf.keras to implement a multilayer perceptron for multi-class classification, then train this model on the fashion_mnist dataset and use it for prediction.
The fashion_mnist dataset is similar to the handwritten-digit dataset: the training set has 60,000 images and the test set has 10,000, spread over ten classes (coats, shoes, T-shirts, and so on). Each image is 28×28 pixels, and every pixel is a grayscale value between 0 and 255.
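For reference, the ten label indices in Fashion-MNIST correspond to the following class names (this mapping follows the dataset's documentation; the variable name `class_names` is our own):

```python
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
```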
First, import the packages we will need:
```python
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
%matplotlib inline
```

Split the dataset: 60,000 images and their labels are used for training, and the other 10,000 images and labels are used for testing.

```python
## split the fashion_mnist dataset
(train_image, train_label), (test_image, test_label) = tf.keras.datasets.fashion_mnist.load_data()
print(train_image.shape, train_label.shape)
print(test_image.shape, test_label.shape)

>> (60000, 28, 28) (60000,)
   (10000, 28, 28) (10000,)
```

Take a look at the data: the first image in the training set is a shoe, and its label is 9.

```python
plt.imshow(train_image[0])
train_label[0]

>> 9
```

```python
train_image[0]  # grayscale values in the range 0-255 (output abbreviated here)

>> array([[  0,   0,   0, ...,   0,   0,   0],
          [  0,   0,   0, ...,   0,   0,   0],
          ...,
          [  0,   0,   0, ...,   0,   0,   0]], dtype=uint8)
```
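With the `class_names` list from above we can also show the image together with a human-readable label (a small illustrative sketch; the `cmap` argument is optional):

```python
plt.imshow(train_image[0], cmap='gray')
plt.title(class_names[train_label[0]])  # label 9 -> 'Ankle boot'
plt.show()
```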
Since the pixel values are in the range 0-255, normalizing the images only requires dividing each one by 255. We then build the model on the normalized data; because the output covers ten classes, the last layer must use a softmax activation.

```python
# normalize the data
train_image = train_image / 255
test_image = test_image / 255

# build the model
model = tf.keras.Sequential()
model.add(tf.keras.layers.Flatten(input_shape=(28, 28)))  # flatten each 2D image: (28, 28) -> (784,)
model.add(tf.keras.layers.Dense(128, activation="relu"))
# model.add(tf.keras.layers.Dense(64, activation="relu"))
model.add(tf.keras.layers.Dense(10, activation="softmax"))
model.summary()

>> Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
flatten (Flatten)            (None, 784)               0
_________________________________________________________________
dense (Dense)                (None, 128)               100480
_________________________________________________________________
dense_1 (Dense)              (None, 10)                1290
=================================================================
Total params: 101,770
Trainable params: 101,770
Non-trainable params: 0
_________________________________________________________________
```

Next we compile the model and train it.

```python
# when the labels are integer-encoded, use sparse_categorical_crossentropy as the loss;
# with one-hot labels, use categorical_crossentropy
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["acc"])

# train the model
history = model.fit(train_image, train_label, epochs=10)

>> Epoch 1/10
60000/60000 [==============================] - 3s 49us/sample - loss: 0.4967 - acc: 0.8255
Epoch 2/10
60000/60000 [==============================] - 2s 41us/sample - loss: 0.3736 - acc: 0.8658
Epoch 3/10
60000/60000 [==============================] - 3s 43us/sample - loss: 0.3374 - acc: 0.8773
Epoch 4/10
60000/60000 [==============================] - 2s 41us/sample - loss: 0.3118 - acc: 0.8858
Epoch 5/10
60000/60000 [==============================] - 2s 41us/sample - loss: 0.2958 - acc: 0.8913
Epoch 6/10
60000/60000 [==============================] - 2s 39us/sample - loss: 0.2806 - acc: 0.8963
Epoch 7/10
60000/60000 [==============================] - 2s 36us/sample - loss: 0.2691 - acc: 0.9012
Epoch 8/10
60000/60000 [==============================] - 2s 39us/sample - loss: 0.2567 - acc: 0.9047
Epoch 9/10
60000/60000 [==============================] - 2s 33us/sample - loss: 0.2440 - acc: 0.9092
Epoch 10/10
60000/60000 [==============================] - 2s 32us/sample - loss: 0.2370 - acc: 0.9116
```

Plot the curves to see how training went.

```python
history.history.keys()  # the training loss and accuracy can be read out of this dict
y_loss = history.history.get('loss')
y_acc = history.history.get('acc')

# plt.figure(figsize=(20, 8), dpi=80)
plt.plot(history.epoch, y_acc)
plt.show()
plt.plot(history.epoch, y_loss)
plt.show()
```

Finally, we assess the model with the evaluate method.

```python
# evaluate the model
print(model.evaluate(test_image, test_label))
model.evaluate(train_image, train_label)

>> 10000/10000 [==============================] - 0s 36us/sample - loss: 0.3444 - acc: 0.8922
[0.3443740872859955, 0.8922]
60000/60000 [==============================] - 2s 31us/sample - loss: 0.1544 - acc: 0.9440
[0.15438867097496986, 0.9440333]
```

We can see that the training accuracy was still rising (and the loss still falling) at the last epoch, which suggests the model is undertrained and has room to improve; we could train for more epochs or add more layers to the network.
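Since the goal stated at the start was to both train and predict, here is a minimal prediction sketch: `predict` returns one softmax probability vector per image, and `np.argmax` picks the most likely class for each (`class_names` is the illustrative list defined near the top).

```python
import numpy as np

probs = model.predict(test_image)      # shape (10000, 10): one probability vector per image
pred_label = np.argmax(probs, axis=1)  # most likely class index for each image

print(pred_label[0], test_label[0])    # compare the first prediction with the true label
print(class_names[pred_label[0]])      # human-readable class name
```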