Machine Learning Exercise Contents
I. Understanding the characteristics of various face-image feature extraction methods
  1. HOG
  2. Dlib
  3. Convolutional Neural Networks (CNN)
II. CNN on the smile dataset (genki4k): splitting positive/negative samples, training and testing the model, reporting training and test accuracy, and building a smile/non-smile classifier
  1. Check the runtime environment
  2. Prepare the dataset
  3. Network model
  4. Data preprocessing
  5. Start training
  6. Use data augmentation
  7. Classify smile vs. non-smile with the trained model
III. CNN on a face-mask dataset: splitting positive/negative samples, training and testing the model, reporting training and test accuracy, and building a mask/no-mask classifier
  1. Prepare the dataset
  2. Network model
  3. Data preprocessing
  4. Start training
  5. Use data augmentation
  6. Classify mask vs. no-mask with the trained model
IV. A program that captures my face from a webcam and classifies the expression in real time (smile/non-smile, mask/no-mask), printing the class label
  1. Real-time smile/non-smile classification (printing the class label)
  2. Real-time mask/no-mask classification (printing the class label)
I. Understanding the characteristics of various face-image feature extraction methods
1. HOG
Introduction: The Histogram of Oriented Gradients (HOG) is a feature descriptor used for object detection in computer vision and image processing. It builds features by computing and accumulating histograms of gradient orientations over local regions of an image.

Key idea: within an image, the appearance and shape of a local object can be described well by the density distribution of gradient (edge) directions. In essence, HOG is gradient statistics, and gradients live mainly at edges.

Implementation: first divide the image into small connected regions called cells; then collect a histogram of the gradient (or edge) orientations of the pixels in each cell; finally, concatenate these histograms to form the feature descriptor.

Steps: grayscale conversion, gradient computation, per-cell orientation histograms over a grid, block descriptors, block-descriptor normalization, assembling the feature vector for the detection window, and matching.

Advantages: (1) HOG operates on local cells of the image, so it stays largely invariant to geometric and photometric deformations, which only show up over larger spatial regions; (2) under coarse spatial sampling, fine orientation sampling, and strong local photometric normalization, pedestrians can be detected as long as they remain roughly upright; small limb movements can be ignored without hurting detection.

In summary, HOG features are particularly well suited to detecting people in images.
Reference: https://zhuanlan.zhihu.com/p/104670289
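The per-cell histogram idea above can be sketched in plain NumPy. This is a simplified illustration only, not a full HOG implementation (no block normalization; unsigned gradients; the 8x8 cell size and 9 bins are the conventional choices, assumed here):

```python
import numpy as np

def cell_orientation_histograms(gray, cell=8, bins=9):
    """Simplified HOG step: per-cell histograms of gradient orientation,
    weighted by gradient magnitude (no block normalization)."""
    gx = np.zeros_like(gray, dtype=float)
    gy = np.zeros_like(gray, dtype=float)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]     # horizontal central difference
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]     # vertical central difference
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180   # unsigned orientation in [0, 180)
    h, w = gray.shape
    ch, cw = h // cell, w // cell
    hist = np.zeros((ch, cw, bins))
    for i in range(ch):
        for j in range(cw):
            a = ang[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            idx = (a / (180 / bins)).astype(int) % bins
            for b in range(bins):
                hist[i, j, b] = m[idx == b].sum()
    return hist.ravel()  # concatenated histograms form the descriptor

feat = cell_orientation_histograms(np.random.rand(64, 64))
print(feat.shape)  # (576,) = 8 cells x 8 cells x 9 bins
```

A production descriptor (e.g. `skimage.feature.hog`) would additionally group cells into overlapping blocks and L2-normalize each block.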
2. Dlib
Introduction: Dlib is an excellent cross-platform open-source C++ toolkit with very good performance. It offers a rich set of features, including linear algebra, image processing, machine learning, networking, and optimization, and it also provides a Python interface.

Core principle: Dlib's face detector represents faces with HOG features, which, compared with other feature extraction operators, stay largely invariant to geometric and photometric deformation. Together with LBP and Haar features, HOG is one of the three classic image features, and it is usually paired with a support vector machine (SVM) for object detection. The rough pipeline:

1. Extract HOG features from the positive set (images containing faces) to get HOG descriptors, and likewise from the negative set (images without faces). The negative set should be far larger than the positive set; negative images can be obtained by randomly cropping pictures that contain no faces.
2. Train an SVM on the positive and negative samples. This is plainly a binary classification problem, and it yields an initial model.
3. Use that model for hard-negative mining to improve the final model's discriminative power: repeatedly rescale the negative training images and slide a detection window over them at multiple scales (the multi-scale detection process); whenever the classifier falsely fires on a non-face region, crop that region and add it to the negative set. Gather these hard negatives, retrain, and iterate until the final model is obtained.
4. Apply the final classifier to a face image by sliding a window over it at multiple scales, extracting HOG features from each window, and classifying it. Windows judged to be faces are marked; since one sweep inevitably marks the same face several times, non-maximum suppression (NMS) finishes the job.
Reference: https://zhuanlan.zhihu.com/p/92132280
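The NMS step that finishes the detection pipeline above can be sketched in NumPy. This is a minimal greedy IoU-based version; the (x1, y1, x2, y2) box format, the sample boxes, and the 0.5 overlap threshold are illustrative assumptions:

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop boxes that overlap it too much, repeat on the remainder."""
    order = np.argsort(scores)[::-1]  # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # intersection of box i with all remaining boxes
        x1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        y1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        x2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        y2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, x2 - x1) * np.maximum(0, y2 - y1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_o = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_o - inter)
        order = order[1:][iou <= iou_thresh]  # discard heavy overlaps
    return keep

# two heavily overlapping detections of the same face, plus one separate box
boxes = np.array([[10, 10, 60, 60], [12, 12, 62, 62], [100, 100, 150, 150]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # [0, 2]
```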
3. Convolutional Neural Networks (CNN)
Introduction: Convolutional Neural Networks (CNNs) are a class of feed-forward neural networks with deep structure that include convolution operations; they are among the representative algorithms of deep learning. CNNs have representation-learning ability and can classify input in a translation-invariant way according to their hierarchical structure, which is why they are also called "shift-invariant artificial neural networks" (SIANN).

How CNNs work: CNNs are mainly used in image recognition. "CNN" names a family of networks rather than a single one, and the family contains many different architectures that usually perform differently. All CNNs ultimately turn an image into a feature vector. A network such as VGG, for example, reduces the image's dimensionality through stacked convolution, pooling, and fully connected layers, finally producing a one-dimensional vector. That vector holds the image's features, not features visible to the eye, but features meaningful to the network.
The CNN training process:
- Forward pass: the convolution, pooling, and fully connected operations
- Backward pass: backpropagation of the error
- Weight update

This loop repeats until the weights and thresholds stabilize.
CNN applications:
- Image classification
- Similar-image search
- Adversarial examples (for instance, take a picture of a cat and perturb the pixel data slightly so that it still looks like a cat to the eye, yet the network is led to call it a dog)
Reference: https://zhuanlan.zhihu.com/p/95158245
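The "image to feature vector" pipeline described above can be sketched with toy NumPy operations. This is an illustration only: the 3x3 filter is random rather than learned, there is a single channel, and the `conv2d` below is actually cross-correlation, as in most deep learning libraries:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation of a single-channel image with one filter."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i+kh, j:j+kw] * kernel).sum()
    return out

def max_pool(img, size=2):
    """Non-overlapping max pooling, halving each spatial dimension."""
    h, w = img.shape[0] // size, img.shape[1] // size
    return img[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))

img = np.random.rand(32, 32)
k = np.random.randn(3, 3)           # a random (untrained) 3x3 filter
x = np.maximum(conv2d(img, k), 0)   # conv + ReLU -> 30x30
x = max_pool(x)                     # pool -> 15x15
feature_vector = x.ravel()          # flatten -> 225-dim vector
print(feature_vector.shape)         # (225,)
```

A real CNN stacks many such filter banks and learns the kernels by backpropagation; the flattened output then feeds the fully connected classifier, exactly as in the models built below.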
II. CNN on the smile dataset (genki4k): splitting positive/negative samples, training and testing the model, reporting training and test accuracy, and building a smile/non-smile classifier
1. Check the runtime environment
```python
import platform
import tensorflow
import keras

print("Platform: {}".format(platform.platform()))
print("Tensorflow version: {}".format(tensorflow.__version__))
print("Keras version: {}".format(keras.__version__))
```

Platform: Windows-7-6.1.7601-SP1
Tensorflow version: 1.2.1
Keras version: 2.1.2
2. Prepare the dataset
Download the genki4k.tar image dataset and extract it to a suitable directory (I put it under D:\mango). The raw images were then cropped by hand to head shots, and unsuitable pictures were removed (a few escaped cropping or removal for various reasons). Import the required packages:
```python
import keras
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
from IPython.display import Image
import os
```
Split the dataset (this creates a mangoout folder in the working directory, containing train, validation, and test sets):
```python
original_dataset_dir = 'D:\\mango\\files\\cutmango'
base_dir = 'mangoout'
os.mkdir(base_dir)

train_dir = os.path.join(base_dir, 'train')
os.mkdir(train_dir)
validation_dir = os.path.join(base_dir, 'validation')
os.mkdir(validation_dir)
test_dir = os.path.join(base_dir, 'test')
os.mkdir(test_dir)

train_smile_dir = os.path.join(train_dir, 'smile')
os.mkdir(train_smile_dir)
train_unsmile_dir = os.path.join(train_dir, 'unsmile')
os.mkdir(train_unsmile_dir)
validation_smile_dir = os.path.join(validation_dir, 'smile')
os.mkdir(validation_smile_dir)
validation_unsmile_dir = os.path.join(validation_dir, 'unsmile')
os.mkdir(validation_unsmile_dir)
test_smile_dir = os.path.join(test_dir, 'smile')
os.mkdir(test_smile_dir)
test_unsmile_dir = os.path.join(test_dir, 'unsmile')
os.mkdir(test_unsmile_dir)
```
Sanity-check the split (whether done by hand or by code) by counting how many photos ended up in each group (train/validation/test):
```python
print('total training smile images:', len(os.listdir(train_smile_dir)))
print('total training unsmile images:', len(os.listdir(train_unsmile_dir)))
print('total validation smile images:', len(os.listdir(validation_smile_dir)))
print('total validation unsmile images:', len(os.listdir(validation_unsmile_dir)))
print('total testing smile images:', len(os.listdir(test_smile_dir)))
print('total testing unsmile images:', len(os.listdir(test_unsmile_dir)))
```

total training smile images: 1000
total training unsmile images: 1000
total validation smile images: 300
total validation unsmile images: 300
total testing smile images: 300
total testing unsmile images: 300
There are 2000 training images, 600 validation images, and 600 test images, with the same number of samples in each class. This is a balanced binary classification problem, so classification accuracy is an appropriate metric.
3. Network model
The convnet will be a stack of alternating Conv2D (with ReLU activation) and MaxPooling2D layers. Starting from 150x150 inputs (a somewhat arbitrary choice), we end up with 7x7 feature maps just before the Flatten layer. Note that the depth of the feature maps progressively increases through the network (from 32 to 128) while their spatial size shrinks (from 148x148 to 7x7), a pattern you will see in almost every convnet. Since this is a binary classification problem, the network ends with a single unit (a Dense layer of size 1) and a sigmoid activation; this unit outputs the probability that the image belongs to one class or the other.

Create the model:
```python
from keras import layers
from keras import models

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
                        input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
```
To see how the feature-map dimensions change with each successive layer, print the network structure:
```python
model.summary()
```
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_5 (Conv2D) (None, 148, 148, 32) 896
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 74, 74, 32) 0
_________________________________________________________________
conv2d_6 (Conv2D) (None, 72, 72, 64) 18496
_________________________________________________________________
max_pooling2d_6 (MaxPooling2 (None, 36, 36, 64) 0
_________________________________________________________________
conv2d_7 (Conv2D) (None, 34, 34, 128) 73856
_________________________________________________________________
max_pooling2d_7 (MaxPooling2 (None, 17, 17, 128) 0
_________________________________________________________________
conv2d_8 (Conv2D) (None, 15, 15, 128) 147584
_________________________________________________________________
max_pooling2d_8 (MaxPooling2 (None, 7, 7, 128) 0
_________________________________________________________________
flatten_2 (Flatten) (None, 6272) 0
_________________________________________________________________
dense_3 (Dense) (None, 512) 3211776
_________________________________________________________________
dense_4 (Dense) (None, 1) 513
=================================================================
Total params: 3,453,121
Trainable params: 3,453,121
Non-trainable params: 0
_________________________________________________________________
For the compilation step, use the RMSprop optimizer. Since the network ends with a single sigmoid unit, use binary crossentropy as the loss function:
```python
from keras import optimizers

model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-4),
              metrics=['acc'])
```
4. Data preprocessing

Preprocessing steps before the data reaches the network:
- Read in the image files
- Decode the JPEG content into RGB grids of pixels
- Convert these into floating-point tensors
- Rescale the pixel values (between 0 and 255) to the [0, 1] interval

The data must be formatted into appropriately preprocessed float tensors before being fed to the network.
```python
from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(rescale=1./255)
validation_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(150, 150),
    batch_size=20,
    class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
    validation_dir,
    target_size=(150, 150),
    batch_size=20,
    class_mode='binary')
test_generator = test_datagen.flow_from_directory(
    test_dir,
    target_size=(150, 150),
    batch_size=20,
    class_mode='binary')
```
Found 2000 images belonging to 2 classes.
Found 600 images belonging to 2 classes.
Found 600 images belonging to 2 classes.
The image tensor generator yields batches of 150x150 RGB images (shape (20, 150, 150, 3)) and binary labels (shape (20,)); 20 is the number of samples per batch (the batch size):
```python
for data_batch, labels_batch in train_generator:
    print('data batch shape:', data_batch.shape)
    print('labels batch shape:', labels_batch)
    break
```
data batch shape: (20, 150, 150, 3)
labels batch shape: [ 1. 1. 0. 0. 1. 0. 0. 0. 1. 0. 0. 0. 0. 0. 1. 0. 0. 0.
0. 0.]
Check what 0 (smile) and 1 (non-smile) mean in the output:
```python
train_generator.class_indices
```

{'smile': 0, 'unsmile': 1}
5. Start training

The larger the epochs value, the longer training takes and the higher the training accuracy. My machine is slow, so this ran for quite a while:
```python
history = model.fit_generator(
    train_generator,
    steps_per_epoch=100,
    epochs=10,
    validation_data=validation_generator,
    validation_steps=50)
```
Epoch 1/10
100/100 [==============================] - 227s 2s/step - loss: 0.6776 - acc: 0.5740 - val_loss: 0.6745 - val_acc: 0.5660
Epoch 2/10
100/100 [==============================] - 230s 2s/step - loss: 0.6422 - acc: 0.6520 - val_loss: 0.7091 - val_acc: 0.5290
Epoch 3/10
100/100 [==============================] - 222s 2s/step - loss: 0.5889 - acc: 0.7020 - val_loss: 0.5711 - val_acc: 0.7530
Epoch 4/10
100/100 [==============================] - 192s 2s/step - loss: 0.5251 - acc: 0.7575 - val_loss: 0.5592 - val_acc: 0.7330
Epoch 5/10
100/100 [==============================] - 191s 2s/step - loss: 0.4854 - acc: 0.7825 - val_loss: 0.5250 - val_acc: 0.7550
Epoch 6/10
100/100 [==============================] - 184s 2s/step - loss: 0.4503 - acc: 0.8015 - val_loss: 0.5111 - val_acc: 0.7980
Epoch 7/10
100/100 [==============================] - 183s 2s/step - loss: 0.4111 - acc: 0.8255 - val_loss: 0.5376 - val_acc: 0.7500
Epoch 8/10
100/100 [==============================] - 189s 2s/step - loss: 0.3748 - acc: 0.8380 - val_loss: 0.4906 - val_acc: 0.7850
Epoch 9/10
100/100 [==============================] - 188s 2s/step - loss: 0.3493 - acc: 0.8590 - val_loss: 0.4397 - val_acc: 0.8170
Epoch 10/10
100/100 [==============================] - 186s 2s/step - loss: 0.3177 - acc: 0.8670 - val_loss: 0.4265 - val_acc: 0.8400
Save the model once training completes:
```python
model.save('mangoout/smileAndUnsmile_1.h5')
```
Plot the model's loss and accuracy over the training and validation data during training:
```python
import matplotlib.pyplot as plt

acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()

plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()

plt.show()
```
Perhaps because the images were cropped and filtered beforehand, overfitting is not very visible here: training accuracy grows roughly linearly over time, and validation accuracy grows along with it. Validation loss and training loss both decline steadily.

💥 To push further, the next section introduces a technique specific to computer vision images, and used almost universally when processing images with deep learning models: data augmentation.
6. Using data augmentation
Data augmentation generates more training data from the existing samples by applying a number of plausible random transformations to the images. The goal is that during training the model never sees the exact same picture twice, which helps it learn more aspects of the data and generalize better.

In Keras, this is done by configuring a number of random transformations on the images read by an ImageDataGenerator instance:
```python
datagen = ImageDataGenerator(
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')
```
These are just a few of the available options (see the Keras documentation for more). A quick look at the parameters:
- rotation_range: a value in degrees (0-180), the range within which to randomly rotate pictures
- width_shift and height_shift: ranges (as fractions of total width or height) within which to randomly translate pictures vertically or horizontally
- shear_range: for randomly applying shearing transformations
- zoom_range: for randomly zooming inside pictures
- horizontal_flip: randomly flips half of the images horizontally, relevant when there is no assumption of horizontal asymmetry (e.g., real-world pictures)
- fill_mode: the strategy for filling in newly created pixels, which can appear after a rotation or a width/height shift

Look at the augmented images:
```python
import matplotlib.pyplot as plt
from keras.preprocessing import image

fnames = [os.path.join(train_smile_dir, fname)
          for fname in os.listdir(train_smile_dir)]
img_path = fnames[3]
img = image.load_img(img_path, target_size=(150, 150))

x = image.img_to_array(img)
x = x.reshape((1,) + x.shape)

i = 0
for batch in datagen.flow(x, batch_size=1):
    plt.figure(i)
    imgplot = plt.imshow(image.array_to_img(batch[0]))
    i += 1
    if i % 4 == 0:
        break

plt.show()
```
If we train a new network with this data-augmentation configuration, it will never see the same input twice. However, the inputs it sees are still heavily correlated, because they come from a small number of original images; we cannot produce new information, only remix existing information. So this may not be enough to get rid of overfitting entirely. To fight overfitting further, we also add a Dropout layer right before the densely connected classifier.
```python
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
                        input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dropout(0.5))
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-4),
              metrics=['acc'])
```
Train the network using data augmentation and dropout:
```python
train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(150, 150),
    batch_size=32,
    class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
    validation_dir,
    target_size=(150, 150),
    batch_size=32,
    class_mode='binary')

history = model.fit_generator(
    train_generator,
    steps_per_epoch=100,
    epochs=60,
    validation_data=validation_generator,
    validation_steps=50)
```
Found 2000 images belonging to 2 classes.
Found 600 images belonging to 2 classes.
Epoch 1/60
100/100 [==============================] - 377s 4s/step - loss: 0.6905 - acc: 0.5312 - val_loss: 0.6815 - val_acc: 0.5587
Epoch 2/60
100/100 [==============================] - 303s 3s/step - loss: 0.6849 - acc: 0.5550 - val_loss: 0.6768 - val_acc: 0.5852
Epoch 3/60
100/100 [==============================] - 314s 3s/step - loss: 0.6790 - acc: 0.5866 - val_loss: 0.6748 - val_acc: 0.5777
Epoch 4/60
100/100 [==============================] - 312s 3s/step - loss: 0.6774 - acc: 0.5806 - val_loss: 0.6722 - val_acc: 0.5821
Epoch 5/60
100/100 [==============================] - 303s 3s/step - loss: 0.6696 - acc: 0.5869 - val_loss: 0.6734 - val_acc: 0.5909
Epoch 6/60
100/100 [==============================] - 300s 3s/step - loss: 0.6665 - acc: 0.5956 - val_loss: 0.6679 - val_acc: 0.6029
Epoch 7/60
100/100 [==============================] - 309s 3s/step - loss: 0.6719 - acc: 0.5853 - val_loss: 0.6836 - val_acc: 0.5814
Epoch 8/60
100/100 [==============================] - 299s 3s/step - loss: 0.6628 - acc: 0.6031 - val_loss: 0.7166 - val_acc: 0.5600
Epoch 9/60
100/100 [==============================] - 310s 3s/step - loss: 0.6676 - acc: 0.6009 - val_loss: 0.6723 - val_acc: 0.6048
Epoch 10/60
100/100 [==============================] - 309s 3s/step - loss: 0.6629 - acc: 0.6100 - val_loss: 0.6546 - val_acc: 0.6288
Epoch 11/60
100/100 [==============================] - 300s 3s/step - loss: 0.6558 - acc: 0.6163 - val_loss: 0.6744 - val_acc: 0.6073
Epoch 12/60
100/100 [==============================] - 300s 3s/step - loss: 0.6575 - acc: 0.6216 - val_loss: 0.6425 - val_acc: 0.6572
Epoch 13/60
100/100 [==============================] - 307s 3s/step - loss: 0.6541 - acc: 0.6294 - val_loss: 0.7095 - val_acc: 0.5960
Epoch 14/60
100/100 [==============================] - 303s 3s/step - loss: 0.6429 - acc: 0.6400 - val_loss: 0.6381 - val_acc: 0.6414
Epoch 15/60
100/100 [==============================] - 310s 3s/step - loss: 0.6427 - acc: 0.6300 - val_loss: 0.6297 - val_acc: 0.6723
Epoch 16/60
100/100 [==============================] - 308s 3s/step - loss: 0.6404 - acc: 0.6369 - val_loss: 0.6254 - val_acc: 0.6667
Epoch 17/60
100/100 [==============================] - 301s 3s/step - loss: 0.6367 - acc: 0.6500 - val_loss: 0.6145 - val_acc: 0.6408
Epoch 18/60
100/100 [==============================] - 301s 3s/step - loss: 0.6246 - acc: 0.6450 - val_loss: 0.5991 - val_acc: 0.6831
Epoch 19/60
100/100 [==============================] - 307s 3s/step - loss: 0.6230 - acc: 0.6625 - val_loss: 0.5956 - val_acc: 0.7052
Epoch 20/60
100/100 [==============================] - 303s 3s/step - loss: 0.6062 - acc: 0.6725 - val_loss: 0.5812 - val_acc: 0.6951
Epoch 21/60
100/100 [==============================] - 301s 3s/step - loss: 0.6094 - acc: 0.6647 - val_loss: 0.5640 - val_acc: 0.7033
Epoch 22/60
100/100 [==============================] - 303s 3s/step - loss: 0.6089 - acc: 0.6794 - val_loss: 0.5698 - val_acc: 0.6774
Epoch 23/60
100/100 [==============================] - 305s 3s/step - loss: 0.5846 - acc: 0.6869 - val_loss: 0.5458 - val_acc: 0.7311
Epoch 24/60
100/100 [==============================] - 300s 3s/step - loss: 0.5782 - acc: 0.7022 - val_loss: 0.5380 - val_acc: 0.7551
Epoch 25/60
100/100 [==============================] - 303s 3s/step - loss: 0.5676 - acc: 0.7112 - val_loss: 0.5731 - val_acc: 0.7109
Epoch 26/60
100/100 [==============================] - 305s 3s/step - loss: 0.5711 - acc: 0.7134 - val_loss: 0.5303 - val_acc: 0.7121
Epoch 27/60
100/100 [==============================] - 315s 3s/step - loss: 0.5565 - acc: 0.7166 - val_loss: 0.4785 - val_acc: 0.7942
Epoch 28/60
100/100 [==============================] - 307s 3s/step - loss: 0.5462 - acc: 0.7269 - val_loss: 0.5904 - val_acc: 0.7052
Epoch 29/60
100/100 [==============================] - 309s 3s/step - loss: 0.5367 - acc: 0.7309 - val_loss: 0.4601 - val_acc: 0.7847
Epoch 30/60
100/100 [==============================] - 304s 3s/step - loss: 0.5283 - acc: 0.7359 - val_loss: 0.5043 - val_acc: 0.7532
Epoch 31/60
100/100 [==============================] - 302s 3s/step - loss: 0.5202 - acc: 0.7472 - val_loss: 0.5292 - val_acc: 0.7424
Epoch 32/60
100/100 [==============================] - 308s 3s/step - loss: 0.5147 - acc: 0.7531 - val_loss: 0.5043 - val_acc: 0.7860
Epoch 33/60
100/100 [==============================] - 305s 3s/step - loss: 0.5100 - acc: 0.7434 - val_loss: 0.4506 - val_acc: 0.7955
Epoch 34/60
100/100 [==============================] - 303s 3s/step - loss: 0.5067 - acc: 0.7628 - val_loss: 0.4423 - val_acc: 0.7803
Epoch 35/60
100/100 [==============================] - 303s 3s/step - loss: 0.4937 - acc: 0.7591 - val_loss: 0.4281 - val_acc: 0.8037
Epoch 36/60
100/100 [==============================] - 300s 3s/step - loss: 0.4903 - acc: 0.7619 - val_loss: 0.4191 - val_acc: 0.8125
Epoch 37/60
100/100 [==============================] - 299s 3s/step - loss: 0.4704 - acc: 0.7769 - val_loss: 0.4266 - val_acc: 0.8213
Epoch 38/60
100/100 [==============================] - 307s 3s/step - loss: 0.4811 - acc: 0.7778 - val_loss: 0.4196 - val_acc: 0.8239
Epoch 39/60
100/100 [==============================] - 312s 3s/step - loss: 0.4722 - acc: 0.7753 - val_loss: 0.4366 - val_acc: 0.7992
Epoch 40/60
100/100 [==============================] - 301s 3s/step - loss: 0.4694 - acc: 0.7797 - val_loss: 0.4597 - val_acc: 0.7879
Epoch 41/60
100/100 [==============================] - 304s 3s/step - loss: 0.4658 - acc: 0.7866 - val_loss: 0.4021 - val_acc: 0.8239
Epoch 42/60
100/100 [==============================] - 303s 3s/step - loss: 0.4700 - acc: 0.7859 - val_loss: 0.4271 - val_acc: 0.8100
Epoch 43/60
100/100 [==============================] - 300s 3s/step - loss: 0.4591 - acc: 0.7850 - val_loss: 0.4687 - val_acc: 0.7898
Epoch 44/60
100/100 [==============================] - 308s 3s/step - loss: 0.4592 - acc: 0.7847 - val_loss: 0.4136 - val_acc: 0.8169
Epoch 45/60
100/100 [==============================] - 311s 3s/step - loss: 0.4449 - acc: 0.8031 - val_loss: 0.4427 - val_acc: 0.7784
Epoch 46/60
100/100 [==============================] - 299s 3s/step - loss: 0.4505 - acc: 0.7897 - val_loss: 0.4030 - val_acc: 0.8220
Epoch 47/60
100/100 [==============================] - 306s 3s/step - loss: 0.4515 - acc: 0.7978 - val_loss: 0.4324 - val_acc: 0.7948
Epoch 48/60
100/100 [==============================] - 303s 3s/step - loss: 0.4485 - acc: 0.8025 - val_loss: 0.4979 - val_acc: 0.7544
Epoch 49/60
100/100 [==============================] - 303s 3s/step - loss: 0.4310 - acc: 0.8137 - val_loss: 0.4203 - val_acc: 0.8100
Epoch 50/60
100/100 [==============================] - 309s 3s/step - loss: 0.4327 - acc: 0.8044 - val_loss: 0.4064 - val_acc: 0.8194
Epoch 51/60
100/100 [==============================] - 313s 3s/step - loss: 0.4308 - acc: 0.8091 - val_loss: 0.4266 - val_acc: 0.7891
Epoch 52/60
100/100 [==============================] - 307s 3s/step - loss: 0.4321 - acc: 0.7997 - val_loss: 0.4612 - val_acc: 0.8169
Epoch 53/60
100/100 [==============================] - 303s 3s/step - loss: 0.4316 - acc: 0.8081 - val_loss: 0.4235 - val_acc: 0.7942
Epoch 54/60
100/100 [==============================] - 302s 3s/step - loss: 0.4226 - acc: 0.8135 - val_loss: 0.4083 - val_acc: 0.8302
Epoch 55/60
100/100 [==============================] - 300s 3s/step - loss: 0.4240 - acc: 0.8062 - val_loss: 0.3863 - val_acc: 0.8340
Epoch 56/60
100/100 [==============================] - 308s 3s/step - loss: 0.4139 - acc: 0.8184 - val_loss: 0.4100 - val_acc: 0.8232
Epoch 57/60
100/100 [==============================] - 303s 3s/step - loss: 0.4149 - acc: 0.8159 - val_loss: 0.3911 - val_acc: 0.8239
Epoch 58/60
100/100 [==============================] - 305s 3s/step - loss: 0.4171 - acc: 0.8113 - val_loss: 0.3851 - val_acc: 0.8321
Epoch 59/60
100/100 [==============================] - 304s 3s/step - loss: 0.4221 - acc: 0.8141 - val_loss: 0.4243 - val_acc: 0.7891
Epoch 60/60
100/100 [==============================] - 303s 3s/step - loss: 0.4019 - acc: 0.8262 - val_loss: 0.3919 - val_acc: 0.8371
Check again what 0 and 1 mean:
```python
train_generator.class_indices
```

{'smile': 0, 'unsmile': 1}
Save the model; it will be reused later for convnet visualization:

```python
model.save('mangoout/smileAndUnsmile_2.h5')
```

Plot the accuracy and loss curves for the training and validation sets after data augmentation, and look over the results:
```python
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()

plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()

plt.show()
```
Thanks to data augmentation and dropout, overfitting is no longer a problem: the training curves track the validation curves quite closely. Training and validation accuracy approach 85% after 60 epochs. Further regularization, and tuning of the network's parameters (such as the number of filters per convolution layer, or the number of layers), could yield even better accuracy.
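One of the regularization ideas mentioned above is an L2 weight penalty, which simply adds a term lambda * sum(w^2) to the loss so that large weights are discouraged. In Keras this is usually attached to a layer via `kernel_regularizer=regularizers.l2(...)`; the arithmetic behind it can be shown in a few lines of NumPy (the weights and lambda below are made-up numbers for illustration):

```python
import numpy as np

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    """The same data loss the models above are trained with."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([1.0, 0.0, 1.0])
y_pred = np.array([0.9, 0.2, 0.8])
weights = np.array([0.5, -1.2, 0.3])   # hypothetical layer weights
lam = 0.01                              # regularization strength

data_loss = binary_crossentropy(y_true, y_pred)
l2_penalty = lam * np.sum(weights ** 2)
total_loss = data_loss + l2_penalty    # what the optimizer would minimize
print(round(l2_penalty, 4))  # 0.0178
```

Because the penalty grows with the squared weights, the optimizer trades a little data-loss reduction for smaller, smoother weights, which tends to reduce overfitting.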
7. Classifying smile vs. non-smile with the trained model

First test image (at D:/mango/mangotest.jpg); this is one of the pictures from the training set:
```python
from keras.preprocessing import image
from keras.models import load_model
import numpy as np

model = load_model('mangoout/smileAndUnsmile_2.h5')

img_path = 'D:/mango/mangotest.jpg'
img = image.load_img(img_path, target_size=(150, 150))
img_tensor = image.img_to_array(img) / 255.0
img_tensor = np.expand_dims(img_tensor, axis=0)

prediction = model.predict(img_tensor)
print(prediction)
if prediction[0][0] > 0.5:
    result = 'non-smile'
else:
    result = 'smile'
print(result)
```

[[ 0.00120554]]
smile
As you can see, it is correctly judged a smile.

Second test image (at D:/mango/mangotest2.jpg); I took this with a beauty-cam app out of boredom during the holidays... ☺️
```python
from keras.preprocessing import image
from keras.models import load_model
import numpy as np

model = load_model('mangoout/smileAndUnsmile_2.h5')

img_path = 'D:/mango/mangotest2.jpg'
img = image.load_img(img_path, target_size=(150, 150))
img_tensor = image.img_to_array(img) / 255.0
img_tensor = np.expand_dims(img_tensor, axis=0)

prediction = model.predict(img_tensor)
print(prediction)
if prediction[0][0] > 0.5:
    result = 'non-smile'
else:
    result = 'smile'
print(result)
```

[[ 0.69130015]]
non-smile
As you can see, it is correctly judged a non-smile.

Third test image (at D:/mango/mengmengsmile.jpg); a selfie of hers that I borrowed for this assignment...
```python
from keras.preprocessing import image
from keras.models import load_model
import numpy as np

model = load_model('mangoout/smileAndUnsmile_1.h5')

img_path = 'D:/mango/mengmengsmile.jpg'
img = image.load_img(img_path, target_size=(150, 150))
img_tensor = image.img_to_array(img) / 255.0
img_tensor = np.expand_dims(img_tensor, axis=0)

prediction = model.predict(img_tensor)
print(prediction)
if prediction[0][0] > 0.5:
    result = 'non-smile'
else:
    result = 'smile'
print(result)
```

[[ 0.03113164]]
smile
As you can see, it is correctly judged a smile. (When the face takes up only a small part of the picture, though, the prediction confidence drops and the model can even judge wrongly; head shots are generally classified with high confidence.)
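One simple workaround for the small-face-proportion issue noted above is to crop toward the face region before resizing to 150x150. In practice a face detector (such as Dlib's, described in Part I) would choose the crop box; as a minimal sketch under that assumption, even a plain center crop increases the face's share of the frame:

```python
import numpy as np

def center_crop(img, frac=0.6):
    """Keep the central frac x frac region of an HxWxC image array."""
    h, w = img.shape[:2]
    ch, cw = int(h * frac), int(w * frac)
    top, left = (h - ch) // 2, (w - cw) // 2
    return img[top:top+ch, left:left+cw]

# stand-in for a decoded photo; a real pipeline would crop, then resize to 150x150
img = np.zeros((300, 400, 3), dtype=np.uint8)
cropped = center_crop(img)
print(cropped.shape)  # (180, 240, 3)
```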
III. CNN on a face-mask dataset: splitting positive/negative samples, training and testing the model, reporting training and test accuracy, and building a mask/no-mask classifier

1. Prepare the dataset

Download the mask dataset and extract it to a suitable directory (I put it under D:\mangomask). Rename the positive samples (with mask) to a continuous numeric sequence so they are easy to process later:
```python
import os

path = "D:/mangomask/mask/have_mask"
filelist = os.listdir(path)
count = 1000
for file in filelist:
    Olddir = os.path.join(path, file)
    if os.path.isdir(Olddir):
        continue
    filetype = os.path.splitext(file)[1]
    Newdir = os.path.join(path, str(count) + filetype)
    os.rename(Olddir, Newdir)
    count += 1
```
Likewise rename the negative samples (without mask) to a continuous sequence:
```python
import os

path = "D:/mangomask/mask/no_mask"
filelist = os.listdir(path)
count = 10000
for file in filelist:
    Olddir = os.path.join(path, file)
    if os.path.isdir(Olddir):
        continue
    filetype = os.path.splitext(file)[1]
    Newdir = os.path.join(path, str(count) + filetype)
    os.rename(Olddir, Newdir)
    count += 1
```
Pixel processing for the positive and negative sample sets:
- positive samples (with mask) are resized to 20x20 pixels, for higher training accuracy;
- negative samples (without mask) are resized to no less than 50x50 pixels, to speed up training.

1. Resize the positive samples (with mask):
```python
import cv2

for n in range(1000, 1606):
    path = 'D:/mangomask/mask/have_mask/' + str(n) + '.jpg'
    img = cv2.imread(path)
    img = cv2.resize(img, (20, 20))
    cv2.imwrite('D:/mangomask/mask/have_mask/' + str(n) + '.jpg', img)
```
2. Resize the negative samples (without mask):
```python
import cv2

for n in range(10000, 11790):
    path = 'D:/mangomask/mask/no_mask/' + str(n) + '.jpg'
    img = cv2.imread(path)
    img = cv2.resize(img, (80, 80))
    cv2.imwrite('D:/mangomask/mask/no_mask/' + str(n) + '.jpg', img)
```
Split the dataset (this creates a maskout folder in the working directory, containing train, validation, and test sets):
```python
original_dataset_dir = 'D://mangomask//mask'
base_dir = 'maskout'
os.mkdir(base_dir)

train_dir = os.path.join(base_dir, 'train')
os.mkdir(train_dir)
validation_dir = os.path.join(base_dir, 'validation')
os.mkdir(validation_dir)
test_dir = os.path.join(base_dir, 'test')
os.mkdir(test_dir)

train_havemask_dir = os.path.join(train_dir, 'have_mask')
os.mkdir(train_havemask_dir)
train_nomask_dir = os.path.join(train_dir, 'no_mask')
os.mkdir(train_nomask_dir)
validation_havemask_dir = os.path.join(validation_dir, 'have_mask')
os.mkdir(validation_havemask_dir)
validation_nomask_dir = os.path.join(validation_dir, 'no_mask')
os.mkdir(validation_nomask_dir)
test_havemask_dir = os.path.join(test_dir, 'have_mask')
os.mkdir(test_havemask_dir)
test_nomask_dir = os.path.join(test_dir, 'no_mask')
os.mkdir(test_nomask_dir)
```
Sanity-check the split (whether done by hand or by code) by counting how many photos ended up in each group (train/validation/test):
```python
print('total training havemask images:', len(os.listdir(train_havemask_dir)))
print('total training nomask images:', len(os.listdir(train_nomask_dir)))
print('total validation havemask images:', len(os.listdir(validation_havemask_dir)))
print('total validation nomask images:', len(os.listdir(validation_nomask_dir)))
print('total testing havemask images:', len(os.listdir(test_havemask_dir)))
print('total testing nomask images:', len(os.listdir(test_nomask_dir)))
```

total training havemask images: 300
total training nomask images: 300
total validation havemask images: 150
total validation nomask images: 150
total testing havemask images: 150
total testing nomask images: 150
There are 600 training images, 300 validation images, and 300 test images, with the same number of samples in each class. This is a balanced binary classification problem, so classification accuracy is an appropriate metric.
2. Network model

As before, the convnet is a stack of alternating Conv2D (with ReLU activation) and MaxPooling2D layers. Starting from 150x150 inputs, we end up with 7x7 feature maps just before the Flatten layer; the feature-map depth increases (from 32 to 128) while the spatial size shrinks (from 148x148 to 7x7). Since this is a binary classification problem, the network again ends with a single Dense unit and a sigmoid activation, which outputs the probability that the image belongs to one class or the other.

Create the model:
```python
from keras import layers
from keras import models

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
                        input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
```
To see how the feature-map dimensions change with each successive layer, print the network structure:
```python
model.summary()
```
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_13 (Conv2D) (None, 148, 148, 32) 896
_________________________________________________________________
max_pooling2d_13 (MaxPooling (None, 74, 74, 32) 0
_________________________________________________________________
conv2d_14 (Conv2D) (None, 72, 72, 64) 18496
_________________________________________________________________
max_pooling2d_14 (MaxPooling (None, 36, 36, 64) 0
_________________________________________________________________
conv2d_15 (Conv2D) (None, 34, 34, 128) 73856
_________________________________________________________________
max_pooling2d_15 (MaxPooling (None, 17, 17, 128) 0
_________________________________________________________________
conv2d_16 (Conv2D) (None, 15, 15, 128) 147584
_________________________________________________________________
max_pooling2d_16 (MaxPooling (None, 7, 7, 128) 0
_________________________________________________________________
flatten_4 (Flatten) (None, 6272) 0
_________________________________________________________________
dense_7 (Dense) (None, 512) 3211776
_________________________________________________________________
dense_8 (Dense) (None, 1) 513
=================================================================
Total params: 3,453,121
Trainable params: 3,453,121
Non-trainable params: 0
_________________________________________________________________
For the compilation step, use the RMSprop optimizer. Since the network ends with a single sigmoid unit, use binary crossentropy as the loss function:
```python
from keras import optimizers

model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-4),
              metrics=['acc'])
```
3. Data preprocessing

Preprocessing steps before the data reaches the network:
- Read in the image files
- Decode the JPEG content into RGB grids of pixels
- Convert these into floating-point tensors
- Rescale the pixel values (between 0 and 255) to the [0, 1] interval

The data must be formatted into appropriately preprocessed float tensors before being fed to the network.
```python
from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(rescale=1./255)
validation_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(150, 150),
    batch_size=20,
    class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
    validation_dir,
    target_size=(150, 150),
    batch_size=20,
    class_mode='binary')
test_generator = test_datagen.flow_from_directory(
    test_dir,
    target_size=(150, 150),
    batch_size=20,
    class_mode='binary')
```
Found 600 images belonging to 2 classes.
Found 300 images belonging to 2 classes.
Found 300 images belonging to 2 classes.
The image tensor generator yields batches of 150x150 RGB images (shape (20, 150, 150, 3)) and binary labels (shape (20,)); 20 is the number of samples per batch (the batch size):
```python
for data_batch, labels_batch in train_generator:
    print('data batch shape:', data_batch.shape)
    print('labels batch shape:', labels_batch)
    break
```
data batch shape: (20, 150, 150, 3)
labels batch shape: [ 1. 1. 1. 0. 1. 1. 0. 0. 0. 0. 1. 0. 0. 1. 1. 0. 1. 0.
0. 0.]
4. Start Training
Here we take epochs=10; a larger epochs value takes longer but yields higher training accuracy. My computer is not very powerful, so this ran for quite a while…
history = model.fit_generator(
    train_generator,
    steps_per_epoch=100,
    epochs=10,
    validation_data=validation_generator,
    validation_steps=50)
Epoch 1/10
100/100 [==============================] - 218s 2s/step - loss: 0.2563 - acc: 0.8990 - val_loss: 0.1740 - val_acc: 0.9400
Epoch 2/10
100/100 [==============================] - 189s 2s/step - loss: 0.0862 - acc: 0.9700 - val_loss: 0.1294 - val_acc: 0.9540
Epoch 3/10
100/100 [==============================] - 190s 2s/step - loss: 0.0548 - acc: 0.9820 - val_loss: 0.1033 - val_acc: 0.9680
Epoch 4/10
100/100 [==============================] - 186s 2s/step - loss: 0.0325 - acc: 0.9880 - val_loss: 0.1132 - val_acc: 0.9620
Epoch 5/10
100/100 [==============================] - 192s 2s/step - loss: 0.0238 - acc: 0.9925 - val_loss: 0.0922 - val_acc: 0.9800
Epoch 6/10
100/100 [==============================] - 191s 2s/step - loss: 0.0132 - acc: 0.9965 - val_loss: 0.0950 - val_acc: 0.9710
Epoch 7/10
100/100 [==============================] - 189s 2s/step - loss: 0.0061 - acc: 0.9980 - val_loss: 0.1093 - val_acc: 0.9710
Epoch 8/10
100/100 [==============================] - 188s 2s/step - loss: 0.0025 - acc: 0.9995 - val_loss: 0.1305 - val_acc: 0.9690
Epoch 9/10
100/100 [==============================] - 185s 2s/step - loss: 0.0080 - acc: 0.9980 - val_loss: 0.1067 - val_acc: 0.9770
Epoch 10/10
100/100 [==============================] - 189s 2s/step - loss: 6.6883e-04 - acc: 1.0000 - val_loss: 0.1032 - val_acc: 0.9780
After training finishes, save the model:
model.save('maskout/maskAndNomask_1.h5')
Plot the model's loss and accuracy on the training and validation data over the course of training:
import matplotlib.pyplot as plt

acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(len(acc))

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()

plt.figure()

plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()

plt.show()
These plots are characteristic of overfitting. Our training accuracy increases linearly over time until it approaches 100%, while our validation accuracy stalls at 96%-97%. The validation loss reaches its minimum after the 4th epoch and then plateaus, while the training loss keeps decreasing linearly until it approaches 0.
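Since the validation loss bottoms out early, one practical follow-up is to identify the best epoch from the history and keep that checkpoint. A small sketch, using the val_loss values copied from the 10-epoch training log above:

```python
# val_loss per epoch, copied from the 10-epoch training log above
val_loss = [0.1740, 0.1294, 0.1033, 0.1132, 0.0922,
            0.0950, 0.1093, 0.1305, 0.1067, 0.1032]

# The checkpoint worth keeping is the epoch with the lowest validation loss
best_epoch = min(range(len(val_loss)), key=lambda i: val_loss[i]) + 1  # 1-based, like the Keras log
print(best_epoch, val_loss[best_epoch - 1])
```

In a real run you would save a checkpoint per epoch (or use a checkpointing callback) and reload the one from this epoch, instead of keeping the final overfit weights.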
5. Using Data Augmentation
Data augmentation generates more training data from existing training samples, by "augmenting" the samples via a number of random transformations that yield believable-looking images. The goal is that at training time, our model will never see the exact same picture twice. This helps the model get exposed to more aspects of the data and generalize better.
In Keras, this can be done by configuring a number of random transformations to be performed on the images read by an ImageDataGenerator instance:
datagen = ImageDataGenerator(
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')
These are just a few of the options available (for more, see the Keras documentation). A quick look at these parameters:
rotation_range is a value in degrees (0-180), a range within which to randomly rotate pictures
width_shift and height_shift are ranges (as a fraction of total width or height) within which to randomly translate pictures vertically or horizontally
shear_range is for randomly applying shearing transformations
zoom_range is for randomly zooming inside pictures
horizontal_flip is for randomly flipping half of the images horizontally, relevant when there are no assumptions of horizontal asymmetry (e.g. real-world pictures)
fill_mode is the strategy used for filling in newly created pixels, which can appear after a rotation or a width/height shift
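Under the hood, these transformations are just array operations on the image tensor. A toy sketch of two of them (horizontal_flip and rescale) on a hypothetical 2x3 "image" with 3 channels, not the real generator internals:

```python
import numpy as np

# A tiny stand-in "image": 2 rows x 3 columns x 3 channels, values 0..17
img = np.arange(2 * 3 * 3, dtype=np.float32).reshape(2, 3, 3)

flipped = img[:, ::-1, :]   # horizontal_flip: reverse the width axis
rescaled = img / 255.0      # rescale=1./255: map pixel values into [0, 1]

# The leftmost column of the flipped image is the rightmost column of the original
print(flipped[0, 0, 0], float(rescaled.max()))
```

Rotations, shifts and shears work the same way in spirit: they resample the pixel grid, and fill_mode decides what goes into pixels that the resampling leaves empty.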
Take a look at the augmented images:
import matplotlib.pyplot as plt
from keras.preprocessing import image

fnames = [os.path.join(train_havemask_dir, fname) for fname in os.listdir(train_havemask_dir)]

img_path = fnames[3]

img = image.load_img(img_path, target_size=(150, 150))

x = image.img_to_array(img)
x = x.reshape((1,) + x.shape)

i = 0
for batch in datagen.flow(x, batch_size=1):
    plt.figure(i)
    imgplot = plt.imshow(image.array_to_img(batch[0]))
    i += 1
    if i % 4 == 0:
        break

plt.show()
If we train a new network using this data augmentation configuration, our network will never see the same input twice.
However, the inputs it sees are still heavily intercorrelated, since they come from a small number of original images - we cannot produce new information, we can only remix existing information. As such, this may not be enough to completely get rid of overfitting. To further fight overfitting, we will also add a Dropout layer right before the densely-connected classifier.
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
                        input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dropout(0.5))
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-4),
              metrics=['acc'])
Train the network using data augmentation and dropout:
train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,)
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(150, 150),
    batch_size=32,
    class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
    validation_dir,
    target_size=(150, 150),
    batch_size=32,
    class_mode='binary')

history = model.fit_generator(
    train_generator,
    steps_per_epoch=100,
    epochs=60,
    validation_data=validation_generator,
    validation_steps=50)
Found 600 images belonging to 2 classes.
Found 300 images belonging to 2 classes.
Epoch 1/60
100/100 [==============================] - 351s 4s/step - loss: 0.4850 - acc: 0.7632 - val_loss: 0.2380 - val_acc: 0.8900
Epoch 2/60
100/100 [==============================] - 323s 3s/step - loss: 0.3041 - acc: 0.8703 - val_loss: 0.2513 - val_acc: 0.8833
Epoch 3/60
100/100 [==============================] - 322s 3s/step - loss: 0.2864 - acc: 0.8725 - val_loss: 0.2486 - val_acc: 0.8867
Epoch 4/60
100/100 [==============================] - 316s 3s/step - loss: 0.2490 - acc: 0.8964 - val_loss: 0.1243 - val_acc: 0.9533
Epoch 5/60
100/100 [==============================] - 308s 3s/step - loss: 0.2303 - acc: 0.9056 - val_loss: 0.1830 - val_acc: 0.9200
Epoch 6/60
100/100 [==============================] - 306s 3s/step - loss: 0.2098 - acc: 0.9152 - val_loss: 0.1101 - val_acc: 0.9633
Epoch 7/60
100/100 [==============================] - 312s 3s/step - loss: 0.1905 - acc: 0.9200 - val_loss: 0.1417 - val_acc: 0.9367
Epoch 8/60
100/100 [==============================] - 310s 3s/step - loss: 0.1771 - acc: 0.9272 - val_loss: 0.1021 - val_acc: 0.9700
Epoch 9/60
100/100 [==============================] - 302s 3s/step - loss: 0.1710 - acc: 0.9284 - val_loss: 0.1220 - val_acc: 0.9467
Epoch 10/60
100/100 [==============================] - 321s 3s/step - loss: 0.1618 - acc: 0.9375 - val_loss: 0.0920 - val_acc: 0.9667
Epoch 11/60
100/100 [==============================] - 308s 3s/step - loss: 0.1458 - acc: 0.9420 - val_loss: 0.2019 - val_acc: 0.9167
Epoch 12/60
100/100 [==============================] - 303s 3s/step - loss: 0.1411 - acc: 0.9456 - val_loss: 0.0829 - val_acc: 0.9700
Epoch 13/60
100/100 [==============================] - 304s 3s/step - loss: 0.1190 - acc: 0.9537 - val_loss: 0.0932 - val_acc: 0.9667
Epoch 14/60
100/100 [==============================] - 307s 3s/step - loss: 0.1163 - acc: 0.9569 - val_loss: 0.1085 - val_acc: 0.9567
Epoch 15/60
100/100 [==============================] - 306s 3s/step - loss: 0.1006 - acc: 0.9629 - val_loss: 0.0715 - val_acc: 0.9767
Epoch 16/60
100/100 [==============================] - 312s 3s/step - loss: 0.0960 - acc: 0.9667 - val_loss: 0.0588 - val_acc: 0.9767
Epoch 17/60
100/100 [==============================] - 308s 3s/step - loss: 0.0806 - acc: 0.9676 - val_loss: 0.0535 - val_acc: 0.9800
Epoch 18/60
100/100 [==============================] - 305s 3s/step - loss: 0.0778 - acc: 0.9711 - val_loss: 0.2239 - val_acc: 0.9300
Epoch 19/60
100/100 [==============================] - 307s 3s/step - loss: 0.0761 - acc: 0.9713 - val_loss: 0.0575 - val_acc: 0.9767
Epoch 20/60
100/100 [==============================] - 308s 3s/step - loss: 0.0507 - acc: 0.9816 - val_loss: 0.0926 - val_acc: 0.9667
Epoch 21/60
100/100 [==============================] - 306s 3s/step - loss: 0.0635 - acc: 0.9799 - val_loss: 0.0470 - val_acc: 0.9833
Epoch 22/60
100/100 [==============================] - 319s 3s/step - loss: 0.0701 - acc: 0.9750 - val_loss: 0.0437 - val_acc: 0.9867
Epoch 23/60
100/100 [==============================] - 315s 3s/step - loss: 0.0493 - acc: 0.9849 - val_loss: 0.0408 - val_acc: 0.9900
Epoch 24/60
100/100 [==============================] - 309s 3s/step - loss: 0.0513 - acc: 0.9824 - val_loss: 0.0449 - val_acc: 0.9767
Epoch 25/60
100/100 [==============================] - 304s 3s/step - loss: 0.0580 - acc: 0.9816 - val_loss: 0.0330 - val_acc: 0.9900
Epoch 26/60
100/100 [==============================] - 312s 3s/step - loss: 0.0434 - acc: 0.9884 - val_loss: 0.0357 - val_acc: 0.9833
Epoch 27/60
100/100 [==============================] - 302s 3s/step - loss: 0.0707 - acc: 0.9785 - val_loss: 0.0214 - val_acc: 0.9933
Epoch 28/60
100/100 [==============================] - 311s 3s/step - loss: 0.0431 - acc: 0.9869 - val_loss: 0.0306 - val_acc: 0.9900
Epoch 29/60
100/100 [==============================] - 305s 3s/step - loss: 0.0424 - acc: 0.9859 - val_loss: 0.0278 - val_acc: 0.9900
Epoch 30/60
100/100 [==============================] - 305s 3s/step - loss: 0.0240 - acc: 0.9934 - val_loss: 0.0233 - val_acc: 0.9933
Epoch 31/60
100/100 [==============================] - 335s 3s/step - loss: 0.0515 - acc: 0.9853 - val_loss: 0.0268 - val_acc: 0.9867
Epoch 32/60
100/100 [==============================] - 326s 3s/step - loss: 0.0515 - acc: 0.9884 - val_loss: 0.0222 - val_acc: 0.9933
Epoch 33/60
100/100 [==============================] - 320s 3s/step - loss: 0.0273 - acc: 0.9927 - val_loss: 0.0281 - val_acc: 0.9900
Epoch 34/60
100/100 [==============================] - 310s 3s/step - loss: 0.0411 - acc: 0.9909 - val_loss: 0.0282 - val_acc: 0.9900
Epoch 35/60
100/100 [==============================] - 306s 3s/step - loss: 0.0204 - acc: 0.9950 - val_loss: 0.0165 - val_acc: 0.9933
Epoch 36/60
100/100 [==============================] - 306s 3s/step - loss: 0.0623 - acc: 0.9842 - val_loss: 0.0268 - val_acc: 0.9900
Epoch 37/60
100/100 [==============================] - 304s 3s/step - loss: 0.0325 - acc: 0.9908 - val_loss: 0.0152 - val_acc: 0.9933
Epoch 38/60
100/100 [==============================] - 305s 3s/step - loss: 0.0178 - acc: 0.9933 - val_loss: 0.0117 - val_acc: 0.9967
Epoch 39/60
100/100 [==============================] - 309s 3s/step - loss: 0.0507 - acc: 0.9884 - val_loss: 0.0164 - val_acc: 0.9933
Epoch 40/60
100/100 [==============================] - 305s 3s/step - loss: 0.0398 - acc: 0.9919 - val_loss: 0.0236 - val_acc: 0.9933
Epoch 41/60
100/100 [==============================] - 300s 3s/step - loss: 0.0243 - acc: 0.9909 - val_loss: 0.0176 - val_acc: 0.9933
Epoch 42/60
100/100 [==============================] - 307s 3s/step - loss: 0.0419 - acc: 0.9922 - val_loss: 0.0145 - val_acc: 0.9933
Epoch 43/60
100/100 [==============================] - 302s 3s/step - loss: 0.0451 - acc: 0.9928 - val_loss: 0.0155 - val_acc: 0.9933
Epoch 44/60
100/100 [==============================] - 304s 3s/step - loss: 0.0640 - acc: 0.9893 - val_loss: 0.2175 - val_acc: 0.9333
Epoch 45/60
100/100 [==============================] - 314s 3s/step - loss: 0.0285 - acc: 0.9934 - val_loss: 0.0092 - val_acc: 0.9967
Epoch 46/60
100/100 [==============================] - 309s 3s/step - loss: 0.0279 - acc: 0.9937 - val_loss: 0.0116 - val_acc: 0.9933
Epoch 47/60
100/100 [==============================] - 305s 3s/step - loss: 0.0258 - acc: 0.9925 - val_loss: 0.0157 - val_acc: 0.9900
Epoch 48/60
100/100 [==============================] - 307s 3s/step - loss: 0.0319 - acc: 0.9906 - val_loss: 0.0142 - val_acc: 0.9933
Epoch 49/60
100/100 [==============================] - 305s 3s/step - loss: 0.0562 - acc: 0.9884 - val_loss: 0.0228 - val_acc: 0.9933
Epoch 50/60
100/100 [==============================] - 305s 3s/step - loss: 0.0370 - acc: 0.9931 - val_loss: 0.0230 - val_acc: 0.9867
Epoch 51/60
100/100 [==============================] - 309s 3s/step - loss: 0.0047 - acc: 0.9984 - val_loss: 0.0147 - val_acc: 0.9933
Epoch 52/60
100/100 [==============================] - 306s 3s/step - loss: 0.0237 - acc: 0.9941 - val_loss: 0.0161 - val_acc: 0.9900
Epoch 53/60
100/100 [==============================] - 301s 3s/step - loss: 0.0278 - acc: 0.9950 - val_loss: 0.0202 - val_acc: 0.9933
Epoch 54/60
100/100 [==============================] - 309s 3s/step - loss: 0.0266 - acc: 0.9945 - val_loss: 0.0267 - val_acc: 0.9933
Epoch 55/60
100/100 [==============================] - 302s 3s/step - loss: 0.0264 - acc: 0.9941 - val_loss: 0.0231 - val_acc: 0.9967
Epoch 56/60
100/100 [==============================] - 304s 3s/step - loss: 0.0132 - acc: 0.9959 - val_loss: 0.0177 - val_acc: 0.9933
Epoch 57/60
100/100 [==============================] - 326s 3s/step - loss: 0.0773 - acc: 0.9891 - val_loss: 0.0893 - val_acc: 0.9733
Epoch 58/60
100/100 [==============================] - 311s 3s/step - loss: 0.0049 - acc: 0.9984 - val_loss: 0.0277 - val_acc: 0.9933
Epoch 59/60
100/100 [==============================] - 308s 3s/step - loss: 0.0791 - acc: 0.9906 - val_loss: 0.0314 - val_acc: 0.9867
Epoch 60/60
100/100 [==============================] - 307s 3s/step - loss: 0.0133 - acc: 0.9956 - val_loss: 0.0186 - val_acc: 0.9933
Check what 0 and 1 stand for (0 means wearing a mask, 1 means no mask):
train_generator.class_indices
{'have_mask': 0, 'no_mask': 1}
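This mapping can be inverted so that a raw sigmoid output can be turned directly into a label string. A minimal sketch (the 0.5 threshold matches the prediction checks used later in this post):

```python
# The class_indices mapping reported by the generator above
class_indices = {'have_mask': 0, 'no_mask': 1}

# Invert it so a thresholded sigmoid output maps back to a label string
index_to_label = {v: k for k, v in class_indices.items()}

def label_for(prediction, threshold=0.5):
    # Outputs above the threshold belong to class 1, below to class 0
    return index_to_label[1 if prediction > threshold else 0]

print(label_for(0.0132), label_for(0.9999))
```

Reading the labels back from class_indices like this avoids hard-coding which class the sigmoid's 0 and 1 refer to.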
Save the trained model:
model.save('maskout/maskAndNomask_2.h5')
This is what the folder looks like after saving the model. Now plot the accuracy and loss curves for the training and validation sets after data augmentation and look at the results:
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(len(acc))

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()

plt.figure()

plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()

plt.show()
Thanks to data augmentation and dropout, we no longer have an overfitting problem.
The training curves closely track the validation curves. After 60 epochs, the training and validation accuracy both get very close to 100%, and the validation and training loss keep decreasing linearly until they approach 0.
By leveraging regularization techniques even further, and by tuning the network's parameters (such as the number of filters per convolution layer, or the number of layers in the network), we may be able to get an even better accuracy.
Takeaway: as you can see, the accuracy a convolutional neural network (CNN) achieves on smile recognition and mask recognition is quite high. I forgot to bring my laptop home for the winter break (I didn't expect the epidemic), and the family computer doesn't perform well, so training took a long time - but I usually trained overnight, and the results were ready by the next morning.
6. Using the Model to Judge Whether a Face Is Wearing a Mask
The first test image (at D:/mango/nana.jpg)
My favorite star, Nana
from keras.preprocessing import image
from keras.models import load_model
import numpy as np

model = load_model('maskout/maskAndNomask_1.h5')

img_path = 'D:/mango/nana.jpg'

img = image.load_img(img_path, target_size=(150, 150))
img_tensor = image.img_to_array(img)/255.0
img_tensor = np.expand_dims(img_tensor, axis=0)

prediction = model.predict(img_tensor)
print(prediction)
if prediction[0][0] > 0.5:
    result = 'no mask'
else:
    result = 'mask'
print(result)
[[ 0.0132275]]
mask
The second test image (at D:/mango/mengmeng.jpg)
(She took this one herself, haha…)
from keras.preprocessing import image
from keras.models import load_model
import numpy as np

model = load_model('maskout/maskAndNomask_2.h5')

img_path = 'D:/mango/mengmeng.jpg'

img = image.load_img(img_path, target_size=(150, 150))
img_tensor = image.img_to_array(img)/255.0
img_tensor = np.expand_dims(img_tensor, axis=0)

prediction = model.predict(img_tensor)
print(prediction)
if prediction[0][0] > 0.5:
    result = 'no mask'
else:
    result = 'mask'
print(result)
[[ 0.99999881]]
no mask
As you can see, the accuracy in judging whether a picture shows a mask is quite high, though there is still some error. In short, the clearer the face in the picture, the more likely it is to be classified correctly.
IV. A Program That Captures Your Own Face From a Camera and Classifies the Expression in Real Time (Smiling/Not Smiling, Mask/No Mask), Outputting the Classification as Text
1. Real-Time Smiling/Not-Smiling Classification (Outputting the Classification as Text)
import cv2
from keras.preprocessing import image
from keras.models import load_model
import numpy as np
import dlib
from PIL import Image

model = load_model('mangoout/smileAndUnsmile_2.h5')
detector = dlib.get_frontal_face_detector()
video = cv2.VideoCapture(0)
font = cv2.FONT_HERSHEY_SIMPLEX

def rec(img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    dets = detector(gray, 1)
    if dets is not None:
        for face in dets:
            left = face.left()
            top = face.top()
            right = face.right()
            bottom = face.bottom()
            cv2.rectangle(img, (left, top), (right, bottom), (0, 255, 0), 2)
            img1 = cv2.resize(img[top:bottom, left:right], dsize=(150, 150))
            img1 = cv2.cvtColor(img1, cv2.COLOR_BGR2RGB)
            img1 = np.array(img1)/255.
            img_tensor = img1.reshape(-1, 150, 150, 3)
            prediction = model.predict(img_tensor)
            if prediction[0][0] > 0.5:
                result = 'unsmile'
            else:
                result = 'smile'
            cv2.putText(img, result, (left, top), font, 2, (0, 255, 0), 2, cv2.LINE_AA)
    cv2.imshow('Video', img)

while video.isOpened():
    res, img_rd = video.read()
    if not res:
        break
    rec(img_rd)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
video.release()
cv2.destroyAllWindows()
Results on my own face:
Not smiling
Smiling
2. Real-Time Mask/No-Mask Classification (Outputting the Classification as Text)
import cv2
from keras.preprocessing import image
from keras.models import load_model
import numpy as np
import dlib
from PIL import Image

model = load_model('maskout/maskAndNomask_2.h5')
detector = dlib.get_frontal_face_detector()
video = cv2.VideoCapture(0)
font = cv2.FONT_HERSHEY_SIMPLEX

def rec(img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    dets = detector(gray, 1)
    if dets is not None:
        for face in dets:
            left = face.left()
            top = face.top()
            right = face.right()
            bottom = face.bottom()
            cv2.rectangle(img, (left, top), (right, bottom), (0, 255, 255), 2)

def mask(img):
    img1 = cv2.resize(img, dsize=(150, 150))
    img1 = cv2.cvtColor(img1, cv2.COLOR_BGR2RGB)
    img1 = np.array(img1)/255.
    img_tensor = img1.reshape(-1, 150, 150, 3)
    prediction = model.predict(img_tensor)
    if prediction[0][0] > 0.5:
        result = 'no-mask'
    else:
        result = 'have-mask'
    cv2.putText(img, result, (100, 200), font, 2, (255, 255, 100), 2, cv2.LINE_AA)
    cv2.imshow('Video', img)

while video.isOpened():
    res, img_rd = video.read()
    if not res:
        break
    rec(img_rd)
    mask(img_rd)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
video.release()
cv2.destroyAllWindows()
Results:
Without a mask
With a mask
That wraps up this post on using Python and a convolutional neural network (CNN) to implement smile recognition and mask recognition. I used my own pictures this time, mostly to make the blog a bit more meaningful (for everyday photos a beautification app is still the better choice - please forgive my face, haha…). 🌱 Above all, I hope this blog can help beginners who are just getting started with this. Finally, as an IoT newbie, I would of course be very happy to get pointers from the experts. Keep it up!