AI Practice - TensorFlow Notes (MOOC) - Lecture 4: Extending the Network Template

[TOC]

Building a Neural Network with tf.keras: The Six-Step Template

The six steps

1) import: import the required libraries and packages
2) x_train, y_train: load the dataset, build your own dataset, data augmentation
3) model = tf.keras.models.Sequential
/ class MyModel(Model), model = MyModel(): define the model (a class-based sketch follows this list)
4) model.compile: configure the model
5) model.fit: train the model; resume training from checkpoints
6) model.summary: extract parameters, visualize acc/loss, run forward inference in an application
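For reference, a minimal class-based sketch equivalent to the Sequential model used throughout this lecture (the class name MnistModel is chosen here purely for illustration):

```python
import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.layers import Dense, Flatten


class MnistModel(Model):  # hypothetical name, for illustration only
    def __init__(self):
        super(MnistModel, self).__init__()
        self.flatten = Flatten()
        self.d1 = Dense(128, activation='relu')
        self.d2 = Dense(10, activation='softmax')

    def call(self, x):
        # forward propagation: flatten, hidden layer, output layer
        x = self.flatten(x)
        x = self.d1(x)
        y = self.d2(x)
        return y


model = MnistModel()
```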

Goals of this lecture

① Build your own dataset to solve problems in your own domain
② Use data augmentation to expand the dataset
③ Use checkpoints to save and restore models and resume training
④ Extract parameters and write them to a text file
⑤ Visualize acc/loss to inspect training progress
⑥ Build an application that recognizes objects in images

Review: code from the previous lecture

p14_mnist_sequential.py

```python
# import
import tensorflow as tf

# train test
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# model.Sequential
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

# model.compile
model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False), metrics=['sparse_categorical_accuracy'])

# model.fit
model.fit(x_train, y_train, batch_size=32, epochs=5, validation_data=(x_test, y_test), validation_freq=1)

# model.summary
model.summary()
```

1 - Build Your Own Dataset: Solving Domain-Specific Problems

Training-set images: ...\class4\MNIST_FC\mnist_image_label\mnist_train_jpg_60000

Training-set labels: ...\class4\MNIST_FC\mnist_image_label\mnist_train_jpg_60000.txt

Test-set images: ...\class4\MNIST_FC\mnist_image_label\mnist_test_jpg_10000

Test-set labels: ...\class4\MNIST_FC\mnist_image_label\mnist_test_jpg_10000.txt

[Figures: sample images and label file from the self-made dataset]

The training set has 60,000 images and the test set 10,000. Each is a grayscale image with white strokes on a black background, 28 rows by 28 columns of pixels; every pixel is an integer from 0 to 255, where 0 means pure black and 255 pure white.
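A quick sanity check on the format can look like the following sketch (the file name 0_5.jpg comes from the label file listing below; the directory is a placeholder for your local path):

```python
from PIL import Image
import numpy as np

# placeholder path: point this at one image in your training folder
img = Image.open('./mnist_image_label/mnist_train_jpg_60000/0_5.jpg')
arr = np.array(img.convert('L'))  # convert to an 8-bit grayscale array

# expect: (28, 28) uint8, with values in 0..255
print(arr.shape, arr.dtype, arr.min(), arr.max())
```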

In the previous lecture, importing the mnist dataset showed this data structure:

```python
# load the training and test sets
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# x_train.shape: (60000, 28, 28)
# y_train.shape: (60000,)
# x_test.shape: (10000, 28, 28)
# y_test.shape: (10000,)
```

def generateds(image path, label file) defines the dataset.

Label file mnist_train_jpg_xxxxx.txt:

value[0] value[1]
0_5.jpg 5
1_0.jpg 0
2_4.jpg 4
3_1.jpg 1
4_9.jpg 9
```python
def generateds(path, txt):
    f = open(txt, 'r')
    contents = f.readlines()  # read the label file line by line
    f.close()
    x, y_ = [], []
    for content in contents:
        value = content.split()           # split on whitespace: [file name, label]
        img_path = path + value[0]        # full path to the image
        img = Image.open(img_path)        # open the image
        img = np.array(img.convert('L'))  # convert the image to a grayscale array
        img = img / 255.                  # normalize to 0~1
        x.append(img)
        y_.append(value[1])
        print('loading : ' + content)     # progress message

    x = np.array(x)
    y_ = np.array(y_)
    y_ = y_.astype(np.int64)
    return x, y_
```

Complete code for the self-built dataset

P8_fashion_train_ex1.py

```python
### 1-import
import tensorflow as tf
####################
from PIL import Image
import numpy as np
import os

train_path = './fashion_image_label/fashion_train_jpg_60000/'
train_txt = './fashion_image_label/fashion_train_jpg_60000.txt'
x_train_savepath = './fashion_image_label/fashion_x_train.npy'
y_train_savepath = './fashion_image_label/fashion_y_train.npy'

test_path = './fashion_image_label/fashion_test_jpg_10000/'
test_txt = './fashion_image_label/fashion_test_jpg_10000.txt'
x_test_savepath = './fashion_image_label/fashion_x_test.npy'
y_test_savepath = './fashion_image_label/fashion_y_test.npy'


def generateds(path, txt):
    f = open(txt, 'r')
    contents = f.readlines()  # read the label file line by line
    f.close()
    x, y_ = [], []
    for content in contents:
        value = content.split()  # split on whitespace: [file name, label]
        img_path = path + value[0]
        img = Image.open(img_path)
        img = np.array(img.convert('L'))
        img = img / 255.
        x.append(img)
        y_.append(value[1])
        print('loading : ' + content)

    x = np.array(x)
    y_ = np.array(y_)
    y_ = y_.astype(np.int64)
    return x, y_


### 2-train test
if os.path.exists(x_train_savepath) and os.path.exists(y_train_savepath) and os.path.exists(
        x_test_savepath) and os.path.exists(y_test_savepath):
    print('-------------Load Datasets-----------------')
    x_train_save = np.load(x_train_savepath)
    y_train = np.load(y_train_savepath)
    x_test_save = np.load(x_test_savepath)
    y_test = np.load(y_test_savepath)
    x_train = np.reshape(x_train_save, (len(x_train_save), 28, 28))
    x_test = np.reshape(x_test_save, (len(x_test_save), 28, 28))
else:
    print('-------------Generate Datasets-----------------')
    x_train, y_train = generateds(train_path, train_txt)
    x_test, y_test = generateds(test_path, test_txt)

    print('-------------Save Datasets-----------------')
    x_train_save = np.reshape(x_train, (len(x_train), -1))
    x_test_save = np.reshape(x_test, (len(x_test), -1))
    np.save(x_train_savepath, x_train_save)
    np.save(y_train_savepath, y_train)
    np.save(x_test_savepath, x_test_save)
    np.save(y_test_savepath, y_test)
####################

### 3-models.Sequential
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

### 4-model.compile
model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False), metrics=['sparse_categorical_accuracy'])

### 5-model.fit
model.fit(x_train, y_train, batch_size=32, epochs=5, validation_data=(x_test, y_test), validation_freq=1)

### 6-model.summary
model.summary()
```

On the first run, generateds() builds the dataset and saves it in .npy format; once the files are written, training starts, and accuracy climbs as the epochs go by.

After the first run completes, running the script again loads the saved dataset directly and goes straight to training.

2 - Data Augmentation: Expanding the Dataset

Image augmentation means deforming the images; it helps the model cope with distortions caused by different shooting angles.

The TensorFlow 2 data-augmentation functions:

```python
image_gen_train = tf.keras.preprocessing.image.ImageDataGenerator(<augmentation options>)
image_gen_train.fit(x_train)
```
Common augmentation options:

Rescale factor: rescale = every pixel value is multiplied by the given factor
Random rotation: rotation_range = range of random rotation, in degrees
Width shift: width_shift_range = range of random horizontal shift
Height shift: height_shift_range = range of random vertical shift
Horizontal flip: horizontal_flip = whether to flip horizontally at random
Random zoom: zoom_range = random zoom range [1-n, 1+n]

Example:

```python
image_gen_train = ImageDataGenerator(
    rescale=1. / 255,        # rescale raw pixel values from 0~255 to 0~1
    rotation_range=45,       # random rotation up to 45 degrees
    width_shift_range=.15,   # random horizontal shift in [-0.15, 0.15)
    height_shift_range=.15,  # random vertical shift in [-0.15, 0.15)
    horizontal_flip=True,    # random horizontal flip
    zoom_range=0.5           # random zoom in [1-50%, 1+50%]
)
image_gen_train.fit(x_train)
```

Because image_gen_train.fit() expects four-dimensional input, x_train must be reshaped first:

```python
x_train = x_train.reshape(x_train.shape[0], 28, 28, 1)
# (60000, 28, 28) → (60000, 28, 28, 1)
# the trailing 1 is the single channel: grayscale

model.fit(x_train, y_train, batch_size=32, ...)
# becomes
model.fit(image_gen_train.flow(x_train, y_train, batch_size=32), ...)
```

1. model.fit(x_train, y_train, batch_size=32, ...) becomes model.fit(image_gen_train.flow(x_train, y_train, batch_size=32), ...);
2. the augmentation function requires 4-dimensional input, handled by the reshape;
3. if you get an error about a missing scipy library, pip install scipy fixes it.

Data augmentation code

p13_fashion_train_ex2.py

```python
### 1-import
import tensorflow as tf

####################
from tensorflow.keras.preprocessing.image import ImageDataGenerator
####################

### 2-train test
fashion = tf.keras.datasets.fashion_mnist
(x_train, y_train), (x_test, y_test) = fashion.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

####################
x_train = x_train.reshape(x_train.shape[0], 28, 28, 1)  # add a channel dimension so the data matches the network input

image_gen_train = ImageDataGenerator(
    rescale=1. / 1.,         # for raw images, a denominator of 255 rescales to 0~1 (the data here is already normalized)
    rotation_range=45,       # random rotation up to 45 degrees
    width_shift_range=.15,   # random horizontal shift
    height_shift_range=.15,  # random vertical shift
    horizontal_flip=True,    # random horizontal flip
    zoom_range=0.5           # random zoom of up to 50%
)
image_gen_train.fit(x_train)
####################

### 3-models.Sequential
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

### 4-models.compile
model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False), metrics=['sparse_categorical_accuracy'])

### 5-model.fit
####################
model.fit(image_gen_train.flow(x_train, y_train, batch_size=32), epochs=5, validation_data=(x_test, y_test), validation_freq=1)
####################

### 6-model.summary
model.summary()
```

On small datasets, augmentation improves the model's generalization, and the benefit shows up when the model is actually deployed.

On the standard MNIST dataset, accuracy alone will not reveal the effect.

3 - Checkpointing: Saving and Restoring Models to Resume Training

Loading a model

load_weights(path to the checkpoint file)

```python
checkpoint_save_path = "./checkpoint/mnist.ckpt"
if os.path.exists(checkpoint_save_path + '.index'):
    print('-------------Load the model-----------------')
    model.load_weights(checkpoint_save_path)
```

Saving a model

TensorFlow's callback mechanism saves the parameters (or the whole network) directly:

```python
cp_callback = tf.keras.callbacks.ModelCheckpoint(
    filepath=<checkpoint file path>,
    save_weights_only=True/False,
    monitor='val_loss',  # val_loss or loss
    save_best_only=True/False)
history = model.fit(x_train, y_train, batch_size=32, epochs=5,
                    validation_data=(x_test, y_test), validation_freq=1,
                    callbacks=[cp_callback])
```

Note: monitor combined with save_best_only keeps only the best model, for example the model with the lowest training loss, the lowest test loss, the highest training accuracy, or the highest test accuracy; a sketch follows the snippet below.

```python
cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_save_path, save_weights_only=True, save_best_only=True)

history = model.fit(x_train, y_train, batch_size=32, epochs=5, validation_data=(x_test, y_test), validation_freq=1, callbacks=[cp_callback])
```
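For instance, a minimal sketch that keeps only the checkpoint with the highest validation accuracy (the monitor name must match the metric configured in model.compile; mode='max' is set explicitly because higher accuracy is better):

```python
# a sketch: save only the weights that achieve the best validation accuracy so far
cp_callback = tf.keras.callbacks.ModelCheckpoint(
    filepath=checkpoint_save_path,
    save_weights_only=True,
    monitor='val_sparse_categorical_accuracy',  # metric name from model.compile
    mode='max',            # higher accuracy is better
    save_best_only=True)
```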

Checkpoint-resume code

p16_fashion_train_ex3.py

```python
### 1-import
import tensorflow as tf
####################
import os
####################

### 2-train test
fashion = tf.keras.datasets.fashion_mnist
(x_train, y_train), (x_test, y_test) = fashion.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

### 3-models.Sequential
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

### 4-models.compile
model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False), metrics=['sparse_categorical_accuracy'])

####################
checkpoint_save_path = "./checkpoint/fashion.ckpt"
if os.path.exists(checkpoint_save_path + '.index'):
    print('-------------load the model-----------------')
    model.load_weights(checkpoint_save_path)

cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_save_path, save_weights_only=True, save_best_only=True)
####################

### 5-model.fit
history = model.fit(x_train, y_train, batch_size=32, epochs=5, validation_data=(x_test, y_test), validation_freq=1, callbacks=[cp_callback])

### 6-model.summary
model.summary()
```

The first run creates the checkpoint folder; running the script again resumes training from the previous run's results.

4 - Parameter Extraction: Writing Weights to a Text File

model.trainable_variables returns the model's trainable parameters.

Printing it directly abbreviates much of the data with ellipses.

Set the print format with np.set_printoptions(threshold = size above which output is abbreviated):

```python
np.set_printoptions(threshold=np.inf)  # np.inf means no limit: print everything
```

```python
np.set_printoptions(
    precision=<number of decimal places, rounded>,
    threshold=<print all elements if the array has no more than this many;
               otherwise print an abbreviated array with an ellipsis in the middle>)

>>> np.set_printoptions(precision=5)
>>> print(np.array([1.123456789]))
[1.12346]
>>> np.set_printoptions(threshold=5)
>>> print(np.arange(10))
[0 1 2 ... 7 8 9]
```

Note: a larger precision shows more decimal places; to print every array element, use threshold=np.inf (older NumPy versions also accepted threshold=np.nan for this, but recent versions reject it).

```python
print(model.trainable_variables)
file = open('./weights.txt', 'w')
for v in model.trainable_variables:
    file.write(str(v.name) + '\n')
    file.write(str(v.shape) + '\n')
    file.write(str(v.numpy()) + '\n')
file.close()
```

Parameter extraction added on top of the checkpoint-resume code

p19_fashion_train_ex4.py

```python
### 1-import
import tensorflow as tf
import os
####################
import numpy as np

np.set_printoptions(threshold=np.inf)
####################

### 2-train test
fashion = tf.keras.datasets.fashion_mnist
(x_train, y_train), (x_test, y_test) = fashion.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

### 3-models.Sequential
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

### 4-models.compile
model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False), metrics=['sparse_categorical_accuracy'])

checkpoint_save_path = "./checkpoint/fashion.ckpt"
if os.path.exists(checkpoint_save_path + '.index'):
    print('-------------load the model-----------------')
    model.load_weights(checkpoint_save_path)

cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_save_path,
                                                 save_weights_only=True,
                                                 save_best_only=True)

### 5-models.fit
history = model.fit(x_train, y_train, batch_size=32, epochs=5, validation_data=(x_test, y_test), validation_freq=1,
                    callbacks=[cp_callback])

### 6-models.summary
model.summary()

####################
print(model.trainable_variables)
file = open('./weights.txt', 'w')
for v in model.trainable_variables:
    file.write(str(v.name) + '\n')
    file.write(str(v.shape) + '\n')
    file.write(str(v.numpy()) + '\n')
file.close()
####################
```
[Figure: parameter extraction, printed results]

5 - acc/loss Visualization: Inspecting Training Progress

```python
history = model.fit(
    <training data>,
    <training labels>,
    batch_size=<batch size>,
    epochs=<number of epochs>,
    validation_split=<fraction of the training data held out for validation>,
    validation_data=<validation set>,
    validation_freq=<how often, in epochs, to run validation>)
```
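validation_split and validation_data are alternative ways of providing validation data; a minimal sketch using validation_split instead of an explicit test set (the 20% hold-out here is an arbitrary choice):

```python
# a sketch: hold out the last 20% of the training data for validation,
# instead of passing an explicit validation_data set
history = model.fit(x_train, y_train,
                    batch_size=32, epochs=5,
                    validation_split=0.2,
                    validation_freq=1)
```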

While model.fit runs the training loop, it also records:

loss: training-set loss
val_loss: test-set loss
sparse_categorical_accuracy: training-set accuracy
val_sparse_categorical_accuracy: test-set accuracy

These can be extracted from history.history:

```python
acc = history.history['sparse_categorical_accuracy']
val_acc = history.history['val_sparse_categorical_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
```

Plotting code:

```python
acc = history.history['sparse_categorical_accuracy']
val_acc = history.history['val_sparse_categorical_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']

plt.subplot(1, 2, 1)  # split the figure into 1 row, 2 columns; draw in the 1st column
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.title('Training and Validation Accuracy')
plt.legend()

plt.subplot(1, 2, 2)  # split the figure into 1 row, 2 columns; draw in the 2nd column
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.title('Training and Validation Loss')
plt.legend()

plt.show()
```

The acc/loss visualization adds the matplotlib import and the plotting code on top of Part 4.

p23_fashion_train_ex5.py

```python
### 1-import
import tensorflow as tf
import os
import numpy as np
####################
from matplotlib import pyplot as plt
####################

np.set_printoptions(threshold=np.inf)

### 2-train test
fashion = tf.keras.datasets.fashion_mnist
(x_train, y_train), (x_test, y_test) = fashion.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

### 3-models.Sequential
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

### 4-models.compile
model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False), metrics=['sparse_categorical_accuracy'])

checkpoint_save_path = "./checkpoint/fashion.ckpt"
if os.path.exists(checkpoint_save_path + '.index'):
    print('-------------load the model-----------------')
    model.load_weights(checkpoint_save_path)

cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_save_path, save_weights_only=True, save_best_only=True)

### 5-models.fit
history = model.fit(x_train, y_train, batch_size=32, epochs=5, validation_data=(x_test, y_test), validation_freq=1, callbacks=[cp_callback])

### 6-models.summary
model.summary()

print(model.trainable_variables)
file = open('./weights.txt', 'w')
for v in model.trainable_variables:
    file.write(str(v.name) + '\n')
    file.write(str(v.shape) + '\n')
    file.write(str(v.numpy()) + '\n')
file.close()

############################################### show ###############################################

# plot the acc and loss curves for the training and validation sets
acc = history.history['sparse_categorical_accuracy']
val_acc = history.history['val_sparse_categorical_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']

plt.subplot(1, 2, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.title('Training and Validation Accuracy')
plt.legend()

plt.subplot(1, 2, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.title('Training and Validation Loss')
plt.legend()

plt.show()
```
[Figure: acc/loss visualization]

6 - Application: Recognizing Objects in Images

Input a handwritten digit image:

[Figure: input image]

Output the recognition result:

[Figure: recognition result]

Run forward propagation to serve the application:

```python
predict(<input features>, batch_size=<integer>)  # returns the forward-propagation result
```

Notes on the predict arguments (an example follows):
(1) x: input data, a NumPy array (or a list of NumPy arrays if the model has multiple inputs);
(2) batch_size: an integer; because of how GPUs work, batch_size is best chosen as 8, 16, 32, 64, ...; it defaults to 32 if unspecified;
(3) verbose: logging mode, 0 or 1;
(4) steps: total number of steps (batches of samples) before prediction is declared finished; defaults to None;
(5) returns: a NumPy array of predictions (or a list of arrays).
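For instance, a one-line sketch with an explicit batch size and silenced logging (x_predict is assumed to already carry a batch dimension, as in the application code below):

```python
result = model.predict(x_predict, batch_size=32, verbose=0)
```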

```python
# 1 - rebuild the model (forward propagation)
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')])

# 2 - load the saved parameters
model.load_weights(model_save_path)

# 3 - predict
result = model.predict(x_predict)
```

Example code:

p27_fashion_app.py

```python
from PIL import Image
import numpy as np
import tensorflow as tf

type = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

model_save_path = './checkpoint/fashion.ckpt'
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

model.load_weights(model_save_path)

preNum = int(input("input the number of test pictures:"))
for i in range(preNum):
    image_path = input("the path of test picture:")
    img = Image.open(image_path)
    img = img.resize((28, 28), Image.ANTIALIAS)  # note: renamed Image.LANCZOS in newer Pillow versions
    img_arr = np.array(img.convert('L'))

    img_arr = 255 - img_arr  # invert colors: each pixel becomes 255 minus its current gray value

    img_arr = img_arr / 255.0
    x_predict = img_arr[tf.newaxis, ...]  # add a batch dimension

    result = model.predict(x_predict)
    pred = tf.argmax(result, axis=1)
    print('\n')
    print(type[int(pred)])
```
Author: HibisciDai
Link: http://hibiscidai.com/2023/02/16/人工智能实践-Tensorflow笔记-MOOC-第四讲网络八股扩展/
License: unless otherwise noted, all posts on this blog are licensed under CC BY-NC-SA 4.0. Please credit HibisciDai when reposting.