Deep Learning: Introduction and Practice (Binary Classification, Multiclass Classification, Regression)
lyq1996

Binary Classification

  • Dataset: IMDB

Some takeaways:

In a binary classification problem the neural network outputs 0 or 1; there are only two possible cases, hence the name. The IMDB dataset contains 25,000 movie reviews for training (and another 25,000 for testing), with the words in each review stored in a list by index. The training data is split into data and labels: data records the reviews, and labels records whether each review is positive, e.g. 1 for a positive review and 0 for a negative one.

I have not fully figured out the layers used in training yet, so I will not explain them here.

Taking train_data[0] from the raw IMDB data as an example, it looks like:

[1, 14, 22, 16, 43, 530, 973, 1622, 1385, 65, 458, 4468, 66, 3941, 4, 173, 36, 256, 5, 25, 100, 43, 838, 112, 50, 670, 2, 9, 35, 480, 284, 5, 150, 4, 172, 112, 167, 2, 336, 385, 39, 4, 172, 4536, 1111, 17, 546, 38, 13, 447, 4, 192, 50, 16, 6, 147, 2025, 19, 14, 22, 4, 1920, 4613, 469, 4, 22, 71, 87, 12, 16, 43, 530, 38, 76, 15, 13, 1247, 4, 22, 17, 515, 17, 12, 16, 626, 18, 2, 5, 62, 386, 12, 8, 316, 8, 106, 5, 4, 2223, 5244, 16, 480, 66, 3785, 33, 4, 130, 12, 16, 38, 619, 5, 25, 124, 51, 36, 135, 48, 25, 1415, 33, 6, 22, 12, 215, 28, 77, 52, 5, 14, 407, 16, 82, 2, 8, 4, 107, 117, 5952, 15, 256, 4, 2, 7, 3766, 5, 723, 36, 71, 43, 530, 476, 26, 400, 317, 46, 7, 4, 2, 1029, 13, 104, 88, 4, 381, 15, 297, 98, 32, 2071, 56, 26, 141,
6, 194, 7486, 18, 4, 226, 22, 21, 134, 476, 26, 480, 5, 144, 30, 5535, 18, 51, 36, 28, 224, 92, 25, 104, 4, 226, 65, 16, 38, 1334, 88, 12, 16, 283, 5, 16, 4472, 113, 103, 32, 15, 16, 5345, 19, 178, 32]
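
Each number here is a word index. To recover the original text, the indices can be mapped back to words; a minimal sketch (imdb.get_word_index() returns the word-to-index mapping, and the indices are offset by 3 because 0, 1 and 2 are reserved for padding, start-of-sequence and unknown):

from keras.datasets import imdb

(train_data, train_labels), _ = imdb.load_data(num_words=10000)

# word -> index mapping shipped with the dataset
word_index = imdb.get_word_index()
# invert it to index -> word
reverse_word_index = {index: word for word, index in word_index.items()}
# shift by 3 to skip the reserved indices; unknown indices become '?'
decoded = ' '.join(reverse_word_index.get(i - 3, '?') for i in train_data[0])
print(decoded)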

Each review, then, is just a list of word indices. We convert each list into a 10,000-dimensional NumPy vector (one column per dimension). Why 10,000? Because only the 10,000 most frequent words are kept when loading the reviews. The rule is simple: a list such as [1, 3, 5, 9] becomes a vector whose elements at indices 1, 3, 5 and 9 are 1 while the remaining 9,996 elements are 0. The whole dataset is thus processed into an array of shape (25000, 10000).
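
A minimal sketch of that rule, with the dimension shrunk to 10 so the output stays readable (the real code below uses 10000):

import numpy as np

def vectorize_demo(sequences, dimension=10):
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.  # set the listed indices to 1
    return results

print(vectorize_demo([[1, 3, 5, 9]]))
# [[0. 1. 0. 1. 0. 1. 0. 0. 0. 1.]]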

Then we build a new model and train it.

This is genuinely fascinating: a review is converted into a vector, and from the 1s and 0s among its elements the model decides whether the corresponding review is positive or negative.

There is a subtle feeling to it, as if you direct a machine to do a task and it digs out some hidden relationship by itself. Meanwhile, the machine never needs to understand the meaning of any individual word; it only needs to find the relationship. Of course, that relationship has to actually exist for the predictions to be reasonably accurate.

Code:

from keras.datasets import imdb
from keras import models
from keras import layers
import numpy as np
import matplotlib.pyplot as plt


# multi-hot encode the word-index lists into fixed-size 10000-dim vectors
def vector_sequences(sequences, dimension=10000):
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.
    return results


# keep only the 10000 most frequent words
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(
    num_words=10000)
x_train = vector_sequences(train_data)
x_test = vector_sequences(test_data)

y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')

# two small hidden layers, plus a sigmoid output giving the probability
# that a review is positive
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop', loss='binary_crossentropy',
              metrics=['accuracy'])

# hold out the first 10000 samples for validation
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]

history = model.fit(partial_x_train, partial_y_train, epochs=4,
                    batch_size=512, validation_data=(x_val, y_val))
history_dict = history.history
loss_values = history_dict['loss']
val_loss_values = history_dict['val_loss']
# note: newer Keras versions name these keys 'accuracy' / 'val_accuracy'
acc = history_dict['acc']
val_acc = history_dict['val_acc']

epochs = range(1, len(loss_values) + 1)

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
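
After training, the model can be checked on the held-out test set; a minimal sketch, reusing model, x_test and y_test from the code above:

# loss and accuracy on the 25000 test reviews
print(model.evaluate(x_test, y_test))

# predicted probability that each review is positive
predictions = model.predict(x_test)
print(predictions[0])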

Multiclass Classification

Code:

import copy
from keras.datasets import reuters
from keras import models
from keras import layers
from keras.utils.np_utils import to_categorical
import numpy as np
import matplotlib.pyplot as plt


# multi-hot encode the word-index lists into fixed-size 10000-dim vectors
def vectorize_sequences(sequences, dimension=10000):
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.
    return results


# load data
(train_data, train_labels), (test_data, test_labels) = reuters.load_data(
    num_words=10000)

# one-hot encode the inputs and the 46 topic labels
x_train = vectorize_sequences(train_data)
x_test = vectorize_sequences(test_data)

y_train = to_categorical(train_labels)
y_test = to_categorical(test_labels)

# create a new model: 46 output units, one per topic, with softmax
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))
model.compile(optimizer='rmsprop', loss='categorical_crossentropy',
              metrics=['accuracy'])

# x_val as validation data, y_val as validation labels
x_val = x_train[:1000]
partial_x_train = x_train[1000:]
y_val = y_train[:1000]
partial_y_train = y_train[1000:]

# fit model
history = model.fit(partial_x_train, partial_y_train, epochs=9,
                    batch_size=512, validation_data=(x_val, y_val))
history_dict = history.history
loss_values = history_dict['loss']
val_loss_values = history_dict['val_loss']
# note: newer Keras versions name these keys 'accuracy' / 'val_accuracy'
acc = history_dict['acc']
val_acc = history_dict['val_acc']

# test accuracy
print(model.evaluate(x_test, y_test))

# random baseline: the accuracy a shuffled copy of the labels would get
test_labels_copy = copy.copy(test_labels)
np.random.shuffle(test_labels_copy)
hits_array = np.array(test_labels) == np.array(test_labels_copy)
print(float(np.sum(hits_array)) / len(test_labels))

# predict: argmax over the 46 class probabilities gives the topic
prediction = model.predict(x_test)
print(len(prediction))
print(np.argmax(prediction[0]))
print(test_labels[0])

# plot training curves
epochs = range(1, len(loss_values) + 1)

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
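
Incidentally, one-hot encoding the labels with to_categorical is not the only option: Keras also accepts integer labels directly if the loss is switched to sparse_categorical_crossentropy. A minimal sketch of the alternative compile/fit step, reusing the model and data from the code above:

# integer labels (0..45) instead of one-hot vectors
model.compile(optimizer='rmsprop', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, train_labels, epochs=9, batch_size=512)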

Regression

K-fold validation

Because this dataset is small, we instantiate k identical models; each one is trained on k-1 of the partitions and validated on the remaining partition, and finally the k validation scores are averaged.

K-fold implementation:

import numpy as np
from keras import models
from keras import layers

# train_data / train_labels are assumed to be the small regression dataset
# referred to above (e.g. keras.datasets.boston_housing), already loaded


# build a fresh, compiled regression model; this architecture is an assumed
# but typical setup: two hidden layers and a single linear output unit
def build_model():
    model = models.Sequential()
    model.add(layers.Dense(64, activation='relu',
                           input_shape=(train_data.shape[1],)))
    model.add(layers.Dense(64, activation='relu'))
    model.add(layers.Dense(1))
    model.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
    return model


k = 4

# split the samples into 4 equal partitions
num_val_samples = len(train_data) // k
num_epochs = 500
all_mae_histories = []

for i in range(k):
    print('processing fold #', i)
    # partition i serves as the validation slice for this fold
    val_data = train_data[i * num_val_samples: (i+1) * num_val_samples]
    val_labels = train_labels[i * num_val_samples: (i+1) * num_val_samples]

    # concatenate everything except the validation slice as training data
    partial_train_data = np.concatenate(
        [train_data[:i * num_val_samples],
         train_data[(i+1) * num_val_samples:]], axis=0)
    partial_train_labels = np.concatenate(
        [train_labels[:i * num_val_samples],
         train_labels[(i+1) * num_val_samples:]], axis=0)

    # instantiate an identical fresh model for each fold, then fit it
    model = build_model()
    history = model.fit(partial_train_data, partial_train_labels,
                        validation_data=(val_data, val_labels),
                        epochs=num_epochs, batch_size=1, verbose=0)
    # note: newer Keras versions name this key 'val_mae'
    mae_history = history.history['val_mean_absolute_error']
    all_mae_histories.append(mae_history)

# per-epoch validation MAE averaged over the k folds
average_mae_history = [np.mean([x[i] for x in all_mae_histories])
                       for i in range(num_epochs)]
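
To see where the model starts overfitting, we can plot the averaged validation MAE per epoch, in the same style as the earlier plots; a minimal sketch:

import matplotlib.pyplot as plt

plt.plot(range(1, len(average_mae_history) + 1), average_mae_history)
plt.xlabel('Epochs')
plt.ylabel('Validation MAE')
plt.show()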

Summary

  • Raw data usually needs to be preprocessed before it is fed into a neural network.
  • If the features take values in different ranges, preprocess the data so that each feature is scaled independently (a sketch follows this list).
  • As training goes on, a neural network will eventually overfit and perform worse on data it has never seen.
  • If you do not have much training data, use a small network with only one or two hidden layers to avoid severe overfitting.
  • If the data is divided into many categories, intermediate layers that are too small can create an information bottleneck.
  • Regression uses different loss functions and evaluation metrics than classification.
  • When there is very little data, K-fold validation helps evaluate a model reliably.
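
For the second point, the usual recipe is feature-wise standardization using statistics computed on the training data only; a minimal sketch, assuming train_data and test_data are the raw feature arrays of the regression example:

# scale each feature (column) to zero mean and unit variance
mean = train_data.mean(axis=0)
std = train_data.std(axis=0)
train_data = (train_data - mean) / std
test_data = (test_data - mean) / std  # note: reuse the training-set statistics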