Learning rate decay: one way to speed up a learning algorithm is to gradually reduce the learning rate over time; this is called learning rate decay. During training we can adjust the learning rate according to how training is progressing.
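
As a quick illustration of the idea (a minimal sketch with hypothetical constants, separate from the flower-classification example that follows), a simple inverse-time rule shrinks the learning rate as the epoch index grows:

# Minimal sketch of learning rate decay (hypothetical constants)
# lr_t = lr_0 / (1 + decay_rate * epoch)
initial_lr = 0.1
decay_rate = 0.5
for epoch in range(5):
    lr = initial_lr / (1 + decay_rate * epoch)
    print(f"epoch {epoch}: lr = {lr:.4f}")
# prints 0.1000, 0.0667, 0.0500, 0.0400, 0.0333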

import cv2
import os
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
path='flower_photos/'

Read the dataset images and attach labels. The result is data holding the images and label holding the class indices: roses 0, daisy 1, sunflowers 2, tulips 3, dandelion 4.

def read_img(path):
    imgs=[]
    labels=[]
    cate=[path+x for x in os.listdir(path) if os.path.isdir(path+x)]
    for idx,i in enumerate(cate):
        for j in os.listdir(i):
            im = cv2.imread(i+'/'+j)
            img = cv2.resize(im, (100,100))/255.
            #print('reading the images:%s'%(i+'/'+j))
            imgs.append(img)
            labels.append(idx)
    return np.asarray(imgs,np.float32),np.asarray(labels,np.int32)
data,label=read_img(path)

Shuffle the dataset

num_example=data.shape[0] # data.shape is (3029, 100, 100, 3)
arr=np.arange(num_example)# index array 0,1,...,3028
np.random.shuffle(arr)    # shuffle the indices
data=data[arr]
label=label[arr]
print(label)
[4 3 4 ... 1 0 1]

One-hot encode the labels

def to_one_hot(label):
    return tf.one_hot(label,5)
label_oh = to_one_hot(label)  
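
A quick check of the result (the exact rows depend on the shuffled label order above):

print(label_oh.shape)       # (num_example, 5): one row per image, five classes
print(label_oh.numpy()[:2]) # e.g. the first two one-hot rows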

Split the data into an 80% training set and a 20% validation set

ratio=0.8
s=int(num_example*ratio)
x_train=data[:s]
y_train=label_oh.numpy()[:s]
x_val=data[s:]
y_val=label_oh.numpy()[s:]

I. Changing the learning rate with Keras callbacks

  • tf.keras.callbacks.LearningRateScheduler: change the learning rate dynamically according to a schedule function.
  • tf.keras.callbacks.ReduceLROnPlateau: reduce the learning rate when the monitored metric has stopped improving.

1.LearningRateScheduler

tf.keras.callbacks.LearningRateScheduler(schedule, verbose=0)

  • schedule: a function that takes the epoch index (an integer counted from 0) and returns a new learning rate (a float)
  • verbose: 0 to print no log messages, 1 to print a message whenever the learning rate is updated
def scheduler(epoch):
    # keep the learning rate constant for the first 5 epochs, then decay it exponentially
    if epoch < 5:
        return 0.001
    else:
        lr = 0.001 * tf.math.exp(0.1 * (5 - epoch))
        return lr.numpy()
    
reduce_lr = tf.keras.callbacks.LearningRateScheduler(scheduler)
model.fit(train_x, train_y, batch_size=32, epochs=100, callbacks=[reduce_lr],validation_data=(val_data, val_labels))

2.ReduceLROnPlateau

  • tf.keras.callbacks.ReduceLROnPlateau(monitor='val_accuracy', factor=0.1, patience=10, verbose=0, mode='auto', min_lr=0)

Parameters:

  • monitor: the quantity to monitor
  • factor: factor by which the learning rate is reduced each time, i.e. new_lr = lr * factor
  • patience: number of epochs with no improvement after which the learning rate is reduced
  • mode: one of 'auto', 'min', 'max'. In 'min' mode the learning rate is reduced when the monitored value has stopped decreasing; in 'max' mode, when it has stopped increasing; in 'auto' mode the direction is inferred from the name of the monitored quantity.
  • min_lr: lower bound on the learning rate
  • verbose: 0 to print no log messages, 1 to print a message whenever the learning rate is reduced
reduce_lr =  tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', patience=10, mode='auto')
model.fit(train_x, train_y, batch_size=32, epochs=100, validation_split=0.1, callbacks=[reduce_lr])

II. Changing the learning rate with an optimizer schedule

tf.keras.optimizers.schedules.InverseTimeDecay: decay the learning rate over the course of training with an inverse time schedule.

Parameters:

  • initial_learning_rate: the initial learning rate
  • decay_steps: how many steps between applications of the decay
  • decay_rate: the decay rate
  • staircase: whether to apply the decay in discrete intervals (see the two formulas below)

staircase=False:

def decayed_learning_rate(step):
    return initial_learning_rate / (1 + decay_rate * step / decay_steps)

staircase=True:

def decayed_learning_rate(step):
    return initial_learning_rate / (1 + decay_rate * floor(step / decay_steps))
# floor rounds down to the nearest integer
initial_learning_rate = 0.1
decay_steps = 1.0
decay_rate = 0.5
learning_rate_fn = tf.keras.optimizers.schedules.InverseTimeDecay(
  initial_learning_rate, decay_steps, decay_rate)

model.compile(optimizer=tf.keras.optimizers.SGD(
              learning_rate=learning_rate_fn),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(train_x, train_y, epochs=10)
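
To see what values the schedule produces, note that a LearningRateSchedule object can be called with the training step; the small check below (added here for illustration) follows the staircase=False formula above:

for step in [0, 1, 2, 3]:
    print(step, float(learning_rate_fn(step)))
# with initial_learning_rate=0.1, decay_steps=1.0, decay_rate=0.5 this prints
# roughly 0.1, 0.0667, 0.05, 0.04, i.e. 0.1 / (1 + 0.5 * step)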

Examples:

1.LearningRateScheduler

def scheduler(epoch):
    # keep the learning rate constant for the first 5 epochs, then decay it exponentially
    if epoch < 5:
        return 0.001
    else:
        lr = 0.001 * tf.math.exp(0.1 * (5 - epoch))
        return lr.numpy()
class new_re_dropout_model(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.conv1 = tf.keras.layers.Conv2D(
            filters=32,             # number of convolution kernels
            kernel_size=[3, 3],     # kernel (receptive field) size
            padding='same',         # padding strategy ('valid' or 'same')
            activation=tf.nn.relu,  # activation function
            kernel_regularizer=tf.keras.regularizers.l2(0.001)
        )
        self.pool1 = tf.keras.layers.MaxPool2D(pool_size=[2, 2], strides=2)
        self.flatten = tf.keras.layers.Flatten()
        self.drop1 = tf.keras.layers.Dropout(0.4)
        self.dense1 = tf.keras.layers.Dense(units=128,
                                            activation=tf.nn.relu,
                                            kernel_regularizer=tf.keras.regularizers.l2(0.001))
        self.drop2 = tf.keras.layers.Dropout(0.4)
        self.dense2 = tf.keras.layers.Dense(units=5)

    def call(self, inputs):
        x = self.conv1(inputs)                 
        x = self.pool1(x)                                            
        x = self.flatten(x) 
        x = self.drop1(x)
        x = self.dense1(x)  
        x = self.drop2(x)
        x = self.dense2(x)                     
        output = tf.nn.softmax(x)
        return output
    
new_re_dropout_model2 = new_re_dropout_model()
new_re_dropout_model2.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.002),
              loss = tf.keras.losses.categorical_crossentropy,
              metrics=["accuracy"])

reduce_lr = tf.keras.callbacks.LearningRateScheduler(scheduler,verbose=1)
new_re_dropout_model2.fit(x_train, y_train, epochs=10,batch_size=64, callbacks=[reduce_lr],validation_split=0.2)
Train on 2348 samples, validate on 588 samples

Epoch 00001: LearningRateScheduler reducing learning rate to 0.001.
Epoch 1/10
2348/2348 [==============================] - 10s 4ms/sample - loss: 2.8840 - accuracy: 0.2428 - val_loss: 1.7972 - val_accuracy: 0.3214

Epoch 00002: LearningRateScheduler reducing learning rate to 0.001.
Epoch 2/10
2348/2348 [==============================] - 11s 5ms/sample - loss: 1.7471 - accuracy: 0.3335 - val_loss: 1.6827 - val_accuracy: 0.3861

Epoch 00003: LearningRateScheduler reducing learning rate to 0.001.
Epoch 3/10
2348/2348 [==============================] - 11s 5ms/sample - loss: 1.6150 - accuracy: 0.3752 - val_loss: 1.5044 - val_accuracy: 0.4796

Epoch 00004: LearningRateScheduler reducing learning rate to 0.001.
Epoch 4/10
2348/2348 [==============================] - 10s 4ms/sample - loss: 1.5083 - accuracy: 0.4336 - val_loss: 1.4634 - val_accuracy: 0.4711

Epoch 00005: LearningRateScheduler reducing learning rate to 0.001.
Epoch 5/10
2348/2348 [==============================] - 10s 4ms/sample - loss: 1.4193 - accuracy: 0.4587 - val_loss: 1.3901 - val_accuracy: 0.5136

Epoch 00006: LearningRateScheduler reducing learning rate to 0.001.
Epoch 6/10
2348/2348 [==============================] - 10s 4ms/sample - loss: 1.3668 - accuracy: 0.4953 - val_loss: 1.3578 - val_accuracy: 0.5323

Epoch 00007: LearningRateScheduler reducing learning rate to 0.00090483745.
Epoch 7/10
2348/2348 [==============================] - 10s 4ms/sample - loss: 1.3017 - accuracy: 0.5336 - val_loss: 1.3276 - val_accuracy: 0.5391

Epoch 00008: LearningRateScheduler reducing learning rate to 0.0008187308.
Epoch 8/10
2348/2348 [==============================] - 10s 4ms/sample - loss: 1.2556 - accuracy: 0.5541 - val_loss: 1.3127 - val_accuracy: 0.5391

Epoch 00009: LearningRateScheduler reducing learning rate to 0.0007408182.
Epoch 9/10
2348/2348 [==============================] - 10s 4ms/sample - loss: 1.1982 - accuracy: 0.5890 - val_loss: 1.2831 - val_accuracy: 0.5612

Epoch 00010: LearningRateScheduler reducing learning rate to 0.00067032006.
Epoch 10/10
2348/2348 [==============================] - 10s 4ms/sample - loss: 1.1597 - accuracy: 0.6069 - val_loss: 1.2809 - val_accuracy: 0.5612
<tensorflow.python.keras.callbacks.History at 0x297ca85f048>

2.ReduceLROnPlateau

class new_re_dropout_model(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.conv1 = tf.keras.layers.Conv2D(
            filters=32,             # number of convolution kernels
            kernel_size=[3, 3],     # kernel (receptive field) size
            padding='same',         # padding strategy ('valid' or 'same')
            activation=tf.nn.relu,  # activation function
            kernel_regularizer=tf.keras.regularizers.l2(0.001)
        )
        self.pool1 = tf.keras.layers.MaxPool2D(pool_size=[2, 2], strides=2)
        self.flatten = tf.keras.layers.Flatten()
        self.drop1 = tf.keras.layers.Dropout(0.4)
        self.dense1 = tf.keras.layers.Dense(units=128,
                                            activation=tf.nn.relu,
                                            kernel_regularizer=tf.keras.regularizers.l2(0.001))
        self.drop2 = tf.keras.layers.Dropout(0.4)
        self.dense2 = tf.keras.layers.Dense(units=5)

    def call(self, inputs):
        x = self.conv1(inputs)                 
        x = self.pool1(x)                                            
        x = self.flatten(x) 
        x = self.drop1(x)
        x = self.dense1(x)  
        x = self.drop2(x)
        x = self.dense2(x)                     
        output = tf.nn.softmax(x)
        return output

new_re_dropout_model3 = new_re_dropout_model()
new_re_dropout_model3.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss = tf.keras.losses.categorical_crossentropy,
              metrics=["accuracy"])

reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_accuracy', factor=0.2,patience=1, mode="max",min_lr=0.0001,verbose=1)
new_re_dropout_model3.fit(x_train, y_train, epochs=10, callbacks=[reduce_lr],validation_split=0.2)
Train on 2348 samples, validate on 588 samples
Epoch 1/10
2348/2348 [==============================] - 13s 5ms/sample - loss: 2.2031 - accuracy: 0.3233 - val_loss: 1.5434 - val_accuracy: 0.4371
Epoch 2/10
2348/2348 [==============================] - 12s 5ms/sample - loss: 1.4923 - accuracy: 0.4881 - val_loss: 1.3799 - val_accuracy: 0.5561
Epoch 3/10
2336/2348 [============================>.] - ETA: 0s - loss: 1.3589 - accuracy: 0.5411
Epoch 00003: ReduceLROnPlateau reducing learning rate to 0.00020000000949949026.
2348/2348 [==============================] - 12s 5ms/sample - loss: 1.3598 - accuracy: 0.5413 - val_loss: 1.3707 - val_accuracy: 0.5425
Epoch 4/10
2348/2348 [==============================] - 13s 5ms/sample - loss: 1.2127 - accuracy: 0.6290 - val_loss: 1.3124 - val_accuracy: 0.5884
Epoch 5/10
2336/2348 [============================>.] - ETA: 0s - loss: 1.1371 - accuracy: 0.6366
Epoch 00005: ReduceLROnPlateau reducing learning rate to 0.0001.
2348/2348 [==============================] - 13s 5ms/sample - loss: 1.1373 - accuracy: 0.6367 - val_loss: 1.2780 - val_accuracy: 0.5799
Epoch 6/10
2348/2348 [==============================] - 12s 5ms/sample - loss: 1.0925 - accuracy: 0.6742 - val_loss: 1.2752 - val_accuracy: 0.5731
Epoch 7/10
2348/2348 [==============================] - 13s 6ms/sample - loss: 1.0575 - accuracy: 0.6772 - val_loss: 1.2618 - val_accuracy: 0.5986
Epoch 8/10
2348/2348 [==============================] - 14s 6ms/sample - loss: 1.0434 - accuracy: 0.6823 - val_loss: 1.2562 - val_accuracy: 0.5816
Epoch 9/10
2348/2348 [==============================] - 14s 6ms/sample - loss: 1.0083 - accuracy: 0.7095 - val_loss: 1.2482 - val_accuracy: 0.5833
Epoch 10/10
2348/2348 [==============================] - 14s 6ms/sample - loss: 0.9754 - accuracy: 0.7129 - val_loss: 1.2401 - val_accuracy: 0.5935
<tensorflow.python.keras.callbacks.History at 0x297cb30fcf8>

3.tf.keras.optimizers.schedules.InverseTimeDecay

class new_re_dropout_model(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.conv1 = tf.keras.layers.Conv2D(
            filters=32,             # number of convolution kernels
            kernel_size=[3, 3],     # kernel (receptive field) size
            padding='same',         # padding strategy ('valid' or 'same')
            activation=tf.nn.relu,  # activation function
            kernel_regularizer=tf.keras.regularizers.l2(0.001)
        )
        self.pool1 = tf.keras.layers.MaxPool2D(pool_size=[2, 2], strides=2)
        self.flatten = tf.keras.layers.Flatten()
        self.drop1 = tf.keras.layers.Dropout(0.4)
        self.dense1 = tf.keras.layers.Dense(units=128,
                                            activation=tf.nn.relu,
                                            kernel_regularizer=tf.keras.regularizers.l2(0.001))
        self.drop2 = tf.keras.layers.Dropout(0.4)
        self.dense2 = tf.keras.layers.Dense(units=5)

    def call(self, inputs):
        x = self.conv1(inputs)                 
        x = self.pool1(x)                                            
        x = self.flatten(x) 
        x = self.drop1(x)
        x = self.dense1(x)  
        x = self.drop2(x)
        x = self.dense2(x)                     
        output = tf.nn.softmax(x)
        return output

new_re_dropout_model4 = new_re_dropout_model()

initial_learning_rate = 0.01
decay_steps = 1
decay_rate = 0.5
learning_rate_fn = tf.keras.optimizers.schedules.InverseTimeDecay(
  initial_learning_rate, decay_steps, decay_rate)

new_re_dropout_model4.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=learning_rate_fn),
              loss = tf.keras.losses.categorical_crossentropy,
              metrics=["accuracy"])
new_re_dropout_model4.fit(x_train, y_train, epochs=10,validation_split=0.2)
Train on 2348 samples, validate on 588 samples
Epoch 1/10
2348/2348 [==============================] - 13s 6ms/sample - loss: 7.6401 - accuracy: 0.3037 - val_loss: 3.4360 - val_accuracy: 0.3912
Epoch 2/10
2348/2348 [==============================] - 13s 6ms/sample - loss: 3.2783 - accuracy: 0.3888 - val_loss: 3.1096 - val_accuracy: 0.4439
Epoch 3/10
2348/2348 [==============================] - 13s 6ms/sample - loss: 3.0113 - accuracy: 0.4344 - val_loss: 2.8925 - val_accuracy: 0.4354
Epoch 4/10
2348/2348 [==============================] - 13s 5ms/sample - loss: 2.7794 - accuracy: 0.4800 - val_loss: 2.7200 - val_accuracy: 0.4558
Epoch 5/10
2348/2348 [==============================] - 13s 5ms/sample - loss: 2.6162 - accuracy: 0.5077 - val_loss: 2.6006 - val_accuracy: 0.4745
Epoch 6/10
2348/2348 [==============================] - 13s 5ms/sample - loss: 2.5085 - accuracy: 0.5175 - val_loss: 2.5194 - val_accuracy: 0.4660
Epoch 7/10
2348/2348 [==============================] - 14s 6ms/sample - loss: 2.4123 - accuracy: 0.5349 - val_loss: 2.4490 - val_accuracy: 0.4966
Epoch 8/10
2348/2348 [==============================] - 14s 6ms/sample - loss: 2.3526 - accuracy: 0.5417 - val_loss: 2.3929 - val_accuracy: 0.4813
Epoch 9/10
2348/2348 [==============================] - 13s 5ms/sample - loss: 2.2809 - accuracy: 0.5605 - val_loss: 2.3447 - val_accuracy: 0.4915
Epoch 10/10
2348/2348 [==============================] - 13s 6ms/sample - loss: 2.2366 - accuracy: 0.5622 - val_loss: 2.3004 - val_accuracy: 0.4949
<tensorflow.python.keras.callbacks.History at 0x29831aca780>