RL实战——用DQN/SAC打造CartPole与LunarLander

📌 本文为强化学习系列第9篇,系统讲解Gymnasium环境、CartPole的DQN解法与LunarLander的SAC解法,配有完整可运行代码、训练曲线分析与实战诊断指南。


1. OpenAI Gym / Gymnasium 环境介绍

1.1 什么是Gymnasium?

Gymnasium(原OpenAI Gym)是强化学习的标准试验场,它定义了智能体与环境交互的统一接口:

┌─────────────────────────────────────────────────────────┐
│                      强化学习交互循环                     │
│                                                         │
│   智能体(Agent)  ──动作a──▶  环境(Environment)          │
│       ▲                        │                       │
│       │                        ▼                       │
│   观测o, 奖励r          下一状态s', 奖励r, 完成信号done  │
│                                                         │
└─────────────────────────────────────────────────────────┘

核心API——几行代码走天下:

import gymnasium as gym

env = gym.make("CartPole-v1")              # 创建环境
observation, info = env.reset()            # 重置环境,获取初始观测
action = env.action_space.sample()          # 随机采样一个动作(实际中由智能体的策略给出)
observation, reward, terminated, truncated, info = env.step(action)  # 执行一步
env.close()

1.2 常用环境速查表

环境名 类型 观测空间 动作空间 难度 适用算法
CartPole-v1 离散 Box(4,) Discrete(2) ⭐ DQN, PPO
LunarLander-v2 离散 Box(8,) Discrete(4) ⭐⭐ DQN, PPO
Pendulum-v1 连续 Box(3,) Box(1,) ⭐⭐ SAC, TD3, PPO
HalfCheetah-v4 连续 Box(17,) Box(6,) ⭐⭐⭐ SAC, TD3
Humanoid-v4 连续 Box(376,) Box(17,) ⭐⭐⭐⭐ PPO, SAC

💡 Gymnasium vs Gym:Gymnasium是OpenAI Gym的维护分支,2022年由Farama Foundation从Gym分叉并接手维护(Gym本身已停止更新);API与Gym 0.26+一致(reset()返回(obs, info),step()返回5个值),新项目建议直接使用gymnasium

1.3 关键概念一览

Episode(回合):从reset()到terminated/truncated==True的全过程
Step(步):一次step()调用
Return(累计奖励):一个Episode内所有reward的累加和
Observation(观测):环境返回给智能体的状态信息
Action:智能体的决策输出
Reward:环境对动作的即时反馈信号
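
把上面的接口和概念串起来,下面是一段最小示意代码(用随机策略代替智能体,仅为演示):跑完一个Episode,统计Step数并累加得到Return。

import gymnasium as gym

env = gym.make("CartPole-v1")
observation, info = env.reset(seed=42)      # 固定随机种子,便于复现

episode_return = 0.0                        # Return = 本Episode所有reward之和
steps = 0
while True:
    action = env.action_space.sample()      # 随机动作,实际中由智能体的策略给出
    observation, reward, terminated, truncated, info = env.step(action)
    episode_return += reward
    steps += 1
    if terminated or truncated:             # Episode结束(失败终止或到达步数上限)
        break

print(f"本Episode共 {steps} 步,Return = {episode_return}")
env.close()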

2. CartPole 任务详解

2.1 任务描述

CartPole(倒立摆)是强化学习的"Hello World"——小车可在轨道上自由移动,上面通过铰链连接一根竿子,目标是让竿子保持直立。

        ┌─┐
        │ │  ← 竿子(pole)
        │ │
   ─────┴─┴────  ← 小车(cart)
  ◄───────●──────▶  小车可左右移动
═══════════════════
    轨道(rail)

物理参数:

参数 数值
杆半长 0.5 m(代码中length=0.5)
小车质量 1.0 kg
杆质量 0.1 kg
摩擦 无(轨道与铰链均视为无摩擦)
重力 g=9.8 m/s²

2.2 状态空间 / 动作空间 / 奖励机制

# 状态空间(4维连续Box)
# [小车位置, 小车速度, 杆角度, 杆角速度]
observation = {
    "cart_position":  [-4.8,  +4.8],   # 米
    "cart_velocity":  [-Inf,  +Inf],    # 米/秒
    "pole_angle":     [-0.418, +0.418], # 弧度(约±24°)
    "pole_velocity":  [-Inf,  +Inf]     # 弧度/秒
}
# 终止条件:|pole_angle| > 0.209 rad(约12°)或 |cart_position| > 2.4 m 时episode提前结束

# 动作空间(2维离散)
action = Discrete(2)
#   Action 0: 小车向左移动
#   Action 1: 小车向右移动
# (施加恒定推力 ±10N)

# 奖励机制
reward = 1.0  # 每存活一步得1分!
# 最大步数限制:CartPole-v1 = 500步
# 成功标准:连续100个Episode平均reward ≥ 475
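
上面的数值可以直接打印空间定义来核对(示意代码,打印格式随gymnasium版本略有差异):

import gymnasium as gym

env = gym.make("CartPole-v1")
print(env.observation_space)          # 输出形如 Box([-4.8 -inf -0.418 -inf], [4.8 inf 0.418 inf], (4,), float32)
print(env.action_space)               # Discrete(2)
print(env.spec.max_episode_steps)     # 500(单回合最大步数)
env.close()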

2.3 DQN 解决方案(完整可运行代码)

"""
CartPole-v1 + DQN 完整训练代码
依赖:pip install gymnasium torch numpy
训练目标:连续100个Episode平均reward ≥ 475(单回合满分500)
"""

import gymnasium as gym
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
from collections import deque, namedtuple
import random
import matplotlib.pyplot as plt
from datetime import datetime

# ==================== 超参数 ====================
GAMMA = 0.99              # 折扣因子
EPSILON_START = 1.0        # 初始探索率
EPSILON_END = 0.01         # 最小探索率
EPSILON_DECAY = 500        # 探索率衰减到最小值所需的更新步数
TARGET_UPDATE = 10         # 目标网络更新频率(步)
BATCH_SIZE = 64            # 批量大小
MEMORY_SIZE = 10000        # 经验回放池大小
LR = 1e-3                  # 学习率
NUM_EPISODES = 600         # 训练Episode数

# ==================== 经验回放 ====================
Transition = namedtuple('Transition',
                        ['state', 'action', 'reward', 'next_state', 'done'])

class ReplayBuffer:
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def push(self, *args):
        self.buffer.append(Transition(*args))

    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

# ==================== Q网络定义 ====================
class DQN(nn.Module):
    """
    输入:状态向量(4维)
    输出:Q值(每个动作一个Q值)
    """
    def __init__(self, state_dim, action_dim, hidden_dim=128):
        super(DQN, self).__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, action_dim)
        )

    def forward(self, x):
        return self.net(x)

# ==================== Agent ====================
class DQNAgent:
    def __init__(self, state_dim, action_dim):
        self.action_dim = action_dim
        self.epsilon = EPSILON_START

        # 主网络 & 目标网络
        self.q_net = DQN(state_dim, action_dim)
        self.target_net = DQN(state_dim, action_dim)
        self.target_net.load_state_dict(self.q_net.state_dict())
        self.target_net.eval()

        self.optimizer = optim.Adam(self.q_net.parameters(), lr=LR)
        self.memory = ReplayBuffer(MEMORY_SIZE)
        self.steps = 0

    def select_action(self, state, training=True):
        """ε-贪心策略选择动作"""
        if training and random.random() < self.epsilon:
            return random.randint(0, self.action_dim - 1)

        with torch.no_grad():
            state_t = torch.FloatTensor(state).unsqueeze(0)
            q_values = self.q_net(state_t)
            return q_values.argmax(dim=1).item()

    def update_epsilon(self):
        """线性衰减探索率"""
        self.epsilon = max(EPSILON_END,
                          EPSILON_START - (EPSILON_START - EPSILON_END) * self.steps / EPSILON_DECAY)

    def store(self, *args):
        self.memory.push(*args)

    def update(self):
        if len(self.memory) < BATCH_SIZE:
            return 0.0

        transitions = self.memory.sample(BATCH_SIZE)
        batch = Transition(*zip(*transitions))

        # 构建张量
        states = torch.FloatTensor(np.array(batch.state))
        actions = torch.LongTensor(batch.action).unsqueeze(1)
        rewards = torch.FloatTensor(batch.reward)
        next_states = torch.FloatTensor(np.array(batch.next_state))
        dones = torch.FloatTensor(batch.done).float()

        # 计算当前Q值
        q_values = self.q_net(states).gather(1, actions).squeeze(1)

        # 计算目标Q值(Double DQN)
        with torch.no_grad():
            next_actions = self.q_net(next_states).argmax(1)
            next_q_values = self.target_net(next_states).gather(1, next_actions.unsqueeze(1)).squeeze(1)
            target_q = rewards + GAMMA * next_q_values * (1 - dones)

        # 优化
        loss = nn.MSELoss()(q_values, target_q)
        self.optimizer.zero_grad()
        loss.backward()
        # 梯度裁剪,防止梯度爆炸
        torch.nn.utils.clip_grad_norm_(self.q_net.parameters(), max_norm=1.0)
        self.optimizer.step()

        # 更新目标网络
        self.steps += 1
        if self.steps % TARGET_UPDATE == 0:
            self.target_net.load_state_dict(self.q_net.state_dict())

        return loss.item()

# ==================== 训练循环 ====================
def train_cartpole():
    env = gym.make("CartPole-v1")
    state_dim = env.observation_space.shape[0]
    action_dim = env.action_space.n

    agent = DQNAgent(state_dim, action_dim)

    reward_history = deque(maxlen=100)   # 最近100个Episode的reward,用于判定是否达标
    all_rewards = []                     # 全程记录,用于绘制训练曲线
    loss_history = []

    print("=" * 50)
    print("CartPole-v1 + DQN 训练开始")
    print("=" * 50)

    for episode in range(NUM_EPISODES):
        state, info = env.reset()
        total_reward = 0
        episode_loss = []

        while True:
            action = agent.select_action(state)
            next_state, reward, terminated, truncated, info = env.step(action)
            done = terminated or truncated   # 注:严格来说truncated时不应切断bootstrap,此处为简化处理

            agent.store(state, action, reward, next_state, done)
            loss = agent.update()
            if loss > 0:
                episode_loss.append(loss)

            agent.update_epsilon()
            total_reward += reward
            state = next_state

            if done:
                break

        reward_history.append(total_reward)
        all_rewards.append(total_reward)
        avg_reward = np.mean(reward_history)

        # 每10个Episode打印一次
        if (episode + 1) % 10 == 0:
            avg_loss = np.mean(episode_loss) if episode_loss else 0
            print(f"Episode {episode+1:4d} | "
                  f"Reward: {total_reward:3.0f} | "
                  f"Avg(100): {avg_reward:6.2f} | "
                  f"Epsilon: {agent.epsilon:.3f} | "
                  f"Loss: {avg_loss:.4f}")

        # 早停:连续100个Episode平均≥475视为成功
        if len(reward_history) == 100 and avg_reward >= 475:
            print(f"\n🎉 训练成功!连续100个Episode平均reward = {avg_reward:.2f}")
            print(f"   耗时:{episode+1} episodes")
            break

    env.close()
    return all_rewards

# ==================== 运行 ====================
if __name__ == "__main__":
    reward_history = train_cartpole()

    # 绘制训练曲线
    plt.figure(figsize=(12, 4))

    plt.subplot(1, 2, 1)
    plt.plot(reward_history, alpha=0.5, label='Episode Reward')
    # 移动平均
    window = 10
    ma = np.convolve(reward_history, np.ones(window)/window, mode='valid')
    plt.plot(range(window-1, len(reward_history)), ma, 'r-', linewidth=2,
             label=f'{window}-Episode MA')
    plt.axhline(y=475, color='green', linestyle='--', label='Success Threshold (475)')
    plt.xlabel("Episode")
    plt.ylabel("Reward")
    plt.title("CartPole-v1: Training Curve")
    plt.legend()
    plt.grid(True, alpha=0.3)

    plt.subplot(1, 2, 2)
    plt.hist(reward_history[-100:], bins=20, edgecolor='black', alpha=0.7)
    plt.axvline(x=np.mean(reward_history[-100:]), color='red', linestyle='--',
                label=f'Mean={np.mean(reward_history[-100:]):.1f}')
    plt.xlabel("Reward")
    plt.ylabel("Frequency")
    plt.title("Final 100 Episodes Reward Distribution")
    plt.legend()

    plt.tight_layout()
    plt.savefig("cartpole_training_curve.png", dpi=150)
    plt.show()
    print("训练曲线已保存至 cartpole_training_curve.png")

2.4 训练曲线分析

训练阶段划分:

阶段1(0~100 episodes):
  Reward:  20~50   ← 随机探索,竿子很快倒下
  Epsilon: 1.0→0.8  ← 几乎全随机

阶段2(100~300 episodes):
  Reward:  50~200   ← 初步学到平衡策略
  Epsilon: 0.8→0.1  ← 逐渐减少随机

阶段3(300~500 episodes):
  Reward:  200~450  ← 策略趋于稳定
  Epsilon: 0.1→0.01 ← 基本以Q网络决策

成功达标(500 episodes左右):
  Reward:  ~500      ← 竿子能保持到最大步数
  Avg(100): ≥475     ← 达到CartPole-v1成功标准
训练曲线示意图:

Reward
  500 |                              ●●●●●●●●●●●●●●●●●●●●
      |                        ·······
      |                    ··
      |                 ··
      |              ··
      |           ··
      |        ··
      |     ··
      |  ··············
      | ·
    0 +------------------------------------------------→ Episode
      0      100    200    300    400    500    600

    ● = 实际episode reward
    · = 10-episode移动平均

2.5 达到500分的技术细节

技术点 推荐配置 说明
网络结构 2层隐藏层,128神经元 足够拟合Q函数,太大易过拟合
探索策略 ε=1.0→0.01,线性衰减 前期探索,后期利用
经验回放 buffer=10000,batch=64 打破时序依赖,稳定训练
目标网络 每10步更新 减少TD误差震荡
Double DQN 必须开启 消除Q值过估计问题
梯度裁剪 max_norm=1.0 防止梯度爆炸
折扣因子 γ=0.99 重视长期回报(γ越接近1越看重未来奖励)
早停条件 100-episode均值≥475 避免无谓训练
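
表中的Double DQN在2.3节的update()里已经采用,它与普通DQN的区别只在目标Q值的计算方式。下面把两种写法并列出来方便对比(示意函数,q_net/target_net为DQN网络,其余张量含义同update()中的batch变量):

import torch

def vanilla_dqn_target(q_net, target_net, rewards, next_states, dones, gamma=0.99):
    # 普通DQN:目标网络既"选动作"又"估值",max操作容易系统性高估Q值
    with torch.no_grad():
        next_q = target_net(next_states).max(dim=1)[0]
        return rewards + gamma * next_q * (1 - dones)

def double_dqn_target(q_net, target_net, rewards, next_states, dones, gamma=0.99):
    # Double DQN:主网络负责选动作,目标网络只对该动作估值,缓解过估计
    with torch.no_grad():
        next_actions = q_net(next_states).argmax(dim=1, keepdim=True)
        next_q = target_net(next_states).gather(1, next_actions).squeeze(1)
        return rewards + gamma * next_q * (1 - dones)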

3. LunarLander 任务(连续控制)

3.1 环境描述

LunarLander让你控制一个登月器,使其安全降落在指定平台:

              * * *           ← 星星(装饰)

            ████████          ← 登月器(lander)
              ████
             ╱    ╲           ← 腿部(legs)
            ▼      ▼          ← 推力方向

      ▓▓▓▓▓▓▓▓▓▓▓▓          ← 地形(terrain)
   ▓▓            ▓▓▓▓       ← 目标平台(goal)

状态空间(8维连续Box):

维度 描述 范围
0 着陆器X坐标 [-1.5, 1.5]
1 着陆器Y坐标 [-1.5, 1.5]
2 X方向速度 [-1.0, 1.0]
3 Y方向速度 [-1.0, 1.0]
4 角度 [-3.14, 3.14]
5 角速度 [-3.14, 3.14]
6 左腿接触(0/1) [0, 1]
7 右腿接触(0/1) [0, 1]

动作空间:LunarLander同时提供离散与连续两个版本。

离散版(默认,Discrete(4),适合DQN/PPO):

Action 描述
0 不操作(惯性滑行)
1 启动左侧方向引擎
2 启动主引擎(向上推力)
3 启动右侧方向引擎

连续版(gym.make("LunarLander-v2", continuous=True),Box(2,),本文SAC代码使用):

维度 描述 范围
0 主引擎油门 [-1, 1](≤0时关闭,0~1对应50%~100%推力)
1 侧向引擎 [-1, 1](<-0.5点燃左侧引擎,>0.5点燃右侧引擎,其余关闭)

💡 为什么用SAC而不是DQN? LunarLander状态空间更复杂(8维)、奖励相对稀疏,且连续版的动作是实数向量,DQN无法直接处理;SAC(Soft Actor-Critic)是连续控制的主流算法,通过最大熵正则自动调节探索程度,训练更稳定。
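
两个版本的动作空间可以直接打印确认(示意):

import gymnasium as gym

# 离散版:动作是0~3的整数
env_d = gym.make("LunarLander-v2")
print(env_d.action_space)             # Discrete(4)

# 连续版:动作是2维实数向量,本文SAC代码使用该版本
env_c = gym.make("LunarLander-v2", continuous=True)
print(env_c.action_space)             # 输出形如 Box(-1.0, 1.0, (2,), float32)

env_d.close()
env_c.close()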

3.2 SAC 解决方案(完整可运行代码)

"""
LunarLander-v2(continuous=True)+ SAC 完整训练代码
依赖:pip install gymnasium torch numpy
训练目标:连续100个Episode平均reward ≥ 200
"""

import gymnasium as gym
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
from torch.distributions import Normal
from collections import deque, namedtuple
import random
import matplotlib.pyplot as plt

# ==================== 超参数 ====================
GAMMA = 0.99
TAU = 0.005               # 软更新系数
ACTOR_LR = 3e-4            # Actor学习率
CRITIC_LR = 3e-4           # Critic学习率
BATCH_SIZE = 256
MEMORY_SIZE = 100000
TARGET_ENTROPY = -2.0      # 目标熵,通常取 -动作维度(连续版动作为2维)
ALPHA_INIT = 0.1           # 熵正则化系数初始值
NUM_EPISODES = 2000
HIDDEN_DIM = 256

# ==================== 经验回放 ====================
Transition = namedtuple('Transition',
                        ['state', 'action', 'reward', 'next_state', 'done'])

class ReplayBuffer:
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def push(self, *args):
        self.buffer.append(Transition(*args))

    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

# ==================== Actor(策略网络) ====================
class Actor(nn.Module):
    """
    输出均值和标准差,组成对角高斯策略
    """
    def __init__(self, state_dim, action_dim, hidden_dim=256):
        super(Actor, self).__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )
        self.mean_layer = nn.Linear(hidden_dim, action_dim)
        self.log_std_layer = nn.Linear(hidden_dim, action_dim)
        self.action_dim = action_dim

    def forward(self, x):
        x = self.net(x)
        mean = self.mean_layer(x)
        log_std = self.log_std_layer(x)
        log_std = torch.clamp(log_std, min=-20, max=2)  # 防止std过小/过大
        return mean, log_std

    def sample(self, x):
        mean, log_std = self(x)
        std = log_std.exp()
        dist = Normal(mean, std)
        # 重参数化采样
        x_t = dist.rsample()
        action = torch.tanh(x_t)   # Squash到[-1, 1]

        # 计算对数概率(考虑tanh变换)
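        # tanh是非线性变换,对数概率需做变量代换修正:
        #   log π(a|s) = log N(x; μ, σ) - Σ_i log(1 - tanh(x_i)^2)
        # 下式中的 +1e-6 仅为数值稳定项,防止log(0)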
        log_prob = dist.log_prob(x_t) - torch.log(1 - action.pow(2) + 1e-6)
        log_prob = log_prob.sum(dim=-1, keepdim=True)

        return action, log_prob

# ==================== Critic(Q网络) ====================
class Critic(nn.Module):
    """Twin Critic,防止Q值过估计"""
    def __init__(self, state_dim, action_dim, hidden_dim=256):
        super(Critic, self).__init__()

        # Q1
        self.net1 = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1)
        )
        # Q2
        self.net2 = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1)
        )

    def forward(self, state, action):
        x = torch.cat([state, action], dim=-1)
        return self.net1(x), self.net2(x)

    def Q1(self, state, action):
        x = torch.cat([state, action], dim=-1)
        return self.net1(x)

# ==================== SAC Agent ====================
class SACAgent:
    def __init__(self, state_dim, action_dim):
        self.action_dim = action_dim

        # 自动熵调节:学习alpha
        self.log_alpha = torch.tensor(np.log(ALPHA_INIT), requires_grad=True)
        self.alpha_optimizer = optim.Adam([self.log_alpha], lr=ACTOR_LR)
        self.target_entropy = TARGET_ENTROPY

        # 网络
        self.actor = Actor(state_dim, action_dim, HIDDEN_DIM)
        self.critic = Critic(state_dim, action_dim, HIDDEN_DIM)
        self.critic_target = Critic(state_dim, action_dim, HIDDEN_DIM)
        self.critic_target.load_state_dict(self.critic.state_dict())

        self.actor_optimizer = optim.Adam(self.actor.parameters(), lr=ACTOR_LR)
        self.critic_optimizer = optim.Adam(self.critic.parameters(), lr=CRITIC_LR)

        self.memory = ReplayBuffer(MEMORY_SIZE)
        self.steps = 0

    @property
    def alpha(self):
        return self.log_alpha.exp()

    def select_action(self, state, deterministic=False):
        with torch.no_grad():
            state_t = torch.FloatTensor(state).unsqueeze(0)
            if deterministic:
                mean, _ = self.actor(state_t)
                action = torch.tanh(mean)
            else:
                action, _ = self.actor.sample(state_t)
            return action.squeeze(0).numpy()

    def store(self, *args):
        self.memory.push(*args)

    def update(self):
        if len(self.memory) < BATCH_SIZE:
            return {}

        transitions = self.memory.sample(BATCH_SIZE)
        batch = Transition(*zip(*transitions))

        states = torch.FloatTensor(np.array(batch.state))
        actions = torch.FloatTensor(np.array(batch.action))
        rewards = torch.FloatTensor(batch.reward).unsqueeze(1)
        next_states = torch.FloatTensor(np.array(batch.next_state))
        dones = torch.FloatTensor(batch.done).unsqueeze(1).float()

        # ==================== 更新Critic ====================
        with torch.no_grad():
            next_actions, next_log_pi = self.actor.sample(next_states)
            next_q1_target, next_q2_target = self.critic_target(next_states, next_actions)
            next_q_target = torch.min(next_q1_target, next_q2_target)
            # 熵正则化
            next_value = next_q_target - self.alpha.detach() * next_log_pi
            target_q = rewards + GAMMA * (1 - dones) * next_value

        q1, q2 = self.critic(states, actions)
        critic_loss = nn.MSELoss()(q1, target_q) + nn.MSELoss()(q2, target_q)

        self.critic_optimizer.zero_grad()
        critic_loss.backward()
        torch.nn.utils.clip_grad_norm_(self.critic.parameters(), max_norm=10.0)
        self.critic_optimizer.step()

        # ==================== 更新Actor ====================
        new_actions, log_pi = self.actor.sample(states)
        q1_new, q2_new = self.critic(states, new_actions)
        q_new = torch.min(q1_new, q2_new)     # 取Twin Critic中较小值,抑制过估计

        actor_loss = (self.alpha.detach() * log_pi - q_new).mean()

        self.actor_optimizer.zero_grad()
        actor_loss.backward()
        self.actor_optimizer.step()

        # ==================== 更新alpha(自动熵调节)====================
        alpha_loss = -(self.log_alpha * (log_pi + self.target_entropy).detach()).mean()
        self.alpha_optimizer.zero_grad()
        alpha_loss.backward()
        self.alpha_optimizer.step()

        # ==================== 软更新目标网络 ====================
        self.soft_update(self.critic, self.critic_target)

        return {
            'critic_loss': critic_loss.item(),
            'actor_loss': actor_loss.item(),
            'alpha': self.alpha.item(),
            'q_mean': q_new.mean().item()
        }

    def soft_update(self, source, target):
        for target_param, param in zip(target.parameters(), source.parameters()):
            target_param.data.copy_(
                target_param.data * (1.0 - TAU) + param.data * TAU
            )

# ==================== 训练循环 ====================
def train_lunarlander():
    env = gym.make("LunarLander-v2")
    state_dim = env.observation_space.shape[0]
    action_dim = env.action_space.n

    agent = SACAgent(state_dim, action_dim)

    reward_history = deque(maxlen=100)   # 最近100个Episode,用于判定是否达标
    all_rewards = []                     # 全程记录,用于绘制训练曲线
    loss_history = {'critic_loss': [], 'actor_loss': [], 'alpha': []}

    print("=" * 55)
    print("LunarLander-v2 + SAC 训练开始")
    print("=" * 55)

    for episode in range(NUM_EPISODES):
        state, info = env.reset()
        episode_reward = 0
        episode_losses = {'critic_loss': [], 'actor_loss': [], 'alpha': []}

        while True:
            action = agent.select_action(state, deterministic=False)
            next_state, reward, terminated, truncated, info = env.step(action)
            done = terminated or truncated

            agent.store(state, action, reward, next_state, done)
            losses = agent.update()

            for k, v in losses.items():
                if k in episode_losses:          # 只记录需要绘图的几项
                    episode_losses[k].append(v)

            episode_reward += reward
            state = next_state

            if done:
                break

        reward_history.append(episode_reward)
        all_rewards.append(episode_reward)
        avg_reward = np.mean(reward_history)

        if (episode + 1) % 10 == 0:
            print(f"Episode {episode+1:5d} | "
                  f"Reward: {episode_reward:7.2f} | "
                  f"Avg(100): {avg_reward:7.2f} | "
                  f"Alpha: {losses.get('alpha', 0):.3f}")

        # 记录损失
        for k, v in episode_losses.items():
            if v:
                loss_history[k].append(np.mean(v))

        # 成功条件:连续100个Episode平均≥200
        if len(reward_history) == 100 and avg_reward >= 200:
            print(f"\n🎉 训练成功!连续100个Episode平均reward = {avg_reward:.2f}")
            break

    env.close()
    return all_rewards, loss_history

# ==================== 运行 ====================
if __name__ == "__main__":
    reward_history, loss_history = train_lunarlander()

    plt.figure(figsize=(14, 5))

    plt.subplot(1, 3, 1)
    plt.plot(reward_history, alpha=0.5)
    window = 20
    ma = np.convolve(reward_history, np.ones(window)/window, mode='valid')
    plt.plot(range(window-1, len(reward_history)), ma, 'r-', linewidth=2)
    plt.axhline(y=200, color='green', linestyle='--', label='Success (200)')
    plt.axhline(y=0, color='gray', linestyle='-', alpha=0.5)
    plt.xlabel("Episode")
    plt.ylabel("Reward")
    plt.title("LunarLander: Training Curve")
    plt.legend()
    plt.grid(True, alpha=0.3)

    plt.subplot(1, 3, 2)
    plt.plot(loss_history['critic_loss'], alpha=0.7, label='Critic Loss')
    plt.xlabel("Episode")
    plt.ylabel("Loss")
    plt.title("Critic Loss")
    plt.legend()
    plt.grid(True, alpha=0.3)

    plt.subplot(1, 3, 3)
    plt.plot(loss_history['actor_loss'], alpha=0.7, label='Actor Loss', color='orange')
    plt.xlabel("Episode")
    plt.ylabel("Loss")
    plt.title("Actor Loss")
    plt.legend()
    plt.grid(True, alpha=0.3)

    plt.tight_layout()
    plt.savefig("lunarlander_training_curve.png", dpi=150)
    plt.show()
    print("训练曲线已保存至 lunarlander_training_curve.png")

3.3 训练技巧详解

🎯 技巧1:经验回放(Experience Replay)
经验回放池工作原理:

    ┌─────────────────────────────────────┐
    │         Replay Buffer (cap=10000)   │
    │                                      │
    │  [s₀,a₀,r₀,s₁,d₀]                   │
    │  [s₁,a₁,r₁,s₂,d₁]                   │
    │  [s₂,a₂,r₂,s₃,d₂]    ◄── 随机采样   │
    │  [s₃,a₃,r₃,s₄,d₃]       batch=64    │
    │       ...                           │
    │  [s₉₉₉₉,a₉₉₉₉,r₉₉₉₉,s₁₀₀₀₀,d₉₉₉₉]  │
    └─────────────────────────────────────┘
           ▲ 新经验append(超过容量自动淘汰旧经验)
           │
    ┌──────────────┐
    │   智能体      │
    │   与环境交互   │
    └──────────────┘
参数 CartPole(DQN) LunarLander(SAC) 影响
Buffer Size 10,000 100,000 越大记忆越丰富,但内存开销大
Batch Size 64 256 越大越稳定,但计算开销大
最小经验量 64 256 太小导致梯度估计不准
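
表中的"内存开销"可以粗略估算一个数量级(以本文LunarLander连续版配置为例,按float32原始数值计算,结果为近似值):

import numpy as np

# 每条transition ≈ state(8) + action(2) + reward(1) + next_state(8) + done(1) = 20个float32
floats_per_transition = 8 + 2 + 1 + 8 + 1
buffer_size = 100_000
bytes_total = buffer_size * floats_per_transition * 4      # float32每个占4字节
print(f"约 {bytes_total / 1024**2:.1f} MB")                  # ≈ 7.6 MB,原始数值本身并不大

# 实际占用通常更高:本文实现把transition存成Python对象(namedtuple+标量),
# 对象开销远大于原始数值,这也是大buffer场景下常改用预分配numpy数组的原因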
🎯 技巧2:学习率调度(Learning Rate Scheduling)
# 推荐:使用学习率预热 + 余弦衰减
from torch.optim.lr_scheduler import CosineAnnealingLR

# 方案1:余弦退火
scheduler = CosineAnnealingLR(optimizer, T_max=1000, eta_min=1e-5)

# 方案2:分段常数(推荐用于SAC)
def lr_lambda(episode):
    if episode < 500:
        return 1.0          # 预热
    elif episode < 1000:
        return 0.5           # 下降
    elif episode < 1500:
        return 0.1           # 再下降
    else:
        return 0.01          # 保持低位

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

# 训练中使用:
for episode in range(NUM_EPISODES):
    # ... 训练代码 ...
    scheduler.step()
🎯 技巧3:目标网络更新频率
更新频率 vs 训练稳定性:

过慢更新(每1000步更新一次):
  ⚠️ TD误差大,收敛慢
  ✓ 但更稳定

适中更新(DQN:每10步 / SAC软更新τ=0.005):
  ✓ 平衡收敛速度与稳定性
  ✓ 推荐配置

过快更新(每步都更新):
  ⚠️ 训练震荡,难以收敛
  ⚠️ 常见于早期的DQN实现

SAC的软更新(Soft Update)公式:

θ_target = τ * θ_online + (1 - τ) * θ_target

其中 τ = 0.005(很小,保证缓慢逼近)
这比DQN的硬更新(每N步完全复制)更平滑
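
两种更新方式写成代码大致如下(示意,hard_update/soft_update为本文自拟的辅助函数,net/target_net为结构相同的两个nn.Module):

import torch

def hard_update(target_net, net, step, interval=10):
    # DQN式硬更新:每隔interval步,把在线网络参数整体复制给目标网络
    if step % interval == 0:
        target_net.load_state_dict(net.state_dict())

def soft_update(target_net, net, tau=0.005):
    # SAC式软更新:每一步都让目标网络向在线网络缓慢靠拢
    with torch.no_grad():
        for tp, p in zip(target_net.parameters(), net.parameters()):
            tp.data.mul_(1.0 - tau).add_(tau * p.data)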

4. 常见问题诊断

🔴 问题1:训练不收敛

诊断流程:

Step 1: 检查Reward设计
  ├─ reward是否全为正/全为负?(会导致Q值偏移)
  ├─ reward scale是否合适?(太大/太小都不好)
  └─ 是否存在奖励稀疏问题?(LunarLander需设计shaped reward)

Step 2: 检查探索率
  ├─ ε 是否衰减太快?
  │    诊断:print(epsilon) 看是否很快降到0
  └─ 如果ε=0但还没收敛 → 探索不足,增加EPSILON_DECAY

Step 3: 检查网络规模
  ├─ 状态空间8维用256隐藏层够用
  ├─ 4维状态用128隐藏层够用
  └─ 太小:欠拟合;太大:训练慢

Step 4: 检查学习率
  ├─ 初始LR=1e-3 对DQN合理
  ├─ 初始LR=3e-4 对SAC合理
  └─ 过大:震荡;过小:收敛太慢

快速修复代码模板:

# 如果训练不收敛,按顺序尝试以下修改:
fixes = {
    "reward_shaping": """
    # 在step()中增加奖励塑形
    if abs(state[2]) < 0.05:    # 杆接近垂直
        reward += 1.0           # 额外奖励
    if abs(state[0]) < 0.5:     # 小车接近中心
        reward += 0.5
    """,

    "epsilon_decay": "EPSILON_DECAY = 1000  # 原来500,改为1000延缓衰减",
    "lr_reduce": "LR = 5e-4      # 原来1e-3,降低学习率",
    "batch_increase": "BATCH_SIZE = 128  # 原来64,增加批量大小"
}

🔴 问题2:过拟合

过拟合表现:
  - 训练集reward持续上升
  - 验证集reward不上升甚至下降
  - 策略只对特定状态有效,泛化能力差

诊断方法:
┌──────────────────────────────────────────────────────┐
│         Reward                                       │
│   500 |        ╭─────────── train                     │
│       |       ╱                                       │
│   400 |      ╱  ╭──────── val (真实环境中测试)         │
│       |     ╱ ╱                                       │
│   300 |    ╱╱╱                                       │
│       |   ╱                                       │
│     0 +──────────────────────────────────────────▶   │
│       0    100    200    300    400    500 Episode   │
│                                                      │
│   ▲ 出现gap = 过拟合信号                             │
└──────────────────────────────────────────────────────┘

解决方案:

# 方法1:加正则化
loss = nn.MSELoss()(q_values, target_q) + 0.01 * sum(
    p.pow(2).sum() for p in q_net.parameters()
)

# 方法2:减小模型规模
# 原来:hidden_dim=256 → 改为 hidden_dim=128

# 方法3:增加经验回放多样性
min_experiences = 5000  # CartPole
if len(memory) < min_experiences:
    continue  # 等待积累足够多样的经验再开始训练

# 方法4:增加探索(增大熵项权重)
TARGET_ENTROPY = -1.0  # 在默认的 -动作维度 基础上调大目标熵,鼓励更多探索

🔴 问题3:内存不足

内存溢出常见原因:
  1. 经验回放池设置过大(如MEMORY_SIZE=1e6在内存小的机器上)
  2. Batch Size设置过大
  3. 网络规模太大

诊断与解决:
┌──────────────────────────────────────────────────┐
│ 问题现象           │ 解决方案                    │
├──────────────────────────────────────────────────┤
│ OOM during replay  │ 减小MEMORY_SIZE            │
│                    │ CartPole: 10000 → 5000     │
│                    │ LunarLander: 100000→50000  │
├──────────────────────────────────────────────────┤
│ OOM during batch   │ 减小BATCH_SIZE             │
│                    │ 256 → 128 → 64             │
├──────────────────────────────────────────────────┤
│ 训练突然崩溃        │ 加try-except捕获OOM        │
│                    │ 逐步减小各参数              │
└──────────────────────────────────────────────────┘
# 内存/显存优化:采样后用更低精度的数据类型构建张量(示意)
transitions = self.memory.sample(BATCH_SIZE)
batch = Transition(*zip(*transitions))

# 原来:float32
states = torch.tensor(np.array(batch.state), dtype=torch.float32)

# 优化1:half精度(float16),张量内存约减半;注意低精度可能影响训练数值稳定性
states = torch.tensor(np.array(batch.state), dtype=torch.float16)

# 优化2:bfloat16,动态范围与float32相同,数值上更稳(需硬件/版本支持)
states = torch.tensor(np.array(batch.state), dtype=torch.bfloat16)
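
更根本的省内存做法是不用Python对象逐条存transition,而是预分配定长numpy数组循环写入。下面是一个最小示意(ArrayReplayBuffer为示例类名,接口与前文的ReplayBuffer不完全相同):

import numpy as np

class ArrayReplayBuffer:
    """预分配numpy数组的环形回放池:内存占用固定、可控"""
    def __init__(self, capacity, state_dim, action_dim):
        self.capacity = capacity
        self.states      = np.zeros((capacity, state_dim), dtype=np.float32)
        self.actions     = np.zeros((capacity, action_dim), dtype=np.float32)
        self.rewards     = np.zeros((capacity, 1), dtype=np.float32)
        self.next_states = np.zeros((capacity, state_dim), dtype=np.float32)
        self.dones       = np.zeros((capacity, 1), dtype=np.float32)
        self.ptr, self.size = 0, 0

    def push(self, s, a, r, s2, d):
        i = self.ptr
        self.states[i] = s
        self.actions[i] = a
        self.rewards[i] = r
        self.next_states[i] = s2
        self.dones[i] = float(d)
        self.ptr = (self.ptr + 1) % self.capacity       # 写满后覆盖最旧的数据
        self.size = min(self.size + 1, self.capacity)

    def sample(self, batch_size):
        idx = np.random.randint(0, self.size, size=batch_size)
        return (self.states[idx], self.actions[idx], self.rewards[idx],
                self.next_states[idx], self.dones[idx])

    def __len__(self):
        return self.size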

5. 实战技巧总结表

场景 问题 症状 解决方案 优先级
CartPole 训练不收敛 reward卡在30~80 增加探索衰减步数EPSILON_DECAY ⭐⭐⭐
CartPole Q值过估计 训练震荡 启用Double DQN ⭐⭐⭐
CartPole 梯度爆炸 loss=NaN 梯度裁剪max_norm=1.0 ⭐⭐⭐
LunarLander 训练不稳定 critic loss震荡 减小学习率,增大buffer ⭐⭐⭐
LunarLander 熵崩塌 alpha→0,探索停止 增大TARGET_ENTROPY ⭐⭐
通用 收敛慢 1000episode后仍无起色 增加网络容量,检查reward scale ⭐⭐
通用 过拟合 train好/val差 加正则,减小模型,加diversity ⭐⭐
通用 内存不足 OOM崩溃 减小batch和buffer ⭐⭐

推荐硬件配置

环境 最低配置 推荐配置
CartPole 4GB RAM, CPU 8GB RAM, CPU
LunarLander 8GB RAM, CPU 16GB RAM, GPU (RTX 3060)
HalfCheetah 16GB RAM, GPU 32GB RAM, GPU (RTX 3080+)

6. 🎯 记忆口诀

RL实战六句诀

一行 gym.make(),环境建好别忘了;
二行 reset(),初始观测拿到了;
三行 step(action),奖励状态一起收;
四行 buffer 存,经验回放打破关联;
五行 target_net,硬更新或软更新;
六行 clip_grad,梯度爆炸远离我。

CartPole速记

平衡问题四维观,杆角车速要记全;
DQN三件套:回放、目标、Double-DQN;
epsilon衰减莫贪快,梯度裁剪防爆炸;
达到475即成功,五百步来竿不倒。

SAC三剑客

Actor 学策略,输出均值和标准差;
Critic 评Q值,Twin网络防高估;
Alpha 自动调,熵正则探索佳;
软更新 τ=0.005,稳步逼近目标值。

调试三板斧

第一斧:看reward——正负、大小、稀疏否;
第二斧:看epsilon——衰减太快探索不足;
第三斧:看loss曲线——NaN震荡过拟合。

问题自检流程

不收敛?→ 先调reward,再加探索,最后改网络
过拟合?→ 加正则,减规模,多样性
内存爆?→ 减batch,减buffer,降精度
