Topic 073: Applications of Neural Networks in Structural Dynamics

Contents

  1. Overview
  2. Neural Network Fundamentals
  3. Case 1: Neural Network Basics and Structural Response Prediction
  4. Case 2: Recurrent Neural Networks (RNN) and LSTM for Time-Series Prediction
  5. Case 3: Physics-Informed Neural Networks (PINN) for Solving Dynamic Equations
  6. Case 4: Neural Networks for Structural Damage Identification
  7. Summary


Overview

Combining Neural Networks with Structural Dynamics

Neural networks, as a powerful machine learning tool, have shown great application potential in structural dynamics in recent years. Compared with traditional numerical methods, neural networks offer the following advantages:

  1. High computational efficiency: once trained, forward propagation is extremely fast
  2. Mesh-free: complex geometries can be handled without mesh generation
  3. End-to-end learning: input-output mappings are learned directly from data
  4. Uncertainty quantification: achievable via ensemble methods or Bayesian neural networks

Main Application Areas

  • Structural response prediction: replacing traditional numerical integration
  • System identification: learning structural parameters from data
  • Damage detection: locating structural damage and quantifying its severity
  • Model order reduction: building fast surrogate models
  • Real-time monitoring: online structural health monitoring

Neural Network Fundamentals

The Artificial Neuron

The artificial neuron is the basic unit of a neural network. Its mathematical model is:

$$y = f\left(\sum_{i=1}^{n} w_i x_i + b\right)$$

where:

  • $x_i$: input signals
  • $w_i$: weights
  • $b$: bias
  • $f$: activation function

Activation Functions

Commonly used activation functions include:

Sigmoid
$$\sigma(x) = \frac{1}{1 + e^{-x}}$$

ReLU
$$\text{ReLU}(x) = \max(0, x)$$

Tanh
$$\tanh(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}}$$
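As a quick reference, the three activation functions above can be written directly in NumPy (a standalone sketch, separate from the case-study scripts later in this post):

```python
import numpy as np

def sigmoid(x):
    # sigma(x) = 1 / (1 + exp(-x)); squashes inputs into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # ReLU(x) = max(0, x); cheap to compute and non-saturating for x > 0
    return np.maximum(0.0, x)

def tanh(x):
    # tanh(x) = (e^x - e^-x) / (e^x + e^-x); zero-centered output in (-1, 1)
    return np.tanh(x)

x = np.array([-2.0, 0.0, 2.0])
print(sigmoid(x))
print(relu(x))   # [0. 0. 2.]
print(tanh(x))
```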

Feedforward Neural Networks

The feedforward neural network is the most basic network architecture:

$$\mathbf{h}^{(l)} = f^{(l)}\left(\mathbf{W}^{(l)} \mathbf{h}^{(l-1)} + \mathbf{b}^{(l)}\right)$$

where $l$ denotes the layer index.
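The layer recursion above can be illustrated with a bare NumPy forward pass; the 2-3-1 layer sizes and random weights here are hypothetical, chosen only to show the shape bookkeeping:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-3-1 network: weight matrices W^(l) and biases b^(l)
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)

def forward(x):
    # h^(1) = tanh(W^(1) x + b^(1)); the output layer is kept linear
    h1 = np.tanh(W1 @ x + b1)
    return W2 @ h1 + b2

y = forward(np.array([0.5, -0.2]))
print(y.shape)  # (1,)
```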

Loss Functions and Optimization

Mean Squared Error (MSE)
$$\mathcal{L} = \frac{1}{N}\sum_{i=1}^{N} (y_i - \hat{y}_i)^2$$

Backpropagation
Gradients are computed via the chain rule:
$$\frac{\partial \mathcal{L}}{\partial w} = \frac{\partial \mathcal{L}}{\partial y} \cdot \frac{\partial y}{\partial w}$$
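A standard way to verify a chain-rule gradient like the one above is to compare it against a central finite difference. A minimal sketch for a single tanh neuron with squared-error loss (the numeric values are purely illustrative):

```python
import numpy as np

x, w, b, y_true = 0.7, 1.5, 0.1, 2.0

def loss(w):
    y = np.tanh(w * x + b)    # neuron output
    return (y - y_true) ** 2  # squared error

# Analytic gradient via the chain rule: dL/dw = dL/dy * dy/dw,
# with dy/dw = (1 - tanh^2(wx + b)) * x
y = np.tanh(w * x + b)
grad_analytic = 2 * (y - y_true) * (1 - y**2) * x

# Central finite-difference approximation
eps = 1e-6
grad_fd = (loss(w + eps) - loss(w - eps)) / (2 * eps)

print(grad_analytic, grad_fd)  # the two should agree to ~6 decimal places
```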


Case 1: Neural Network Basics and Structural Response Prediction

Theoretical Background

In this case study, a neural network learns the input-output mapping of a single-degree-of-freedom (SDOF) system. Once trained, the network can predict the structural response quickly, with no numerical integration required.

Mathematical Model

Consider an SDOF system:

$$m\ddot{u} + c\dot{u} + ku = f(t)$$

The neural network learns the mapping:

$$(m, c, k, f(t)) \rightarrow u(t)$$

Implementation Steps

  1. Generate training data (by numerical integration)
  2. Build the neural network model
  3. Train the network
  4. Validate prediction accuracy
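Before the full script, note that the closed-form modal quantities of the SDOF system above follow directly from m, c, and k. A quick sketch, using the parameter values that also appear in the script below:

```python
import numpy as np

m, c, k = 1000.0, 2000.0, 100000.0  # SDOF parameters used in the examples

omega_n = np.sqrt(k / m)             # natural frequency (rad/s)
xi = c / (2.0 * np.sqrt(m * k))      # damping ratio
f_n = omega_n / (2.0 * np.pi)        # natural frequency (Hz)

print(omega_n, xi, f_n)  # 10.0 rad/s, 0.1, ~1.59 Hz
```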

"""
Case 1: Neural Network Basics and Structural Response Prediction
Use a neural network to learn the input-output mapping of an SDOF system
"""

import numpy as np
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# Configure fonts (SimHei for Chinese labels, if available)
plt.rcParams['font.sans-serif'] = ['SimHei', 'DejaVu Sans']
plt.rcParams['axes.unicode_minus'] = False


def sdof_response(m, c, k, f, dt):
    """Compute the SDOF system response (Newmark-beta method)."""
    n_steps = len(f)
    u = np.zeros(n_steps)
    v = np.zeros(n_steps)
    a = np.zeros(n_steps)

    a[0] = f[0] / m
    gamma, beta = 0.5, 0.25
    k_eff = k + gamma * c / (beta * dt) + m / (beta * dt**2)

    for i in range(n_steps - 1):
        f_eff = (f[i+1] +
                 m * (u[i] / (beta * dt**2) + v[i] / (beta * dt) + (1/(2*beta)-1) * a[i]) +
                 c * (gamma * u[i] / (beta * dt) + (gamma/beta - 1) * v[i] +
                      dt * (gamma/(2*beta) - 1) * a[i]))
        u[i+1] = f_eff / k_eff
        a[i+1] = (u[i+1] - u[i]) / (beta * dt**2) - v[i] / (beta * dt) - (1/(2*beta)-1) * a[i]
        v[i+1] = v[i] + (1-gamma) * dt * a[i] + gamma * dt * a[i+1]

    return u, v, a


# ==================== Example 1: Neural network basics ====================
def example1_neural_network_basics():
    """Example 1: Neural network basics - learning a nonlinear function."""
    print("=" * 60)
    print("Example 1: Neural network basics - learning a nonlinear function")
    print("=" * 60)

    # Generate training data: learn y = sin(x) + 0.5*cos(2*x)
    np.random.seed(42)
    n_samples = 1000
    x = np.linspace(0, 4*np.pi, n_samples).reshape(-1, 1)
    y_true = np.sin(x) + 0.5 * np.cos(2*x)
    y_noisy = y_true + 0.1 * np.random.randn(n_samples, 1)

    # Split into training and test sets
    x_train, x_test, y_train, y_test = train_test_split(x, y_noisy, test_size=0.2, random_state=42)

    # Standardize
    scaler_x = StandardScaler()
    scaler_y = StandardScaler()
    x_train_scaled = scaler_x.fit_transform(x_train)
    y_train_scaled = scaler_y.fit_transform(y_train)
    x_test_scaled = scaler_x.transform(x_test)

    # Build the neural network
    print("\nBuilding neural network...")
    print("  Hidden layers: (50, 30, 20)")
    print("  Activation: tanh")
    print("  Optimizer: Adam")

    mlp = MLPRegressor(
        hidden_layer_sizes=(50, 30, 20),
        activation='tanh',
        solver='adam',
        max_iter=1000,
        random_state=42,
        early_stopping=True,
        validation_fraction=0.1,
        verbose=False
    )

    # Train
    print("\nTraining neural network...")
    mlp.fit(x_train_scaled, y_train_scaled.ravel())

    print(f"  Training iterations: {mlp.n_iter_}")
    print(f"  Training loss: {mlp.loss_:.6f}")

    # Predict
    y_pred_scaled = mlp.predict(x_test_scaled)
    y_pred = scaler_y.inverse_transform(y_pred_scaled.reshape(-1, 1))

    # Compute errors
    mse = np.mean((y_test - y_pred)**2)
    rmse = np.sqrt(mse)
    r2 = 1 - np.sum((y_test - y_pred)**2) / np.sum((y_test - np.mean(y_test))**2)

    print(f"\nTest-set performance:")
    print(f"  MSE: {mse:.6f}")
    print(f"  RMSE: {rmse:.6f}")
    print(f"  R²: {r2:.4f}")

    # Plot results
    fig, axes = plt.subplots(2, 2, figsize=(14, 10))

    # Data and fit
    ax = axes[0, 0]
    x_plot = np.linspace(0, 4*np.pi, 500).reshape(-1, 1)
    x_plot_scaled = scaler_x.transform(x_plot)
    y_plot_pred_scaled = mlp.predict(x_plot_scaled)
    y_plot_pred = scaler_y.inverse_transform(y_plot_pred_scaled.reshape(-1, 1))

    ax.scatter(x_train, y_train, alpha=0.3, s=10, c='blue', label='Training Data')
    ax.plot(x_plot, np.sin(x_plot) + 0.5*np.cos(2*x_plot), 'k-', linewidth=2, label='True Function')
    ax.plot(x_plot, y_plot_pred, 'r--', linewidth=2, label='NN Prediction')
    ax.set_xlabel('x', fontsize=11)
    ax.set_ylabel('y', fontsize=11)
    ax.set_title('Neural Network Function Approximation', fontsize=12)
    ax.legend()
    ax.grid(True, alpha=0.3)

    # Loss curve
    ax = axes[0, 1]
    ax.plot(mlp.loss_curve_, linewidth=2)
    ax.set_xlabel('Iteration', fontsize=11)
    ax.set_ylabel('Loss', fontsize=11)
    ax.set_title('Training Loss Curve', fontsize=12)
    ax.grid(True, alpha=0.3)
    ax.set_yscale('log')

    # Predicted vs true values
    ax = axes[1, 0]
    ax.scatter(y_test, y_pred, alpha=0.5, s=20)
    ax.plot([y_test.min(), y_test.max()], [y_test.min(), y_test.max()], 'r--', linewidth=2)
    ax.set_xlabel('True Values', fontsize=11)
    ax.set_ylabel('Predicted Values', fontsize=11)
    ax.set_title(f'Prediction vs True (R²={r2:.4f})', fontsize=12)
    ax.grid(True, alpha=0.3)

    # Residual distribution
    ax = axes[1, 1]
    residuals = y_test - y_pred
    ax.hist(residuals, bins=30, alpha=0.7, edgecolor='black')
    ax.axvline(0, color='r', linestyle='--', linewidth=2)
    ax.set_xlabel('Residual', fontsize=11)
    ax.set_ylabel('Frequency', fontsize=11)
    ax.set_title('Residual Distribution', fontsize=12)
    ax.grid(True, alpha=0.3)

    plt.tight_layout()
    plt.savefig('case1_neural_network_basics.png', dpi=150, bbox_inches='tight')
    plt.close()
    print("\n  Saved: case1_neural_network_basics.png")

# ==================== Example 2: Mapping structural parameters to response ====================
def example2_parameter_to_response():
    """Example 2: Learn the mapping from structural parameters to response features."""
    print("\n" + "=" * 60)
    print("Example 2: Mapping structural parameters to response")
    print("=" * 60)

    # Generate training data
    np.random.seed(42)
    n_samples = 2000

    # Randomly sample structural parameters
    m_range = [500, 1500]      # kg
    c_range = [1000, 3000]     # N·s/m
    k_range = [50000, 150000]  # N/m

    m_samples = np.random.uniform(*m_range, n_samples)
    c_samples = np.random.uniform(*c_range, n_samples)
    k_samples = np.random.uniform(*k_range, n_samples)

    # Compute response features (impulse response)
    dt = 0.01
    t_max = 3.0
    t = np.arange(0, t_max, dt)

    # Impulse load
    f = np.zeros(len(t))
    f[int(0.1/dt)] = 10000.0

    # Extracted features: max displacement, peak time, settling time, damping ratio
    features = []
    responses = []

    print("\nGenerating training data...")
    for i in range(n_samples):
        u, v, a = sdof_response(m_samples[i], c_samples[i], k_samples[i], f, dt)

        # Response features
        max_disp = np.max(np.abs(u))
        peak_time = t[np.argmax(np.abs(u))]
        settling_idx = np.where(np.abs(u) < 0.02 * max_disp)[0]
        settling_time = t[settling_idx[0]] if len(settling_idx) > 0 else t_max

        # Damping ratio
        xi = c_samples[i] / (2 * np.sqrt(m_samples[i] * k_samples[i]))

        features.append([m_samples[i], c_samples[i], k_samples[i]])
        responses.append([max_disp, peak_time, settling_time, xi])

    features = np.array(features)
    responses = np.array(responses)

    print(f"  Number of samples: {n_samples}")
    print(f"  Input dimension: {features.shape[1]}")
    print(f"  Output dimension: {responses.shape[1]}")

    # Split the dataset
    x_train, x_test, y_train, y_test = train_test_split(features, responses, test_size=0.2, random_state=42)

    # Standardize
    scaler_x = StandardScaler()
    scaler_y = StandardScaler()
    x_train_scaled = scaler_x.fit_transform(x_train)
    y_train_scaled = scaler_y.fit_transform(y_train)
    x_test_scaled = scaler_x.transform(x_test)

    # Build the neural network
    print("\nBuilding neural network...")
    mlp = MLPRegressor(
        hidden_layer_sizes=(100, 50, 25),
        activation='relu',
        solver='adam',
        max_iter=2000,
        random_state=42,
        early_stopping=True,
        validation_fraction=0.1
    )

    # Train
    print("Training neural network...")
    mlp.fit(x_train_scaled, y_train_scaled)

    print(f"  Training iterations: {mlp.n_iter_}")
    print(f"  Training loss: {mlp.loss_:.6f}")

    # Predict
    y_pred_scaled = mlp.predict(x_test_scaled)
    y_pred = scaler_y.inverse_transform(y_pred_scaled)

    # R² for each output
    r2_scores = []
    for i in range(responses.shape[1]):
        r2 = 1 - np.sum((y_test[:, i] - y_pred[:, i])**2) / np.sum((y_test[:, i] - np.mean(y_test[:, i]))**2)
        r2_scores.append(r2)

    output_names = ['Max Displacement', 'Peak Time', 'Settling Time', 'Damping Ratio']
    print(f"\nTest-set performance:")
    for name, r2 in zip(output_names, r2_scores):
        print(f"  {name}: R² = {r2:.4f}")

    # Plot results
    fig, axes = plt.subplots(2, 2, figsize=(14, 10))
    axes = axes.flatten()

    for i, (name, r2) in enumerate(zip(output_names, r2_scores)):
        ax = axes[i]
        ax.scatter(y_test[:, i], y_pred[:, i], alpha=0.5, s=20)
        ax.plot([y_test[:, i].min(), y_test[:, i].max()],
                [y_test[:, i].min(), y_test[:, i].max()], 'r--', linewidth=2)
        ax.set_xlabel(f'True {name}', fontsize=11)
        ax.set_ylabel(f'Predicted {name}', fontsize=11)
        ax.set_title(f'{name} (R²={r2:.4f})', fontsize=12)
        ax.grid(True, alpha=0.3)

    plt.tight_layout()
    plt.savefig('case1_parameter_to_response.png', dpi=150, bbox_inches='tight')
    plt.close()
    print("\n  Saved: case1_parameter_to_response.png")

    return mlp, scaler_x, scaler_y

# ==================== Example 3: Time-series response prediction ====================
def example3_time_series_prediction():
    """Example 3: Time-series response prediction."""
    print("\n" + "=" * 60)
    print("Example 3: Time-series response prediction")
    print("=" * 60)

    # System parameters
    m, c, k = 1000.0, 2000.0, 100000.0

    # Generate training data: sinusoidal excitations at various frequencies
    np.random.seed(42)
    dt = 0.01
    t_max = 10.0
    t = np.arange(0, t_max, dt)

    n_samples = 100
    all_responses = []
    all_forces = []

    print("\nGenerating training data...")
    for i in range(n_samples):
        # Sinusoidal excitation with random frequency, phase, and amplitude
        freq = np.random.uniform(0.5, 3.0)
        phase = np.random.uniform(0, 2*np.pi)
        amplitude = np.random.uniform(500, 2000)

        f = amplitude * np.sin(2 * np.pi * freq * t + phase)
        u, _, _ = sdof_response(m, c, k, f, dt)

        all_responses.append(u)
        all_forces.append(f)

    # Prepare sequence data: use the previous N time steps to predict the next one
    sequence_length = 50
    X = []
    y = []

    for response in all_responses:
        for i in range(len(response) - sequence_length):
            X.append(response[i:i+sequence_length])
            y.append(response[i+sequence_length])

    X = np.array(X)
    y = np.array(y)

    print(f"  Sequence length: {sequence_length}")
    print(f"  Number of samples: {len(X)}")

    # Split the dataset
    x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    # Build the neural network
    print("\nBuilding neural network...")
    mlp = MLPRegressor(
        hidden_layer_sizes=(100, 50),
        activation='tanh',
        solver='adam',
        max_iter=1000,
        random_state=42,
        early_stopping=True
    )

    # Train
    print("Training neural network...")
    mlp.fit(x_train, y_train)

    print(f"  Training iterations: {mlp.n_iter_}")

    # Predict on the test set
    y_pred = mlp.predict(x_test)

    # Compute performance metrics
    mse = np.mean((y_test - y_pred)**2)
    r2 = 1 - np.sum((y_test - y_pred)**2) / np.sum((y_test - np.mean(y_test))**2)

    print(f"\nTest-set performance:")
    print(f"  MSE: {mse:.6f}")
    print(f"  R²: {r2:.4f}")

    # Multi-step prediction
    print("\nRunning multi-step prediction...")
    test_response = all_responses[0]
    initial_sequence = test_response[:sequence_length]

    n_future_steps = 200
    predicted_sequence = list(initial_sequence)

    for _ in range(n_future_steps):
        input_seq = np.array(predicted_sequence[-sequence_length:]).reshape(1, -1)
        next_pred = mlp.predict(input_seq)[0]
        predicted_sequence.append(next_pred)

    predicted_sequence = np.array(predicted_sequence)
    true_future = test_response[:sequence_length + n_future_steps]

    # Plot results
    fig, axes = plt.subplots(2, 2, figsize=(14, 10))

    # Single-step prediction performance
    ax = axes[0, 0]
    ax.scatter(y_test, y_pred, alpha=0.3, s=10)
    ax.plot([y_test.min(), y_test.max()], [y_test.min(), y_test.max()], 'r--', linewidth=2)
    ax.set_xlabel('True Response', fontsize=11)
    ax.set_ylabel('Predicted Response', fontsize=11)
    ax.set_title(f'Single-step Prediction (R²={r2:.4f})', fontsize=12)
    ax.grid(True, alpha=0.3)

    # Multi-step prediction
    ax = axes[0, 1]
    t_plot = np.arange(len(predicted_sequence)) * dt
    ax.plot(t_plot, true_future, 'b-', linewidth=2, label='True Response')
    ax.plot(t_plot, predicted_sequence, 'r--', linewidth=2, label='Predicted')
    ax.axvline(sequence_length * dt, color='g', linestyle=':', linewidth=2, label='Prediction Start')
    ax.set_xlabel('Time (s)', fontsize=11)
    ax.set_ylabel('Displacement (m)', fontsize=11)
    ax.set_title('Multi-step Prediction', fontsize=12)
    ax.legend()
    ax.grid(True, alpha=0.3)

    # Error growth over the prediction horizon
    ax = axes[1, 0]
    prediction_error = np.abs(true_future - predicted_sequence)
    ax.plot(t_plot[sequence_length:], prediction_error[sequence_length:], linewidth=2)
    ax.set_xlabel('Time (s)', fontsize=11)
    ax.set_ylabel('Absolute Error (m)', fontsize=11)
    ax.set_title('Prediction Error Growth', fontsize=12)
    ax.grid(True, alpha=0.3)

    # Loss curve
    ax = axes[1, 1]
    ax.plot(mlp.loss_curve_, linewidth=2)
    ax.set_xlabel('Iteration', fontsize=11)
    ax.set_ylabel('Loss', fontsize=11)
    ax.set_title('Training Loss', fontsize=12)
    ax.grid(True, alpha=0.3)
    ax.set_yscale('log')

    plt.tight_layout()
    plt.savefig('case1_time_series_prediction.png', dpi=150, bbox_inches='tight')
    plt.close()
    print("  Saved: case1_time_series_prediction.png")

# ==================== Example 4: Neural network vs numerical integration speed ====================
def example4_speed_comparison():
    """Example 4: Prediction speed of the neural network vs numerical integration."""
    print("\n" + "=" * 60)
    print("Example 4: Neural network vs numerical integration speed")
    print("=" * 60)

    import time

    # Generate training data
    np.random.seed(42)
    n_samples = 5000

    m_samples = np.random.uniform(500, 1500, n_samples)
    c_samples = np.random.uniform(1000, 3000, n_samples)
    k_samples = np.random.uniform(50000, 150000, n_samples)

    dt = 0.01
    t_max = 2.0
    t = np.arange(0, t_max, dt)
    f = np.zeros(len(t))
    f[int(0.1/dt)] = 10000.0

    # Compute responses
    features = []
    max_displacements = []

    print("\nGenerating training data...")
    for i in range(n_samples):
        u, _, _ = sdof_response(m_samples[i], c_samples[i], k_samples[i], f, dt)
        features.append([m_samples[i], c_samples[i], k_samples[i]])
        max_displacements.append(np.max(np.abs(u)))

    features = np.array(features)
    max_displacements = np.array(max_displacements)

    # Train the neural network
    print("Training neural network...")
    scaler_x = StandardScaler()
    x_scaled = scaler_x.fit_transform(features)

    mlp = MLPRegressor(
        hidden_layer_sizes=(50, 30),
        activation='relu',
        solver='adam',
        max_iter=1000,
        random_state=42
    )
    mlp.fit(x_scaled, max_displacements)

    print(f"  Training complete, iterations: {mlp.n_iter_}")

    # Benchmark prediction speed
    n_test = 1000
    test_features = np.random.uniform(0, 1, (n_test, 3))
    test_features[:, 0] = test_features[:, 0] * 1000 + 500
    test_features[:, 1] = test_features[:, 1] * 2000 + 1000
    test_features[:, 2] = test_features[:, 2] * 100000 + 50000

    # Neural network prediction time
    test_features_scaled = scaler_x.transform(test_features)
    start_time = time.time()
    for _ in range(10):  # average over 10 repetitions
        nn_predictions = mlp.predict(test_features_scaled)
    nn_time = (time.time() - start_time) / 10

    # Numerical integration time
    start_time = time.time()
    for _ in range(10):
        for i in range(n_test):
            u, _, _ = sdof_response(test_features[i, 0], test_features[i, 1], test_features[i, 2], f, dt)
    numerical_time = (time.time() - start_time) / 10

    speedup = numerical_time / nn_time

    print(f"\nSpeed comparison ({n_test} samples):")
    print(f"  Neural network prediction time: {nn_time*1000:.2f} ms")
    print(f"  Numerical integration time: {numerical_time*1000:.2f} ms")
    print(f"  Speedup: {speedup:.1f}x")

    # Verify accuracy
    true_max_disp = []
    for i in range(min(100, n_test)):
        u, _, _ = sdof_response(test_features[i, 0], test_features[i, 1], test_features[i, 2], f, dt)
        true_max_disp.append(np.max(np.abs(u)))

    true_max_disp = np.array(true_max_disp)
    nn_pred = mlp.predict(scaler_x.transform(test_features[:100]))

    mse = np.mean((true_max_disp - nn_pred)**2)
    mae = np.mean(np.abs(true_max_disp - nn_pred))

    print(f"\nPrediction accuracy (100 samples):")
    print(f"  MSE: {mse:.6f}")
    print(f"  MAE: {mae:.6f}")
    print(f"  Relative error: {mae/np.mean(true_max_disp)*100:.2f}%")

    # Plot results
    fig, axes = plt.subplots(1, 2, figsize=(12, 5))

    # Speed comparison
    ax = axes[0]
    methods = ['Neural Network', 'Numerical\nIntegration']
    times = [nn_time*1000, numerical_time*1000]
    colors = ['green', 'blue']
    bars = ax.bar(methods, times, color=colors, alpha=0.7)
    ax.set_ylabel('Time (ms)', fontsize=11)
    ax.set_title(f'Computation Time Comparison\n(Speedup: {speedup:.1f}x)', fontsize=12)
    ax.grid(True, alpha=0.3, axis='y')

    for bar, time_val in zip(bars, times):
        ax.text(bar.get_x() + bar.get_width()/2, bar.get_height() + max(times)*0.02,
                f'{time_val:.1f} ms', ha='center', fontsize=10)

    # Prediction accuracy
    ax = axes[1]
    ax.scatter(true_max_disp, nn_pred, alpha=0.6, s=30)
    ax.plot([true_max_disp.min(), true_max_disp.max()],
            [true_max_disp.min(), true_max_disp.max()], 'r--', linewidth=2)
    ax.set_xlabel('True Max Displacement (m)', fontsize=11)
    ax.set_ylabel('Predicted Max Displacement (m)', fontsize=11)
    ax.set_title(f'Prediction Accuracy\n(MAE: {mae:.4f} m)', fontsize=12)
    ax.grid(True, alpha=0.3)

    plt.tight_layout()
    plt.savefig('case1_speed_comparison.png', dpi=150, bbox_inches='tight')
    plt.close()
    print("\n  Saved: case1_speed_comparison.png")

# ==================== Main program ====================
if __name__ == "__main__":
    print("=" * 70)
    print("Case 1: Neural Network Basics and Structural Response Prediction")
    print("=" * 70)

    # Run all examples
    example1_neural_network_basics()
    example2_parameter_to_response()
    example3_time_series_prediction()
    example4_speed_comparison()

    print("\n" + "=" * 70)
    print("Case 1 analysis complete!")
    print("=" * 70)

# ============================================
# Generate GIF animation
# ============================================
def create_animation():
    """Create an animation of the simulation results."""
    import os
    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.animation import FuncAnimation

    fig, ax = plt.subplots(figsize=(10, 6))
    ax.set_xlim(0, 10)
    ax.set_ylim(-2, 2)
    ax.set_xlabel('Time', fontsize=12)
    ax.set_ylabel('Response', fontsize=12)
    ax.set_title('Dynamic Response Animation', fontsize=14, fontweight='bold')
    ax.grid(True, alpha=0.3)

    line, = ax.plot([], [], 'b-', linewidth=2)

    def init():
        line.set_data([], [])
        return line,

    def update(frame):
        x = np.linspace(0, 10, 100)
        y = np.sin(x - frame * 0.2) * np.exp(-frame * 0.01)
        line.set_data(x, y)
        return line,

    anim = FuncAnimation(fig, update, init_func=init, frames=50, interval=100, blit=True)

    output_dir = os.path.dirname(os.path.abspath(__file__))
    anim.save(f'{output_dir}/simulation_animation.gif', writer='pillow', fps=10)
    print(f"Animation saved to: {output_dir}/simulation_animation.gif")
    plt.close()


if __name__ == '__main__':
    create_animation()

Case 2: Recurrent Neural Networks (RNN) and LSTM for Time-Series Prediction

Theoretical Background

Structural dynamics problems are inherently time-series problems. Recurrent neural networks (RNNs) and their variant, the LSTM (Long Short-Term Memory) network, are particularly well suited to sequential data.

LSTM Architecture

The LSTM uses gating mechanisms to mitigate the vanishing-gradient problem of plain RNNs:

Forget gate
$$f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f)$$

Input gate
$$i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i)$$

Output gate
$$o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o)$$

Candidate cell state
$$\tilde{C}_t = \tanh(W_C \cdot [h_{t-1}, x_t] + b_C)$$

Cell state update
$$C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t$$

Hidden state
$$h_t = o_t \odot \tanh(C_t)$$

Applications

  • Seismic response prediction
  • Wind-induced vibration prediction
  • Long-horizon time-series prediction
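scikit-learn (used throughout these scripts) provides no LSTM layer, so the case code below approximates sequence learning with a sliding-window MLP. For reference, one time step of an LSTM cell following the gate equations above can be sketched in plain NumPy; the layer sizes, weights, and toy input sequence here are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, C_prev, params):
    # One LSTM time step; each W acts on the concatenated vector [h_{t-1}, x_t]
    z = np.concatenate([h_prev, x_t])
    f_t = sigmoid(params['Wf'] @ z + params['bf'])      # forget gate
    i_t = sigmoid(params['Wi'] @ z + params['bi'])      # input gate
    o_t = sigmoid(params['Wo'] @ z + params['bo'])      # output gate
    C_tilde = np.tanh(params['Wc'] @ z + params['bc'])  # candidate cell state
    C_t = f_t * C_prev + i_t * C_tilde                  # cell state update
    h_t = o_t * np.tanh(C_t)                            # hidden state
    return h_t, C_t

# Hypothetical sizes: 1 input feature, 4 hidden units
rng = np.random.default_rng(0)
n_in, n_h = 1, 4
params = {k: rng.normal(scale=0.1, size=(n_h, n_h + n_in)) for k in ('Wf', 'Wi', 'Wo', 'Wc')}
params.update({k: np.zeros(n_h) for k in ('bf', 'bi', 'bo', 'bc')})

h, C = np.zeros(n_h), np.zeros(n_h)
for u_t in [0.0, 0.01, -0.02]:  # a toy displacement sequence
    h, C = lstm_step(np.array([u_t]), h, C, params)
print(h.shape)  # (4,)
```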

"""
案例2:循环神经网络(RNN)与LSTM用于时序预测
使用LSTM网络进行结构响应的时序预测
"""

import numpy as np
import matplotlib.pyplot as plt
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
import matplotlib
matplotlib.use('Agg')

# Configure fonts (SimHei for Chinese labels, if available)
plt.rcParams['font.sans-serif'] = ['SimHei', 'DejaVu Sans']
plt.rcParams['axes.unicode_minus'] = False


def sdof_response(m, c, k, f, dt):
    """计算单自由度系统响应"""
    n_steps = len(f)
    u = np.zeros(n_steps)
    v = np.zeros(n_steps)
    a = np.zeros(n_steps)
    
    a[0] = f[0] / m
    gamma, beta = 0.5, 0.25
    k_eff = k + gamma * c / (beta * dt) + m / (beta * dt**2)
    
    for i in range(n_steps - 1):
        f_eff = (f[i+1] + 
                 m * (u[i] / (beta * dt**2) + v[i] / (beta * dt) + (1/(2*beta)-1) * a[i]) +
                 c * (gamma * u[i] / (beta * dt) + (gamma/beta - 1) * v[i] + 
                      dt * (gamma/(2*beta) - 1) * a[i]))
        u[i+1] = f_eff / k_eff
        a[i+1] = (u[i+1] - u[i]) / (beta * dt**2) - v[i] / (beta * dt) - (1/(2*beta)-1) * a[i]
        v[i+1] = v[i] + (1-gamma) * dt * a[i] + gamma * dt * a[i+1]
    
    return u, v, a


# ==================== Example 1: Earthquake response prediction ====================
def example1_earthquake_response_prediction():
    """Example 1: Earthquake response prediction."""
    print("=" * 60)
    print("Example 1: Earthquake response prediction")
    print("=" * 60)
    
    # System parameters
    m, c, k = 1000.0, 2000.0, 100000.0
    
    # Generate an artificial earthquake record (El Centro-like)
    np.random.seed(42)
    dt = 0.02
    t_max = 30.0
    t = np.arange(0, t_max, dt)
    
    # Superimpose several sine waves to imitate the earthquake
    n_waves = 20
    ground_accel = np.zeros(len(t))
    
    for i in range(n_waves):
        freq = np.random.uniform(0.5, 10.0)
        amp = np.random.uniform(0.5, 3.0)
        phase = np.random.uniform(0, 2*np.pi)
        # Envelope function to imitate the earthquake duration
        envelope = np.exp(-((t - 5.0) / 8.0)**2)
        ground_accel += amp * np.sin(2 * np.pi * freq * t + phase) * envelope
    
    # Ground acceleration -> equivalent load
    f_eq = -m * ground_accel
    
    # Compute the structural response
    u, v, a = sdof_response(m, c, k, f_eq, dt)
    
    print(f"\n地震激励参数:")
    print(f"  持续时间: {t_max} s")
    print(f"  时间步长: {dt} s")
    print(f"  数据点数: {len(t)}")
    print(f"  PGA: {np.max(np.abs(ground_accel)):.3f} m/s²")
    print(f"  最大位移: {np.max(np.abs(u)):.4f} m")
    
    # Prepare the sequence data
    sequence_length = 100  # use the preceding 2 s to predict the next time step
    
    X = []
    y = []
    
    # Multivariate input: displacement, velocity, acceleration, excitation
    for i in range(len(t) - sequence_length):
        X.append(np.column_stack([
            u[i:i+sequence_length],
            v[i:i+sequence_length],
            a[i:i+sequence_length],
            f_eq[i:i+sequence_length]
        ]))
        y.append(u[i+sequence_length])
    
    X = np.array(X)
    y = np.array(y)
    
    # Flatten to 2D (required by the MLP)
    X_reshaped = X.reshape(X.shape[0], -1)
    
    print(f"\nDataset information:")
    print(f"  Sequence length: {sequence_length}")
    print(f"  Feature dimension: 4 (displacement, velocity, acceleration, excitation)")
    print(f"  Total samples: {len(X)}")
    
    # Split into training and test sets
    train_size = int(0.8 * len(X))
    X_train, X_test = X_reshaped[:train_size], X_reshaped[train_size:]
    y_train, y_test = y[:train_size], y[train_size:]
    
    # Standardize
    scaler_X = StandardScaler()
    X_train_scaled = scaler_X.fit_transform(X_train)
    X_test_scaled = scaler_X.transform(X_test)
    
    # Build the neural network
    print("\nBuilding neural network...")
    mlp = MLPRegressor(
        hidden_layer_sizes=(200, 100, 50),
        activation='tanh',
        solver='adam',
        max_iter=500,
        random_state=42,
        early_stopping=True,
        validation_fraction=0.1
    )
    
    # Train
    print("Training neural network...")
    mlp.fit(X_train_scaled, y_train)
    
    print(f"  Training iterations: {mlp.n_iter_}")
    print(f"  Training loss: {mlp.loss_:.6f}")
    
    # Predict
    y_pred = mlp.predict(X_test_scaled)
    
    # Compute performance metrics
    mse = np.mean((y_test - y_pred)**2)
    rmse = np.sqrt(mse)
    r2 = 1 - np.sum((y_test - y_pred)**2) / np.sum((y_test - np.mean(y_test))**2)
    
    print(f"\n测试集性能:")
    print(f"  MSE: {mse:.8f}")
    print(f"  RMSE: {rmse:.6f}")
    print(f"  R²: {r2:.4f}")
    
    # 多步预测
    print("\n进行多步预测...")
    n_future = 500
    start_idx = train_size
    
    predicted = list(u[start_idx:start_idx+sequence_length])
    true_future = u[start_idx:start_idx+sequence_length+n_future]
    
    for i in range(n_future):
        # Keep all input windows aligned to the same length
        pred_arr = np.array(predicted[-sequence_length:])
        v_arr = v[start_idx+i:start_idx+i+sequence_length]
        a_arr = a[start_idx+i:start_idx+i+sequence_length]
        f_arr = f_eq[start_idx+i:start_idx+i+sequence_length]
        
        # Check the window lengths and stop when the data runs out
        min_len = min(len(pred_arr), len(v_arr), len(a_arr), len(f_arr))
        if min_len < sequence_length:
            break
            
        input_seq = np.column_stack([
            pred_arr[:sequence_length],
            v_arr[:sequence_length],
            a_arr[:sequence_length],
            f_arr[:sequence_length]
        ]).reshape(1, -1)
        
        input_scaled = scaler_X.transform(input_seq)
        next_pred = mlp.predict(input_scaled)[0]
        predicted.append(next_pred)
    
    predicted = np.array(predicted)
    
    # Plot results
    fig, axes = plt.subplots(2, 2, figsize=(14, 10))
    
    # Earthquake excitation
    ax = axes[0, 0]
    ax.plot(t, ground_accel, linewidth=1)
    ax.set_xlabel('Time (s)', fontsize=11)
    ax.set_ylabel('Ground Acceleration (m/s²)', fontsize=11)
    ax.set_title('Earthquake Ground Motion', fontsize=12)
    ax.grid(True, alpha=0.3)
    
    # Structural response
    ax = axes[0, 1]
    ax.plot(t, u, 'b-', linewidth=1, label='True Response')
    ax.set_xlabel('Time (s)', fontsize=11)
    ax.set_ylabel('Displacement (m)', fontsize=11)
    ax.set_title('Structural Response', fontsize=12)
    ax.legend()
    ax.grid(True, alpha=0.3)
    
    # Single-step prediction performance
    ax = axes[1, 0]
    ax.scatter(y_test, y_pred, alpha=0.3, s=5)
    ax.plot([y_test.min(), y_test.max()], [y_test.min(), y_test.max()], 'r--', linewidth=2)
    ax.set_xlabel('True Displacement (m)', fontsize=11)
    ax.set_ylabel('Predicted Displacement (m)', fontsize=11)
    ax.set_title(f'Single-step Prediction (R²={r2:.4f})', fontsize=12)
    ax.grid(True, alpha=0.3)
    
    # Multi-step prediction
    ax = axes[1, 1]
    # Ensure equal lengths
    min_len = min(len(true_future), len(predicted))
    t_plot = t[start_idx:start_idx+min_len]
    ax.plot(t_plot, true_future[:min_len], 'b-', linewidth=2, label='True')
    ax.plot(t_plot, predicted[:min_len], 'r--', linewidth=2, label='Predicted')
    ax.axvline(t[start_idx+sequence_length], color='g', linestyle=':', linewidth=2, label='Prediction Start')
    ax.set_xlabel('Time (s)', fontsize=11)
    ax.set_ylabel('Displacement (m)', fontsize=11)
    ax.set_title('Multi-step Prediction', fontsize=12)
    ax.legend()
    ax.grid(True, alpha=0.3)
    
    plt.tight_layout()
    plt.savefig('case2_earthquake_prediction.png', dpi=150, bbox_inches='tight')
    plt.close()
    print("  已保存: case2_earthquake_prediction.png")


# ==================== Example 2: Long-horizon prediction ====================
def example2_long_term_prediction():
    """Example 2: Long-horizon response prediction."""
    print("\n" + "=" * 60)
    print("Example 2: Long-horizon response prediction")
    print("=" * 60)
    
    # System parameters
    m, c, k = 1000.0, 2000.0, 100000.0
    
    # Generate training data: sinusoidal excitations at various frequencies
    np.random.seed(42)
    dt = 0.01
    t_max = 20.0
    t = np.arange(0, t_max, dt)
    
    n_samples = 50
    all_responses = []
    all_forces = []
    
    print("\n生成训练数据...")
    for i in range(n_samples):
        # 多频激励
        freq1 = np.random.uniform(0.5, 2.0)
        freq2 = np.random.uniform(2.0, 5.0)
        amp1 = np.random.uniform(500, 1500)
        amp2 = np.random.uniform(200, 800)
        
        f = amp1 * np.sin(2 * np.pi * freq1 * t) + amp2 * np.sin(2 * np.pi * freq2 * t)
        u, _, _ = sdof_response(m, c, k, f, dt)
        
        all_responses.append(u)
        all_forces.append(f)
    
    # Prepare the sequence data
    sequence_length = 100
    prediction_horizon = 50  # predict 50 future time steps
    
    X = []
    y = []
    
    for response in all_responses:
        for i in range(len(response) - sequence_length - prediction_horizon):
            X.append(response[i:i+sequence_length])
            y.append(response[i+sequence_length:i+sequence_length+prediction_horizon])
    
    X = np.array(X)
    y = np.array(y)
    
    print(f"  序列长度: {sequence_length}")
    print(f"  预测步数: {prediction_horizon}")
    print(f"  总样本数: {len(X)}")
    
    # 划分数据集
    train_size = int(0.8 * len(X))
    X_train, X_test = X[:train_size], X[train_size:]
    y_train, y_test = y[:train_size], y[train_size:]
    
    # One multi-output network predicts all horizon steps at once
    print("\nTraining multi-output neural network...")
    mlp = MLPRegressor(
        hidden_layer_sizes=(150, 100, 50),
        activation='tanh',
        solver='adam',
        max_iter=500,
        random_state=42,
        early_stopping=True
    )
    
    mlp.fit(X_train, y_train)
    
    print(f"  训练迭代次数: {mlp.n_iter_}")
    
    # 预测
    y_pred = mlp.predict(X_test)
    
    # 计算各预测步的R²
    r2_by_step = []
    for step in range(prediction_horizon):
        r2 = 1 - np.sum((y_test[:, step] - y_pred[:, step])**2) / np.sum((y_test[:, step] - np.mean(y_test[:, step]))**2)
        r2_by_step.append(r2)
    
    print(f"\n预测性能随时间变化:")
    print(f"  第1步 R²: {r2_by_step[0]:.4f}")
    print(f"  第{prediction_horizon}步 R²: {r2_by_step[-1]:.4f}")
    
    # 绘制结果
    fig, axes = plt.subplots(2, 2, figsize=(14, 10))
    
    # R²随预测步数变化
    ax = axes[0, 0]
    steps = np.arange(1, prediction_horizon + 1)
    ax.plot(steps, r2_by_step, 'b-', linewidth=2)
    ax.set_xlabel('Prediction Step', fontsize=11)
    ax.set_ylabel('R² Score', fontsize=11)
    ax.set_title('Prediction Accuracy vs Horizon', fontsize=12)
    ax.grid(True, alpha=0.3)
    ax.set_ylim(0, 1)
    
    # Example prediction
    ax = axes[0, 1]
    sample_idx = 0
    true_future = y_test[sample_idx]
    pred_future = y_pred[sample_idx]
    
    ax.plot(steps, true_future, 'b-', linewidth=2, label='True')
    ax.plot(steps, pred_future, 'r--', linewidth=2, label='Predicted')
    ax.set_xlabel('Prediction Step', fontsize=11)
    ax.set_ylabel('Displacement (m)', fontsize=11)
    ax.set_title('Prediction Example', fontsize=12)
    ax.legend()
    ax.grid(True, alpha=0.3)
    
    # Error distribution
    ax = axes[1, 0]
    errors = y_test - y_pred
    ax.hist(errors.flatten(), bins=50, alpha=0.7, edgecolor='black')
    ax.axvline(0, color='r', linestyle='--', linewidth=2)
    ax.set_xlabel('Prediction Error (m)', fontsize=11)
    ax.set_ylabel('Frequency', fontsize=11)
    ax.set_title('Error Distribution', fontsize=12)
    ax.grid(True, alpha=0.3)
    
    # Loss curve
    ax = axes[1, 1]
    ax.plot(mlp.loss_curve_, linewidth=2)
    ax.set_xlabel('Iteration', fontsize=11)
    ax.set_ylabel('Loss', fontsize=11)
    ax.set_title('Training Loss', fontsize=12)
    ax.grid(True, alpha=0.3)
    ax.set_yscale('log')
    
    plt.tight_layout()
    plt.savefig('case2_long_term_prediction.png', dpi=150, bbox_inches='tight')
    plt.close()
    print("  已保存: case2_long_term_prediction.png")


# ==================== Example 3: Comparing network architectures ====================
def example3_network_comparison():
    """Example 3: Comparing network architectures."""
    print("\n" + "=" * 60)
    print("Example 3: Comparing network architectures")
    print("=" * 60)
    
    # Generate data
    np.random.seed(42)
    m, c, k = 1000.0, 2000.0, 100000.0
    dt = 0.01
    t_max = 10.0
    t = np.arange(0, t_max, dt)
    
    n_samples = 30
    all_responses = []
    
    for i in range(n_samples):
        freq = np.random.uniform(0.5, 3.0)
        amp = np.random.uniform(500, 2000)
        f = amp * np.sin(2 * np.pi * freq * t)
        u, _, _ = sdof_response(m, c, k, f, dt)
        all_responses.append(u)
    
    # Prepare the data
    sequence_length = 50
    X = []
    y = []
    
    for response in all_responses:
        for i in range(len(response) - sequence_length):
            X.append(response[i:i+sequence_length])
            y.append(response[i+sequence_length])
    
    X = np.array(X)
    y = np.array(y)
    
    # Split the dataset
    train_size = int(0.8 * len(X))
    X_train, X_test = X[:train_size], X[train_size:]
    y_train, y_test = y[:train_size], y[train_size:]
    
    # Candidate network architectures
    architectures = [
        (50,),           # one hidden layer
        (100, 50),       # two layers
        (100, 50, 25),   # three layers
        (200, 100, 50),  # deeper network
    ]
    
    results = []
    
    print("\nTesting different network architectures...")
    for arch in architectures:
        print(f"\n  Architecture: {arch}")
        
        mlp = MLPRegressor(
            hidden_layer_sizes=arch,
            activation='tanh',
            solver='adam',
            max_iter=500,
            random_state=42
        )
        
        mlp.fit(X_train, y_train)
        y_pred = mlp.predict(X_test)
        
        mse = np.mean((y_test - y_pred)**2)
        r2 = 1 - np.sum((y_test - y_pred)**2) / np.sum((y_test - np.mean(y_test))**2)
        
        results.append({
            'architecture': arch,
            'n_params': sum(w.size for w in mlp.coefs_) + sum(b.size for b in mlp.intercepts_),
            'n_iter': mlp.n_iter_,
            'mse': mse,
            'r2': r2
        })
        
        print(f"    迭代次数: {mlp.n_iter_}")
        print(f"    MSE: {mse:.8f}")
        print(f"    R²: {r2:.4f}")
    
    # 绘制结果
    fig, axes = plt.subplots(1, 2, figsize=(12, 5))
    
    # R²对比
    ax = axes[0]
    arch_labels = [str(r['architecture']) for r in results]
    r2_scores = [r['r2'] for r in results]
    bars = ax.bar(range(len(results)), r2_scores, alpha=0.7)
    ax.set_xlabel('Network Architecture', fontsize=11)
    ax.set_ylabel('R² Score', fontsize=11)
    ax.set_title('Prediction Accuracy Comparison', fontsize=12)
    ax.set_xticks(range(len(results)))
    ax.set_xticklabels(arch_labels, rotation=15, ha='right')
    ax.grid(True, alpha=0.3, axis='y')
    ax.set_ylim(0.9, 1.0)
    
    for bar, r2 in zip(bars, r2_scores):
        ax.text(bar.get_x() + bar.get_width()/2, bar.get_height() + 0.002, 
                f'{r2:.4f}', ha='center', fontsize=9)
    
    # MSE对比
    ax = axes[1]
    mse_scores = [r['mse'] for r in results]
    bars = ax.bar(range(len(results)), mse_scores, alpha=0.7, color='orange')
    ax.set_xlabel('Network Architecture', fontsize=11)
    ax.set_ylabel('MSE', fontsize=11)
    ax.set_title('Mean Squared Error Comparison', fontsize=12)
    ax.set_xticks(range(len(results)))
    ax.set_xticklabels(arch_labels, rotation=15, ha='right')
    ax.grid(True, alpha=0.3, axis='y')
    ax.set_yscale('log')
    
    plt.tight_layout()
    plt.savefig('case2_network_comparison.png', dpi=150, bbox_inches='tight')
    plt.close()
    print("\n  已保存: case2_network_comparison.png")


# ==================== 主程序 ====================
if __name__ == "__main__":
    print("=" * 70)
    print("案例2:循环神经网络(RNN)与LSTM用于时序预测")
    print("=" * 70)
    
    # 运行所有示例
    example1_earthquake_response_prediction()
    example2_long_term_prediction()
    example3_network_comparison()
    
    print("\n" + "=" * 70)
    print("案例2分析完成!")
    print("=" * 70)


# ============================================
# 生成GIF动画
# ============================================

def create_animation():
    """创建仿真结果动画"""
    import os
    import matplotlib.pyplot as plt
    from matplotlib.animation import FuncAnimation
    import numpy as np
    
    fig, ax = plt.subplots(figsize=(10, 6))
    ax.set_xlim(0, 10)
    ax.set_ylim(-2, 2)
    ax.set_xlabel('Time', fontsize=12)
    ax.set_ylabel('Response', fontsize=12)
    ax.set_title('Dynamic Response Animation', fontsize=14, fontweight='bold')
    ax.grid(True, alpha=0.3)
    
    line, = ax.plot([], [], 'b-', linewidth=2)
    
    def init():
        line.set_data([], [])
        return line,
    
    def update(frame):
        x = np.linspace(0, 10, 100)
        y = np.sin(x - frame * 0.2) * np.exp(-frame * 0.01)
        line.set_data(x, y)
        return line,
    
    anim = FuncAnimation(fig, update, init_func=init, frames=50, interval=100, blit=True)
    
    output_dir = os.path.dirname(os.path.abspath(__file__))
    anim.save(f'{output_dir}/simulation_animation.gif', writer='pillow', fps=10)
    print(f"动画已保存到: {output_dir}/simulation_animation.gif")
    plt.close()

if __name__ == '__main__':
    create_animation()

案例3:物理信息神经网络(PINN)求解动力学方程

理论背景

物理信息神经网络(Physics-Informed Neural Networks, PINN)将物理定律(微分方程)作为约束嵌入神经网络训练中。

基本思想

对于动力学方程:
$\mathcal{N}[u(t)] = f(t)$

定义残差:
$r(t) = \mathcal{N}[\hat{u}(t;\theta)] - f(t)$

损失函数包含数据损失和物理损失:
$\mathcal{L} = \mathcal{L}_{data} + \lambda \mathcal{L}_{physics}$

其中:
$\mathcal{L}_{physics} = \frac{1}{N_r}\sum_{i=1}^{N_r} |r(t_i)|^2$

优势

  1. 数据需求少
  2. 满足物理约束
  3. 可以处理逆问题
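上述损失函数可以用一个最小的示意实现来体会(假设性草图,非正文案例代码):用单隐层 tanh 网络解析地求出 du/dt,把物理残差与初始条件一起交给 `scipy.optimize.minimize` 最小化,求解 du/dt = -2u, u(0) = 1。网络规模、初始化与多次重启设置均为演示性假设。

```python
import numpy as np
from scipy.optimize import minimize

H = 10                                   # 隐层神经元数(假设)
t_col = np.linspace(0.0, 2.0, 50)        # 配点

def unpack(p):
    return p[:H], p[H:2*H], p[2*H:3*H], p[3*H]

def net(p, t):
    """u(t) = w2·tanh(w1·t + b1) + b2"""
    w1, b1, w2, b2 = unpack(p)
    z = np.tanh(np.outer(t, w1) + b1)    # (N, H)
    return z @ w2 + b2

def net_dt(p, t):
    """解析导数:d tanh(x)/dx = 1 - tanh^2(x)"""
    w1, b1, w2, b2 = unpack(p)
    z = np.tanh(np.outer(t, w1) + b1)
    return ((1.0 - z**2) * w1) @ w2

def loss(p):
    r = net_dt(p, t_col) + 2.0 * net(p, t_col)   # 物理残差 du/dt + 2u
    ic = net(p, np.array([0.0]))[0] - 1.0        # 初始条件 u(0) = 1
    return np.mean(r**2) + ic**2

# 多次随机重启,取损失最小者,避免陷入局部极小
best = None
for seed in range(3):
    p0 = np.random.default_rng(seed).normal(scale=0.5, size=3*H + 1)
    res = minimize(loss, p0, method='L-BFGS-B', options={'maxiter': 2000})
    if best is None or res.fun < best.fun:
        best = res

u_pinn = net(best.x, t_col)
err = np.max(np.abs(u_pinn - np.exp(-2.0 * t_col)))  # 与解析解 exp(-2t) 对比
```

注意这里没有用到任何解的观测数据,网络完全由方程残差和初始条件约束,这正是 PINN 区别于纯数据驱动拟合的地方。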

"""
案例3:物理信息神经网络(PINN)求解动力学方程
使用PINN求解SDOF系统的动力学方程
"""

import numpy as np
import matplotlib.pyplot as plt
from sklearn.neural_network import MLPRegressor
from scipy.optimize import minimize
import matplotlib
matplotlib.use('Agg')

# 设置中文字体
plt.rcParams['font.sans-serif'] = ['SimHei', 'DejaVu Sans']
plt.rcParams['axes.unicode_minus'] = False


def sdof_response_exact(m, c, k, f, dt, u0=0, v0=0):
    """精确求解SDOF系统响应(用于对比)"""
    n_steps = len(f)
    u = np.zeros(n_steps)
    v = np.zeros(n_steps)
    a = np.zeros(n_steps)
    
    u[0] = u0
    v[0] = v0
    a[0] = (f[0] - c*v0 - k*u0) / m
    
    gamma, beta = 0.5, 0.25
    k_eff = k + gamma * c / (beta * dt) + m / (beta * dt**2)
    
    for i in range(n_steps - 1):
        f_eff = (f[i+1] + 
                 m * (u[i] / (beta * dt**2) + v[i] / (beta * dt) + (1/(2*beta)-1) * a[i]) +
                 c * (gamma * u[i] / (beta * dt) + (gamma/beta - 1) * v[i] + 
                      dt * (gamma/(2*beta) - 1) * a[i]))
        u[i+1] = f_eff / k_eff
        a[i+1] = (u[i+1] - u[i]) / (beta * dt**2) - v[i] / (beta * dt) - (1/(2*beta)-1) * a[i]
        v[i+1] = v[i] + (1-gamma) * dt * a[i] + gamma * dt * a[i+1]
    
    return u, v, a


# ==================== 示例1:PINN基础 - 求解ODE ====================
def example1_pinn_basic():
    """示例1:PINN基础 - 求解简单ODE"""
    print("=" * 60)
    print("示例1:PINN基础 - 求解简单ODE")
    print("=" * 60)
    
    # 求解方程: du/dt = -2u, u(0) = 1
    # 解析解: u(t) = exp(-2t)
    
    # 时间域
    t_min, t_max = 0, 2
    n_collocation = 100
    t_collocation = np.linspace(t_min, t_max, n_collocation).reshape(-1, 1)
    
    # 初始条件点
    t_ic = np.array([[0.0]])
    u_ic = np.array([1.0])
    
    print(f"\n求解方程: du/dt = -2u, u(0) = 1")
    print(f"解析解: u(t) = exp(-2t)")
    print(f"配点数量: {n_collocation}")
    
    # 说明:此处用 scikit-learn 的 MLP 做数据驱动拟合作为演示;
    # 真正的 PINN 需对网络输出自动求导并最小化方程残差
    
    # 初始化网络
    nn = MLPRegressor(
        hidden_layer_sizes=(20, 20),
        activation='tanh',
        solver='lbfgs',
        max_iter=1000,
        random_state=42
    )
    
    # 准备训练数据(使用解析解作为目标,实际PINN不需要)
    # 这里为了演示,我们使用数据驱动方式
    u_exact = np.exp(-2 * t_collocation.ravel())
    
    # 训练
    nn.fit(t_collocation, u_exact)
    
    # 预测
    t_test = np.linspace(t_min, t_max, 200).reshape(-1, 1)
    u_pred = nn.predict(t_test)
    u_true = np.exp(-2 * t_test.ravel())
    
    # 计算误差
    mse = np.mean((u_pred - u_true)**2)
    max_error = np.max(np.abs(u_pred - u_true))
    
    print(f"\nPINN近似结果:")
    print(f"  MSE: {mse:.8f}")
    print(f"  最大误差: {max_error:.6f}")
    
    # 绘制结果
    fig, axes = plt.subplots(1, 2, figsize=(12, 5))
    
    # 解的比较
    ax = axes[0]
    ax.plot(t_test, u_true, 'b-', linewidth=2, label='Exact Solution')
    ax.plot(t_test, u_pred, 'r--', linewidth=2, label='PINN Approximation')
    ax.scatter(t_collocation[::10], u_exact[::10], c='g', s=30, alpha=0.5, label='Collocation Points')
    ax.set_xlabel('t', fontsize=11)
    ax.set_ylabel('u(t)', fontsize=11)
    ax.set_title('PINN Solution of du/dt = -2u', fontsize=12)
    ax.legend()
    ax.grid(True, alpha=0.3)
    
    # 误差分布
    ax = axes[1]
    error = u_pred - u_true
    ax.plot(t_test, error, 'k-', linewidth=1.5)
    ax.axhline(0, color='r', linestyle='--', alpha=0.5)
    ax.fill_between(t_test.ravel(), error, alpha=0.3)
    ax.set_xlabel('t', fontsize=11)
    ax.set_ylabel('Error', fontsize=11)
    ax.set_title('Approximation Error', fontsize=12)
    ax.grid(True, alpha=0.3)
    
    plt.tight_layout()
    plt.savefig('case3_pinn_basic.png', dpi=150, bbox_inches='tight')
    plt.close()
    print("\n  已保存: case3_pinn_basic.png")


# ==================== 示例2:PINN求解SDOF系统 ====================
def example2_pinn_sdof():
    """示例2:使用PINN求解SDOF系统"""
    print("\n" + "=" * 60)
    print("示例2:PINN求解SDOF系统")
    print("=" * 60)
    
    # 系统参数
    m, c, k = 1.0, 0.5, 10.0
    
    # 时间参数
    dt = 0.01
    t_max = 5.0
    t = np.arange(0, t_max, dt)
    
    # 激励
    f = np.sin(2 * t)
    
    # 精确解(数值解)
    u_exact, v_exact, a_exact = sdof_response_exact(m, c, k, f, dt, u0=0, v0=0)
    
    print(f"\nSDOF系统参数:")
    print(f"  m = {m}, c = {c}, k = {k}")
    print(f"  激励: f(t) = sin(2t)")
    print(f"  初始条件: u(0) = 0, v(0) = 0")
    
    # 使用神经网络近似
    # 输入:时间t
    # 输出:位移u(t)
    
    # 训练数据
    t_train = t[::5].reshape(-1, 1)  # 每隔5个点取一个
    u_train = u_exact[::5]
    
    # 构建神经网络
    nn = MLPRegressor(
        hidden_layer_sizes=(50, 30, 20),
        activation='tanh',
        solver='lbfgs',
        max_iter=2000,
        random_state=42
    )
    
    # 训练(数据驱动)
    nn.fit(t_train, u_train)
    
    # 预测
    u_pred = nn.predict(t.reshape(-1, 1))
    
    # 计算误差
    mse = np.mean((u_pred - u_exact)**2)
    max_error = np.max(np.abs(u_pred - u_exact))
    
    print(f"\nPINN近似结果:")
    print(f"  MSE: {mse:.8f}")
    print(f"  最大误差: {max_error:.6f}")
    print(f"  相对误差: {max_error/np.max(np.abs(u_exact))*100:.2f}%")
    
    # 绘制结果
    fig, axes = plt.subplots(2, 2, figsize=(14, 10))
    
    # 位移响应
    ax = axes[0, 0]
    ax.plot(t, u_exact, 'b-', linewidth=2, label='Exact Solution')
    ax.plot(t, u_pred, 'r--', linewidth=2, label='PINN Approximation')
    ax.scatter(t_train, u_train, c='g', s=20, alpha=0.5, label='Training Points')
    ax.set_xlabel('Time (s)', fontsize=11)
    ax.set_ylabel('Displacement (m)', fontsize=11)
    ax.set_title('Displacement Response', fontsize=12)
    ax.legend()
    ax.grid(True, alpha=0.3)
    
    # 速度响应(数值微分)
    ax = axes[0, 1]
    v_pred = np.gradient(u_pred, dt)
    ax.plot(t, v_exact, 'b-', linewidth=2, label='Exact')
    ax.plot(t, v_pred, 'r--', linewidth=2, label='PINN')
    ax.set_xlabel('Time (s)', fontsize=11)
    ax.set_ylabel('Velocity (m/s)', fontsize=11)
    ax.set_title('Velocity Response', fontsize=12)
    ax.legend()
    ax.grid(True, alpha=0.3)
    
    # 误差
    ax = axes[1, 0]
    error = u_pred - u_exact
    ax.plot(t, error, 'k-', linewidth=1.5)
    ax.axhline(0, color='r', linestyle='--', alpha=0.5)
    ax.fill_between(t, error, alpha=0.3)
    ax.set_xlabel('Time (s)', fontsize=11)
    ax.set_ylabel('Error (m)', fontsize=11)
    ax.set_title('Displacement Error', fontsize=12)
    ax.grid(True, alpha=0.3)
    
    # 残差(物理方程)
    ax = axes[1, 1]
    a_pred = np.gradient(v_pred, dt)
    residual = m * a_pred + c * v_pred + k * u_pred - f
    ax.plot(t, residual, 'k-', linewidth=1.5)
    ax.axhline(0, color='r', linestyle='--', alpha=0.5)
    ax.set_xlabel('Time (s)', fontsize=11)
    ax.set_ylabel('Residual', fontsize=11)
    ax.set_title('Governing Equation Residual', fontsize=12)
    ax.grid(True, alpha=0.3)
    
    plt.tight_layout()
    plt.savefig('case3_pinn_sdof.png', dpi=150, bbox_inches='tight')
    plt.close()
    print("  已保存: case3_pinn_sdof.png")


# ==================== 示例3:数据驱动 vs 物理驱动 ====================
def example3_data_vs_physics():
    """示例3:对比数据驱动和物理驱动方法"""
    print("\n" + "=" * 60)
    print("示例3:数据驱动 vs 物理驱动")
    print("=" * 60)
    
    # 系统参数
    m, c, k = 1.0, 0.5, 10.0
    dt = 0.01
    t_max = 5.0
    t = np.arange(0, t_max, dt)
    f = np.sin(2 * t)
    
    # 精确解
    u_exact, _, _ = sdof_response_exact(m, c, k, f, dt, u0=0, v0=0)
    
    # 不同数据量下的对比
    data_fractions = [0.02, 0.05, 0.1, 0.2, 0.5]
    
    results_data_driven = []
    results_physics_informed = []
    
    print("\n测试不同数据量下的性能...")
    
    for frac in data_fractions:
        n_data = int(len(t) * frac)
        indices = np.linspace(0, len(t)-1, n_data, dtype=int)
        
        t_train = t[indices].reshape(-1, 1)
        u_train = u_exact[indices]
        
        # 数据驱动方法
        nn_data = MLPRegressor(
            hidden_layer_sizes=(30, 20),
            activation='tanh',
            solver='lbfgs',
            max_iter=1000,
            random_state=42
        )
        nn_data.fit(t_train, u_train)
        u_pred_data = nn_data.predict(t.reshape(-1, 1))
        mse_data = np.mean((u_pred_data - u_exact)**2)
        
        results_data_driven.append(mse_data)
        
        # “物理信息”方法的简化替代:在更多配点上补充监督信息
        # (严格的 PINN 应在这些点上最小化方程残差,而非直接引用精确解)
        n_physics = min(200, len(t))
        physics_indices = np.linspace(0, len(t)-1, n_physics, dtype=int)
        
        # 扩展训练集
        t_combined = np.vstack([t_train, t[physics_indices].reshape(-1, 1)])
        u_combined = np.concatenate([u_train, u_exact[physics_indices]])
        
        nn_physics = MLPRegressor(
            hidden_layer_sizes=(30, 20),
            activation='tanh',
            solver='lbfgs',
            max_iter=1000,
            random_state=42
        )
        nn_physics.fit(t_combined, u_combined)
        u_pred_physics = nn_physics.predict(t.reshape(-1, 1))
        mse_physics = np.mean((u_pred_physics - u_exact)**2)
        
        results_physics_informed.append(mse_physics)
        
        print(f"\n  数据量: {frac*100:.0f}% ({n_data}点)")
        print(f"    数据驱动 MSE: {mse_data:.8f}")
        print(f"    物理信息 MSE: {mse_physics:.8f}")
    
    # 绘制结果
    fig, axes = plt.subplots(1, 2, figsize=(12, 5))
    
    # 误差对比
    ax = axes[0]
    x = np.arange(len(data_fractions))
    width = 0.35
    ax.bar(x - width/2, results_data_driven, width, label='Data-Driven', alpha=0.7)
    ax.bar(x + width/2, results_physics_informed, width, label='Physics-Informed', alpha=0.7)
    ax.set_xlabel('Data Fraction', fontsize=11)
    ax.set_ylabel('MSE', fontsize=11)
    ax.set_title('Data-Driven vs Physics-Informed', fontsize=12)
    ax.set_xticks(x)
    ax.set_xticklabels([f"{f*100:.0f}%" for f in data_fractions])
    ax.legend()
    ax.grid(True, alpha=0.3, axis='y')
    ax.set_yscale('log')
    
    # 收敛曲线
    ax = axes[1]
    ax.plot([f*100 for f in data_fractions], results_data_driven, 'bo-', linewidth=2, markersize=8, label='Data-Driven')
    ax.plot([f*100 for f in data_fractions], results_physics_informed, 'rs-', linewidth=2, markersize=8, label='Physics-Informed')
    ax.set_xlabel('Data Fraction (%)', fontsize=11)
    ax.set_ylabel('MSE', fontsize=11)
    ax.set_title('Convergence with Data Amount', fontsize=12)
    ax.legend()
    ax.grid(True, alpha=0.3)
    ax.set_yscale('log')
    
    plt.tight_layout()
    plt.savefig('case3_data_vs_physics.png', dpi=150, bbox_inches='tight')
    plt.close()
    print("\n  已保存: case3_data_vs_physics.png")


# ==================== 示例4:PINN参数识别 ====================
def example4_pinn_parameter_identification():
    """示例4:使用PINN进行参数识别"""
    print("\n" + "=" * 60)
    print("示例4:PINN参数识别")
    print("=" * 60)
    
    # 真实参数
    m_true, c_true, k_true = 1.0, 0.5, 10.0
    
    # 生成观测数据
    dt = 0.01
    t_max = 5.0
    t = np.arange(0, t_max, dt)
    f = np.sin(2 * t)
    
    u_exact, _, _ = sdof_response_exact(m_true, c_true, k_true, f, dt, u0=0, v0=0)
    
    # 添加噪声
    np.random.seed(42)
    noise_level = 0.05
    u_noisy = u_exact + noise_level * np.std(u_exact) * np.random.randn(len(t))
    
    print(f"\n真实参数:")
    print(f"  m = {m_true}, c = {c_true}, k = {k_true}")
    print(f"\n观测数据:")
    print(f"  噪声水平: {noise_level*100:.0f}%")
    print(f"  数据点数: {len(t)}")
    
    # 使用神经网络同时学习响应和参数
    # 这里简化为:先拟合响应,再从响应估计参数
    
    # 拟合响应
    nn = MLPRegressor(
        hidden_layer_sizes=(50, 30, 20),
        activation='tanh',
        solver='lbfgs',
        max_iter=2000,
        random_state=42
    )
    
    # 使用稀疏数据
    sparse_indices = np.arange(0, len(t), 5)
    nn.fit(t[sparse_indices].reshape(-1, 1), u_noisy[sparse_indices])
    
    # 预测
    u_pred = nn.predict(t.reshape(-1, 1))
    
    # 从响应估计参数(简化方法)
    # 估计固有频率
    from scipy.fft import fft, fftfreq
    
    fft_vals = fft(u_pred)
    freqs = fftfreq(len(t), dt)
    positive_freqs = freqs[:len(freqs)//2]
    positive_fft = np.abs(fft_vals[:len(fft_vals)//2])
    
    # 找到峰值频率
    peak_idx = np.argmax(positive_fft[1:]) + 1
    f_estimated = positive_freqs[peak_idx]
    omega_n_estimated = 2 * np.pi * f_estimated
    
    # 估计阻尼比(对数衰减率;严格说仅适用于自由衰减段,此处为近似)
    peaks = []
    for i in range(1, len(u_pred)-1):
        if u_pred[i] > u_pred[i-1] and u_pred[i] > u_pred[i+1] and u_pred[i] > 0:
            peaks.append((t[i], u_pred[i]))
    
    if len(peaks) >= 2:
        log_decrement = np.log(peaks[0][1] / peaks[1][1])
        xi_estimated = log_decrement / (2 * np.pi)
    else:
        xi_estimated = 0.1
    
    # 假设m=1,估计c和k
    m_assumed = 1.0
    k_estimated = m_assumed * omega_n_estimated**2
    c_estimated = 2 * xi_estimated * np.sqrt(m_assumed * k_estimated)
    
    print(f"\n参数识别结果:")
    print(f"  估计 m = {m_assumed:.3f} (假设)")
    print(f"  估计 c = {c_estimated:.3f} (真实: {c_true:.3f})")
    print(f"  估计 k = {k_estimated:.3f} (真实: {k_true:.3f})")
    print(f"  估计固有频率: {f_estimated:.3f} Hz")
    print(f"  估计阻尼比: {xi_estimated:.4f}")
    
    # 绘制结果
    fig, axes = plt.subplots(2, 2, figsize=(14, 10))
    
    # 响应拟合
    ax = axes[0, 0]
    ax.plot(t, u_exact, 'b-', linewidth=2, label='True Response')
    ax.scatter(t[sparse_indices], u_noisy[sparse_indices], c='g', s=20, alpha=0.5, label='Noisy Data')
    ax.plot(t, u_pred, 'r--', linewidth=2, label='PINN Fit')
    ax.set_xlabel('Time (s)', fontsize=11)
    ax.set_ylabel('Displacement (m)', fontsize=11)
    ax.set_title('Response Fitting', fontsize=12)
    ax.legend()
    ax.grid(True, alpha=0.3)
    
    # 频谱
    ax = axes[0, 1]
    ax.plot(positive_freqs, positive_fft, 'b-', linewidth=1.5)
    ax.axvline(f_estimated, color='r', linestyle='--', linewidth=2, label=f'Peak: {f_estimated:.2f} Hz')
    ax.set_xlabel('Frequency (Hz)', fontsize=11)
    ax.set_ylabel('Amplitude', fontsize=11)
    ax.set_title('Frequency Spectrum', fontsize=12)
    ax.legend()
    ax.grid(True, alpha=0.3)
    ax.set_xlim(0, 5)
    
    # 参数对比
    ax = axes[1, 0]
    params = ['c', 'k']
    true_vals = [c_true, k_true]
    estimated_vals = [c_estimated, k_estimated]
    
    x = np.arange(len(params))
    width = 0.35
    ax.bar(x - width/2, true_vals, width, label='True', alpha=0.7)
    ax.bar(x + width/2, estimated_vals, width, label='Estimated', alpha=0.7)
    ax.set_ylabel('Parameter Value', fontsize=11)
    ax.set_title('Parameter Identification', fontsize=12)
    ax.set_xticks(x)
    ax.set_xticklabels(params)
    ax.legend()
    ax.grid(True, alpha=0.3, axis='y')
    
    # 误差
    ax = axes[1, 1]
    errors = [abs(c_estimated - c_true)/c_true * 100, abs(k_estimated - k_true)/k_true * 100]
    ax.bar(params, errors, alpha=0.7, color='red')
    ax.set_ylabel('Relative Error (%)', fontsize=11)
    ax.set_title('Parameter Estimation Error', fontsize=12)
    ax.grid(True, alpha=0.3, axis='y')
    
    for i, err in enumerate(errors):
        ax.text(i, err + max(errors)*0.02, f'{err:.1f}%', ha='center', fontsize=10)
    
    plt.tight_layout()
    plt.savefig('case3_pinn_parameter_id.png', dpi=150, bbox_inches='tight')
    plt.close()
    print("\n  已保存: case3_pinn_parameter_id.png")


# ==================== 主程序 ====================
if __name__ == "__main__":
    print("=" * 70)
    print("案例3:物理信息神经网络(PINN)求解动力学方程")
    print("=" * 70)
    
    # 运行所有示例
    example1_pinn_basic()
    example2_pinn_sdof()
    example3_data_vs_physics()
    example4_pinn_parameter_identification()
    
    print("\n" + "=" * 70)
    print("案例3分析完成!")
    print("=" * 70)



案例4:神经网络用于结构损伤识别

理论背景

神经网络可以从结构的振动特征(频率、振型、模态参数等)中学习损伤模式,实现自动化的损伤识别。

方法

  1. 特征提取:从振动数据中提取特征
  2. 分类网络:识别损伤位置
  3. 回归网络:估计损伤程度
  4. 集成方法:提高识别精度

数据驱动 vs 物理驱动

  • 数据驱动:需要大量标记数据
  • 物理驱动:结合物理约束,数据需求少
  • 混合方法:结合两者优势
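上面提到的模态置信度(MAC)可以按定义直接计算:MAC = |φa·φb|² / (|φa|²·|φb|²),取值为 1 表示两振型完全一致。下面是一个示意计算(振型数据为假设的简支梁一阶振型,局部幅值折减模拟损伤):

```python
import numpy as np

def mac(phi_a, phi_b):
    """模态置信度准则:MAC = |φa·φb|² / (|φa|²·|φb|²)"""
    return np.abs(phi_a @ phi_b) ** 2 / ((phi_a @ phi_a) * (phi_b @ phi_b))

x = np.linspace(0, 1, 20)
phi_healthy = np.sin(np.pi * x)      # 无损一阶振型(简支梁,假设)
phi_damaged = phi_healthy.copy()
phi_damaged[8:12] *= 0.8             # 假设局部损伤使该区段振型幅值下降 20%

mac_self = mac(phi_healthy, phi_healthy)   # 自身对比,应为 1.0
mac_dam = mac(phi_healthy, phi_damaged)    # 略小于 1,偏离量反映损伤
```

MAC 对幅值缩放不敏感、只度量振型形状的相关性,因此常与频率变化量一起作为损伤识别的输入特征。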

"""
案例4:神经网络用于结构损伤识别
使用神经网络进行结构损伤检测和定位
"""

import numpy as np
import matplotlib.pyplot as plt
from sklearn.neural_network import MLPClassifier, MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.preprocessing import StandardScaler
import matplotlib
matplotlib.use('Agg')

# 设置中文字体
plt.rcParams['font.sans-serif'] = ['SimHei', 'DejaVu Sans']
plt.rcParams['axes.unicode_minus'] = False


def simulate_structure_response(n_modes=5, n_sensors=10, damage_location=None, damage_severity=0.0):
    """
    模拟结构响应
    damage_location: 损伤位置 (0-1)
    damage_severity: 损伤程度 (0-1, 0表示无损)
    """
    # 结构参数
    L = 10.0  # 结构长度
    EI = 1e6  # 弯曲刚度
    rhoA = 100.0  # 线密度
    
    # 传感器位置
    sensor_positions = np.linspace(0, L, n_sensors)
    
    # 模态参数(简支梁)
    omega_n = np.zeros(n_modes)
    phi = np.zeros((n_sensors, n_modes))
    
    for n in range(1, n_modes + 1):
        # 固有频率
        omega_n[n-1] = (n * np.pi)**2 * np.sqrt(EI / rhoA) / L**2
        
        # 模态振型
        phi[:, n-1] = np.sin(n * np.pi * sensor_positions / L)
    
    # 如果有损伤,修改局部刚度
    if damage_severity > 0 and damage_location is not None:
        # 简化模型:损伤影响附近的模态
        damage_idx = int(damage_location * n_sensors)
        for n in range(n_modes):
            # 损伤降低局部模态幅值
            affected_range = 2
            start = max(0, damage_idx - affected_range)
            end = min(n_sensors, damage_idx + affected_range + 1)
            phi[start:end, n] *= (1 - damage_severity * 0.3)
    
    # 生成响应特征
    # 1. 固有频率变化
    if damage_severity > 0:
        omega_n *= (1 - damage_severity * 0.1 * np.random.rand(n_modes))
    
    # 2. 模态置信度(MAC值)
    mac_values = np.ones(n_modes)
    if damage_severity > 0:
        mac_values *= (1 - damage_severity * 0.2)
    
    # 3. 振型变化指标
    shape_change = np.zeros(n_modes)
    if damage_severity > 0:
        shape_change = damage_severity * np.random.rand(n_modes) * 0.5
    
    # 组合特征
    features = np.concatenate([
        omega_n / omega_n[0],  # 归一化频率
        mac_values,
        shape_change,
        np.std(phi, axis=0)  # 振型标准差
    ])
    
    return features, sensor_positions, phi


# ==================== 示例1:损伤检测分类 ====================
def example1_damage_classification():
    """示例1:使用神经网络进行损伤检测分类"""
    print("=" * 60)
    print("示例1:损伤检测分类")
    print("=" * 60)
    
    np.random.seed(42)
    
    # 生成训练数据
    n_samples = 1000
    n_modes = 5
    
    X = []
    y = []
    
    print("\n生成训练数据...")
    
    for i in range(n_samples):
        # 随机选择是否有损伤
        has_damage = np.random.rand() > 0.5
        
        if has_damage:
            damage_loc = np.random.rand()
            damage_sev = np.random.uniform(0.1, 0.8)
            features, _, _ = simulate_structure_response(
                n_modes=n_modes, 
                damage_location=damage_loc, 
                damage_severity=damage_sev
            )
            label = 1  # 有损伤
        else:
            features, _, _ = simulate_structure_response(
                n_modes=n_modes, 
                damage_location=None, 
                damage_severity=0.0
            )
            label = 0  # 无损
        
        X.append(features)
        y.append(label)
    
    X = np.array(X)
    y = np.array(y)
    
    # 划分训练集和测试集
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42, stratify=y
    )
    
    # 标准化
    scaler = StandardScaler()
    X_train_scaled = scaler.fit_transform(X_train)
    X_test_scaled = scaler.transform(X_test)
    
    print(f"  总样本数: {len(X)}")
    print(f"  训练集: {len(X_train)}")
    print(f"  测试集: {len(X_test)}")
    print(f"  特征维度: {X.shape[1]}")
    print(f"  损伤样本比例: {np.mean(y)*100:.1f}%")
    
    # 构建分类神经网络
    clf = MLPClassifier(
        hidden_layer_sizes=(50, 30, 20),
        activation='relu',
        solver='adam',
        max_iter=500,
        random_state=42,
        early_stopping=True,
        validation_fraction=0.1
    )
    
    # 训练
    print("\n训练神经网络...")
    clf.fit(X_train_scaled, y_train)
    
    # 预测
    y_pred = clf.predict(X_test_scaled)
    y_prob = clf.predict_proba(X_test_scaled)[:, 1]
    
    # 评估
    accuracy = np.mean(y_pred == y_test)
    
    print(f"\n分类性能:")
    print(f"  准确率: {accuracy*100:.2f}%")
    print(f"\n分类报告:")
    print(classification_report(y_test, y_pred, target_names=['Healthy', 'Damaged']))
    
    # 混淆矩阵
    cm = confusion_matrix(y_test, y_pred)
    
    # 绘制结果
    fig, axes = plt.subplots(2, 2, figsize=(14, 10))
    
    # 训练损失
    ax = axes[0, 0]
    ax.plot(clf.loss_curve_, linewidth=2)
    ax.set_xlabel('Iteration', fontsize=11)
    ax.set_ylabel('Loss', fontsize=11)
    ax.set_title('Training Loss Curve', fontsize=12)
    ax.grid(True, alpha=0.3)
    
    # 混淆矩阵
    ax = axes[0, 1]
    im = ax.imshow(cm, interpolation='nearest', cmap=plt.cm.Blues)
    ax.set_title('Confusion Matrix', fontsize=12)
    tick_marks = np.arange(2)
    ax.set_xticks(tick_marks)
    ax.set_yticks(tick_marks)
    ax.set_xticklabels(['Healthy', 'Damaged'])
    ax.set_yticklabels(['Healthy', 'Damaged'])
    ax.set_ylabel('True Label', fontsize=11)
    ax.set_xlabel('Predicted Label', fontsize=11)
    
    # 添加数值标注
    thresh = cm.max() / 2.
    for i in range(2):
        for j in range(2):
            ax.text(j, i, format(cm[i, j], 'd'),
                   ha="center", va="center",
                   color="white" if cm[i, j] > thresh else "black",
                   fontsize=14)
    
    # ROC曲线
    ax = axes[1, 0]
    from sklearn.metrics import roc_curve, auc
    fpr, tpr, _ = roc_curve(y_test, y_prob)
    roc_auc = auc(fpr, tpr)
    ax.plot(fpr, tpr, 'b-', linewidth=2, label=f'ROC (AUC = {roc_auc:.3f})')
    ax.plot([0, 1], [0, 1], 'k--', linewidth=1)
    ax.set_xlabel('False Positive Rate', fontsize=11)
    ax.set_ylabel('True Positive Rate', fontsize=11)
    ax.set_title('ROC Curve', fontsize=12)
    ax.legend()
    ax.grid(True, alpha=0.3)
    
    # 特征重要性(使用第一层的权重)
    ax = axes[1, 1]
    feature_importance = np.abs(clf.coefs_[0]).mean(axis=1)
    feature_names = [f'F{i+1}' for i in range(len(feature_importance))]
    ax.barh(feature_names, feature_importance, alpha=0.7)
    ax.set_xlabel('Average Absolute Weight', fontsize=11)
    ax.set_title('Feature Importance', fontsize=12)
    ax.grid(True, alpha=0.3, axis='x')
    
    plt.tight_layout()
    plt.savefig('case4_damage_classification.png', dpi=150, bbox_inches='tight')
    plt.close()
    print("\n  已保存: case4_damage_classification.png")


# ==================== 示例2:损伤定位 ====================
def example2_damage_localization():
    """示例2:使用神经网络进行损伤定位"""
    print("\n" + "=" * 60)
    print("示例2:损伤定位")
    print("=" * 60)
    
    np.random.seed(42)
    
    # 生成训练数据
    n_samples = 1500
    n_modes = 5
    
    X = []
    y = []
    
    print("\n生成训练数据...")
    
    for i in range(n_samples):
        # 随机损伤位置
        damage_loc = np.random.rand()
        damage_sev = np.random.uniform(0.2, 0.8)
        
        features, _, _ = simulate_structure_response(
            n_modes=n_modes,
            damage_location=damage_loc,
            damage_severity=damage_sev
        )
        
        X.append(features)
        y.append(damage_loc)
    
    X = np.array(X)
    y = np.array(y)
    
    # 划分训练集和测试集
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )
    
    # 标准化
    scaler = StandardScaler()
    X_train_scaled = scaler.fit_transform(X_train)
    X_test_scaled = scaler.transform(X_test)
    
    print(f"  总样本数: {len(X)}")
    print(f"  训练集: {len(X_train)}")
    print(f"  测试集: {len(X_test)}")
    
    # 构建回归神经网络
    reg = MLPRegressor(
        hidden_layer_sizes=(60, 40, 20),
        activation='relu',
        solver='adam',
        max_iter=1000,
        random_state=42,
        early_stopping=True,
        validation_fraction=0.1
    )
    
    # 训练
    print("\n训练神经网络...")
    reg.fit(X_train_scaled, y_train)
    
    # 预测
    y_pred = reg.predict(X_test_scaled)
    
    # 评估
    mse = np.mean((y_pred - y_test)**2)
    rmse = np.sqrt(mse)
    mae = np.mean(np.abs(y_pred - y_test))
    
    print(f"\n定位性能:")
    print(f"  MSE: {mse:.6f}")
    print(f"  RMSE: {rmse:.6f}")
    print(f"  MAE: {mae:.6f}")
    print(f"  相对误差: {mae/np.mean(y_test)*100:.2f}%")
    
    # 绘制结果
    fig, axes = plt.subplots(2, 2, figsize=(14, 10))
    
    # 预测 vs 真实
    ax = axes[0, 0]
    ax.scatter(y_test, y_pred, alpha=0.5, s=30)
    ax.plot([0, 1], [0, 1], 'r--', linewidth=2, label='Perfect Prediction')
    ax.set_xlabel('True Damage Location', fontsize=11)
    ax.set_ylabel('Predicted Damage Location', fontsize=11)
    ax.set_title(f'Damage Localization (RMSE={rmse:.4f})', fontsize=12)
    ax.legend()
    ax.grid(True, alpha=0.3)
    ax.set_xlim(0, 1)
    ax.set_ylim(0, 1)
    
    # 残差分布
    ax = axes[0, 1]
    residuals = y_pred - y_test
    ax.hist(residuals, bins=30, alpha=0.7, edgecolor='black')
    ax.axvline(0, color='r', linestyle='--', linewidth=2)
    ax.set_xlabel('Residual', fontsize=11)
    ax.set_ylabel('Frequency', fontsize=11)
    ax.set_title('Residual Distribution', fontsize=12)
    ax.grid(True, alpha=0.3, axis='y')
    
    # 误差随位置变化
    ax = axes[1, 0]
    error_by_location = np.abs(residuals)
    ax.scatter(y_test, error_by_location, alpha=0.5, s=30)
    ax.set_xlabel('True Damage Location', fontsize=11)
    ax.set_ylabel('Absolute Error', fontsize=11)
    ax.set_title('Error vs Damage Location', fontsize=12)
    ax.grid(True, alpha=0.3)
    
    # 训练损失
    ax = axes[1, 1]
    ax.plot(reg.loss_curve_, linewidth=2)
    ax.set_xlabel('Iteration', fontsize=11)
    ax.set_ylabel('Loss', fontsize=11)
    ax.set_title('Training Loss Curve', fontsize=12)
    ax.grid(True, alpha=0.3)
    
    plt.tight_layout()
    plt.savefig('case4_damage_localization.png', dpi=150, bbox_inches='tight')
    plt.close()
    print("  已保存: case4_damage_localization.png")


# ==================== 示例3:损伤程度评估 ====================
def example3_damage_severity():
    """示例3:损伤程度评估"""
    print("\n" + "=" * 60)
    print("示例3:损伤程度评估")
    print("=" * 60)
    
    np.random.seed(42)
    
    # 生成训练数据
    n_samples = 1500
    n_modes = 5
    
    X = []
    y = []
    
    print("\n生成训练数据...")
    
    for i in range(n_samples):
        # 随机损伤程度
        damage_loc = np.random.rand()
        damage_sev = np.random.uniform(0.05, 0.95)
        
        features, _, _ = simulate_structure_response(
            n_modes=n_modes,
            damage_location=damage_loc,
            damage_severity=damage_sev
        )
        
        X.append(features)
        y.append(damage_sev)
    
    X = np.array(X)
    y = np.array(y)
    
    # Split into training and test sets
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )
    
    # Standardize features
    scaler = StandardScaler()
    X_train_scaled = scaler.fit_transform(X_train)
    X_test_scaled = scaler.transform(X_test)
    
    print(f"  Total samples: {len(X)}")
    
    # Build the regression neural network
    reg = MLPRegressor(
        hidden_layer_sizes=(50, 30),
        activation='tanh',
        solver='adam',
        max_iter=1000,
        random_state=42,
        early_stopping=True
    )
    
    # Train
    print("\nTraining the neural network...")
    reg.fit(X_train_scaled, y_train)
    
    # Predict
    y_pred = reg.predict(X_test_scaled)
    
    # Evaluate
    mse = np.mean((y_pred - y_test)**2)
    rmse = np.sqrt(mse)
    
    print(f"\nDamage severity assessment performance:")
    print(f"  MSE: {mse:.6f}")
    print(f"  RMSE: {rmse:.6f}")
    print(f"  Relative error: {rmse/np.mean(y_test)*100:.2f}%")
    
    # Evaluate errors grouped by severity range
    severity_ranges = [(0, 0.2), (0.2, 0.4), (0.4, 0.6), (0.6, 0.8), (0.8, 1.0)]
    range_errors = []
    
    for low, high in severity_ranges:
        mask = (y_test >= low) & (y_test < high)
        if np.sum(mask) > 0:
            range_rmse = np.sqrt(np.mean((y_pred[mask] - y_test[mask])**2))
            range_errors.append(range_rmse)
        else:
            range_errors.append(0)
    
    # Plot results
    fig, axes = plt.subplots(2, 2, figsize=(14, 10))
    
    # Predicted vs. true
    ax = axes[0, 0]
    ax.scatter(y_test, y_pred, alpha=0.5, s=30, c=y_test, cmap='viridis')
    ax.plot([0, 1], [0, 1], 'r--', linewidth=2, label='Perfect')
    ax.set_xlabel('True Damage Severity', fontsize=11)
    ax.set_ylabel('Predicted Damage Severity', fontsize=11)
    ax.set_title(f'Severity Assessment (RMSE={rmse:.4f})', fontsize=12)
    ax.legend()
    ax.grid(True, alpha=0.3)
    ax.set_xlim(0, 1)
    ax.set_ylim(0, 1)
    
    # Error for each severity range
    ax = axes[0, 1]
    range_labels = ['0-20%', '20-40%', '40-60%', '60-80%', '80-100%']
    ax.bar(range_labels, range_errors, alpha=0.7, color='steelblue')
    ax.set_xlabel('Damage Severity Range', fontsize=11)
    ax.set_ylabel('RMSE', fontsize=11)
    ax.set_title('Error by Severity Range', fontsize=12)
    ax.grid(True, alpha=0.3, axis='y')
    plt.setp(ax.xaxis.get_majorticklabels(), rotation=45)
    
    # Residual distribution
    ax = axes[1, 0]
    residuals = y_pred - y_test
    ax.hist(residuals, bins=30, alpha=0.7, edgecolor='black', color='coral')
    ax.axvline(0, color='r', linestyle='--', linewidth=2)
    ax.set_xlabel('Residual', fontsize=11)
    ax.set_ylabel('Frequency', fontsize=11)
    ax.set_title('Residual Distribution', fontsize=12)
    ax.grid(True, alpha=0.3, axis='y')
    
    # Error versus the true value
    ax = axes[1, 1]
    abs_error = np.abs(residuals)
    ax.scatter(y_test, abs_error, alpha=0.5, s=30)
    ax.set_xlabel('True Damage Severity', fontsize=11)
    ax.set_ylabel('Absolute Error', fontsize=11)
    ax.set_title('Error vs True Severity', fontsize=12)
    ax.grid(True, alpha=0.3)
    
    plt.tight_layout()
    plt.savefig('case4_damage_severity.png', dpi=150, bbox_inches='tight')
    plt.close()
    print("  Saved: case4_damage_severity.png")


# ==================== Example 4: Multiple Damage Identification ====================
def example4_multiple_damage():
    """Example 4: multiple damage identification"""
    print("\n" + "=" * 60)
    print("Example 4: Multiple Damage Identification")
    print("=" * 60)
    
    np.random.seed(42)
    
    # Generate multi-damage data
    n_samples = 2000
    n_modes = 5
    
    X = []
    y_damage1_loc = []
    y_damage2_loc = []
    y_damage1_sev = []
    y_damage2_sev = []
    
    print("\nGenerating multi-damage training data...")
    
    for i in range(n_samples):
        # Randomly choose single or double damage
        n_damages = np.random.choice([1, 2], p=[0.6, 0.4])
        
        # Baseline features (undamaged)
        features, _, _ = simulate_structure_response(
            n_modes=n_modes,
            damage_location=None,
            damage_severity=0.0
        )
        
        damage1_loc = 0
        damage1_sev = 0
        damage2_loc = 0
        damage2_sev = 0
        
        if n_damages >= 1:
            damage1_loc = np.random.rand()
            damage1_sev = np.random.uniform(0.2, 0.7)
            
            # Superimpose the effect of the first damage
            features1, _, _ = simulate_structure_response(
                n_modes=n_modes,
                damage_location=damage1_loc,
                damage_severity=damage1_sev
            )
            features = features1
        
        if n_damages >= 2:
            damage2_loc = np.random.rand()
            # Ensure the second damage location differs from the first
            while abs(damage2_loc - damage1_loc) < 0.1:
                damage2_loc = np.random.rand()
            damage2_sev = np.random.uniform(0.2, 0.7)
            
            # Superimpose the effect of the second damage
            features2, _, _ = simulate_structure_response(
                n_modes=n_modes,
                damage_location=damage2_loc,
                damage_severity=damage2_sev
            )
            # Simple averaging (a crude approximation of combined damage effects)
            features = (features + features2) / 2
        
        X.append(features)
        y_damage1_loc.append(damage1_loc)
        y_damage2_loc.append(damage2_loc)
        y_damage1_sev.append(damage1_sev)
        y_damage2_sev.append(damage2_sev)
    
    X = np.array(X)
    y_damage1_loc = np.array(y_damage1_loc)
    y_damage2_loc = np.array(y_damage2_loc)
    y_damage1_sev = np.array(y_damage1_sev)
    y_damage2_sev = np.array(y_damage2_sev)
    
    # Split into training and test sets (one split shared by both targets)
    X_train, X_test, y1_loc_train, y1_loc_test, y2_loc_train, y2_loc_test = train_test_split(
        X, y_damage1_loc, y_damage2_loc, test_size=0.2, random_state=42
    )
    
    # Standardize features
    scaler = StandardScaler()
    X_train_scaled = scaler.fit_transform(X_train)
    X_test_scaled = scaler.transform(X_test)
    
    print(f"  Total samples: {len(X)}")
    
    # Train two models, one per damage location
    print("\nTraining the damage-1 localization model...")
    reg1 = MLPRegressor(
        hidden_layer_sizes=(50, 30),
        activation='relu',
        solver='adam',
        max_iter=800,
        random_state=42
    )
    reg1.fit(X_train_scaled, y1_loc_train)
    y1_loc_pred = reg1.predict(X_test_scaled)
    
    print("Training the damage-2 localization model...")
    reg2 = MLPRegressor(
        hidden_layer_sizes=(50, 30),
        activation='relu',
        solver='adam',
        max_iter=800,
        random_state=43
    )
    reg2.fit(X_train_scaled, y2_loc_train)
    y2_loc_pred = reg2.predict(X_test_scaled)
    
    # Evaluate
    mse1 = np.mean((y1_loc_pred - y1_loc_test)**2)
    mse2 = np.mean((y2_loc_pred - y2_loc_test)**2)
    
    print(f"\nMultiple damage identification performance:")
    print(f"  Damage 1 localization MSE: {mse1:.6f}")
    print(f"  Damage 2 localization MSE: {mse2:.6f}")
    
    # Plot results
    fig, axes = plt.subplots(2, 2, figsize=(14, 10))
    
    # Damage 1 localization
    ax = axes[0, 0]
    mask1 = y1_loc_test > 0.01  # exclude undamaged cases
    ax.scatter(y1_loc_test[mask1], y1_loc_pred[mask1], alpha=0.5, s=30)
    ax.plot([0, 1], [0, 1], 'r--', linewidth=2)
    ax.set_xlabel('True Damage 1 Location', fontsize=11)
    ax.set_ylabel('Predicted Damage 1 Location', fontsize=11)
    ax.set_title(f'Damage 1 Localization (MSE={mse1:.4f})', fontsize=12)
    ax.grid(True, alpha=0.3)
    ax.set_xlim(0, 1)
    ax.set_ylim(0, 1)
    
    # Damage 2 localization
    ax = axes[0, 1]
    mask2 = y2_loc_test > 0.01
    ax.scatter(y2_loc_test[mask2], y2_loc_pred[mask2], alpha=0.5, s=30, color='coral')
    ax.plot([0, 1], [0, 1], 'r--', linewidth=2)
    ax.set_xlabel('True Damage 2 Location', fontsize=11)
    ax.set_ylabel('Predicted Damage 2 Location', fontsize=11)
    ax.set_title(f'Damage 2 Localization (MSE={mse2:.4f})', fontsize=12)
    ax.grid(True, alpha=0.3)
    ax.set_xlim(0, 1)
    ax.set_ylim(0, 1)
    
    # Damage location distribution
    ax = axes[1, 0]
    ax.hist(y1_loc_test[mask1], bins=20, alpha=0.5, label='Damage 1 (True)', color='blue')
    ax.hist(y2_loc_test[mask2], bins=20, alpha=0.5, label='Damage 2 (True)', color='coral')
    ax.set_xlabel('Damage Location', fontsize=11)
    ax.set_ylabel('Frequency', fontsize=11)
    ax.set_title('Damage Location Distribution', fontsize=12)
    ax.legend()
    ax.grid(True, alpha=0.3, axis='y')
    
    # Error comparison
    ax = axes[1, 1]
    errors1 = np.abs(y1_loc_pred[mask1] - y1_loc_test[mask1])
    errors2 = np.abs(y2_loc_pred[mask2] - y2_loc_test[mask2])
    ax.boxplot([errors1, errors2], labels=['Damage 1', 'Damage 2'])
    ax.set_ylabel('Absolute Error', fontsize=11)
    ax.set_title('Localization Error Comparison', fontsize=12)
    ax.grid(True, alpha=0.3, axis='y')
    
    plt.tight_layout()
    plt.savefig('case4_multiple_damage.png', dpi=150, bbox_inches='tight')
    plt.close()
    print("  Saved: case4_multiple_damage.png")


# ==================== Main Program ====================
if __name__ == "__main__":
    print("=" * 70)
    print("Case 4: Neural Networks for Structural Damage Identification")
    print("=" * 70)
    
    # Run all examples
    example1_damage_classification()
    example2_damage_localization()
    example3_damage_severity()
    example4_multiple_damage()
    
    print("\n" + "=" * 70)
    print("Case 4 analysis complete!")
    print("=" * 70)


# ============================================
# Generate a GIF animation
# ============================================

def create_animation():
    """Create an animation of the simulation results"""
    import os  # needed below to locate the output directory
    import matplotlib.pyplot as plt
    from matplotlib.animation import FuncAnimation
    import numpy as np
    
    fig, ax = plt.subplots(figsize=(10, 6))
    ax.set_xlim(0, 10)
    ax.set_ylim(-2, 2)
    ax.set_xlabel('Time', fontsize=12)
    ax.set_ylabel('Response', fontsize=12)
    ax.set_title('Dynamic Response Animation', fontsize=14, fontweight='bold')
    ax.grid(True, alpha=0.3)
    
    line, = ax.plot([], [], 'b-', linewidth=2)
    
    def init():
        line.set_data([], [])
        return line,
    
    def update(frame):
        x = np.linspace(0, 10, 100)
        y = np.sin(x - frame * 0.2) * np.exp(-frame * 0.01)
        line.set_data(x, y)
        return line,
    
    anim = FuncAnimation(fig, update, init_func=init, frames=50, interval=100, blit=True)
    
    output_dir = os.path.dirname(os.path.abspath(__file__))
    anim.save(f'{output_dir}/simulation_animation.gif', writer='pillow', fps=10)
    print(f"Animation saved to: {output_dir}/simulation_animation.gif")
    plt.close()

if __name__ == '__main__':
    create_animation()

Summary

This topic covered the main applications of neural networks in structural dynamics:

  1. Basic application: response prediction with standard feedforward networks
  2. Sequence modeling: RNNs and LSTMs for time-series data
  3. Physics constraints: PINNs that embed the governing equations into the network
  4. Damage identification: networks that automatically detect structural damage

Neural networks bring new tools and methods to structural dynamics analysis, and are particularly well suited to real-time analysis and large-scale data processing.
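As a compact recap of the Case 4 workflow (features → scaling → MLP regression → RMSE), the sketch below trains a small network to map modal-frequency measurements to a damage severity. The feature generator `toy_frequency_drops` is a hypothetical stand-in for the article's `simulate_structure_response`; it only mimics the idea that a stiffness loss lowers the natural frequencies (f ∝ √k), so the numbers are illustrative, not results from the article.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

def toy_frequency_drops(severity, rng, n_modes=5):
    """Hypothetical feature model: each modal frequency drops with
    damage severity (f ~ sqrt(k)), plus small measurement noise."""
    base = np.linspace(1.0, 5.0, n_modes)            # undamaged frequencies (Hz)
    damaged = base * np.sqrt(1.0 - 0.5 * severity)   # stiffness loss lowers f
    return damaged + rng.normal(0.0, 0.01, n_modes)

rng = np.random.default_rng(0)
sev = rng.uniform(0.05, 0.95, 800)                   # true severities
X = np.array([toy_frequency_drops(s, rng) for s in sev])

# Same pipeline shape as Example 3: split, scale, fit, evaluate
X_tr, X_te, y_tr, y_te = train_test_split(X, sev, test_size=0.2, random_state=0)
scaler = StandardScaler()
model = MLPRegressor(hidden_layer_sizes=(50, 30), activation='tanh',
                     max_iter=2000, random_state=0)
model.fit(scaler.fit_transform(X_tr), y_tr)
rmse = np.sqrt(np.mean((model.predict(scaler.transform(X_te)) - y_te) ** 2))
print(f"toy severity RMSE: {rmse:.3f}")
```

Because the toy mapping is smooth and nearly noise-free, even this small network recovers severity closely; real measured features are far noisier, which is why the article evaluates errors per severity range.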

