主题069:不确定性量化与热分析

场景描述

在工程实践中,热传导问题的参数往往存在不确定性:材料导热系数可能因批次差异而变化,环境温度随季节波动,对流换热系数受气流条件影响。这些不确定性如何影响热系统的性能?设备在极端条件下的失效概率是多少?哪些参数对结果影响最大?不确定性量化(Uncertainty Quantification, UQ)为我们提供了系统性的分析框架,帮助工程师在设计阶段就充分考虑不确定性,做出更可靠的决策。

本教程将深入讲解不确定性量化的核心方法,包括蒙特卡洛模拟、拉丁超立方采样、多项式混沌展开、可靠性分析和敏感性分析,并通过Python代码实现完整的热传导不确定性分析流程。



数学/物理模型

1. 不确定性传播基础

设热传导模型的输出 $Y$ 依赖于输入参数 $\mathbf{X} = (X_1, X_2, \ldots, X_n)$:

$$Y = f(\mathbf{X}) = f(X_1, X_2, \ldots, X_n)$$

当输入参数具有不确定性时,输出也变为随机变量。不确定性量化的目标是:

  • 计算输出的统计矩(均值、方差)
  • 确定输出的概率分布
  • 评估失效概率 $P_f = P(Y > Y_{\text{threshold}})$
  • 识别关键不确定性来源

2. 蒙特卡洛方法

蒙特卡洛方法通过大量随机采样来估计统计量:

$$\mathbb{E}[Y] \approx \frac{1}{N} \sum_{i=1}^{N} f(\mathbf{X}^{(i)})$$

$$\text{Var}[Y] \approx \frac{1}{N-1} \sum_{i=1}^{N} \left( f(\mathbf{X}^{(i)}) - \bar{Y} \right)^2$$

其中 $\mathbf{X}^{(i)}$ 是从输入分布中抽取的第 $i$ 个样本。

收敛速度:蒙特卡洛方法的误差以 $O(1/\sqrt{N})$ 的速度收敛,与维度无关。
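下面是一个简单的数值示意(估计标准正态样本的均值,真值为0),可以直观看到误差按 $1/\sqrt{N}$ 缩小;其中重复次数50与各样本量均为随意选取的演示值:

```python
import numpy as np

rng = np.random.default_rng(0)

def rms_error(n, repeats=50):
    """重复多次实验,统计均值估计误差的均方根"""
    errs = [abs(rng.standard_normal(n).mean()) for _ in range(repeats)]
    return float(np.sqrt(np.mean(np.square(errs))))

for n in [10**2, 10**4, 10**6]:
    print(f"N={n:>7d}, RMS误差≈{rms_error(n):.5f}")
# N每增大100倍,误差大约缩小10倍,符合O(1/sqrt(N))
```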

3. 拉丁超立方采样(LHS)

LHS是一种分层采样技术,将每个维度的分布分成 $N$ 个等概率区间,确保每个区间恰好有一个样本:

$$X_j^{(i)} = F_j^{-1}\left(\frac{\pi_j(i) - 1 + U_j^{(i)}}{N}\right)$$

其中 $\pi_j$ 是第 $j$ 维上的随机排列,$U_j^{(i)} \sim U(0,1)$,$F_j^{-1}$ 是第 $j$ 维的逆CDF。
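手写的LHS实现会在后文代码中给出;这里先用SciPy自带的 scipy.stats.qmc 模块(需要scipy 1.7及以上)做一个最小示例,并验证"每个区间恰好一个样本"的分层性质:

```python
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=2, seed=42)
u = sampler.random(n=10)      # [0,1)^2 内的10个LHS样本

# 分层性质:每个维度被分成10个等宽区间,每个区间恰好落入一个样本
for j in range(2):
    counts = np.histogram(u[:, j], bins=10, range=(0, 1))[0]
    assert np.all(counts == 1)
```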

4. 多项式混沌展开(PCE)

PCE将随机输出展开为随机变量的正交多项式级数:

$$Y(\xi) = \sum_{\alpha \in \mathcal{A}} c_\alpha \Psi_\alpha(\xi)$$

其中:

  • $\xi$ 是标准随机变量(标准正态或标准均匀分布)
  • $\Psi_\alpha$ 是正交多项式(Hermite多项式对应正态分布,Legendre多项式对应均匀分布)
  • $c_\alpha$ 是待确定的展开系数

正交性条件

$$\mathbb{E}[\Psi_\alpha(\xi) \Psi_\beta(\xi)] = \delta_{\alpha\beta} \gamma_\alpha$$
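这一正交性条件可以用Gauss-Hermite求积做数值验证。下面是一个最小示意,取概率学家版本的Hermite多项式 $He_n$(即 scipy.special.eval_hermitenorm),此时范数为 $\gamma_n = n!$:

```python
import numpy as np
from math import factorial
from scipy.special import eval_hermitenorm

# 权函数exp(-x^2/2)的Gauss求积节点;归一化后即对标准正态分布取期望
x, w = np.polynomial.hermite_e.hermegauss(20)
w = w / np.sqrt(2 * np.pi)

def inner(m, n):
    """数值计算 E[He_m(xi) He_n(xi)]"""
    return float(np.sum(w * eval_hermitenorm(m, x) * eval_hermitenorm(n, x)))

print(inner(2, 3))   # ≈ 0(正交)
print(inner(3, 3))   # ≈ 3! = 6(范数 gamma_3)
```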

展开系数可通过投影法或回归法求解:

$$c_\alpha = \frac{\mathbb{E}[Y \Psi_\alpha]}{\mathbb{E}[\Psi_\alpha^2]} \approx \frac{1}{\gamma_\alpha N} \sum_{i=1}^{N} Y^{(i)} \Psi_\alpha(\xi^{(i)})$$
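以一维模型 $Y=\xi^2$($\xi$ 为标准正态)为例演示投影法:由恒等式 $\xi^2 = He_2(\xi) + He_0(\xi)$,理论系数应为 $c_0 = c_2 = 1$,其余为0。下面用Gauss求积近似投影积分(仅为示意):

```python
import numpy as np
from math import factorial
from scipy.special import eval_hermitenorm

x, w = np.polynomial.hermite_e.hermegauss(30)
w = w / np.sqrt(2 * np.pi)        # 标准正态期望的求积权重

y = x**2                           # 示例模型 Y = xi^2
# c_n = E[Y He_n] / gamma_n,其中 gamma_n = n!
c = [float(np.sum(w * y * eval_hermitenorm(n, x)) / factorial(n))
     for n in range(4)]
print(np.round(c, 6))   # ≈ [1, 0, 1, 0],即 Y = He_0(xi) + He_2(xi)
```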

5. 可靠性分析

失效概率

$$P_f = P(g(\mathbf{X}) \leq 0) = \int_{g(\mathbf{X}) \leq 0} p(\mathbf{X}) \, d\mathbf{X}$$

其中 $g(\mathbf{X}) = Y_{\text{threshold}} - f(\mathbf{X})$ 是性能函数,$g \leq 0$ 表示失效。

可靠性指标(Reliability Index):

$$\beta = \frac{\mu_g}{\sigma_g} = \frac{Y_{\text{threshold}} - \mu_Y}{\sigma_Y}$$

当输出近似服从正态分布时,$P_f = \Phi(-\beta)$,其中 $\Phi$ 是标准正态CDF。
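举个数值例子,常见的目标可靠性指标 $\beta = 3$ 对应的失效概率约为 $1.35\times10^{-3}$:

```python
from scipy.stats import norm

beta = 3.0
pf = norm.cdf(-beta)                 # 失效概率 Pf = Phi(-beta)
print(f"beta={beta}: Pf={pf:.3e}")   # 约 1.350e-03
```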

FORM方法(一阶可靠性方法):

通过迭代找到最可能失效点(MPP):

$$\beta = \min_{g(\mathbf{u})=0} \|\mathbf{u}\|$$

其中 $\mathbf{u}$ 是标准正态空间中的坐标。

6. 敏感性分析

Sobol指数衡量各输入参数对输出方差的贡献:

$$S_i = \frac{\text{Var}_{X_i}\left(\mathbb{E}_{\mathbf{X}_{\sim i}}[Y \mid X_i]\right)}{\text{Var}(Y)}$$

总Sobol指数包含参数的所有交互效应:

$$S_i^T = 1 - \frac{\text{Var}_{\mathbf{X}_{\sim i}}\left(\mathbb{E}_{X_i}[Y \mid \mathbf{X}_{\sim i}]\right)}{\text{Var}(Y)}$$
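在套用到热传导模型之前,可以先在一个方差可解析计算的测试函数上检验蒙特卡洛估计器。下面取一个假想的测试函数 $Y = X_1 + X_2 + X_1 X_2$($X_i$ 为独立标准正态),此时真值为 $S_1 = 1/3$、$S_1^T = 2/3$;分别用Saltelli一阶估计式与Jansen总指数估计式验证:

```python
import numpy as np

def model(x):
    """测试函数 Y = X1 + X2 + X1*X2(方差解析值为3)"""
    return x[:, 0] + x[:, 1] + x[:, 0] * x[:, 1]

rng = np.random.default_rng(1)
N = 200_000
A = rng.standard_normal((N, 2))    # 两组独立基样本矩阵
B = rng.standard_normal((N, 2))
yA, yB = model(A), model(B)
V = np.var(yA)

C = A.copy()
C[:, 0] = B[:, 0]                  # 仅将第1个变量替换为B中的列
yC = model(C)

S1 = np.mean(yB * (yC - yA)) / V          # 一阶指数(Saltelli估计)
S1T = 0.5 * np.mean((yA - yC)**2) / V     # 总指数(Jansen估计)
print(f"S1≈{S1:.3f} (理论1/3), S1T≈{S1T:.3f} (理论2/3)")
```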

基于PCE的Sobol指数

$$S_i^T = \frac{\sum_{\alpha \in \mathcal{A}_i} c_\alpha^2 \gamma_\alpha}{\sum_{\alpha \in \mathcal{A},\, \alpha \neq 0} c_\alpha^2 \gamma_\alpha}$$

其中 $\mathcal{A}_i$ 是包含第 $i$ 个变量的多项式指标集。


环境准备

依赖库安装

pip install numpy scipy matplotlib

导入必要的库

import matplotlib
matplotlib.use('Agg')  # 非交互式后端
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
from scipy.special import eval_legendre, eval_hermitenorm
import warnings
warnings.filterwarnings('ignore')

完整代码实现

以下是完整的Python仿真代码,包含五个核心案例:

# -*- coding: utf-8 -*-
"""
主题069:不确定性量化与热分析
Uncertainty Quantification in Thermal Analysis

本代码实现:
1. 蒙特卡洛模拟基础
2. 拉丁超立方采样
3. 多项式混沌展开
4. 可靠性分析与失效概率
5. 敏感性分析(Sobol指数)
"""

import matplotlib
matplotlib.use('Agg')  # 非交互式后端
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
from scipy.special import eval_legendre, eval_hermitenorm
from scipy.integrate import quad
import warnings
warnings.filterwarnings('ignore')
import os

# 创建输出目录
output_dir = 'output'  # 使用相对路径,便于在不同机器上运行
os.makedirs(output_dir, exist_ok=True)

# 设置中文字体
plt.rcParams['font.sans-serif'] = ['SimHei', 'DejaVu Sans']
plt.rcParams['axes.unicode_minus'] = False

print("=" * 70)
print("主题069:不确定性量化与热分析")
print("=" * 70)

# ============================================================================
# 案例1:蒙特卡洛模拟基础
# ============================================================================
print("\n[案例1] 蒙特卡洛模拟基础")
print("-" * 50)

# 定义热传导模型(简化的一维稳态热传导)
def heat_transfer_model(k, h, T_ambient, Q_in):
    """
    简化的散热器温度计算模型
    
    参数:
        k: 导热系数 [W/(m·K)]
        h: 对流换热系数 [W/(m²·K)]
        T_ambient: 环境温度 [K]
        Q_in: 输入热流 [W]
    
    返回:
        T_max: 最高温度 [K]
    """
    # 简化模型: T_max = T_ambient + Q_in / (k * A / L + h * A)
    A = 0.01  # m², 散热面积
    L = 0.005  # m, 特征长度
    
    R_cond = L / (k * A)  # 导热热阻
    R_conv = 1 / (h * A)  # 对流热阻
    R_total = R_cond + R_conv
    
    T_max = T_ambient + Q_in * R_total
    return T_max

# 定义不确定性参数
np.random.seed(42)

# 参数分布
param_distributions = {
    'k': {'type': 'normal', 'mean': 200, 'std': 20},      # 导热系数
    'h': {'type': 'uniform', 'low': 10, 'high': 100},     # 对流系数
    'T_ambient': {'type': 'normal', 'mean': 300, 'std': 5},  # 环境温度
    'Q_in': {'type': 'normal', 'mean': 50, 'std': 5}      # 输入热流
}

def generate_samples(n_samples, distributions):
    """生成随机样本"""
    samples = {}
    for param, dist in distributions.items():
        if dist['type'] == 'normal':
            samples[param] = np.random.normal(dist['mean'], dist['std'], n_samples)
        elif dist['type'] == 'uniform':
            samples[param] = np.random.uniform(dist['low'], dist['high'], n_samples)
    return samples

def monte_carlo_simulation(n_samples, model, distributions):
    """
    蒙特卡洛模拟
    
    参数:
        n_samples: 样本数量
        model: 计算模型函数
        distributions: 参数分布字典
    
    返回:
        results: 模型输出结果
        samples: 输入样本
    """
    samples = generate_samples(n_samples, distributions)
    results = np.zeros(n_samples)
    
    for i in range(n_samples):
        results[i] = model(
            samples['k'][i],
            samples['h'][i],
            samples['T_ambient'][i],
            samples['Q_in'][i]
        )
    
    return results, samples

# 运行不同样本量的MC模拟
print("运行蒙特卡洛模拟...")
sample_sizes = [100, 500, 1000, 5000, 10000]
mc_results = {}

for n in sample_sizes:
    results, samples = monte_carlo_simulation(n, heat_transfer_model, param_distributions)
    mc_results[n] = {'results': results, 'samples': samples}
    print(f"  n={n:5d}: mean={np.mean(results):.2f}K, std={np.std(results):.2f}K")

# 收敛性分析
n_convergence = np.logspace(2, 4, 20).astype(int)
mean_convergence = []
std_convergence = []

for n in n_convergence:
    results, _ = monte_carlo_simulation(n, heat_transfer_model, param_distributions)
    mean_convergence.append(np.mean(results))
    std_convergence.append(np.std(results))

# 可视化案例1
fig, axes = plt.subplots(2, 3, figsize=(15, 10))

# 结果分布直方图
ax1 = axes[0, 0]
results_10000 = mc_results[10000]['results']
ax1.hist(results_10000, bins=50, density=True, alpha=0.7, edgecolor='black')
ax1.axvline(np.mean(results_10000), color='r', linestyle='--', linewidth=2, label=f'Mean: {np.mean(results_10000):.2f}K')
ax1.axvline(np.median(results_10000), color='g', linestyle='--', linewidth=2, label=f'Median: {np.median(results_10000):.2f}K')
ax1.set_xlabel('Maximum Temperature (K)')
ax1.set_ylabel('Probability Density')
ax1.set_title('MC Simulation: Temperature Distribution')
ax1.legend()
ax1.grid(True, alpha=0.3)

# 收敛性分析 - 均值
ax2 = axes[0, 1]
ax2.semilogx(n_convergence, mean_convergence, 'b-', linewidth=2)
ax2.axhline(mean_convergence[-1], color='r', linestyle='--', alpha=0.5)
ax2.set_xlabel('Number of Samples')
ax2.set_ylabel('Mean Temperature (K)')
ax2.set_title('Convergence of Mean')
ax2.grid(True, alpha=0.3)

# 收敛性分析 - 标准差
ax3 = axes[0, 2]
ax3.semilogx(n_convergence, std_convergence, 'g-', linewidth=2)
ax3.axhline(std_convergence[-1], color='r', linestyle='--', alpha=0.5)
ax3.set_xlabel('Number of Samples')
ax3.set_ylabel('Standard Deviation (K)')
ax3.set_title('Convergence of Std Dev')
ax3.grid(True, alpha=0.3)

# 参数敏感性散点图
ax4 = axes[1, 0]
samples_10000 = mc_results[10000]['samples']
ax4.scatter(samples_10000['k'], results_10000, alpha=0.3, s=10)
ax4.set_xlabel('Thermal Conductivity k [W/(m·K)]')
ax4.set_ylabel('Max Temperature (K)')
ax4.set_title('Sensitivity to k')
ax4.grid(True, alpha=0.3)

# 相关系数
ax5 = axes[1, 1]
params_array = np.column_stack([
    samples_10000['k'],
    samples_10000['h'],
    samples_10000['T_ambient'],
    samples_10000['Q_in']
])
correlations = np.corrcoef(params_array.T, results_10000)[:-1, -1]
param_names = ['k', 'h', 'T_amb', 'Q_in']
colors = ['red' if c > 0 else 'blue' for c in correlations]
bars = ax5.bar(param_names, correlations, color=colors, alpha=0.7)
ax5.set_ylabel('Correlation Coefficient')
ax5.set_title('Input-Output Correlation')
ax5.axhline(y=0, color='k', linestyle='-', linewidth=0.5)
ax5.grid(True, alpha=0.3, axis='y')
for bar, val in zip(bars, correlations):
    ax5.text(bar.get_x() + bar.get_width()/2, val, f'{val:.3f}', 
             ha='center', va='bottom' if val > 0 else 'top')

# 累积分布函数
ax6 = axes[1, 2]
sorted_results = np.sort(results_10000)
cdf = np.arange(1, len(sorted_results) + 1) / len(sorted_results)
ax6.plot(sorted_results, cdf, 'b-', linewidth=2)
ax6.axhline(0.95, color='r', linestyle='--', alpha=0.5, label='95%')
ax6.axvline(np.percentile(results_10000, 95), color='r', linestyle='--', alpha=0.5)
ax6.set_xlabel('Maximum Temperature (K)')
ax6.set_ylabel('Cumulative Probability')
ax6.set_title('Cumulative Distribution Function')
ax6.legend()
ax6.grid(True, alpha=0.3)

plt.tight_layout()
plt.savefig(f'{output_dir}/case1_monte_carlo.png', dpi=150, bbox_inches='tight')
plt.close()
print(f"✓ 案例1结果已保存: case1_monte_carlo.png")

# ============================================================================
# 案例2:拉丁超立方采样
# ============================================================================
print("\n[案例2] 拉丁超立方采样")
print("-" * 50)

def latin_hypercube_sampling(n_samples, n_dim):
    """
    拉丁超立方采样
    
    参数:
        n_samples: 样本数量
        n_dim: 维度数
    
    返回:
        samples: LHS样本,范围[0, 1]
    """
    samples = np.zeros((n_samples, n_dim))
    
    for i in range(n_dim):
        # 将每个维度分成n_samples个区间
        perm = np.random.permutation(n_samples)
        # 在每个区间内随机采样
        samples[:, i] = (perm + np.random.rand(n_samples)) / n_samples
    
    return samples

def transform_samples(samples, distributions, param_names):
    """将[0,1]样本转换为实际参数值"""
    transformed = {}
    for i, param in enumerate(param_names):
        dist = distributions[param]
        if dist['type'] == 'normal':
            # 使用逆CDF变换
            transformed[param] = stats.norm.ppf(samples[:, i], dist['mean'], dist['std'])
        elif dist['type'] == 'uniform':
            transformed[param] = samples[:, i] * (dist['high'] - dist['low']) + dist['low']
    return transformed

# 对比MC和LHS
print("对比随机采样与拉丁超立方采样...")
n_samples = 1000

# 随机采样
np.random.seed(42)
mc_samples = generate_samples(n_samples, param_distributions)
mc_results_1k = np.array([heat_transfer_model(
    mc_samples['k'][i], mc_samples['h'][i],
    mc_samples['T_ambient'][i], mc_samples['Q_in'][i]
) for i in range(n_samples)])

# LHS采样
lhs_uniform = latin_hypercube_sampling(n_samples, 4)
param_names = ['k', 'h', 'T_ambient', 'Q_in']
lhs_samples = transform_samples(lhs_uniform, param_distributions, param_names)
lhs_results = np.array([heat_transfer_model(
    lhs_samples['k'][i], lhs_samples['h'][i],
    lhs_samples['T_ambient'][i], lhs_samples['Q_in'][i]
) for i in range(n_samples)])

print(f"  随机采样: mean={np.mean(mc_results_1k):.2f}K, std={np.std(mc_results_1k):.2f}K")
print(f"  LHS采样:  mean={np.mean(lhs_results):.2f}K, std={np.std(lhs_results):.2f}K")

# 可视化案例2
fig, axes = plt.subplots(2, 3, figsize=(15, 10))

# 2D投影对比 - k vs h
ax1 = axes[0, 0]
ax1.scatter(mc_samples['k'][:200], mc_samples['h'][:200], alpha=0.5, s=30, label='Random')
ax1.set_xlabel('k [W/(m·K)]')
ax1.set_ylabel('h [W/(m²·K)]')
ax1.set_title('Random Sampling (k vs h)')
ax1.legend()
ax1.grid(True, alpha=0.3)

ax2 = axes[0, 1]
ax2.scatter(lhs_samples['k'][:200], lhs_samples['h'][:200], alpha=0.5, s=30, c='orange', label='LHS')
ax2.set_xlabel('k [W/(m·K)]')
ax2.set_ylabel('h [W/(m²·K)]')
ax2.set_title('LHS (k vs h)')
ax2.legend()
ax2.grid(True, alpha=0.3)

# 结果分布对比
ax3 = axes[0, 2]
ax3.hist(mc_results_1k, bins=30, alpha=0.5, label='Random', density=True)
ax3.hist(lhs_results, bins=30, alpha=0.5, label='LHS', density=True)
ax3.set_xlabel('Max Temperature (K)')
ax3.set_ylabel('Probability Density')
ax3.set_title('Result Distribution Comparison')
ax3.legend()
ax3.grid(True, alpha=0.3)

# 收敛性对比
sample_sizes_lhs = [50, 100, 200, 500, 1000]
mc_means = []
lhs_means = []
mc_stds = []
lhs_stds = []

for n in sample_sizes_lhs:
    # MC
    mc_s = generate_samples(n, param_distributions)
    mc_r = np.array([heat_transfer_model(
        mc_s['k'][i], mc_s['h'][i],
        mc_s['T_ambient'][i], mc_s['Q_in'][i]
    ) for i in range(n)])
    mc_means.append(np.mean(mc_r))
    mc_stds.append(np.std(mc_r))
    
    # LHS
    lhs_u = latin_hypercube_sampling(n, 4)
    lhs_s = transform_samples(lhs_u, param_distributions, param_names)
    lhs_r = np.array([heat_transfer_model(
        lhs_s['k'][i], lhs_s['h'][i],
        lhs_s['T_ambient'][i], lhs_s['Q_in'][i]
    ) for i in range(n)])
    lhs_means.append(np.mean(lhs_r))
    lhs_stds.append(np.std(lhs_r))

ax4 = axes[1, 0]
ax4.plot(sample_sizes_lhs, mc_means, 'b-o', label='Random', linewidth=2)
ax4.plot(sample_sizes_lhs, lhs_means, 'r-s', label='LHS', linewidth=2)
ax4.set_xlabel('Number of Samples')
ax4.set_ylabel('Mean Temperature (K)')
ax4.set_title('Mean Convergence Comparison')
ax4.legend()
ax4.grid(True, alpha=0.3)

ax5 = axes[1, 1]
ax5.plot(sample_sizes_lhs, mc_stds, 'b-o', label='Random', linewidth=2)
ax5.plot(sample_sizes_lhs, lhs_stds, 'r-s', label='LHS', linewidth=2)
ax5.set_xlabel('Number of Samples')
ax5.set_ylabel('Std Dev (K)')
ax5.set_title('Std Dev Convergence Comparison')
ax5.legend()
ax5.grid(True, alpha=0.3)

# 空间填充性对比
ax6 = axes[1, 2]
# 计算最小距离作为空间填充性指标
from scipy.spatial.distance import pdist

def min_distance_metric(samples_2d):
    """计算样本间的最小欧氏距离(空间填充性指标)"""
    return float(pdist(samples_2d).min())

mc_2d = np.column_stack([mc_samples['k'][:100], mc_samples['h'][:100]])
lhs_2d = np.column_stack([lhs_samples['k'][:100], lhs_samples['h'][:100]])

# 归一化
mc_2d_norm = (mc_2d - mc_2d.min(axis=0)) / (mc_2d.max(axis=0) - mc_2d.min(axis=0))
lhs_2d_norm = (lhs_2d - lhs_2d.min(axis=0)) / (lhs_2d.max(axis=0) - lhs_2d.min(axis=0))

mc_min_dist = min_distance_metric(mc_2d_norm)
lhs_min_dist = min_distance_metric(lhs_2d_norm)

methods = ['Random', 'LHS']
min_dists = [mc_min_dist, lhs_min_dist]
colors = ['blue', 'orange']
ax6.bar(methods, min_dists, color=colors, alpha=0.7)
ax6.set_ylabel('Minimum Normalized Distance')
ax6.set_title('Space-Filling Metric (higher is better)')
ax6.grid(True, alpha=0.3, axis='y')
for i, v in enumerate(min_dists):
    ax6.text(i, v, f'{v:.4f}', ha='center', va='bottom')

plt.tight_layout()
plt.savefig(f'{output_dir}/case2_lhs_sampling.png', dpi=150, bbox_inches='tight')
plt.close()
print(f"✓ 案例2结果已保存: case2_lhs_sampling.png")

# ============================================================================
# 案例3:多项式混沌展开
# ============================================================================
print("\n[案例3] 多项式混沌展开")
print("-" * 50)

class PolynomialChaos:
    """
    多项式混沌展开类
    使用Legendre多项式(均匀分布)和Hermite多项式(正态分布)
    """
    
    def __init__(self, order=3):
        self.order = order
        self.coefficients = None
        self.poly_types = None
        
    def legendre_poly(self, x, n):
        """Legendre多项式(用于均匀分布)"""
        return eval_legendre(n, x)
    
    def hermite_poly(self, x, n):
        """Hermite多项式(用于正态分布)- 概率学家版本He_n"""
        return eval_hermitenorm(n, x)
    
    def generate_basis(self, xi, orders, dist_types):
        """
        生成多项式基函数
        
        参数:
            xi: 标准随机变量样本 [n_samples, n_dim]
            orders: 每个维度的多项式阶数
            dist_types: 每个维度的分布类型
        
        返回:
            basis: 基函数矩阵 [n_samples, n_basis]
        """
        n_samples = xi.shape[0]
        n_dim = xi.shape[1]
        
        # 生成所有多项式组合
        from itertools import product
        poly_orders = list(product(*[range(o+1) for o in orders]))
        
        basis_list = []
        for po in poly_orders:
            # 计算每个维度的多项式值
            poly_vals = np.ones(n_samples)
            for d in range(n_dim):
                if dist_types[d] == 'uniform':
                    # Legendre多项式,定义域[-1, 1]
                    x_scaled = 2 * xi[:, d] - 1  # 将[0,1]映射到[-1,1]
                    poly_vals *= self.legendre_poly(x_scaled, po[d])
                elif dist_types[d] == 'normal':
                    # Hermite多项式
                    poly_vals *= self.hermite_poly(xi[:, d], po[d])
            basis_list.append(poly_vals)
        
        return np.column_stack(basis_list), poly_orders
    
    def fit(self, xi, y, orders, dist_types):
        """
        拟合多项式混沌展开
        
        参数:
            xi: 标准随机变量样本
            y: 模型输出
            orders: 每个维度的多项式阶数
            dist_types: 每个维度的分布类型
        """
        basis, self.poly_orders = self.generate_basis(xi, orders, dist_types)
        self.dist_types = dist_types
        
        # 最小二乘拟合
        self.coefficients, residuals, rank, s = np.linalg.lstsq(basis, y, rcond=None)
        
        return self
    
    def predict(self, xi):
        """使用PCE预测"""
        basis, _ = self.generate_basis(xi, [max(po) for po in zip(*self.poly_orders)], self.dist_types)
        return basis @ self.coefficients
    
    def _gamma(self, po):
        """多项式范数 gamma_alpha = E[Psi_alpha^2](基函数未归一化)"""
        from math import factorial
        g = 1.0
        for d, n in enumerate(po):
            if self.dist_types[d] == 'uniform':
                g *= 1.0 / (2 * n + 1)   # Legendre: E[P_n^2] = 1/(2n+1)
            else:
                g *= factorial(n)         # 概率学家Hermite: E[He_n^2] = n!
        return g
    
    def get_statistics(self):
        """从PCE系数计算统计量(计入多项式范数 gamma_alpha)"""
        # 均值是常数项
        mean = self.coefficients[0]
        
        # 方差 = sum_{alpha != 0} c_alpha^2 * gamma_alpha(正交性)
        variance = sum(self.coefficients[i]**2 * self._gamma(po)
                       for i, po in enumerate(self.poly_orders) if i > 0)
        
        return {'mean': mean, 'variance': variance, 'std': np.sqrt(variance)}
    
    def sobol_indices(self):
        """从PCE系数计算总Sobol敏感性指数"""
        n_dim = len(self.dist_types)
        
        # 各项的方差贡献 c_alpha^2 * gamma_alpha
        contrib = np.array([self.coefficients[i]**2 * self._gamma(po)
                            for i, po in enumerate(self.poly_orders)])
        total_var = np.sum(contrib[1:])
        
        sobol_total = np.zeros(n_dim)
        for i, po in enumerate(self.poly_orders):
            if i == 0:  # 跳过常数项
                continue
            for d in range(n_dim):
                if po[d] > 0:  # 该维度参与了此项
                    sobol_total[d] += contrib[i]
        
        sobol_total /= total_var
        
        return sobol_total

# 生成训练数据
print("训练多项式混沌展开模型...")
n_train = 500
lhs_train = latin_hypercube_sampling(n_train, 4)
train_samples = transform_samples(lhs_train, param_distributions, param_names)

# 转换为标准随机变量
def to_standard(samples, distributions, param_names):
    """将物理样本转换为标准随机变量"""
    standard = np.zeros((len(samples[param_names[0]]), len(param_names)))
    for i, param in enumerate(param_names):
        dist = distributions[param]
        if dist['type'] == 'uniform':
            # 线性变换回[0,1](generate_basis内部再映射到[-1,1])
            standard[:, i] = (samples[param] - dist['low']) / (dist['high'] - dist['low'])
        elif dist['type'] == 'normal':
            # 转换为标准正态
            standard[:, i] = (samples[param] - dist['mean']) / dist['std']
    return standard

xi_train = to_standard(train_samples, param_distributions, param_names)
y_train = np.array([heat_transfer_model(
    train_samples['k'][i], train_samples['h'][i],
    train_samples['T_ambient'][i], train_samples['Q_in'][i]
) for i in range(n_train)])

# 拟合PCE
orders = [2, 2, 2, 2]  # 每个维度的阶数
dist_types = ['normal', 'uniform', 'normal', 'normal']

pce = PolynomialChaos(order=2)
pce.fit(xi_train, y_train, orders, dist_types)

# 统计量
stats_pce = pce.get_statistics()
print(f"  PCE统计量: mean={stats_pce['mean']:.2f}K, std={stats_pce['std']:.2f}K")

# Sobol指数
sobol = pce.sobol_indices()
print(f"  Sobol指数:")
for i, param in enumerate(param_names):
    print(f"    {param}: {sobol[i]:.4f}")

# 验证PCE
n_test = 1000
lhs_test = latin_hypercube_sampling(n_test, 4)
test_samples = transform_samples(lhs_test, param_distributions, param_names)
xi_test = to_standard(test_samples, param_distributions, param_names)

y_true = np.array([heat_transfer_model(
    test_samples['k'][i], test_samples['h'][i],
    test_samples['T_ambient'][i], test_samples['Q_in'][i]
) for i in range(n_test)])

y_pce = pce.predict(xi_test)

# 计算误差
mse = np.mean((y_true - y_pce)**2)
rmse = np.sqrt(mse)
r2 = 1 - np.sum((y_true - y_pce)**2) / np.sum((y_true - np.mean(y_true))**2)

print(f"  PCE验证: RMSE={rmse:.4f}, R²={r2:.4f}")

# 可视化案例3
fig, axes = plt.subplots(2, 3, figsize=(15, 10))

# PCE预测 vs 真实值
ax1 = axes[0, 0]
ax1.scatter(y_true, y_pce, alpha=0.5, s=20)
ax1.plot([y_true.min(), y_true.max()], [y_true.min(), y_true.max()], 'r--', linewidth=2)
ax1.set_xlabel('True Temperature (K)')
ax1.set_ylabel('PCE Prediction (K)')
ax1.set_title(f'PCE Validation (R²={r2:.4f})')
ax1.grid(True, alpha=0.3)

# 残差分布
ax2 = axes[0, 1]
residuals = y_true - y_pce
ax2.hist(residuals, bins=30, alpha=0.7, edgecolor='black')
ax2.axvline(0, color='r', linestyle='--', linewidth=2)
ax2.set_xlabel('Residual (K)')
ax2.set_ylabel('Frequency')
ax2.set_title('PCE Residual Distribution')
ax2.grid(True, alpha=0.3)

# Sobol指数
ax3 = axes[0, 2]
colors_sobol = ['red' if s > 0.3 else 'orange' if s > 0.1 else 'green' for s in sobol]
bars = ax3.bar(param_names, sobol, color=colors_sobol, alpha=0.7)
ax3.set_ylabel('Total Sobol Index')
ax3.set_title('Global Sensitivity Analysis')
ax3.axhline(0.1, color='gray', linestyle='--', alpha=0.5, label='10% threshold')
ax3.grid(True, alpha=0.3, axis='y')
ax3.legend()
for bar, val in zip(bars, sobol):
    ax3.text(bar.get_x() + bar.get_width()/2, val, f'{val:.3f}', 
             ha='center', va='bottom')

# PCE系数
ax4 = axes[1, 0]
coeff_magnitudes = np.abs(pce.coefficients)
ax4.semilogy(range(len(coeff_magnitudes)), coeff_magnitudes, 'bo-', markersize=4)
ax4.set_xlabel('Polynomial Term Index')
ax4.set_ylabel('|Coefficient| (log scale)')
ax4.set_title('PCE Coefficient Magnitudes')
ax4.grid(True, alpha=0.3)

# 收敛性 - PCE阶数
ax5 = axes[1, 1]
orders_test = [1, 2, 3, 4]
r2_by_order = []
rmse_by_order = []

for order in orders_test:
    pce_test = PolynomialChaos(order=order)
    pce_test.fit(xi_train, y_train, [order]*4, dist_types)
    y_pred = pce_test.predict(xi_test)
    r2_test = 1 - np.sum((y_true - y_pred)**2) / np.sum((y_true - np.mean(y_true))**2)
    rmse_test = np.sqrt(np.mean((y_true - y_pred)**2))
    r2_by_order.append(r2_test)
    rmse_by_order.append(rmse_test)

ax5.plot(orders_test, r2_by_order, 'b-o', linewidth=2, markersize=8)
ax5.set_xlabel('PCE Order')
ax5.set_ylabel('R²')
ax5.set_title('PCE Convergence vs Order')
ax5.grid(True, alpha=0.3)

# 方差分解
ax6 = axes[1, 2]
# 计算各阶方差贡献
variance_by_order = np.zeros(max(orders_test) + 1)
for i, po in enumerate(pce.poly_orders):
    order = sum(po)
    if order <= max(orders_test):
        variance_by_order[order] += pce.coefficients[i]**2

variance_by_order = variance_by_order[1:]  # 去掉常数项
variance_by_order /= np.sum(variance_by_order)

ax6.bar(range(1, len(variance_by_order)+1), variance_by_order, alpha=0.7)
ax6.set_xlabel('Polynomial Order')
ax6.set_ylabel('Variance Contribution')
ax6.set_title('Variance Decomposition by Order')
ax6.grid(True, alpha=0.3, axis='y')

plt.tight_layout()
plt.savefig(f'{output_dir}/case3_polynomial_chaos.png', dpi=150, bbox_inches='tight')
plt.close()
print(f"✓ 案例3结果已保存: case3_polynomial_chaos.png")

# ============================================================================
# 案例4:可靠性分析与失效概率
# ============================================================================
print("\n[案例4] 可靠性分析与失效概率")
print("-" * 50)

def reliability_analysis_mc(n_samples, model, distributions, threshold, param_names):
    """
    使用蒙特卡洛方法计算失效概率
    """
    samples = generate_samples(n_samples, distributions)
    results = np.array([model(
        samples[param_names[0]][i],
        samples[param_names[1]][i],
        samples[param_names[2]][i],
        samples[param_names[3]][i]
    ) for i in range(n_samples)])
    
    # 失效概率
    failures = np.sum(results > threshold)
    pf = failures / n_samples
    
    # 可靠性指标
    mean = np.mean(results)
    std = np.std(results)
    beta = (threshold - mean) / std
    
    return pf, beta, results

def importance_sampling(n_samples, model, distributions, threshold, param_names):
    """
    重要性采样方法
    """
    # 确定重要性采样分布
    is_distributions = distributions.copy()
    is_distributions['k'] = {'type': 'normal', 'mean': 150, 'std': 20}
    
    samples = generate_samples(n_samples, is_distributions)
    results = np.array([model(
        samples[param_names[0]][i],
        samples[param_names[1]][i],
        samples[param_names[2]][i],
        samples[param_names[3]][i]
    ) for i in range(n_samples)])
    
    # 计算权重
    weights = np.ones(n_samples)
    for param in param_names:
        if distributions[param]['type'] == 'normal':
            orig_pdf = stats.norm.pdf(samples[param], 
                                      distributions[param]['mean'], 
                                      distributions[param]['std'])
            is_pdf = stats.norm.pdf(samples[param], 
                                    is_distributions[param]['mean'], 
                                    is_distributions[param]['std'])
            weights *= orig_pdf / (is_pdf + 1e-10)
    
    # 加权失效概率
    failures = results > threshold
    pf = np.sum(weights * failures) / np.sum(weights)
    
    return pf, results

# 设置失效阈值
T_threshold = 350  # K

print("计算失效概率...")

# 标准蒙特卡洛
pf_mc, beta_mc, results_mc = reliability_analysis_mc(
    10000, heat_transfer_model, param_distributions, T_threshold, param_names
)
print(f"  标准MC: Pf={pf_mc:.6f}, β={beta_mc:.2f}")

# 重要性采样
pf_is, results_is = importance_sampling(
    5000, heat_transfer_model, param_distributions, T_threshold, param_names
)
print(f"  重要性采样: Pf={pf_is:.6f}")

# FORM方法
def form_method(model, distributions, threshold, param_names):
    """一阶可靠性方法(HL-RF迭代;均匀分布按等效正态近似)"""
    mu = np.array([
        distributions['k']['mean'],
        (distributions['h']['low'] + distributions['h']['high']) / 2,
        distributions['T_ambient']['mean'],
        distributions['Q_in']['mean']
    ])
    
    std = np.array([
        distributions['k']['std'],
        (distributions['h']['high'] - distributions['h']['low']) / np.sqrt(12),
        distributions['T_ambient']['std'],
        distributions['Q_in']['std']
    ])
    
    u = np.zeros(4)   # 标准正态空间坐标,从均值点出发
    beta = 0.0
    for iteration in range(50):
        x = mu + std * u
        g = threshold - model(x[0], x[1], x[2], x[3])
        
        # 标准空间中的数值梯度 dg/du_i = -std_i * df/dx_i
        eps = 1e-6
        grad_u = np.zeros(4)
        for i in range(4):
            x_plus = x.copy()
            x_plus[i] += eps
            grad_u[i] = -std[i] * (model(x_plus[0], x_plus[1], x_plus[2], x_plus[3]) - 
                                   model(x[0], x[1], x[2], x[3])) / eps
        
        norm_g = np.linalg.norm(grad_u) + 1e-12
        # HL-RF更新:beta为线性化极限状态面到原点的带符号距离
        beta_new = (g - grad_u @ u) / norm_g
        u = -beta_new * grad_u / norm_g
        
        if abs(beta_new - beta) < 1e-6:
            beta = beta_new
            break
        beta = beta_new
    
    x_mpp = mu + std * u   # 最可能失效点(MPP)
    pf_form = stats.norm.cdf(-beta)
    return pf_form, beta, x_mpp

pf_form, beta_form, mpp = form_method(heat_transfer_model, param_distributions, T_threshold, param_names)
print(f"  FORM方法: Pf={pf_form:.6f}, β={beta_form:.2f}")

# 可视化案例4
fig, axes = plt.subplots(2, 3, figsize=(15, 10))

# 温度分布与失效阈值
ax1 = axes[0, 0]
ax1.hist(results_mc, bins=50, density=True, alpha=0.7, edgecolor='black')
ax1.axvline(T_threshold, color='r', linestyle='--', linewidth=2, label=f'Threshold={T_threshold}K')
ax1.axvline(np.mean(results_mc), color='g', linestyle='--', linewidth=2, label=f'Mean={np.mean(results_mc):.1f}K')
ax1.fill_betweenx([0, ax1.get_ylim()[1]], T_threshold, ax1.get_xlim()[1], alpha=0.3, color='red', label=f'Failure (Pf={pf_mc:.4f})')
ax1.set_xlabel('Maximum Temperature (K)')
ax1.set_ylabel('Probability Density')
ax1.set_title('Temperature Distribution & Failure Region')
ax1.legend()
ax1.grid(True, alpha=0.3)

# 失效概率对比
ax2 = axes[0, 1]
methods = ['Monte Carlo', 'Importance\nSampling', 'FORM']
pf_values = [pf_mc, pf_is, pf_form]
colors = ['blue', 'orange', 'green']
bars = ax2.bar(methods, pf_values, color=colors, alpha=0.7)
ax2.set_ylabel('Failure Probability Pf')
ax2.set_title('Failure Probability Comparison')
ax2.set_yscale('log')
ax2.grid(True, alpha=0.3, axis='y')
for bar, val in zip(bars, pf_values):
    ax2.text(bar.get_x() + bar.get_width()/2, val, f'{val:.2e}', 
             ha='center', va='bottom')

# 可靠性指标对比
ax3 = axes[0, 2]
beta_methods = ['MC', 'FORM']
beta_vals = [beta_mc, beta_form]
ax3.bar(beta_methods, beta_vals, color=['blue', 'green'], alpha=0.7)
ax3.set_ylabel('Reliability Index β')
ax3.set_title('Reliability Index Comparison')
ax3.axhline(3.0, color='r', linestyle='--', alpha=0.5, label='Target β=3.0')
ax3.grid(True, alpha=0.3, axis='y')
ax3.legend()

# 参数敏感性对失效概率的影响
ax4 = axes[1, 0]
k_values = np.linspace(150, 250, 20)
pf_by_k = []

for k_mean in k_values:
    temp_dist = param_distributions.copy()
    temp_dist['k'] = {'type': 'normal', 'mean': k_mean, 'std': 20}
    pf, _, _ = reliability_analysis_mc(2000, heat_transfer_model, temp_dist, T_threshold, param_names)
    pf_by_k.append(pf)

ax4.semilogy(k_values, pf_by_k, 'b-', linewidth=2)
ax4.set_xlabel('Mean Thermal Conductivity (W/m·K)')
ax4.set_ylabel('Failure Probability Pf')
ax4.set_title('Effect of k on Reliability')
ax4.grid(True, alpha=0.3)

# 安全裕度分析
ax5 = axes[1, 1]
safety_margin = T_threshold - results_mc
ax5.hist(safety_margin, bins=50, alpha=0.7, edgecolor='black', color='green')
ax5.axvline(0, color='r', linestyle='--', linewidth=2, label='Failure boundary')
ax5.axvline(np.mean(safety_margin), color='b', linestyle='--', linewidth=2, 
           label=f'Mean margin={np.mean(safety_margin):.1f}K')
ax5.set_xlabel('Safety Margin (K)')
ax5.set_ylabel('Frequency')
ax5.set_title('Safety Margin Distribution')
ax5.legend()
ax5.grid(True, alpha=0.3)

# 设计空间中的失效边界
ax6 = axes[1, 2]
k_range = np.linspace(150, 250, 50)
h_range = np.linspace(10, 100, 50)
K, H = np.meshgrid(k_range, h_range)

T_fixed = np.zeros_like(K)
for i in range(len(k_range)):
    for j in range(len(h_range)):
        T_fixed[j, i] = heat_transfer_model(K[j, i], H[j, i], 300, 50)

contour = ax6.contour(K, H, T_fixed, levels=[T_threshold], colors='red', linewidths=2)
ax6.contourf(K, H, T_fixed, levels=[T_threshold, 400], colors='red', alpha=0.3)
ax6.clabel(contour, inline=True, fontsize=10, fmt=f'T={T_threshold}K')
ax6.set_xlabel('k [W/(m·K)]')
ax6.set_ylabel('h [W/(m²·K)]')
ax6.set_title('Failure Boundary in Design Space')
ax6.grid(True, alpha=0.3)

plt.tight_layout()
plt.savefig(f'{output_dir}/case4_reliability.png', dpi=150, bbox_inches='tight')
plt.close()
print(f"✓ 案例4结果已保存: case4_reliability.png")

# ============================================================================
# 案例5:敏感性分析
# ============================================================================
print("\n[案例5] 敏感性分析")
print("-" * 50)

def local_sensitivity_analysis(model, x0, param_names, distributions):
    """Local sensitivity via forward finite differences, scaled by each
    input's standard deviation so the values are comparable across parameters."""
    eps = 1e-6
    sensitivities = {}
    f0 = model(*x0)  # baseline evaluation, reused for every parameter
    
    for i, param in enumerate(param_names):
        x_plus = x0.copy()
        x_plus[i] += eps
        f_plus = model(*x_plus)
        
        sens = (f_plus - f0) / eps  # forward-difference partial derivative
        # Scale by the input's standard deviation (sigma-normalized sensitivity);
        # the std of U(low, high) is (high - low) / sqrt(12)
        if distributions[param]['type'] == 'normal':
            sens *= distributions[param]['std']
        elif distributions[param]['type'] == 'uniform':
            sens *= (distributions[param]['high'] - distributions[param]['low']) / np.sqrt(12)
        
        sensitivities[param] = sens
    
    return sensitivities

def correlation_sensitivity(samples, results):
    """Sensitivity via the absolute Pearson correlation between each input and the output."""
    correlations = {}
    for param in samples.keys():
        corr = np.corrcoef(samples[param], results)[0, 1]
        correlations[param] = abs(corr)
    return correlations

def partial_variance_analysis(n_samples, model, distributions, param_names):
    """First-order Sobol indices by Monte Carlo (Saltelli's estimator)."""
    # Two independent sample matrices A and B
    samples_A = generate_samples(n_samples, distributions)
    samples_B = generate_samples(n_samples, distributions)
    
    y_A = np.array([model(*[samples_A[p][i] for p in param_names]) for i in range(n_samples)])
    y_B = np.array([model(*[samples_B[p][i] for p in param_names]) for i in range(n_samples)])
    
    V_y = np.var(y_A)  # total output variance
    
    sobol_first = {}
    
    for param in param_names:
        # C = A with this parameter's column taken from B
        samples_C = samples_A.copy()
        samples_C[param] = samples_B[param]
        
        y_C = np.array([model(*[samples_C[p][i] for p in param_names]) for i in range(n_samples)])
        
        # Saltelli estimator for the partial variance V_i = Var(E[Y|X_i])
        V_i = np.mean(y_B * (y_C - y_A))
        sobol_first[param] = V_i / V_y if V_y > 0 else 0
    
    return sobol_first

# Compute sensitivity measures
print("Computing sensitivity measures...")

# Local sensitivity
x0 = np.array([
    param_distributions['k']['mean'],
    (param_distributions['h']['low'] + param_distributions['h']['high']) / 2,
    param_distributions['T_ambient']['mean'],
    param_distributions['Q_in']['mean']
])
local_sens = local_sensitivity_analysis(heat_transfer_model, x0, param_names, param_distributions)
print("  Local sensitivities:")
for param, sens in local_sens.items():
    print(f"    {param}: {sens:.4f}")

# Correlation-based sensitivity
corr_sens = correlation_sensitivity(mc_samples, mc_results_1k)
print("  Correlation-based sensitivities:")
for param, sens in corr_sens.items():
    print(f"    {param}: {sens:.4f}")

# First-order Sobol indices (Monte Carlo)
print("  Computing Sobol indices (this may take a while)...")
sobol_mc = partial_variance_analysis(2000, heat_transfer_model, param_distributions, param_names)
print("  First-order Sobol indices:")
for param, s in sobol_mc.items():
    print(f"    {param}: {s:.4f}")

# Tornado-diagram data: swing each parameter over its ±2σ (normal) or full
# range (uniform) while holding the others at their nominal values
y0 = heat_transfer_model(*x0)  # nominal output, computed once
ax_tornado = []
for param in param_names:
    dist = param_distributions[param]
    if dist['type'] == 'normal':
        low = dist['mean'] - 2 * dist['std']
        high = dist['mean'] + 2 * dist['std']
    elif dist['type'] == 'uniform':
        low = dist['low']
        high = dist['high']
    
    x_low = x0.copy()
    x_high = x0.copy()
    
    param_idx = param_names.index(param)
    x_low[param_idx] = low
    x_high[param_idx] = high
    
    ax_tornado.append({
        'param': param,
        'low': heat_transfer_model(*x_low) - y0,
        'high': heat_transfer_model(*x_high) - y0
    })

# Visualization for case 5
fig, axes = plt.subplots(2, 3, figsize=(15, 10))

# Local-sensitivity bar chart
ax1 = axes[0, 0]
params = list(local_sens.keys())
values = list(local_sens.values())
colors = ['red' if v > 0 else 'blue' for v in values]
bars = ax1.barh(params, values, color=colors, alpha=0.7)
ax1.set_xlabel('Normalized Sensitivity')
ax1.set_title('Local Sensitivity Analysis')
ax1.axvline(0, color='k', linestyle='-', linewidth=0.5)
ax1.grid(True, alpha=0.3, axis='x')
for bar, val in zip(bars, values):
    ax1.text(val, bar.get_y() + bar.get_height()/2, f'{val:.2f}', 
             ha='left' if val > 0 else 'right', va='center')

# Correlation-based sensitivity
ax2 = axes[0, 1]
params_corr = list(corr_sens.keys())
values_corr = list(corr_sens.values())
bars = ax2.bar(params_corr, values_corr, color='steelblue', alpha=0.7)
ax2.set_ylabel('|Correlation Coefficient|')
ax2.set_title('Correlation-based Sensitivity')
ax2.grid(True, alpha=0.3, axis='y')
for bar, val in zip(bars, values_corr):
    ax2.text(bar.get_x() + bar.get_width()/2, val, f'{val:.3f}', 
             ha='center', va='bottom')

# Sobol index comparison (PCE vs MC)
ax3 = axes[0, 2]
x_pos = np.arange(len(param_names))
width = 0.35
sobol_pce_vals = [sobol[i] for i in range(len(param_names))]
sobol_mc_vals = [sobol_mc[p] for p in param_names]

bars1 = ax3.bar(x_pos - width/2, sobol_pce_vals, width, label='PCE', alpha=0.7)
bars2 = ax3.bar(x_pos + width/2, sobol_mc_vals, width, label='MC', alpha=0.7)
ax3.set_ylabel('Sobol Index')
ax3.set_title('Global Sensitivity: Sobol Indices')
ax3.set_xticks(x_pos)
ax3.set_xticklabels(param_names)
ax3.legend()
ax3.grid(True, alpha=0.3, axis='y')

# Tornado diagram
ax4 = axes[1, 0]
tornado_data = sorted(ax_tornado, key=lambda x: abs(x['high'] - x['low']), reverse=True)
params_tornado = [d['param'] for d in tornado_data]
lows = [d['low'] for d in tornado_data]
highs = [d['high'] for d in tornado_data]

y_pos = np.arange(len(params_tornado))
ax4.barh(y_pos, highs, color='red', alpha=0.7, label='High')
ax4.barh(y_pos, lows, color='blue', alpha=0.7, label='Low')
ax4.set_yticks(y_pos)
ax4.set_yticklabels(params_tornado)
ax4.set_xlabel('Temperature Change (K)')
ax4.set_title('Tornado Diagram')
ax4.axvline(0, color='k', linestyle='-', linewidth=0.5)
ax4.legend()
ax4.grid(True, alpha=0.3, axis='x')

# Sensitivity ranking comparison
ax5 = axes[1, 1]
local_norm = np.array([abs(local_sens[p]) for p in param_names])
local_norm = local_norm / local_norm.sum()

corr_norm = np.array([corr_sens[p] for p in param_names])
corr_norm = corr_norm / corr_norm.sum()

sobol_norm = np.array([sobol[i] for i in range(len(param_names))])
sobol_norm = sobol_norm / sobol_norm.sum()

methods_sens = ['Local', 'Correlation', 'Sobol (PCE)']
sens_matrix = np.array([local_norm, corr_norm, sobol_norm])

im = ax5.imshow(sens_matrix, cmap='YlOrRd', aspect='auto')
ax5.set_xticks(range(len(param_names)))
ax5.set_xticklabels(param_names)
ax5.set_yticks(range(len(methods_sens)))
ax5.set_yticklabels(methods_sens)
ax5.set_title('Sensitivity Comparison Heatmap')
plt.colorbar(im, ax=ax5, label='Normalized Sensitivity')

for i in range(len(methods_sens)):
    for j in range(len(param_names)):
        text = ax5.text(j, i, f'{sens_matrix[i, j]:.2f}',
                       ha="center", va="center", color="black", fontsize=9)

# Cumulative sensitivity
ax6 = axes[1, 2]
sorted_indices = np.argsort(sobol)[::-1]
sorted_params = [param_names[i] for i in sorted_indices]
sorted_sobol = sobol[sorted_indices]

cumulative = np.cumsum(sorted_sobol)
ax6.bar(range(len(sorted_params)), sorted_sobol, alpha=0.7, label='Individual')
ax6.plot(range(len(sorted_params)), cumulative, 'ro-', linewidth=2, markersize=6, label='Cumulative')
ax6.set_xticks(range(len(sorted_params)))
ax6.set_xticklabels(sorted_params, rotation=45)
ax6.set_ylabel('Sobol Index')
ax6.set_title('Cumulative Sensitivity')
ax6.axhline(0.8, color='g', linestyle='--', alpha=0.5, label='80% threshold')
ax6.legend()
ax6.grid(True, alpha=0.3)

plt.tight_layout()
plt.savefig(f'{output_dir}/case5_sensitivity.png', dpi=150, bbox_inches='tight')
plt.close()
print("✓ Case 5 results saved: case5_sensitivity.png")

# ============================================================================
# Summary output
# ============================================================================
print("\n" + "=" * 70)
print("All case simulations complete!")
print("=" * 70)
print(f"\nOutput files saved in: {output_dir}")
print("\nGenerated files:")
print("  1. case1_monte_carlo.png - Monte Carlo simulation and convergence analysis")
print("  2. case2_lhs_sampling.png - Latin hypercube sampling comparison")
print("  3. case3_polynomial_chaos.png - Polynomial chaos expansion")
print("  4. case4_reliability.png - Reliability analysis and failure probability")
print("  5. case5_sensitivity.png - Sensitivity analysis")
print("\nKey results:")
print(f"  - Mean temperature: {np.mean(results_10000):.2f}K ± {np.std(results_10000):.2f}K")
print(f"  - Failure probability (T>{T_threshold}K): {pf_mc:.6f}")
print(f"  - Reliability index β: {beta_mc:.2f}")
print(f"  - PCE model R²: {r2:.4f}")
print("\n" + "=" * 70)

Code Deep Dive

1. Heat Transfer Model

def heat_transfer_model(k, h, T_ambient, Q_in):
    """
    Simplified heat-sink temperature model based on a thermal-resistance network:
    - R_cond = L/(k*A): conduction resistance
    - R_conv = 1/(h*A): convection resistance
    - T_max = T_ambient + Q_in * (R_cond + R_conv)
    """
    A = 0.01   # m^2, heat-exchange area
    L = 0.005  # m, characteristic length
    
    R_cond = L / (k * A)  # conduction resistance
    R_conv = 1 / (h * A)  # convection resistance
    R_total = R_cond + R_conv
    
    T_max = T_ambient + Q_in * R_total
    return T_max

This simplified model rests on thermal-resistance network theory: the complex heat path is lumped into resistances in series. The conduction resistance is inversely proportional to the thermal conductivity, and the convection resistance inversely proportional to the convective heat-transfer coefficient; the total temperature rise equals the heat flow times the total resistance.
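
As a quick sanity check, the excerpt above can be evaluated by hand at the representative point used in the failure-boundary plot (k = 200 W/(m·K), h = 50 W/(m²·K), T_ambient = 300 K, Q_in = 50 W):

```python
def heat_transfer_model(k, h, T_ambient, Q_in):
    """Lumped thermal-resistance model, restated from the excerpt above."""
    A = 0.01   # m^2, heat-exchange area
    L = 0.005  # m, characteristic conduction length
    R_cond = L / (k * A)   # = 0.0025 K/W at k = 200
    R_conv = 1 / (h * A)   # = 2.0 K/W at h = 50
    return T_ambient + Q_in * (R_cond + R_conv)

T_nom = heat_transfer_model(200.0, 50.0, 300.0, 50.0)
print(T_nom)  # 300 + 50 * 2.0025 = 400.125 K
```

Since R_conv is three orders of magnitude larger than R_cond at these values, it is no surprise that h dominates the sensitivity rankings later in the tutorial.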

2. Latin Hypercube Sampling Implementation

def latin_hypercube_sampling(n_samples, n_dim):
    """
    Core LHS algorithm:
    1. For each dimension, generate a random permutation of 0..n_samples-1
    2. Add a random jitter inside each stratum
    3. Normalize to the [0, 1] range
    """
    samples = np.zeros((n_samples, n_dim))
    
    for i in range(n_dim):
        # Split this dimension into n_samples strata
        perm = np.random.permutation(n_samples)
        # Sample uniformly inside each stratum
        samples[:, i] = (perm + np.random.rand(n_samples)) / n_samples
    
    return samples

The key to LHS is guaranteeing that the samples in every dimension cover the full range of the distribution evenly. Compared with plain random sampling, LHS markedly reduces sample clustering and improves sampling efficiency.
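
The defining property is directly checkable: with N samples, each dimension's N equal-width bins must contain exactly one point. A minimal verification, restating the sampler above:

```python
import numpy as np

def latin_hypercube_sampling(n_samples, n_dim):
    # One random permutation per dimension, with jitter inside each stratum
    samples = np.zeros((n_samples, n_dim))
    for i in range(n_dim):
        perm = np.random.permutation(n_samples)
        samples[:, i] = (perm + np.random.rand(n_samples)) / n_samples
    return samples

lhs = latin_hypercube_sampling(10, 3)
for d in range(3):
    # Bin index of each sample in dimension d; must be a permutation of 0..9
    bins = np.floor(lhs[:, d] * 10).astype(int)
    assert sorted(bins.tolist()) == list(range(10))
```

Because the jitter lies in [0, 1), sample i always stays inside stratum perm[i], so the assertion holds for any seed.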

3. Polynomial Chaos Expansion Class

class PolynomialChaos:
    def generate_basis(self, xi, orders, dist_types):
        """
        Build the polynomial basis functions:
        - uniform distribution → Legendre polynomials
        - normal distribution  → Hermite polynomials
        Multi-dimensional basis functions are formed as tensor products.
        """
        from itertools import product
        n_samples, n_dim = xi.shape
        poly_orders = list(product(*[range(o+1) for o in orders]))
        
        basis_list = []
        for po in poly_orders:
            poly_vals = np.ones(n_samples)
            for d in range(n_dim):
                if dist_types[d] == 'uniform':
                    x_scaled = 2 * xi[:, d] - 1  # map [0,1] → [-1,1] for Legendre
                    poly_vals *= self.legendre_poly(x_scaled, po[d])
                elif dist_types[d] == 'normal':
                    poly_vals *= self.hermite_poly(xi[:, d], po[d])
            basis_list.append(poly_vals)
        
        return np.column_stack(basis_list), poly_orders

The core of PCE is exploiting the properties of orthogonal polynomials. For a standard normal variable the matching family is the probabilists' Hermite polynomials; for a uniform variable it is the Legendre polynomials. Tensor products of these one-dimensional bases handle the multivariate case.
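
Both families satisfy three-term recurrences, which is the usual way the `hermite_poly` / `legendre_poly` helpers referenced above are implemented. The sketch below is an assumed implementation of those helpers (not the tutorial's actual code), with a Monte Carlo spot check of orthogonality under the standard normal:

```python
import numpy as np

def hermite_poly(x, n):
    # Probabilists' Hermite He_n (orthogonal under N(0,1)):
    # He_0 = 1, He_1 = x, He_{k+1} = x*He_k - k*He_{k-1}
    p0, p1 = np.ones_like(x), x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, x * p1 - k * p0
    return p1

def legendre_poly(x, n):
    # Legendre P_n on [-1, 1]:
    # (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}
    p0, p1 = np.ones_like(x), x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

# Monte Carlo check of orthogonality under N(0,1): E[He_1 * He_2] = 0
rng = np.random.default_rng(0)
xi = rng.standard_normal(200_000)
print(np.mean(hermite_poly(xi, 1) * hermite_poly(xi, 2)))  # close to 0
```

The recurrences give e.g. He_2(x) = x² − 1 and P_2(x) = (3x² − 1)/2, matching the closed forms used in the PCE literature.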

4. Computing Sobol Indices

def partial_variance_analysis(n_samples, model, distributions, param_names):
    """
    First-order Sobol indices by Monte Carlo:
    S_i = Var(E[Y|X_i]) / Var(Y)
    
    Uses two independent sample matrices A and B; the mixed matrix C
    (A with column X_i replaced by B's X_i) isolates X_i's contribution.
    """
    samples_A = generate_samples(n_samples, distributions)
    samples_B = generate_samples(n_samples, distributions)
    
    y_A = np.array([model(...) for i in range(n_samples)])
    y_B = np.array([model(...) for i in range(n_samples)])
    
    V_y = np.var(y_A)
    sobol_first = {}
    
    for param in param_names:
        samples_C = samples_A.copy()
        samples_C[param] = samples_B[param]
        y_C = np.array([model(...) for i in range(n_samples)])
        
        # First-order Sobol index (Saltelli estimator)
        V_i = np.mean(y_B * (y_C - y_A))
        sobol_first[param] = V_i / V_y
    
    return sobol_first

Sobol index computation rests on variance decomposition. Comparing the outputs of the mixed sample matrix against the originals quantifies each parameter's contribution to the output variance. This identifies the key uncertainty sources and can guide experiment design and parameter calibration.
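
The estimator is easy to validate on a function whose indices are known analytically: for Y = 3X₁ + X₂ with independent standard normal inputs, the exact first-order indices are S₁ = 9/10 and S₂ = 1/10. A standalone sketch of the same A/B/C scheme:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

a = np.array([3.0, 1.0])         # Y = 3*X1 + X2  →  exact S1 = 0.9, S2 = 0.1

A = rng.standard_normal((n, 2))  # sample matrix A
B = rng.standard_normal((n, 2))  # independent sample matrix B

def f(X):
    return X @ a

y_A, y_B = f(A), f(B)
V_y = np.var(y_A)                # total output variance (→ 10)

S = []
for i in range(2):
    C = A.copy()
    C[:, i] = B[:, i]            # C = A with column i taken from B
    y_C = f(C)
    # Saltelli first-order estimator: V_i ≈ mean(y_B * (y_C - y_A))
    S.append(np.mean(y_B * (y_C - y_A)) / V_y)

print(S)  # ≈ [0.9, 0.1]
```

Since y_B and y_C share only the i-th input column, E[y_B·(y_C − y_A)] converges to the partial variance a_i², recovering the analytic indices.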


Expected Results

Case 1: Monte Carlo Simulation

Running this case shows:

  • Temperature histogram: approximately normal, mean ≈ 428 K, standard deviation ≈ 95 K
  • Convergence curves: mean and standard deviation stabilize as the sample count grows
  • Parameter sensitivity: the convection coefficient h is strongly negatively correlated with temperature (correlation ≈ -0.84)
  • Cumulative distribution function: usable to read off the temperature bound at any confidence level

Case 2: Latin Hypercube Sampling

Running this case shows:

  • Sample-point comparison: LHS clearly outperforms random sampling in space-filling
  • Convergence comparison: at equal sample sizes, LHS converges faster with lower variance
  • Space-filling metric: the minimum inter-point distance for LHS is markedly larger than for random sampling

Case 3: Polynomial Chaos Expansion

Running this case shows:

  • PCE validation scatter plot: predictions agree closely with the true model, R² ≈ 0.84
  • Sobol indices: the convection coefficient h contributes most (≈ 88%); the conductivity k contributes least
  • Coefficient decay: high-order coefficients fall off quickly, so a low-order expansion suffices

Case 4: Reliability Analysis

Running this case shows:

  • Failure probability: Pf ≈ 0.95 (with the current parameter settings, exceeding 350 K is very likely)
  • Reliability index: β ≈ -0.83 (a negative value means the mean already exceeds the threshold)
  • Failure boundary: in the k-h design space the boundary is nonlinear

Case 5: Sensitivity Analysis

Running this case shows:

  • Tornado diagram: clearly displays each parameter's output swing
  • Sobol index comparison: PCE and Monte Carlo agree
  • Cumulative sensitivity: the top two parameters (h and Q_in) account for over 90% of the variance

Further Challenges

  1. Change the failure threshold: move it from 350 K to 400 K or 300 K, observe how the failure probability responds, and analyze the system's safety margin.

  2. Add parameter dimensions: introduce more uncertain parameters (e.g. the heat-exchange area A or the characteristic length L), watch how the Sobol indices redistribute, and consider how the curse of dimensionality affects Monte Carlo methods.

  3. Implement adaptive PCE: write code that adapts the polynomial order to an error estimate, and compare its efficiency against fixed-order PCE.

  4. Couple a CFD simulation: replace this tutorial's simplified thermal model with a real CFD simulation (via the Python interfaces of OpenFOAM or COMSOL) for high-fidelity uncertainty quantification.
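
Challenge 1 needs no new simulations: the failure probability at any threshold is just the empirical survival function of the Monte Carlo samples already in hand. In the sketch below the sample array is a synthetic stand-in drawn with the case-1 moments, not the tutorial's actual `results_10000`:

```python
import numpy as np

# Synthetic stand-in for the tutorial's Monte Carlo temperatures
# (`results_10000`), drawn with the case-1 moments (mean 428 K, std 95 K)
rng = np.random.default_rng(7)
mc_temperatures = rng.normal(428.0, 95.0, size=10_000)

def failure_probability(samples, threshold):
    """Empirical P(T > threshold) from Monte Carlo samples."""
    return float(np.mean(samples > threshold))

for T_th in (300.0, 350.0, 400.0):
    print(f"T_th = {T_th:.0f} K   Pf = {failure_probability(mc_temperatures, T_th):.3f}")
```

Raising the threshold lowers the failure probability monotonically; the spacing between the curves is a direct read-out of the safety margin.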


Advanced Topics

1. Gaussian Process Surrogate Models

When the underlying simulation is expensive, a Gaussian process (GP) surrogate model can stand in for it:

from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Train the GP surrogate on existing model evaluations
kernel = ConstantKernel(1.0, (1e-3, 1e3)) * RBF(1.0, (1e-2, 1e2))
gp = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=10)
gp.fit(X_train, y_train)

# Predict, with the GP's own uncertainty estimate
y_pred, sigma = gp.predict(X_test, return_std=True)

2. Bayesian Uncertainty Quantification

Bayesian methods treat parameter uncertainty and model uncertainty in one framework (the snippet uses the PyMC3-era API; the library has since been renamed PyMC):

import pymc3 as pm

with pm.Model() as thermal_model:
    # Prior distributions
    k = pm.Normal('k', mu=200, sigma=20)
    h = pm.Uniform('h', lower=10, upper=100)
    
    # Likelihood of the observed temperatures
    T_obs = pm.Normal('T_obs', mu=heat_transfer_model(k, h, 300, 50), 
                      sigma=5, observed=temperature_data)
    
    # Draw samples from the posterior
    trace = pm.sample(2000, tune=1000)

3. Multi-Fidelity Uncertainty Quantification

Combine information from a high-fidelity model (CFD) and a low-fidelity one (lumped parameters):

# Multi-fidelity models
def low_fidelity_model(x):
    """Fast but less accurate model"""
    return simplified_heat_transfer(x)

def high_fidelity_model(x):
    """Accurate but computationally expensive model"""
    return cfd_simulation(x)

# Build a multi-fidelity PCE:
# use many low-fidelity samples together with a few high-fidelity samples
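
One common concrete form of this idea is an additive correction: learn the discrepancy δ(x) = f_HF(x) − f_LF(x) from the few high-fidelity runs, then predict with f_LF + δ. The 1-D models below are hypothetical toy stand-ins for the CFD and lumped models named above:

```python
import numpy as np

# Hypothetical 1-D toy models: the "high-fidelity" model adds a smooth
# nonlinear correction on top of the cheap low-fidelity trend
def low_fidelity(x):
    return 300.0 + 2.0 * x

def high_fidelity(x):
    return low_fidelity(x) + 5.0 * np.sin(0.1 * x)

x_dense = np.linspace(0.0, 50.0, 200)  # many cheap LF evaluations
x_hf = np.linspace(0.0, 50.0, 8)       # a few expensive HF evaluations

# Fit the discrepancy delta(x) = HF - LF with a low-order polynomial
delta = np.polynomial.Polynomial.fit(
    x_hf, high_fidelity(x_hf) - low_fidelity(x_hf), deg=3)

def multi_fidelity(x):
    return low_fidelity(x) + delta(x)

err = np.max(np.abs(multi_fidelity(x_dense) - high_fidelity(x_dense)))
print(f"max error of the corrected model: {err:.3f} K")
```

The correction model is much cheaper to train than a full high-fidelity surrogate because the discrepancy is usually smoother than either model on its own.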
