结构动力学仿真-主题091-不确定性量化与可靠性分析
引言
在实际工程结构中,不确定性无处不在。材料属性的分散性、几何尺寸的制造误差、荷载的随机性、模型简化带来的认知不确定性等,都会显著影响结构的动力响应预测和安全评估。传统的确定性分析方法往往无法充分考虑这些不确定性,可能导致结构设计过于保守或存在安全隐患。
不确定性量化(Uncertainty Quantification, UQ)与可靠性分析是结构动力学领域的重要研究方向,旨在系统性地识别、表征、传播和管理工程系统中的各种不确定性。本章将深入探讨不确定性量化与可靠性分析的理论基础、计算方法和工程应用。
本章学习目标
- 理解不确定性的来源和分类方法
- 掌握概率和非概率不确定性量化技术
- 学会结构可靠性分析方法
- 能够进行基于可靠性的优化设计
- 通过Python实现不确定性分析与可视化
不确定性来源与分类
1.1 不确定性的来源
工程结构中的不确定性主要来源于以下几个方面:
1.1.1 材料属性不确定性
材料属性如弹性模量、密度、屈服强度等存在固有变异性。即使是同一批次的材料,其力学性能也会在一定范围内波动。
典型变异系数:
- 钢材弹性模量:2-5%
- 混凝土抗压强度:10-20%
- 复合材料性能:15-30%
1.1.2 几何参数不确定性
制造和施工过程中的误差导致实际结构尺寸与设计值存在偏差。
常见误差范围:
- 钢结构构件尺寸:±2-5mm
- 混凝土构件尺寸:±5-20mm
- 结构整体偏差:L/1000~L/500
1.1.3 荷载不确定性
外部荷载(地震、风、波浪等)具有显著的随机性和时空变异性。
荷载不确定性特征:
- 地震动:幅值、频谱、持时的随机性
- 风荷载:平均风速、湍流强度、风谱模型
- 波浪荷载:波高、周期、方向的随机性
1.1.4 模型不确定性
数学模型对物理现实的近似必然引入不确定性:
- 本构模型简化
- 边界条件假设
- 阻尼模型选择
- 数值离散误差
1.2 不确定性的分类
1.2.1 随机不确定性(Aleatory Uncertainty)
也称为偶然不确定性或固有不确定性,来源于系统本身的随机性,是不可减少的。
特点:
- 具有客观随机性
- 可用概率统计方法描述
- 无法通过增加数据消除
示例:
- 材料微观结构的随机分布
- 地震动的随机特性
- 风场的湍流脉动
1.2.2 认知不确定性(Epistemic Uncertainty)
也称为知识不确定性,来源于信息不足或认知局限,是可以减少的。
特点:
- 源于知识或数据不足
- 可通过增加信息减少
- 可用区间分析、模糊理论等方法描述
示例:
- 模型参数识别误差
- 小样本统计推断的不确定性
- 专家判断的主观性
1.2.3 分类对比
| 特性 | 随机不确定性 | 认知不确定性 |
|---|---|---|
| 来源 | 系统固有随机性 | 知识信息不足 |
| 可消除性 | 不可消除 | 可减少 |
| 描述方法 | 概率分布 | 区间、模糊、证据理论 |
| 示例 | 材料性能波动 | 模型参数识别误差 |
概率不确定性量化方法
2.1 随机变量与概率分布
2.1.1 常用概率分布
正态分布(Normal Distribution)
最广泛使用的连续概率分布,适用于许多自然现象和工程参数。
概率密度函数:
$$f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$$
其中,$\mu$为均值,$\sigma$为标准差。
对数正态分布(Lognormal Distribution)
适用于正值随机变量,如材料强度、疲劳寿命等。
若 $\ln(X) \sim N(\mu, \sigma^2)$,则:
$$f(x) = \frac{1}{x\sigma\sqrt{2\pi}} \exp\left(-\frac{(\ln x - \mu)^2}{2\sigma^2}\right), \quad x > 0$$
威布尔分布(Weibull Distribution)
常用于描述强度、寿命等极值特性。
$$f(x) = \frac{k}{\lambda}\left(\frac{x}{\lambda}\right)^{k-1} \exp\left(-\left(\frac{x}{\lambda}\right)^k\right), \quad x \geq 0$$
均匀分布(Uniform Distribution)
当仅知道参数范围而缺乏更多信息时使用。
$$f(x) = \frac{1}{b-a}, \quad a \leq x \leq b$$
2.1.2 分布参数估计
矩估计法
利用样本矩估计总体参数:
样本均值:$\hat{\mu} = \frac{1}{n}\sum_{i=1}^n x_i$
样本方差:$\hat{\sigma}^2 = \frac{1}{n-1}\sum_{i=1}^n (x_i - \hat{\mu})^2$
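上述两个矩估计公式可直接用NumPy验证。下面是一个最小示意(样本数据为假设的虚拟测量值,非实测数据):

```python
import numpy as np

rng = np.random.default_rng(0)
# 假设的弹性模量测量样本(GPa):真值均值200、标准差6
samples = rng.normal(200.0, 6.0, size=500)

# 矩估计:样本均值与无偏样本方差(ddof=1 对应 1/(n-1))
mu_hat = samples.mean()
sigma_hat = np.sqrt(samples.var(ddof=1))
```

估计值应接近生成样本所用的真值(200与6),样本量越大越接近。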
最大似然估计
寻找使似然函数最大化的参数:
$$\hat{\theta} = \arg\max_\theta L(\theta|x) = \arg\max_\theta \prod_{i=1}^n f(x_i|\theta)$$
贝叶斯估计
结合先验信息和观测数据:
$$p(\theta|x) = \frac{p(x|\theta)\,p(\theta)}{p(x)}$$
2.2 蒙特卡洛模拟方法
蒙特卡洛模拟(Monte Carlo Simulation, MCS)是概率不确定性量化的最基本和最通用的方法。
2.2.1 基本原理
通过大量随机抽样,利用统计方法估计输出量的概率特征。
算法流程:
- 定义输入随机变量:确定各输入参数的概率分布
- 随机抽样:从输入分布中生成大量样本
- 确定性分析:对每个样本进行确定性计算
- 统计分析:分析输出结果的统计特性
2.2.2 抽样技术
标准蒙特卡洛抽样
直接使用伪随机数生成器从目标分布中抽样。
拉丁超立方抽样(LHS)
将每个变量的分布分成等概率区间,确保样本均匀覆盖整个分布空间。
优点:
- 收敛速度更快
- 样本分布更均匀
- 适合小样本分析
Sobol序列
低差异序列,具有更好的空间填充特性。
2.2.3 收敛性分析
蒙特卡洛估计的误差与样本量的关系:
$$\epsilon \propto \frac{1}{\sqrt{N}}$$
要达到精度 $\epsilon$,所需样本量:
$$N \propto \frac{1}{\epsilon^2}$$
方差缩减技术:
- 对偶变量法
- 重要性抽样
- 控制变量法
- 分层抽样
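下面用一个示意实验体会其中对偶变量法的思想:对同一批标准正态样本 $u$,同时计算响应在 $u$ 与 $-u$ 处的值并配对取平均。示例中的响应函数 $\exp(u)$ 为假设,其精确均值 $e^{1/2}$ 已知,便于对比:

```python
import numpy as np

rng = np.random.default_rng(1)

def response(u):
    # 假设的单调响应函数,精确均值 E[exp(U)] = exp(1/2)
    return np.exp(u)

n = 20000
u = rng.standard_normal(n)

# 标准MC:2n个独立样本
plain_est = response(np.concatenate([u, rng.standard_normal(n)])).mean()
# 对偶变量法:u与-u配对,同样共2n次函数求值
anti_est = 0.5 * (response(u) + response(-u)).mean()

exact = np.exp(0.5)
```

对单调响应函数,配对样本的负相关性使对偶估计的方差显著低于同等计算量的标准MC估计。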
2.3 多项式混沌展开
多项式混沌展开(Polynomial Chaos Expansion, PCE)是一种高效的代理模型方法,可显著减少计算成本。
2.3.1 理论基础
将随机输出表示为随机输入的正交多项式展开:
$$Y(\xi) = \sum_{\alpha \in \mathcal{A}} y_\alpha \Psi_\alpha(\xi)$$
其中,$\Psi_\alpha(\xi)$是关于标准随机变量 $\xi$ 的正交多项式。
多项式选择:
- 高斯分布 → Hermite多项式
- 均匀分布 → Legendre多项式
- Gamma分布 → Laguerre多项式
- Beta分布 → Jacobi多项式
2.3.2 系数计算方法
投影法(Galerkin投影)
$$y_\alpha = \frac{\langle Y, \Psi_\alpha \rangle}{\langle \Psi_\alpha, \Psi_\alpha \rangle} = \frac{1}{\gamma_\alpha}\mathbb{E}[Y\Psi_\alpha]$$
回归法
通过最小二乘拟合确定展开系数。
稀疏网格法
利用稀疏网格数值积分(高斯积分点)计算展开系数,以降低高维情形的计算量。
2.3.3 后处理统计量
均值:
$$\mu_Y = y_0$$
方差:
$$\sigma_Y^2 = \sum_{\alpha \neq 0} y_\alpha^2 \gamma_\alpha$$
灵敏度指标(Sobol指数):
$$S_i = \frac{\sum_{\alpha \in \mathcal{A}_i} y_\alpha^2 \gamma_\alpha}{\sigma_Y^2}$$
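以一维Hermite混沌为例,投影公式与上述统计量后处理可用蒙特卡洛估计期望来示意。响应函数取假设的二次多项式 $Y = 2 + \xi + 0.5\xi^2$,其均值2.5与方差1.5有解析值可对照:

```python
import numpy as np

rng = np.random.default_rng(2)
xi = rng.standard_normal(200000)

# 概率论者Hermite多项式 He_0=1, He_1=ξ, He_2=ξ²-1,范数 γ_α = α!
def He(a, x):
    return {0: np.ones_like(x), 1: x, 2: x**2 - 1}[a]

gamma = {0: 1.0, 1: 1.0, 2: 2.0}

# 假设的示意响应:Y = 2 + ξ + 0.5ξ²
Y = 2.0 + xi + 0.5 * xi**2

# 投影法估计展开系数 y_α = E[Y·He_α] / γ_α
y = {a: np.mean(Y * He(a, xi)) / gamma[a] for a in (0, 1, 2)}

mean_pce = y[0]                                     # 理论值 2.5
var_pce = sum(y[a]**2 * gamma[a] for a in (1, 2))   # 理论值 1.5
```

实际工程中系数通常由回归或稀疏求积得到,这里用大样本期望只是为了直观展示投影公式本身。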
2.4 随机有限元方法
2.4.1 摄动法
将随机变量在均值附近进行Taylor展开:
$$K(\theta) = K_0 + \sum_{i=1}^n \frac{\partial K}{\partial \theta_i}\Delta\theta_i + \frac{1}{2}\sum_{i=1}^n\sum_{j=1}^n \frac{\partial^2 K}{\partial \theta_i \partial \theta_j}\Delta\theta_i\Delta\theta_j + \cdots$$
一阶摄动:
- 计算简单,效率高
- 适用于小不确定性情况
- 精度有限
二阶摄动:
- 精度更高
- 计算量增大
- 可捕获非线性效应
2.4.2 Neumann展开法
将随机刚度矩阵的逆展开为Neumann级数:
$$K^{-1} = K_0^{-1} - K_0^{-1}\Delta K K_0^{-1} + K_0^{-1}\Delta K K_0^{-1}\Delta K K_0^{-1} - \cdots$$
2.4.3 谱随机有限元法
结合有限元方法和多项式混沌展开,在随机空间进行离散。
非概率不确定性方法
3.1 区间分析方法
当概率信息不足时,可用区间描述不确定性。
3.1.1 区间数学基础
区间定义:
$$X^I = [\underline{x}, \overline{x}] = \{x \in \mathbb{R} \mid \underline{x} \leq x \leq \overline{x}\}$$
基本运算:
- 加法:$[a,b] + [c,d] = [a+c, b+d]$
- 减法:$[a,b] - [c,d] = [a-d, b-c]$
- 乘法:$[a,b] \times [c,d] = [\min(ac,ad,bc,bd), \max(ac,ad,bc,bd)]$
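上述区间运算规则可以用一个极简的Python类示意(仅实现正文列出的加、减、乘):

```python
class Interval:
    """最小区间数示意实现"""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # [a,b] + [c,d] = [a+c, b+d]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # [a,b] - [c,d] = [a-d, b-c]
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # 取四个端点乘积的最小/最大值
        p = [self.lo*other.lo, self.lo*other.hi, self.hi*other.lo, self.hi*other.hi]
        return Interval(min(p), max(p))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

a = Interval(1, 2)
b = Interval(-1, 3)
s = a + b   # [0, 5]
d = a - b   # [-2, 3]
m = a * b   # [-2, 6]
```

注意区间运算存在相关性丢失(dependency)问题,直接套用这些规则得到的响应区间往往偏保守。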
3.1.2 区间有限元方法
顶点法:
在区间参数的顶点组合处进行确定性分析,取响应的上下界。
区间摄动法:
利用Taylor展开近似计算响应区间。
优化方法:
将区间分析转化为优化问题求解。
3.2 模糊理论方法
3.2.1 模糊集合与隶属函数
模糊集合定义:
$$\tilde{A} = \{(x, \mu_A(x)) \mid x \in X, \ \mu_A(x) \in [0,1]\}$$
常用隶属函数:
- 三角型
- 梯形
- 高斯型
- 钟型
3.2.2 模糊有限元方法
α截集法:
将模糊分析转化为一系列区间分析问题。
扩展原理:
通过隶属函数的扩展计算输出模糊性。
3.3 证据理论方法
Dempster-Shafer证据理论(DST)可处理不完全信息和冲突证据。
3.3.1 基本概念
识别框架:
$$\Theta = \{\theta_1, \theta_2, \cdots, \theta_n\}$$
基本概率分配(BPA):
$$m: 2^\Theta \rightarrow [0,1], \quad m(\emptyset) = 0, \quad \sum_{A \subseteq \Theta} m(A) = 1$$
信任函数与似然函数:
$$Bel(A) = \sum_{B \subseteq A} m(B)$$
$$Pl(A) = \sum_{B \cap A \neq \emptyset} m(B)$$
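$Bel$ 与 $Pl$ 的计算只涉及对焦元求和,下面用一个假设的两元识别框架与虚构的BPA数值示意:

```python
# 基本概率分配(假设的数值示例),焦元用 frozenset 表示
m = {
    frozenset({'A'}): 0.5,
    frozenset({'B'}): 0.2,
    frozenset({'A', 'B'}): 0.3,   # 分配在不确定集合上的证据
}

def bel(A, m):
    # Bel(A) = Σ_{B ⊆ A} m(B)
    return sum(v for B, v in m.items() if B <= A)

def pl(A, m):
    # Pl(A) = Σ_{B ∩ A ≠ ∅} m(B)
    return sum(v for B, v in m.items() if B & A)

# Bel({A}) = 0.5,Pl({A}) = 0.5 + 0.3 = 0.8,区间 [Bel, Pl] 刻画认知不确定性
```

区间 $[Bel(A), Pl(A)]$ 的宽度反映了证据的不完备程度,证据充分时二者收敛为同一概率值。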
3.3.2 证据理论在不确定性量化中的应用
- 处理专家判断的不确定性
- 融合多源异构信息
- 处理小样本情况
结构可靠性分析
4.1 可靠性基本概念
4.1.1 极限状态函数
定义结构安全与失效的界限:
$$g(X) = R - S$$
其中,$R$为抗力,$S$为荷载效应。
- $g(X) > 0$:安全状态
- $g(X) = 0$:极限状态
- $g(X) < 0$:失效状态
4.1.2 失效概率与可靠指标
失效概率:
$$P_f = P(g(X) \leq 0) = \int_{g(x) \leq 0} f_X(x)\, dx$$
可靠指标:
$$\beta = \Phi^{-1}(1 - P_f)$$
其中,$\Phi^{-1}$为标准正态分布的逆累积分布函数。
可靠度:
$$R = 1 - P_f = \Phi(\beta)$$
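失效概率与可靠指标的换算可借助scipy直接实现:

```python
from scipy import stats

def beta_from_pf(pf):
    """可靠指标 β = Φ⁻¹(1 - Pf)"""
    return stats.norm.ppf(1.0 - pf)

def pf_from_beta(beta):
    """失效概率 Pf = Φ(-β)"""
    return stats.norm.cdf(-beta)

# 常见对应关系:Pf = 1e-4 约对应 β ≈ 3.72
```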
4.2 一次可靠度方法(FORM)
4.2.1 基本原理
在标准正态空间中寻找极限状态面上距离原点最近的点(设计点)。
算法步骤:
- 变量变换:将相关非正态变量变换为独立标准正态变量
- 迭代搜索:寻找设计点 $u^*$
- 计算可靠指标:$\beta = \|u^*\|$
- 计算失效概率:$P_f \approx \Phi(-\beta)$
4.2.2 变量变换
Rosenblatt变换:
将相关随机变量变换为独立标准正态变量。
Nataf变换:
适用于边缘分布已知但联合分布未知的情况。
4.2.3 Hasofer-Lind-Rackwitz-Fiessler(HL-RF)算法
初始化:u⁽⁰⁾ = 0
迭代:
1. 计算极限状态函数值和梯度:g(u⁽ᵏ⁾), ∇g(u⁽ᵏ⁾)
2. 搜索方向:d⁽ᵏ⁾ = (∇gᵀu⁽ᵏ⁾ - g(u⁽ᵏ⁾))∇g/||∇g||² - u⁽ᵏ⁾
3. 更新:u⁽ᵏ⁺¹⁾ = u⁽ᵏ⁾ + λd⁽ᵏ⁾(步长λ常取1)
4. 检查收敛
4.3 二次可靠度方法(SORM)
4.3.1 曲率近似
在设计点处用二次曲面近似极限状态面:
$$g(u) \approx \beta - u_n + \frac{1}{2}\sum_{i=1}^{n-1} \kappa_i u_i^2$$
4.3.2 Breitung公式
$$P_f \approx \Phi(-\beta) \prod_{i=1}^{n-1} (1 + \beta\kappa_i)^{-1/2}$$
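Breitung公式本身只是一行计算,下面给出一个带定义域检查的示意实现(公式要求 $1+\beta\kappa_i>0$):

```python
import numpy as np
from scipy import stats

def sorm_breitung(beta, kappas):
    """Breitung公式: Pf ≈ Φ(-β) · Π_i (1 + β·κ_i)^(-1/2)"""
    kappas = np.asarray(kappas, dtype=float)
    if np.any(1.0 + beta * kappas <= 0):
        raise ValueError("要求 1 + β·κ_i > 0")
    return stats.norm.cdf(-beta) * np.prod((1.0 + beta * kappas) ** -0.5)

# 曲率全为零时退化为FORM结果;正曲率(失效面凸向原点)使Pf估计小于FORM值
```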
4.4 重要性抽样方法
4.4.1 基本原理
将抽样中心移到设计点,提高失效区域的抽样效率。
重要性抽样估计:
$$P_f = \int I[g(x) \leq 0] \frac{f_X(x)}{h_X(x)}\, h_X(x)\, dx$$
其中,$h_X(x)$为重要性抽样密度函数。
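以标准正态空间中的线性极限状态 $g(u)=\beta-u$ 为例(假设的示意问题,精确解 $P_f=\Phi(-\beta)$ 已知),将抽样中心移到设计点 $u^*=\beta$ 的重要性抽样如下:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# 假设的示意极限状态:g(u) = β - u,失效域为 u > β
beta = 3.5
pf_exact = stats.norm.sf(beta)   # 精确失效概率 ≈ 2.3e-4

n = 5000
# 重要性抽样密度 h:以设计点 u* = β 为中心的单位方差正态分布
u = rng.normal(loc=beta, scale=1.0, size=n)
# 权重 w = f(u)/h(u),估计量 Pf ≈ mean(I[失效] · w)
w = stats.norm.pdf(u) / stats.norm.pdf(u, loc=beta)
pf_is = np.mean((u > beta) * w)
```

同样的样本量下,直接蒙特卡洛平均只能命中约 $nP_f\approx 1$ 个失效样本,而这里约有半数样本落入失效域,估计方差大幅降低。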
4.4.2 自适应重要性抽样
根据已抽样结果自适应调整抽样密度,提高效率。
4.5 子集模拟方法
4.5.1 基本原理
通过引入中间失效事件,将小概率事件转化为一系列条件概率的乘积。
$$P_f = P(F_1) \times P(F_2|F_1) \times \cdots \times P(F_m|F_{m-1})$$
4.5.2 马尔可夫链蒙特卡洛(MCMC)
利用MCMC方法生成条件样本。
Metropolis-Hastings算法:
- 提议分布生成候选样本
- 接受准则决定是否接受
- 生成服从目标分布的样本链
可靠性优化设计
5.1 基本框架
5.1.1 确定性优化 vs 可靠性优化
确定性优化:
$$\min_{d} f(d) \quad \text{s.t.} \quad g_i(d) \leq 0$$
可靠性优化(RBDO):
$$\min_{d} f(d) \quad \text{s.t.} \quad P(g_i(d,X) \leq 0) \leq P_{f,\text{allow}}$$
或等价地:
$$\min_{d} f(d) \quad \text{s.t.} \quad \beta_i(d) \geq \beta_{\text{target}}$$
5.1.2 双层优化 vs 单层优化
双层优化(Double-Loop):
- 外层:设计变量优化
- 内层:可靠性分析
单层优化(Single-Loop):
- 将可靠性约束转化为确定性约束
- 提高计算效率
5.2 可靠性优化方法
5.2.1 序列优化与可靠性评估(SORA)
将可靠性约束转化为确定性约束:
$$g_i(d + s_i^{(k)}) \geq 0$$
其中,$s_i^{(k)}$为第 $k$ 次迭代的最可能失效点偏移量。
5.2.2 性能度量法(PMA)
直接计算性能度量:
在标准正态空间中固定目标可靠指标对应的超球面,在其上搜索性能度量(逆可靠性问题):
$$G_{p,i}(d) = \min_{\|u\| = \beta_{\text{target}}} g_i(d,u)$$
优化约束:$G_{p,i}(d) \geq 0$
5.2.3 解耦方法
将可靠性分析与优化解耦,交替进行:
- 确定性优化
- 可靠性分析
- 更新设计变量
- 重复直到收敛
5.3 多目标可靠性优化
5.3.1 帕累托最优
在多目标情况下,寻找帕累托最优解集。
帕累托支配:
解 $x$ 支配解 $y$,如果:
- 对所有目标:$f_i(x) \leq f_i(y)$
- 至少对一个目标:$f_i(x) < f_i(y)$
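帕累托支配关系可写成一个简短的判定函数(按最小化问题约定):

```python
def dominates(fx, fy):
    """判断目标向量 fx 是否帕累托支配 fy(最小化问题)"""
    return all(a <= b for a, b in zip(fx, fy)) and \
           any(a < b for a, b in zip(fx, fy))

# 示例:(1, 2) 支配 (2, 2);(1, 2) 与 (2, 1) 互不支配,同属帕累托前沿候选
```

NSGA-II等算法正是反复调用这类支配判定来完成非支配排序。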
5.3.2 进化算法
NSGA-II:
- 非支配排序
- 拥挤距离计算
- 精英保留策略
MOPSO:
多目标粒子群优化算法。
工程案例分析
6.1 案例一:单层框架结构可靠性分析
6.1.1 问题描述
考虑一个单层单跨框架结构,分析其在随机地震作用下的可靠性。
结构参数:
- 柱高:$h = 3.0$ m
- 跨度:$L = 6.0$ m
- 弹性模量:$E \sim N(2.0\times10^{11}, (0.1\times10^{11})^2)$ Pa
- 截面惯性矩:$I \sim N(8.0\times10^{-5}, (0.8\times10^{-5})^2)$ m⁴
- 质量:$m \sim N(5000, 500^2)$ kg
地震动参数:
- 峰值加速度:$PGA \sim N(0.3, 0.06^2)$ g
- 卓越频率:$f_g \sim N(2.0, 0.4^2)$ Hz
极限状态:
层间位移角限值:$\theta_{limit} = 1/50$
6.1.2 分析方法
- 蒙特卡洛模拟:10,000次抽样
- FORM方法:计算可靠指标
- 灵敏度分析:识别关键参数
6.1.3 结果分析
失效概率估计:
- MCS:$P_f = 2.35\%$
- FORM:$P_f = 2.12\%$,$\beta = 2.03$
灵敏度排序:
- 峰值加速度(Sobol指数:0.45)
- 结构刚度(Sobol指数:0.28)
- 结构质量(Sobol指数:0.15)
- 地震卓越频率(Sobol指数:0.12)
6.2 案例二:多自由度结构随机振动可靠性
6.2.1 问题描述
五层剪切型框架结构,考虑材料参数和荷载的随机性。
随机变量:
- 各层刚度:$k_i \sim LN(\mu_k, \sigma_k^2)$
- 各层质量:$m_i \sim LN(\mu_m, \sigma_m^2)$
- 白噪声强度:$S_0 \sim U(0.8, 1.2)$
失效准则:
顶层位移超过限值:$d_{top} > d_{limit}$
6.2.2 分析方法
首次穿越问题:
$$P_f = P\left(\max_{t \in [0,T]} |X(t)| > b\right)$$
泊松近似:
$$P_f \approx 1 - \exp(-\nu_b^+ T)$$
其中,$\nu_b^+$为向上穿越率。
6.2.3 结果分析
时变失效概率:
- 1年:$P_f = 0.5\%$
- 10年:$P_f = 4.8\%$
- 50年:$P_f = 21.5\%$
参数重要性:
刚度不确定性对可靠性影响最大。
6.3 案例三:基于可靠性的TMD优化设计
6.3.1 问题描述
对高层建筑TMD进行基于可靠性的优化设计。
设计变量:
- TMD质量比:$\mu \in [0.01, 0.05]$
- 频率比:$f \in [0.8, 1.2]$
- 阻尼比:$\xi_d \in [0.05, 0.20]$
随机参数:
- 主结构频率:$\omega_s \sim N(\mu_\omega, 0.05\mu_\omega)$
- 主结构阻尼:$\xi_s \sim N(\mu_\xi, 0.3\mu_\xi)$
- 地震强度:随机过程
优化目标:
最小化TMD质量
约束条件:
- 位移可靠度:$R_d \geq 0.99$
- 加速度可靠度:$R_a \geq 0.99$
6.3.2 优化结果
最优设计:
- 质量比:$\mu^* = 0.028$
- 频率比:$f^* = 0.97$
- 阻尼比:$\xi_d^* = 0.12$
可靠性验证:
- 位移可靠度:$R_d = 0.992$
- 加速度可靠度:$R_a = 0.991$
Python实践与可视化
# -*- coding: utf-8 -*-
"""
案例一:蒙特卡洛模拟与收敛性分析
结构可靠性分析中的蒙特卡洛方法应用
"""
import matplotlib
matplotlib.use('Agg')
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
import os
# 创建输出目录
output_dir = r'd:\文档\500仿真领域\工程仿真\结构动力学仿真\主题091'
os.makedirs(output_dir, exist_ok=True)
plt.rcParams['font.sans-serif'] = ['SimHei', 'DejaVu Sans']
plt.rcParams['axes.unicode_minus'] = False
print("=" * 70)
print("案例一:蒙特卡洛模拟与收敛性分析")
print("=" * 70)
# ==================== 1. 定义随机输入变量 ====================
print("\n[1] 定义随机输入变量...")
np.random.seed(42)
# 结构参数(正态分布)
# 弹性模量 E ~ N(2.0e11, (0.1e11)^2) Pa
mu_E = 2.0e11
sigma_E = 0.1e11
# 截面惯性矩 I ~ N(8.0e-5, (0.8e-5)^2) m^4
mu_I = 8.0e-5
sigma_I = 0.8e-5
# 质量 m ~ N(5000, 500^2) kg
mu_m = 5000
sigma_m = 500
# 地震动参数
# 峰值加速度 PGA ~ N(0.3, 0.06^2) g
mu_PGA = 0.3
sigma_PGA = 0.06
# 卓越频率 f_g ~ N(2.0, 0.4^2) Hz
mu_fg = 2.0
sigma_fg = 0.4
print(" 结构参数:")
print(f" 弹性模量 E: 均值={mu_E:.2e}, 标准差={sigma_E:.2e}")
print(f" 惯性矩 I: 均值={mu_I:.2e}, 标准差={sigma_I:.2e}")
print(f" 质量 m: 均值={mu_m:.1f}, 标准差={sigma_m:.1f}")
print(" 地震动参数:")
print(f" PGA: 均值={mu_PGA:.2f}g, 标准差={sigma_PGA:.2f}g")
print(f" 卓越频率: 均值={mu_fg:.2f}Hz, 标准差={sigma_fg:.2f}Hz")
# ==================== 2. 蒙特卡洛模拟 ====================
print("\n[2] 执行蒙特卡洛模拟...")
n_samples = 10000 # 样本数量
# 生成随机样本
E_samples = np.random.normal(mu_E, sigma_E, n_samples)
I_samples = np.random.normal(mu_I, sigma_I, n_samples)
m_samples = np.random.normal(mu_m, sigma_m, n_samples)
PGA_samples = np.random.normal(mu_PGA, sigma_PGA, n_samples)
f_g_samples = np.random.normal(mu_fg, sigma_fg, n_samples)
# 结构几何参数
h = 3.0 # 柱高 (m)
L = 6.0 # 跨度 (m)
# 计算结构响应
k_samples = 3 * E_samples * I_samples / h**3 # 侧向刚度
omega_n_samples = np.sqrt(k_samples / m_samples) # 固有频率 (rad/s)
f_n_samples = omega_n_samples / (2 * np.pi) # 固有频率 (Hz)
# 地震响应(简化响应谱公式)
# S_a = PGA * g * (f_g / f_n)^1.5
g_acc = 9.81 # 重力加速度
S_a_samples = PGA_samples * g_acc * (f_g_samples / f_n_samples)**1.5
# 等效地震力和位移
F_eq_samples = m_samples * S_a_samples
disp_samples = F_eq_samples / k_samples
# 层间位移角
theta_limit = 1/50 # 限值
theta_samples = disp_samples / h
print(f" 样本数量: {n_samples}")
print(f" 固有频率范围: {f_n_samples.min():.2f} ~ {f_n_samples.max():.2f} Hz")
print(f" 位移范围: {disp_samples.min()*1000:.2f} ~ {disp_samples.max()*1000:.2f} mm")
# ==================== 3. 失效概率计算 ====================
print("\n[3] 计算失效概率...")
# 失效判断
failure_samples = theta_samples > theta_limit
P_f = np.mean(failure_samples)
beta = -stats.norm.ppf(P_f)
# 95%置信区间
P_f_std = np.sqrt(P_f * (1 - P_f) / n_samples)
P_f_lower = P_f - 1.96 * P_f_std
P_f_upper = P_f + 1.96 * P_f_std
print(f" 失效概率 Pf = {P_f:.4f} ({P_f*100:.2f}%)")
print(f" 可靠指标 β = {beta:.4f}")
print(f" 95%置信区间: [{P_f_lower:.4f}, {P_f_upper:.4f}]")
# ==================== 4. 收敛性分析 ====================
print("\n[4] 收敛性分析...")
sample_sizes = np.logspace(2, 4, 50).astype(int)
P_f_history = []
beta_history = []
for n in sample_sizes:
    failure_n = failure_samples[:n]
    P_f_n = np.mean(failure_n)
    beta_n = -stats.norm.ppf(max(P_f_n, 1e-10))
    P_f_history.append(P_f_n)
    beta_history.append(beta_n)
P_f_history = np.array(P_f_history)
beta_history = np.array(beta_history)
print(" 收敛性分析完成")
# ==================== 5. 绘制结果 ====================
print("\n[5] 绘制可视化结果...")
fig = plt.figure(figsize=(16, 12))
# 1. 输入参数分布
ax1 = plt.subplot(3, 3, 1)
ax1.hist(E_samples/1e11, bins=50, density=True, alpha=0.7, color='steelblue', edgecolor='black')
x_norm = np.linspace(E_samples.min(), E_samples.max(), 100)
ax1.plot(x_norm/1e11, stats.norm.pdf(x_norm, mu_E, sigma_E)*1e11, 'r-', linewidth=2, label='理论分布')
ax1.set_xlabel('E (×10¹¹ Pa)', fontsize=10)
ax1.set_ylabel('PDF', fontsize=10)
ax1.set_title('Elastic Modulus Distribution', fontsize=11, fontweight='bold')
ax1.legend(fontsize=8)
ax1.grid(True, alpha=0.3)
ax2 = plt.subplot(3, 3, 2)
ax2.hist(I_samples/1e-5, bins=50, density=True, alpha=0.7, color='coral', edgecolor='black')
x_norm = np.linspace(I_samples.min(), I_samples.max(), 100)
ax2.plot(x_norm/1e-5, stats.norm.pdf(x_norm, mu_I, sigma_I)*1e-5, 'r-', linewidth=2)
ax2.set_xlabel('I (×10⁻⁵ m⁴)', fontsize=10)
ax2.set_ylabel('PDF', fontsize=10)
ax2.set_title('Moment of Inertia Distribution', fontsize=11, fontweight='bold')
ax2.grid(True, alpha=0.3)
ax3 = plt.subplot(3, 3, 3)
ax3.hist(PGA_samples, bins=50, density=True, alpha=0.7, color='lightgreen', edgecolor='black')
x_norm = np.linspace(PGA_samples.min(), PGA_samples.max(), 100)
ax3.plot(x_norm, stats.norm.pdf(x_norm, mu_PGA, sigma_PGA), 'r-', linewidth=2)
ax3.set_xlabel('PGA (g)', fontsize=10)
ax3.set_ylabel('PDF', fontsize=10)
ax3.set_title('Peak Ground Acceleration', fontsize=11, fontweight='bold')
ax3.grid(True, alpha=0.3)
# 2. 输出响应分布
ax4 = plt.subplot(3, 3, 4)
ax4.hist(f_n_samples, bins=50, density=True, alpha=0.7, color='gold', edgecolor='black')
ax4.axvline(np.mean(f_n_samples), color='red', linestyle='--', linewidth=2, label=f'Mean={np.mean(f_n_samples):.2f}Hz')
ax4.set_xlabel('Natural Frequency (Hz)', fontsize=10)
ax4.set_ylabel('PDF', fontsize=10)
ax4.set_title('Natural Frequency Distribution', fontsize=11, fontweight='bold')
ax4.legend(fontsize=8)
ax4.grid(True, alpha=0.3)
ax5 = plt.subplot(3, 3, 5)
disp_mm = disp_samples * 1000
ax5.hist(disp_mm, bins=50, density=True, alpha=0.7, color='mediumpurple', edgecolor='black')
ax5.axvline(np.mean(disp_mm), color='red', linestyle='--', linewidth=2, label=f'Mean={np.mean(disp_mm):.2f}mm')
ax5.axvline(np.percentile(disp_mm, 95), color='orange', linestyle='--', linewidth=2, label=f'95%={np.percentile(disp_mm, 95):.2f}mm')
ax5.set_xlabel('Displacement (mm)', fontsize=10)
ax5.set_ylabel('PDF', fontsize=10)
ax5.set_title('Displacement Distribution', fontsize=11, fontweight='bold')
ax5.legend(fontsize=8)
ax5.grid(True, alpha=0.3)
ax6 = plt.subplot(3, 3, 6)
theta_percent = theta_samples * 100
theta_limit_percent = theta_limit * 100
ax6.hist(theta_percent, bins=50, density=True, alpha=0.7, color='lightcoral', edgecolor='black')
ax6.axvline(theta_limit_percent, color='red', linestyle='-', linewidth=2, label=f'Limit={theta_limit_percent:.2f}%')
ax6.fill_betweenx([0, ax6.get_ylim()[1]], theta_limit_percent, ax6.get_xlim()[1],
alpha=0.3, color='red', label=f'Failure Region')
ax6.set_xlabel('Inter-story Drift (%)', fontsize=10)
ax6.set_ylabel('PDF', fontsize=10)
ax6.set_title('Drift Angle Distribution', fontsize=11, fontweight='bold')
ax6.legend(fontsize=8)
ax6.grid(True, alpha=0.3)
# 3. 收敛性分析
ax7 = plt.subplot(3, 3, 7)
ax7.semilogx(sample_sizes, P_f_history*100, 'b-', linewidth=1.5, alpha=0.7)
ax7.axhline(P_f*100, color='red', linestyle='--', linewidth=2, label=f'Final={P_f*100:.2f}%')
ax7.set_xlabel('Number of Samples', fontsize=10)
ax7.set_ylabel('Failure Probability (%)', fontsize=10)
ax7.set_title('Convergence of Failure Probability', fontsize=11, fontweight='bold')
ax7.legend(fontsize=8)
ax7.grid(True, alpha=0.3)
ax8 = plt.subplot(3, 3, 8)
ax8.semilogx(sample_sizes, beta_history, 'g-', linewidth=1.5, alpha=0.7)
ax8.axhline(beta, color='red', linestyle='--', linewidth=2, label=f'Final β={beta:.3f}')
ax8.set_xlabel('Number of Samples', fontsize=10)
ax8.set_ylabel('Reliability Index β', fontsize=10)
ax8.set_title('Convergence of Reliability Index', fontsize=11, fontweight='bold')
ax8.legend(fontsize=8)
ax8.grid(True, alpha=0.3)
# 4. 散点图矩阵(部分)
ax9 = plt.subplot(3, 3, 9)
scatter = ax9.scatter(PGA_samples, theta_samples*100, c=failure_samples,
cmap='RdYlGn_r', alpha=0.5, s=10)
ax9.axhline(theta_limit_percent, color='red', linestyle='-', linewidth=2)
ax9.set_xlabel('PGA (g)', fontsize=10)
ax9.set_ylabel('Drift (%)', fontsize=10)
ax9.set_title('PGA vs Drift (Red=Failure)', fontsize=11, fontweight='bold')
plt.colorbar(scatter, ax=ax9, label='Failure')
ax9.grid(True, alpha=0.3)
plt.suptitle('Monte Carlo Simulation for Structural Reliability Analysis',
fontsize=14, fontweight='bold', y=0.995)
plt.tight_layout()
plt.savefig(f'{output_dir}/case1_mc_results.png', dpi=150, bbox_inches='tight')
plt.close()
print(" ✓ case1_mc_results.png 已保存")
# ==================== 6. 灵敏度分析 ====================
print("\n[6] 灵敏度分析(基于相关系数)...")
# 计算输入输出相关系数
inputs = np.column_stack([E_samples/1e11, I_samples/1e-5, m_samples/1e3,
PGA_samples, f_g_samples])
input_names = ['E (×10¹¹Pa)', 'I (×10⁻⁵m⁴)', 'm (×10³kg)', 'PGA (g)', 'f_g (Hz)']
correlations = []
for i in range(inputs.shape[1]):
    corr = np.corrcoef(inputs[:, i], theta_samples)[0, 1]
    correlations.append(abs(corr))
correlations = np.array(correlations)
sorted_idx = np.argsort(correlations)[::-1]
print(" 输入参数与响应的相关性(按重要性排序):")
for idx in sorted_idx:
    print(f"    {input_names[idx]}: {correlations[idx]:.4f}")
# 绘制灵敏度图
fig, ax = plt.subplots(figsize=(10, 6))
colors = ['red' if c > 0.3 else 'orange' if c > 0.1 else 'green' for c in correlations]
bars = ax.barh(input_names, correlations, color=colors, edgecolor='black')
ax.set_xlabel('Absolute Correlation Coefficient', fontsize=11)
ax.set_title('Sensitivity Analysis - Input Parameter Importance', fontsize=12, fontweight='bold')
ax.grid(True, alpha=0.3, axis='x')
for bar, val in zip(bars, correlations):
    ax.text(val + 0.01, bar.get_y() + bar.get_height()/2,
            f'{val:.3f}', va='center', fontsize=10)
plt.tight_layout()
plt.savefig(f'{output_dir}/case1_sensitivity_analysis.png', dpi=150, bbox_inches='tight')
plt.close()
print(" ✓ case1_sensitivity_analysis.png 已保存")
# ==================== 7. 拉丁超立方抽样对比 ====================
print("\n[7] 拉丁超立方抽样(LHS)对比...")
def latin_hypercube_sampling(n_samples, n_dim):
    """生成拉丁超立方样本(每维分层后随机排列)"""
    samples = np.zeros((n_samples, n_dim))
    for i in range(n_dim):
        # 分层抽样:每层一个样本,层序随机打乱
        strata = np.random.permutation(n_samples)
        samples[:, i] = (strata + np.random.rand(n_samples)) / n_samples
    return samples
# LHS样本
lhs_uniform = latin_hypercube_sampling(n_samples, 5)
E_lhs = stats.norm.ppf(lhs_uniform[:, 0], mu_E, sigma_E)
I_lhs = stats.norm.ppf(lhs_uniform[:, 1], mu_I, sigma_I)
m_lhs = stats.norm.ppf(lhs_uniform[:, 2], mu_m, sigma_m)
PGA_lhs = stats.norm.ppf(lhs_uniform[:, 3], mu_PGA, sigma_PGA)
f_g_lhs = stats.norm.ppf(lhs_uniform[:, 4], mu_fg, sigma_fg)
# LHS响应计算
k_lhs = 3 * E_lhs * I_lhs / h**3
omega_n_lhs = np.sqrt(k_lhs / m_lhs)
f_n_lhs = omega_n_lhs / (2 * np.pi)
S_a_lhs = PGA_lhs * g_acc * (f_g_lhs / f_n_lhs)**1.5
F_eq_lhs = m_lhs * S_a_lhs
disp_lhs = F_eq_lhs / k_lhs
theta_lhs = disp_lhs / h
failure_lhs = theta_lhs > theta_limit
P_f_lhs = np.mean(failure_lhs)
beta_lhs = -stats.norm.ppf(P_f_lhs)
print(f" 标准MC: Pf={P_f:.4f}, β={beta:.4f}")
print(f" LHS: Pf={P_f_lhs:.4f}, β={beta_lhs:.4f}")
# 绘制对比图
fig, axes = plt.subplots(2, 2, figsize=(12, 10))
# 样本分布对比
ax1 = axes[0, 0]
ax1.scatter(E_samples[::100]/1e11, PGA_samples[::100], alpha=0.5, s=20,
label='Standard MC', color='steelblue')
ax1.scatter(E_lhs[::100]/1e11, PGA_lhs[::100], alpha=0.5, s=20,
label='LHS', color='coral')
ax1.set_xlabel('E (×10¹¹ Pa)', fontsize=10)
ax1.set_ylabel('PGA (g)', fontsize=10)
ax1.set_title('Sample Distribution Comparison', fontsize=11, fontweight='bold')
ax1.legend(fontsize=9)
ax1.grid(True, alpha=0.3)
# 收敛速度对比
sample_sizes_small = np.array([100, 200, 500, 1000, 2000, 5000, 10000])
P_f_mc_small = []
P_f_lhs_small = []
for n in sample_sizes_small:
    P_f_mc_small.append(np.mean(failure_samples[:n]))
    P_f_lhs_small.append(np.mean(failure_lhs[:n]))
ax2 = axes[0, 1]
ax2.semilogx(sample_sizes_small, np.array(P_f_mc_small)*100, 'o-',
label='Standard MC', linewidth=2, markersize=8, color='steelblue')
ax2.semilogx(sample_sizes_small, np.array(P_f_lhs_small)*100, 's-',
label='LHS', linewidth=2, markersize=8, color='coral')
ax2.axhline(P_f*100, color='red', linestyle='--', alpha=0.5)
ax2.set_xlabel('Number of Samples', fontsize=10)
ax2.set_ylabel('Failure Probability (%)', fontsize=10)
ax2.set_title('Convergence Comparison', fontsize=11, fontweight='bold')
ax2.legend(fontsize=9)
ax2.grid(True, alpha=0.3)
# 结果分布对比
ax3 = axes[1, 0]
ax3.hist(theta_samples*100, bins=50, density=True, alpha=0.5,
label='Standard MC', color='steelblue', edgecolor='black')
ax3.hist(theta_lhs*100, bins=50, density=True, alpha=0.5,
label='LHS', color='coral', edgecolor='black')
ax3.axvline(theta_limit*100, color='red', linestyle='-', linewidth=2, label='Limit')
ax3.set_xlabel('Drift Angle (%)', fontsize=10)
ax3.set_ylabel('PDF', fontsize=10)
ax3.set_title('Response Distribution Comparison', fontsize=11, fontweight='bold')
ax3.legend(fontsize=9)
ax3.grid(True, alpha=0.3)
# 统计量对比
ax4 = axes[1, 1]
methods = ['Standard MC', 'LHS']
P_f_values = [P_f*100, P_f_lhs*100]
beta_values = [beta, beta_lhs]
x = np.arange(len(methods))
width = 0.35
bars1 = ax4.bar(x - width/2, P_f_values, width, label='Pf (%)', color='steelblue', edgecolor='black')
ax4_twin = ax4.twinx()
bars2 = ax4_twin.bar(x + width/2, beta_values, width, label='β', color='coral', edgecolor='black')
ax4.set_ylabel('Failure Probability (%)', fontsize=10, color='steelblue')
ax4_twin.set_ylabel('Reliability Index β', fontsize=10, color='coral')
ax4.set_xticks(x)
ax4.set_xticklabels(methods)
ax4.set_title('Results Comparison', fontsize=11, fontweight='bold')
# 添加数值标签
for bar, val in zip(bars1, P_f_values):
    ax4.text(bar.get_x() + bar.get_width()/2, bar.get_height() + 0.05,
             f'{val:.2f}%', ha='center', va='bottom', fontsize=9)
for bar, val in zip(bars2, beta_values):
    ax4_twin.text(bar.get_x() + bar.get_width()/2, bar.get_height() + 0.02,
                  f'{val:.3f}', ha='center', va='bottom', fontsize=9)
plt.tight_layout()
plt.savefig(f'{output_dir}/case1_lhs_comparison.png', dpi=150, bbox_inches='tight')
plt.close()
print(" ✓ case1_lhs_comparison.png 已保存")
# ==================== 8. 生成动画展示收敛过程 ====================
print("\n[8] 生成收敛过程动画...")
from matplotlib.animation import FuncAnimation
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 5))
# 准备动画数据
n_frames = 50
sample_sizes_anim = np.logspace(2, 4, n_frames).astype(int)
# 左图:样本分布演化
def init():
    ax1.clear()
    ax1.set_xlim(PGA_samples.min(), PGA_samples.max())
    ax1.set_ylim((theta_samples*100).min(), (theta_samples*100).max())
    ax1.set_xlabel('PGA (g)', fontsize=11)
    ax1.set_ylabel('Drift (%)', fontsize=11)
    ax1.set_title('Sample Distribution Evolution', fontsize=12, fontweight='bold')
    ax1.axhline(theta_limit*100, color='red', linestyle='-', linewidth=2, label='Limit')
    ax1.grid(True, alpha=0.3)
    return []
def update(frame):
    ax1.clear()
    n = sample_sizes_anim[frame]
    ax1.scatter(PGA_samples[:n], theta_samples[:n]*100, c=failure_samples[:n],
                cmap='RdYlGn_r', alpha=0.5, s=20)
    ax1.axhline(theta_limit*100, color='red', linestyle='-', linewidth=2)
    ax1.set_xlim(PGA_samples.min(), PGA_samples.max())
    ax1.set_ylim((theta_samples*100).min(), (theta_samples*100).max())
    ax1.set_xlabel('PGA (g)', fontsize=11)
    ax1.set_ylabel('Drift (%)', fontsize=11)
    ax1.set_title(f'Sample Distribution (n={n})', fontsize=12, fontweight='bold')
    ax1.grid(True, alpha=0.3)
    # 右图:收敛曲线
    ax2.clear()
    P_f_current = np.mean(failure_samples[:n])
    ax2.semilogx(sample_sizes_anim[:frame+1],
                 [np.mean(failure_samples[:s])*100 for s in sample_sizes_anim[:frame+1]],
                 'b-', linewidth=2)
    ax2.axhline(P_f*100, color='red', linestyle='--', linewidth=2, alpha=0.7)
    ax2.set_xlabel('Number of Samples', fontsize=11)
    ax2.set_ylabel('Failure Probability (%)', fontsize=11)
    ax2.set_title(f'Pf = {P_f_current*100:.3f}%', fontsize=12, fontweight='bold')
    ax2.set_xlim(100, 10000)
    ax2.set_ylim(0, max(P_f_history*100)*1.5)
    ax2.grid(True, alpha=0.3)
    return []
anim = FuncAnimation(fig, update, frames=n_frames, init_func=init, blit=False)
anim.save(f'{output_dir}/case1_convergence_animation.gif', writer='pillow', fps=5, dpi=100)
plt.close()
print(" ✓ case1_convergence_animation.gif 已保存")
# ==================== 9. 汇总报告 ====================
print("\n" + "=" * 70)
print("蒙特卡洛模拟分析汇总")
print("=" * 70)
print(f"\n【输入参数统计】")
print(f" 弹性模量 E: 均值={mu_E:.2e} Pa, COV={sigma_E/mu_E*100:.1f}%")
print(f" 惯性矩 I: 均值={mu_I:.2e} m⁴, COV={sigma_I/mu_I*100:.1f}%")
print(f" 质量 m: 均值={mu_m:.1f} kg, COV={sigma_m/mu_m*100:.1f}%")
print(f" PGA: 均值={mu_PGA:.2f}g, COV={sigma_PGA/mu_PGA*100:.1f}%")
print(f" 卓越频率: 均值={mu_fg:.2f}Hz, COV={sigma_fg/mu_fg*100:.1f}%")
print(f"\n【输出响应统计】")
print(f" 固有频率: 均值={np.mean(f_n_samples):.2f}Hz, 标准差={np.std(f_n_samples):.2f}Hz")
print(f" 位移: 均值={np.mean(disp_samples)*1000:.2f}mm, 标准差={np.std(disp_samples)*1000:.2f}mm")
print(f" 位移角: 均值={np.mean(theta_samples)*100:.3f}%, 标准差={np.std(theta_samples)*100:.3f}%")
print(f"\n【可靠性分析结果】")
print(f" 失效概率 Pf = {P_f:.4f} ({P_f*100:.2f}%)")
print(f" 可靠指标 β = {beta:.4f}")
print(f" 95%置信区间: [{P_f_lower:.4f}, {P_f_upper:.4f}]")
print(f"\n【灵敏度分析(相关系数)】")
for idx in sorted_idx[:3]:
    print(f"    {input_names[idx]}: {correlations[idx]:.4f}")
print(f"\n【LHS对比结果】")
print(f" 标准MC: Pf={P_f:.4f}, β={beta:.4f}")
print(f" LHS: Pf={P_f_lhs:.4f}, β={beta_lhs:.4f}")
print(f" 差异: {abs(P_f-P_f_lhs)/P_f*100:.2f}%")
print("\n" + "=" * 70)
print("案例一分析完成!")
print("=" * 70)
# -*- coding: utf-8 -*-
"""
案例二:FORM可靠性分析方法
一次可靠度方法在结构工程中的应用
"""
import matplotlib
matplotlib.use('Agg')
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
from scipy.optimize import minimize
import os
# 创建输出目录
output_dir = r'd:\文档\500仿真领域\工程仿真\结构动力学仿真\主题091'
os.makedirs(output_dir, exist_ok=True)
plt.rcParams['font.sans-serif'] = ['SimHei', 'DejaVu Sans']
plt.rcParams['axes.unicode_minus'] = False
print("=" * 70)
print("案例二:FORM可靠性分析方法")
print("=" * 70)
# ==================== 1. 定义问题 ====================
print("\n[1] 定义结构可靠性问题...")
# 结构参数(均值和标准差)
# 使用对数正态分布模拟材料强度
mu_R = 300.0 # 抗力均值 (kN)
COV_R = 0.15 # 抗力变异系数
sigma_R = mu_R * COV_R
# 转换为对数正态分布参数
zeta_R = np.sqrt(np.log(1 + COV_R**2))
lambda_R = np.log(mu_R) - 0.5 * zeta_R**2
# 荷载效应(极值I型分布 - Gumbel)
mu_S = 200.0 # 荷载均值 (kN)
COV_S = 0.25 # 荷载变异系数
sigma_S = mu_S * COV_S
# Gumbel分布参数
alpha_S = np.pi / (np.sqrt(6) * sigma_S)
mu_S_gumbel = mu_S - 0.5772 / alpha_S
print(f" 抗力 R: 对数正态分布")
print(f" 均值={mu_R:.2f} kN, COV={COV_R*100:.1f}%")
print(f" lambda={lambda_R:.4f}, zeta={zeta_R:.4f}")
print(f" 荷载 S: 极值I型分布")
print(f" 均值={mu_S:.2f} kN, COV={COV_S*100:.1f}%")
print(f" alpha={alpha_S:.4f}, u={mu_S_gumbel:.4f}")
# 极限状态函数: g = R - S
def limit_state_function(X):
    """
    极限状态函数
    X = [R, S] 物理空间变量
    """
    R, S = X
    return R - S
# ==================== 2. 变量变换 ====================
print("\n[2] 定义变量变换函数...")
def physical_to_standard(X):
    """
    将物理空间变量变换到标准正态空间
    X = [R, S] -> U = [U1, U2]
    """
    R, S = X
    # 对数正态分布 -> 标准正态
    # R服从对数正态分布,ln(R)服从正态分布
    U1 = (np.log(R) - lambda_R) / zeta_R
    # 极值I型分布 -> 标准正态(等概率变换)
    # CDF: F(S) = exp(-exp(-alpha*(S-u)))
    F_S = np.exp(-np.exp(-alpha_S * (S - mu_S_gumbel)))
    # 防止数值问题
    F_S = np.clip(F_S, 1e-10, 1-1e-10)
    U2 = stats.norm.ppf(F_S)
    return np.array([U1, U2])
def standard_to_physical(U):
    """
    将标准正态空间变量变换到物理空间
    U = [U1, U2] -> X = [R, S]
    """
    U1, U2 = U
    # 标准正态 -> 对数正态
    ln_R = U1 * zeta_R + lambda_R
    R = np.exp(ln_R)
    # 标准正态 -> 极值I型
    F_S = stats.norm.cdf(U2)
    # 防止数值问题
    F_S = np.clip(F_S, 1e-10, 1-1e-10)
    S = mu_S_gumbel - np.log(-np.log(F_S)) / alpha_S
    return np.array([R, S])
# 测试变换
X_test = np.array([mu_R, mu_S])
U_test = physical_to_standard(X_test)
X_back = standard_to_physical(U_test)
print(f" 变换测试: X={X_test} -> U={U_test} -> X'={X_back}")
# ==================== 3. FORM分析 ====================
print("\n[3] 执行FORM分析...")
def limit_state_in_standard(U):
    """标准正态空间中的极限状态函数"""
    X = standard_to_physical(U)
    return limit_state_function(X)
# 数值梯度计算
def gradient_limit_state(U, h=1e-6):
    """前向差分计算极限状态函数的梯度"""
    grad = np.zeros_like(U)
    g0 = limit_state_in_standard(U)
    for i in range(len(U)):
        U_plus = U.copy()
        U_plus[i] += h
        g_plus = limit_state_in_standard(U_plus)
        grad[i] = (g_plus - g0) / h
    return grad
# HL-RF算法
def hl_rf_algorithm(U0=None, max_iter=100, tol=1e-6):
"""
Hasofer-Lind-Rackwitz-Fiessler算法
"""
if U0 is None:
U0 = np.zeros(2)
U = U0.copy()
history = {'U': [U.copy()], 'beta': [np.linalg.norm(U)], 'g': [limit_state_in_standard(U)]}
for i in range(max_iter):
# 计算极限状态函数值和梯度
g = limit_state_in_standard(U)
grad_g = gradient_limit_state(U)
# 梯度归一化
grad_norm = np.linalg.norm(grad_g)
if grad_norm < 1e-10:
print(f" 警告: 梯度接近零,迭代终止")
break
grad_g = grad_g / grad_norm
        # HL-RF更新公式(grad_g已单位化,故g需除以原梯度范数):
        # U_new = (α·U - g/||∇g||) * α, 其中 α = ∇g/||∇g||
        U_new = (np.dot(grad_g, U) - g / grad_norm) * grad_g
# 检查收敛
delta = np.linalg.norm(U_new - U)
history['U'].append(U_new.copy())
history['beta'].append(np.linalg.norm(U_new))
history['g'].append(limit_state_in_standard(U_new))
if delta < tol:
print(f" 收敛于第{i+1}次迭代")
break
U = U_new
else:
print(f" 达到最大迭代次数{max_iter}")
return U, history
# 执行FORM分析
U_star, history = hl_rf_algorithm()
# 计算可靠指标和失效概率
beta = np.linalg.norm(U_star)
P_f_form = stats.norm.cdf(-beta)
# 设计点物理坐标
X_star = standard_to_physical(U_star)
print(f"\n FORM分析结果:")
print(f" 设计点(标准空间): U* = [{U_star[0]:.4f}, {U_star[1]:.4f}]")
print(f" 设计点(物理空间): X* = [R*={X_star[0]:.2f} kN, S*={X_star[1]:.2f} kN]")
print(f" 可靠指标: β = {beta:.4f}")
print(f" 失效概率: Pf = {P_f_form:.6f} ({P_f_form*100:.4f}%)")
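HL-RF之外,设计点也可以直接按其几何定义求解:在失效面 g(U)=0 上寻找离原点最近的点,即约束优化问题 min ‖U‖ s.t. g(U)=0。下面是一个自包含的示意实现(分布参数为假设值,初始点取抗力偏低、荷载偏高的方向):

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

# 示意性分布参数(假设值,未必与正文一致)
m_R, c_R, m_S, c_S = 300.0, 0.15, 200.0, 0.25
z_R = np.sqrt(np.log(1 + c_R**2))
l_R = np.log(m_R) - 0.5 * z_R**2
a_S = np.pi / (np.sqrt(6) * m_S * c_S)
u_S = m_S - 0.5772 / a_S

def g_std(U):
    """标准正态空间中的极限状态函数 g = R - S"""
    R = np.exp(l_R + z_R * U[0])
    F = np.clip(stats.norm.cdf(U[1]), 1e-12, 1 - 1e-12)
    S = u_S - np.log(-np.log(F)) / a_S
    return R - S

# 设计点 = 失效面 g(U)=0 上离原点最近的点
res = minimize(lambda U: np.linalg.norm(U), x0=[-1.0, 1.0],
               method='SLSQP',
               constraints=[{'type': 'eq', 'fun': g_std}])
beta_opt = np.linalg.norm(res.x)
print(f"优化法: β = {beta_opt:.4f}, Pf = {stats.norm.cdf(-beta_opt):.6f}")
```

与HL-RF迭代相比,这种做法无须手写更新公式,适合快速交叉验证。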
# ==================== 4. 蒙特卡洛验证 ====================
print("\n[4] 蒙特卡洛模拟验证...")
np.random.seed(42)
n_mc = 1000000 # 大样本
# 生成随机样本
R_mc = np.random.lognormal(lambda_R, zeta_R, n_mc)
S_mc = stats.gumbel_r.rvs(loc=mu_S_gumbel, scale=1/alpha_S, size=n_mc)
# 计算极限状态
g_mc = R_mc - S_mc
failure_mc = g_mc < 0
P_f_mc = np.mean(failure_mc)
beta_mc = -stats.norm.ppf(P_f_mc)
print(f" MCS结果 (n={n_mc}):")
print(f" 失效概率: Pf = {P_f_mc:.6f} ({P_f_mc*100:.4f}%)")
print(f" 可靠指标: β = {beta_mc:.4f}")
print(f" FORM误差: {abs(P_f_form - P_f_mc)/P_f_mc*100:.2f}%")
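MCS的失效概率估计本质是二项比例,其相对误差约为 CoV(P̂f) ≈ √((1−Pf)/(n·Pf)):失效概率越小,所需样本量越大。下面的自包含小工具(函数名为示意)给出标准误与置信区间:

```python
import numpy as np

def mcs_pf_error(pf_hat, n, z=1.96):
    """MCS失效概率估计的变异系数与95%置信区间(正态近似)"""
    se = np.sqrt(pf_hat * (1 - pf_hat) / n)   # 二项比例的标准误
    return se / pf_hat, (pf_hat - z * se, pf_hat + z * se)

# 示例: n=1e6 个样本、估计 Pf=0.01 时
cov_hat, (ci_lo, ci_hi) = mcs_pf_error(0.01, 1_000_000)
print(f"估计量CoV = {cov_hat:.4f}, 95%CI = ({ci_lo:.5f}, {ci_hi:.5f})")
```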
# ==================== 5. 可视化 ====================
print("\n[5] 绘制可视化结果...")
fig = plt.figure(figsize=(16, 12))
# 1. 物理空间分布
ax1 = plt.subplot(2, 3, 1)
# 绘制联合分布的散点图
sample_idx = np.random.choice(n_mc, 5000, replace=False)
ax1.scatter(S_mc[sample_idx], R_mc[sample_idx], c=g_mc[sample_idx],
cmap='RdYlGn', alpha=0.3, s=5)
ax1.plot([0, max(S_mc)], [0, max(S_mc)], 'r--', linewidth=2, label='g=0 (R=S)')
ax1.scatter(X_star[1], X_star[0], c='red', s=200, marker='*',
edgecolors='black', linewidths=2, label='Design Point', zorder=5)
ax1.set_xlabel('Load S (kN)', fontsize=11)
ax1.set_ylabel('Resistance R (kN)', fontsize=11)
ax1.set_title('Physical Space Distribution', fontsize=12, fontweight='bold')
ax1.legend(fontsize=9)
ax1.grid(True, alpha=0.3)
ax1.set_xlim(0, max(S_mc)*0.5)
ax1.set_ylim(0, max(R_mc)*0.5)
# 2. 标准正态空间
ax2 = plt.subplot(2, 3, 2)
# 变换样本到标准空间
U_mc = np.array([physical_to_standard([R_mc[i], S_mc[i]]) for i in sample_idx])
U1_mc, U2_mc = U_mc[:, 0], U_mc[:, 1]
# 绘制极限状态曲线
U1_range = np.linspace(-4, 4, 100)
U2_limit = []
from scipy.optimize import brentq  # 求根函数,提前导入避免在循环内重复import
for U1 in U1_range:
    # 在[-4, 4]区间内用Brent法寻找满足g=0的U2
    def find_U2(U2, U1=U1):
        return limit_state_in_standard([U1, U2])
    try:
        U2_sol = brentq(find_U2, -4, 4)
        U2_limit.append(U2_sol)
    except ValueError:
        # 区间端点函数值同号,该U1处无根
        U2_limit.append(np.nan)
ax2.scatter(U1_mc, U2_mc, c='lightblue', alpha=0.3, s=5)
ax2.plot(U1_range, U2_limit, 'r-', linewidth=2, label='Limit State g=0')
ax2.scatter(U_star[0], U_star[1], c='red', s=200, marker='*',
edgecolors='black', linewidths=2, label='Design Point', zorder=5)
ax2.plot([0, U_star[0]], [0, U_star[1]], 'g--', linewidth=2, label=f'β={beta:.3f}')
# 绘制可靠指标圆
theta_circle = np.linspace(0, 2*np.pi, 100)
ax2.plot(beta*np.cos(theta_circle), beta*np.sin(theta_circle),
'orange', linestyle='--', linewidth=1.5, alpha=0.7, label=f'β={beta:.3f} circle')
ax2.set_xlabel('U1 (Standard Normal)', fontsize=11)
ax2.set_ylabel('U2 (Standard Normal)', fontsize=11)
ax2.set_title('Standard Normal Space', fontsize=12, fontweight='bold')
ax2.legend(fontsize=9)
ax2.grid(True, alpha=0.3)
ax2.set_xlim(-4, 4)
ax2.set_ylim(-4, 4)
ax2.set_aspect('equal')
# 3. 收敛历史
ax3 = plt.subplot(2, 3, 3)
iterations = range(len(history['beta']))
ax3.plot(iterations, history['beta'], 'o-', linewidth=2, markersize=8,
color='steelblue', label='Reliability Index β')
ax3.axhline(beta, color='red', linestyle='--', linewidth=2, alpha=0.7, label=f'Final β={beta:.4f}')
ax3.set_xlabel('Iteration', fontsize=11)
ax3.set_ylabel('Reliability Index β', fontsize=11)
ax3.set_title('FORM Convergence History', fontsize=12, fontweight='bold')
ax3.legend(fontsize=9)
ax3.grid(True, alpha=0.3)
# 4. 边缘分布
ax4 = plt.subplot(2, 3, 4)
ax4.hist(R_mc, bins=100, density=True, alpha=0.6, color='steelblue',
edgecolor='black', label='Resistance R')
ax4.hist(S_mc, bins=100, density=True, alpha=0.6, color='coral',
edgecolor='black', label='Load S')
ax4.axvline(X_star[0], color='blue', linestyle='--', linewidth=2, label=f'R*={X_star[0]:.1f}')
ax4.axvline(X_star[1], color='red', linestyle='--', linewidth=2, label=f'S*={X_star[1]:.1f}')
ax4.set_xlabel('Value (kN)', fontsize=11)
ax4.set_ylabel('PDF', fontsize=11)
ax4.set_title('Marginal Distributions', fontsize=12, fontweight='bold')
ax4.legend(fontsize=9)
ax4.grid(True, alpha=0.3)
# 5. 安全裕度分布
ax5 = plt.subplot(2, 3, 5)
ax5.hist(g_mc, bins=100, density=True, alpha=0.7, color='lightgreen',
edgecolor='black')
ax5.axvline(0, color='red', linestyle='-', linewidth=2, label='Failure Threshold')
ax5.fill_betweenx([0, ax5.get_ylim()[1]], g_mc.min(), 0, alpha=0.3, color='red', label='Failure Region')
ax5.set_xlabel('Safety Margin g = R - S (kN)', fontsize=11)
ax5.set_ylabel('PDF', fontsize=11)
ax5.set_title('Safety Margin Distribution', fontsize=12, fontweight='bold')
ax5.legend(fontsize=9)
ax5.grid(True, alpha=0.3)
# 6. 方法对比
ax6 = plt.subplot(2, 3, 6)
methods = ['FORM', 'MCS']
P_f_values = [P_f_form*100, P_f_mc*100]
beta_values = [beta, beta_mc]
x = np.arange(len(methods))
width = 0.35
bars1 = ax6.bar(x - width/2, P_f_values, width, label='Pf (%)',
color='steelblue', edgecolor='black')
ax6_twin = ax6.twinx()
bars2 = ax6_twin.bar(x + width/2, beta_values, width, label='β',
color='coral', edgecolor='black')
ax6.set_ylabel('Failure Probability (%)', fontsize=11, color='steelblue')
ax6_twin.set_ylabel('Reliability Index β', fontsize=11, color='coral')
ax6.set_xticks(x)
ax6.set_xticklabels(methods)
ax6.set_title('Method Comparison', fontsize=12, fontweight='bold')
# 添加数值标签
for bar, val in zip(bars1, P_f_values):
ax6.text(bar.get_x() + bar.get_width()/2, bar.get_height() + 0.001,
f'{val:.4f}%', ha='center', va='bottom', fontsize=10)
for bar, val in zip(bars2, beta_values):
ax6_twin.text(bar.get_x() + bar.get_width()/2, bar.get_height() + 0.02,
f'{val:.3f}', ha='center', va='bottom', fontsize=10)
plt.tight_layout()
plt.savefig(f'{output_dir}/case2_form_analysis.png', dpi=150, bbox_inches='tight')
plt.close()
print(" ✓ case2_form_analysis.png 已保存")
# ==================== 6. 参数敏感性分析 ====================
print("\n[6] 参数敏感性分析...")
# 方向余弦(重要性因子)
grad_g_star = gradient_limit_state(U_star)
grad_g_star = grad_g_star / np.linalg.norm(grad_g_star)
alpha = -grad_g_star # 方向余弦
print(f" 方向余弦(重要性因子):")
print(f" α_R = {alpha[0]:.4f} (R的重要性)")
print(f" α_S = {alpha[1]:.4f} (S的重要性)")
# 绘制敏感性图
fig, ax = plt.subplots(figsize=(8, 6))
variables = ['Resistance R', 'Load S']
importance = [abs(alpha[0]), abs(alpha[1])]
colors = ['steelblue', 'coral']
bars = ax.bar(variables, importance, color=colors, edgecolor='black')
ax.set_ylabel('Importance Factor |α|', fontsize=11)
ax.set_title('Parameter Importance in FORM Analysis', fontsize=12, fontweight='bold')
ax.grid(True, alpha=0.3, axis='y')
for bar, val in zip(bars, importance):
ax.text(bar.get_x() + bar.get_width()/2, bar.get_height() + 0.01,
f'{val:.4f}', ha='center', va='bottom', fontsize=11, fontweight='bold')
plt.tight_layout()
plt.savefig(f'{output_dir}/case2_importance_factors.png', dpi=150, bbox_inches='tight')
plt.close()
print(" ✓ case2_importance_factors.png 已保存")
# ==================== 7. 不同参数组合的可靠指标 ====================
print("\n[7] 不同变异系数下的可靠指标...")
COV_R_range = np.linspace(0.05, 0.30, 20)
COV_S_range = np.linspace(0.15, 0.40, 20)
beta_matrix = np.zeros((len(COV_R_range), len(COV_S_range)))
for i, COV_R_val in enumerate(COV_R_range):
for j, COV_S_val in enumerate(COV_S_range):
        # 均值一次二阶矩(MVFOSM)近似,快速扫描参数空间:
        # β ≈ (μ_R - μ_S) / sqrt(σ_R² + σ_S²)
        sigma_R_temp = mu_R * COV_R_val
        sigma_S_temp = mu_S * COV_S_val
        beta_approx = (mu_R - mu_S) / np.sqrt(sigma_R_temp**2 + sigma_S_temp**2)
        beta_matrix[i, j] = beta_approx
# 绘制热力图
fig, ax = plt.subplots(figsize=(10, 8))
im = ax.imshow(beta_matrix, cmap='RdYlGn', aspect='auto', origin='lower',
extent=[COV_S_range[0]*100, COV_S_range[-1]*100,
COV_R_range[0]*100, COV_R_range[-1]*100])
ax.set_xlabel('Load COV (%)', fontsize=11)
ax.set_ylabel('Resistance COV (%)', fontsize=11)
ax.set_title('Reliability Index vs Parameter Uncertainty', fontsize=12, fontweight='bold')
cbar = plt.colorbar(im, ax=ax)
cbar.set_label('Reliability Index β', fontsize=11)
# 添加等高线
contours = ax.contour(COV_S_range*100, COV_R_range*100, beta_matrix,
levels=[1.0, 1.5, 2.0, 2.5, 3.0, 3.5], colors='black', linewidths=1)
ax.clabel(contours, inline=True, fontsize=9, fmt='β=%1.1f')
plt.tight_layout()
plt.savefig(f'{output_dir}/case2_reliability_heatmap.png', dpi=150, bbox_inches='tight')
plt.close()
print(" ✓ case2_reliability_heatmap.png 已保存")
# ==================== 8. 汇总报告 ====================
print("\n" + "=" * 70)
print("FORM可靠性分析汇总")
print("=" * 70)
print(f"\n【问题定义】")
print(f" 极限状态函数: g(R,S) = R - S = 0")
print(f" 抗力 R: 对数正态, μ={mu_R:.1f}kN, COV={COV_R*100:.1f}%")
print(f" 荷载 S: 极值I型, μ={mu_S:.1f}kN, COV={COV_S*100:.1f}%")
print(f"\n【FORM分析结果】")
print(f" 设计点: U*=[{U_star[0]:.4f}, {U_star[1]:.4f}]")
print(f" 设计点: R*={X_star[0]:.2f}kN, S*={X_star[1]:.2f}kN")
print(f" 可靠指标: β = {beta:.4f}")
print(f" 失效概率: Pf = {P_f_form:.6f} ({P_f_form*100:.4f}%)")
print(f"\n【蒙特卡洛验证】")
print(f" 样本数: {n_mc}")
print(f" 失效概率: Pf = {P_f_mc:.6f} ({P_f_mc*100:.4f}%)")
print(f" 可靠指标: β = {beta_mc:.4f}")
print(f" 相对误差: {abs(P_f_form - P_f_mc)/P_f_mc*100:.2f}%")
print(f"\n【重要性因子】")
print(f" 抗力 R: α = {alpha[0]:.4f}")
print(f" 荷载 S: α = {alpha[1]:.4f}")
print("\n" + "=" * 70)
print("案例二分析完成!")
print("=" * 70)
# ============================================
# 生成GIF动画
# ============================================
def create_animation():
"""创建仿真结果动画"""
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
import numpy as np
fig, ax = plt.subplots(figsize=(10, 6))
ax.set_xlim(0, 10)
ax.set_ylim(-2, 2)
ax.set_xlabel('Time', fontsize=12)
ax.set_ylabel('Response', fontsize=12)
ax.set_title('Dynamic Response Animation', fontsize=14, fontweight='bold')
ax.grid(True, alpha=0.3)
line, = ax.plot([], [], 'b-', linewidth=2)
def init():
line.set_data([], [])
return line,
def update(frame):
x = np.linspace(0, 10, 100)
y = np.sin(x - frame * 0.2) * np.exp(-frame * 0.01)
line.set_data(x, y)
return line,
anim = FuncAnimation(fig, update, init_func=init, frames=50, interval=100, blit=True)
output_dir = os.path.dirname(os.path.abspath(__file__))
anim.save(f'{output_dir}/simulation_animation.gif', writer='pillow', fps=10)
print(f"动画已保存到: {output_dir}/simulation_animation.gif")
plt.close()
if __name__ == '__main__':
create_animation()
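当失效概率远小于本例时,直接MCS的样本量要求会变得不可接受;以FORM设计点为中心的重要抽样(Importance Sampling)是常用的降方差手段。下面是一个自包含示意:取R、S均为正态分布,此时极限状态在标准空间中线性、FORM为精确解,便于与解析结果对比(参数为假设值):

```python
import numpy as np
from scipy import stats

# 假设 R ~ N(300, 45), S ~ N(200, 50),g = R - S 在标准空间中线性
m_R, s_R, m_S, s_S = 300.0, 45.0, 200.0, 50.0
a = np.array([s_R, -s_S])                      # g(U) = 100 + 45*U1 - 50*U2 的梯度
beta_exact = (m_R - m_S) / np.linalg.norm(a)   # 线性情形FORM即为精确解
u_star = -(m_R - m_S) * a / np.dot(a, a)       # 设计点(解析)

rng = np.random.default_rng(1)
n = 20000
V = rng.normal(size=(n, 2)) + u_star           # 以设计点为中心的抽样密度
g = (m_R - m_S) + V @ a                        # 标准空间中的线性极限状态
# 重要抽样权重 w = φ2(V) / φ2(V - u*)
w = np.exp(-0.5 * np.sum(V**2, axis=1) + 0.5 * np.sum((V - u_star)**2, axis=1))
Pf_is = np.mean((g < 0) * w)
print(f"解析 Pf = {stats.norm.cdf(-beta_exact):.5f}, IS估计 Pf = {Pf_is:.5f}")
```

仅2万个样本即可得到与解析值高度一致的估计,而直接MCS在同等精度下往往需要多出数个量级的样本。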
# -*- coding: utf-8 -*-
"""
案例三:多项式混沌展开方法
基于PCE的不确定性量化与灵敏度分析
"""
import matplotlib
matplotlib.use('Agg')
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
from scipy.special import factorial
from itertools import combinations_with_replacement
import os
# 创建输出目录
output_dir = r'd:\文档\500仿真领域\工程仿真\结构动力学仿真\主题091'
os.makedirs(output_dir, exist_ok=True)
plt.rcParams['font.sans-serif'] = ['SimHei', 'DejaVu Sans']
plt.rcParams['axes.unicode_minus'] = False
print("=" * 70)
print("案例三:多项式混沌展开方法")
print("=" * 70)
# ==================== 1. 定义模型问题 ====================
print("\n[1] 定义结构动力学模型...")
def structural_response(xi1, xi2, xi3):
"""
结构响应函数( Ishigami 函数的变体)
这是一个常用的测试函数,具有非线性、非单调特性
"""
# 物理参数映射
# xi1: 刚度不确定性 (-1 to 1 in physical, but standard normal here)
# xi2: 质量不确定性
# xi3: 荷载不确定性
# 非线性结构响应模型
Y = np.sin(xi1) + 7 * np.sin(xi2)**2 + 0.1 * xi3**4 * np.sin(xi1)
return Y
def true_statistics():
"""通过大量蒙特卡洛模拟计算参考统计量"""
np.random.seed(42)
n_mc = 1000000
xi1 = np.random.randn(n_mc)
xi2 = np.random.randn(n_mc)
xi3 = np.random.randn(n_mc)
Y = structural_response(xi1, xi2, xi3)
return np.mean(Y), np.var(Y)
print(" 模型: Y = sin(xi1) + 7*sin(xi2)^2 + 0.1*xi3^4*sin(xi1)")
print(" 输入: xi1, xi2, xi3 ~ N(0,1) (独立标准正态)")
# ==================== 2. 多项式混沌展开 ====================
print("\n[2] 构建多项式混沌展开...")
class PolynomialChaosExpansion:
"""多项式混沌展开类"""
def __init__(self, n_vars, degree):
"""
初始化PCE
n_vars: 随机变量个数
degree: 多项式阶数
"""
self.n_vars = n_vars
self.degree = degree
self.multi_indices = self._generate_multi_indices()
self.n_terms = len(self.multi_indices)
self.coefficients = None
def _generate_multi_indices(self):
"""生成多指标集"""
indices = []
# 总阶数不超过degree的所有组合
for total_degree in range(self.degree + 1):
for combo in combinations_with_replacement(range(self.n_vars), total_degree):
index = [0] * self.n_vars
for var_idx in combo:
index[var_idx] += 1
indices.append(tuple(index))
return indices
def _hermite_polynomial(self, x, n):
"""计算概率Hermite多项式 ( physicist's Hermite )"""
if n == 0:
return np.ones_like(x)
elif n == 1:
return x
elif n == 2:
return x**2 - 1
elif n == 3:
return x**3 - 3*x
elif n == 4:
return x**4 - 6*x**2 + 3
elif n == 5:
return x**5 - 10*x**3 + 15*x
else:
# 使用递推公式
H_prev2 = np.ones_like(x)
H_prev1 = x
for i in range(2, n+1):
H_current = x * H_prev1 - (i-1) * H_prev2
H_prev2 = H_prev1
H_prev1 = H_current
return H_current
def _evaluate_basis(self, xi, multi_index):
"""评估多元基函数"""
result = np.ones(xi.shape[0])
for i, order in enumerate(multi_index):
result *= self._hermite_polynomial(xi[:, i], order)
return result
def build_design_matrix(self, xi_samples):
"""构建设计矩阵"""
n_samples = xi_samples.shape[0]
design_matrix = np.zeros((n_samples, self.n_terms))
for j, multi_index in enumerate(self.multi_indices):
design_matrix[:, j] = self._evaluate_basis(xi_samples, multi_index)
return design_matrix
def fit(self, xi_samples, y_samples):
"""
使用最小二乘法拟合PCE系数
"""
design_matrix = self.build_design_matrix(xi_samples)
# 最小二乘求解
self.coefficients, residuals, rank, s = np.linalg.lstsq(
design_matrix, y_samples, rcond=None
)
# 计算R²
y_pred = design_matrix @ self.coefficients
ss_res = np.sum((y_samples - y_pred)**2)
ss_tot = np.sum((y_samples - np.mean(y_samples))**2)
self.r_squared = 1 - ss_res / ss_tot
return self
def predict(self, xi_samples):
"""使用PCE进行预测"""
if self.coefficients is None:
raise ValueError("PCE尚未拟合")
design_matrix = self.build_design_matrix(xi_samples)
return design_matrix @ self.coefficients
def get_statistics(self):
"""获取统计量"""
if self.coefficients is None:
raise ValueError("PCE尚未拟合")
# 均值: c_0
mean = self.coefficients[0]
# 方差: sum of c_i^2 * E[Psi_i^2] for i > 0
variance = 0
for i, multi_index in enumerate(self.multi_indices[1:], 1):
# E[Psi^2] = product of factorials
E_Psi2 = 1
for order in multi_index:
E_Psi2 *= factorial(order, exact=True)
variance += self.coefficients[i]**2 * E_Psi2
return mean, variance
def sobol_indices(self):
"""计算Sobol灵敏度指数"""
if self.coefficients is None:
raise ValueError("PCE尚未拟合")
_, total_variance = self.get_statistics()
# 一阶Sobol指数
S1 = np.zeros(self.n_vars)
# 二阶Sobol指数
S2 = np.zeros((self.n_vars, self.n_vars))
for i, multi_index in enumerate(self.multi_indices[1:], 1):
E_Psi2 = 1
for order in multi_index:
E_Psi2 *= factorial(order, exact=True)
variance_contribution = self.coefficients[i]**2 * E_Psi2
# 一阶效应
active_vars = [j for j, order in enumerate(multi_index) if order > 0]
if len(active_vars) == 1:
S1[active_vars[0]] += variance_contribution
# 二阶效应
if len(active_vars) == 2:
S2[active_vars[0], active_vars[1]] += variance_contribution
S2[active_vars[1], active_vars[0]] += variance_contribution
S1 = S1 / total_variance
S2 = S2 / total_variance
return S1, S2
# 创建PCE模型
pce = PolynomialChaosExpansion(n_vars=3, degree=4)
print(f" 多项式阶数: {pce.degree}")
print(f" 基函数数量: {pce.n_terms}")
print(f" 多指标: {pce.multi_indices}")
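PCE统计量公式依赖概率论Hermite多项式的正交性 E[He_m·He_n] = n!·δ_mn。可用numpy内置的hermite_e模块和Gauss–HermiteE求积快速校验(该求积的权函数为exp(−x²/2),需除以√(2π)才对应标准正态密度):

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

x, w = He.hermegauss(20)          # 20点求积: 对阶数≤39的多项式精确
w = w / np.sqrt(2 * np.pi)        # 归一化为标准正态密度下的期望

for m in range(5):
    for n in range(5):
        Hm = He.hermeval(x, [0] * m + [1])   # He_m(x)
        Hn = He.hermeval(x, [0] * n + [1])   # He_n(x)
        inner = np.sum(w * Hm * Hn)          # ≈ E[He_m(ξ)·He_n(ξ)], ξ~N(0,1)
        expected = factorial(n) if m == n else 0.0
        assert abs(inner - expected) < 1e-8, (m, n, inner)
print("正交性校验通过: E[He_m·He_n] = n!·δ_mn")
```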
# ==================== 3. 训练PCE模型 ====================
print("\n[3] 训练PCE模型...")
np.random.seed(123)
n_train = 200 # 训练样本数
# 生成训练样本
xi_train = np.random.randn(n_train, 3)
y_train = structural_response(xi_train[:, 0], xi_train[:, 1], xi_train[:, 2])
# 拟合PCE
pce.fit(xi_train, y_train)
# 获取统计量
mean_pce, var_pce = pce.get_statistics()
std_pce = np.sqrt(var_pce)
print(f" 训练样本数: {n_train}")
print(f" R² = {pce.r_squared:.6f}")
print(f" PCE均值: {mean_pce:.4f}")
print(f" PCE方差: {var_pce:.4f}")
print(f" PCE标准差: {std_pce:.4f}")
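全阶截断(总阶数 ≤ p)下PCE基函数个数为 P = C(n+p, p) = (n+p)!/(n!·p!),随变量数和阶数组合爆炸,这正是高维问题中PCE的主要瓶颈。一个快速核对(函数名为示意):

```python
from math import comb

def n_pce_terms(n_vars, degree):
    """总阶数截断下的PCE基函数个数: C(n_vars + degree, degree)"""
    return comb(n_vars + degree, degree)

print(n_pce_terms(3, 4))   # 35
print(n_pce_terms(3, 5))   # 56
```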
# ==================== 4. 验证PCE模型 ====================
print("\n[4] 验证PCE模型...")
# 生成测试样本
n_test = 10000
xi_test = np.random.randn(n_test, 3)
y_true = structural_response(xi_test[:, 0], xi_test[:, 1], xi_test[:, 2])
y_pce = pce.predict(xi_test)
# 计算误差
mse = np.mean((y_true - y_pce)**2)
rmse = np.sqrt(mse)
mae = np.mean(np.abs(y_true - y_pce))
# 真实统计量
mean_true = np.mean(y_true)
var_true = np.var(y_true)
print(f" 测试样本数: {n_test}")
print(f" RMSE: {rmse:.6f}")
print(f" MAE: {mae:.6f}")
print(f" 真实均值: {mean_true:.4f}, PCE均值: {mean_pce:.4f}, 误差: {abs(mean_true-mean_pce)/mean_true*100:.2f}%")
print(f" 真实方差: {var_true:.4f}, PCE方差: {var_pce:.4f}, 误差: {abs(var_true-var_pce)/var_true*100:.2f}%")
# ==================== 5. 灵敏度分析 ====================
print("\n[5] Sobol灵敏度分析...")
S1, S2 = pce.sobol_indices()
print(f" 一阶Sobol指数:")
print(f" S1 (xi1): {S1[0]:.4f}")
print(f" S1 (xi2): {S1[1]:.4f}")
print(f" S1 (xi3): {S1[2]:.4f}")
print(f" 总和: {np.sum(S1):.4f}")
print(f" 二阶Sobol指数:")
for i in range(3):
for j in range(i+1, 3):
print(f" S2 (xi{i+1}, xi{j+1}): {S2[i, j]:.4f}")
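PCE得到的Sobol指数可以用蒙特卡洛pick-and-freeze估计量独立交叉验证(Sobol, 1993)。下面的自包含示例对同一响应函数估计一阶指数:

```python
import numpy as np

def model(X):
    """与正文相同的响应函数"""
    return np.sin(X[:, 0]) + 7 * np.sin(X[:, 1])**2 + 0.1 * X[:, 2]**4 * np.sin(X[:, 0])

rng = np.random.default_rng(7)
n = 200000
A = rng.normal(size=(n, 3))
B = rng.normal(size=(n, 3))
yA = model(A)
var_Y = np.var(yA)

S1_mc = np.zeros(3)
for i in range(3):
    Ci = B.copy()
    Ci[:, i] = A[:, i]            # 仅第i个变量与A相同("冻结"第i维)
    yCi = model(Ci)
    # Cov(yA, yCi) = Var(E[Y|X_i]),即第i个变量的一阶效应方差
    S1_mc[i] = np.mean(yA * yCi) - np.mean(yA) * np.mean(yCi)
S1_mc /= var_Y
print("MC一阶Sobol指数:", np.round(S1_mc, 4))
```

该估计量无需任何代理模型,代价是模型调用次数为 n×(n_vars+2) 量级,适合作为PCE结果的抽查基准。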
# ==================== 6. 可视化 ====================
print("\n[6] 绘制可视化结果...")
fig = plt.figure(figsize=(16, 12))
# 1. PCE预测 vs 真实值
ax1 = plt.subplot(3, 3, 1)
ax1.scatter(y_true[::10], y_pce[::10], alpha=0.5, s=10, color='steelblue')
ax1.plot([y_true.min(), y_true.max()], [y_true.min(), y_true.max()],
'r--', linewidth=2, label='Perfect Prediction')
ax1.set_xlabel('True Response', fontsize=10)
ax1.set_ylabel('PCE Prediction', fontsize=10)
ax1.set_title(f'PCE Prediction Accuracy (R²={pce.r_squared:.4f})', fontsize=11, fontweight='bold')
ax1.legend(fontsize=9)
ax1.grid(True, alpha=0.3)
# 2. 残差分布
ax2 = plt.subplot(3, 3, 2)
residuals = y_true - y_pce
ax2.hist(residuals, bins=50, density=True, alpha=0.7, color='coral', edgecolor='black')
ax2.axvline(0, color='red', linestyle='--', linewidth=2)
ax2.set_xlabel('Residual', fontsize=10)
ax2.set_ylabel('PDF', fontsize=10)
ax2.set_title('Residual Distribution', fontsize=11, fontweight='bold')
ax2.grid(True, alpha=0.3)
# 3. 响应分布对比
ax3 = plt.subplot(3, 3, 3)
ax3.hist(y_true, bins=50, density=True, alpha=0.5, color='steelblue',
label='True', edgecolor='black')
ax3.hist(y_pce, bins=50, density=True, alpha=0.5, color='coral',
label='PCE', edgecolor='black')
ax3.axvline(mean_true, color='blue', linestyle='--', linewidth=2, label=f'True Mean={mean_true:.2f}')
ax3.axvline(mean_pce, color='red', linestyle='--', linewidth=2, label=f'PCE Mean={mean_pce:.2f}')
ax3.set_xlabel('Response Y', fontsize=10)
ax3.set_ylabel('PDF', fontsize=10)
ax3.set_title('Response Distribution Comparison', fontsize=11, fontweight='bold')
ax3.legend(fontsize=8)
ax3.grid(True, alpha=0.3)
# 4. 一阶Sobol指数
ax4 = plt.subplot(3, 3, 4)
variables = ['xi1', 'xi2', 'xi3']
colors = ['steelblue', 'coral', 'lightgreen']
bars = ax4.bar(variables, S1, color=colors, edgecolor='black')
ax4.set_ylabel('First-order Sobol Index', fontsize=10)
ax4.set_title('First-order Sensitivity Indices', fontsize=11, fontweight='bold')
ax4.set_ylim(0, 1)
ax4.grid(True, alpha=0.3, axis='y')
for bar, val in zip(bars, S1):
ax4.text(bar.get_x() + bar.get_width()/2, bar.get_height() + 0.02,
f'{val:.3f}', ha='center', va='bottom', fontsize=10, fontweight='bold')
# 5. 总效应指数(此处以一阶+二阶近似,忽略三阶及以上交互项)
ax5 = plt.subplot(3, 3, 5)
# 由一阶与二阶Sobol指数累加近似总效应指数
S_total = S1.copy()
for i in range(3):
for j in range(3):
if i != j:
S_total[i] += S2[i, j]
bars = ax5.bar(variables, S_total, color=colors, edgecolor='black', alpha=0.7)
ax5.set_ylabel('Total Sobol Index', fontsize=10)
ax5.set_title('Total Sensitivity Indices', fontsize=11, fontweight='bold')
ax5.set_ylim(0, 1)
ax5.grid(True, alpha=0.3, axis='y')
for bar, val in zip(bars, S_total):
ax5.text(bar.get_x() + bar.get_width()/2, bar.get_height() + 0.02,
f'{val:.3f}', ha='center', va='bottom', fontsize=10, fontweight='bold')
# 6. 二阶交互效应
ax6 = plt.subplot(3, 3, 6)
interaction_pairs = ['(xi1,xi2)', '(xi1,xi3)', '(xi2,xi3)']
interaction_values = [S2[0, 1], S2[0, 2], S2[1, 2]]
bars = ax6.bar(interaction_pairs, interaction_values, color='mediumpurple', edgecolor='black')
ax6.set_ylabel('Second-order Sobol Index', fontsize=10)
ax6.set_title('Interaction Effects', fontsize=11, fontweight='bold')
ax6.grid(True, alpha=0.3, axis='y')
for bar, val in zip(bars, interaction_values):
ax6.text(bar.get_x() + bar.get_width()/2, bar.get_height() + 0.005,
f'{val:.4f}', ha='center', va='bottom', fontsize=10)
# 7. PCE系数
ax7 = plt.subplot(3, 3, 7)
term_labels = [str(idx) for idx in pce.multi_indices]
coeffs_abs = np.abs(pce.coefficients)
sorted_idx = np.argsort(coeffs_abs)[::-1][:15] # 前15个重要系数
ax7.barh(range(len(sorted_idx)), coeffs_abs[sorted_idx], color='steelblue', edgecolor='black')
ax7.set_yticks(range(len(sorted_idx)))
ax7.set_yticklabels([term_labels[i] for i in sorted_idx], fontsize=8)
ax7.set_xlabel('|Coefficient|', fontsize=10)
ax7.set_title('PCE Coefficients (Top 15)', fontsize=11, fontweight='bold')
ax7.grid(True, alpha=0.3, axis='x')
ax7.invert_yaxis()
# 8. 收敛性分析(不同阶数)
ax8 = plt.subplot(3, 3, 8)
degrees = [1, 2, 3, 4, 5]
mean_errors = []
var_errors = []
r2_scores = []
for deg in degrees:
pce_temp = PolynomialChaosExpansion(n_vars=3, degree=deg)
pce_temp.fit(xi_train, y_train)
mean_temp, var_temp = pce_temp.get_statistics()
mean_errors.append(abs(mean_temp - mean_true) / mean_true * 100)
var_errors.append(abs(var_temp - var_true) / var_true * 100)
r2_scores.append(pce_temp.r_squared)
ax8_twin = ax8.twinx()
line1 = ax8.plot(degrees, mean_errors, 'o-', linewidth=2, markersize=8,
color='steelblue', label='Mean Error (%)')
line2 = ax8.plot(degrees, var_errors, 's-', linewidth=2, markersize=8,
color='coral', label='Variance Error (%)')
line3 = ax8_twin.plot(degrees, r2_scores, '^-', linewidth=2, markersize=8,
color='green', label='R²')
ax8.set_xlabel('Polynomial Degree', fontsize=10)
ax8.set_ylabel('Error (%)', fontsize=10, color='steelblue')
ax8_twin.set_ylabel('R² Score', fontsize=10, color='green')
ax8.set_title('PCE Convergence vs Degree', fontsize=11, fontweight='bold')
ax8.grid(True, alpha=0.3)
# 合并图例
lines = line1 + line2 + line3
labels = [l.get_label() for l in lines]
ax8.legend(lines, labels, loc='center right', fontsize=9)
# 9. 样本量影响
ax9 = plt.subplot(3, 3, 9)
sample_sizes = [20, 50, 100, 200, 500]
mean_errors_n = []
var_errors_n = []
for n in sample_sizes:
xi_temp = np.random.randn(n, 3)
y_temp = structural_response(xi_temp[:, 0], xi_temp[:, 1], xi_temp[:, 2])
pce_temp = PolynomialChaosExpansion(n_vars=3, degree=3)
pce_temp.fit(xi_temp, y_temp)
mean_temp, var_temp = pce_temp.get_statistics()
mean_errors_n.append(abs(mean_temp - mean_true) / mean_true * 100)
var_errors_n.append(abs(var_temp - var_true) / var_true * 100)
ax9.semilogx(sample_sizes, mean_errors_n, 'o-', linewidth=2, markersize=8,
color='steelblue', label='Mean Error')
ax9.semilogx(sample_sizes, var_errors_n, 's-', linewidth=2, markersize=8,
color='coral', label='Variance Error')
ax9.set_xlabel('Number of Training Samples', fontsize=10)
ax9.set_ylabel('Error (%)', fontsize=10)
ax9.set_title('Error vs Sample Size', fontsize=11, fontweight='bold')
ax9.legend(fontsize=9)
ax9.grid(True, alpha=0.3)
plt.suptitle('Polynomial Chaos Expansion for Uncertainty Quantification',
fontsize=14, fontweight='bold', y=0.995)
plt.tight_layout()
plt.savefig(f'{output_dir}/case3_pce_analysis.png', dpi=150, bbox_inches='tight')
plt.close()
print(" ✓ case3_pce_analysis.png 已保存")
# ==================== 7. PCE vs MCS 计算效率对比 ====================
print("\n[7] 计算效率对比...")
import time
# PCE预测时间
n_pred = 100000
xi_pred = np.random.randn(n_pred, 3)
start_time = time.time()
y_pce_pred = pce.predict(xi_pred)
pce_time = time.time() - start_time
# MCS时间(直接计算)
start_time = time.time()
y_mcs = structural_response(xi_pred[:, 0], xi_pred[:, 1], xi_pred[:, 2])
mcs_time = time.time() - start_time
speedup = mcs_time / pce_time
print(f" PCE预测 {n_pred} 次: {pce_time:.4f} 秒")
print(f" MCS计算 {n_pred} 次: {mcs_time:.4f} 秒")
print(f" 加速比: {speedup:.1f}x")
# ==================== 8. 汇总报告 ====================
print("\n" + "=" * 70)
print("多项式混沌展开分析汇总")
print("=" * 70)
print(f"\n【模型信息】")
print(f" 响应函数: Y = sin(xi1) + 7*sin(xi2)^2 + 0.1*xi3^4*sin(xi1)")
print(f" 输入变量: 3个独立标准正态变量")
print(f"\n【PCE配置】")
print(f" 多项式阶数: {pce.degree}")
print(f" 基函数数量: {pce.n_terms}")
print(f" 训练样本数: {n_train}")
print(f"\n【精度验证】")
print(f" R² = {pce.r_squared:.6f}")
print(f" RMSE = {rmse:.6f}")
print(f" 均值误差: {abs(mean_true-mean_pce)/mean_true*100:.2f}%")
print(f" 方差误差: {abs(var_true-var_pce)/var_true*100:.2f}%")
print(f"\n【统计量对比】")
print(f" 真实均值: {mean_true:.4f}, PCE均值: {mean_pce:.4f}")
print(f" 真实方差: {var_true:.4f}, PCE方差: {var_pce:.4f}")
print(f"\n【Sobol灵敏度分析】")
print(f" 一阶指数:")
print(f" xi1: {S1[0]:.4f} ({S1[0]*100:.1f}%)")
print(f" xi2: {S1[1]:.4f} ({S1[1]*100:.1f}%)")
print(f" xi3: {S1[2]:.4f} ({S1[2]*100:.1f}%)")
print(f" 总效应指数:")
print(f" xi1: {S_total[0]:.4f}")
print(f" xi2: {S_total[1]:.4f}")
print(f" xi3: {S_total[2]:.4f}")
print(f"\n【计算效率】")
print(f" PCE预测速度: {pce_time:.4f} 秒/{n_pred}次")
print(f" MCS计算速度: {mcs_time:.4f} 秒/{n_pred}次")
print(f" 加速比: {speedup:.1f}x")
print("\n" + "=" * 70)
print("案例三分析完成!")
print("=" * 70)
# -*- coding: utf-8 -*-
"""
案例四:基于可靠性的设计优化 (RBDO)
Reliability-Based Design Optimization
"""
import matplotlib
matplotlib.use('Agg')
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
from scipy.optimize import minimize
import os
# 创建输出目录
output_dir = r'd:\文档\500仿真领域\工程仿真\结构动力学仿真\主题091'
os.makedirs(output_dir, exist_ok=True)
plt.rcParams['font.sans-serif'] = ['SimHei', 'DejaVu Sans']
plt.rcParams['axes.unicode_minus'] = False
print("=" * 70)
print("案例四:基于可靠性的设计优化 (RBDO)")
print("=" * 70)
# ==================== 1. 问题定义 ====================
print("\n[1] 定义RBDO问题...")
# 设计变量:截面尺寸 (d)
# 随机变量:材料强度 (R) 和 荷载 (S)
# 设计参数
mu_R_nominal = 300.0 # 材料强度名义值 (MPa)
COV_R = 0.15 # 强度变异系数
mu_S_nominal = 200.0 # 荷载名义值 (kN)
COV_S = 0.25 # 荷载变异系数
# 目标可靠指标
target_beta = 3.0 # 对应 Pf ≈ 0.00135 (99.865%可靠度)
print(f" 设计变量: 截面尺寸 d")
print(f" 随机变量: R (对数正态), S (正态)")
print(f" 目标可靠指标: β = {target_beta}")
# ==================== 2. 极限状态函数 ====================
print("\n[2] 定义极限状态函数...")
def limit_state_function(d, R, S):
    """
    极限状态函数: g = R*A(d)/1000 - S  (单位: kN)
    d: 设计变量,截面直径 (mm)
    R: 材料强度 (MPa, 随机)
    S: 荷载 (kN, 随机)
    """
    # 圆形截面面积 A = π*(d/2)² (mm²); R*A 的单位为 N,除以1000换算为 kN
    A = np.pi * (d / 2) ** 2
    g = R * A / 1000.0 - S
    return g
def limit_state_standard(U, d):
    """
    标准正态空间中的极限状态函数
    U = [u_R, u_S]: 标准正态变量
    d: 设计变量
    """
    # 从标准正态变换到物理空间
    # R: 对数正态分布
    zeta_R = np.sqrt(np.log(1 + COV_R**2))
    lambda_R = np.log(mu_R_nominal) - 0.5 * zeta_R**2
    R = np.exp(lambda_R + zeta_R * U[0])
    # S: 正态分布 (简化处理)
    sigma_S = COV_S * mu_S_nominal
    S = mu_S_nominal + sigma_S * U[1]
    return limit_state_function(d, R, S)
print(" g(d, R, S) = R * π*(d/2)²/1000 - S  (kN)")
print(" 失效条件: g ≤ 0")
# ==================== 3. FORM分析计算可靠指标 ====================
print("\n[3] FORM可靠指标计算...")
def compute_beta_form(d, U0=None, max_iter=100, tol=1e-6):
"""
使用HL-RF算法计算给定设计变量d的可靠指标
"""
if U0 is None:
U0 = np.array([0.0, 0.0])
U = U0.copy()
for i in range(max_iter):
# 计算极限状态函数值
g = limit_state_standard(U, d)
# 数值计算梯度
eps = 1e-6
grad_g = np.zeros(2)
for j in range(2):
U_plus = U.copy()
U_plus[j] += eps
grad_g[j] = (limit_state_standard(U_plus, d) - g) / eps
# 梯度归一化
grad_norm = np.linalg.norm(grad_g)
if grad_norm < 1e-10:
break
grad_g_normalized = grad_g / grad_norm
# HL-RF更新
U_new = (np.dot(grad_g_normalized, U) - g / grad_norm) * grad_g_normalized
# 检查收敛
if np.linalg.norm(U_new - U) < tol:
break
U = U_new
beta = np.linalg.norm(U)
return beta, U
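compute_beta_form 给出的 β(d) 可以用直接MCS抽查。下面是自包含示例,取 d = 40 mm(示意值);注意量纲:R[MPa]·A[mm²] 的结果是 N,须除以1000换算成 kN 再与 S[kN] 相减:

```python
import numpy as np
from scipy import stats

m_R, c_R = 300.0, 0.15        # 材料强度均值 (MPa) 与变异系数
m_S, c_S = 200.0, 0.25        # 荷载均值 (kN) 与变异系数
d_chk = 40.0                  # 截面直径 (mm),示意取值

z_R = np.sqrt(np.log(1 + c_R**2))
l_R = np.log(m_R) - 0.5 * z_R**2
A_chk = np.pi * (d_chk / 2)**2          # 截面面积 (mm²)

rng = np.random.default_rng(3)
n = 500000
R = rng.lognormal(l_R, z_R, n)          # MPa
S = rng.normal(m_S, c_S * m_S, n)       # kN
g = R * A_chk / 1000.0 - S              # N -> kN 后相减
Pf = np.mean(g < 0)
beta_mc = -stats.norm.ppf(Pf)
print(f"d = {d_chk} mm: Pf = {Pf:.5f}, β = {beta_mc:.3f}")
```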
# 测试不同设计变量下的可靠指标
test_d_values = np.linspace(20, 50, 20)
beta_values = []
for d in test_d_values:
beta, _ = compute_beta_form(d)
beta_values.append(beta)
print(f" 测试截面尺寸范围: {test_d_values[0]:.1f} - {test_d_values[-1]:.1f} mm")
print(f" 对应可靠指标范围: {min(beta_values):.2f} - {max(beta_values):.2f}")
# ==================== 4. RBDO优化 ====================
print("\n[4] 执行RBDO优化...")
def objective(d):
"""目标函数:最小化截面面积 (即最小化重量/成本)"""
return np.pi * (d / 2) ** 2
def reliability_constraint(d):
"""可靠性约束: β(d) - β_target ≥ 0"""
beta, _ = compute_beta_form(d[0])
return beta - target_beta
# 定义优化问题
constraints = [{'type': 'ineq', 'fun': reliability_constraint}]
bounds = [(15, 60)] # 设计变量范围
# 初始猜测
d0 = [25.0]
# 执行优化
print(" 优化中...")
result = minimize(
objective,
d0,
method='SLSQP',
bounds=bounds,
constraints=constraints,
options={'ftol': 1e-6, 'disp': False, 'maxiter': 100}
)
d_optimal = result.x[0]
beta_optimal, U_optimal = compute_beta_form(d_optimal)
area_optimal = objective(d_optimal)
print(f" 优化结果:")
print(f" 最优截面尺寸: d* = {d_optimal:.4f} mm")
print(f" 最优截面面积: A* = {area_optimal:.4f} mm²")
print(f" 实际可靠指标: β = {beta_optimal:.4f}")
print(f" 目标可靠指标: β_target = {target_beta}")
print(f" 优化迭代次数: {result.nit}")
print(f" 是否成功: {result.success}")
# ==================== 5. 确定性优化对比 ====================
print("\n[5] 确定性优化对比...")
def deterministic_constraint(d):
    """确定性约束: 名义承载力 ≥ 名义荷载 (kN)"""
    A = np.pi * (d[0] / 2) ** 2
    return mu_R_nominal * A / 1000.0 - mu_S_nominal
# 确定性优化
constraints_det = [{'type': 'ineq', 'fun': deterministic_constraint}]
result_det = minimize(
objective,
d0,
method='SLSQP',
bounds=bounds,
constraints=constraints_det,
options={'ftol': 1e-6, 'disp': False}
)
d_deterministic = result_det.x[0]
area_deterministic = objective(d_deterministic)
beta_deterministic, _ = compute_beta_form(d_deterministic)
print(f" 确定性优化结果:")
print(f" 截面尺寸: d = {d_deterministic:.4f} mm")
print(f" 截面面积: A = {area_deterministic:.4f} mm²")
print(f" 实际可靠指标: β = {beta_deterministic:.4f}")
# 安全系数方法
safety_factor = 1.5
def safety_constraint(d):
    """安全系数约束: 名义承载力/SF ≥ 名义荷载 (kN)"""
    A = np.pi * (d[0] / 2) ** 2
    return mu_R_nominal * A / (1000.0 * safety_factor) - mu_S_nominal
constraints_sf = [{'type': 'ineq', 'fun': safety_constraint}]
result_sf = minimize(
objective,
d0,
method='SLSQP',
bounds=bounds,
constraints=constraints_sf,
options={'ftol': 1e-6, 'disp': False}
)
d_sf = result_sf.x[0]
area_sf = objective(d_sf)
beta_sf, _ = compute_beta_form(d_sf)
print(f"\n 安全系数法 (SF={safety_factor}) 结果:")
print(f" 截面尺寸: d = {d_sf:.4f} mm")
print(f" 截面面积: A = {area_sf:.4f} mm²")
print(f" 实际可靠指标: β = {beta_sf:.4f}")
# ==================== 6. 灵敏度分析 ====================
print("\n[6] 设计灵敏度分析...")
# 计算设计点
design_point_physical = []
# R at design point
zeta_R = np.sqrt(np.log(1 + COV_R**2))
lambda_R = np.log(mu_R_nominal) - 0.5 * zeta_R**2
R_star = np.exp(lambda_R + zeta_R * U_optimal[0])
# S at design point
sigma_S = COV_S * mu_S_nominal
S_star = mu_S_nominal + sigma_S * U_optimal[1]
print(f" Design point (U-space): U* = [{U_optimal[0]:.4f}, {U_optimal[1]:.4f}]")
print(f" Design point (physical space): R* = {R_star:.2f} MPa, S* = {S_star:.2f} kN")
# Direction cosines (importance factors) via a forward-difference gradient of g
eps = 1e-6
grad_g = np.zeros(2)
for j in range(2):
U_plus = U_optimal.copy()
U_plus[j] += eps
grad_g[j] = (limit_state_standard(U_plus, d_optimal) -
limit_state_standard(U_optimal, d_optimal)) / eps
grad_norm = np.linalg.norm(grad_g)
alpha = -grad_g / grad_norm  # unit importance vector
print(f" Importance factors:")
print(f" α_R = {alpha[0]:.4f}")
print(f" α_S = {alpha[1]:.4f}")
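As a sanity check on the sensitivity results above: for a converged FORM solution, the importance vector α is a unit vector, the design point satisfies U* = β·α, and the squared components of α give the relative contribution of each random variable. A minimal self-contained illustration (the numbers below are hypothetical stand-ins, not values from this script):

```python
import numpy as np

# Hypothetical converged FORM quantities (stand-ins for beta_optimal
# and alpha computed in the script above).
beta = 3.0
alpha = np.array([0.8, -0.6])  # unit importance vector

# FORM identity: the design point in standard normal space is U* = beta * alpha
U_star = beta * alpha

assert np.isclose(np.linalg.norm(alpha), 1.0)    # alpha is a unit vector
assert np.isclose(np.linalg.norm(U_star), beta)  # distance from origin equals beta
print(alpha**2)  # squared components sum to 1: relative importance of each variable
```

These identities are useful for debugging a design-point search: if ‖α‖ deviates from 1 or ‖U*‖ from β, the FORM iteration has not converged.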
# ==================== 7. Visualization ====================
print("\n[7] Plotting results...")
fig = plt.figure(figsize=(14, 10))
# 1. Reliability index vs design variable
ax1 = plt.subplot(3, 3, 1)
ax1.plot(test_d_values, beta_values, 'b-', linewidth=2, label='Reliability Index β(d)')
ax1.axhline(y=target_beta, color='r', linestyle='--', linewidth=2,
label=f'Target β = {target_beta}')
ax1.axvline(x=d_optimal, color='g', linestyle=':', linewidth=2,
label=f'Optimal d = {d_optimal:.2f}')
ax1.scatter([d_optimal], [beta_optimal], color='green', s=100, zorder=5)
ax1.set_xlabel('Design Variable d (mm)', fontsize=10)
ax1.set_ylabel('Reliability Index β', fontsize=10)
ax1.set_title('Reliability Index vs Design Variable', fontsize=11, fontweight='bold')
ax1.legend(fontsize=9)
ax1.grid(True, alpha=0.3)
ax1.set_ylim(0, max(beta_values) * 1.1)
# 2. Objective function vs design variable
ax2 = plt.subplot(3, 3, 2)
areas = np.pi * (test_d_values / 2) ** 2
ax2.plot(test_d_values, areas, 'b-', linewidth=2, label='Area A(d)')
ax2.axvline(x=d_optimal, color='g', linestyle=':', linewidth=2)
ax2.axvline(x=d_deterministic, color='orange', linestyle=':', linewidth=2)
ax2.axvline(x=d_sf, color='purple', linestyle=':', linewidth=2)
ax2.scatter([d_optimal], [area_optimal], color='green', s=100, zorder=5, label='RBDO')
ax2.scatter([d_deterministic], [area_deterministic], color='orange', s=100, zorder=5, label='Deterministic')
ax2.scatter([d_sf], [area_sf], color='purple', s=100, zorder=5, label=f'SF={safety_factor}')
ax2.set_xlabel('Design Variable d (mm)', fontsize=10)
ax2.set_ylabel('Cross-sectional Area (mm²)', fontsize=10)
ax2.set_title('Objective Function vs Design Variable', fontsize=11, fontweight='bold')
ax2.legend(fontsize=8)
ax2.grid(True, alpha=0.3)
# 3. Design space visualization (U-space)
ax3 = plt.subplot(3, 3, 3)
# Coarse grid for contouring
u1_range = np.linspace(-4, 4, 50)
u2_range = np.linspace(-4, 4, 50)
U1, U2 = np.meshgrid(u1_range, u2_range)
# Evaluate the limit-state function over the grid (element-wise loop; g takes scalar inputs)
G = np.zeros_like(U1)
for i in range(U1.shape[0]):
for j in range(U1.shape[1]):
G[i, j] = limit_state_standard([U1[i, j], U2[i, j]], d_optimal)
# Plot only the failure boundary g = 0 and shade the failure region
try:
    contour = ax3.contour(U1, U2, G, levels=[0], colors='red', linewidths=2)
    ax3.contourf(U1, U2, G, levels=[-1000, 0], colors=['red'], alpha=0.2)
except Exception:
    pass  # skip shading if the zero contour does not cross the plotted window
ax3.scatter([0], [0], color='blue', s=50, marker='o', label='Origin (Mean)')
if not np.isnan(U_optimal[0]):
ax3.scatter([U_optimal[0]], [U_optimal[1]], color='green', s=100, marker='*',
label='Design Point', zorder=5)
# Draw the circle of radius beta (distance from origin to design point)
theta = np.linspace(0, 2*np.pi, 100)
ax3.plot(beta_optimal * np.cos(theta), beta_optimal * np.sin(theta),
'g--', linewidth=1, label=f'β = {beta_optimal:.2f}')
ax3.set_xlabel('u_R (standard normal)', fontsize=10)
ax3.set_ylabel('u_S (standard normal)', fontsize=10)
ax3.set_title('Design Space in U-space', fontsize=11, fontweight='bold')
ax3.legend(fontsize=9)
ax3.grid(True, alpha=0.3)
ax3.axis('equal')
ax3.set_xlim(-4, 4)
ax3.set_ylim(-4, 4)
# 4. Bar chart comparing the three design methods
ax4 = plt.subplot(3, 3, 4)
methods = ['Deterministic', f'Safety Factor\n({safety_factor})', 'RBDO']
d_values = [d_deterministic, d_sf, d_optimal]
area_values = [area_deterministic, area_sf, area_optimal]
beta_values_compare = [beta_deterministic, beta_sf, beta_optimal]
x = np.arange(len(methods))
width = 0.35
bars1 = ax4.bar(x - width/2, d_values, width, label='d (mm)', color='steelblue', edgecolor='black')
ax4_twin = ax4.twinx()
bars2 = ax4_twin.bar(x + width/2, area_values, width, label='Area (mm²)', color='coral', edgecolor='black')
ax4.set_ylabel('Design Variable d (mm)', fontsize=10, color='steelblue')
ax4_twin.set_ylabel('Area (mm²)', fontsize=10, color='coral')
ax4.set_xticks(x)
ax4.set_xticklabels(methods, fontsize=9)
ax4.set_title('Design Comparison', fontsize=11, fontweight='bold')
ax4.grid(True, alpha=0.3, axis='y')
# Annotate bars with numeric values
for bar, val in zip(bars1, d_values):
ax4.text(bar.get_x() + bar.get_width()/2, bar.get_height() + 0.5,
f'{val:.1f}', ha='center', va='bottom', fontsize=9)
for bar, val in zip(bars2, area_values):
ax4_twin.text(bar.get_x() + bar.get_width()/2, bar.get_height() + 20,
f'{val:.0f}', ha='center', va='bottom', fontsize=9)
# 5. Reliability index comparison
ax5 = plt.subplot(3, 3, 5)
colors_beta = ['orange', 'purple', 'green']
bars = ax5.bar(methods, beta_values_compare, color=colors_beta, edgecolor='black')
ax5.axhline(y=target_beta, color='red', linestyle='--', linewidth=2, label=f'Target β = {target_beta}')
ax5.set_ylabel('Reliability Index β', fontsize=10)
ax5.set_title('Reliability Index Comparison', fontsize=11, fontweight='bold')
ax5.legend(fontsize=9)
ax5.grid(True, alpha=0.3, axis='y')
for bar, val in zip(bars, beta_values_compare):
ax5.text(bar.get_x() + bar.get_width()/2, bar.get_height() + 0.05,
f'{val:.2f}', ha='center', va='bottom', fontsize=10, fontweight='bold')
# 6. Failure probability comparison (Pf = Φ(-β))
ax6 = plt.subplot(3, 3, 6)
pf_values = [stats.norm.cdf(-b) for b in beta_values_compare]
bars = ax6.bar(methods, pf_values, color=colors_beta, edgecolor='black')
ax6.set_ylabel('Failure Probability Pf', fontsize=10)
ax6.set_title('Failure Probability Comparison', fontsize=11, fontweight='bold')
ax6.set_yscale('log')
ax6.grid(True, alpha=0.3, axis='y')
for bar, val in zip(bars, pf_values):
ax6.text(bar.get_x() + bar.get_width()/2, bar.get_height() * 1.5,
f'{val:.2e}', ha='center', va='bottom', fontsize=9)
# 7. Material savings relative to RBDO
ax7 = plt.subplot(3, 3, 7)
savings_sf = (area_sf - area_optimal) / area_sf * 100
savings_det = (area_deterministic - area_optimal) / area_deterministic * 100
# Order must match `methods`: [Deterministic, Safety Factor, RBDO (baseline, 0%)]
savings = [savings_det, savings_sf, 0]
bars = ax7.bar(methods, savings, color=['orange', 'purple', 'gray'], edgecolor='black')
ax7.set_ylabel('Material Savings (%)', fontsize=10)
ax7.set_title('Material Savings vs RBDO', fontsize=11, fontweight='bold')
ax7.grid(True, alpha=0.3, axis='y')
for bar, val in zip(bars, savings):
if val > 0:
ax7.text(bar.get_x() + bar.get_width()/2, bar.get_height() + 0.5,
f'{val:.1f}%', ha='center', va='bottom', fontsize=10, fontweight='bold')
# 8. Optimal design as a function of the target reliability index
ax8 = plt.subplot(3, 3, 8)
target_betas = np.array([2.0, 2.5, 3.0, 3.5, 4.0])
d_optimal_values = []
area_optimal_values = []
for tb in target_betas:
def constraint_temp(d):
beta, _ = compute_beta_form(d[0])
return beta - tb
result_temp = minimize(
objective,
[25.0],
method='SLSQP',
bounds=bounds,
constraints=[{'type': 'ineq', 'fun': constraint_temp}],
options={'ftol': 1e-6, 'disp': False}
)
if result_temp.success:
d_optimal_values.append(result_temp.x[0])
area_optimal_values.append(objective(result_temp.x[0]))
else:
d_optimal_values.append(np.nan)
area_optimal_values.append(np.nan)
ax8.plot(target_betas, d_optimal_values, 'o-', linewidth=2, markersize=8,
color='steelblue', label='Optimal d')
ax8_twin = ax8.twinx()
ax8_twin.plot(target_betas, area_optimal_values, 's-', linewidth=2, markersize=8,
color='coral', label='Optimal Area')
ax8.set_xlabel('Target Reliability Index β', fontsize=10)
ax8.set_ylabel('Optimal d (mm)', fontsize=10, color='steelblue')
ax8_twin.set_ylabel('Optimal Area (mm²)', fontsize=10, color='coral')
ax8.set_title('Optimal Design vs Target Reliability', fontsize=11, fontweight='bold')
ax8.grid(True, alpha=0.3)
# 9. Cost-reliability trade-off curve
ax9 = plt.subplot(3, 3, 9)
# Assuming cost scales with area, plot failure probability against optimal area
valid_idx = ~np.isnan(area_optimal_values)
areas_valid = np.array(area_optimal_values)[valid_idx]
betas_valid = target_betas[valid_idx]
pf_valid = [stats.norm.cdf(-b) for b in betas_valid]
ax9.semilogy(areas_valid, pf_valid, 'o-', linewidth=2, markersize=8, color='steelblue')
ax9.axhline(y=stats.norm.cdf(-target_beta), color='red', linestyle='--',
linewidth=2, label=f'Target Pf = {stats.norm.cdf(-target_beta):.2e}')
ax9.scatter([area_optimal], [stats.norm.cdf(-beta_optimal)], color='green', s=150,
marker='*', zorder=5, label='RBDO Solution')
ax9.set_xlabel('Cross-sectional Area (mm²)', fontsize=10)
ax9.set_ylabel('Failure Probability Pf', fontsize=10)
ax9.set_title('Cost-Reliability Trade-off', fontsize=11, fontweight='bold')
ax9.legend(fontsize=9)
ax9.grid(True, alpha=0.3)
plt.suptitle('Reliability-Based Design Optimization (RBDO)',
fontsize=14, fontweight='bold', y=0.995)
plt.tight_layout(rect=[0, 0, 1, 0.99])
plt.savefig(f'{output_dir}/case4_rbdo_analysis.png', dpi=100)
plt.close()
print(" ✓ case4_rbdo_analysis.png saved")
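Throughout the script, failure probability and reliability index are linked by the standard normal tail, Pf = Φ(−β); the inverse, β = −Φ⁻¹(Pf), recovers the index from a target probability. A quick standalone sketch of the mapping and its round trip:

```python
from scipy import stats

# Failure probability from reliability index: Pf = Phi(-beta),
# the relation used by every stats.norm.cdf(-beta) call above.
for beta in [2.0, 2.5, 3.0, 3.5, 4.0]:
    pf = stats.norm.cdf(-beta)
    print(f"beta = {beta:.1f}  ->  Pf = {pf:.2e}")

# Inverse mapping: beta = -Phi^{-1}(Pf); round-trips exactly
beta_back = -stats.norm.ppf(stats.norm.cdf(-3.0))
print(beta_back)  # 3.0 up to floating-point precision
```

The steep decay of this tail is why each half-unit increase in the target β in panel 8 costs a noticeable amount of extra cross-sectional area.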
# ==================== 8. Summary report ====================
print("\n" + "=" * 70)
print("RBDO analysis summary")
print("=" * 70)
print(f"\n[Problem definition]")
print(f" Objective: minimize cross-sectional area (weight/cost)")
print(f" Constraint: reliability index β ≥ {target_beta}")
print(f" Random variables: R (lognormal, COV={COV_R}), S (normal, COV={COV_S})")
print(f"\n[RBDO results]")
print(f" Optimal section diameter: d* = {d_optimal:.4f} mm")
print(f" Optimal cross-sectional area: A* = {area_optimal:.4f} mm²")
print(f" Achieved reliability index: β = {beta_optimal:.4f}")
print(f" Failure probability: Pf = {stats.norm.cdf(-beta_optimal):.2e}")
print(f"\n[Method comparison]")
print(f" {'Method':<20} {'d (mm)':<12} {'Area (mm²)':<15} {'β':<10} {'Pf':<12}")
print(f" {'-'*70}")
print(f" {'Deterministic':<20} {d_deterministic:<12.2f} {area_deterministic:<15.2f} {beta_deterministic:<10.2f} {stats.norm.cdf(-beta_deterministic):<12.2e}")
print(f" {f'Safety factor ({safety_factor})':<20} {d_sf:<12.2f} {area_sf:<15.2f} {beta_sf:<10.2f} {stats.norm.cdf(-beta_sf):<12.2e}")
print(f" {'RBDO':<20} {d_optimal:<12.2f} {area_optimal:<15.2f} {beta_optimal:<10.2f} {stats.norm.cdf(-beta_optimal):<12.2e}")
print(f"\n[Material savings]")
print(f" RBDO vs deterministic optimization: {(area_deterministic - area_optimal) / area_deterministic * 100:.1f}% saved")
print(f" RBDO vs safety-factor method: {(area_sf - area_optimal) / area_sf * 100:.1f}% saved")
print(f"\n[Design point]")
print(f" U-space: U* = [{U_optimal[0]:.4f}, {U_optimal[1]:.4f}]")
print(f" Physical space: R* = {R_star:.2f} MPa, S* = {S_star:.2f} kN")
print(f" Importance factors: α_R = {alpha[0]:.4f}, α_S = {alpha[1]:.4f}")
print("\n" + "=" * 70)
print("Case 4 analysis complete!")
print("=" * 70)
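The helper `compute_beta_form` used in this case is defined earlier in the chapter. For readers dropping in here, the sketch below reproduces the same kind of FORM calculation for a lognormal resistance R and a normal load effect S, framed as a constrained minimization (β = min ‖u‖ subject to g(u) = 0), and cross-checks it with crude Monte Carlo. All parameter values are hypothetical stand-ins, not the case-study inputs.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

# Hypothetical parameters: resistance R ~ lognormal, load effect S ~ normal
mu_R, cov_R = 320.0, 0.10
mu_S, cov_S = 200.0, 0.20

# Lognormal parameters of R from its mean and COV
zeta = np.sqrt(np.log(1 + cov_R**2))
lam = np.log(mu_R) - 0.5 * zeta**2
sig_S = cov_S * mu_S

def g(u):
    """Limit state g = R - S mapped to independent standard normal space."""
    R = np.exp(lam + zeta * u[0])
    S = mu_S + sig_S * u[1]
    return R - S

# FORM: beta is the distance from the origin to the closest point on g(u) = 0
res = minimize(lambda u: np.dot(u, u), x0=[0.0, 0.0],
               constraints=[{'type': 'eq', 'fun': g}])
beta = np.sqrt(res.fun)
pf_form = stats.norm.cdf(-beta)

# Crude Monte Carlo cross-check
rng = np.random.default_rng(0)
R = rng.lognormal(lam, zeta, 200_000)
S = rng.normal(mu_S, sig_S, 200_000)
pf_mc = np.mean(R - S < 0)
print(f"beta = {beta:.3f}, Pf(FORM) = {pf_form:.2e}, Pf(MC) = {pf_mc:.2e}")
```

For this mildly nonlinear limit state the FORM estimate and the Monte Carlo estimate should agree closely; larger discrepancies would suggest a strongly curved failure surface, for which SORM or importance sampling is more appropriate.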
# ============================================
# Generate GIF animation
# ============================================
def create_animation():
    """Create a demo response animation (decaying travelling sine wave) and save it as a GIF (requires Pillow)."""
    import os
    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.animation import FuncAnimation
fig, ax = plt.subplots(figsize=(10, 6))
ax.set_xlim(0, 10)
ax.set_ylim(-2, 2)
ax.set_xlabel('Time', fontsize=12)
ax.set_ylabel('Response', fontsize=12)
ax.set_title('Dynamic Response Animation', fontsize=14, fontweight='bold')
ax.grid(True, alpha=0.3)
line, = ax.plot([], [], 'b-', linewidth=2)
def init():
line.set_data([], [])
return line,
def update(frame):
x = np.linspace(0, 10, 100)
y = np.sin(x - frame * 0.2) * np.exp(-frame * 0.01)
line.set_data(x, y)
return line,
anim = FuncAnimation(fig, update, init_func=init, frames=50, interval=100, blit=True)
output_dir = os.path.dirname(os.path.abspath(__file__))
anim.save(f'{output_dir}/simulation_animation.gif', writer='pillow', fps=10)
print(f"Animation saved to: {output_dir}/simulation_animation.gif")
plt.close()
if __name__ == '__main__':
create_animation()