Topic 067: Machine-Learning-Assisted Electromagnetic Design
Introduction
1.1 Background and Motivation
Electromagnetic (EM) design is a complex optimization problem involving many parameters and intricate physics. Traditional EM design methods typically rely on:
- Empirical formulas and simplified models
- Large numbers of full-wave simulations (e.g., FDTD, FEM, MoM)
- Time-consuming parameter sweeps and optimization
With the rapid development of artificial intelligence, machine learning (ML) offers a new approach to EM design. ML can learn complex mappings from data and build surrogate models, dramatically accelerating the design process.
1.2 Advantages of Machine Learning in EM Design
- High computational efficiency: a trained model returns predictions in milliseconds, whereas a conventional simulation may take hours
- Continuously differentiable: models such as neural networks are differentiable, enabling gradient-based optimization
- Handles high-dimensional problems: effective in multi-parameter design spaces
- Data-driven: can learn from either measurement or simulation data
- Generalization: provides some predictive capability for unseen cases
1.3 Overview of This Topic
This topic covers applications of machine learning in EM design, including:
- Neural networks for predicting EM characteristics
- Support vector machines (SVM) for material classification
- Random forests for RCS prediction
- Comparative analysis of different ML algorithms
Machine Learning Fundamentals
2.1 Overview of Machine Learning
Machine learning is a branch of artificial intelligence whose core idea is to let computers learn patterns from data rather than follow explicit programming. By learning paradigm, machine learning can be divided into:
2.1.1 Supervised Learning
Learn a mapping f: X \rightarrow Y from labeled training data, where X is the input feature space and Y is the output label space.
Common algorithms:
- Linear regression
- Logistic regression
- Support vector machines (SVM)
- Decision trees
- Random forests
- Neural networks
EM applications:
- Predicting antenna performance from geometric parameters
- Predicting EM response from material parameters
- Predicting scattering characteristics as a function of frequency
2.1.2 Unsupervised Learning
Discover hidden structure or patterns in unlabeled data.
Common algorithms:
- K-means clustering
- Principal component analysis (PCA)
- Autoencoders
EM applications:
- Dimensionality reduction and visualization of EM data
- Automatic classification of EM modes
- Anomaly detection
2.1.3 Reinforcement Learning
Learn an optimal policy through interaction with an environment.
EM applications:
- Adaptive antenna-array control
- Reconfigurable intelligent surface optimization
- Electromagnetic compatibility (EMC) design optimization
2.2 Machine Learning Workflow
┌──────────────────┐    ┌──────────────────┐    ┌─────────────────────┐
│ Data Collection  │ -> │ Preprocessing    │ -> │ Feature Engineering │
│ (simulation /    │    │ (cleaning /      │    │ (selection /        │
│  measurement)    │    │  normalization)  │    │  extraction)        │
└──────────────────┘    └──────────────────┘    └─────────────────────┘
                                                           │
                                                           v
┌──────────────────┐    ┌──────────────────┐    ┌─────────────────────┐
│ Deployment       │ <- │ Model Evaluation │ <- │ Model Training      │
│ (application)    │    │ (cross-val/test) │    │ (hyperparameter     │
└──────────────────┘    └──────────────────┘    │  tuning)            │
                                                └─────────────────────┘
2.3 Model Evaluation Metrics
2.3.1 Regression Metrics
Mean squared error (MSE):
MSE = \frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2
Root mean squared error (RMSE):
RMSE = \sqrt{MSE}
Mean absolute error (MAE):
MAE = \frac{1}{n}\sum_{i=1}^{n}|y_i - \hat{y}_i|
Coefficient of determination (R²):
R^2 = 1 - \frac{\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2}
2.3.2 Classification Metrics
Accuracy:
Accuracy = \frac{TP + TN}{TP + TN + FP + FN}
Precision:
Precision = \frac{TP}{TP + FP}
Recall:
Recall = \frac{TP}{TP + FN}
F1 score:
F1 = 2 \cdot \frac{Precision \cdot Recall}{Precision + Recall}
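As a sanity check, the four classification metrics above follow directly from the confusion-matrix counts. The sketch below computes them in plain Python; the counts are made-up illustrative numbers:

```python
def classification_metrics(tp, tn, fp, fn):
    """Compute accuracy, precision, recall and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts: 80 TP, 90 TN, 10 FP, 20 FN
acc, prec, rec, f1 = classification_metrics(80, 90, 10, 20)
print(f"Accuracy={acc:.3f}, Precision={prec:.3f}, Recall={rec:.3f}, F1={f1:.3f}")
```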
Machine Learning Applications in EM Design
3.1 Application Scenarios
3.1.1 Forward Modeling
Predict the EM response from design parameters.
Input: geometric parameters, material parameters, operating frequency, etc.
Output: S-parameters, radiation patterns, RCS, etc.
Example:
- Input: microstrip patch length L, width W, substrate thickness h, permittivity \epsilon_r
- Output: resonant frequency f_0, bandwidth BW, gain G
3.1.2 Inverse Design
Infer design parameters from a target EM response.
Input: target S-parameters, target radiation pattern, etc.
Output: optimal design parameters
Challenges:
- Non-uniqueness: different designs can produce similar responses
- Constraints: physical realizability
3.1.3 Optimization Acceleration
Replace expensive simulations with a surrogate model to speed up optimization.
Methods:
- Surrogate-based optimization (SBO)
- Bayesian optimization
- Evolutionary algorithms combined with machine learning
3.2 Data Preparation
3.2.1 Data Sources
Simulation data:
- Pros: abundant, inexpensive, parameters fully controllable
- Cons: subject to simulation error
- Common tools: CST, HFSS, COMSOL, in-house codes
Measurement data:
- Pros: faithful to reality
- Cons: expensive, limited in quantity, noisy
- Uses: model validation and fine-tuning
3.2.2 Data Augmentation
Parameter perturbation:
Add small random perturbations to the design parameters to generate additional training samples.
Physics-based constraints:
Use physical laws to generate synthetic data that satisfies physical constraints.
Interpolation:
Use Kriging, RBF, or similar interpolation methods to generate new samples between existing data points.
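As an illustration of interpolation-based augmentation, the sketch below uses SciPy's `RBFInterpolator` to generate synthetic samples between a handful of design points. The sample values and the simple f ∝ 1/L trend are assumptions for demonstration only, not results from the text:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Toy data: resonant frequency sampled at a few patch lengths (synthetic trend)
L_samples = np.linspace(10, 40, 7).reshape(-1, 1)   # patch length (mm)
f_samples = 150.0 / L_samples.ravel()               # idealized f ~ 1/L trend (GHz)

# RBF interpolant through the sampled design points
rbf = RBFInterpolator(L_samples, f_samples, kernel='thin_plate_spline')

# Generate 50 synthetic training samples between the original points
L_new = np.random.uniform(10, 40, size=(50, 1))
f_new = rbf(L_new)
```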
3.3 Feature Engineering
3.3.1 Feature Selection
Correlation analysis:
Select input features strongly correlated with the output variable.
Principal component analysis (PCA):
Reduce dimensionality and decorrelate the features.
Domain knowledge:
Use electromagnetics expertise to choose physically meaningful features.
3.3.2 Feature Transformation
Min-max normalization:
x' = \frac{x - x_{min}}{x_{max} - x_{min}}
Standardization (z-score):
x' = \frac{x - \mu}{\sigma}
Log transform:
For data spanning several orders of magnitude (e.g., frequency, RCS).
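The three transforms above take one line each in NumPy. A minimal sketch on made-up wide-range data:

```python
import numpy as np

x = np.array([1e-3, 1e-1, 1.0, 1e2, 1e4])   # e.g. RCS values spanning 7 decades

# Min-max normalization to [0, 1]
x_minmax = (x - x.min()) / (x.max() - x.min())

# Z-score standardization (zero mean, unit variance)
x_std = (x - x.mean()) / x.std()

# Log transform: far better behaved for data spanning many orders of magnitude
x_log = np.log10(x)
```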
Neural Network EM Modeling
4.1 Neural Network Basics
4.1.1 The Perceptron
The most basic neural-network unit is the perceptron:
y = \sigma\left(\sum_{i=1}^{n} w_i x_i + b\right)
where:
- x_i: input features
- w_i: weights
- b: bias
- \sigma: activation function
4.1.2 Multilayer Perceptron (MLP)
An MLP consists of an input layer, one or more hidden layers, and an output layer:
Input     Hidden 1    Hidden 2    Output
x1 ──○    ○──○──○    ○──○──○    ○── y1
     │    │  │  │    │  │  │    │
x2 ──○    ○──○──○    ○──○──○    ○── y2
     │    │  │  │    │  │  │    │
x3 ──○    ○──○──○    ○──○──○    ○── y3
Forward propagation:
\mathbf{z}^{[l]} = \mathbf{W}^{[l]}\mathbf{a}^{[l-1]} + \mathbf{b}^{[l]}
\mathbf{a}^{[l]} = \sigma(\mathbf{z}^{[l]})
4.1.3 Activation Functions
Sigmoid:
\sigma(x) = \frac{1}{1 + e^{-x}}
ReLU (Rectified Linear Unit):
\sigma(x) = \max(0, x)
Tanh:
\sigma(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}}
Leaky ReLU:
\sigma(x) = \begin{cases} x & x > 0 \\ \alpha x & x \leq 0 \end{cases}
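The four activation functions above can each be written in a line of NumPy (a minimal sketch):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    return np.maximum(0.0, x)

def tanh(x):
    return np.tanh(x)

def leaky_relu(x, alpha=0.01):
    # Passes positive inputs through, scales negative inputs by alpha
    return np.where(x > 0, x, alpha * x)
```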
4.2 Training
4.2.1 Loss Functions
Mean squared error (MSE):
L = \frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2
Mean absolute error (MAE):
L = \frac{1}{n}\sum_{i=1}^{n}|y_i - \hat{y}_i|
Huber loss:
Combines the advantages of MSE and MAE: quadratic for small errors, linear for large ones, making it robust to outliers.
4.2.2 Backpropagation
Backpropagation is the core algorithm for training neural networks; it computes gradients via the chain rule:
\frac{\partial L}{\partial w} = \frac{\partial L}{\partial a} \cdot \frac{\partial a}{\partial z} \cdot \frac{\partial z}{\partial w}
4.2.3 Optimizers
Stochastic gradient descent (SGD):
w \leftarrow w - \eta \frac{\partial L}{\partial w}
Adam:
Combines momentum with per-parameter adaptive learning rates.
Learning-rate schedules:
- Constant learning rate
- Learning-rate decay
- Cosine annealing
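Cosine annealing, for example, takes one line; the maximum and minimum rates below are illustrative values, not prescriptions:

```python
import math

def cosine_annealing(epoch, total_epochs, lr_max=1e-2, lr_min=1e-4):
    """Cosine-annealed learning rate: starts at lr_max, decays smoothly to lr_min."""
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * epoch / total_epochs))
```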
4.3 Case Study: Predicting EM Characteristics
4.3.1 Problem Statement
Predict the resonant frequency and bandwidth of a microstrip patch antenna.
Input features:
- Patch length L (mm)
- Patch width W (mm)
- Substrate thickness h (mm)
- Relative permittivity \epsilon_r
Outputs:
- Resonant frequency f_0 (GHz)
- Bandwidth BW (%)
4.3.2 Network Architecture
Input layer: 4 neurons (L, W, h, εr)
    ↓
Hidden layer 1: 64 neurons, ReLU activation
    ↓
Hidden layer 2: 32 neurons, ReLU activation
    ↓
Hidden layer 3: 16 neurons, ReLU activation
    ↓
Output layer: 2 neurons (f0, BW)
4.3.3 Training Results
Performance on the test set:
- Resonant frequency: R^2 = 0.98, RMSE = 0.05 GHz
- Bandwidth: R^2 = 0.95, RMSE = 0.8%
SVM for Material Classification
5.1 Support Vector Machine Fundamentals
5.1.1 Linear SVM
An SVM seeks the optimal hyperplane separating the classes:
\mathbf{w}^T \mathbf{x} + b = 0
Optimization objective:
\min_{\mathbf{w}, b} \frac{1}{2}\|\mathbf{w}\|^2
\text{s.t. } y_i(\mathbf{w}^T \mathbf{x}_i + b) \geq 1, \forall i
5.1.2 The Kernel Trick
For problems that are not linearly separable, an SVM uses a kernel function to map the data into a higher-dimensional space:
Polynomial kernel:
K(\mathbf{x}, \mathbf{x}') = (\mathbf{x}^T \mathbf{x}' + c)^d
Gaussian (RBF) kernel:
K(\mathbf{x}, \mathbf{x}') = \exp\left(-\frac{\|\mathbf{x} - \mathbf{x}'\|^2}{2\sigma^2}\right)
Sigmoid kernel:
K(\mathbf{x}, \mathbf{x}') = \tanh(\alpha \mathbf{x}^T \mathbf{x}' + c)
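The Gaussian kernel matrix over a set of points can be computed vectorized in a few lines; this is a minimal sketch on made-up points:

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    """Gaussian (RBF) kernel matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    # Pairwise squared distances via the expansion ||a-b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    sq_dists = (np.sum(X1**2, axis=1)[:, None]
                + np.sum(X2**2, axis=1)[None, :]
                - 2 * X1 @ X2.T)
    return np.exp(-sq_dists / (2 * sigma**2))

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
K = rbf_kernel(X, X)
```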
5.2 Classifying EM Materials
5.2.1 Problem Statement
Classify materials from their EM response features.
Input features:
- Real part of permittivity \epsilon'
- Imaginary part of permittivity \epsilon''
- Real part of permeability \mu'
- Imaginary part of permeability \mu''
- Conductivity \sigma
Classes:
- Class 0: dielectric materials (low loss)
- Class 1: conductive materials (high loss)
- Class 2: magnetic materials
- Class 3: metamaterials
5.2.2 Classification Results
Performance of an RBF-kernel SVM:
- Accuracy: 96.5%
- Precision: 95.8%
- Recall: 96.2%
Random Forests for RCS Prediction
6.1 Random Forest Fundamentals
6.1.1 Decision Trees
A decision tree predicts by recursively partitioning the feature space.
Splitting criteria:
- Information gain
- Gini impurity
- Mean squared error (MSE, for regression)
6.1.2 Ensemble Learning
A random forest is an ensemble of decision trees.
Bagging (bootstrap aggregating):
- Draw bootstrap samples (with replacement) from the training set
- Train one decision tree on each sample
- Average the trees' predictions (regression) or take a majority vote (classification)
Random feature selection:
At each node split, only a random subset of the features is considered.
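The bagging procedure above can be sketched directly with scikit-learn's `DecisionTreeRegressor`. This is a toy 1-D illustration on synthetic data, not the RCS problem itself:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
y = np.sin(X.ravel()) + rng.normal(0, 0.1, 200)   # noisy target

# Bagging: each tree is fit on a bootstrap sample (drawn with replacement)
trees = []
for i in range(25):
    idx = rng.integers(0, len(X), len(X))
    tree = DecisionTreeRegressor(max_depth=6, random_state=i).fit(X[idx], y[idx])
    trees.append(tree)

# Ensemble prediction = average over all trees (regression)
X_test = np.linspace(0, 10, 50).reshape(-1, 1)
y_pred = np.mean([t.predict(X_test) for t in trees], axis=0)
```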
6.2 RCS Prediction
6.2.1 Problem Statement
Predict a target's radar cross section (RCS).
Input features:
- Target geometry (length, width, height)
- Incidence angles (azimuth, elevation)
- Frequency
- Material properties
Output:
- RCS value (dBsm)
6.2.2 Prediction Performance
Random forest model performance:
- R^2 = 0.92
- RMSE = 1.8 dB
- Mean absolute percentage error: 8.5%
Comparison of Machine Learning Algorithms
7.1 Algorithm Characteristics
| Algorithm | Training speed | Prediction speed | Interpretability | Nonlinear modeling | Overfitting resistance |
|---|---|---|---|---|---|
| Linear regression | Fast | Fast | High | No | Medium |
| Decision tree | Fast | Fast | High | Yes | Low |
| Random forest | Medium | Fast | Medium | Yes | High |
| SVM | Medium | Fast | Medium | Yes | High |
| Neural network | Slow | Fast | Low | Yes | Medium |
7.2 Selection Guide for EM Applications
Small datasets (<1000 samples):
- Recommended: SVM, random forest
- Rationale: less prone to overfitting, good generalization
Large datasets (>10000 samples):
- Recommended: neural networks
- Rationale: can learn complex nonlinear relationships
Interpretability required:
- Recommended: decision trees, linear regression
- Rationale: transparent model structure, easy to understand
High-dimensional features:
- Recommended: random forest, neural networks
- Rationale: automatic feature selection/extraction
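One practical way to apply this guide is simply to cross-validate the candidate models on your own data. The sketch below compares the three main algorithms on a synthetic regression problem; the data-generating function and hyperparameters are illustrative assumptions:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for an EM dataset: 4 design parameters, 1 response
rng = np.random.default_rng(42)
X = rng.uniform(-1, 1, size=(300, 4))
y = np.sin(3 * X[:, 0]) + X[:, 1] * X[:, 2] + 0.05 * rng.normal(size=300)

models = {
    'SVR (RBF)': SVR(kernel='rbf'),
    'Random Forest': RandomForestRegressor(n_estimators=100, random_state=0),
    'MLP': MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
}

results = {}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring='r2')  # 5-fold CV
    results[name] = scores.mean()
    print(f"{name}: R² = {scores.mean():.3f} ± {scores.std():.3f}")
```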
Python Implementation
8.1 Environment Setup
# Required libraries
import numpy as np
import matplotlib.pyplot as plt
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error, r2_score, accuracy_score
8.2 Neural Network
# X, y: feature matrix and targets (e.g., from simulation data)
# Split the data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Scale the features
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

# Build the neural network
mlp = MLPRegressor(
    hidden_layer_sizes=(64, 32, 16),
    activation='relu',
    solver='adam',
    max_iter=1000,
    random_state=42
)

# Train
mlp.fit(X_train_scaled, y_train)

# Predict
y_pred = mlp.predict(X_test_scaled)

# Evaluate
mse = mean_squared_error(y_test, y_pred)
r2 = r2_score(y_test, y_pred)
print(f"MSE: {mse:.4f}, R²: {r2:.4f}")
8.3 SVM
# Build the SVM classifier
svm = SVC(
    kernel='rbf',
    C=1.0,
    gamma='scale',
    random_state=42
)

# Train
svm.fit(X_train_scaled, y_train)

# Predict
y_pred = svm.predict(X_test_scaled)

# Evaluate accuracy
accuracy = accuracy_score(y_test, y_pred)
print(f"Accuracy: {accuracy:.4f}")
8.4 Random Forest
# Build the random forest model
rf = RandomForestRegressor(
    n_estimators=100,
    max_depth=10,
    random_state=42
)

# Train
rf.fit(X_train, y_train)

# Predict
y_pred = rf.predict(X_test)

# Evaluate
mse = mean_squared_error(y_test, y_pred)
r2 = r2_score(y_test, y_pred)
print(f"MSE: {mse:.4f}, R²: {r2:.4f}")

# Feature importances
importances = rf.feature_importances_
Case Studies
9.1 Case 1: Microstrip Antenna Design Optimization
Background: design a microstrip antenna operating at 2.4 GHz.
Traditional approach:
- Initial design from empirical formulas
- Optimization via full-wave simulation (HFSS/CST)
- Time required: hours to days
Machine-learning approach:
- Collect 1000 sets of design parameters with corresponding simulation results
- Train a neural-network surrogate model
- Optimize rapidly on the surrogate
- Final verification: only 3-5 full-wave simulations
- Time required: reduced from days to hours
Comparison:
| Method | Design time | Simulations | Final performance |
|---|---|---|---|
| Traditional optimization | 3 days | 200+ | Good |
| Machine learning | 4 hours | 5 | Good |
9.2 Case 2: Rapid Identification of Material EM Properties
Background: identify unknown materials from EM measurement data.
Method:
- Train an SVM classifier
- Input: permittivity and permeability spectra
- Output: material class
Results:
- Classification accuracy: 96.5%
- Inference time per sample: <1 ms
- Suitable for real-time material-identification systems
Summary and Outlook
10.1 Key Conclusions
- Machine learning can substantially accelerate EM design: surrogate models can shorten design cycles from days to hours.
- Different algorithms suit different scenarios:
  - Small datasets: SVM, random forests
  - Large datasets: neural networks
  - Interpretability needed: decision trees, linear models
- Data quality is critical: high-quality training data is the foundation of model performance.
- Incorporating physics: embedding electromagnetic knowledge into ML models improves accuracy and generalization.
10.2 Challenges and Limitations
- High data-acquisition cost: high-quality EM simulation data requires substantial computing resources.
- Limited generalization: models may perform poorly outside the training-data distribution.
- Physical consistency is hard to guarantee: purely data-driven methods can produce results that violate physical laws.
- Limited interpretability: deep learning models in particular are hard to explain.
10.3 Future Directions
- Physics-informed neural networks (PINNs): embed physical laws such as Maxwell's equations into the network.
- Transfer learning: reuse a model trained on one problem for related problems.
- Active learning: intelligently select the most informative samples to simulate, reducing data requirements.
- Generative models: use GANs, VAEs, etc., to generate designs that satisfy physical constraints.
- Multi-fidelity modeling: combine high-fidelity (accurate but slow) and low-fidelity (fast but coarse) simulation data.
10.4 Study Recommendations
- Master the fundamentals: understand both electromagnetics and machine-learning algorithms in depth.
- Practice first: build experience through real projects.
- Follow the frontier: track recent advances such as PINNs and neural operators.
- Collaborate across disciplines: work with experts in electromagnetics, computer science, and mathematics.
Appendix: Code Files
This topic includes the following Python source files:
- neural_network_em.py: neural-network EM modeling
- svm_material_classification.py: SVM material classification
- random_forest_rcs.py: random-forest RCS prediction
- ml_comparison.py: comparative analysis of ML algorithms
To run:
python neural_network_em.py
python svm_material_classification.py
python random_forest_rcs.py
python ml_comparison.py
# File: ml_comparison.py
# -*- coding: utf-8 -*-
"""
Topic 067: Machine Learning in Electromagnetic Design
Comprehensive comparison of machine-learning methods
"""
import matplotlib
matplotlib.use('Agg')
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle, Circle, FancyBboxPatch
import os

# Create the output directory
output_dir = r'd:\文档\500仿真领域\工程仿真\电磁场仿真\主题067'
os.makedirs(output_dir, exist_ok=True)

# Configure fonts so Chinese characters render correctly
plt.rcParams['font.sans-serif'] = ['SimHei', 'DejaVu Sans']
plt.rcParams['axes.unicode_minus'] = False


def plot_ml_comparison(output_dir):
    """Plot a comprehensive comparison of the three ML methods."""
    fig = plt.figure(figsize=(16, 12))

    # 1. Performance comparison of the three methods (radar chart)
    ax1 = plt.subplot(2, 2, 1, projection='polar')
    methods = ['Neural Network', 'SVM', 'Random Forest']
    metrics = ['Accuracy', 'Speed', 'Interpretability', 'Scalability', 'Robustness']
    # Subjective scores based on typical behavior
    scores = {
        'Neural Network': [0.95, 0.6, 0.3, 0.9, 0.7],
        'SVM': [0.90, 0.7, 0.6, 0.6, 0.8],
        'Random Forest': [0.88, 0.8, 0.9, 0.8, 0.85]
    }
    angles = np.linspace(0, 2 * np.pi, len(metrics), endpoint=False).tolist()
    angles += angles[:1]
    colors = ['#FF6B6B', '#4ECDC4', '#45B7D1']
    for method, color in zip(methods, colors):
        # Close the polygon by repeating the first value
        values = scores[method] + scores[method][:1]
        ax1.plot(angles, values, 'o-', linewidth=2, label=method, color=color)
        ax1.fill(angles, values, alpha=0.25, color=color)
    ax1.set_xticks(angles[:-1])
    ax1.set_xticklabels(metrics, fontsize=10)
    ax1.set_ylim(0, 1)
    ax1.set_title('ML Methods Performance Comparison', fontsize=12, fontweight='bold', pad=20)
    ax1.legend(loc='upper right', bbox_to_anchor=(1.3, 1.0), fontsize=9)
    ax1.grid(True)

    # 2. Suitability by application scenario
    ax2 = plt.subplot(2, 2, 2)
    scenarios = ['Antenna Design', 'Material Classification', 'RCS Prediction',
                 'Filter Design', 'Array Synthesis', 'Inverse Problem']
    nn_scores = [0.95, 0.70, 0.85, 0.90, 0.88, 0.92]
    svm_scores = [0.75, 0.95, 0.70, 0.65, 0.60, 0.55]
    rf_scores = [0.80, 0.85, 0.90, 0.85, 0.82, 0.75]
    x = np.arange(len(scenarios))
    width = 0.25
    ax2.bar(x - width, nn_scores, width, label='Neural Network',
            color='#FF6B6B', edgecolor='black', alpha=0.8)
    ax2.bar(x, svm_scores, width, label='SVM',
            color='#4ECDC4', edgecolor='black', alpha=0.8)
    ax2.bar(x + width, rf_scores, width, label='Random Forest',
            color='#45B7D1', edgecolor='black', alpha=0.8)
    ax2.set_ylabel('Suitability Score', fontsize=11)
    ax2.set_title('Method Suitability by Application', fontsize=12, fontweight='bold')
    ax2.set_xticks(x)
    ax2.set_xticklabels(scenarios, rotation=45, ha='right', fontsize=9)
    ax2.legend(fontsize=9)
    ax2.grid(True, alpha=0.3, axis='y')
    ax2.set_ylim([0, 1.1])

    # 3. Computational complexity comparison
    ax3 = plt.subplot(2, 2, 3)
    complexity_data = {
        'Training Time\n(log scale)': [2.5, 1.0, 3.0],
        'Prediction Time': [0.5, 2.0, 1.0],
        'Memory Usage': [3.0, 1.5, 2.5],
        'Hyperparameter\nSensitivity': [2.5, 2.0, 1.5]
    }
    categories = list(complexity_data.keys())
    nn_complexity = [complexity_data[cat][0] for cat in categories]
    svm_complexity = [complexity_data[cat][1] for cat in categories]
    rf_complexity = [complexity_data[cat][2] for cat in categories]
    x = np.arange(len(categories))
    width = 0.25
    ax3.bar(x - width, nn_complexity, width, label='Neural Network',
            color='#FF6B6B', edgecolor='black', alpha=0.8)
    ax3.bar(x, svm_complexity, width, label='SVM',
            color='#4ECDC4', edgecolor='black', alpha=0.8)
    ax3.bar(x + width, rf_complexity, width, label='Random Forest',
            color='#45B7D1', edgecolor='black', alpha=0.8)
    ax3.set_ylabel('Relative Complexity (Lower is Better)', fontsize=11)
    ax3.set_title('Computational Complexity Comparison', fontsize=12, fontweight='bold')
    ax3.set_xticks(x)
    ax3.set_xticklabels(categories, fontsize=9)
    ax3.legend(fontsize=9)
    ax3.grid(True, alpha=0.3, axis='y')

    # 4. Summary table of EM application cases
    ax4 = plt.subplot(2, 2, 4)
    ax4.axis('off')
    table_data = [
        ['Method', 'Best For', 'Key Advantage', 'Limitation'],
        ['Neural Network', 'Complex nonlinear\nmapping', 'High accuracy,\nfeature learning', 'Black box,\nneeds much data'],
        ['SVM', 'Classification\nwith clear margin', 'Good generalization,\nkernel trick', 'Slow for large\ndatasets'],
        ['Random Forest', 'Mixed data types,\ninterpretability', 'Feature importance,\nrobust', 'Can overfit with\nnoisy data']
    ]
    table = ax4.table(cellText=table_data[1:], colLabels=table_data[0],
                      cellLoc='center', loc='center',
                      colWidths=[0.2, 0.25, 0.3, 0.25])
    table.auto_set_font_size(False)
    table.set_fontsize(9)
    table.scale(1, 2.5)
    # Header row style
    for i in range(4):
        table[(0, i)].set_facecolor('#4A90E2')
        table[(0, i)].set_text_props(weight='bold', color='white')
    # Alternate row shading
    for i in range(1, 4):
        for j in range(4):
            if i % 2 == 0:
                table[(i, j)].set_facecolor('#F0F0F0')
    ax4.set_title('ML Methods Summary for EM Applications',
                  fontsize=12, fontweight='bold', pad=20)

    plt.tight_layout()
    plt.savefig(f'{output_dir}/ml_comparison.png', dpi=150, bbox_inches='tight')
    plt.close()
    print("  ML comparison saved")


def plot_ml_workflow(output_dir):
    """Plot the machine-learning workflow diagram."""
    fig, ax = plt.subplots(figsize=(14, 8))
    ax.set_xlim(0, 14)
    ax.set_ylim(0, 8)
    ax.axis('off')

    # Workflow steps: (label, x, y, color)
    steps = [
        ('Data\nCollection', 1, 6, '#FFE66D'),
        ('Preprocessing', 3, 6, '#FF6B6B'),
        ('Feature\nEngineering', 5, 6, '#4ECDC4'),
        ('Model\nSelection', 7, 6, '#45B7D1'),
        ('Training', 9, 6, '#96CEB4'),
        ('Validation', 11, 6, '#FFEAA7'),
        ('Deployment', 13, 6, '#DDA0DD')
    ]
    # Draw the step boxes and the arrows between them
    for i, (label, x, y, color) in enumerate(steps):
        box = FancyBboxPatch((x - 0.8, y - 0.6), 1.6, 1.2,
                             boxstyle="round,pad=0.1",
                             facecolor=color, edgecolor='black', linewidth=2)
        ax.add_patch(box)
        ax.text(x, y, label, ha='center', va='center',
                fontsize=10, fontweight='bold')
        if i < len(steps) - 1:
            ax.annotate('', xy=(steps[i + 1][1] - 0.8, y), xytext=(x + 0.8, y),
                        arrowprops=dict(arrowstyle='->', lw=2, color='black'))

    # Feedback loop from validation back to model selection
    ax.annotate('', xy=(7, 4.5), xytext=(13, 4.5),
                arrowprops=dict(arrowstyle='->', lw=2, color='red',
                                connectionstyle="arc3,rad=.3"))
    ax.text(10, 4, 'Feedback Loop', ha='center', fontsize=10,
            color='red', fontweight='bold')

    # Detail annotations under each step
    details = [
        ('EM Simulation\nMeasurement Data', 1, 3.5),
        ('Normalization\nMissing Values', 3, 3.5),
        ('Dimensionality\nReduction', 5, 3.5),
        ('NN/SVM/RF\nComparison', 7, 3.5),
        ('Gradient Descent\nHyperparameter Tuning', 9, 3.5),
        ('Cross-validation\nTest Set', 11, 3.5),
        ('Real-time\nPrediction', 13, 3.5)
    ]
    for idx, (label, x, y) in enumerate(details):
        ax.text(x, y, label, ha='center', va='center', fontsize=8,
                style='italic', color='gray')
        # Dashed connector up to the corresponding step box
        ax.plot([x, x], [y + 0.8, steps[idx][2] - 0.6],
                'k--', alpha=0.3, linewidth=1)

    ax.set_title('Machine Learning Workflow for Electromagnetic Design',
                 fontsize=14, fontweight='bold', pad=20)
    plt.tight_layout()
    plt.savefig(f'{output_dir}/ml_workflow.png', dpi=150, bbox_inches='tight')
    plt.close()
    print("  ML workflow diagram saved")


def plot_em_ml_applications(output_dir):
    """Plot an overview of ML applications in electromagnetics."""
    fig = plt.figure(figsize=(14, 10))

    # 1. Application-area distribution
    ax1 = plt.subplot(2, 2, 1)
    applications = ['Antenna Design', 'RCS Prediction', 'Material Design',
                    'Filter Design', 'Array Synthesis', 'Channel Modeling',
                    'Inverse Problems', 'Optimization']
    ml_usage = [85, 70, 65, 60, 75, 80, 55, 90]
    colors = plt.cm.Spectral(np.linspace(0, 1, len(applications)))
    bars = ax1.barh(applications, ml_usage, color=colors, edgecolor='black')
    ax1.set_xlabel('ML Adoption Rate (%)', fontsize=11)
    ax1.set_title('ML Applications in Electromagnetics', fontsize=12, fontweight='bold')
    ax1.grid(True, alpha=0.3, axis='x')
    ax1.set_xlim([0, 100])
    # Value labels next to each bar
    for bar, val in zip(bars, ml_usage):
        ax1.text(val + 1, bar.get_y() + bar.get_height() / 2,
                 f'{val}%', va='center', fontsize=9)

    # 2. Traditional vs ML-based methods
    ax2 = plt.subplot(2, 2, 2)
    aspects = ['Computation\nTime', 'Accuracy', 'Design\nFlexibility',
               'Multi-objective\nHandling', 'Data\nEfficiency']
    traditional = [3, 4, 2, 2, 5]
    ml_based = [5, 4.5, 5, 5, 3]
    x = np.arange(len(aspects))
    width = 0.35
    ax2.bar(x - width / 2, traditional, width, label='Traditional Methods',
            color='lightcoral', edgecolor='black', alpha=0.8)
    ax2.bar(x + width / 2, ml_based, width, label='ML-Based Methods',
            color='lightblue', edgecolor='black', alpha=0.8)
    ax2.set_ylabel('Score (1-5)', fontsize=11)
    ax2.set_title('Traditional vs ML-Based Methods', fontsize=12, fontweight='bold')
    ax2.set_xticks(x)
    ax2.set_xticklabels(aspects, fontsize=9)
    ax2.legend(fontsize=9)
    ax2.grid(True, alpha=0.3, axis='y')
    ax2.set_ylim([0, 6])

    # 3. Data-requirement analysis
    ax3 = plt.subplot(2, 2, 3)
    methods = ['Neural\nNetwork', 'SVM', 'Random\nForest', 'Gaussian\nProcess',
               'KNN', 'Decision\nTree']
    data_requirement = [5, 3, 3, 2, 4, 2]
    colors = ['#FF6B6B', '#4ECDC4', '#45B7D1', '#96CEB4', '#FFEAA7', '#DDA0DD']
    ax3.bar(methods, data_requirement, color=colors, edgecolor='black', alpha=0.8)
    ax3.set_ylabel('Data Requirement Level', fontsize=11)
    ax3.set_title('Data Requirements by Method', fontsize=12, fontweight='bold')
    ax3.grid(True, alpha=0.3, axis='y')
    ax3.text(0.5, -0.15, '1=Very Low 2=Low 3=Medium 4=High 5=Very High',
             transform=ax3.transAxes, ha='center', fontsize=9, style='italic')

    # 4. Future trends
    ax4 = plt.subplot(2, 2, 4)
    ax4.axis('off')
    trends = [
        '1. Physics-Informed Neural Networks (PINNs)',
        '2. Deep Learning for Inverse Problems',
        '3. Transfer Learning Across Frequencies',
        '4. Reinforcement Learning for Optimization',
        '5. Generative Models for Design',
        '6. Multi-fidelity Modeling',
        '7. Real-time Adaptive Systems',
        '8. Explainable AI for EM Design'
    ]
    y_pos = 0.9
    for trend in trends:
        ax4.text(0.05, y_pos, trend, fontsize=11,
                 transform=ax4.transAxes, verticalalignment='top')
        y_pos -= 0.1
    ax4.set_title('Future Trends in ML for Electromagnetics',
                  fontsize=12, fontweight='bold', pad=20)
    # Border box around the trend list
    rect = Rectangle((0.02, 0.05), 0.96, 0.9, linewidth=2,
                     edgecolor='blue', facecolor='lightyellow', alpha=0.3,
                     transform=ax4.transAxes)
    ax4.add_patch(rect)

    plt.tight_layout()
    plt.savefig(f'{output_dir}/em_ml_applications.png', dpi=150, bbox_inches='tight')
    plt.close()
    print("  EM-ML applications overview saved")


# Main program
if __name__ == "__main__":
    print("\n" + "=" * 60)
    print("Machine Learning Methods Comparison for EM Design")
    print("=" * 60)
    plot_ml_comparison(output_dir)
    plot_ml_workflow(output_dir)
    plot_em_ml_applications(output_dir)

    print("\n" + "=" * 60)
    print("Summary of ML Methods for EM Applications:")
    print("=" * 60)
    print("\n1. Neural Networks:")
    print("   - Best for: Complex nonlinear regression problems")
    print("   - EM Applications: Antenna parameter prediction, inverse problems")
    print("   - Advantages: High accuracy, automatic feature learning")
    print("   - Limitations: Requires large datasets, black box nature")
    print("\n2. Support Vector Machines:")
    print("   - Best for: Classification with clear decision boundaries")
    print("   - EM Applications: Material classification, fault detection")
    print("   - Advantages: Good generalization, effective in high dimensions")
    print("   - Limitations: Computationally intensive for large datasets")
    print("\n3. Random Forest:")
    print("   - Best for: Mixed data types, feature importance analysis")
    print("   - EM Applications: RCS prediction, design optimization")
    print("   - Advantages: Interpretable, robust to outliers")
    print("   - Limitations: Can overfit with noisy data")
    print("\nAll comparison plots saved!")
    print("=" * 60)
# -*- coding: utf-8 -*-
"""
主题067:机器学习在电磁设计中的应用
神经网络实现 - 天线参数预测与优化
"""
import matplotlib
matplotlib.use('Agg')
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle, Circle, FancyBboxPatch
import os
# 创建输出目录
output_dir = r'd:\文档\500仿真领域\工程仿真\电磁场仿真\主题067'
os.makedirs(output_dir, exist_ok=True)
# 设置中文字体
plt.rcParams['font.sans-serif'] = ['SimHei', 'DejaVu Sans']
plt.rcParams['axes.unicode_minus'] = False
class NeuralNetwork:
"""神经网络类 - 用于天线参数预测"""
def __init__(self, layer_sizes, learning_rate=0.01):
"""
初始化神经网络
参数:
layer_sizes: 各层神经元数量列表 [输入层, 隐藏层1, ..., 输出层]
learning_rate: 学习率
"""
self.layer_sizes = layer_sizes
self.learning_rate = learning_rate
self.n_layers = len(layer_sizes)
# 初始化权重和偏置
self.weights = []
self.biases = []
for i in range(self.n_layers - 1):
# He初始化
w = np.random.randn(layer_sizes[i], layer_sizes[i+1]) * np.sqrt(2.0 / layer_sizes[i])
b = np.zeros((1, layer_sizes[i+1]))
self.weights.append(w)
self.biases.append(b)
# 训练历史
self.loss_history = []
self.val_loss_history = []
def relu(self, x):
"""ReLU激活函数"""
return np.maximum(0, x)
def relu_derivative(self, x):
"""ReLU导数"""
return (x > 0).astype(float)
def sigmoid(self, x):
"""Sigmoid激活函数"""
return 1 / (1 + np.exp(-np.clip(x, -500, 500)))
def forward(self, X):
"""前向传播"""
self.activations = [X]
self.z_values = []
current = X
for i in range(self.n_layers - 1):
z = np.dot(current, self.weights[i]) + self.biases[i]
self.z_values.append(z)
# 最后一层使用线性激活(回归问题)
if i == self.n_layers - 2:
current = z
else:
current = self.relu(z)
self.activations.append(current)
return current
def backward(self, X, y, output):
"""反向传播"""
m = X.shape[0]
# 计算梯度
deltas = [None] * (self.n_layers - 1)
# 输出层误差
deltas[-1] = output - y
# 隐藏层误差(反向传播)
for i in range(self.n_layers - 3, -1, -1):
deltas[i] = np.dot(deltas[i+1], self.weights[i+1].T) * self.relu_derivative(self.z_values[i])
# 计算权重和偏置的梯度
weight_grads = []
bias_grads = []
for i in range(self.n_layers - 1):
dw = np.dot(self.activations[i].T, deltas[i]) / m
db = np.sum(deltas[i], axis=0, keepdims=True) / m
weight_grads.append(dw)
bias_grads.append(db)
return weight_grads, bias_grads
def update_params(self, weight_grads, bias_grads):
"""更新参数"""
for i in range(self.n_layers - 1):
self.weights[i] -= self.learning_rate * weight_grads[i]
self.biases[i] -= self.learning_rate * bias_grads[i]
def train(self, X, y, epochs=1000, batch_size=32, validation_split=0.2):
"""训练网络"""
print("\n" + "=" * 60)
print("Neural Network Training - Antenna Parameter Prediction")
print("=" * 60)
# 划分训练集和验证集
n_samples = X.shape[0]
n_val = int(n_samples * validation_split)
indices = np.random.permutation(n_samples)
train_idx = indices[n_val:]
val_idx = indices[:n_val]
X_train, y_train = X[train_idx], y[train_idx]
X_val, y_val = X[val_idx], y[val_idx]
n_batches = len(train_idx) // batch_size
for epoch in range(epochs):
epoch_loss = 0
for i in range(n_batches):
# 随机选择batch
batch_indices = np.random.choice(len(train_idx), batch_size, replace=False)
X_batch = X_train[batch_indices]
y_batch = y_train[batch_indices]
# 前向传播
output = self.forward(X_batch)
# 计算损失
loss = np.mean((output - y_batch)**2)
epoch_loss += loss
# 反向传播
weight_grads, bias_grads = self.backward(X_batch, y_batch, output)
# 更新参数
self.update_params(weight_grads, bias_grads)
# 计算验证损失
val_output = self.forward(X_val)
val_loss = np.mean((val_output - y_val)**2)
self.loss_history.append(epoch_loss / n_batches)
self.val_loss_history.append(val_loss)
if (epoch + 1) % 100 == 0:
print(f"Epoch {epoch + 1}: Train Loss = {self.loss_history[-1]:.6f}, "
f"Val Loss = {val_loss:.6f}")
print("\nTraining completed!")
print(f"Final train loss: {self.loss_history[-1]:.6f}")
print(f"Final validation loss: {self.val_loss_history[-1]:.6f}")
return self.loss_history, self.val_loss_history
def predict(self, X):
"""预测"""
return self.forward(X)
class AntennaDataset:
"""天线数据集生成器"""
def __init__(self, n_samples=1000):
self.n_samples = n_samples
def generate_patch_antenna_data(self):
"""
生成微带贴片天线数据集
输入: [贴片长度(mm), 贴片宽度(mm), 介质厚度(mm), 介电常数]
输出: [谐振频率(GHz), 带宽(%), 增益(dBi), 回波损耗(dB)]
"""
np.random.seed(42)
# 输入参数范围
L = np.random.uniform(5, 50, self.n_samples) # 贴片长度
W = np.random.uniform(5, 50, self.n_samples) # 贴片宽度
h = np.random.uniform(0.5, 5, self.n_samples) # 介质厚度
eps_r = np.random.uniform(2, 12, self.n_samples) # 介电常数
X = np.column_stack([L, W, h, eps_r])
# 计算输出(基于简化物理模型)
c = 3e8 # 光速
# 谐振频率(考虑边缘效应)
delta_L = 0.412 * h * ((eps_r + 0.3) / (eps_r - 0.258)) * ((W/h + 0.264) / (W/h + 0.8))
L_eff = L + 2 * delta_L
f_res = c / (2 * L_eff * 1e-3 * np.sqrt(eps_r)) / 1e9
# 带宽(与厚度和介电常数相关)
bandwidth = 2 + 3 * (h / L) * 100 + np.random.normal(0, 0.5, self.n_samples)
bandwidth = np.clip(bandwidth, 0.5, 15)
# 增益(与尺寸相关)
gain = 2 + 5 * np.log10(L * W / 100) + np.random.normal(0, 0.3, self.n_samples)
gain = np.clip(gain, 1, 10)
# 回波损耗(与匹配相关)
s11 = -5 - 15 * np.exp(-((f_res - 2.4)**2) / 2) + np.random.normal(0, 1, self.n_samples)
s11 = np.clip(s11, -30, -5)
y = np.column_stack([f_res, bandwidth, gain, s11])
return X, y
def plot_results(nn, X_test, y_test, y_pred, output_dir):
"""绘制结果"""
fig = plt.figure(figsize=(14, 10))
# 1. 损失曲线
ax1 = plt.subplot(2, 2, 1)
epochs = np.arange(1, len(nn.loss_history) + 1)
ax1.semilogy(epochs, nn.loss_history, 'b-', linewidth=2, label='Training Loss')
ax1.semilogy(epochs, nn.val_loss_history, 'r--', linewidth=2, label='Validation Loss')
ax1.set_xlabel('Epoch', fontsize=11)
ax1.set_ylabel('Loss (MSE)', fontsize=11)
ax1.set_title('Neural Network Training Loss', fontsize=12, fontweight='bold')
ax1.legend(fontsize=10)
ax1.grid(True, alpha=0.3)
# 2. 预测 vs 真实值(谐振频率)
ax2 = plt.subplot(2, 2, 2)
ax2.scatter(y_test[:, 0], y_pred[:, 0], alpha=0.5, c='blue', edgecolors='black', s=30)
ax2.plot([y_test[:, 0].min(), y_test[:, 0].max()],
[y_test[:, 0].min(), y_test[:, 0].max()],
'r--', linewidth=2, label='Perfect Prediction')
ax2.set_xlabel('True Resonant Frequency (GHz)', fontsize=11)
ax2.set_ylabel('Predicted Resonant Frequency (GHz)', fontsize=11)
ax2.set_title('Resonant Frequency Prediction', fontsize=12, fontweight='bold')
# 计算R²
r2 = 1 - np.sum((y_test[:, 0] - y_pred[:, 0])**2) / np.sum((y_test[:, 0] - np.mean(y_test[:, 0]))**2)
ax2.text(0.05, 0.95, f'R² = {r2:.4f}', transform=ax2.transAxes,
fontsize=12, verticalalignment='top',
bbox=dict(boxstyle='round', facecolor='wheat', alpha=0.5))
ax2.legend(fontsize=10)
ax2.grid(True, alpha=0.3)
# 3. 预测 vs 真实值(增益)
ax3 = plt.subplot(2, 2, 3)
ax3.scatter(y_test[:, 2], y_pred[:, 2], alpha=0.5, c='green', edgecolors='black', s=30)
ax3.plot([y_test[:, 2].min(), y_test[:, 2].max()],
[y_test[:, 2].min(), y_test[:, 2].max()],
'r--', linewidth=2, label='Perfect Prediction')
ax3.set_xlabel('True Gain (dBi)', fontsize=11)
ax3.set_ylabel('Predicted Gain (dBi)', fontsize=11)
ax3.set_title('Gain Prediction', fontsize=12, fontweight='bold')
r2_gain = 1 - np.sum((y_test[:, 2] - y_pred[:, 2])**2) / np.sum((y_test[:, 2] - np.mean(y_test[:, 2]))**2)
ax3.text(0.05, 0.95, f'R² = {r2_gain:.4f}', transform=ax3.transAxes,
fontsize=12, verticalalignment='top',
bbox=dict(boxstyle='round', facecolor='wheat', alpha=0.5))
ax3.legend(fontsize=10)
ax3.grid(True, alpha=0.3)
# 4. 误差分布
ax4 = plt.subplot(2, 2, 4)
errors = y_pred[:, 0] - y_test[:, 0]
ax4.hist(errors, bins=30, color='steelblue', edgecolor='black', alpha=0.7)
ax4.axvline(x=0, color='r', linestyle='--', linewidth=2)
ax4.set_xlabel('Prediction Error (GHz)', fontsize=11)
ax4.set_ylabel('Frequency', fontsize=11)
ax4.set_title('Prediction Error Distribution', fontsize=12, fontweight='bold')
ax4.grid(True, alpha=0.3, axis='y')
# 添加统计信息
mean_error = np.mean(errors)
std_error = np.std(errors)
ax4.text(0.05, 0.95, f'Mean: {mean_error:.4f}\nStd: {std_error:.4f}',
transform=ax4.transAxes, fontsize=10, verticalalignment='top',
bbox=dict(boxstyle='round', facecolor='wheat', alpha=0.5))
plt.tight_layout()
plt.savefig(f'{output_dir}/neural_network_em.png', dpi=150, bbox_inches='tight')
plt.close()
print(" Neural network results saved")
def plot_network_architecture(layer_sizes, output_dir):
"""绘制网络架构图"""
fig, ax = plt.subplots(figsize=(12, 6))
n_layers = len(layer_sizes)
layer_spacing = 2.5
max_neurons = max(layer_sizes)
for i, n_neurons in enumerate(layer_sizes):
x = i * layer_spacing
# 计算y位置(居中)
y_start = (max_neurons - n_neurons) / 2 * 0.4
for j in range(n_neurons):
y = y_start + j * 0.4
# 绘制神经元
circle = Circle((x, y), 0.15, facecolor='lightblue',
edgecolor='blue', linewidth=2)
ax.add_patch(circle)
# 绘制连接(除第一层外)
if i > 0:
prev_n = layer_sizes[i-1]
prev_y_start = (max_neurons - prev_n) / 2 * 0.4
for k in range(prev_n):
prev_y = prev_y_start + k * 0.4
ax.plot([x - layer_spacing + 0.15, x - 0.15],
[prev_y, y], 'gray', alpha=0.3, linewidth=0.5)
# 添加层标签
if i == 0:
label = f'Input Layer\n({n_neurons} neurons)'
elif i == n_layers - 1:
label = f'Output Layer\n({n_neurons} neurons)'
else:
label = f'Hidden Layer {i}\n({n_neurons} neurons)'
ax.text(x, -1.5, label, ha='center', fontsize=10, fontweight='bold')
ax.set_xlim([-1, (n_layers - 1) * layer_spacing + 1])
ax.set_ylim([-2.5, max_neurons * 0.4 + 0.5])
ax.set_aspect('equal')
ax.axis('off')
ax.set_title('Neural Network Architecture for Antenna Prediction',
fontsize=14, fontweight='bold', pad=20)
plt.tight_layout()
plt.savefig(f'{output_dir}/network_architecture.png', dpi=150, bbox_inches='tight')
plt.close()
print(" Network architecture diagram saved")
# 主程序
if __name__ == "__main__":
# 生成数据集
print("Generating antenna dataset...")
dataset = AntennaDataset(n_samples=2000)
X, y = dataset.generate_patch_antenna_data()
# 数据归一化
X_mean = np.mean(X, axis=0)
X_std = np.std(X, axis=0)
X_normalized = (X - X_mean) / X_std
y_mean = np.mean(y, axis=0)
y_std = np.std(y, axis=0)
y_normalized = (y - y_mean) / y_std
# 划分训练集和测试集
n_train = int(0.8 * len(X))
X_train, X_test = X_normalized[:n_train], X_normalized[n_train:]
y_train, y_test = y_normalized[:n_train], y_normalized[n_train:]
# 创建神经网络
layer_sizes = [4, 64, 128, 64, 4] # 输入层4,隐藏层64-128-64,输出层4
nn = NeuralNetwork(layer_sizes, learning_rate=0.01)
# 训练网络
nn.train(X_train, y_train, epochs=500, batch_size=32, validation_split=0.2)
# 预测
y_pred_normalized = nn.predict(X_test)
# 反归一化
y_pred = y_pred_normalized * y_std + y_mean
y_test_original = y_test * y_std + y_mean
    # plot the results
plot_results(nn, X_test, y_test_original, y_pred, output_dir)
plot_network_architecture(layer_sizes, output_dir)
print("\nNeural Network for EM Applications completed!")
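The main program above standardizes the inputs and outputs before training and undoes the transform after prediction. A minimal standalone sketch of that z-score round-trip (array names here are illustrative, not part of the script above):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(1.0, 10.0, size=(100, 4))   # stand-in for antenna geometry samples

# z-score normalization, as in the main program
X_mean = X.mean(axis=0)
X_std = X.std(axis=0)
X_norm = (X - X_mean) / X_std

# each normalized column has zero mean and unit variance
assert np.allclose(X_norm.mean(axis=0), 0.0)
assert np.allclose(X_norm.std(axis=0), 1.0)

# de-normalization recovers the original values
X_back = X_norm * X_std + X_mean
assert np.allclose(X_back, X)
```

The same pair of transforms must be applied to any new design point at prediction time, using the *training-set* mean and standard deviation, or the surrogate model sees inputs on the wrong scale.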
# -*- coding: utf-8 -*-
"""
Topic 067: Machine Learning for Electromagnetic Design
Random forest implementation - radar cross-section (RCS) prediction
"""
import matplotlib
matplotlib.use('Agg')
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Circle, Rectangle, FancyBboxPatch, Polygon
import os
# create the output directory
output_dir = r'd:\文档\500仿真领域\工程仿真\电磁场仿真\主题067'
os.makedirs(output_dir, exist_ok=True)
# set fonts (SimHei so CJK labels render correctly)
plt.rcParams['font.sans-serif'] = ['SimHei', 'DejaVu Sans']
plt.rcParams['axes.unicode_minus'] = False
class DecisionTree:
    """CART regression tree"""
def __init__(self, max_depth=10, min_samples_split=5, min_samples_leaf=2):
self.max_depth = max_depth
self.min_samples_split = min_samples_split
self.min_samples_leaf = min_samples_leaf
self.tree = None
def fit(self, X, y):
        """Fit the decision tree"""
self.n_features = X.shape[1]
self.tree = self._build_tree(X, y, depth=0)
return self
def _build_tree(self, X, y, depth):
        """Recursively build the tree"""
n_samples, n_features = X.shape
        # stopping conditions
if (depth >= self.max_depth or
n_samples < self.min_samples_split or
n_samples < 2 * self.min_samples_leaf or
np.var(y) < 1e-7):
return {'value': np.mean(y), 'leaf': True}
        # search for the best split
best_gain = -np.inf
best_feature = None
best_threshold = None
for feature in range(n_features):
thresholds = np.unique(X[:, feature])
for threshold in thresholds:
left_mask = X[:, feature] <= threshold
right_mask = ~left_mask
if (np.sum(left_mask) < self.min_samples_leaf or
np.sum(right_mask) < self.min_samples_leaf):
continue
                # MSE reduction achieved by this split
y_left, y_right = y[left_mask], y[right_mask]
mse_parent = np.var(y)
mse_left = np.var(y_left) if len(y_left) > 0 else 0
mse_right = np.var(y_right) if len(y_right) > 0 else 0
n_left, n_right = len(y_left), len(y_right)
mse_split = (n_left * mse_left + n_right * mse_right) / n_samples
gain = mse_parent - mse_split
if gain > best_gain:
best_gain = gain
best_feature = feature
best_threshold = threshold
        # no valid split found
if best_feature is None:
return {'value': np.mean(y), 'leaf': True}
        # split the data
left_mask = X[:, best_feature] <= best_threshold
right_mask = ~left_mask
        # recursively build the subtrees
left_tree = self._build_tree(X[left_mask], y[left_mask], depth + 1)
right_tree = self._build_tree(X[right_mask], y[right_mask], depth + 1)
return {
'feature': best_feature,
'threshold': best_threshold,
'left': left_tree,
'right': right_tree,
'leaf': False
}
def _predict_sample(self, x, node):
        """Predict a single sample"""
if node['leaf']:
return node['value']
if x[node['feature']] <= node['threshold']:
return self._predict_sample(x, node['left'])
else:
return self._predict_sample(x, node['right'])
def predict(self, X):
        """Predict"""
return np.array([self._predict_sample(x, self.tree) for x in X])
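# The split search in _build_tree maximizes the variance reduction
#     gain = Var(y) - (n_L * Var(y_L) + n_R * Var(y_R)) / n.
# A small standalone sanity check on illustrative 1-D data (names prefixed
# with _ are demo-only): with two well-separated clusters, the best threshold
# should fall between them.  numpy is re-imported so the snippet is
# self-contained.
import numpy as np
_x = np.array([1.0, 2.0, 3.0, 10.0, 11.0, 12.0])
_y = np.array([0.0, 0.1, 0.0, 5.0, 5.1, 4.9])
_gains = {t: np.var(_y) - (np.sum(_x <= t) * np.var(_y[_x <= t]) +
                           np.sum(_x > t) * np.var(_y[_x > t])) / len(_y)
          for t in _x[:-1]}
assert max(_gains, key=_gains.get) == 3.0  # split lands between the clusters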
class RandomForest:
    """Random forest for RCS prediction"""
def __init__(self, n_estimators=100, max_depth=15, min_samples_split=5,
min_samples_leaf=2, max_features='sqrt', bootstrap=True):
        """
        Initialize the random forest.
        Parameters:
            n_estimators: number of trees
            max_depth: maximum tree depth
            min_samples_split: minimum samples required to split a node
            min_samples_leaf: minimum samples in a leaf node
            max_features: number of features considered at each split
            bootstrap: whether to use bootstrap sampling
        """
self.n_estimators = n_estimators
self.max_depth = max_depth
self.min_samples_split = min_samples_split
self.min_samples_leaf = min_samples_leaf
self.max_features = max_features
self.bootstrap = bootstrap
self.trees = []
self.feature_indices = []
def fit(self, X, y):
        """Fit the random forest"""
print("\n" + "=" * 60)
print("Random Forest Training - RCS Prediction")
print("=" * 60)
self.n_samples, self.n_features = X.shape
        # number of features considered at each split
if self.max_features == 'sqrt':
self.n_features_split = int(np.sqrt(self.n_features))
elif self.max_features == 'log2':
self.n_features_split = int(np.log2(self.n_features))
elif isinstance(self.max_features, int):
self.n_features_split = self.max_features
else:
self.n_features_split = self.n_features
self.trees = []
self.feature_indices = []
for i in range(self.n_estimators):
            # bootstrap sampling
if self.bootstrap:
indices = np.random.choice(self.n_samples, self.n_samples, replace=True)
else:
indices = np.arange(self.n_samples)
X_sample = X[indices]
y_sample = y[indices]
            # random feature subset
feature_idx = np.random.choice(self.n_features,
self.n_features_split,
replace=False)
self.feature_indices.append(feature_idx)
            # fit one decision tree
tree = DecisionTree(max_depth=self.max_depth,
min_samples_split=self.min_samples_split,
min_samples_leaf=self.min_samples_leaf)
tree.fit(X_sample[:, feature_idx], y_sample)
self.trees.append(tree)
if (i + 1) % 20 == 0:
print(f"Trained {i + 1}/{self.n_estimators} trees")
print(f"\nTraining completed!")
print(f"Total trees: {self.n_estimators}")
print(f"Features per split: {self.n_features_split}")
return self
def predict(self, X):
        """Predict"""
predictions = np.zeros((X.shape[0], self.n_estimators))
for i, (tree, feature_idx) in enumerate(zip(self.trees, self.feature_indices)):
predictions[:, i] = tree.predict(X[:, feature_idx])
        # average over the trees
return np.mean(predictions, axis=1)
def feature_importance(self, X, y):
        """Permutation feature importance"""
baseline_score = self._score(X, y)
importances = np.zeros(self.n_features)
for i in range(self.n_features):
X_permuted = X.copy()
np.random.shuffle(X_permuted[:, i])
permuted_score = self._score(X_permuted, y)
importances[i] = baseline_score - permuted_score
        # normalize to sum to 1
importances = np.maximum(importances, 0)
if importances.sum() > 0:
importances /= importances.sum()
return importances
def _score(self, X, y):
        """R² score"""
y_pred = self.predict(X)
ss_res = np.sum((y - y_pred)**2)
ss_tot = np.sum((y - np.mean(y))**2)
return 1 - ss_res / ss_tot
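# Why averaging the trees in predict() helps: for (nearly) independent
# estimators, the variance of the bagged mean shrinks roughly as
# 1/n_estimators.  Illustrative check with noisy constant predictors
# (demo-only names; real trees are correlated, so the reduction is smaller).
# numpy is re-imported so the snippet is self-contained.
import numpy as np
_rng = np.random.default_rng(0)
_single = _rng.normal(0.0, 1.0, size=2000)                      # one noisy estimator
_bagged = _rng.normal(0.0, 1.0, size=(2000, 50)).mean(axis=1)   # average of 50
assert _bagged.std() < 0.3 * _single.std()  # near the 1/sqrt(50) ≈ 0.14 factor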
class RCSDataset:
    """RCS dataset generator"""
def __init__(self, n_samples=1000):
self.n_samples = n_samples
def generate_rcs_data(self):
        """
        Generate a target RCS dataset.
        Inputs: [length (m), width (m), height (m), incidence angles (deg), frequency (GHz), material type]
        Output: RCS (dBsm)
        """
        np.random.seed(42)
        # target dimensions
        length = np.random.uniform(0.1, 10, self.n_samples)    # length 0.1-10 m
        width = np.random.uniform(0.1, 5, self.n_samples)      # width 0.1-5 m
        height = np.random.uniform(0.05, 3, self.n_samples)    # height 0.05-3 m
        # incidence angles
        theta = np.random.uniform(0, 90, self.n_samples)       # elevation 0-90 deg
        phi = np.random.uniform(0, 360, self.n_samples)        # azimuth 0-360 deg
        # frequency
        freq = np.random.uniform(1, 18, self.n_samples)        # frequency 1-18 GHz
        # material type (0 = metal, 1 = dielectric, 2 = composite)
        material = np.random.choice([0, 1, 2], self.n_samples)
        X = np.column_stack([length, width, height, theta, phi, freq, material])
        # RCS from a simplified physical model:
        # geometric-optics approximation plus a material correction
        wavelength = 0.3 / freq  # wavelength (m), with freq in GHz
        # area term
        area = length * width
        # angle dependence (simplified)
        angle_factor = np.abs(np.cos(np.radians(theta))) * np.abs(np.cos(np.radians(phi % 90)))
        angle_factor = np.maximum(angle_factor, 0.1)
        # frequency dependence (Rayleigh / resonance / optical regions)
        size_param = np.sqrt(area) / wavelength
        freq_factor = np.where(size_param < 1,
                               20 * np.log10(size_param),               # Rayleigh region
                               10 * np.log10(area / (wavelength**2)))   # optical region
        # material correction
        material_factor = np.where(material == 0, 0,                    # metal
                                   np.where(material == 1, -5, -10))    # dielectric / composite
        # baseline RCS
        rcs_base = 10 * np.log10(area) + 10 * np.log10(angle_factor) + freq_factor + material_factor
        # add noise
        rcs = rcs_base + np.random.normal(0, 2, self.n_samples)
        return X, rcs
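# Reference point for the "optical region" term above: the textbook
# physical-optics RCS of a flat plate at normal incidence is
#     sigma = 4 * pi * A**2 / lambda**2.
# Quick check for a 1 m^2 plate at 10 GHz (lambda = 3 cm).  numpy is
# re-imported so the snippet is self-contained.
import numpy as np
_sigma = 4 * np.pi * 1.0**2 / 0.03**2    # ~1.4e4 m^2
_sigma_dbsm = 10 * np.log10(_sigma)      # ~41.4 dBsm
assert abs(_sigma_dbsm - 41.45) < 0.01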
def plot_rf_results(rf, X_test, y_test, y_pred, feature_names, output_dir):
    """Plot random forest results"""
fig = plt.figure(figsize=(14, 10))
    # 1. predicted vs. true values
ax1 = plt.subplot(2, 2, 1)
ax1.scatter(y_test, y_pred, alpha=0.5, c='blue', edgecolors='black', s=30)
    # perfect-prediction line
min_val = min(y_test.min(), y_pred.min())
max_val = max(y_test.max(), y_pred.max())
ax1.plot([min_val, max_val], [min_val, max_val], 'r--', linewidth=2, label='Perfect Prediction')
ax1.set_xlabel('True RCS (dBsm)', fontsize=11)
ax1.set_ylabel('Predicted RCS (dBsm)', fontsize=11)
ax1.set_title('RCS Prediction: True vs Predicted', fontsize=12, fontweight='bold')
    # R² and error metrics
r2 = 1 - np.sum((y_test - y_pred)**2) / np.sum((y_test - np.mean(y_test))**2)
rmse = np.sqrt(np.mean((y_test - y_pred)**2))
mae = np.mean(np.abs(y_test - y_pred))
ax1.text(0.05, 0.95, f'R² = {r2:.4f}\nRMSE = {rmse:.2f} dB\nMAE = {mae:.2f} dB',
transform=ax1.transAxes, fontsize=10, verticalalignment='top',
bbox=dict(boxstyle='round', facecolor='wheat', alpha=0.5))
ax1.legend(fontsize=10)
ax1.grid(True, alpha=0.3)
    # 2. residual distribution
ax2 = plt.subplot(2, 2, 2)
residuals = y_pred - y_test
ax2.hist(residuals, bins=30, color='steelblue', edgecolor='black', alpha=0.7)
ax2.axvline(x=0, color='r', linestyle='--', linewidth=2)
ax2.set_xlabel('Prediction Error (dB)', fontsize=11)
ax2.set_ylabel('Frequency', fontsize=11)
ax2.set_title('Residual Distribution', fontsize=12, fontweight='bold')
ax2.grid(True, alpha=0.3, axis='y')
    # annotate summary statistics
ax2.text(0.05, 0.95, f'Mean: {np.mean(residuals):.4f}\nStd: {np.std(residuals):.4f}',
transform=ax2.transAxes, fontsize=10, verticalalignment='top',
bbox=dict(boxstyle='round', facecolor='wheat', alpha=0.5))
    # 3. feature importance
ax3 = plt.subplot(2, 2, 3)
importances = rf.feature_importance(X_test, y_test)
    # sort by importance
indices = np.argsort(importances)[::-1]
colors = plt.cm.viridis(np.linspace(0, 1, len(feature_names)))
bars = ax3.barh(range(len(feature_names)), importances[indices], color=colors)
ax3.set_yticks(range(len(feature_names)))
ax3.set_yticklabels([feature_names[i] for i in indices], fontsize=9)
ax3.set_xlabel('Importance', fontsize=11)
ax3.set_title('Feature Importance (Permutation)', fontsize=12, fontweight='bold')
ax3.grid(True, alpha=0.3, axis='x')
    # annotate the bars
for i, (bar, imp) in enumerate(zip(bars, importances[indices])):
ax3.text(imp + 0.01, i, f'{imp:.3f}', va='center', fontsize=9)
    # 4. prediction performance per frequency band
ax4 = plt.subplot(2, 2, 4)
    freq = X_test[:, 5]  # frequency feature
    freq_ranges = [(1, 2, 'L-band'), (2, 4, 'S-band'), (4, 8, 'C-band'),
                   (8, 12, 'X-band'), (12, 18, 'Ku-band')]
range_names = []
range_rmse = []
range_r2 = []
for f_min, f_max, name in freq_ranges:
mask = (freq >= f_min) & (freq < f_max)
if np.sum(mask) > 10:
y_range_true = y_test[mask]
y_range_pred = y_pred[mask]
rmse_range = np.sqrt(np.mean((y_range_true - y_range_pred)**2))
r2_range = 1 - np.sum((y_range_true - y_range_pred)**2) / \
np.sum((y_range_true - np.mean(y_range_true))**2)
range_names.append(name)
range_rmse.append(rmse_range)
range_r2.append(max(0, r2_range))
x_pos = np.arange(len(range_names))
bars1 = ax4.bar(x_pos - 0.2, range_rmse, 0.4, label='RMSE (dB)',
color='coral', edgecolor='black', alpha=0.8)
ax4_twin = ax4.twinx()
bars2 = ax4_twin.bar(x_pos + 0.2, range_r2, 0.4, label='R²',
color='lightgreen', edgecolor='black', alpha=0.8)
ax4.set_xlabel('Frequency Band', fontsize=11)
ax4.set_ylabel('RMSE (dB)', fontsize=11, color='coral')
ax4_twin.set_ylabel('R²', fontsize=11, color='green')
ax4.set_title('Performance by Frequency Band', fontsize=12, fontweight='bold')
ax4.set_xticks(x_pos)
ax4.set_xticklabels(range_names)
ax4.grid(True, alpha=0.3, axis='y')
    # merge the legends
lines1, labels1 = ax4.get_legend_handles_labels()
lines2, labels2 = ax4_twin.get_legend_handles_labels()
ax4.legend(lines1 + lines2, labels1 + labels2, loc='upper right', fontsize=9)
plt.tight_layout()
plt.savefig(f'{output_dir}/random_forest_rcs.png', dpi=150, bbox_inches='tight')
plt.close()
print(" Random Forest results saved")
def plot_target_geometry(output_dir):
    """Plot target geometry sketches"""
fig, axes = plt.subplots(1, 3, figsize=(15, 5))
    # 1. metal target
ax1 = axes[0]
    # simple aircraft silhouette
body = Rectangle((2, 4), 6, 1.5, facecolor='gray', edgecolor='black', linewidth=2)
wing_left = Polygon([[3, 5.5], [0, 7], [0, 6], [3, 5.5]],
facecolor='gray', edgecolor='black', linewidth=2)
wing_right = Polygon([[7, 5.5], [10, 7], [10, 6], [7, 5.5]],
facecolor='gray', edgecolor='black', linewidth=2)
tail = Polygon([[7.5, 4], [9, 3], [8, 3], [7.5, 4]],
facecolor='gray', edgecolor='black', linewidth=2)
ax1.add_patch(body)
ax1.add_patch(wing_left)
ax1.add_patch(wing_right)
ax1.add_patch(tail)
ax1.set_xlim([-1, 11])
ax1.set_ylim([2, 8])
ax1.set_aspect('equal')
ax1.set_title('Metal Target\n(High RCS)', fontsize=12, fontweight='bold')
ax1.axis('off')
ax1.text(5, 1.5, 'Material: Metal\nRCS: High', ha='center', fontsize=10,
bbox=dict(boxstyle='round', facecolor='lightyellow', alpha=0.8))
    # 2. dielectric target
    ax2 = axes[1]
    ellipse = matplotlib.patches.Ellipse((5, 5), 6, 3,
                                         facecolor='lightblue',
                                         edgecolor='blue', linewidth=2)
ax2.add_patch(ellipse)
ax2.set_xlim([0, 10])
ax2.set_ylim([2, 8])
ax2.set_aspect('equal')
ax2.set_title('Dielectric Target\n(Medium RCS)', fontsize=12, fontweight='bold')
ax2.axis('off')
ax2.text(5, 1.5, 'Material: Dielectric\nRCS: Medium', ha='center', fontsize=10,
bbox=dict(boxstyle='round', facecolor='lightyellow', alpha=0.8))
    # 3. composite (stealth) target
ax3 = axes[2]
    # stealth-aircraft silhouette (polygon)
stealth_shape = Polygon([[2, 5], [4, 7], [7, 7], [9, 5], [7, 3], [4, 3]],
facecolor='darkgreen', edgecolor='black', linewidth=2)
ax3.add_patch(stealth_shape)
ax3.set_xlim([0, 10])
ax3.set_ylim([2, 8])
ax3.set_aspect('equal')
ax3.set_title('Composite Target\n(Low RCS)', fontsize=12, fontweight='bold')
ax3.axis('off')
ax3.text(5, 1.5, 'Material: Composite\nRCS: Low (Stealth)', ha='center', fontsize=10,
bbox=dict(boxstyle='round', facecolor='lightyellow', alpha=0.8))
plt.tight_layout()
plt.savefig(f'{output_dir}/target_geometry.png', dpi=150, bbox_inches='tight')
plt.close()
print(" Target geometry diagram saved")
# main program
if __name__ == "__main__":
    # generate the dataset
print("Generating RCS dataset...")
dataset = RCSDataset(n_samples=2000)
X, y = dataset.generate_rcs_data()
    # feature names
feature_names = ['Length (m)', 'Width (m)', 'Height (m)',
'Theta (deg)', 'Phi (deg)', 'Freq (GHz)', 'Material']
    # split into training and test sets
n_train = int(0.8 * len(X))
indices = np.random.permutation(len(X))
train_idx = indices[:n_train]
test_idx = indices[n_train:]
X_train, y_train = X[train_idx], y[train_idx]
X_test, y_test = X[test_idx], y[test_idx]
print(f"Training samples: {len(X_train)}")
print(f"Test samples: {len(X_test)}")
    # train the random forest
rf = RandomForest(n_estimators=100, max_depth=15, min_samples_split=5,
min_samples_leaf=2, max_features='sqrt', bootstrap=True)
rf.fit(X_train, y_train)
    # predict
y_pred = rf.predict(X_test)
    # compute performance metrics
r2 = 1 - np.sum((y_test - y_pred)**2) / np.sum((y_test - np.mean(y_test))**2)
rmse = np.sqrt(np.mean((y_test - y_pred)**2))
mae = np.mean(np.abs(y_test - y_pred))
print(f"\nTest Performance:")
print(f" R² Score: {r2:.4f}")
print(f" RMSE: {rmse:.4f} dB")
print(f" MAE: {mae:.4f} dB")
    # plot the results
plot_rf_results(rf, X_test, y_test, y_pred, feature_names, output_dir)
plot_target_geometry(output_dir)
print("\nRandom Forest for RCS Prediction completed!")
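The R², RMSE, and MAE expressions in the script above are recomputed inline several times. A small standalone sketch collects them into helpers (function names here are illustrative, not part of the script):

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination R²."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def rmse(y_true, y_pred):
    """Root-mean-square error."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mae(y_true, y_pred):
    """Mean absolute error."""
    return np.mean(np.abs(y_true - y_pred))

y_true = np.array([1.0, 2.0, 3.0, 4.0])
assert r2_score(y_true, y_true) == 1.0      # perfect prediction
assert rmse(y_true, y_true) == 0.0
assert mae(y_true, y_true + 0.5) == 0.5     # constant offset of 0.5
```

Note that R² can be negative when the model is worse than predicting the mean, which is why the per-band plot clamps it with `max(0, r2_range)`.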
# -*- coding: utf-8 -*-
"""
Topic 067: Machine Learning for Electromagnetic Design
Support vector machine (SVM) implementation - electromagnetic material classification
"""
import matplotlib
matplotlib.use('Agg')
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Circle, Rectangle, FancyBboxPatch
from matplotlib.colors import ListedColormap
import os
# create the output directory
output_dir = r'd:\文档\500仿真领域\工程仿真\电磁场仿真\主题067'
os.makedirs(output_dir, exist_ok=True)
# set fonts (SimHei so CJK labels render correctly)
plt.rcParams['font.sans-serif'] = ['SimHei', 'DejaVu Sans']
plt.rcParams['axes.unicode_minus'] = False
class SVM:
    """Support vector machine for EM material classification"""
def __init__(self, C=1.0, kernel='rbf', gamma=0.1, max_iter=1000):
        """
        Initialize the SVM.
        Parameters:
            C: regularization parameter
            kernel: kernel type ('linear', 'rbf', 'poly')
            gamma: RBF kernel parameter
            max_iter: maximum number of iterations
        """
self.C = C
self.kernel = kernel
self.gamma = gamma
self.max_iter = max_iter
def kernel_function(self, X1, X2):
        """Evaluate the kernel function"""
if self.kernel == 'linear':
return np.dot(X1, X2.T)
elif self.kernel == 'rbf':
            # RBF kernel: K(x, y) = exp(-gamma * ||x - y||^2)
X1_sq = np.sum(X1**2, axis=1).reshape(-1, 1)
X2_sq = np.sum(X2**2, axis=1).reshape(1, -1)
dist_sq = X1_sq + X2_sq - 2 * np.dot(X1, X2.T)
return np.exp(-self.gamma * dist_sq)
elif self.kernel == 'poly':
return (np.dot(X1, X2.T) + 1)**3
else:
raise ValueError(f"Unknown kernel: {self.kernel}")
def fit(self, X, y):
        """
        Train the SVM (simplified SMO algorithm).
        Parameters:
            X: training data (n_samples, n_features)
            y: labels (n_samples,), each +1 or -1
        """
print("\n" + "=" * 60)
print("SVM Training - Material Classification")
print("=" * 60)
self.X = X
self.y = y
n_samples = X.shape[0]
        # kernel matrix
print("Computing kernel matrix...")
K = self.kernel_function(X, X)
        # initialize the Lagrange multipliers
alpha = np.zeros(n_samples)
b = 0
        # simplified SMO algorithm
for iteration in range(self.max_iter):
alpha_prev = alpha.copy()
for i in range(n_samples):
                # current prediction
f_i = np.sum(alpha * y * K[:, i]) + b
                # check the KKT conditions
E_i = f_i - y[i]
                # pick a sample that violates the KKT conditions
if ((y[i] * E_i < -1e-3 and alpha[i] < self.C) or
(y[i] * E_i > 1e-3 and alpha[i] > 0)):
                    # pick the second sample at random
j = np.random.choice([x for x in range(n_samples) if x != i])
f_j = np.sum(alpha * y * K[:, j]) + b
E_j = f_j - y[j]
                    # save the old alpha values
alpha_i_old, alpha_j_old = alpha[i], alpha[j]
                    # compute the clipping bounds
if y[i] != y[j]:
L = max(0, alpha[j] - alpha[i])
H = min(self.C, self.C + alpha[j] - alpha[i])
else:
L = max(0, alpha[i] + alpha[j] - self.C)
H = min(self.C, alpha[i] + alpha[j])
if L == H:
continue
                    # compute eta
eta = 2 * K[i, j] - K[i, i] - K[j, j]
if eta >= 0:
continue
                    # update alpha_j
alpha[j] = alpha_j_old - y[j] * (E_i - E_j) / eta
alpha[j] = np.clip(alpha[j], L, H)
if abs(alpha[j] - alpha_j_old) < 1e-5:
continue
                    # update alpha_i
alpha[i] = alpha_i_old + y[i] * y[j] * (alpha_j_old - alpha[j])
                    # update the bias b
b1 = b - E_i - y[i] * (alpha[i] - alpha_i_old) * K[i, i] - \
y[j] * (alpha[j] - alpha_j_old) * K[i, j]
b2 = b - E_j - y[i] * (alpha[i] - alpha_i_old) * K[i, j] - \
y[j] * (alpha[j] - alpha_j_old) * K[j, j]
if 0 < alpha[i] < self.C:
b = b1
elif 0 < alpha[j] < self.C:
b = b2
else:
b = (b1 + b2) / 2
            # convergence check
diff = np.linalg.norm(alpha - alpha_prev)
if iteration % 100 == 0:
print(f"Iteration {iteration}: alpha change = {diff:.6f}")
if diff < 1e-3:
print(f"Converged at iteration {iteration}")
break
        # record the support vectors
self.alpha = alpha
self.b = b
self.support_vector_indices = np.where(alpha > 1e-5)[0]
self.n_support_vectors = len(self.support_vector_indices)
print(f"\nTraining completed!")
print(f"Number of support vectors: {self.n_support_vectors}")
print(f"Support vector ratio: {self.n_support_vectors / n_samples * 100:.2f}%")
return self
def predict(self, X):
        """Predict class labels"""
K = self.kernel_function(X, self.X)
decision = np.sum(self.alpha * self.y * K, axis=1) + self.b
return np.sign(decision)
def decision_function(self, X):
        """Decision function values"""
K = self.kernel_function(X, self.X)
return np.sum(self.alpha * self.y * K, axis=1) + self.b
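# Sanity properties of the RBF kernel used above: K(x, x) = 1, symmetry, and a
# positive semi-definite Gram matrix.  Standalone check mirroring
# kernel_function with gamma = 0.1 (demo-only names; numpy is re-imported so
# the snippet is self-contained).
import numpy as np
_rng = np.random.default_rng(0)
_Xd = _rng.normal(size=(20, 4))
_d2 = (np.sum(_Xd**2, axis=1)[:, None] + np.sum(_Xd**2, axis=1)[None, :]
       - 2 * _Xd @ _Xd.T)
_K = np.exp(-0.1 * _d2)
assert np.allclose(np.diag(_K), 1.0)         # unit self-similarity
assert np.allclose(_K, _K.T)                 # symmetric
assert np.linalg.eigvalsh(_K).min() > -1e-8  # positive semi-definite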
class MaterialDataset:
    """EM material dataset generator"""
def __init__(self, n_samples=500):
self.n_samples = n_samples
def generate_material_data(self):
        """
        Generate an EM material classification dataset.
        Features: [dielectric constant, permeability, conductivity (S/m), loss tangent]
        Classes: 1 = absorbing material, -1 = transparent material
        """
np.random.seed(42)
        # absorbing materials (high loss)
n_absorbing = self.n_samples // 2
eps_abs = np.random.uniform(5, 50, n_absorbing)
mu_abs = np.random.uniform(0.5, 5, n_absorbing)
sigma_abs = np.random.uniform(0.1, 10, n_absorbing)
tan_delta_abs = np.random.uniform(0.1, 1.0, n_absorbing)
X_absorbing = np.column_stack([eps_abs, mu_abs, sigma_abs, tan_delta_abs])
y_absorbing = np.ones(n_absorbing)
        # transparent materials (low loss)
n_transparent = self.n_samples - n_absorbing
eps_trans = np.random.uniform(1, 10, n_transparent)
mu_trans = np.random.uniform(0.8, 1.5, n_transparent)
sigma_trans = np.random.uniform(1e-6, 0.01, n_transparent)
tan_delta_trans = np.random.uniform(1e-4, 0.01, n_transparent)
X_transparent = np.column_stack([eps_trans, mu_trans, sigma_trans, tan_delta_trans])
y_transparent = -np.ones(n_transparent)
        # merge the two classes
X = np.vstack([X_absorbing, X_transparent])
y = np.hstack([y_absorbing, y_transparent])
        # shuffle
indices = np.random.permutation(len(X))
X = X[indices]
y = y[indices]
return X, y
def get_feature_names(self):
        """Feature names"""
return ['Dielectric Constant', 'Permeability', 'Conductivity (S/m)', 'Loss Tangent']
def plot_svm_results(svm, X, y, X_test, y_test, y_pred, output_dir):
    """Plot SVM results"""
fig = plt.figure(figsize=(14, 10))
    # use the first two features for visualization
X_vis = X[:, :2]
X_test_vis = X_test[:, :2]
    # 1. classification on the training data
ax1 = plt.subplot(2, 2, 1)
    # decision boundary
h = 0.5
x_min, x_max = X_vis[:, 0].min() - 1, X_vis[:, 0].max() + 1
y_min, y_max = X_vis[:, 1].min() - 0.5, X_vis[:, 1].max() + 0.5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
    # build full feature vectors for the grid (first two features vary, the rest use the mean)
mesh_points = np.c_[xx.ravel(), yy.ravel()]
mesh_full = np.zeros((mesh_points.shape[0], X.shape[1]))
mesh_full[:, :2] = mesh_points
mesh_full[:, 2:] = np.mean(X[:, 2:], axis=0)
Z = svm.decision_function(mesh_full)
Z = Z.reshape(xx.shape)
    # filled contours
contour = ax1.contourf(xx, yy, Z, levels=20, alpha=0.6, cmap='RdBu_r')
ax1.contour(xx, yy, Z, levels=[-1, 0, 1], colors=['blue', 'black', 'red'],
linestyles=['--', '-', '--'], linewidths=2)
    # data points
scatter1 = ax1.scatter(X_vis[y == 1, 0], X_vis[y == 1, 1],
c='red', marker='o', s=50, alpha=0.7,
edgecolors='black', label='Absorbing Material')
scatter2 = ax1.scatter(X_vis[y == -1, 0], X_vis[y == -1, 1],
c='blue', marker='s', s=50, alpha=0.7,
edgecolors='black', label='Transparent Material')
    # mark the support vectors
sv_indices = svm.support_vector_indices
ax1.scatter(X_vis[sv_indices, 0], X_vis[sv_indices, 1],
s=200, facecolors='none', edgecolors='green',
linewidths=2, label='Support Vectors')
ax1.set_xlabel('Dielectric Constant', fontsize=11)
ax1.set_ylabel('Permeability', fontsize=11)
ax1.set_title('SVM Decision Boundary (Training Data)', fontsize=12, fontweight='bold')
ax1.legend(fontsize=9, loc='upper right')
ax1.grid(True, alpha=0.3)
plt.colorbar(contour, ax=ax1, label='Decision Function')
    # 2. classification on the test data
ax2 = plt.subplot(2, 2, 2)
    # redraw the decision boundary
ax2.contourf(xx, yy, Z, levels=20, alpha=0.6, cmap='RdBu_r')
ax2.contour(xx, yy, Z, levels=[0], colors='black', linewidths=2)
    # test data points
correct = y_pred == y_test
ax2.scatter(X_test_vis[correct & (y_test == 1), 0],
X_test_vis[correct & (y_test == 1), 1],
c='red', marker='o', s=80, alpha=0.8,
edgecolors='green', linewidths=2, label='Correct (Absorbing)')
ax2.scatter(X_test_vis[correct & (y_test == -1), 0],
X_test_vis[correct & (y_test == -1), 1],
c='blue', marker='s', s=80, alpha=0.8,
edgecolors='green', linewidths=2, label='Correct (Transparent)')
ax2.scatter(X_test_vis[~correct, 0], X_test_vis[~correct, 1],
c='gray', marker='x', s=100, linewidths=3,
label='Misclassified')
ax2.set_xlabel('Dielectric Constant', fontsize=11)
ax2.set_ylabel('Permeability', fontsize=11)
ax2.set_title('SVM Classification (Test Data)', fontsize=12, fontweight='bold')
ax2.legend(fontsize=9, loc='upper right')
ax2.grid(True, alpha=0.3)
    # 3. feature comparison
ax3 = plt.subplot(2, 2, 3)
feature_names = ['Dielectric\nConstant', 'Permeability', 'Conductivity', 'Loss Tangent']
    # per-class mean and spread for each feature
X_absorbing = X[y == 1]
X_transparent = X[y == -1]
x_pos = np.arange(len(feature_names))
width = 0.35
    # normalize feature values for comparison
X_norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
X_abs_norm = X_norm[y == 1]
X_trans_norm = X_norm[y == -1]
ax3.bar(x_pos - width/2, X_abs_norm.mean(axis=0), width,
label='Absorbing', color='red', alpha=0.7, edgecolor='black')
ax3.bar(x_pos + width/2, X_trans_norm.mean(axis=0), width,
label='Transparent', color='blue', alpha=0.7, edgecolor='black')
ax3.set_ylabel('Normalized Feature Value', fontsize=11)
ax3.set_title('Feature Comparison by Material Type', fontsize=12, fontweight='bold')
ax3.set_xticks(x_pos)
ax3.set_xticklabels(feature_names, fontsize=9)
ax3.legend(fontsize=10)
ax3.grid(True, alpha=0.3, axis='y')
    # 4. confusion matrix
ax4 = plt.subplot(2, 2, 4)
    # confusion-matrix counts
tp = np.sum((y_pred == 1) & (y_test == 1))
tn = np.sum((y_pred == -1) & (y_test == -1))
fp = np.sum((y_pred == 1) & (y_test == -1))
fn = np.sum((y_pred == -1) & (y_test == 1))
confusion = np.array([[tn, fp], [fn, tp]])
im = ax4.imshow(confusion, cmap='Blues', alpha=0.7)
ax4.set_xticks([0, 1])
ax4.set_yticks([0, 1])
ax4.set_xticklabels(['Predicted\nTransparent', 'Predicted\nAbsorbing'])
ax4.set_yticklabels(['Actual\nTransparent', 'Actual\nAbsorbing'])
    # annotate the cells
for i in range(2):
for j in range(2):
text = ax4.text(j, i, confusion[i, j],
ha="center", va="center", color="black", fontsize=20, fontweight='bold')
ax4.set_title('Confusion Matrix', fontsize=12, fontweight='bold')
    # accuracy and related metrics
accuracy = (tp + tn) / len(y_test)
precision = tp / (tp + fp) if (tp + fp) > 0 else 0
recall = tp / (tp + fn) if (tp + fn) > 0 else 0
f1 = 2 * precision * recall / (precision + recall) if (precision + recall) > 0 else 0
ax4.text(0.5, -0.15, f'Accuracy: {accuracy:.4f}\nPrecision: {precision:.4f}\nRecall: {recall:.4f}\nF1-Score: {f1:.4f}',
transform=ax4.transAxes, ha='center', fontsize=10,
bbox=dict(boxstyle='round', facecolor='wheat', alpha=0.5))
plt.tight_layout()
plt.savefig(f'{output_dir}/svm_material_classification.png', dpi=150, bbox_inches='tight')
plt.close()
print(" SVM results saved")
def plot_material_properties(X, y, output_dir):
"""绘制材料属性分布"""
fig, axes = plt.subplots(2, 2, figsize=(12, 10))
feature_names = ['Dielectric Constant', 'Permeability',
'Conductivity (S/m)', 'Loss Tangent']
X_absorbing = X[y == 1]
X_transparent = X[y == -1]
for idx, (ax, name) in enumerate(zip(axes.flat, feature_names)):
        # histograms
ax.hist(X_absorbing[:, idx], bins=20, alpha=0.6, color='red',
edgecolor='black', label='Absorbing')
ax.hist(X_transparent[:, idx], bins=20, alpha=0.6, color='blue',
edgecolor='black', label='Transparent')
ax.set_xlabel(name, fontsize=11)
ax.set_ylabel('Frequency', fontsize=11)
ax.set_title(f'{name} Distribution', fontsize=12, fontweight='bold')
ax.legend(fontsize=10)
ax.grid(True, alpha=0.3, axis='y')
        # mark the per-class means
abs_mean = X_absorbing[:, idx].mean()
trans_mean = X_transparent[:, idx].mean()
ax.axvline(abs_mean, color='red', linestyle='--', linewidth=2, alpha=0.8)
ax.axvline(trans_mean, color='blue', linestyle='--', linewidth=2, alpha=0.8)
plt.tight_layout()
plt.savefig(f'{output_dir}/material_properties.png', dpi=150, bbox_inches='tight')
plt.close()
print(" Material properties distribution saved")
# main program
if __name__ == "__main__":
    # generate the dataset
print("Generating material dataset...")
dataset = MaterialDataset(n_samples=400)
X, y = dataset.generate_material_data()
    # split into training and test sets
n_train = int(0.8 * len(X))
indices = np.random.permutation(len(X))
train_idx = indices[:n_train]
test_idx = indices[n_train:]
X_train, y_train = X[train_idx], y[train_idx]
X_test, y_test = X[test_idx], y[test_idx]
print(f"Training samples: {len(X_train)}")
print(f"Test samples: {len(X_test)}")
    # train the SVM
svm = SVM(C=1.0, kernel='rbf', gamma=0.1, max_iter=500)
svm.fit(X_train, y_train)
    # predict
y_pred = svm.predict(X_test)
    # accuracy
accuracy = np.mean(y_pred == y_test)
print(f"\nTest Accuracy: {accuracy:.4f}")
    # plot the results
plot_svm_results(svm, X_train, y_train, X_test, y_test, y_pred, output_dir)
plot_material_properties(X, y, output_dir)
print("\nSVM for Material Classification completed!")