
【Neural Networks + Math】(2): Solving a Univariate Differential Equation with a Neural Network (Second-Order)

2021/6/25 7:45:11


Background

See the previous post in this series for background. This post applies the same neural-network approach to a harder case: a second-order differential equation. The problem statement is adapted from a reference blog post.

Problem Description

[figure: problem statement]

Domain: [0, 2]
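The training code further below uses `x_space` without showing how it is built. A plausible construction over the [0, 2] domain, assuming the `grid` parameter from the training configuration controls sampling density (this construction is not in the original excerpt):

```python
import numpy as np

# Hypothetical sampling of training points on the domain [0, 2].
# `grid` is assumed to mean points per unit length, giving 2 * grid + 1
# evenly spaced points including both endpoints.
grid = 10
x_space = np.linspace(0.0, 2.0, num=2 * grid + 1)

print(len(x_space), x_space[0], x_space[-1])  # 21 points from 0.0 to 2.0
```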

Model Code

A neural network approximates φ(x); automatic differentiation yields its first and second derivatives, which are substituted into the equation to form the loss minimized during training. The same approach extends to Nth-order problems: the model is quick to set up and its structure is unaffected by the order of the equation. Traditional analytical methods, by contrast, are highly sensitive to order, and their difficulty climbs steeply as the order increases.
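The core mechanism can be sketched in TF2 with nested `GradientTape`s. Since the actual equation appears only in the original figure, the ODE below (φ'' + φ = 0) and the small stand-in network are placeholder assumptions, not the author's paywalled `Nx_Net` implementation:

```python
import tensorflow as tf

# Small MLP standing in for the paywalled Nx_Net class (hypothetical).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='tanh'),
    tf.keras.layers.Dense(1),
])

x = tf.constant([[0.0], [0.5], [1.0]], dtype=tf.float32)

with tf.GradientTape() as t2:
    t2.watch(x)
    with tf.GradientTape() as t1:
        t1.watch(x)
        psy = model(x)           # network output φ(x)
    d_psy = t1.gradient(psy, x)  # first derivative φ'(x)
dd_psy = t2.gradient(d_psy, x)   # second derivative φ''(x)

# Residual of the assumed equation φ'' + φ = 0; its mean square
# is the equation term of the training loss.
residual = dd_psy + psy
loss_equation = tf.reduce_mean(tf.square(residual))
```

Boundary-condition and data terms would be formed the same way and added to the total loss.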

The environment is TensorFlow 2 with Python 3.7; the automatic-differentiation results represent the derivative values. The training code follows (it does not include the definition of the net class, which is paid content; contact the author by private message):

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.keras.optimizers import Adam

# Shuffle the sample points with a random seed
seed = np.random.randint(0, 2021, 1)[0]
np.random.seed(seed)
np.random.shuffle(x_space)
y_space = psy_analytic(x_space)  # analytic solution, used for comparison
x_space = tf.reshape(x_space, (-1, 1))
x_space = tf.cast(x_space, tf.float32)  # default float64 causes a dtype-mismatch error, so cast
net = Nx_Net(x_space, tf.reduce_min(x_space), tf.reduce_max(x_space), w=w, activation=activation)
if retrain:
    net.model_load()
optimizer = Adam(learning_rate=lr)
for epoch in range(epochs):
    grad, loss, loss_data, loss_equation, loss_border = net.train_step()
    optimizer.apply_gradients(zip(grad, net.trainable_variables))

    if epoch % 100 == 0:
        print("loss:{}\tloss_data:{}\tloss_equation:{}\tloss_border:{}\tepoch:{}".format(
            loss, loss_data, loss_equation, loss_border, epoch))
net.model_save()
predict = net.net_call(x_space)
plt.plot(x_space, y_space, 'o', label="True")
plt.plot(x_space, predict, 'x', label="Pred")
plt.legend(loc=1)
plt.title("predictions")
plt.show()

Training configuration:

retrain = False
activation = 'tanh'
grid = 10
epochs = 20000
lr = 0.001
w = (1, 1, 1)
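The three logged loss components are presumably combined with the weights `w = (1, 1, 1)`. The actual combination lives in the paywalled `Nx_Net` code, so the sketch below is an assumption; it is, however, consistent with the training log, where the epoch-0 components sum to the reported total:

```python
def total_loss(loss_data, loss_equation, loss_border, w=(1, 1, 1)):
    """Weighted sum of the three loss terms logged during training (assumed form)."""
    return w[0] * loss_data + w[1] * loss_equation + w[2] * loss_border

# With unit weights the total is just the plain sum of the components,
# which matches the epoch-0 log line (≈ 1.4774).
print(total_loss(0.3582, 0.0964, 1.0228))
```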

Training log:

loss:1.4774293899536133	loss_data:0.3582383990287781	loss_equation:0.0964028537273407	loss_border:1.022788166999817	epoch:0
loss:1.139674186706543	loss_data:0.31288060545921326	loss_equation:0.13169381022453308	loss_border:0.6950997114181519	epoch:100
loss:1.0643255710601807	loss_data:0.3342372477054596	loss_equation:0.14209073781967163	loss_border:0.587997555732727	epoch:200
loss:0.981413722038269	loss_data:0.34834182262420654	loss_equation:0.13856054842472076	loss_border:0.49451133608818054	epoch:300
loss:0.8012645840644836	loss_data:0.3721367120742798	loss_equation:0.12421827018260956	loss_border:0.3049095869064331	epoch:400
loss:0.5379026532173157	loss_data:0.42624107003211975	loss_equation:0.04168599843978882	loss_border:0.0699755847454071	epoch:500
loss:0.5068864822387695	loss_data:0.43383288383483887	loss_equation:0.02313089184463024	loss_border:0.04992268607020378	epoch:600
loss:0.5024876594543457	loss_data:0.43435603380203247	loss_equation:0.020750800147652626	loss_border:0.04738083481788635	epoch:700
loss:0.5008866786956787	loss_data:0.4343510568141937	loss_equation:0.019994685426354408	loss_border:0.04654095694422722	epoch:800
loss:0.4999883472919464	loss_data:0.43428468704223633	loss_equation:0.019583197310566902	loss_border:0.046120475977659225	epoch:900
……
loss:0.49528592824935913	loss_data:0.43690577149391174	loss_equation:0.015979139134287834	loss_border:0.04240100085735321	epoch:19600
loss:0.4952779710292816	loss_data:0.4368930459022522	loss_equation:0.015986021608114243	loss_border:0.04239888861775398	epoch:19700
loss:0.49783745408058167	loss_data:0.44942647218704224	loss_equation:0.007159555796533823	loss_border:0.04125141352415085	epoch:19800
loss:0.49526429176330566	loss_data:0.4368693232536316	loss_equation:0.01600196212530136	loss_border:0.042392998933792114	epoch:19900
model saved in  net.weights

Process finished with exit code 0

Fitted output:

[figure: predicted vs. true values]

Conclusion

The fit is good, confirming both that the approach is feasible in practice and that the code is correct. Because the neural-network formulation is largely insensitive to equation order, it extends naturally to Nth-order differential equations (only the univariate case is covered here; multivariate PDEs were not studied). At higher orders, traditional analytical methods cannot compete on solvability, solution complexity, or speed, although at low orders they may still outperform the neural-network approach.
