AIOps-NanKai / model / SDFVAE

Commit 84fe054e (unverified)
Update trainer.py
Authored 4 years ago by dlagul, committed via GitHub 4 years ago
Parent: e9b5d6ac
No related merge requests

1 changed file: sdfvae/trainer.py (+5, −5)
@@ -53,7 +53,7 @@ class Trainer(object):
             print("No Checkpoint Exists At '{}', Starting Fresh Training".format(self.checkpoints))
             self.start_epoch = 0
 
-    def loss_fn(self, original_seq, recon_seq_mu, recon_seq_logvar, s_mean,
+    def loss_fn(self, original_seq, recon_seq_mu, recon_seq_logsigma, s_mean,
                 s_logvar, d_post_mean, d_post_logvar, d_prior_mean, d_prior_logvar):
         batch_size = original_seq.size(0)
         # See https://arxiv.org/pdf/1606.05908.pdf, Page 9, Section 2.2 for details.
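For reference, this is the identity that the comments in the next hunk spell out, and the reason the rename is safe: the Gaussian log-density can be parameterised by either the log-variance or the log-standard-deviation, since log σ² = 2 log σ:

\log \mathcal{N}(x;\mu,\sigma^{2}) = -\tfrac{1}{2}\Big[\log(2\pi) + \log\sigma^{2} + \tfrac{(x-\mu)^{2}}{\sigma^{2}}\Big] = -\tfrac{1}{2}\Big[\log(2\pi) + 2\log\sigma + \big(\tfrac{x-\mu}{e^{\log\sigma}}\big)^{2}\Big]

A tensor holding log σ therefore enters the likelihood as 2 * logsigma, which is exactly the form used below.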
@@ -64,8 +64,8 @@ class Trainer(object):
         # = -0.5*{log(2*pi)+2*log(sigma)+[(x-mu)/exp{log(sigma)}]^2}
         # Note that var = sigma^2, i.e., log(var) = 2*log(sigma),
         # so the "recon_seq_logvar" here is more appropriate to be called "recon_seq_logsigma", but the name does not matter
-        loglikelihood = -0.5 * torch.sum(torch.pow(((original_seq.float() - recon_seq_mu.float()) / torch.exp(recon_seq_logvar.float())), 2)
-                                         + 2 * recon_seq_logvar.float()
+        loglikelihood = -0.5 * torch.sum(torch.pow(((original_seq.float() - recon_seq_mu.float()) / torch.exp(recon_seq_logsigma.float())), 2)
+                                         + 2 * recon_seq_logsigma.float()
                                          + np.log(np.pi * 2))
         # See https://arxiv.org/pdf/1606.05908.pdf, Page 9, Section 2.2, Equation (7) for details.
         kld_s = -0.5 * torch.sum(1 + s_logvar - torch.pow(s_mean, 2) - torch.exp(s_logvar))
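Both formulas in this hunk can be sanity-checked in isolation. Below is a minimal standalone sketch, not part of the repository (tensor shapes are invented; names mirror the diff's variables), verifying that the hand-rolled log-likelihood agrees with torch.distributions.Normal when its third argument holds log σ, and that kld_s equals the closed-form KL divergence against a standard normal:

import numpy as np
import torch
from torch.distributions import Normal, kl_divergence

torch.manual_seed(0)

# Stand-ins for original_seq, recon_seq_mu, recon_seq_logsigma (shapes invented here).
x = torch.randn(4, 10)
mu = torch.randn(4, 10)
logsigma = 0.1 * torch.randn(4, 10)

# Hand-rolled term, as in loss_fn: -0.5 * sum[((x-mu)/sigma)^2 + 2*log(sigma) + log(2*pi)]
llh_manual = -0.5 * torch.sum(
    torch.pow((x - mu) / torch.exp(logsigma), 2)
    + 2 * logsigma
    + np.log(np.pi * 2)
)

# Reference: Gaussian log-densities with scale sigma = exp(logsigma), summed.
llh_ref = Normal(mu, torch.exp(logsigma)).log_prob(x).sum()
assert torch.allclose(llh_manual, llh_ref, atol=1e-4)

# kld_s is the closed-form KL(N(mu, sigma^2) || N(0, 1)) summed over elements;
# s_logvar holds log(var), so the Normal scale is exp(0.5 * s_logvar).
s_mean, s_logvar = torch.randn(4, 8), 0.1 * torch.randn(4, 8)
kld_manual = -0.5 * torch.sum(1 + s_logvar - torch.pow(s_mean, 2) - torch.exp(s_logvar))
kld_ref = kl_divergence(
    Normal(s_mean, torch.exp(0.5 * s_logvar)),
    Normal(torch.zeros_like(s_mean), torch.ones_like(s_logvar)),
).sum()
assert torch.allclose(kld_manual, kld_ref, atol=1e-4)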
@@ -89,8 +89,8 @@ class Trainer(object):
                 _, _, data = dataitem
                 data = data.to(self.device)
                 self.optimizer.zero_grad()
-                s_mean, s_logvar, s, d_post_mean, d_post_logvar, d, d_prior_mean, d_prior_logvar, recon_x_mu, recon_x_logvar = self.model(data)
-                loss, llh, kld_s, kld_d = self.loss_fn(data, recon_x_mu, recon_x_logvar, s_mean, s_logvar,
+                s_mean, s_logvar, s, d_post_mean, d_post_logvar, d, d_prior_mean, d_prior_logvar, recon_x_mu, recon_x_logsigma = self.model(data)
+                loss, llh, kld_s, kld_d = self.loss_fn(data, recon_x_mu, recon_x_logsigma, s_mean, s_logvar,
                                                        d_post_mean, d_post_logvar, d_prior_mean, d_prior_logvar)
                 loss.backward()
                 self.optimizer.step()
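The hunk above follows the standard PyTorch training pattern (zero_grad, forward pass, loss, backward, step); what loss_fn actually returns as loss sits outside this diff. As a hedged sketch only: under the usual VAE convention, the minimised quantity is the negative ELBO, i.e. something like

def negative_elbo(llh, kld_s, kld_d, batch_size):
    # Assumption, not code from this commit: loss = -(log-likelihood) + KL terms,
    # normalised per sample; SDFVAE's actual weighting/normalisation may differ.
    return (-llh + kld_s + kld_d) / batch_size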