DAGMM · Commit c49e8fc7

Authored 5 years ago by Toshihiro NAKAE

Changed f-strings to format functions to support python3.5

Parent: a9159cf2
No related merge requests found.

3 changed files with 7 additions and 7 deletions:

dagmm/compression_net.py  +4 −4
dagmm/dagmm.py            +1 −1
dagmm/estimation_net.py   +2 −2
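The motivation behind the commit: f-strings were added in Python 3.6 (PEP 498), so a module containing one fails to import under Python 3.5 with a SyntaxError, while str.format() produces identical output on both versions. A minimal sketch of the equivalence, using an illustrative value for n_layer:

n_layer = 3
# Python 3.6+ only:  f"layer_{n_layer}"          -> "layer_3"
# Python 3.5-safe:   "layer_{}".format(n_layer)  -> "layer_3"
assert "layer_{}".format(n_layer) == "layer_3"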
dagmm/compression_net.py  +4 −4

@@ -35,12 +35,12 @@ class CompressionNet:
             for size in self.hidden_layer_sizes[:-1]:
                 n_layer += 1
                 z = tf.layers.dense(z, size, activation=self.activation,
-                    name=f"layer_{n_layer}")
+                    name="layer_{}".format(n_layer))

             # activation function of last layer is linear
             n_layer += 1
             z = tf.layers.dense(z, self.hidden_layer_sizes[-1],
-                name=f"layer_{n_layer}")
+                name="layer_{}".format(n_layer))

         return z
@@ -50,12 +50,12 @@ class CompressionNet:
             for size in self.hidden_layer_sizes[:-1][::-1]:
                 n_layer += 1
                 z = tf.layers.dense(z, size, activation=self.activation,
-                    name=f"layer_{n_layer}")
+                    name="layer_{}".format(n_layer))

             # activation function of last layer is linear
             n_layer += 1
             x_dash = tf.layers.dense(z, self.input_size,
-                name=f"layer_{n_layer}")
+                name="layer_{}".format(n_layer))

         return x_dash
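One detail in the decoder hunk above: self.hidden_layer_sizes[:-1][::-1] takes the encoder's hidden sizes (everything before the last, latent entry) and reverses them, so the decoder mirrors the encoder. A standalone sketch of the slice, with an illustrative size list not taken from the repository:

hidden_layer_sizes = [60, 30, 10]              # hypothetical spec; last entry is the latent size
encoder_sizes = hidden_layer_sizes[:-1]        # [60, 30]  hidden layers of the encoder
decoder_sizes = hidden_layer_sizes[:-1][::-1]  # [30, 60]  same layers, mirrored for the decoder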
dagmm/dagmm.py  +1 −1

@@ -153,7 +153,7 @@ class DAGMM:
                 if (epoch + 1) % 100 == 0:
                     loss_val = self.sess.run(loss, feed_dict={input:x, drop:0})
-                    print(f"epoch {epoch + 1}/{self.epoch_size} : loss = {loss_val:.3f}")
+                    print("epoch {}/{} : loss = {:.3f}".format(epoch + 1, self.epoch_size, loss_val))

             # Fix GMM parameter
             fix = self.gmm.fix_op()
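This hunk also shows how a format specification migrates: the :.3f spec stays inside the braces, while the interpolated expressions move into the format() argument list in the same order as the braces. A minimal sketch with an illustrative loss value:

loss_val = 0.123456
# f-string (3.6+):    f"loss = {loss_val:.3f}"          -> "loss = 0.123"
# str.format (3.5+):  "loss = {:.3f}".format(loss_val)  -> "loss = 0.123"
assert "loss = {:.3f}".format(loss_val) == "loss = 0.123"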
dagmm/estimation_net.py  +2 −2

@@ -46,10 +46,10 @@ class EstimationNet:
             for size in self.hidden_layer_sizes[:-1]:
                 n_layer += 1
                 z = tf.layers.dense(z, size, activation=self.activation,
-                    name=f"layer_{n_layer}")
+                    name="layer_{}".format(n_layer))
                 if dropout_ratio is not None:
                     z = tf.layers.dropout(z, dropout_ratio,
-                        name=f"drop_{n_layer}")
+                        name="drop_{}".format(n_layer))

             # Last layer uses linear function (=logits)
             size = self.hidden_layer_sizes[-1]