[ Python ] TensorFlow Weight L2, L1 Normalization the Easy Way
2019. 9. 24. 10:00 · Analysis Python/Tensorflow
Adding L2 only to the Weight variables of the existing losses gen_loss and disc_loss
import re

t_vars = tf.trainable_variables()

# L2 penalty on the Generator's Weight variables
G_L2 = []
for v in t_vars:
    if re.search("Weight", v.name) and re.search("Generator", v.name):
        print(v.name)
        G_L2.append(tf.nn.l2_loss(v))
gen_loss += tf.add_n(G_L2) * 0.001

# L2 penalty on the Discriminator's kernel variables
D_L2 = []
for v in t_vars:
    if re.search("Discriminator", v.name) and re.search("kernel", v.name):
        print(v.name)
        D_L2.append(tf.nn.l2_loss(v))
disc_loss += tf.add_n(D_L2) * 0.001
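The loops above select variables by running regular-expression searches against each variable's `.name` string. The filtering logic can be checked on its own, without TensorFlow; the variable names below are hypothetical examples of the `scope/op/var:0` naming pattern:

```python
import re

# Hypothetical variable names, mimicking the .name strings of tf.trainable_variables()
names = [
    "Generator/dense_1/Weight:0",
    "Generator/dense_1/bias:0",
    "Discriminator/conv_1/kernel:0",
    "Discriminator/conv_1/bias:0",
]

# Same two-condition filters as the loops above
gen_weights = [n for n in names if re.search("Weight", n) and re.search("Generator", n)]
disc_kernels = [n for n in names if re.search("Discriminator", n) and re.search("kernel", n)]

print(gen_weights)   # ['Generator/dense_1/Weight:0']
print(disc_kernels)  # ['Discriminator/conv_1/kernel:0']
```

Printing `v.name` inside the real loop, as the code above does, is an easy way to confirm the regex is matching only the variables you intended.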
Applying L2 and L1 losses together
import re

# Collect the Weight variables under the "Ensembles" scope
# (renamed from `vars` to avoid shadowing the Python builtin)
ens_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope="Ensembles")

L2 = []
WEIGHTS = []
for v in ens_vars:
    if re.search("Weight", v.name):
        WEIGHTS.append(v)
        L2.append(tf.nn.l2_loss(v))

# L2 penalty
Loss += tf.add_n(L2) * 0.1

# L1 penalty via tf.contrib (TF 1.x only)
l1_regularizer = tf.contrib.layers.l1_regularizer(scale=0.005, scope=None)
regularization_penalty = tf.contrib.layers.apply_regularization(l1_regularizer, WEIGHTS)
Loss += regularization_penalty
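For reference, `tf.nn.l2_loss(w)` computes `sum(w**2) / 2` (no square root, halved), and `tf.contrib.layers.l1_regularizer(scale)` applied to `w` computes `scale * sum(|w|)`. A NumPy sketch of the same arithmetic on a toy weight vector:

```python
import numpy as np

w = np.array([1.0, -2.0, 3.0])

# tf.nn.l2_loss(w) is sum(w**2) / 2
l2 = np.sum(w ** 2) / 2.0          # (1 + 4 + 9) / 2 = 7.0

# tf.contrib.layers.l1_regularizer(scale=0.005) applied to w is scale * sum(|w|)
l1 = 0.005 * np.sum(np.abs(w))     # 0.005 * (1 + 2 + 3) = 0.03

print(l2, l1)                       # 7.0 0.03
```

Note that the multipliers (0.001, 0.1, 0.005 above) are the regularization strengths; they are hyperparameters to tune, not fixed values.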