GAN minibatch discrimination code

2019. 5. 28. 03:32 · Analysis Python/Tensorflow

import tensorflow as tf

NUM_KERNELS = 5

def minibatch(input, num_kernels=NUM_KERNELS, kernel_dim=3, name=""):
    """Minibatch discrimination layer (Salimans et al., 2016)."""
    output_dim = num_kernels * kernel_dim
    # Project each sample to num_kernels * kernel_dim features.
    w = tf.get_variable("Weight_minibatch_" + name,
                        [input.get_shape()[1], output_dim],
                        initializer=tf.random_normal_initializer(stddev=0.2))
    b = tf.get_variable("Bias_minibatch_" + name,
                        [output_dim], initializer=tf.constant_initializer(0.0))
    x = tf.matmul(input, w) + b
    activation = tf.reshape(x, (-1, num_kernels, kernel_dim))  # (N, K, D)
    # Pairwise L1 distances between samples, per kernel: (N, K, N)
    diffs = tf.expand_dims(activation, 3) - \
        tf.expand_dims(tf.transpose(activation, [1, 2, 0]), 0)
    abs_diffs = tf.reduce_sum(tf.abs(diffs), 2)
    # Similarity of each sample to the rest of the minibatch: (N, K)
    minibatch_features = tf.reduce_sum(tf.exp(-abs_diffs), 2)
    # Append the minibatch statistics to the original input features.
    output = tf.concat([input, minibatch_features], 1)
    return output
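As a sanity check, the same closed-form computation can be sketched in plain NumPy, outside the TF1 graph (the weight initialization and shapes here are illustrative assumptions, not part of the original code):

```python
import numpy as np

def minibatch_features(x, W, b, num_kernels, kernel_dim):
    # x: (N, input_dim); W: (input_dim, num_kernels * kernel_dim)
    act = (x @ W + b).reshape(-1, num_kernels, kernel_dim)        # (N, K, D)
    # Pairwise L1 distances between samples, per kernel: (N, K, N)
    dists = np.abs(act[:, :, :, None] - act.transpose(1, 2, 0)[None]).sum(axis=2)
    # Each sample's similarity to the whole minibatch (incl. itself): (N, K)
    feats = np.exp(-dists).sum(axis=2)
    return np.concatenate([x, feats], axis=1)

rng = np.random.default_rng(0)
N, D_in, K, D = 4, 6, 5, 3
x = rng.normal(size=(N, D_in)).astype(np.float32)
W = rng.normal(scale=0.2, size=(D_in, K * D)).astype(np.float32)
b = np.zeros(K * D, dtype=np.float32)

out = minibatch_features(x, W, b, K, D)
print(out.shape)  # (4, 11): 6 original features plus 5 minibatch statistics
```

Because the distance of a sample to itself is zero, each minibatch statistic includes an exp(0) = 1 self-term, so every appended feature is at least 1; samples that are close to the rest of the batch get larger values, which is what lets the discriminator detect a collapsed generator.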

 

https://www.inference.vc/understanding-minibatch-discrimination-in-gans/


Understanding Minibatch Discrimination in GANs

Yesterday I read the latest paper by the OpenAI folks on practical tricks to make GAN training stable: Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, Xi Chen (2016) Improved Techniques for Training GANs. There was one idea in the…

www.inference.vc

https://arxiv.org/pdf/1606.03498.pdf  Improved Techniques for Training GANs
