

Python dp_optimizer.DPGradientDescentOptimizer Method Code Example

This article collects typical usage examples of the Python method differential_privacy.dp_sgd.dp_optimizer.dp_optimizer.DPGradientDescentOptimizer. If you are unsure what dp_optimizer.DPGradientDescentOptimizer does, how to call it, or what a concrete use looks like, the selected code example below should help. You can also explore other usage examples from the containing module, differential_privacy.dp_sgd.dp_optimizer.dp_optimizer.


The following shows 1 code example of the dp_optimizer.DPGradientDescentOptimizer method.

Example 1: GAN_solvers

# Required module: from differential_privacy.dp_sgd.dp_optimizer import dp_optimizer [as alias]
# Or: from differential_privacy.dp_sgd.dp_optimizer.dp_optimizer import DPGradientDescentOptimizer [as alias]
# Additional imports this example relies on (module paths as in the
# differential_privacy package this code ships with):
import tensorflow as tf

from differential_privacy.dp_sgd.dp_optimizer import dp_optimizer
from differential_privacy.dp_sgd.dp_optimizer import sanitizer
from differential_privacy.privacy_accountant.tf import accountant

def GAN_solvers(D_loss, G_loss, learning_rate, batch_size, total_examples, 
        l2norm_bound, batches_per_lot, sigma, dp=False):
    """
    Build the discriminator and generator optimizers. When dp is True, the
    discriminator is trained with differentially private SGD and a
    GaussianMomentsAccountant tracks the privacy budget spent.
    """
    discriminator_vars = [v for v in tf.trainable_variables() if v.name.startswith('discriminator')]
    generator_vars = [v for v in tf.trainable_variables() if v.name.startswith('generator')]
    if dp:
        print('Using differentially private SGD to train discriminator!')
        eps = tf.placeholder(tf.float32)
        delta = tf.placeholder(tf.float32)
        priv_accountant = accountant.GaussianMomentsAccountant(total_examples)
        clip = True
        # convert the clipping bound to a per-example bound
        l2norm_bound = l2norm_bound / batch_size
        batches_per_lot = 1
        gaussian_sanitizer = sanitizer.AmortizedGaussianSanitizer(
                priv_accountant,
                [l2norm_bound, clip])
       
        # the trick is that we need to calculate the gradient with respect to
        # each example in the batch, during the DP SGD step
        D_solver = dp_optimizer.DPGradientDescentOptimizer(learning_rate,
                [eps, delta],
                sanitizer=gaussian_sanitizer,
                sigma=sigma,
                batches_per_lot=batches_per_lot).minimize(D_loss, var_list=discriminator_vars)
    else:
        D_loss_mean_over_batch = tf.reduce_mean(D_loss)
        D_solver = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(
                D_loss_mean_over_batch, var_list=discriminator_vars)
        priv_accountant = None
    G_loss_mean_over_batch = tf.reduce_mean(G_loss)
    G_solver = tf.train.AdamOptimizer().minimize(G_loss_mean_over_batch, var_list=generator_vars)
    return D_solver, G_solver, priv_accountant

# --- to do with the model --- # 
Developer: ratschlab, Project: RGAN, Lines of code: 37, Source: model.py
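
As a follow-up, the sketch below shows one way the returned solvers and accountant could be driven. This is an illustrative sketch, not code from the RGAN project: the toy discriminator/generator variables, the data constants, and the sigma and target_eps values are made up, and the get_privacy_spent call assumes the MomentsAccountant interface from the differential_privacy package's accountant module. It also assumes GAN_solvers and its imports from Example 1 are in scope, and that the discriminator variable feeds a MatMul op so the DP optimizer can take per-example gradients.

# A minimal, hypothetical driver for GAN_solvers above; all values here
# are made up for illustration.
with tf.variable_scope('discriminator'):
    # the variable feeds a MatMul so per-example gradients can be computed
    d_w = tf.get_variable('w', shape=[2, 1], initializer=tf.zeros_initializer())
with tf.variable_scope('generator'):
    g_w = tf.get_variable('w', shape=[1], initializer=tf.zeros_initializer())

x = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
y = tf.constant([[1.0], [2.0], [3.0]])
D_loss = tf.squeeze(tf.square(tf.matmul(x, d_w) - y))  # per-example losses, unreduced
G_loss = tf.square(g_w - 1.0)

D_solver, G_solver, priv_accountant = GAN_solvers(
        D_loss, G_loss, learning_rate=0.1, batch_size=3,
        total_examples=10000, l2norm_bound=1.0,
        batches_per_lot=1, sigma=4.0, dp=True)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        sess.run(D_solver)  # one clipped, noised SGD step on the discriminator
        sess.run(G_solver)  # one ordinary Adam step on the generator
    # query the cumulative privacy spent at a fixed (hypothetical) target epsilon
    print(priv_accountant.get_privacy_spent(sess, target_eps=[8.0]))

Note that the per-example D_loss is passed to the DP optimizer unreduced, exactly as in Example 1: DP-SGD clips and noises gradients per example, so reducing the loss to a batch mean beforehand would defeat that.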


Note: the differential_privacy.dp_sgd.dp_optimizer.dp_optimizer.DPGradientDescentOptimizer example above was collected from open-source projects hosted on platforms such as GitHub. The source code remains the copyright of its original authors, and any distribution or use is subject to the corresponding project's license.