

Python AttributeDict.m Method Code Examples

This article collects typical usage examples of the Python utils.AttributeDict.m method. If you are wondering what AttributeDict.m does, how to call it, or what real-world usage looks like, the curated example below may help. You can also explore the broader usage of the containing class, utils.AttributeDict.


The following shows the one available code example of the AttributeDict.m method, drawn from an open-source project.
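For context, `AttributeDict` in the tagger codebase is a small helper that lets a dict's keys be read and written as attributes (`d.m` instead of `d['m']`). The example below relies on exactly that behavior (`d.m = []`, then `d.m.append(m)`). A minimal sketch, assuming only that attribute/key equivalence (the real utils.AttributeDict may differ in details):

```python
class AttributeDict(dict):
    """A dict whose keys can also be read and written as attributes."""

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails:
        # fall back to dict lookup so d.m is equivalent to d['m'].
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        # Store attribute assignments as dict entries.
        self[name] = value


# Usage mirroring the tagger code: d.m = [] and later d.m.append(...)
d = AttributeDict()
d.m = []
d.m.append("mask_step_0")
print(d["m"])  # the attribute write is visible as an ordinary dict entry
```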

Example 1: apply_tagger

# Required import: from utils import AttributeDict [as alias]
# Or: from utils.AttributeDict import m [as alias]
    def apply_tagger(self, x, apply_noise, y=None):
        """ Build one path of Tagger """
        mb_size = x.shape[1]
        input_shape = (self.p.n_groups, mb_size) + self.in_dim
        in_dim = np.prod(self.in_dim)

        # Add noise
        x_corr = self.corrupt(x) if apply_noise else x
        # Repeat input
        x_corr = T.repeat(x_corr, self.p.n_groups, 0)

        # Compute v
        if self.p.input_type == 'binary':
            v = None
        elif self.p.input_type == 'continuous':
            v = self.weight(1., 'v')
            v = v * T.alloc(1., *input_shape)
            # Cap to positive range
            v = nn.exp_inv_sinh(v)

        d = AttributeDict()

        if y:
            d.pred = []
            d.class_error, d.class_cost = [], []
            # here we have the book-keeping of z and m for the visualizations.
            d.z = []
            d.m = []
        else:
            d.denoising_cost, d.ami_score, d.ami_score_per_sample = [], [], []

        assert self.p.n_iterations >= 1

        # z_hat is the value for the next iteration of tagger.
        # z is the current iteration tagger input
        # m is the current iteration mask input
        # m_hat is the value for the next iteration of tagger.
        # m_lh is the mask likelihood.
        # z_delta is the gradient of z, which depends on x, z and m.
        for step in xrange(self.p.n_iterations):  # xrange: the original project targets Python 2
            # Encoder
            # =======

            # Compute m, z and z_hat_pre_bin
            if step == 0:
                # No values from previous iteration, so let's make them up
                m, z = self.init_m_z(input_shape)
                z_hat_pre_bin = None
                # let's keep in the bookkeeping for the visualizations.
                if y:
                    d.z.append(z)
                    d.m.append(m)
            else:
                # Feed in the previous iteration's estimates
                z = z_hat
                m = m_hat

            # Compute m_lh
            m_lh = self.m_lh(x_corr, z, v)
            z_delta = self.f_z_deriv(x_corr, z, m)

            z_tilde = z_hat_pre_bin if z_hat_pre_bin is not None else z
            # Concatenate all inputs
            inputs = [z_tilde, z_delta, m, m_lh]
            inputs = T.concatenate(inputs, axis=2)

            # Projection, batch-normalization and activation to a hidden layer
            z = self.proj(inputs, in_dim * 4, self.p.encoder_proj[0])

            z -= z.mean((0, 1), keepdims=True)
            z /= T.sqrt(z.var((0, 1), keepdims=True) + np.float32(1e-10))

            z += self.bias(0.0 * np.ones(self.p.encoder_proj[0]), 'b')
            h = self.apply_act(z, 'relu')

            # The first dimension is the group. Let's flatten together with
            # minibatch in order to have parametric mapping compute all groups
            # in parallel
            h, undo_flatten = flatten_first_two_dims(h)

            # Parametric Mapping
            # ==================

            self.ladder.apply(None, self.y, h)
            ladder_encoder_output = undo_flatten(self.ladder.act.corr.unlabeled.h[len(self.p.encoder_proj) - 1])
            ladder_decoder_output = undo_flatten(self.ladder.act.est.z[0])

            # Decoder
            # =======

            # compute z_hat
            z_u = self.proj(ladder_decoder_output, self.p.encoder_proj[0], in_dim, scope='z_u')

            z_u -= z_u.mean((0, 1), keepdims=True)
            z_u /= T.sqrt(z_u.var((0, 1), keepdims=True) + np.float32(1e-10))

            z_hat = self.weight(np.ones(in_dim), 'c1') * z_u + self.bias(np.zeros(in_dim), 'b1')
            z_hat = z_hat.reshape(input_shape)

            # compute m_hat
#......... part of the code omitted here .........
Developer: CuriousAI; Project: tagger; Lines of code: 103; Source file: tagger.py
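Both the encoder and the decoder in the listing standardize activations over the group and minibatch axes (axes 0 and 1) before applying a learned scale and bias, i.e. a batch-normalization-style step (`z -= z.mean(...)`, `z /= T.sqrt(z.var(...) + 1e-10)`, then scale and shift). A minimal NumPy sketch of that computation, where the names `gamma` and `beta` stand in for the `c1`/`b1` parameters created via `self.weight`/`self.bias`:

```python
import numpy as np


def standardize_groups(z, gamma, beta, eps=1e-10):
    """Normalize over group (axis 0) and minibatch (axis 1), then scale/shift.

    z has shape (n_groups, mb_size, in_dim), matching the tagger layout.
    """
    mean = z.mean(axis=(0, 1), keepdims=True)
    var = z.var(axis=(0, 1), keepdims=True)
    z_norm = (z - mean) / np.sqrt(var + eps)
    return gamma * z_norm + beta


rng = np.random.default_rng(0)
z = rng.normal(loc=3.0, scale=2.0, size=(4, 8, 16)).astype(np.float32)
out = standardize_groups(z,
                         gamma=np.ones(16, dtype=np.float32),
                         beta=np.zeros(16, dtype=np.float32))
# With unit scale and zero shift, each feature ends up roughly
# zero-mean and unit-variance across the group and minibatch axes.
print(out.mean(), out.std())
```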


Note: the utils.AttributeDict.m examples in this article were compiled by 纯净天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The code snippets are selected from open-source projects contributed by their authors; copyright remains with the original authors, and distribution or use should follow each project's license. Do not reproduce without permission.