

Python nn.ReplicationPad1d Method Code Examples

This article collects typical usage examples of torch.nn.ReplicationPad1d in Python. If you are wondering what nn.ReplicationPad1d does and how to use it in practice, the curated code examples below should help. You can also explore other usage examples from its containing module, torch.nn.


The following presents 3 code examples of nn.ReplicationPad1d, ordered by popularity by default.
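Before the examples, a quick illustration of what nn.ReplicationPad1d itself does: given a (left, right) tuple of pad widths, it extends the last dimension of a (B, C, T) tensor by replicating the edge values.

import torch
from torch import nn

x = torch.tensor([[[1., 2., 3., 4.]]])      # shape (B=1, C=1, T=4)
pad = nn.ReplicationPad1d((2, 1))           # pad 2 on the left, 1 on the right
print(pad(x))
# tensor([[[1., 1., 1., 2., 3., 4., 4.]]])  -- edge values are replicated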

Example 1: upsample

# Required import: from torch import nn [as alias]
# Or: from torch.nn import ReplicationPad1d [as alias]
def upsample(self, x, input_len):
        # Compute padding to compensate for the length lost during downsampling
        left_over = input_len % self.dr
        if left_over % 2 == 0:
            left_pad = left_over // 2
            right_pad = left_pad
        else:
            left_pad = left_over // 2
            right_pad = left_over // 2 + 1

        # Tile (repeat) frame-level representations back toward the input rate
        x = self.tile_representations(x)

        # Replication-pad on both sides so the output length matches input_len
        x = x.permute(0, 2, 1).contiguous() # (B, T, D) -> (B, D, T)
        padding = nn.ReplicationPad1d((left_pad, right_pad))
        x = padding(x)

        x = x.permute(0, 2, 1).contiguous() # (B, D, T) -> (B, T, D)
        return x
Author: andi611 | Project: Self-Supervised-Speech-Pretraining-and-Representation-Learning | Lines: 21 | Source: nn_transformer.py
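To see the compensation arithmetic of Example 1 in isolation, here is a minimal standalone sketch. It assumes tile_representations repeats each frame self.dr times (emulated here with repeat_interleave); tile_and_pad is an illustrative name, not part of the project's API.

import torch
from torch import nn

def tile_and_pad(x, input_len, dr):
    # Repeat each frame dr times, then replication-pad on both sides
    # so the output length matches the original input_len.
    left_over = input_len % dr
    left_pad = left_over // 2
    right_pad = left_over - left_pad                # left_pad or left_pad + 1

    x = x.repeat_interleave(dr, dim=1)              # (B, T', D) -> (B, T'*dr, D)
    x = x.permute(0, 2, 1).contiguous()             # (B, T, D) -> (B, D, T)
    x = nn.ReplicationPad1d((left_pad, right_pad))(x)
    return x.permute(0, 2, 1).contiguous()          # back to (B, T, D)

x = torch.randn(2, 35 // 3, 8)                      # 11 frames after downsampling with dr=3
print(tile_and_pad(x, 35, 3).shape)                 # torch.Size([2, 35, 8])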

Example 2: __init__

# Required import: from torch import nn [as alias]
# Or: from torch.nn import ReplicationPad1d [as alias]
def __init__(
        self,
        conv_layers,
        embed,
        dropout,
        skip_connections,
        residual_scale,
        non_affine_group_norm,
        conv_bias,
        zero_pad,
        activation,
    ):
        super().__init__()

        def block(n_in, n_out, k, stride):
            # padding dims only really make sense for stride = 1
            ka = k // 2
            kb = ka - 1 if k % 2 == 0 else ka

            # ZeroPad1d and norm_block are helpers defined alongside this
            # class in fairseq's wav2vec.py; total left pad is k - 1 (causal)
            pad = (
                ZeroPad1d(ka + kb, 0) if zero_pad else nn.ReplicationPad1d((ka + kb, 0))
            )

            return nn.Sequential(
                pad,
                nn.Conv1d(n_in, n_out, k, stride=stride, bias=conv_bias),
                nn.Dropout(p=dropout),
                norm_block(False, n_out, affine=not non_affine_group_norm),
                activation,
            )

        in_d = embed
        self.conv_layers = nn.ModuleList()
        self.residual_proj = nn.ModuleList()
        for dim, k, stride in conv_layers:
            if in_d != dim and skip_connections:
                self.residual_proj.append(nn.Conv1d(in_d, dim, 1, bias=False))
            else:
                self.residual_proj.append(None)

            self.conv_layers.append(block(in_d, dim, k, stride))
            in_d = dim
        self.conv_layers = nn.Sequential(*self.conv_layers)
        self.skip_connections = skip_connections
        self.residual_scale = math.sqrt(residual_scale) 
Author: pytorch | Project: fairseq | Lines: 47 | Source: wav2vec.py
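The block above pads only on the left: ka + kb always equals k - 1, so a stride-1 convolution preserves the sequence length and stays causal (output step t never sees inputs after t). Below is a minimal sketch of just that padding scheme; causal_conv_block is an illustrative name, not fairseq API.

import torch
from torch import nn

def causal_conv_block(n_in, n_out, k):
    # A left-only pad of k - 1 makes a stride-1 Conv1d length-preserving and causal.
    ka = k // 2
    kb = ka - 1 if k % 2 == 0 else ka
    return nn.Sequential(
        nn.ReplicationPad1d((ka + kb, 0)),          # ka + kb == k - 1
        nn.Conv1d(n_in, n_out, k, stride=1),
    )

x = torch.randn(1, 16, 100)                         # (B, C, T)
print(causal_conv_block(16, 32, 5)(x).shape)        # torch.Size([1, 32, 100])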

Example 3: _forward

# Required import: from torch import nn [as alias]
# Or: from torch.nn import ReplicationPad1d [as alias]
def _forward(self, x):

        if self.permute_input:
            input_len = x.shape[0]  # time is dim 0 before permuting
            x = x.permute(1, 0, 2).contiguous() # (T, B, D) -> (B, T, D)
        else:
            input_len = x.shape[1]

        # Compute padding to compensate for the length lost during downsampling
        left_over = input_len % self.dr
        if left_over % 2 == 0:
            left_pad = left_over // 2
            right_pad = left_pad
        else:
            left_pad = left_over // 2
            right_pad = left_over // 2 + 1

        # Model forwarding
        spec_stacked, pos_enc, attn_mask = self.process_input_data(x) # x shape: (B, T, D)
        x = self.model(spec_stacked, pos_enc, attn_mask, output_all_encoded_layers=self.weighted_sum or self.select_layer != -1) # (B, T, D) or (N, B, T, D)

        # Apply a learned weighted sum over all encoder layers
        if self.weighted_sum:
            if isinstance(x, list): x = torch.stack(x)
            softmax_weight = nn.functional.softmax(self.weight, dim=-1)
            B, T, D = x.shape[1], x.shape[2], x.shape[3]
            x = x.reshape(self.num_layers, -1)
            x = torch.matmul(softmax_weight, x).reshape(B, T, D)
        # Select a specific layer
        elif self.select_layer != -1:
            x = x[self.select_layer]

        if self.spec_aug and not self.spec_aug_prev and self.model.training:
            x = spec_augment(x, mask_T=70, mask_F=86, num_T=2, num_F=2, p=1.0) # (B, T, D)

        # If using a downsampling model, tile and pad back to the input length
        if x.shape[1] != input_len:
            x = self.tile_representations(x)

            # padding
            x = x.permute(0, 2, 1).contiguous() # (B, T, D) -> (B, D, T)
            padding = nn.ReplicationPad1d((left_pad, right_pad))
            x = padding(x)

            if self.permute_input: x = x.permute(2, 0, 1).contiguous() # (B, D, T) -> (T, B, D)
            else: x = x.permute(0, 2, 1).contiguous() # (B, D, T) -> (B, T, D)

        # If not using a downsampling model, permute back for output
        elif self.permute_input:
            x = x.permute(1, 0, 2).contiguous() # (B, T, D) -> (T, B, D)

        # else: (B, T, D)
        return x
Author: andi611 | Project: Self-Supervised-Speech-Pretraining-and-Representation-Learning | Lines: 55 | Source: nn_transformer.py
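The weighted-sum branch in Example 3 flattens the stacked layer outputs before mixing, which can obscure what it computes. Here is a standalone sketch of the same step, with illustrative names and shapes:

import torch
from torch import nn

num_layers, B, T, D = 4, 2, 50, 8
layer_outputs = torch.randn(num_layers, B, T, D)    # stacked encoder outputs (N, B, T, D)
weight = nn.Parameter(torch.zeros(num_layers))      # learnable mixing weights

w = nn.functional.softmax(weight, dim=-1)           # (N,)
x = torch.matmul(w, layer_outputs.reshape(num_layers, -1)).reshape(B, T, D)

# Equivalent without the flatten/reshape round-trip:
x_alt = (w.view(-1, 1, 1, 1) * layer_outputs).sum(dim=0)
print(torch.allclose(x, x_alt))                     # True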


Note: the torch.nn.ReplicationPad1d examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are drawn from community open-source projects; copyright of the source code belongs to the original authors. Please consult the corresponding project's License before redistributing or reusing the code, and do not reproduce this article without permission.