This article collects typical usage examples of the Python method nets.perspective_transform.transformer. If you are wondering how to use perspective_transform.transformer in Python, the curated code examples below may help. You can also explore the other members of the nets.perspective_transform module.
One code example of perspective_transform.transformer is shown below.
Example 1: model
# Required import: from nets import perspective_transform [as alias]
# or: from nets.perspective_transform import transformer [as alias]
# The code below also assumes: import tensorflow as tf
def model(voxels, transform_matrix, params, is_training):
"""Model transforming the 3D voxels into 2D projections.
Args:
voxels: A tensor of size [batch, depth, height, width, channel]
representing the input of projection layer (tf.float32).
transform_matrix: A tensor of size [batch, 16] representing
the flattened 4-by-4 matrix for transformation (tf.float32).
params: Model parameters (dict).
  is_training: Set to True while training (boolean).
Returns:
A transformed tensor (tf.float32)
"""
del is_training # Doesn't make a difference for projector
  # Rearrange (batch, z, y, x, channel) --> (batch, y, z, x, channel).
  # By convention the projection happens along the z-axis, but the voxels
  # are stored with z first, so the y and z axes are swapped before the
  # transformation operation.
voxels = tf.transpose(voxels, [0, 2, 1, 3, 4])
z_near = params.focal_length
z_far = params.focal_length + params.focal_range
transformed_voxels = perspective_transform.transformer(
voxels, transform_matrix, [params.vox_size] * 3, z_near, z_far)
views = tf.reduce_max(transformed_voxels, [1])
views = tf.reverse(views, [1])
return views