

Python cudart.check_cuda_status Function Code Examples

This article collects typical usage examples of the Python function quagga.cuda.cudart.check_cuda_status. If you have been wondering what check_cuda_status does and how it is used in practice, the curated examples below should help.


Fifteen code examples of the check_cuda_status function are shown below, sorted by popularity by default.
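All fifteen examples follow the same pattern: a C kernel returns a CUDA status code, and check_cuda_status raises if that code is non-zero. As a rough, hypothetical sketch of that pattern (quagga's actual implementation may differ; the CudaError name and message format are assumptions):

```python
class CudaError(Exception):
    """Raised when a CUDA runtime call returns a non-zero status code."""

def check_cuda_status(status):
    # cudaSuccess is 0; any other code signals a runtime error
    if status != 0:
        raise CudaError('CUDA runtime error, status code: {}'.format(status))
```

Centralizing the check this way keeps each kernel wrapper down to two lines, as the examples below show.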

Example 1: assign_sequential_weighted_sum

def assign_sequential_weighted_sum(stream, nrows, ncols, matrices, weights, n, out):
    status = gpu_matrix_kernels._assignSequentialWeightedSum(stream, nrows, ncols, matrices, weights, n, out)
    cudart.check_cuda_status(status)
Developer: Sandy4321, Project: quagga, Lines of code: 3, Source: gpu_matrix_kernels.py

Example 2: transpose_int

def transpose_int(stream, nrows, ncols, in_, out):
    status = gpu_matrix_kernels._transposeInt(stream, nrows, ncols, in_, out)
    cudart.check_cuda_status(status)
Developer: Sandy4321, Project: quagga, Lines of code: 3, Source: gpu_matrix_kernels.py

Example 3: callback

def callback(stream, status, user_data):
    cudart.check_cuda_status(status)
    args, kwargs = ct.cast(user_data, ct_py_object_p).contents.value
    function(*args, **kwargs)
    GpuContext._user_data[ct.cast(stream, ct.c_void_p).value].popleft()
Developer: Sandy4321, Project: quagga, Lines of code: 5, Source: GpuContext.py
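Example 3 smuggles Python arguments through the C callback's void* user_data parameter using ctypes. The packing side, which the excerpt does not show, might look like this (a sketch; `ct_py_object_p` mirrors the alias used in the example above, and the payload values are made up):

```python
import ctypes as ct

ct_py_object_p = ct.POINTER(ct.py_object)

# Pack (args, kwargs) into a ctypes py_object so the pair survives a
# round trip through an opaque void* parameter.
payload = ct.py_object((('hello',), {'n': 2}))
user_data = ct.cast(ct.pointer(payload), ct.c_void_p)

# On the callback side, recover the original pair:
args, kwargs = ct.cast(user_data, ct_py_object_p).contents.value
```

Note that the payload must be kept alive for as long as the callback can fire: the void* only borrows the py_object and does not keep it from being garbage-collected.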

Example 4: add_repeat_along_col_derivative

def add_repeat_along_col_derivative(stream, repeats, a, nrows, ncols, derivative):
    status = gpu_matrix_kernels._addRepeatAlongColDerivative(stream, repeats, a, nrows, ncols, derivative)
    cudart.check_cuda_status(status)
Developer: Sandy4321, Project: quagga, Lines of code: 3, Source: gpu_matrix_kernels.py

Example 5: add_scaled_div_sqrt

def add_scaled_div_sqrt(stream, nelems, alpha, a, b, epsilon, c):
    status = gpu_matrix_kernels._addScaledDivSqrt(stream, nelems, alpha, a, b, epsilon, c)
    cudart.check_cuda_status(status)
Developer: Sandy4321, Project: quagga, Lines of code: 3, Source: gpu_matrix_kernels.py

Example 6: matrix_vector_column_hprod

def matrix_vector_column_hprod(stream, nrows, ncols, matrix, vector, out):
    status = gpu_matrix_kernels._matrixVectorColumnHprod(stream, nrows, ncols, matrix, vector, out)
    cudart.check_cuda_status(status)
Developer: Sandy4321, Project: quagga, Lines of code: 3, Source: gpu_matrix_kernels.py

Example 7: batch_horizontal_split

def batch_horizontal_split(stream, n, nrows, x_ncols, y_ncols, matrices, x_matrices, y_matrices):
    status = gpu_matrix_kernels._batchHorizontalSplit(stream, n, nrows, x_ncols, y_ncols, matrices, x_matrices, y_matrices)
    cudart.check_cuda_status(status)
Developer: Sandy4321, Project: quagga, Lines of code: 3, Source: gpu_matrix_kernels.py

Example 8: sliced_rows_batch_scaled_add

def sliced_rows_batch_scaled_add(stream, embd_rows_indxs, nrows, ncols, alpha, dense_matrices, embd_nrows, embd_ncols, embd_matrix):
    status = gpu_matrix_kernels._slicedRowsBatchScaledAdd(stream, embd_rows_indxs, nrows, ncols, alpha, dense_matrices, embd_nrows, embd_ncols, embd_matrix)
    cudart.check_cuda_status(status)
Developer: Sandy4321, Project: quagga, Lines of code: 3, Source: gpu_matrix_kernels.py

Example 9: assign_scaled_addition

def assign_scaled_addition(stream, nelems, alpha, a, b, out):
    status = gpu_matrix_kernels._assignScaledAddition(stream, nelems, alpha, a, b, out)
    cudart.check_cuda_status(status)
Developer: Sandy4321, Project: quagga, Lines of code: 3, Source: gpu_matrix_kernels.py

Example 10: add_attention_tile

def add_attention_tile(stream, nrows, ncols, derivative, a, dL_dpre_a, u, n, matrices_derivs):
    status = gpu_matrix_kernels._addAttentionTile(stream, nrows, ncols, derivative, a, dL_dpre_a, u, n, matrices_derivs)
    cudart.check_cuda_status(status)
Developer: Sandy4321, Project: quagga, Lines of code: 3, Source: gpu_matrix_kernels.py

Example 11: slice_rows_batch

def slice_rows_batch(stream, embd_rows_indxs, nrows, ncols, embd_matrix, embd_nrows, embd_ncols, dense_matrices):
    status = gpu_matrix_kernels._sliceRowsBatch(stream, embd_rows_indxs, nrows, ncols, embd_matrix, embd_nrows, embd_ncols, dense_matrices)
    cudart.check_cuda_status(status)
Developer: Sandy4321, Project: quagga, Lines of code: 3, Source: gpu_matrix_kernels.py

Example 12: add_attention_derivative

def add_attention_derivative(stream, nrows, ncols, matrices, derivative, n, out):
    status = gpu_matrix_kernels._addAttentionDerivative(stream, nrows, ncols, matrices, derivative, n, out)
    cudart.check_cuda_status(status)
Developer: Sandy4321, Project: quagga, Lines of code: 3, Source: gpu_matrix_kernels.py

Example 13: assign_dL_dpre_a

def assign_dL_dpre_a(stream, nrows, ncols, matrices, derivative, weights, n, out):
    status = gpu_matrix_kernels._assignDLDprea(stream, nrows, ncols, matrices, derivative, weights, n, out)
    cudart.check_cuda_status(status)
Developer: Sandy4321, Project: quagga, Lines of code: 3, Source: gpu_matrix_kernels.py

Example 14: sequentially_tile

def sequentially_tile(stream, nelems, a, matrices, n):
    status = gpu_matrix_kernels._sequentiallyTile(stream, nelems, a, matrices, n)
    cudart.check_cuda_status(status)
Developer: Sandy4321, Project: quagga, Lines of code: 3, Source: gpu_matrix_kernels.py

Example 15: add_hprod_one_minus_mask

def add_hprod_one_minus_mask(stream, nelems, mask, a, out):
    status = gpu_matrix_kernels._addHprodOneMinusMask(stream, nelems, mask, a, out)
    cudart.check_cuda_status(status)
Developer: Sandy4321, Project: quagga, Lines of code: 3, Source: gpu_matrix_kernels.py


Note: The quagga.cuda.cudart.check_cuda_status examples in this article were compiled by 纯净天空 from open-source code and documentation hosted on platforms such as GitHub and MSDocs. The snippets are selected from open-source projects; copyright remains with the original authors, and any use or redistribution is subject to each project's license. Do not reproduce without permission.