

C++ Kernel::getWorkGroupInfo Method Code Examples

This article collects typical usage examples of the C++ method `cl::Kernel::getWorkGroupInfo`. If you are wondering how `Kernel::getWorkGroupInfo` is used in practice, or are looking for concrete examples, the curated snippets below may help. You can also explore further usage examples of the containing class, `cl::Kernel`.


Three code examples of the `Kernel::getWorkGroupInfo` method are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better C++ code examples.

Example 1: cl_minmax_local_size

/// \brief Query the preferred local size factor and its limits for a kernel
/// \ingroup OpenCL
///
/// \param kern An OpenCL kernel
/// \param dev An OpenCL device
/// \param factor Preferred multiple of the local size for optimized performance
/// \param lmax Maximum local (work-group) size
/// \param mmax Maximum multiplier of the factor (lmax / factor)
inline void cl_minmax_local_size (
        const ::cl::Kernel &kern, const ::cl::Device &dev,
        std::size_t &factor, std::size_t &lmax, std::size_t &mmax)
{
    try {
        kern.getWorkGroupInfo(dev,
                CL_KERNEL_PREFERRED_WORK_GROUP_SIZE_MULTIPLE, &factor);
        kern.getWorkGroupInfo(dev,
                CL_KERNEL_WORK_GROUP_SIZE, &lmax);
        if (factor == 0 || factor > lmax) {
            factor = lmax = mmax = 0;
            return;
        }
        mmax = lmax / factor;
    } catch (const ::cl::Error &) {
        factor = lmax = mmax = 0;
    }
}
Developer: Soledad89, Project: vSMC, Lines: 26, Source: cl_manip.hpp

Example 2: cl_preferred_work_size

/// \brief Compute the preferred global and local sizes for N work items
/// \ingroup OpenCL
///
/// \return The difference between the chosen global size and N (the padding)
inline std::size_t cl_preferred_work_size (std::size_t N,
        const ::cl::Kernel &kern, const ::cl::Device &dev,
        std::size_t &global_size, std::size_t &local_size)
{
    cl::size_t<3> reqd_size;
    try {
        kern.getWorkGroupInfo(dev,
                CL_KERNEL_COMPILE_WORK_GROUP_SIZE, &reqd_size);
    } catch (const ::cl::Error &) {
        reqd_size[0] = 0;
    }

    if (reqd_size[0] != 0) {
        local_size = reqd_size[0];
        global_size = cl_min_global_size(N, local_size);

        return global_size - N;
    }

    std::size_t factor;
    std::size_t lmax;
    std::size_t mmax;
    cl_minmax_local_size(kern, dev, factor, lmax, mmax);
    if (lmax == 0) {
        global_size = N;
        local_size = 0;

        return global_size - N;
    }

    local_size = lmax;
    global_size = cl_min_global_size(N, local_size);
    std::size_t diff_size = global_size - N;
    for (std::size_t m = mmax; m >= 1; --m) {
        std::size_t l = m * factor;
        std::size_t g = cl_min_global_size(N, l);
        std::size_t d = g - N;
        if (d < diff_size) {
            local_size = l;
            global_size = g;
            diff_size = d;
        }
    }

    return diff_size;
}
Developer: Soledad89, Project: vSMC, Lines: 50, Source: cl_manip.hpp

Example 3: calculateSpaceNeededForClosePlanes

size_t VNNclAlgorithm::calculateSpaceNeededForClosePlanes(cl::Kernel kernel, cl::Device device, size_t local_work_size, size_t nPlanes_numberOfInputImages, int nClosePlanes)
{
	// Find out how much local memory the device has
	size_t dev_local_mem_size;
	dev_local_mem_size = device.getInfo<CL_DEVICE_LOCAL_MEM_SIZE>();

	// Find the maximum work group size
	size_t max_work_size;
	kernel.getWorkGroupInfo(device, CL_KERNEL_WORK_GROUP_SIZE, &max_work_size);

	// Now find the largest multiple of the preferred work group size that will fit into local mem
	size_t constant_local_mem = sizeof(cl_float) * 4 * nPlanes_numberOfInputImages;

	size_t varying_local_mem = (sizeof(cl_float) + sizeof(cl_short) + sizeof(cl_uchar) + sizeof(cl_uchar)) * (nClosePlanes + 1);  //see _close_plane struct in kernels.cl
	report(QString("Device has %1 bytes of local memory").arg(dev_local_mem_size));
	dev_local_mem_size -= constant_local_mem + 128; // constant usage plus a 128-byte safety margin (the original comment questioned this constant)

	// How many work items can the local mem support?
	size_t maxItems = dev_local_mem_size / varying_local_mem;
	// And what is the biggest multiple of local_work_size that fits into that?
	int multiple = maxItems / local_work_size;

	if(multiple == 0)
	{
		// If the maximum number of work items is smaller than the preferred multiple, we end up here.
		// This means local memory use is so large that not even one preferred multiple fits, so we
		// have to use a sub-optimal local work size.
		local_work_size = std::min(max_work_size, maxItems);
	}
	else
	{
		// Otherwise, we make it fit into the local work size.
		local_work_size = std::min(max_work_size, multiple * local_work_size);
	}

	size_t close_planes_size = varying_local_mem * local_work_size;

	return close_planes_size;
}
Developer: SINTEFMedtek, Project: CustusX, Lines: 39, Source: cxVNNclAlgorithm.cpp


Note: The cl::Kernel::getWorkGroupInfo examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers; copyright remains with the original authors. For distribution and use, refer to each project's license. Do not reproduce without permission.