

Java Dataset.init Method Code Examples

This article compiles typical usage examples of the Java method ncsa.hdf.object.Dataset.init, collected from open-source code. If you are unsure what Dataset.init does, how to call it, or what it looks like in real projects, the selected examples below should help. You can also explore further usage examples of ncsa.hdf.object.Dataset, the class this method belongs to.


Two code examples of Dataset.init are shown below, ordered by popularity by default.
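Before the examples, here is a minimal, self-contained sketch of the pattern both of them rely on: open an H5File, look up a Dataset, call init() so that its rank, dimension and selection arrays are populated, and only then read data. The file name and dataset path below are placeholders, not values taken from the examples.

import ncsa.hdf.object.Dataset;
import ncsa.hdf.object.FileFormat;
import ncsa.hdf.object.h5.H5File;

public class DatasetInitSketch {
    public static void main(String[] args) throws Exception {
        /* "model.h5" and "/trial0/output/all/population" are placeholder names */
        H5File h5 = new H5File("model.h5", FileFormat.READ);
        h5.open();
        try {
            Dataset ds = (Dataset) h5.get("/trial0/output/all/population");
            ds.init();                          // populates rank, dims and the selection arrays
            long[] dims = ds.getDims();         // meaningful only after init()
            Object data = ds.getData();         // reads whatever init() selected by default
            System.out.println("rank=" + ds.getRank() + ", dims[0]=" + dims[0]);
        } finally {
            h5.close();
        }
    }
}

Example 1 below refines the selection arrays after init() so that only a single row is read; example 2 calls init() right after creating a dataset so it can be used through the same object API.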

Example 1: loadPopulationFromTime

import ncsa.hdf.object.Dataset; // import of the package/class this method depends on
private static int[][] loadPopulationFromTime(H5File h5,
                                              int trial,
                                              String output_set,
                                              double pop_from_time)
    throws Exception
{
    String path = "/trial" + trial + "/output/" + output_set;

    final int index;
    {
        double[] times = getSomething(h5, path + "/times");
        if (pop_from_time == -1)
            index = times.length - 1;
        else if (pop_from_time < 0)
            throw new Exception("Time must be nonnegative or -1");
        else {
            index = Arrays.binarySearch(times, pop_from_time);
            if (index < 0)
                throw new Exception("time= " + pop_from_time + " not found "
                                    + "in " + path + "/times");
        }
    }

    String poppath = path + "/population";
    Dataset obj = (Dataset) h5.get(poppath);
    if (obj == null) {
        log.error("Failed to retrieve \"{}\"", path);
        throw new Exception("Path \"" + path + "\" not found");
    }

    /* This is necessary to retrieve dimensions */
    obj.init();

    int rank = obj.getRank();
    long[] dims = obj.getDims();
    long[] start = obj.getStartDims();
    long[] selected = obj.getSelectedDims();
    int[] selectedIndex = obj.getSelectedIndex();

    log.info("Retrieving population from {}:{} row {}", h5, poppath, index);
    log.debug("pristine rank={} dims={} start={} selected={} selectedIndex={}",
              rank, dims, start, selected, selectedIndex);
    start[0] = index;
    selected[0] = 1;
    selected[1] = dims[1];
    selected[2] = dims[2];
    log.debug("selected rank={} dims={} start={} selected={} selectedIndex={}",
              rank, dims, start, selected, selectedIndex);
    int[] data = (int[]) obj.getData();
    int[][] pop = ArrayUtil.reshape(data, (int) dims[1], (int) dims[2]);
    // log.debug("{}", (Object) pop);
    return pop;
}
 
Developer: neurord, Project: stochdiff, Lines of code: 54, Source file: ResultWriterHDF5.java
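getSomething and ArrayUtil.reshape are helpers from the stochdiff project and are not shown here. As an assumption about what the reshape step does (splitting the flat row-major array returned by getData() into rows), a minimal stand-in might look like this:

/* Hypothetical stand-in for ArrayUtil.reshape, not the project's implementation:
 * split a flat row-major int[] of length rows*cols into int[rows][cols]. */
static int[][] reshape(int[] flat, int rows, int cols) {
    if (flat.length != rows * cols)
        throw new IllegalArgumentException("length " + flat.length + " != " + rows + "*" + cols);
    int[][] out = new int[rows][cols];
    for (int i = 0; i < rows; i++)
        System.arraycopy(flat, i * cols, out[i], 0, cols);
    return out;
}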

Example 2: createExtensibleArray

import ncsa.hdf.object.Dataset; // import of the package/class this method depends on
protected H5ScalarDS createExtensibleArray(String name, Group parent, Datatype type,
                                           String TITLE, String LAYOUT, String UNITS,
                                           long... dims)
    throws Exception
{
    long[] maxdims = dims.clone();
    maxdims[0] = H5F_UNLIMITED;
    long[] chunks = dims.clone();

    /* avoid too small chunks */
    chunks[0] = 1;
    if (ArrayUtil.product(chunks) == 0)
        throw new RuntimeException("Empty chunks: " + xJoined(chunks));

    while (ArrayUtil.product(chunks) < 1024)
        chunks[0] *= 2;

    /* do not write any data in the beginning */
    dims[0] = 0;

    /* Create dataspace */
    int filespace_id = H5.H5Screate_simple(dims.length, dims, maxdims);

    /* Create the dataset creation property list and add the shuffle
     * filter and the gzip compression filter. The order in which the
     * filters are added matters: applying the shuffle filter before
     * deflate gives noticeably better compression. Filters are invoked
     * in the order they are added to the property list when the data
     * is written. */
    int dcpl_id = H5.H5Pcreate(HDF5Constants.H5P_DATASET_CREATE);
    H5.H5Pset_shuffle(dcpl_id);
    H5.H5Pset_deflate(dcpl_id, compression_level);
    H5.H5Pset_chunk(dcpl_id, dims.length, chunks);

    /* Create the dataset */
    final String path = parent.getFullName() + "/" + name;
    H5.H5Dcreate(this.output.getFID(), path,
                 type.toNative(), filespace_id,
                 HDF5Constants.H5P_DEFAULT, dcpl_id, HDF5Constants.H5P_DEFAULT);
    Dataset ds = new H5ScalarDS(this.output, path, "/");
    ds.init();

    log.info("Created {} with dims=[{}] size=[{}] chunks=[{}]",
             name, xJoined(dims), xJoined(maxdims), xJoined(chunks));

    setAttribute(ds, "TITLE", TITLE);
    setAttribute(ds, "LAYOUT", LAYOUT);
    setAttribute(ds, "UNITS", UNITS);

    return (H5ScalarDS) ds;
}
 
Developer: neurord, Project: stochdiff, Lines of code: 52, Source file: ResultWriterHDF5.java
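createExtensibleArray leaves the first dimension at zero and unlimited, so rows have to be appended later by extending the dataset. The following is a hedged sketch of how such an append could be done with the low-level ncsa.hdf.hdf5lib.H5 API (the same H5 and HDF5Constants classes used above); it is an illustration under those assumptions, not code taken from the project.

/* Hypothetical helper: append one row of shape [n][m] (flattened row-major
 * into `row`) to an extensible dataset created as above. `dset_id` is an
 * open HDF5 dataset handle, `curRows` the number of rows already written. */
static void appendRow(int dset_id, long curRows, int[] row, long n, long m)
    throws Exception
{
    long[] newDims = { curRows + 1, n, m };
    H5.H5Dset_extent(dset_id, newDims);                 // grow the unlimited dimension

    int filespace = H5.H5Dget_space(dset_id);           // select the newly added row
    long[] start = { curRows, 0, 0 };
    long[] count = { 1, n, m };
    H5.H5Sselect_hyperslab(filespace, HDF5Constants.H5S_SELECT_SET,
                           start, null, count, null);

    int memspace = H5.H5Screate_simple(3, count, count);
    H5.H5Dwrite(dset_id, HDF5Constants.H5T_NATIVE_INT,
                memspace, filespace, HDF5Constants.H5P_DEFAULT, row);

    H5.H5Sclose(memspace);
    H5.H5Sclose(filespace);
}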


Note: The ncsa.hdf.object.Dataset.init method examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets are taken from open-source projects contributed by their respective authors; copyright remains with the original authors, and any redistribution or use should follow the license of the corresponding project. Please do not republish this article without permission.