

Java WritableSerialization Class Code Examples

This article collects typical usage examples of the Java class org.apache.hadoop.io.serializer.WritableSerialization. If you are wondering what WritableSerialization does, or how and where to use it, the curated examples below should help.


The WritableSerialization class belongs to the org.apache.hadoop.io.serializer package. Six code examples are shown below, drawn from open-source projects and ordered by popularity.
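
Before the project examples, here is a minimal round-trip sketch showing how WritableSerialization is typically reached through Hadoop's SerializationFactory rather than instantiated directly. This is an illustrative sketch, not taken from any of the projects below; the class and variable names (WritableSerializationRoundTrip, factory, and so on) are our own.

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.serializer.Deserializer;
import org.apache.hadoop.io.serializer.SerializationFactory;
import org.apache.hadoop.io.serializer.Serializer;

public class WritableSerializationRoundTrip {
  public static void main(String[] args) throws Exception {
    // WritableSerialization is part of the default io.serializations value,
    // so a plain Configuration already resolves Writable types.
    Configuration conf = new Configuration();
    SerializationFactory factory = new SerializationFactory(conf);

    // Serialize an IntWritable into a byte buffer.
    Serializer<IntWritable> serializer = factory.getSerializer(IntWritable.class);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    serializer.open(out);
    serializer.serialize(new IntWritable(42));
    serializer.close();

    // Read it back; passing null lets the deserializer allocate a new instance.
    Deserializer<IntWritable> deserializer = factory.getDeserializer(IntWritable.class);
    deserializer.open(new ByteArrayInputStream(out.toByteArray()));
    IntWritable value = deserializer.deserialize(null);
    deserializer.close();

    System.out.println(value.get()); // prints 42
  }
}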

Example 1: testTotalOrderWithCustomSerialization

import org.apache.hadoop.io.serializer.WritableSerialization; // import the required package/class
public void testTotalOrderWithCustomSerialization() throws Exception {
  TotalOrderPartitioner<String, NullWritable> partitioner =
      new TotalOrderPartitioner<String, NullWritable>();
  Configuration conf = new Configuration();
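  // io.serializations is consulted in order: Java serialization is listed first
  // so the String keys resolve to it, while WritableSerialization remains
  // available for the NullWritable values.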
  conf.setStrings(CommonConfigurationKeys.IO_SERIALIZATIONS_KEY,
      JavaSerialization.class.getName(),
      WritableSerialization.class.getName());
  conf.setClass(MRJobConfig.KEY_COMPARATOR,
      JavaSerializationComparator.class,
      Comparator.class);
  Path p = TestTotalOrderPartitioner.<String>writePartitionFile(
      "totalordercustomserialization", conf, splitJavaStrings);
  conf.setClass(MRJobConfig.MAP_OUTPUT_KEY_CLASS, String.class, Object.class);
  try {
    partitioner.setConf(conf);
    NullWritable nw = NullWritable.get();
    for (Check<String> chk : testJavaStrings) {
      assertEquals(chk.data.toString(), chk.part,
          partitioner.getPartition(chk.data, nw, splitJavaStrings.length + 1));
    }
  } finally {
    p.getFileSystem(conf).delete(p, true);
  }
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 25, Source: TestTotalOrderPartitioner.java

Example 2: getSerialization

import org.apache.hadoop.io.serializer.WritableSerialization; // import the required package/class
/**
 * Gets serializer for specified class.
 *
 * @param cls Class.
 * @param jobConf Job configuration.
 * @return Appropriate serializer.
 */
@SuppressWarnings("unchecked")
private HadoopSerialization getSerialization(Class<?> cls, Configuration jobConf) throws IgniteCheckedException {
    A.notNull(cls, "cls");

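    // SerializationFactory iterates the classes configured under io.serializations
    // and returns the first one whose accept() matches cls, or null if none does.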
    SerializationFactory factory = new SerializationFactory(jobConf);

    Serialization<?> serialization = factory.getSerialization(cls);

    if (serialization == null)
        throw new IgniteCheckedException("Failed to find serialization for: " + cls.getName());

    if (serialization.getClass() == WritableSerialization.class)
        return new HadoopWritableSerialization((Class<? extends Writable>)cls);

    return new HadoopSerializationWrapper(serialization, cls);
}
 
Developer: apache, Project: ignite, Lines: 24, Source: HadoopV2TaskContext.java

Example 3: AppWorkerContainer

import org.apache.hadoop.io.serializer.WritableSerialization; // import the required package/class
public AppWorkerContainer(AppConfig config) {
  this.config = config;
  this.appContainerInfoHolder = new AppContainerInfoHolder(config.getAppWorkerContainerId());
  try {
    // io.serializations takes a comma-separated list of Serialization classes.
    Configuration rpcConf = new Configuration();
    rpcConf.set(
        CommonConfigurationKeys.IO_SERIALIZATIONS_KEY,
        JavaSerialization.class.getName() + "," +
            WritableSerialization.class.getName() + "," +
            AvroSerialization.class.getName());

    rpcClient = new RPCClient(config.appHostName, config.appRpcPort);
    ipcService = IPCService.newBlockingStub(rpcClient.getRPCChannel());

    @SuppressWarnings("unchecked")
    Class<AppWorker> appWorkerClass = (Class<AppWorker>) Class.forName(config.worker);
    worker = appWorkerClass.newInstance();
  } catch (Throwable error) {
    LOGGER.error("Error", error);
    onDestroy();
  }
}
 
Developer: DemandCube, Project: NeverwinterDP-Commons, Lines: 23, Source: AppWorkerContainer.java

Example 4: register

import org.apache.hadoop.io.serializer.WritableSerialization; // import the required package/class
public static void register(Configuration conf) {
  String[] serializations = conf.getStrings("io.serializations");
  if (ArrayUtils.isEmpty(serializations)) {
    serializations = new String[]{WritableSerialization.class.getName(),
        AvroSpecificSerialization.class.getName(),
        AvroReflectSerialization.class.getName()};
  }
  serializations = (String[]) ArrayUtils.add(serializations, ProtobufSerialization.class.getName());
  conf.setStrings("io.serializations", serializations);
}
 
Developer: Hanmourang, Project: hiped2, Lines: 11, Source: ProtobufSerialization.java
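
A hypothetical call site for the helper above (assuming hiped2's ProtobufSerialization is on the classpath) could look like this:

Configuration conf = new Configuration();
ProtobufSerialization.register(conf);
// io.serializations now ends with ProtobufSerialization, appended after the
// defaults (or after whatever serializations were already configured).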

Example 5: testIntWritableSerialization

import org.apache.hadoop.io.serializer.WritableSerialization; // import the required package/class
/**
 * Tests read/write of IntWritable via native WritableSerialization.
 * @throws Exception If fails.
 */
public void testIntWritableSerialization() throws Exception {
    HadoopSerialization ser = new HadoopSerializationWrapper(new WritableSerialization(), IntWritable.class);

    ByteArrayOutputStream buf = new ByteArrayOutputStream();

    DataOutput out = new DataOutputStream(buf);

    ser.write(out, new IntWritable(3));
    ser.write(out, new IntWritable(-5));

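    // IntWritable serializes via DataOutput.writeInt, i.e. 4-byte big-endian:
    // 3 -> [0, 0, 0, 3] and -5 (0xFFFFFFFB) -> [-1, -1, -1, -5].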
    assertEquals("[0, 0, 0, 3, -1, -1, -1, -5]", Arrays.toString(buf.toByteArray()));

    DataInput in = new DataInputStream(new ByteArrayInputStream(buf.toByteArray()));

    assertEquals(3, ((IntWritable)ser.read(in, null)).get());
    assertEquals(-5, ((IntWritable)ser.read(in, null)).get());
}
 
Developer: apache, Project: ignite, Lines: 22, Source: HadoopSerializationWrapperSelfTest.java

Example 6: sweep

import org.apache.hadoop.io.serializer.WritableSerialization; // import the required package/class
/**
 * Runs a MapReduce job to sweep the mob files.
 * Runs of the sweep tool on the same column family are mutually exclusive,
 * and an HBase major compaction and a sweep of the same column family are
 * mutually exclusive as well. This synchronization is done via ZooKeeper:
 * before starting, we must make sure this sweep tool is the only one running
 * on this column family, and that no major compaction of this column family
 * is in progress.
 * @param tn The current table name.
 * @param family The descriptor of the current column family.
 * @throws IOException
 * @throws ClassNotFoundException
 * @throws InterruptedException
 * @throws KeeperException
 */
public void sweep(TableName tn, HColumnDescriptor family) throws IOException, ClassNotFoundException,
    InterruptedException, KeeperException {
  Configuration conf = new Configuration(this.conf);
  // check whether the current user is the same one with the owner of hbase root
  String currentUserName = UserGroupInformation.getCurrentUser().getShortUserName();
  FileStatus[] hbaseRootFileStat = fs.listStatus(new Path(conf.get(HConstants.HBASE_DIR)));
  if (hbaseRootFileStat.length > 0) {
    String owner = hbaseRootFileStat[0].getOwner();
    if (!owner.equals(currentUserName)) {
      String errorMsg = "The current user[" + currentUserName + "] doesn't have the privilege."
          + " Please make sure the user is the root of the target HBase";
      LOG.error(errorMsg);
      throw new IOException(errorMsg);
    }
  } else {
    LOG.error("The target HBase doesn't exist");
    throw new IOException("The target HBase doesn't exist");
  }
  String familyName = family.getNameAsString();
  Job job = null;
  try {
    Scan scan = new Scan();
    // Do not retrieve the mob data when scanning
    scan.setAttribute(MobConstants.MOB_SCAN_RAW, Bytes.toBytes(Boolean.TRUE));
    scan.setFilter(new ReferenceOnlyFilter());
    scan.setCaching(10000);
    scan.setCacheBlocks(false);
    scan.setMaxVersions(family.getMaxVersions());
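    // Register both Java and Writable serializations so the job can handle
    // plain Java key/value types alongside Writable ones.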
    conf.set(CommonConfigurationKeys.IO_SERIALIZATIONS_KEY,
        JavaSerialization.class.getName() + "," + WritableSerialization.class.getName());
    job = prepareJob(tn, familyName, scan, conf);
    job.getConfiguration().set(TableInputFormat.SCAN_COLUMN_FAMILY, familyName);
    // Record the compaction start time.
    // In the sweep tool, only the mob file whose modification time is older than
    // (startTime - delay) could be handled by this tool.
    // The delay is one day. It could be configured as well, but this is only used
    // in the test.
    job.getConfiguration().setLong(MobConstants.MOB_COMPACTION_START_DATE,
        compactionStartTime);

    job.setPartitionerClass(MobFilePathHashPartitioner.class);
    submit(job, tn, familyName);
    if (job.waitForCompletion(true)) {
      // Archive the unused mob files.
      removeUnusedFiles(job, tn, family);
    }
  } finally {
    cleanup(job, tn, familyName);
  }
}
 
Developer: intel-hadoop, Project: HBase-LOB, Lines: 67, Source: SweepJob.java


Note: The org.apache.hadoop.io.serializer.WritableSerialization examples in this article were compiled from open-source projects hosted on GitHub and similar platforms. Copyright of each snippet remains with its original authors; consult the corresponding project's license before redistributing.