

Java RecordWriter Class Code Examples

This article collects typical usage examples of the Java class org.apache.hadoop.mapred.RecordWriter. If you are wondering what the RecordWriter class is for, how to use it, or what real-world usage looks like, the curated class examples below should help.


The RecordWriter class belongs to the org.apache.hadoop.mapred package. A total of 15 code examples of the class are shown below, sorted by popularity by default.
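
In the old mapred API, RecordWriter is a small interface with exactly two methods: write(K, V) emits one key/value pair, and close(Reporter) flushes and releases resources. As a quick orientation before the examples, here is a minimal sketch of that contract; the class name and the tab-separated output format are illustrative assumptions, not taken from any project below.

import java.io.DataOutputStream;
import java.io.IOException;

import org.apache.hadoop.mapred.RecordWriter;
import org.apache.hadoop.mapred.Reporter;

// Minimal illustration of the RecordWriter contract: write one pair per call,
// release the underlying stream on close.
public class TabSeparatedRecordWriter<K, V> implements RecordWriter<K, V> {
  private final DataOutputStream out;

  public TabSeparatedRecordWriter(DataOutputStream out) {
    this.out = out;
  }

  public void write(K key, V value) throws IOException {
    out.writeBytes(key + "\t" + value + "\n");
  }

  public void close(Reporter reporter) throws IOException {
    out.close(); // flush and release the stream
  }
}

Every example that follows is a variation on this contract, differing only in where the records go (database, HDFS file, HBase table, search index, in-memory map).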

Example 1: getRecordWriter

import org.apache.hadoop.mapred.RecordWriter; // import the required package/class
/** {@inheritDoc} */
public RecordWriter<K, V> getRecordWriter(FileSystem filesystem,
    JobConf job, String name, Progressable progress) throws IOException {
  org.apache.hadoop.mapreduce.RecordWriter<K, V> w = super.getRecordWriter(
    new TaskAttemptContextImpl(job, 
          TaskAttemptID.forName(job.get(MRJobConfig.TASK_ATTEMPT_ID))));
  org.apache.hadoop.mapreduce.lib.db.DBOutputFormat.DBRecordWriter writer = 
   (org.apache.hadoop.mapreduce.lib.db.DBOutputFormat.DBRecordWriter) w;
  try {
    return new DBRecordWriter(writer.getConnection(), writer.getStatement());
  } catch(SQLException se) {
    throw new IOException(se);
  }
}
 
Developer: naver, Project: hadoop, Lines: 15, Source file: DBOutputFormat.java

Example 2: getRecordWriter

import org.apache.hadoop.mapred.RecordWriter; // import the required package/class
public RecordWriter<WritableComparable<?>, Writable> getRecordWriter(
    final FileSystem fs, JobConf job, String name,
    final Progressable progress) throws IOException {

  final Path segmentDumpFile = new Path(
      FileOutputFormat.getOutputPath(job), name);

  // Get the old copy out of the way
  if (fs.exists(segmentDumpFile))
    fs.delete(segmentDumpFile, true);

  final PrintStream printStream = new PrintStream(
      fs.create(segmentDumpFile));
  return new RecordWriter<WritableComparable<?>, Writable>() {
    public synchronized void write(WritableComparable<?> key, Writable value)
        throws IOException {
      printStream.println(value);
    }

    public synchronized void close(Reporter reporter) throws IOException {
      printStream.close();
    }
  };
}
 
Developer: jorcox, Project: GeoCrawler, Lines: 25, Source file: SegmentReader.java

Example 3: getRecordWriter

import org.apache.hadoop.mapred.RecordWriter; // import the required package/class
@Override
public RecordWriter<Text, NutchIndexAction> getRecordWriter(
    FileSystem ignored, JobConf job, String name, Progressable progress)
    throws IOException {

  final IndexWriters writers = new IndexWriters(job);

  writers.open(job, name);

  return new RecordWriter<Text, NutchIndexAction>() {

    public void close(Reporter reporter) throws IOException {
      writers.close();
    }

    public void write(Text key, NutchIndexAction indexAction)
        throws IOException {
      if (indexAction.action == NutchIndexAction.ADD) {
        writers.write(indexAction.doc);
      } else if (indexAction.action == NutchIndexAction.DELETE) {
        writers.delete(key.toString());
      }
    }
  };
}
 
Developer: jorcox, Project: GeoCrawler, Lines: 26, Source file: IndexerOutputFormat.java

Example 4: getRecordWriter

import org.apache.hadoop.mapred.RecordWriter; // import the required package/class
@Override
public RecordWriter<NullWritable, SpreadSheetCellDAO> getRecordWriter(FileSystem ignored, JobConf conf, String name, Progressable progress) throws IOException {
  // check if mimeType is set; if not, assume the new Excel format (.xlsx)
  String defaultConf = conf.get(HadoopOfficeWriteConfiguration.CONF_MIMETYPE, ExcelFileOutputFormat.DEFAULT_MIMETYPE);
  conf.set(HadoopOfficeWriteConfiguration.CONF_MIMETYPE, defaultConf);

  Path file = getTaskOutputPath(conf, name);
  // add the suffix matching the configured MIME type
  file = file.suffix(ExcelFileOutputFormat.getSuffix(conf.get(HadoopOfficeWriteConfiguration.CONF_MIMETYPE)));
  try {
    return new ExcelRecordWriter<>(
        HadoopUtil.getDataOutputStream(conf, file, progress, getCompressOutput(conf),
            getOutputCompressorClass(conf, ExcelFileOutputFormat.defaultCompressorClass)),
        file.getName(), conf);
  } catch (InvalidWriterConfigurationException | OfficeWriterException e) {
    LOG.error(e);
  }
  return null;
}
 
Developer: ZuInnoTe, Project: hadoopoffice, Lines: 19, Source file: ExcelFileOutputFormat.java

Example 5: close

import org.apache.hadoop.mapred.RecordWriter; // import the required package/class
@SuppressWarnings("rawtypes")
public void close(final RecordWriter recordWriter, final Reporter reporter) throws IOException {
  throwCaughtException();

  closePool.execute(new Runnable() {
    @Override
    public void run() {
      try {
        long start = time.getNanoTime();
        recordWriter.close(reporter);
        long duration = time.getTimeSinceMs(start);
        log.info("Flushed file in " + (duration / 1000.0) + " seconds.");
      } catch (Throwable e) {
        log.error("Exeption caught while closing stream. This exception will be thrown later.",
            e);
        exception = e;
      }

    }
  });
}
 
Developer: awslabs, Project: emr-dynamodb-connector, Lines: 22, Source file: ExportFileFlusher.java
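
The method above defers the potentially slow close to a background thread and surfaces any failure later via throwCaughtException(). Here is a stand-alone sketch of the same deferred-close idea; the names (AsyncCloser, closeAsync) are hypothetical and not part of the emr-dynamodb-connector.

import java.io.Closeable;
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

// Deferred-close pattern: closes run on a background thread; any failure is
// remembered and rethrown to the caller on a later operation.
public class AsyncCloser {
  private final ExecutorService pool = Executors.newSingleThreadExecutor();
  private final AtomicReference<Throwable> failure = new AtomicReference<>();

  public void closeAsync(Closeable c) throws IOException {
    rethrowIfFailed(); // surface an earlier background failure first
    pool.execute(() -> {
      try {
        c.close();
      } catch (Throwable t) {
        failure.compareAndSet(null, t); // keep only the first failure
      }
    });
  }

  public void shutdown() throws IOException {
    pool.shutdown();
    try {
      pool.awaitTermination(1, TimeUnit.MINUTES);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
    rethrowIfFailed();
  }

  private void rethrowIfFailed() throws IOException {
    Throwable t = failure.get();
    if (t != null) {
      throw new IOException("Deferred close failed", t);
    }
  }
}

The design choice is the same in both: closing a writer can block on a remote flush, so overlapping it with the production of the next file improves throughput, at the cost of reporting the error later than it occurred.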

Example 6: getRecordWriter

import org.apache.hadoop.mapred.RecordWriter; // import the required package/class
@Override
public RecordWriter<NullWritable, DynamoDBItemWritable> getRecordWriter(FileSystem ignored,
    JobConf job, String name, Progressable progress) throws IOException {
  boolean isCompressed = getCompressOutput(job);
  CompressionCodec codec = null;
  String extension = "";
  DataOutputStream fileOut;

  if (isCompressed) {
    Class<? extends CompressionCodec> codecClass = getOutputCompressorClass(job, GzipCodec.class);
    codec = ReflectionUtils.newInstance(codecClass, job);
    extension = codec.getDefaultExtension();
  }

  Path file = new Path(FileOutputFormat.getOutputPath(job), name + extension);
  FileSystem fs = file.getFileSystem(job);

  if (!isCompressed) {
    fileOut = fs.create(file, progress);
  } else {
    fileOut = new DataOutputStream(codec.createOutputStream(fs.create(file, progress)));
  }

  return new ExportRecordWriter(fileOut);
}
 
Developer: awslabs, Project: emr-dynamodb-connector, Lines: 26, Source file: ExportOutputFormat.java
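
Example 6 only reads the compression settings back from the JobConf; whether output is compressed at all is decided on the driver side. A minimal sketch of that setup, assuming the standard mapred FileOutputFormat helpers (the class and method names here are illustrative):

import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobConf;

// Hypothetical job setup: these two settings are what getCompressOutput(job)
// and getOutputCompressorClass(job, GzipCodec.class) read back in Example 6.
public class CompressionSetup {
  public static void enableGzipOutput(JobConf job) {
    FileOutputFormat.setCompressOutput(job, true);
    FileOutputFormat.setOutputCompressorClass(job, GzipCodec.class);
  }
}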

Example 7: testWriteBufferData

import org.apache.hadoop.mapred.RecordWriter; // import the required package/class
@Test(enabled = true)
public void testWriteBufferData() throws Exception {
  NullWritable nada = NullWritable.get();
  MneDurableOutputSession<DurableBuffer<?>> sess =
      new MneDurableOutputSession<DurableBuffer<?>>(null, m_conf,
          MneConfigHelper.DEFAULT_OUTPUT_CONFIG_PREFIX);
  MneDurableOutputValue<DurableBuffer<?>> mdvalue =
      new MneDurableOutputValue<DurableBuffer<?>>(sess);
  OutputFormat<NullWritable, MneDurableOutputValue<DurableBuffer<?>>> outputFormat =
      new MneOutputFormat<MneDurableOutputValue<DurableBuffer<?>>>();
  RecordWriter<NullWritable, MneDurableOutputValue<DurableBuffer<?>>> writer =
      outputFormat.getRecordWriter(m_fs, m_conf, null, null);
  DurableBuffer<?> dbuf = null;
  Checksum cs = new CRC32();
  cs.reset();
  for (int i = 0; i < m_reccnt; ++i) {
    dbuf = genupdDurableBuffer(sess, cs);
    Assert.assertNotNull(dbuf);
    writer.write(nada, mdvalue.of(dbuf));
  }
  m_checksum = cs.getValue();
  writer.close(null);
  sess.close();
}
 
Developer: apache, Project: mnemonic, Lines: 25, Source file: MneMapredBufferDataTest.java

Example 8: getRecordWriter

import org.apache.hadoop.mapred.RecordWriter; // import the required package/class
@Override
@SuppressWarnings("unchecked")
public RecordWriter getRecordWriter(FileSystem ignored,
    JobConf job, String name, Progressable progress) throws IOException {

  // expecting exactly one path

  String tableName = job.get(OUTPUT_TABLE);
  HTable table = null;
  try {
    table = new HTable(HBaseConfiguration.create(job), tableName);
  } catch(IOException e) {
    LOG.error(e);
    throw e;
  }
  table.setAutoFlush(false);
  return new TableRecordWriter(table);
}
 
Developer: fengchen8086, Project: LCIndex-HBase-0.94.16, Lines: 19, Source file: TableOutputFormat.java
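
Note the setAutoFlush(false) call: it lets the HTable client buffer Puts and send them in batches rather than one RPC per write, which matters for bulk output jobs. The table name comes from the OUTPUT_TABLE configuration key; a minimal, hypothetical driver-side sketch of that wiring (assuming the old org.apache.hadoop.hbase.mapred API used in this example):

import org.apache.hadoop.hbase.mapred.TableOutputFormat;
import org.apache.hadoop.mapred.JobConf;

// Hypothetical wiring: store the destination table name under the key that
// getRecordWriter(...) above reads back via job.get(OUTPUT_TABLE).
public class HBaseOutputSetup {
  public static void configure(JobConf job, String tableName) {
    job.set(TableOutputFormat.OUTPUT_TABLE, tableName);
  }
}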

Example 9: getRecordWriter

import org.apache.hadoop.mapred.RecordWriter; // import the required package/class
public RecordWriter<Shard, Text> getRecordWriter(final FileSystem fs,
    JobConf job, String name, final Progressable progress)
    throws IOException {

  final Path perm = new Path(getWorkOutputPath(job), name);

  return new RecordWriter<Shard, Text>() {
    public void write(Shard key, Text value) throws IOException {
      assert (IndexUpdateReducer.DONE.equals(value));

      String shardName = key.getDirectory();
      shardName = shardName.replace("/", "_");

      Path doneFile =
          new Path(perm, IndexUpdateReducer.DONE + "_" + shardName);
      if (!fs.exists(doneFile)) {
        fs.createNewFile(doneFile);
      }
    }

    public void close(final Reporter reporter) throws IOException {
    }
  };
}
 
Developer: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines: 25, Source file: IndexUpdateOutputFormat.java

Example 10: getRecordWriter

import org.apache.hadoop.mapred.RecordWriter; // import the required package/class
/** {@inheritDoc} */
public RecordWriter<K, V> getRecordWriter(FileSystem filesystem,
    JobConf job, String name, Progressable progress) throws IOException {

  DBConfiguration dbConf = new DBConfiguration(job);
  String tableName = dbConf.getOutputTableName();
  String[] fieldNames = dbConf.getOutputFieldNames();
  
  try {
    Connection connection = dbConf.getConnection();
    PreparedStatement statement =
        connection.prepareStatement(constructQuery(tableName, fieldNames));
    return new DBRecordWriter(connection, statement);
  } catch (Exception ex) {
    throw new IOException(ex.getMessage());
  }
}
 
Developer: rhli, Project: hadoop-EAR, Lines: 20, Source file: DBOutputFormat.java
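
The constructQuery(tableName, fieldNames) helper is not shown in this excerpt. As a hedged reconstruction, such a helper in Hadoop's DBOutputFormat typically builds a parameterized INSERT statement matching the field list; this sketch is illustrative, not the project's actual source:

// Hedged reconstruction of the usual shape of constructQuery(...).
public class QuerySketch {
  static String constructQuery(String table, String[] fieldNames) {
    if (fieldNames == null || fieldNames.length == 0) {
      throw new IllegalArgumentException("Field names must be provided");
    }
    StringBuilder q = new StringBuilder("INSERT INTO ").append(table)
        .append(" (").append(String.join(",", fieldNames)).append(")")
        .append(" VALUES (");
    for (int i = 0; i < fieldNames.length; i++) {
      q.append(i == 0 ? "?" : ",?");
    }
    return q.append(")").toString();
  }
}

For table "t" and fields {"a", "b"} this yields INSERT INTO t (a,b) VALUES (?,?), whose placeholders the returned DBRecordWriter then fills through the PreparedStatement.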

Example 11: getRecordWriter

import org.apache.hadoop.mapred.RecordWriter; // import the required package/class
public RecordWriter<Text, LWDocumentWritable> getRecordWriter(FileSystem ignored, JobConf job,
    String name, Progressable progress) throws IOException {

  final LucidWorksWriter writer = new LucidWorksWriter(progress);
  writer.open(job, name);

  return new RecordWriter<Text, LWDocumentWritable>() {

    public void close(Reporter reporter) throws IOException {
      writer.close();
    }

    public void write(Text key, LWDocumentWritable doc) throws IOException {
      writer.write(key, doc);
    }
  };
}
 
Developer: lucidworks, Project: solr-hadoop-common, Lines: 18, Source file: LWMapRedOutputFormat.java

Example 12: get

import org.apache.hadoop.mapred.RecordWriter; // import the required package/class
@Override @Nonnull
public List<Processor> get(int count) {
    return processorList = range(0, count).mapToObj(i -> {
        try {
            String uuid = context.jetInstance().getCluster().getLocalMember().getUuid();
            TaskAttemptID taskAttemptID = new TaskAttemptID("jet-node-" + uuid, jobContext.getJobID().getId(),
                    JOB_SETUP, i, 0);
            jobConf.set("mapred.task.id", taskAttemptID.toString());
            jobConf.setInt("mapred.task.partition", i);

            TaskAttemptContextImpl taskAttemptContext = new TaskAttemptContextImpl(jobConf, taskAttemptID);
            @SuppressWarnings("unchecked")
            OutputFormat<K, V> outFormat = jobConf.getOutputFormat();
            RecordWriter<K, V> recordWriter = outFormat.getRecordWriter(
                    null, jobConf, uuid + '-' + valueOf(i), Reporter.NULL);
            return new WriteHdfsP<>(
                    recordWriter, taskAttemptContext, outputCommitter, extractKeyFn, extractValueFn);
        } catch (IOException e) {
            throw new JetException(e);
        }

    }).collect(toList());
}
 
Developer: hazelcast, Project: hazelcast-jet, Lines: 24, Source file: WriteHdfsP.java

Example 13: getRecordWriter

import org.apache.hadoop.mapred.RecordWriter; // import the required package/class
public RecordWriter<WritableComparable, Writable> getRecordWriter(
    final FileSystem fs, JobConf job,
    String name, final Progressable progress) throws IOException {

  final Path segmentDumpFile = new Path(FileOutputFormat.getOutputPath(job), name);

  // Get the old copy out of the way
  if (fs.exists(segmentDumpFile)) fs.delete(segmentDumpFile, true);

  final PrintStream printStream = new PrintStream(fs.create(segmentDumpFile));
  return new RecordWriter<WritableComparable, Writable>() {
    public synchronized void write(WritableComparable key, Writable value) throws IOException {
      printStream.println(value);
    }

    public synchronized void close(Reporter reporter) throws IOException {
      printStream.close();
    }
  };
}
 
Developer: yahoo, Project: anthelion, Lines: 21, Source file: SegmentReader.java

Example 14: getRecordWriter

import org.apache.hadoop.mapred.RecordWriter; // import the required package/class
@Override
@SuppressWarnings("unchecked")
public RecordWriter getRecordWriter(FileSystem ignored,
    JobConf job, String name, Progressable progress) throws IOException {

  // expecting exactly one path

  String tableName = job.get(OUTPUT_TABLE);
  HTable table = null;
  try {
    table = new HTable(HBaseConfiguration.create(job), tableName);
  } catch(IOException e) {
    LOG.error(e);
    throw e;
  }
  table.setAutoFlush(false, true);
  return new TableRecordWriter(table);
}
 
Developer: tenggyut, Project: HIndex, Lines: 19, Source file: TableOutputFormat.java

Example 15: getRecordWriter

import org.apache.hadoop.mapred.RecordWriter; // import the required package/class
@Override
public RecordWriter getRecordWriter(FileSystem fileSystem, JobConf configuration, String s, Progressable progressable) throws IOException {
    String mapName = configuration.get(outputNamedMapProperty);
    Class<CustomSerializer<K>> keySerializerClass = (Class<CustomSerializer<K>>) configuration.getClass(outputNamedMapKeySerializerProperty, null);
    Class<CustomSerializer<V>> valueSerializerClass = (Class<CustomSerializer<V>>) configuration.getClass(outputNamedMapValueSerializerProperty, null);
    int smOrdinal = configuration.getInt(SERIALIZATION_MODE, SerializationMode.DEFAULT.ordinal());
    int amOrdinal = configuration.getInt(AVAILABILITY_MODE, AvailabilityMode.USE_REPLICAS.ordinal());
    SerializationMode serializationMode = SerializationMode.values()[smOrdinal];
    AvailabilityMode availabilityMode = AvailabilityMode.values()[amOrdinal];

    if (mapName == null || mapName.length() == 0 || keySerializerClass == null || valueSerializerClass == null) {
        throw new IOException("Input format is not configured with a valid NamedMap.");
    }

    CustomSerializer<K> keySerializer = ReflectionUtils.newInstance(keySerializerClass, configuration);
    keySerializer.setObjectClass((Class<K>) configuration.getClass(outputNamedMapKeyProperty, null));
    CustomSerializer<V> valueSerializer = ReflectionUtils.newInstance(valueSerializerClass, configuration);
    valueSerializer.setObjectClass((Class<V>) configuration.getClass(outputNamedMapValueProperty, null));
    NamedMap<K, V> namedMap = NamedMapFactory.getMap(mapName, keySerializer, valueSerializer);
    namedMap.setAvailabilityMode(availabilityMode);
    namedMap.setSerializationMode(serializationMode);

    return new NamedMapRecordWriter<K, V>(namedMap);
}
 
Developer: scaleoutsoftware, Project: hServer, Lines: 25, Source file: NamedMapOutputFormatMapred.java


Note: The org.apache.hadoop.mapred.RecordWriter class examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by their authors, and copyright in the source code remains with the original authors; consult the corresponding project's license before distributing or reusing the code. Do not reproduce without permission.