

Java PigSplit.setConf Method Code Examples

This article collects typical usage examples of the Java method org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigSplit.setConf. If you are wondering how to use PigSplit.setConf in Java, or what it looks like in practice, the curated examples below should help. You can also explore further usage examples of the enclosing class, org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigSplit.


The following presents 3 code examples of the PigSplit.setConf method, sorted by popularity.
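
Before the full examples, here is a minimal sketch of the pattern they all share: a PigSplit is given a Configuration via setConf before it is deserialized or handed to a LoadFunc, so that any code consuming the split sees a non-null conf. The PigSplit and FileSplit constructor calls below match those used in the examples; the standalone main wrapper and the Configuration setup are illustrative assumptions, not part of the original code.

import java.util.ArrayList;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigSplit;
import org.apache.pig.impl.plan.OperatorKey;

public class PigSplitSetConfSketch {
    public static void main(String[] args) {
        // Illustrative Configuration; inside Pig this comes from the job context.
        Configuration conf = new Configuration();

        // Wrap a single underlying split. The -1 input index and the empty
        // target-operator list mirror the "dummy split" usage in Example 1.
        InputSplit wrapped = new FileSplit(new Path("path1"), 0, 100,
                new String[] { "l1" });
        PigSplit pigSplit = new PigSplit(new InputSplit[] { wrapped }, -1,
                new ArrayList<OperatorKey>(), -1);

        // Set the conf before the split is consumed (e.g. by a LoadFunc's
        // prepareToRead or by readFields) so it is never null there.
        pigSplit.setConf(conf);
    }
}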

Example 1: initializeReader

import org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigSplit; // import the package/class this method depends on
private boolean initializeReader() throws IOException, InterruptedException {
    if(curSplitIndex > inpSplits.size() - 1) {
        // past the last split, we are done
        return false;
    }
    if(reader != null){
        reader.close();
    }
    InputSplit curSplit = inpSplits.get(curSplitIndex);
    TaskAttemptContext tAContext = HadoopShims.createTaskAttemptContext(conf, 
            new TaskAttemptID());
    reader = inputFormat.createRecordReader(curSplit, tAContext);
    reader.initialize(curSplit, tAContext);
    // create a dummy PigSplit - other than the actual split, the other
    // params are not really needed here, since we are just reading the
    // input completely
    PigSplit pigSplit = new PigSplit(new InputSplit[] {curSplit}, -1, 
            new ArrayList<OperatorKey>(), -1);
    // Set the conf object so that if the wrappedLoadFunc uses it,
    // it won't be null
    pigSplit.setConf(conf);
    wrappedLoadFunc.prepareToRead(reader, pigSplit);
    return true;
}
 
Developer ID: sigmoidanalytics, Project: spork, Lines: 26, Source file: ReadToEndLoader.java

Example 2: test10

import org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigSplit; // import the package/class this method depends on
@Test
public void test10() throws IOException, InterruptedException {
    // verify locations in order
    ArrayList<InputSplit> rawSplits = new ArrayList<InputSplit>();

    rawSplits.add(new FileSplit(new Path("path1"), 0, 100, new String[] {
            "l1", "l2", "l3" }));
    rawSplits.add(new FileSplit(new Path("path2"), 0, 200, new String[] {
            "l3", "l4", "l5" }));
    rawSplits.add(new FileSplit(new Path("path3"), 0, 400, new String[] {
            "l5", "l6", "l1" }));
    List<InputSplit> result = pigInputFormat.getPigSplits(rawSplits, 0, ok,
            null, true, conf);

    Assert.assertEquals(1, result.size());

    for (InputSplit split : result) {
        PigSplit pigSplit = (PigSplit) split;
        // write to a byte array output stream
        ByteArrayOutputStream outputStream = new ByteArrayOutputStream();

        DataOutput out = new DataOutputStream(outputStream);
        pigSplit.write(out);
        // restore the pig split from the byte array
        ByteArrayInputStream inputStream = new ByteArrayInputStream(
                outputStream.toByteArray());

        DataInput in = new DataInputStream(inputStream);
        PigSplit anotherSplit = new PigSplit();
        anotherSplit.setConf(conf);

        anotherSplit.readFields(in);

        Assert.assertEquals(700, anotherSplit.getLength());
        checkLocationOrdering(pigSplit.getLocations(), new String[] { "l5",
                "l1", "l6", "l3", "l4" });

        Assert.assertEquals(3, anotherSplit.getNumPaths());

        Assert.assertEquals(
                "org.apache.hadoop.mapreduce.lib.input.FileSplit",
                (anotherSplit.getWrappedSplit(0).getClass().getName()));
        Assert.assertEquals(
                "org.apache.hadoop.mapreduce.lib.input.FileSplit",
                (anotherSplit.getWrappedSplit(1).getClass().getName()));
        Assert.assertEquals(
                "org.apache.hadoop.mapreduce.lib.input.FileSplit",
                (anotherSplit.getWrappedSplit(2).getClass().getName()));
    }
}
 
Developer ID: sigmoidanalytics, Project: spork, Lines: 51, Source file: TestSplitCombine.java
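
Note the ordering in test10 (and in test11 below): setConf is called on the freshly constructed PigSplit before readFields. This appears to be required because deserializing the wrapped splits needs a live Configuration (for instance, to resolve the wrapped split classes); reading the fields of a PigSplit that has no conf set would fail.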

Example 3: test11

import org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigSplit; // import the package/class this method depends on
@Test
public void test11() throws IOException, InterruptedException {
    // verify locations in order
    ArrayList<InputSplit> rawSplits = new ArrayList<InputSplit>();

    // first split is a ParquetInputSplit
    rawSplits.add(new ParquetInputSplit(new Path("path1"), 0, 100,
            new String[] { "l1", "l2", "l3" },
            new ArrayList<BlockMetaData>(), "", "",
            new HashMap<String, String>(), new HashMap<String, String>()));
    // second split is a FileSplit
    rawSplits.add(new FileSplit(new Path("path2"), 0, 400, new String[] {
            "l5", "l6", "l1" }));

    List<InputSplit> result = pigInputFormat.getPigSplits(rawSplits, 0, ok,
            null, true, conf);

    // Pig combines the two raw splits into one PigSplit
    Assert.assertEquals(1, result.size());

    for (InputSplit split : result) {
        PigSplit pigSplit = (PigSplit) split;

        // write to a byte array output stream
        ByteArrayOutputStream outputStream = new ByteArrayOutputStream();

        DataOutput out = new DataOutputStream(outputStream);
        pigSplit.write(out);
        // restore the pig split from the byte array
        ByteArrayInputStream inputStream = new ByteArrayInputStream(
                outputStream.toByteArray());

        DataInput in = new DataInputStream(inputStream);
        PigSplit anotherSplit = new PigSplit();
        anotherSplit.setConf(conf);
        anotherSplit.readFields(in);

        Assert.assertEquals(500, anotherSplit.getLength());

        Assert.assertEquals(2, anotherSplit.getNumPaths());
        Assert.assertEquals("parquet.hadoop.ParquetInputSplit",
                (anotherSplit.getWrappedSplit(0).getClass().getName()));
        Assert.assertEquals(
                "org.apache.hadoop.mapreduce.lib.input.FileSplit",
                (anotherSplit.getWrappedSplit(1).getClass().getName()));
    }
}
 
Developer ID: sigmoidanalytics, Project: spork, Lines: 48, Source file: TestSplitCombine.java


Note: The org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigSplit.setConf examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets are drawn from open-source projects contributed by their authors; copyright of the source code remains with the original authors. For distribution and use, refer to the corresponding project's License. Do not reproduce without permission.