

Java IOManager.shutdown Method Code Examples

This article collects typical usage examples of the Java method org.apache.flink.runtime.io.disk.iomanager.IOManager.shutdown. If you are wondering how exactly IOManager.shutdown is used, what it does, or where to find examples of it, the hand-picked code examples below may help. You can also explore further usage examples of the enclosing class, org.apache.flink.runtime.io.disk.iomanager.IOManager.


The following presents 8 code examples of the IOManager.shutdown method, sorted by popularity by default. You can upvote the examples you find useful; your feedback helps the system recommend better Java code examples.
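
All of the examples below follow the same pattern: an IOManager (usually an IOManagerAsync) is created, used for channel I/O, and then released with shutdown() in a finally block so that its reader/writer threads stop and temporary spill files are cleaned up even when the test fails. As a minimal sketch of that pattern (not taken from any of the examples; the class name and the commented-out write step are illustrative assumptions, while the IOManager calls are the same ones used in the examples below):

import org.apache.flink.core.memory.MemorySegment;
import org.apache.flink.runtime.io.disk.iomanager.BlockChannelWriter;
import org.apache.flink.runtime.io.disk.iomanager.FileIOChannel;
import org.apache.flink.runtime.io.disk.iomanager.IOManager;
import org.apache.flink.runtime.io.disk.iomanager.IOManagerAsync;

public class IOManagerShutdownSketch {

    public static void main(String[] args) throws Exception {
        // Create the asynchronous I/O manager that backs Flink's spilling channels.
        IOManager ioManager = new IOManagerAsync();
        try {
            // Create a temp-file channel and a block writer on it, as several examples below do.
            FileIOChannel.ID channel = ioManager.createChannel();
            BlockChannelWriter<MemorySegment> writer = ioManager.createBlockChannelWriter(channel);

            // ... write MemorySegments via writer.writeBlock(...) ...

            // Close the writer and delete its backing temp file.
            writer.closeAndDelete();
        } finally {
            // Always shut the IOManager down so its reader/writer threads stop
            // and any remaining temporary files are removed.
            ioManager.shutdown();
        }
    }
}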

Example 1: testAddOnSpilledPartitionWithSlowWriter

import org.apache.flink.runtime.io.disk.iomanager.IOManager; // import the package/class that this method depends on
/**
 * Tests {@link SpillableSubpartition#add(Buffer)} with a spilled partition with a slow writer
 * that does not do any write, to check for correct buffer recycling.
 */
@Test
public void testAddOnSpilledPartitionWithSlowWriter() throws Exception {
    // simulate slow writer by a no-op write operation
    IOManager ioManager = new IOManagerAsyncWithNoOpBufferFileWriter();
    SpillableSubpartition partition = createSubpartition(ioManager);
    assertEquals(0, partition.releaseMemory());

    Buffer buffer = TestBufferFactory.createBuffer(4096, 4096);
    boolean bufferRecycled;
    try {
        partition.add(buffer);
    } finally {
        ioManager.shutdown();
        bufferRecycled = buffer.isRecycled();
        if (!bufferRecycled) {
            buffer.recycleBuffer();
        }
    }
    if (bufferRecycled) {
        Assert.fail("buffer recycled before the write operation completed");
    }
    assertEquals(1, partition.getTotalNumberOfBuffers());
    assertEquals(4096, partition.getTotalNumberOfBytes());
}
 
Developer ID: axbaretto, Project: flink, Lines of code: 29, Source file: SpillableSubpartitionTest.java

Example 2: testAddOnSpilledPartitionWithFailingWriter

import org.apache.flink.runtime.io.disk.iomanager.IOManager; // import the package/class that this method depends on
/**
 * Tests {@link SpillableSubpartition#add(Buffer)} with a spilled partition where adding the
 * write request fails with an exception.
 */
@Test
public void testAddOnSpilledPartitionWithFailingWriter() throws Exception {
    IOManager ioManager = new IOManagerAsyncWithClosedBufferFileWriter();
    SpillableSubpartition partition = createSubpartition(ioManager);
    assertEquals(0, partition.releaseMemory());

    exception.expect(IOException.class);

    Buffer buffer = TestBufferFactory.createBuffer(4096, 4096);
    boolean bufferRecycled;
    try {
        partition.add(buffer);
    } finally {
        ioManager.shutdown();
        bufferRecycled = buffer.isRecycled();
        if (!bufferRecycled) {
            buffer.recycleBuffer();
        }
    }
    if (!bufferRecycled) {
        Assert.fail("buffer not recycled");
    }
    assertEquals(0, partition.getTotalNumberOfBuffers());
    assertEquals(0, partition.getTotalNumberOfBytes());
}
 
Developer ID: axbaretto, Project: flink, Lines of code: 30, Source file: SpillableSubpartitionTest.java

Example 3: testSpillingFreesOnlyOverflowSegments

import org.apache.flink.runtime.io.disk.iomanager.IOManager; // import the package/class that this method depends on
/**
 * This tests the case where no additional partition buffers are used at the point when spilling
 * is triggered, testing that overflow bucket buffers are taken into account when deciding which
 * partition to spill.
 */
@Test
public void testSpillingFreesOnlyOverflowSegments() {
    final IOManager ioMan = new IOManagerAsync();
    
    final TypeSerializer<ByteValue> serializer = ByteValueSerializer.INSTANCE;
    final TypeComparator<ByteValue> buildComparator = new ValueComparator<>(true, ByteValue.class);
    final TypeComparator<ByteValue> probeComparator = new ValueComparator<>(true, ByteValue.class);
    
    @SuppressWarnings("unchecked")
    final TypePairComparator<ByteValue, ByteValue> pairComparator = Mockito.mock(TypePairComparator.class);
    
    try {
        final int pageSize = 32*1024;
        final int numSegments = 34;

        List<MemorySegment> memory = getMemory(numSegments, pageSize);

        MutableHashTable<ByteValue, ByteValue> table = new MutableHashTable<>(
                serializer, serializer, buildComparator, probeComparator,
                pairComparator, memory, ioMan, 1, false);

        table.open(new ByteValueIterator(100000000), new ByteValueIterator(1));
        
        table.close();
        
        checkNoTempFilesRemain(ioMan);
    }
    catch (Exception e) {
        e.printStackTrace();
        fail(e.getMessage());
    }
    finally {
        ioMan.shutdown();
    }
}
 
Developer ID: axbaretto, Project: flink, Lines of code: 41, Source file: HashTableTest.java

Example 4: testCloseAndDeleteOutputView

import org.apache.flink.runtime.io.disk.iomanager.IOManager; // import the package/class that this method depends on
@Test
public void testCloseAndDeleteOutputView() {
    final IOManager ioManager = new IOManagerAsync();
    try {
        MemoryManager memMan = new MemoryManager(4 * 16*1024, 1, 16*1024, MemoryType.HEAP, true);
        List<MemorySegment> memory = new ArrayList<MemorySegment>();
        memMan.allocatePages(new DummyInvokable(), memory, 4);
        
        FileIOChannel.ID channel = ioManager.createChannel();
        BlockChannelWriter<MemorySegment> writer = ioManager.createBlockChannelWriter(channel);
        
        FileChannelOutputView out = new FileChannelOutputView(writer, memMan, memory, memMan.getPageSize());
        new StringValue("Some test text").write(out);
        
        // close for the first time, make sure all memory returns
        out.close();
        assertTrue(memMan.verifyEmpty());
        
        // close again, should not cause an exception
        out.close();
        
        // delete, make sure file is removed
        out.closeAndDelete();
        assertFalse(new File(channel.getPath()).exists());
    }
    catch (Exception e) {
        e.printStackTrace();
        fail(e.getMessage());
    }
    finally {
        ioManager.shutdown();
    }
}
 
Developer ID: axbaretto, Project: flink, Lines of code: 34, Source file: FileChannelStreamsTest.java

Example 5: testCloseAndDeleteInputView

import org.apache.flink.runtime.io.disk.iomanager.IOManager; // import the package/class that this method depends on
@Test
public void testCloseAndDeleteInputView() {
    final IOManager ioManager = new IOManagerAsync();
    try {
        MemoryManager memMan = new MemoryManager(4 * 16*1024, 1, 16*1024, MemoryType.HEAP, true);
        List<MemorySegment> memory = new ArrayList<MemorySegment>();
        memMan.allocatePages(new DummyInvokable(), memory, 4);
        
        FileIOChannel.ID channel = ioManager.createChannel();
        
        // add some test data
        try (FileWriter wrt = new FileWriter(channel.getPath())) {
            wrt.write("test data");
        }
        
        BlockChannelReader<MemorySegment> reader = ioManager.createBlockChannelReader(channel);
        FileChannelInputView in = new FileChannelInputView(reader, memMan, memory, 9);
        
        // read just something
        in.readInt();
        
        // close for the first time, make sure all memory returns
        in.close();
        assertTrue(memMan.verifyEmpty());
        
        // close again, should not cause an exception
        in.close();
        
        // delete, make sure file is removed
        in.closeAndDelete();
        assertFalse(new File(channel.getPath()).exists());
    }
    catch (Exception e) {
        e.printStackTrace();
        fail(e.getMessage());
    }
    finally {
        ioManager.shutdown();
    }
}
 
Developer ID: axbaretto, Project: flink, Lines of code: 41, Source file: FileChannelStreamsTest.java

Example 6: testSpillingWhenBuildingTableWithoutOverflow

import org.apache.flink.runtime.io.disk.iomanager.IOManager; // import the package/class that this method depends on
/**
 * Tests that the MutableHashTable spills its partitions when creating the initial table
 * without overflow segments in the partitions. This means that the records are large.
 */
@Test
public void testSpillingWhenBuildingTableWithoutOverflow() throws Exception {
    final IOManager ioMan = new IOManagerAsync();

    try {
        final TypeSerializer<byte[]> serializer = BytePrimitiveArraySerializer.INSTANCE;
        final TypeComparator<byte[]> buildComparator = new BytePrimitiveArrayComparator(true);
        final TypeComparator<byte[]> probeComparator = new BytePrimitiveArrayComparator(true);

        @SuppressWarnings("unchecked") final TypePairComparator<byte[], byte[]> pairComparator =
            new GenericPairComparator<>(
                new BytePrimitiveArrayComparator(true), new BytePrimitiveArrayComparator(true));

        final int pageSize = 128;
        final int numSegments = 33;

        List<MemorySegment> memory = getMemory(numSegments, pageSize);

        MutableHashTable<byte[], byte[]> table = new MutableHashTable<byte[], byte[]>(
            serializer,
            serializer,
            buildComparator,
            probeComparator,
            pairComparator,
            memory,
            ioMan,
            1,
            false);

        int numElements = 9;

        table.open(
            new CombiningIterator<byte[]>(
                new ByteArrayIterator(numElements, 128, (byte) 0),
                new ByteArrayIterator(numElements, 128, (byte) 1)),
            new CombiningIterator<byte[]>(
                new ByteArrayIterator(1, 128, (byte) 0),
                new ByteArrayIterator(1, 128, (byte) 1)));

        while (table.nextRecord()) {
            MutableObjectIterator<byte[]> iterator = table.getBuildSideIterator();

            int counter = 0;

            while (iterator.next() != null) {
                counter++;
            }

            // check that we retrieve all our elements
            Assert.assertEquals(numElements, counter);
        }

        table.close();
    } finally {
        ioMan.shutdown();
    }
}
 
Developer ID: axbaretto, Project: flink, Lines of code: 62, Source file: HashTableTest.java

Example 7: testReleaseOnSpillablePartitionWithSlowWriter

import org.apache.flink.runtime.io.disk.iomanager.IOManager; // import the package/class that this method depends on
/**
 * Tests {@link SpillableSubpartition#releaseMemory()} with a spillable partition which has a
 * writer that does not do any write, to check for correct buffer recycling.
 */
private void testReleaseOnSpillablePartitionWithSlowWriter(boolean createView) throws Exception {
    // simulate slow writer by a no-op write operation
    IOManager ioManager = new IOManagerAsyncWithNoOpBufferFileWriter();
    SpillableSubpartition partition = createSubpartition(ioManager);

    Buffer buffer1 = TestBufferFactory.createBuffer(4096, 4096);
    Buffer buffer2 = TestBufferFactory.createBuffer(4096, 4096);
    try {
        // we need two buffers because the view will use one of them and not release it
        partition.add(buffer1);
        partition.add(buffer2);
        assertFalse("buffer1 should not be recycled (still in the queue)", buffer1.isRecycled());
        assertFalse("buffer2 should not be recycled (still in the queue)", buffer2.isRecycled());
        assertEquals(2, partition.getTotalNumberOfBuffers());
        assertEquals(4096 * 2, partition.getTotalNumberOfBytes());

        if (createView) {
            // Create a read view
            partition.finish();
            partition.createReadView(numBuffers -> {});
        }

        // one instance of the buffers is placed in the view's nextBuffer and not released
        // (if there is no view, there will be no additional EndOfPartitionEvent)
        assertEquals(2, partition.releaseMemory());
        assertFalse("buffer1 should not be recycled (advertised as nextBuffer)", buffer1.isRecycled());
        assertFalse("buffer2 should not be recycled (not written yet)", buffer2.isRecycled());
    } finally {
        ioManager.shutdown();
        if (!buffer1.isRecycled()) {
            buffer1.recycleBuffer();
        }
        if (!buffer2.isRecycled()) {
            buffer2.recycleBuffer();
        }
    }
    // note: a view requires a finished partition which has an additional EndOfPartitionEvent
    assertEquals(2 + (createView ? 1 : 0), partition.getTotalNumberOfBuffers());
    assertEquals(4096 * 2 + (createView ? 4 : 0), partition.getTotalNumberOfBytes());
}
 
Developer ID: axbaretto, Project: flink, Lines of code: 45, Source file: SpillableSubpartitionTest.java

Example 8: testWithTwoChannelsAndRandomBarriers

import org.apache.flink.runtime.io.disk.iomanager.IOManager; // import the package/class that this method depends on
@Test
public void testWithTwoChannelsAndRandomBarriers() {
    IOManager ioMan = null;
    NetworkBufferPool networkBufferPool1 = null;
    NetworkBufferPool networkBufferPool2 = null;
    try {
        ioMan = new IOManagerAsync();

        networkBufferPool1 = new NetworkBufferPool(100, PAGE_SIZE);
        networkBufferPool2 = new NetworkBufferPool(100, PAGE_SIZE);
        BufferPool pool1 = networkBufferPool1.createBufferPool(100, 100);
        BufferPool pool2 = networkBufferPool2.createBufferPool(100, 100);

        RandomGeneratingInputGate myIG = new RandomGeneratingInputGate(
                new BufferPool[] { pool1, pool2 },
                new BarrierGenerator[] { new CountBarrier(100000), new RandomBarrier(100000) });

        BarrierBuffer barrierBuffer = new BarrierBuffer(myIG, ioMan);

        for (int i = 0; i < 2000000; i++) {
            BufferOrEvent boe = barrierBuffer.getNextNonBlocked();
            if (boe.isBuffer()) {
                boe.getBuffer().recycleBuffer();
            }
        }
    }
    catch (Exception e) {
        e.printStackTrace();
        fail(e.getMessage());
    }
    finally {
        if (ioMan != null) {
            ioMan.shutdown();
        }
        if (networkBufferPool1 != null) {
            networkBufferPool1.destroyAllBufferPools();
            networkBufferPool1.destroy();
        }
        if (networkBufferPool2 != null) {
            networkBufferPool2.destroyAllBufferPools();
            networkBufferPool2.destroy();
        }
    }
}
 
Developer ID: axbaretto, Project: flink, Lines of code: 45, Source file: BarrierBufferMassiveRandomTest.java


Note: The org.apache.flink.runtime.io.disk.iomanager.IOManager.shutdown method examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets are taken from open-source projects contributed by various developers; copyright of the source code belongs to the original authors. Please refer to the corresponding project's License for distribution and use; do not reproduce without permission.