

Java FSWriteError Class Code Examples

This article collects typical usage examples of the Java class org.apache.cassandra.io.FSWriteError. If you are wondering what the FSWriteError class is for, how to use it, or what real-world usage looks like, the curated code examples below may help.


The FSWriteError class belongs to the org.apache.cassandra.io package. Fifteen code examples of the FSWriteError class are shown below, sorted by popularity.
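
The common thread across all of the examples is a single idiom: code that writes to the file system catches IOException and rethrows it as FSWriteError, passing the underlying cause together with the path (a String or a File) of the file being written. The minimal sketch below illustrates that idiom; the class name, method, and path handling are hypothetical and only serve to show the pattern, they are not taken from any of the projects quoted below.

import org.apache.cassandra.io.FSWriteError;

import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class FSWriteErrorDemo
{
    /**
     * Hypothetical helper: writes one line to the given path and wraps any
     * IOException in an FSWriteError so callers see a file-system write
     * failure that carries the offending path.
     */
    public static void writeLine(String path, String line)
    {
        try (BufferedWriter writer = Files.newBufferedWriter(Paths.get(path), StandardCharsets.UTF_8))
        {
            writer.write(line);
            writer.newLine();
        }
        catch (IOException e)
        {
            // FSWriteError keeps the original cause and records which path failed
            throw new FSWriteError(e, path);
        }
    }
}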

Example 1: appendTOC

import org.apache.cassandra.io.FSWriteError; // import the required package/class
/**
 * Appends new component names to the TOC component.
 */
protected static void appendTOC(Descriptor descriptor, Collection<Component> components)
{
    String tocFile = descriptor.filenameFor(Component.TOC);

    try (BufferedWriter bufferedWriter = HadoopFileUtils.newBufferedWriter(descriptor.filenameFor(Component.TOC),
                                                                           Charsets.UTF_8,
                                                                           descriptor.getConfiguration()))
    {
        for (Component component : components)
            bufferedWriter.write(component.name + "\n");
    }
    catch (IOException e)
    {
        throw new FSWriteError(e, tocFile);
    }
}
 
Developer: Netflix, Project: sstable-adaptor, Lines: 20, Source: SSTable.java

Example 2: append

import org.apache.cassandra.io.FSWriteError; // import the required package/class
public void append(DecoratedKey key, RowIndexEntry indexEntry, long dataEnd, ByteBuffer indexInfo) throws IOException
{
    bf.add(key);
    long indexStart = indexFile.position();
    try
    {
        ByteBufferUtil.writeWithShortLength(key.getKey(), indexFile);
        rowIndexEntrySerializer.serialize(indexEntry, indexFile, indexInfo);
    }
    catch (IOException e)
    {
        throw new FSWriteError(e, indexFile.getPath());
    }
    long indexEnd = indexFile.position();

    if (logger.isTraceEnabled())
        logger.trace("wrote index entry: {} at {}", indexEntry, indexStart);

    summary.maybeAddEntry(key, indexStart, indexEnd, dataEnd);
}
 
Developer: Netflix, Project: sstable-adaptor, Lines: 21, Source: BigTableWriter.java

Example 3: flushBf

import org.apache.cassandra.io.FSWriteError; // import the required package/class
/**
 * Flushes the bloom filter to its on-disk FILTER component, if that component is present.
 */
void flushBf()
{
    if (components.contains(Component.FILTER))
    {
        String path = descriptor.filenameFor(Component.FILTER);

        try (HadoopFileUtils.HadoopFileChannel hos = HadoopFileUtils.newFilesystemChannel(path,
                                                                              descriptor.getConfiguration());
             DataOutputStreamPlus stream = new BufferedDataOutputStreamPlus(hos))
        {
            // bloom filter
            FilterFactory.serialize(bf, stream);
            stream.flush();
            //SyncUtil.sync(hos);
        }
        catch (IOException e)
        {
            logger.info(e.getMessage());
            throw new FSWriteError(e, path);
        }
    }
}
 
Developer: Netflix, Project: sstable-adaptor, Lines: 26, Source: BigTableWriter.java

Example 4: writeHeader

import org.apache.cassandra.io.FSWriteError; // import the required package/class
private void writeHeader(DataOutput out, long dataLength, int chunks)
{
    try
    {
        out.writeUTF(parameters.getSstableCompressor().getClass().getSimpleName());
        out.writeInt(parameters.getOtherOptions().size());
        for (Map.Entry<String, String> entry : parameters.getOtherOptions().entrySet())
        {
            out.writeUTF(entry.getKey());
            out.writeUTF(entry.getValue());
        }

        // store the length of the chunk
        out.writeInt(parameters.chunkLength());
        // store position and reserve a place for uncompressed data length and chunks count
        out.writeLong(dataLength);
        out.writeInt(chunks);
    }
    catch (IOException e)
    {
        throw new FSWriteError(e, filePath);
    }
}
 
Developer: Netflix, Project: sstable-adaptor, Lines: 24, Source: CompressionMetadata.java
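
The writeHeader method above fixes a simple on-disk layout for the compression metadata: the compressor's simple class name, the number of compressor options followed by each key/value pair, the chunk length, the uncompressed data length, and the chunk count. As a hypothetical illustration (not taken from the quoted project), a reader of that header would simply mirror each write call with the matching read call:

import java.io.DataInput;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: mirrors the write calls made by writeHeader above.
public final class CompressionHeaderSketch
{
    public static void readHeader(DataInput in) throws IOException
    {
        String compressorClassName = in.readUTF();   // compressor class simple name
        int optionCount = in.readInt();              // number of compressor options
        Map<String, String> options = new HashMap<>();
        for (int i = 0; i < optionCount; i++)
            options.put(in.readUTF(), in.readUTF()); // option key/value pairs

        int chunkLength = in.readInt();              // length of each compressed chunk
        long dataLength = in.readLong();             // total uncompressed data length
        int chunkCount = in.readInt();               // number of chunks that follow
        // ... these values would then be used to rebuild the compression parameters
    }
}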

Example 5: createHardLink

import org.apache.cassandra.io.FSWriteError; // import the required package/class
public static void createHardLink(File from, File to)
{
    if (to.exists())
        throw new RuntimeException("Tried to create duplicate hard link to " + to);
    if (!from.exists())
        throw new RuntimeException("Tried to hard link to file that does not exist " + from);

    try
    {
        Files.createLink(to.toPath(), from.toPath());
    }
    catch (IOException e)
    {
        throw new FSWriteError(e, to);
    }
}
 
Developer: Netflix, Project: sstable-adaptor, Lines: 17, Source: FileUtils.java

Example 6: flushData

import org.apache.cassandra.io.FSWriteError; // import the required package/class
/**
 * Override this method instead of overriding flush()
 * @throws FSWriteError on any I/O error.
 */
protected void flushData()
{
    try
    {
        buffer.flip();
        channel.write(buffer);
        lastFlushOffset += buffer.position();
    }
    catch (IOException e)
    {
        throw new FSWriteError(e, getPath());
    }
    if (runPostFlush != null)
        runPostFlush.run();
}
 
Developer: Netflix, Project: sstable-adaptor, Lines: 20, Source: SequentialWriter.java

Example 7: dumpInterArrivalTimes

import org.apache.cassandra.io.FSWriteError; // import the required package/class
/**
 * Dump the inter arrival times for examination if necessary.
 */
public void dumpInterArrivalTimes()
{
    File file = FileUtils.createTempFile("failuredetector-", ".dat");

    OutputStream os = null;
    try
    {
        os = new BufferedOutputStream(new FileOutputStream(file, true));
        os.write(toString().getBytes());
    }
    catch (IOException e)
    {
        throw new FSWriteError(e, file);
    }
    finally
    {
        FileUtils.closeQuietly(os);
    }
}
 
Developer: vcostet, Project: cassandra-kmean, Lines: 23, Source: FailureDetector.java

Example 8: recycle

import org.apache.cassandra.io.FSWriteError; // import the required package/class
/**
 * Recycle processes an unneeded segment file for reuse.
 *
 * @return a new CommitLogSegment representing the newly reusable segment.
 */
CommitLogSegment recycle()
{
    try
    {
        sync();
    }
    catch (FSWriteError e)
    {
        logger.error("I/O error flushing {} {}", this, e.getMessage());
        throw e;
    }

    close();

    return new CommitLogSegment(getPath());
}
 
Developer: vcostet, Project: cassandra-kmean, Lines: 22, Source: CommitLogSegment.java

Example 9: writeSnapshotManifest

import org.apache.cassandra.io.FSWriteError; // import the required package/class
private void writeSnapshotManifest(final JSONArray filesJSONArr, final String snapshotName)
{
    final File manifestFile = directories.getSnapshotManifestFile(snapshotName);
    final JSONObject manifestJSON = new JSONObject();
    manifestJSON.put("files", filesJSONArr);

    try
    {
        if (!manifestFile.getParentFile().exists())
            manifestFile.getParentFile().mkdirs();
        PrintStream out = new PrintStream(manifestFile);
        out.println(manifestJSON.toJSONString());
        out.close();
    }
    catch (IOException e)
    {
        throw new FSWriteError(e, manifestFile);
    }
}
 
Developer: vcostet, Project: cassandra-kmean, Lines: 20, Source: ColumnFamilyStore.java

Example 10: append

import org.apache.cassandra.io.FSWriteError; // import the required package/class
/**
 * @param row
 * @return null if the row was compacted away entirely; otherwise, the PK index entry for this row
 */
public RowIndexEntry append(AbstractCompactedRow row)
{
    long startPosition = beforeAppend(row.key);
    RowIndexEntry entry;
    try
    {
        entry = row.write(startPosition, dataFile.stream);
        if (entry == null)
            return null;
    }
    catch (IOException e)
    {
        throw new FSWriteError(e, dataFile.getPath());
    }
    long endPosition = dataFile.getFilePointer();
    sstableMetadataCollector.update(endPosition - startPosition, row.columnStats());
    afterAppend(row.key, endPosition, entry);
    return entry;
}
 
Developer: vcostet, Project: cassandra-kmean, Lines: 24, Source: SSTableWriter.java

Example 11: writeMetadata

import org.apache.cassandra.io.FSWriteError; // import the required package/class
private static void writeMetadata(Descriptor desc, Map<MetadataType, MetadataComponent> components)
{
    SequentialWriter out = SequentialWriter.open(new File(desc.filenameFor(Component.STATS)));
    try
    {
        desc.getMetadataSerializer().serialize(components, out.stream);
    }
    catch (IOException e)
    {
        throw new FSWriteError(e, out.getPath());
    }
    finally
    {
        out.close();
    }
}
 
Developer: vcostet, Project: cassandra-kmean, Lines: 17, Source: SSTableWriter.java

Example 12: appendTOC

import org.apache.cassandra.io.FSWriteError; // import the required package/class
/**
 * Appends new component names to the TOC component.
 */
protected static void appendTOC(Descriptor descriptor, Collection<Component> components)
{
    File tocFile = new File(descriptor.filenameFor(Component.TOC));
    PrintWriter w = null;
    try
    {
        w = new PrintWriter(new FileWriter(tocFile, true));
        for (Component component : components)
            w.println(component.name);
    }
    catch (IOException e)
    {
        throw new FSWriteError(e, tocFile);
    }
    finally
    {
        FileUtils.closeQuietly(w);
    }
}
 
Developer: vcostet, Project: cassandra-kmean, Lines: 23, Source: SSTable.java

Example 13: writeHeader

import org.apache.cassandra.io.FSWriteError; // import the required package/class
private void writeHeader(DataOutput out, long dataLength, int chunks)
{
    try
    {
        out.writeUTF(parameters.sstableCompressor.getClass().getSimpleName());
        out.writeInt(parameters.otherOptions.size());
        for (Map.Entry<String, String> entry : parameters.otherOptions.entrySet())
        {
            out.writeUTF(entry.getKey());
            out.writeUTF(entry.getValue());
        }

        // store the length of the chunk
        out.writeInt(parameters.chunkLength());
        // store position and reserve a place for uncompressed data length and chunks count
        out.writeLong(dataLength);
        out.writeInt(chunks);
    }
    catch (IOException e)
    {
        throw new FSWriteError(e, filePath);
    }
}
 
Developer: vcostet, Project: cassandra-kmean, Lines: 24, Source: CompressionMetadata.java

Example 14: close

import org.apache.cassandra.io.FSWriteError; // import the required package/class
@Override
public void close()
{
    if (buffer == null)
        return; // already closed

    super.close();
    sstableMetadataCollector.addCompressionRatio(compressedSize, originalSize);
    try
    {
        metadataWriter.close(current, chunkCount);
    }
    catch (IOException e)
    {
        throw new FSWriteError(e, getPath());
    }
}
 
Developer: vcostet, Project: cassandra-kmean, Lines: 18, Source: CompressedSequentialWriter.java

Example 15: writeFullChecksum

import org.apache.cassandra.io.FSWriteError; // import the required package/class
public void writeFullChecksum(Descriptor descriptor)
{
    File outFile = new File(descriptor.filenameFor(Component.DIGEST));
    BufferedWriter out = null;
    try
    {
        out = Files.newBufferedWriter(outFile.toPath(), Charsets.UTF_8);
        out.write(String.valueOf(fullChecksum.getValue()));
    }
    catch (IOException e)
    {
        throw new FSWriteError(e, outFile);
    }
    finally
    {
        FileUtils.closeQuietly(out);
    }
}
 
Developer: vcostet, Project: cassandra-kmean, Lines: 19, Source: DataIntegrityMetadata.java


Note: The org.apache.cassandra.io.FSWriteError examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets are taken from open-source projects contributed by their respective authors; copyright remains with the original authors, and distribution or reuse should follow the corresponding project's license. Please do not republish without permission.