

Java Upload Class Code Examples

This article collects typical usage examples of the Java class com.amazonaws.services.s3.transfer.Upload. If you are wondering what the Upload class does, or how to use it in practice, the curated examples below may help.


The Upload class belongs to the com.amazonaws.services.s3.transfer package. The sections below present 15 code examples of the Upload class, sorted by popularity by default. You can upvote the examples you find useful; your feedback helps recommend better Java code examples.

Example 1: uploadToS3

import com.amazonaws.services.s3.transfer.Upload; // import the required package/class
private String uploadToS3(String bucket, String key, MultipartFile file) {
    final AmazonS3 s3 = s3ClientFactory.createClient();
    final TransferManager transferManager = TransferManagerBuilder.standard().withS3Client(s3).build();
    try {
        ObjectMetadata metadata = new ObjectMetadata();
        metadata.setContentLength(file.getSize());
        metadata.setContentType(file.getContentType());

        byte[] resultByte = DigestUtils.md5(file.getBytes());
        String streamMD5 = new String(Base64.encodeBase64(resultByte));
        metadata.setContentMD5(streamMD5);

        Upload upload = transferManager.upload(bucket, key, file.getInputStream(), metadata);
        upload.waitForCompletion();
        return streamMD5;
    } catch (AmazonServiceException | InterruptedException | IOException e) {
        logger.error("Error uploading file: {}", e.toString());
        return null;
    } finally {
        transferManager.shutdownNow();
    }
}
 
Author: grassrootza, Project: grassroot-platform, Lines: 23, Source: StorageBrokerImpl.java
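The streamMD5 value in Example 1 is the Base64-encoded MD5 digest that S3 checks against the Content-MD5 header to verify upload integrity. As a minimal, dependency-free sketch of the same computation using only the JDK (java.security.MessageDigest and java.util.Base64 instead of commons-codec) — the class and method names here are illustrative, not from the original project:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

public class ContentMd5 {
    // Compute the Base64-encoded MD5 digest of a byte array,
    // the format S3 expects in the Content-MD5 header.
    public static String md5Base64(byte[] content) {
        try {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            return Base64.getEncoder().encodeToString(md5.digest(content));
        } catch (NoSuchAlgorithmException e) {
            // Every conforming JRE ships an MD5 implementation
            throw new AssertionError(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(md5Base64("hello".getBytes(StandardCharsets.UTF_8)));
    }
}
```

Because an MD5 digest is always 16 bytes, the Base64 form is always 24 characters; S3 rejects the PUT with a BadDigest error if the body does not match the declared digest.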

Example 2: asynchronousAction

import com.amazonaws.services.s3.transfer.Upload; // import the required package/class
/**
 * Uses the {@link TransferManager} to upload a file.
 *
 * @param objectToActOn The put request.
 * @return The object key in the bucket.
 */
@Override
protected String asynchronousAction(PutObjectRequest objectToActOn) {
	String returnValue;

	try {
		Upload upload = s3TransferManager.upload(objectToActOn);
		returnValue = upload.waitForUploadResult().getKey();
	} catch (InterruptedException exception) {
		Thread.currentThread().interrupt();
		throw new UncheckedInterruptedException(exception);
	}

	API_LOG.info("Successfully wrote object {} to S3 bucket {}", returnValue, objectToActOn.getBucketName());

	return returnValue;
}
 
Author: CMSgov, Project: qpp-conversion-tool, Lines: 23, Source: StorageServiceImpl.java

Example 3: testS3Persister

import com.amazonaws.services.s3.transfer.Upload; // import the required package/class
@Test
public void testS3Persister() throws Exception {
    Upload upload = mock(Upload.class);
    when(upload.waitForUploadResult()).thenReturn(new UploadResult());
    TransferManager transferManager = mock(TransferManager.class);
    when(transferManager.upload(anyString(), anyString(), any())).thenReturn(upload);

    S3Persister s3Persister = new S3Persister(transferManager, "foo");
    s3Persister.saveMetrics("foo", "bar");

    verify(transferManager, times(1)).upload(anyString(), anyString(), any());
    verify(transferManager, times(1)).shutdownNow();
    verifyNoMoreInteractions(transferManager);
    verify(upload, times(1)).waitForCompletion();
    verifyNoMoreInteractions(upload);
    assertFalse(new File("foo").exists());
}
 
Author: awslabs, Project: emr-workload-profiler, Lines: 18, Source: S3PersisterTest.java

Example 4: updateSnapshotIndex

import com.amazonaws.services.s3.transfer.Upload; // import the required package/class
/**
 * Write a list of all of the state versions to S3.
 * @param newVersion
 */
private synchronized void updateSnapshotIndex(Long newVersion) {
	/// insert the new version into the list
	int idx = Collections.binarySearch(snapshotIndex, newVersion);
	int insertionPoint = Math.abs(idx) - 1;
	snapshotIndex.add(insertionPoint, newVersion);
	
	/// build a binary representation of the list -- gap encoded variable-length integers
	byte[] idxBytes = buidGapEncodedVarIntSnapshotIndex();
	
	/// indicate the Content-Length
    ObjectMetadata metadata = new ObjectMetadata();
    metadata.setHeader("Content-Length", (long)idxBytes.length);
	
    /// upload the new file content.
    try(InputStream is = new ByteArrayInputStream(idxBytes)) {
        Upload upload = s3TransferManager.upload(bucketName, getSnapshotIndexObjectName(blobNamespace), is, metadata);
        
        upload.waitForCompletion();
    } catch(Exception e) {
        throw new RuntimeException(e);
    }
}
 
Author: Netflix, Project: hollow-reference-implementation, Lines: 27, Source: S3Publisher.java
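Example 4 serializes the snapshot index as "gap encoded variable-length integers" via buidGapEncodedVarIntSnapshotIndex(), whose body is not shown. As an illustration only — Hollow's actual byte layout may differ — a self-contained sketch of gap encoding over a sorted version list, using the common 7-bits-per-byte varint scheme with the high bit as a continuation flag:

```java
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.List;

public class GapEncodedIndex {
    // Encode a sorted list of versions as gaps between consecutive values,
    // each gap written as a little-endian varint (7 payload bits per byte).
    public static byte[] encode(List<Long> sortedVersions) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        long previous = 0;
        for (long version : sortedVersions) {
            long gap = version - previous;
            previous = version;
            while ((gap & ~0x7FL) != 0) {
                out.write((int) ((gap & 0x7F) | 0x80)); // continuation bit set
                gap >>>= 7;
            }
            out.write((int) gap); // final byte: continuation bit clear
        }
        return out.toByteArray();
    }

    // Reverse the encoding: accumulate varint gaps back into absolute versions.
    public static List<Long> decode(byte[] bytes) {
        List<Long> versions = new ArrayList<>();
        long previous = 0;
        int i = 0;
        while (i < bytes.length) {
            long gap = 0;
            int shift = 0;
            while ((bytes[i] & 0x80) != 0) {
                gap |= (long) (bytes[i++] & 0x7F) << shift;
                shift += 7;
            }
            gap |= (long) (bytes[i++] & 0x7F) << shift;
            previous += gap;
            versions.add(previous);
        }
        return versions;
    }

    public static void main(String[] args) {
        List<Long> versions = List.of(1L, 130L, 1_000_000L);
        System.out.println(decode(encode(versions)));
    }
}
```

Gap encoding pays off because consecutive snapshot versions are usually close together, so most gaps fit in one or two bytes regardless of the absolute version magnitude.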

Example 5: shouldUploadInParallel

import com.amazonaws.services.s3.transfer.Upload; // import the required package/class
/**
 * Tests if an object can be uploaded asynchronously
 *
 * @throws Exception not expected
 */
@Test
public void shouldUploadInParallel() throws Exception {
  final File uploadFile = new File(UPLOAD_FILE_NAME);

  s3Client.createBucket(BUCKET_NAME);

  final TransferManager transferManager = createDefaultTransferManager();
  final Upload upload =
      transferManager.upload(new PutObjectRequest(BUCKET_NAME, UPLOAD_FILE_NAME, uploadFile));
  final UploadResult uploadResult = upload.waitForUploadResult();

  assertThat(uploadResult.getKey(), equalTo(UPLOAD_FILE_NAME));

  final S3Object getResult = s3Client.getObject(BUCKET_NAME, UPLOAD_FILE_NAME);
  assertThat(getResult.getKey(), equalTo(UPLOAD_FILE_NAME));
}
 
Author: adobe, Project: S3Mock, Lines: 22, Source: AmazonClientUploadIT.java

Example 6: checkRangeDownloads

import com.amazonaws.services.s3.transfer.Upload; // import the required package/class
/**
 * Verify that range-downloads work.
 *
 * @throws Exception not expected
 */
@Test
public void checkRangeDownloads() throws Exception {
  final File uploadFile = new File(UPLOAD_FILE_NAME);

  s3Client.createBucket(BUCKET_NAME);

  final TransferManager transferManager = createDefaultTransferManager();
  final Upload upload =
      transferManager.upload(new PutObjectRequest(BUCKET_NAME, UPLOAD_FILE_NAME, uploadFile));
  upload.waitForUploadResult();

  final File downloadFile = File.createTempFile(UUID.randomUUID().toString(), null);
  transferManager
      .download(new GetObjectRequest(BUCKET_NAME, UPLOAD_FILE_NAME).withRange(1, 2),
          downloadFile)
      .waitForCompletion();
  assertThat("Invalid file length", downloadFile.length(), is(2L));

  transferManager
      .download(new GetObjectRequest(BUCKET_NAME, UPLOAD_FILE_NAME).withRange(0, 1000),
          downloadFile)
      .waitForCompletion();
  assertThat("Invalid file length", downloadFile.length(), is(uploadFile.length()));
}
 
Author: adobe, Project: S3Mock, Lines: 30, Source: AmazonClientUploadIT.java

Example 7: putChunk

import com.amazonaws.services.s3.transfer.Upload; // import the required package/class
public long putChunk(String localDataFile, String localIndexFile, TopicPartition tp) throws IOException {
  // Put data file then index, then finally update/create the last_index_file marker
  String dataFileKey = this.getChunkFileKey(localDataFile);
  String idxFileKey = this.getChunkFileKey(localIndexFile);
  // Read offset first since we'll delete the file after upload
  long nextOffset = getNextOffsetFromIndexFileContents(new FileReader(localIndexFile));

  try {
    Upload upload = tm.upload(this.bucket, dataFileKey, new File(localDataFile));
    upload.waitForCompletion();
    upload = tm.upload(this.bucket, idxFileKey, new File(localIndexFile));
    upload.waitForCompletion();
  } catch (Exception e) {
    throw new IOException("Failed to upload to S3", e);
  }

  this.updateCursorFile(idxFileKey, tp);

  // Sanity check - return what the new nextOffset will be based on the index we just uploaded
  return nextOffset;
}
 
Author: DeviantArt, Project: kafka-connect-s3, Lines: 22, Source: S3Writer.java

Example 8: testUpload

import com.amazonaws.services.s3.transfer.Upload; // import the required package/class
public void testUpload() throws Exception {
  AmazonS3 s3Mock = mock(AmazonS3.class);
  TransferManager tmMock = mock(TransferManager.class);
  BlockGZIPFileWriter fileWriter = createDummmyFiles(0, 1000);
  S3Writer s3Writer = new S3Writer(testBucket, "pfx", s3Mock, tmMock);
  TopicPartition tp = new TopicPartition("bar", 0);

  Upload mockUpload = mock(Upload.class);

  when(tmMock.upload(eq(testBucket), eq(getKeyForFilename("pfx", "bar-00000-000000000000.gz")), isA(File.class)))
    .thenReturn(mockUpload);
  when(tmMock.upload(eq(testBucket), eq(getKeyForFilename("pfx", "bar-00000-000000000000.index.json")), isA(File.class)))
    .thenReturn(mockUpload);

  s3Writer.putChunk(fileWriter.getDataFilePath(), fileWriter.getIndexFilePath(), tp);

  verifyTMUpload(tmMock, new ExpectedRequestParams[]{
    new ExpectedRequestParams(getKeyForFilename("pfx", "bar-00000-000000000000.gz"), testBucket),
    new ExpectedRequestParams(getKeyForFilename("pfx", "bar-00000-000000000000.index.json"), testBucket)
  });

  // Verify it also wrote the index file key
  verifyStringPut(s3Mock, "pfx/last_chunk_index.bar-00000.txt",
    getKeyForFilename("pfx", "bar-00000-000000000000.index.json"));
}
 
Author: DeviantArt, Project: kafka-connect-s3, Lines: 26, Source: S3WriterTest.java

Example 9: Upload

import com.amazonaws.services.s3.transfer.Upload; // import the required package/class
public void Upload(String key, InputStream input, long size) throws InterruptedException {
    ObjectMetadata meta = new ObjectMetadata();
    if (SSE) {
        meta.setServerSideEncryption(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION);
    }
    meta.setContentLength(size);
    Upload upload = tm.upload(existingBucketName, key, input, meta);

    try {
        // Block and wait for the upload to finish
        upload.waitForCompletion();
        Logger.DEBUG("Upload complete.");
    } catch (AmazonClientException amazonClientException) {
        Logger.DEBUG("Unable to upload file, upload was aborted.");
        Logger.EXCEPTION(amazonClientException);
    }
}
 
Author: biointec, Project: halvade, Lines: 17, Source: AWSUploader.java

Example 10: uploadArtifactStream

import com.amazonaws.services.s3.transfer.Upload; // import the required package/class
private void uploadArtifactStream(IndexArtifact ia, StorageRequest sr) throws LocalStorageException
{
    try
    {
        TransferManager tx = new TransferManager(client);
        ObjectMetadata om = new ObjectMetadata();
        om.setContentLength(sr.getLength());

        String key = getPath() + ia.getLocation() + "/" + sr.getFilename();
        
        Upload myUpload = tx.upload(bucketName, key, sr.getNewStream(), om);
        myUpload.waitForCompletion();
    }
    catch (Exception exc)
    {
        logger.error(exc.getLocalizedMessage());
        throw new LocalStorageException(exc);
    }
}
 
Author: Spedge, Project: hangar, Lines: 20, Source: S3Storage.java

Example 11: doUpload

import com.amazonaws.services.s3.transfer.Upload; // import the required package/class
Upload doUpload(String bucket, String fileName, InputStream is, ObjectMetadata metadata) {
  final PutObjectRequest putObjectRequest = new PutObjectRequest(
      bucket,
      fileName,
      is,
      metadata
  );
  final String object = bucket + s3TargetConfigBean.s3Config.delimiter + fileName;
  Upload upload = transferManager.upload(putObjectRequest);
  upload.addProgressListener((ProgressListener) progressEvent -> {
    switch (progressEvent.getEventType()) {
      case TRANSFER_STARTED_EVENT:
        LOG.debug("Started uploading object {} into Amazon S3", object);
        break;
      case TRANSFER_COMPLETED_EVENT:
        LOG.debug("Completed uploading object {} into Amazon S3", object);
        break;
      case TRANSFER_FAILED_EVENT:
        LOG.debug("Failed uploading object {} into Amazon S3", object);
        break;
      default:
        break;
    }
  });
  return upload;
}
 
Author: streamsets, Project: datacollector, Lines: 27, Source: FileHelper.java

Example 12: verifyMultiPartUpload

import com.amazonaws.services.s3.transfer.Upload; // import the required package/class
private Map<String, String> verifyMultiPartUpload(MultipleFileUpload uploadDirectory) throws AmazonClientException {
    Collection<? extends Upload> uploadResults = uploadDirectory.getSubTransfers();
    Iterator<? extends Upload> iterator = uploadResults.iterator();

    Map<String, String> fileModifyMap = new HashMap<String, String>();
    while (iterator.hasNext()) {
        UploadResult uploadResult = null;

        try {
            uploadResult = iterator.next().waitForUploadResult();
        } catch (Exception e) {
            LOGGER.error(e.getMessage());
            throw new AmazonClientException(e.getMessage());
        }

        if (uploadResult != null) {
            LOGGER.info("Multipart upload success for file " + uploadResult.getKey() + " to Amazon S3 bucket " + uploadResult.getBucketName());
        }
    }
    
    return fileModifyMap;
}
 
Author: ktenzer, Project: snap2cloud, Lines: 23, Source: S3Backup.java

Example 13: store

import com.amazonaws.services.s3.transfer.Upload; // import the required package/class
@Override
public void store( final Movie movie ) throws MovieNotStoredException
{
    final String key = movie.getMovieId().getMovieId();
    logger.info( "Uploading {} to S3 key {}", movie, key );
    final File movieFile = movie.getPath().toFile();
    final PutObjectRequest putObjectRequest = new PutObjectRequest( S3_BUCKET_HOOD_ETS_SOURCE, key, movieFile );
    final ProgressListener progressListener = new S3ProgressListener( key, movieFile.length() );
    try
    {
        final Upload upload = this.transferManager.upload( putObjectRequest );
        upload.addProgressListener( progressListener );
        upload.waitForCompletion();
    }
    catch ( AmazonClientException | InterruptedException e )
    {
        this.transferManager.abortMultipartUploads( S3_BUCKET_HOOD_ETS_SOURCE, new Date() );
        throw new MovieNotStoredException( movie, e );
    }
    logger.info( "Upload complete." );
}
 
Author: stevenmhood, Project: transcoder, Lines: 22, Source: S3MovieRepository.java

Example 14: call

import com.amazonaws.services.s3.transfer.Upload; // import the required package/class
@Override
public Integer call() throws Exception {
    TransferManager t = new TransferManager(amazonS3Client);

    ObjectMetadata objectMetadata = new ObjectMetadata();
    objectMetadata.setUserMetadata(metadata);
    if(sse) {
        objectMetadata.setSSEAlgorithm(SSEAlgorithm.AES256.getAlgorithm());
    }

    Upload u = t.upload(new PutObjectRequest(bucket, key, inputFile).withMetadata(objectMetadata));

    // TODO this listener spews out garbage >100% on a retry, add a test to verify
    if (progressListener != null) {
        progressListener.withTransferProgress(new TransferProgressWrapper(u.getProgress()));
        u.addProgressListener(progressListener);
    }
    try {
        u.waitForCompletion();
    } finally {
        t.shutdownNow();
    }
    return 0;
}
 
Author: rholder, Project: esthree, Lines: 25, Source: Put.java

Example 15: transferFile

import com.amazonaws.services.s3.transfer.Upload; // import the required package/class
protected void transferFile(boolean deleteSource, String bucket, String filename, String localDirectory) {
	File source = new File(localDirectory + BaseESReducer.DIR_SEPARATOR + filename);
	Preconditions.checkArgument(source.exists(), "Could not find source file: " + source.getAbsolutePath());
	logger.info("Transferring " + source + " to " + bucket + " with key " + filename);
	try (FileInputStream fis = new FileInputStream(source)) {
		ObjectMetadata objectMetadata = new ObjectMetadata();
		objectMetadata.setSSEAlgorithm("AES256");
		objectMetadata.setContentLength(source.length());
		Upload upload = tx.upload(bucket, filename, fis, objectMetadata);

		// Busy-wait on isDone() rather than waitForCompletion(), which would throw InterruptedException
		while (!upload.isDone());
		Preconditions.checkState(upload.getState().equals(TransferState.Completed), "File " + filename + " failed to upload with state: " + upload.getState());
		if (deleteSource) {
			source.delete();
		}
	} catch (IOException e) {
		// Should never be thrown because the precondition above has already validated existence of the file
		logger.error("File could not be read: " + filename, e);
	}
}
 
Author: MyPureCloud, Project: elasticsearch-lambda, Lines: 23, Source: S3SnapshotTransport.java


Note: The com.amazonaws.services.s3.transfer.Upload class examples in this article were compiled from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by their respective developers; copyright remains with the original authors. Refer to each project's License before distributing or using the code, and do not republish without permission.