

Java Upload.waitForUploadResult Method Code Examples

This article compiles typical usage examples of the Java method com.amazonaws.services.s3.transfer.Upload.waitForUploadResult. If you are wondering what Upload.waitForUploadResult does, how to call it, or what it looks like in practice, the curated code examples below may help. You can also explore further usage examples of com.amazonaws.services.s3.transfer.Upload, the class this method belongs to.


The sections below present 15 code examples of Upload.waitForUploadResult, sorted by popularity by default.
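Before the individual examples, here is a minimal sketch of the common pattern they all share: start an asynchronous upload via TransferManager, then block on waitForUploadResult() until it completes. This assumes AWS SDK for Java 1.x and credentials resolvable from the default provider chain; the bucket name, key, and file path are placeholders.

```java
import java.io.File;

import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;
import com.amazonaws.services.s3.transfer.model.UploadResult;

public class WaitForUploadResultSketch {
    public static void main(String[] args) throws InterruptedException {
        // Builds a TransferManager using the default credential/region chain.
        TransferManager tm = TransferManagerBuilder.standard().build();
        try {
            // upload() returns immediately; the transfer runs on background threads.
            Upload upload = tm.upload("my-bucket", "my-key", new File("local-file.txt"));

            // waitForUploadResult() blocks the calling thread until the transfer
            // finishes; it throws AmazonClientException if the upload fails and
            // InterruptedException if the wait is interrupted.
            UploadResult result = upload.waitForUploadResult();
            System.out.println("Uploaded key=" + result.getKey()
                    + " eTag=" + result.getETag());
        } finally {
            // Releases the TransferManager's thread pool.
            tm.shutdownNow();
        }
    }
}
```

waitForUploadResult() is the blocking convenience used throughout the examples below; callers that need to stay responsive can instead poll upload.isDone() or attach a ProgressListener.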

Example 1: shouldUploadInParallel

import com.amazonaws.services.s3.transfer.Upload; // import required package/class
/**
 * Tests if an object can be uploaded asynchronously
 *
 * @throws Exception not expected
 */
@Test
public void shouldUploadInParallel() throws Exception {
  final File uploadFile = new File(UPLOAD_FILE_NAME);

  s3Client.createBucket(BUCKET_NAME);

  final TransferManager transferManager = createDefaultTransferManager();
  final Upload upload =
      transferManager.upload(new PutObjectRequest(BUCKET_NAME, UPLOAD_FILE_NAME, uploadFile));
  final UploadResult uploadResult = upload.waitForUploadResult();

  assertThat(uploadResult.getKey(), equalTo(UPLOAD_FILE_NAME));

  final S3Object getResult = s3Client.getObject(BUCKET_NAME, UPLOAD_FILE_NAME);
  assertThat(getResult.getKey(), equalTo(UPLOAD_FILE_NAME));
}
 
Developer: adobe, Project: S3Mock, Lines: 22, Source: AmazonClientUploadIT.java

Example 2: checkRangeDownloads

import com.amazonaws.services.s3.transfer.Upload; // import required package/class
/**
 * Verify that range-downloads work.
 *
 * @throws Exception not expected
 */
@Test
public void checkRangeDownloads() throws Exception {
  final File uploadFile = new File(UPLOAD_FILE_NAME);

  s3Client.createBucket(BUCKET_NAME);

  final TransferManager transferManager = createDefaultTransferManager();
  final Upload upload =
      transferManager.upload(new PutObjectRequest(BUCKET_NAME, UPLOAD_FILE_NAME, uploadFile));
  upload.waitForUploadResult();

  final File downloadFile = File.createTempFile(UUID.randomUUID().toString(), null);
  transferManager
      .download(new GetObjectRequest(BUCKET_NAME, UPLOAD_FILE_NAME).withRange(1, 2),
          downloadFile)
      .waitForCompletion();
  assertThat("Invalid file length", downloadFile.length(), is(2L));

  transferManager
      .download(new GetObjectRequest(BUCKET_NAME, UPLOAD_FILE_NAME).withRange(0, 1000),
          downloadFile)
      .waitForCompletion();
  assertThat("Invalid file length", downloadFile.length(), is(uploadFile.length()));
}
 
Developer: adobe, Project: S3Mock, Lines: 30, Source: AmazonClientUploadIT.java

Example 3: shouldUploadAndDownloadStream

import com.amazonaws.services.s3.transfer.Upload; // import required package/class
/**
 * Stores a file in a previously created bucket, downloads the file again, and compares checksums.
 *
 * @throws Exception if the file streams cannot be read
 */
@Test
public void shouldUploadAndDownloadStream() throws Exception {
  s3Client.createBucket(BUCKET_NAME);
  final String resourceId = UUID.randomUUID().toString();

  final byte[] resource = new byte[] {1, 2, 3, 4, 5};
  final ByteArrayInputStream bais = new ByteArrayInputStream(resource);

  final ObjectMetadata objectMetadata = new ObjectMetadata();
  objectMetadata.setContentLength(resource.length);
  final PutObjectRequest putObjectRequest =
      new PutObjectRequest(BUCKET_NAME, resourceId, bais, objectMetadata);

  final TransferManager tm = createDefaultTransferManager();
  final Upload upload = tm.upload(putObjectRequest);

  upload.waitForUploadResult();

  final S3Object s3Object = s3Client.getObject(BUCKET_NAME, resourceId);

  final String uploadHash = HashUtil.getDigest(new ByteArrayInputStream(resource));
  final String downloadedHash = HashUtil.getDigest(s3Object.getObjectContent());
  s3Object.close();

  assertThat("Uploaded and downloaded files should have equal hashes", uploadHash,
      is(equalTo(downloadedHash)));
}
 
Developer: adobe, Project: S3Mock, Lines: 33, Source: AmazonClientUploadIT.java

Example 4: multipartCopy

import com.amazonaws.services.s3.transfer.Upload; // import required package/class
/**
 * Verifies multipart copy.
 *
 * @throws InterruptedException if the transfer is interrupted while waiting
 */
@Test
public void multipartCopy() throws InterruptedException, IOException, NoSuchAlgorithmException {
  final int contentLen = 3 * _1MB;

  final ObjectMetadata objectMetadata = new ObjectMetadata();
  objectMetadata.setContentLength(contentLen);

  final String assumedSourceKey = UUID.randomUUID().toString();

  final Bucket sourceBucket = s3Client.createBucket(UUID.randomUUID().toString());
  final Bucket targetBucket = s3Client.createBucket(UUID.randomUUID().toString());

  final TransferManager transferManager = createTransferManager(_2MB, _1MB, _2MB, _1MB);

  final InputStream sourceInputStream = randomInputStream(contentLen);
  final Upload upload = transferManager
      .upload(sourceBucket.getName(), assumedSourceKey,
          sourceInputStream, objectMetadata);

  final UploadResult uploadResult = upload.waitForUploadResult();

  assertThat(uploadResult.getKey(), is(assumedSourceKey));

  final String assumedDestinationKey = UUID.randomUUID().toString();
  final Copy copy =
      transferManager.copy(sourceBucket.getName(), assumedSourceKey, targetBucket.getName(),
          assumedDestinationKey);
  final CopyResult copyResult = copy.waitForCopyResult();
  assertThat(copyResult.getDestinationKey(), is(assumedDestinationKey));

  final S3Object copiedObject = s3Client.getObject(targetBucket.getName(), assumedDestinationKey);

  assertThat("Hashes for source and target S3Object do not match.",
      HashUtil.getDigest(copiedObject.getObjectContent()) + "-1",
      is(uploadResult.getETag()));
}
 
Developer: adobe, Project: S3Mock, Lines: 42, Source: AmazonClientUploadIT.java

Example 5: testUploadFileAsync

import com.amazonaws.services.s3.transfer.Upload; // import required package/class
/**
    * Test method for {@link com.github.abhinavmishra14.aws.s3.service.AwsS3IamService#uploadFileAsync(java.lang.String, java.lang.String, java.io.File)}.
    *
    * @throws Exception the exception
    */
@Test
public void testUploadFileAsync() throws Exception{
	awsS3IamService.createBucket(AWS_S3_BUCKET);//create bucket for test
	InputStream inStream = AwsS3IamServiceTest.class
			.getResourceAsStream("/sample-file/TestPutObject.txt");
	File tempFile = AWSUtil.createTempFileFromStream(inStream);
	Upload upload = awsS3IamService.uploadFileAsync(AWS_S3_BUCKET, AWSUtilConstants.SAMPLE_FILE_NAME, tempFile);
	upload.waitForUploadResult();
	assertEquals(true,upload.isDone());
}
 
Developer: abhinavmishra14, Project: aws-s3-utils, Lines: 16, Source: AwsS3IamServiceTest.java

Example 6: testUploadFileWithPublicAccessAsync

import com.amazonaws.services.s3.transfer.Upload; // import required package/class
/**
    * Test method for {@link com.github.abhinavmishra14.aws.s3.service.AwsS3IamService#uploadFileAsync(java.lang.String, java.lang.String, java.io.File,boolean)}.
    *
    * @throws Exception the exception
    */
@Test
public void testUploadFileWithPublicAccessAsync() throws Exception{
	awsS3IamService.createBucket(AWS_S3_BUCKET);//create bucket for test
	InputStream inStream = AwsS3IamServiceTest.class
			.getResourceAsStream("/sample-file/TestPutObject.txt");
	File tempFile = AWSUtil.createTempFileFromStream(inStream);
	Upload upload = awsS3IamService.uploadFileAsync(AWS_S3_BUCKET, AWSUtilConstants.SAMPLE_FILE_NAME, tempFile,true);
	upload.waitForUploadResult();
	assertEquals(true,upload.isDone());
}
 
Developer: abhinavmishra14, Project: aws-s3-utils, Lines: 16, Source: AwsS3IamServiceTest.java

Example 7: testUploadFileWithCannedACLAsync

import com.amazonaws.services.s3.transfer.Upload; // import required package/class
/**
    * Test method for {@link com.github.abhinavmishra14.aws.s3.service.AwsS3IamService#uploadFileAsync(java.lang.String, java.lang.String, java.io.File,com.amazonaws.services.s3.model.CannedAccessControlList)}.
    *
    * @throws Exception the exception
    */
@Test
public void testUploadFileWithCannedACLAsync() throws Exception{
	awsS3IamService.createBucket(AWS_S3_BUCKET);//create bucket for test
	InputStream inStream = AwsS3IamServiceTest.class
			.getResourceAsStream("/sample-file/TestPutObject.txt");
	File tempFile = AWSUtil.createTempFileFromStream(inStream);
	Upload upload = awsS3IamService.uploadFileAsync(AWS_S3_BUCKET, AWSUtilConstants.SAMPLE_FILE_NAME, tempFile,CannedAccessControlList.PublicRead);
	upload.waitForUploadResult();
	assertEquals(true,upload.isDone());
}
 
Developer: abhinavmishra14, Project: aws-s3-utils, Lines: 16, Source: AwsS3IamServiceTest.java

Example 8: upload

import com.amazonaws.services.s3.transfer.Upload; // import required package/class
@Override
public void upload(String bucketName, String name, InputStream input, ObjectMetadata meta) throws IOException {
    final Upload myUpload = tx.upload(bucketName, name, input, meta);
    try {
        UploadResult uploadResult = myUpload.waitForUploadResult();
        LOG.info("Upload completed, bucket={}, key={}", uploadResult.getBucketName(), uploadResult.getKey());
    } catch (InterruptedException e) {
        throw new IOException(e);
    }
}
 
Developer: wurstmeister, Project: storm-s3, Lines: 11, Source: BlockingTransferManagerUploader.java

Example 9: createStreamWriter

import com.amazonaws.services.s3.transfer.Upload; // import required package/class
public BufferedWriter createStreamWriter(String correlationID, String streamUri) throws IOException {
	String[] split = streamUri.split("://", 2);
	String protocol = split[0];
	String path = split[1];
	if (Constants.FILE.equals(protocol)) {
		return new BufferedWriter(new FileWriter(path));
	} else if (Constants.s3.equals(protocol)) {

		String[] split1 = path.split("/", 2);
		final String bucketName = split1[0];
		final String objectKey = split1[1];

		String tempFilePath = tempDirectoryPath + "/" + correlationID;

		final File tempFile = new File(tempFilePath);
		return new BufferedWriterTaskOnClose(new FileWriter(tempFile), new Task() {
			@Override
			public void run() throws InterruptedException {
				if (!offlineMode) {
					Upload upload = transferManager.upload(bucketName, objectKey, tempFile);
					upload.waitForUploadResult();
					tempFile.delete();
				}
			}
		});
	} else {
		throw new NotImplementedException("Unrecognised stream URI protocol: " + protocol);
	}
}
 
Developer: IHTSDO, Project: snomed-release-service, Lines: 30, Source: StreamFactory.java

Example 10: doInBackground

import com.amazonaws.services.s3.transfer.Upload; // import required package/class
@Override
protected UploadResult doInBackground(String... mediaPaths) {
    UploadResult result = null;

    if(null == mediaPaths[0]) {
        jobFailed(null, 7000000, "S3 media path is null");
        return result;
    }

    File mediaFile = new File(mediaPaths[0]);
    if (!mediaFile.exists()) {
        jobFailed(null, 7000001, "S3 media path invalid");
        return result;
    }

    try {
        final AWSCredentials credentials = new BasicAWSCredentials(mContext.getString(R.string.s3_key), mContext.getString(R.string.s3_secret));
        Log.i(TAG, "upload file: " + mediaFile.getName());

        AmazonS3Client s3Client = new AmazonS3Client(credentials, s3Config);
        TransferManager transferManager = new TransferManager(s3Client);
        Upload upload = transferManager.upload(bucket, pathPrefix + mediaFile.getName(), mediaFile);

        result = upload.waitForUploadResult();
    } catch (Exception e) {
        Timber.e("upload error: " + e.getMessage());
        jobFailed(null, 7000002, "S3 upload failed: " + e.getMessage());
    }

    return result;
}
 
Developer: StoryMaker, Project: SecureShareLib, Lines: 32, Source: S3SiteController.java

Example 11: close

import com.amazonaws.services.s3.transfer.Upload; // import required package/class
@Override
public synchronized void close() throws IOException {
  if (closed) {
    return;
  }

  backupStream.close();
  if (LOG.isDebugEnabled()) {
    LOG.debug("OutputStream for key '" + key + "' closed. Now beginning upload");
    LOG.debug("Minimum upload part size: " + partSize + " threshold " + partSizeThreshold);
  }


  try {
    final ObjectMetadata om = new ObjectMetadata();
    if (StringUtils.isNotBlank(serverSideEncryptionAlgorithm)) {
      om.setServerSideEncryption(serverSideEncryptionAlgorithm);
    }
    PutObjectRequest putObjectRequest = new PutObjectRequest(bucket, key, backupFile);
    putObjectRequest.setCannedAcl(cannedACL);
    putObjectRequest.setMetadata(om);

    Upload upload = transfers.upload(putObjectRequest);

    ProgressableProgressListener listener = 
      new ProgressableProgressListener(upload, progress, statistics);
    upload.addProgressListener(listener);

    upload.waitForUploadResult();

    long delta = upload.getProgress().getBytesTransferred() - listener.getLastBytesTransferred();
    if (statistics != null && delta != 0) {
      if (LOG.isDebugEnabled()) {
        LOG.debug("S3A write delta changed after finished: " + delta + " bytes");
      }
      statistics.incrementBytesWritten(delta);
    }

    // This will delete unnecessary fake parent directories
    fs.finishedWrite(key);
  } catch (InterruptedException e) {
    throw new IOException(e);
  } finally {
    if (!backupFile.delete()) {
      LOG.warn("Could not delete temporary s3a file: {}", backupFile);
    }
    super.close();
    closed = true;
  }
  if (LOG.isDebugEnabled()) {
    LOG.debug("OutputStream for key '" + key + "' upload complete");
  }
}
 
Developer: naver, Project: hadoop, Lines: 54, Source: S3AOutputStream.java

Example 12: copyFromLocalFile

import com.amazonaws.services.s3.transfer.Upload; // import required package/class
/**
 * The src file is on the local disk.  Add it to FS at
 * the given dst name.
 *
 * This version doesn't need to create a temporary file to calculate the md5.
 * Sadly this doesn't seem to be used by the shell cp :(
 *
 * delSrc indicates if the source should be removed
 * @param delSrc whether to delete the src
 * @param overwrite whether to overwrite an existing file
 * @param src path
 * @param dst path
 */
@Override
public void copyFromLocalFile(boolean delSrc, boolean overwrite, Path src, 
  Path dst) throws IOException {
  String key = pathToKey(dst);

  if (!overwrite && exists(dst)) {
    throw new IOException(dst + " already exists");
  }
  if (LOG.isDebugEnabled()) {
    LOG.debug("Copying local file from " + src + " to " + dst);
  }

  // Since we have a local file, we don't need to stream into a temporary file
  LocalFileSystem local = getLocal(getConf());
  File srcfile = local.pathToFile(src);

  final ObjectMetadata om = new ObjectMetadata();
  if (StringUtils.isNotBlank(serverSideEncryptionAlgorithm)) {
    om.setServerSideEncryption(serverSideEncryptionAlgorithm);
  }
  PutObjectRequest putObjectRequest = new PutObjectRequest(bucket, key, srcfile);
  putObjectRequest.setCannedAcl(cannedACL);
  putObjectRequest.setMetadata(om);

  ProgressListener progressListener = new ProgressListener() {
    public void progressChanged(ProgressEvent progressEvent) {
      switch (progressEvent.getEventCode()) {
        case ProgressEvent.PART_COMPLETED_EVENT_CODE:
          statistics.incrementWriteOps(1);
          break;
        default:
          break;
      }
    }
  };

  Upload up = transfers.upload(putObjectRequest);
  up.addProgressListener(progressListener);
  try {
    up.waitForUploadResult();
    statistics.incrementWriteOps(1);
  } catch (InterruptedException e) {
    throw new IOException("Got interrupted, cancelling");
  }

  // This will delete unnecessary fake parent directories
  finishedWrite(key);

  if (delSrc) {
    local.delete(src, false);
  }
}
 
Developer: naver, Project: hadoop, Lines: 66, Source: S3AFileSystem.java

Example 13: close

import com.amazonaws.services.s3.transfer.Upload; // import required package/class
@Override
public synchronized void close() throws IOException {
  if (closed) {
    return;
  }

  backupStream.close();
  if (LOG.isDebugEnabled()) {
    LOG.debug("OutputStream for key '" + key + "' closed. Now beginning upload");
    LOG.debug("Minimum upload part size: " + partSize + " threshold " + partSizeThreshold);
  }


  try {
    final ObjectMetadata om = new ObjectMetadata();
    if (StringUtils.isNotBlank(serverSideEncryptionAlgorithm)) {
      om.setSSEAlgorithm(serverSideEncryptionAlgorithm);
    }
    PutObjectRequest putObjectRequest = new PutObjectRequest(bucket, key, backupFile);
    putObjectRequest.setCannedAcl(cannedACL);
    putObjectRequest.setMetadata(om);

    Upload upload = transfers.upload(putObjectRequest);

    ProgressableProgressListener listener = 
      new ProgressableProgressListener(upload, progress, statistics);
    upload.addProgressListener(listener);

    upload.waitForUploadResult();

    long delta = upload.getProgress().getBytesTransferred() - listener.getLastBytesTransferred();
    if (statistics != null && delta != 0) {
      if (LOG.isDebugEnabled()) {
        LOG.debug("S3A write delta changed after finished: " + delta + " bytes");
      }
      statistics.incrementBytesWritten(delta);
    }

    // This will delete unnecessary fake parent directories
    fs.finishedWrite(key);
  } catch (InterruptedException e) {
    throw new IOException(e);
  } finally {
    if (!backupFile.delete()) {
      LOG.warn("Could not delete temporary s3a file: {}", backupFile);
    }
    super.close();
    closed = true;
  }
  if (LOG.isDebugEnabled()) {
    LOG.debug("OutputStream for key '" + key + "' upload complete");
  }
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 54, Source: S3AOutputStream.java

Example 14: copyFromLocalFile

import com.amazonaws.services.s3.transfer.Upload; // import required package/class
/**
 * The src file is on the local disk.  Add it to FS at
 * the given dst name.
 *
 * This version doesn't need to create a temporary file to calculate the md5.
 * Sadly this doesn't seem to be used by the shell cp :(
 *
 * delSrc indicates if the source should be removed
 * @param delSrc whether to delete the src
 * @param overwrite whether to overwrite an existing file
 * @param src path
 * @param dst path
 */
@Override
public void copyFromLocalFile(boolean delSrc, boolean overwrite, Path src, 
  Path dst) throws IOException {
  String key = pathToKey(dst);

  if (!overwrite && exists(dst)) {
    throw new IOException(dst + " already exists");
  }
  if (LOG.isDebugEnabled()) {
    LOG.debug("Copying local file from " + src + " to " + dst);
  }

  // Since we have a local file, we don't need to stream into a temporary file
  LocalFileSystem local = getLocal(getConf());
  File srcfile = local.pathToFile(src);

  final ObjectMetadata om = new ObjectMetadata();
  if (StringUtils.isNotBlank(serverSideEncryptionAlgorithm)) {
    om.setSSEAlgorithm(serverSideEncryptionAlgorithm);
  }
  PutObjectRequest putObjectRequest = new PutObjectRequest(bucket, key, srcfile);
  putObjectRequest.setCannedAcl(cannedACL);
  putObjectRequest.setMetadata(om);

  ProgressListener progressListener = new ProgressListener() {
    public void progressChanged(ProgressEvent progressEvent) {
      switch (progressEvent.getEventType()) {
        case TRANSFER_PART_COMPLETED_EVENT:
          statistics.incrementWriteOps(1);
          break;
        default:
          break;
      }
    }
  };

  Upload up = transfers.upload(putObjectRequest);
  up.addProgressListener(progressListener);
  try {
    up.waitForUploadResult();
    statistics.incrementWriteOps(1);
  } catch (InterruptedException e) {
    throw new IOException("Got interrupted, cancelling");
  }

  // This will delete unnecessary fake parent directories
  finishedWrite(key);

  if (delSrc) {
    local.delete(src, false);
  }
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 66, Source: S3AFileSystem.java

Example 15: storeFile

import com.amazonaws.services.s3.transfer.Upload; // import required package/class
@Override
public void storeFile(final FileType fileType,
                      final PersistentFileMetadata metadata,
                      final InputStream inputStream) throws IOException {
    final String bucketName = fileType.resolveAwsBucketName(awsConfigProperties);
    final String objectKey = fileType.resolveAwsBucketKey(metadata);

    if (!StringUtils.hasText(bucketName)) {
        throw new IllegalStateException("Bucket name is not configured for fileType=" + fileType);
    }

    // Store S3 bucket and key embedded as resource URL
    metadata.setResourceUrl(S3Util.createResourceURL(bucketName, objectKey));

    final ObjectMetadata objectMetadata = new ObjectMetadata();
    objectMetadata.setContentLength(Objects.requireNonNull(metadata.getContentSize()));
    objectMetadata.setContentType(Objects.requireNonNull(metadata.getContentType()));

    if (metadata.getMd5Hash() != null) {
        // Use MD5 to verify uploaded file
        objectMetadata.setContentMD5(Base64.encodeAsString(metadata.getMd5Hash().asBytes()));
    }

    try {
        final PutObjectRequest request = new PutObjectRequest(bucketName, objectKey, inputStream, objectMetadata);
        request.setCannedAcl(CannedAccessControlList.Private);

        final Upload upload = this.transferManager.upload(request);
        upload.waitForUploadResult();

    } catch (Exception e) {
        Throwables.throwIfUnchecked(e);
        throw new RuntimeException(e);
    } finally {
        inputStream.close();
    }

    TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronizationAdapter() {
        @Override
        public void afterCompletion(final int status) {
            // Remove file if transaction is rolled back
            if (status == STATUS_ROLLED_BACK) {
                removeInternal(new S3Util.BucketObjectPair(bucketName, objectKey));
            }
        }
    });
}
 
Developer: suomenriistakeskus, Project: oma-riista-web, Lines: 48, Source: S3FileStorage.java


Note: The com.amazonaws.services.s3.transfer.Upload.waitForUploadResult examples in this article were compiled from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are selected from open-source projects contributed by their respective authors, and copyright in the source code remains with the original authors. For distribution and use, refer to the corresponding project's license; please do not reproduce without permission.