This article collects typical usage examples of the `setMaxPoolSize` method of the Java class `com.microsoft.azure.documentdb.ConnectionPolicy`. If you are wondering what `ConnectionPolicy.setMaxPoolSize` does, how to call it, or what real-world usage looks like, the curated examples below may help. You can also explore the other methods of `com.microsoft.azure.documentdb.ConnectionPolicy`.
Three code examples of `ConnectionPolicy.setMaxPoolSize` are shown below.
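Before the examples, here is a minimal sketch of the method in isolation. It assumes the classic `com.microsoft.azure.documentdb` SDK is on the classpath; the connection mode and pool size chosen here are illustrative values, not recommendations:

```java
import com.microsoft.azure.documentdb.ConnectionMode;
import com.microsoft.azure.documentdb.ConnectionPolicy;

public class MaxPoolSizeSketch {
    public static void main(String[] args) {
        // Build a fresh policy rather than mutating the shared
        // ConnectionPolicy.GetDefault() instance.
        ConnectionPolicy policy = new ConnectionPolicy();
        policy.setConnectionMode(ConnectionMode.DirectHttps);
        // Cap the number of pooled connections the client may open.
        policy.setMaxPoolSize(200);
        System.out.println("max pool size: " + policy.getMaxPoolSize());
    }
}
```

The policy is then passed to the `DocumentClient` constructor, as the examples below show.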
Example 1: documentClientFrom

import com.microsoft.azure.documentdb.ConnectionPolicy; // class this method depends on
import com.microsoft.azure.documentdb.DocumentClient;
import com.microsoft.azure.documentdb.DocumentClientException;
import com.microsoft.azure.documentdb.RetryOptions;

public static DocumentClient documentClientFrom(CmdLineConfiguration cfg) throws DocumentClientException {
    ConnectionPolicy policy = new ConnectionPolicy();
    RetryOptions retryOptions = new RetryOptions();
    // Disable client-side retries on throttled (429) requests.
    retryOptions.setMaxRetryAttemptsOnThrottledRequests(0);
    policy.setRetryOptions(retryOptions);
    policy.setConnectionMode(cfg.getConnectionMode());
    policy.setMaxPoolSize(cfg.getMaxConnectionPoolSize());
    return new DocumentClient(cfg.getServiceEndpoint(), cfg.getMasterKey(),
            policy, cfg.getConsistencyLevel());
}
Example 2: getConnectionPolicy

import com.microsoft.azure.documentdb.ConnectionPolicy; // class this method depends on

public ConnectionPolicy getConnectionPolicy() {
    ConnectionPolicy policy = new ConnectionPolicy();
    policy.setConnectionMode(connectionMode);
    policy.setMaxPoolSize(maxConnectionPoolSize);
    return policy;
}
Example 3: main

import com.microsoft.azure.documentdb.ConnectionPolicy; // class this method depends on
import com.microsoft.azure.documentdb.ConsistencyLevel;
import com.microsoft.azure.documentdb.DocumentClient;
import com.microsoft.azure.documentdb.DocumentCollection;
import com.microsoft.azure.documentdb.RetryOptions;
import com.microsoft.azure.documentdb.bulkimport.BulkImportResponse;
import com.microsoft.azure.documentdb.bulkimport.DocumentBulkImporter;
import com.microsoft.azure.documentdb.bulkimport.DocumentBulkImporter.Builder;
import java.util.Collection;

public static void main(String[] args) throws Exception {
    ConnectionPolicy connectionPolicy = new ConnectionPolicy();
    RetryOptions retryOptions = new RetryOptions();
    // Set to 0 to let the bulk importer handle throttling itself.
    retryOptions.setMaxRetryAttemptsOnThrottledRequests(0);
    connectionPolicy.setRetryOptions(retryOptions);
    connectionPolicy.setMaxPoolSize(200);
    try (DocumentClient client = new DocumentClient(HOST, MASTER_KEY,
            connectionPolicy, ConsistencyLevel.Session)) {
        String collectionLink = String.format("/dbs/%s/colls/%s", "mydb", "mycol");
        // This assumes the database and collection already exist.
        DocumentCollection collection = client.readCollection(collectionLink, null).getResource();
        int collectionOfferThroughput = getOfferThroughput(client, collection);
        Builder bulkImporterBuilder = DocumentBulkImporter.builder().from(
                client,
                "mydb", "mycol",
                collection.getPartitionKey(),
                collectionOfferThroughput);
        try (DocumentBulkImporter importer = bulkImporterBuilder.build()) {
            // NOTE: to get higher throughput:
            // 1) Set the JVM heap size large enough to avoid memory issues when handling
            //    a large number of documents. Suggested heap size:
            //    max(3GB, 3 * size of all documents passed to bulk import in one batch).
            // 2) There is pre-processing and warm-up time, so larger batches yield higher
            //    throughput. To import 10,000,000 documents, running bulk import 10 times
            //    on batches of 1,000,000 documents is preferable to running it 100 times
            //    on batches of 100,000 documents.
            for (int i = 0; i < 10; i++) {
                Collection<String> docs = DataMigrationDocumentSource.loadDocuments(1000000, collection.getPartitionKey());
                BulkImportResponse bulkImportResponse = importer.importAll(docs, false);
                // Returned stats:
                System.out.println("Number of documents inserted: " + bulkImportResponse.getNumberOfDocumentsImported());
                System.out.println("Import total time: " + bulkImportResponse.getTotalTimeTaken());
                System.out.println("Total request units consumed: " + bulkImportResponse.getTotalRequestUnitsConsumed());
                // Validate that all documents in this checkpoint were inserted.
                if (bulkImportResponse.getNumberOfDocumentsImported() < docs.size()) {
                    System.err.println("Some documents failed to get inserted in this checkpoint."
                            + " This checkpoint has to be retried with upsert enabled.");
                    for (int j = 0; j < bulkImportResponse.getErrors().size(); j++) {
                        bulkImportResponse.getErrors().get(j).printStackTrace();
                    }
                    break;
                }
            }
        }
    }
}