

Java ZKAssign.deleteClosingNode Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.hbase.zookeeper.ZKAssign.deleteClosingNode. If you are asking what ZKAssign.deleteClosingNode does, how to use it, or where to find examples of it, the curated code examples below may help. You can also explore further usage examples of org.apache.hadoop.hbase.zookeeper.ZKAssign.


Six code examples of the ZKAssign.deleteClosingNode method are shown below, sorted by popularity by default.
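Before the examples, here is a minimal sketch of the two call shapes that appear below, assuming an already-initialized ZooKeeperWatcher and a region's HRegionInfo taken from the surrounding cluster context. The two-argument overload appears in the HBase 0.94-era examples; the three-argument overload, which also takes the ServerName expected in the znode's region transition data, appears in the later ones. The wrapper class and method names here are hypothetical, introduced only for illustration.

import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.zookeeper.ZKAssign;
import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
import org.apache.zookeeper.KeeperException;

public class DeleteClosingNodeSketch {
  // zkw, hri, and sn are assumed to come from the running cluster context;
  // this sketch (a hypothetical wrapper) only shows the call shapes used below.
  static void cleanUpClosingNode(ZooKeeperWatcher zkw, HRegionInfo hri,
      ServerName sn) throws KeeperException {
    // Two-argument form (0.94-era examples): delete the CLOSING znode
    // for the given region.
    // ZKAssign.deleteClosingNode(zkw, hri);

    // Three-argument form (later examples): also pass the server name
    // expected in the znode's region transition data.
    ZKAssign.deleteClosingNode(zkw, hri, sn);
  }
}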

Example 1: testUnassignWithSplitAtSameTime

import org.apache.hadoop.hbase.zookeeper.ZKAssign; // import the package/class the method depends on
@Test
public void testUnassignWithSplitAtSameTime() throws KeeperException, IOException {
  // Region to use in test.
  final HRegionInfo hri = HRegionInfo.FIRST_META_REGIONINFO;
  // First amend the servermanager mock so that when we do send close of the
  // first meta region on SERVERNAME_A, it will return true rather than
  // default null.
  Mockito.when(this.serverManager.sendRegionClose(SERVERNAME_A, hri, -1)).thenReturn(true);
  // Need a mocked catalog tracker.
  CatalogTracker ct = Mockito.mock(CatalogTracker.class);
  LoadBalancer balancer = LoadBalancerFactory.getLoadBalancer(server
      .getConfiguration());
  // Create an AM.
  AssignmentManager am =
    new AssignmentManager(this.server, this.serverManager, ct, balancer, null);
  try {
    // First make sure my mock up basically works.  Unassign a region.
    unassign(am, SERVERNAME_A, hri);
    // This delete will fail if the previous unassign did the wrong thing.
    ZKAssign.deleteClosingNode(this.watcher, hri);
    // Now put a SPLITTING region in the way.  I don't have to assert it
    // got put in place.  This method puts it in place then asserts it still
    // owns it by moving state from SPLITTING to SPLITTING.
    int version = createNodeSplitting(this.watcher, hri, SERVERNAME_A);
    // Now, retry the unassign with the SPLITTING node in place.  It should
    // just complete without fail; a sort of 'silent' recognition that the
    // region to unassign has been split and no longer exists: TODO: what if
    // the split fails and the parent region comes back to life?
    unassign(am, SERVERNAME_A, hri);
    // This transition should fail if the znode has been messed with.
    ZKAssign.transitionNode(this.watcher, hri, SERVERNAME_A,
      EventType.RS_ZK_REGION_SPLITTING, EventType.RS_ZK_REGION_SPLITTING, version);
    assertTrue(am.isRegionInTransition(hri) == null);
  } finally {
    am.shutdown();
  }
}
 
Developer: fengchen8086, Project: LCIndex-HBase-0.94.16, Lines of code: 38, Source: TestAssignmentManager.java

Example 2: testUnassignWithSplitAtSameTime

import org.apache.hadoop.hbase.zookeeper.ZKAssign; // import the package/class the method depends on
@Test (timeout=180000)
public void testUnassignWithSplitAtSameTime() throws KeeperException,
    IOException, CoordinatedStateException {
  // Region to use in test.
  final HRegionInfo hri = HRegionInfo.FIRST_META_REGIONINFO;
  // First amend the servermanager mock so that when we send the close of the
  // first meta region on SERVERNAME_A, it will return true rather than the
  // default null.
  Mockito.when(this.serverManager.sendRegionClose(SERVERNAME_A, hri, -1)).thenReturn(true);
  // Get a load balancer instance.
  LoadBalancer balancer = LoadBalancerFactory.getLoadBalancer(server
      .getConfiguration());
  // Create an AM.
  AssignmentManager am = new AssignmentManager(this.server,
    this.serverManager, balancer, null, null, master.getTableLockManager());
  try {
    // First make sure my mock up basically works.  Unassign a region.
    unassign(am, SERVERNAME_A, hri);
    // This delete will fail if the previous unassign did the wrong thing.
    ZKAssign.deleteClosingNode(this.watcher, hri, SERVERNAME_A);
    // Now put a SPLITTING region in the way.  I don't have to assert it
    // got put in place.  This method puts it in place then asserts it still
    // owns it by moving state from SPLITTING to SPLITTING.
    int version = createNodeSplitting(this.watcher, hri, SERVERNAME_A);
    // Now, retry the unassign with the SPLITTING node in place.  It should
    // just complete without fail; a sort of 'silent' recognition that the
    // region to unassign has been split and no longer exists: TODO: what if
    // the split fails and the parent region comes back to life?
    unassign(am, SERVERNAME_A, hri);
    // This transition should fail if the znode has been messed with.
    ZKAssign.transitionNode(this.watcher, hri, SERVERNAME_A,
      EventType.RS_ZK_REGION_SPLITTING, EventType.RS_ZK_REGION_SPLITTING, version);
    assertFalse(am.getRegionStates().isRegionInTransition(hri));
  } finally {
    am.shutdown();
  }
}
 
Developer: grokcoder, Project: pbase, Lines of code: 38, Source: TestAssignmentManager.java

Example 3: testUnassignWithSplitAtSameTime

import org.apache.hadoop.hbase.zookeeper.ZKAssign; // import the package/class the method depends on
@Test
public void testUnassignWithSplitAtSameTime() throws KeeperException, IOException {
  // Region to use in test.
  final HRegionInfo hri = HRegionInfo.FIRST_META_REGIONINFO;
  // First amend the servermanager mock so that when we send the close of the
  // first meta region on SERVERNAME_A, it will return true rather than the
  // default null.
  Mockito.when(this.serverManager.sendRegionClose(SERVERNAME_A, hri, -1)).thenReturn(true);
  // Need a mocked catalog tracker.
  CatalogTracker ct = Mockito.mock(CatalogTracker.class);
  LoadBalancer balancer = LoadBalancerFactory.getLoadBalancer(server
      .getConfiguration());
  // Create an AM.
  AssignmentManager am = new AssignmentManager(this.server,
    this.serverManager, ct, balancer, null, null, master.getTableLockManager());
  try {
    // First make sure my mock up basically works.  Unassign a region.
    unassign(am, SERVERNAME_A, hri);
    // This delete will fail if the previous unassign did the wrong thing.
    ZKAssign.deleteClosingNode(this.watcher, hri, SERVERNAME_A);
    // Now put a SPLITTING region in the way.  I don't have to assert it
    // got put in place.  This method puts it in place then asserts it still
    // owns it by moving state from SPLITTING to SPLITTING.
    int version = createNodeSplitting(this.watcher, hri, SERVERNAME_A);
    // Now, retry the unassign with the SPLITTING node in place.  It should
    // just complete without fail; a sort of 'silent' recognition that the
    // region to unassign has been split and no longer exists: TODO: what if
    // the split fails and the parent region comes back to life?
    unassign(am, SERVERNAME_A, hri);
    // This transition should fail if the znode has been messed with.
    ZKAssign.transitionNode(this.watcher, hri, SERVERNAME_A,
      EventType.RS_ZK_REGION_SPLITTING, EventType.RS_ZK_REGION_SPLITTING, version);
    assertFalse(am.getRegionStates().isRegionInTransition(hri));
  } finally {
    am.shutdown();
  }
}
 
Developer: tenggyut, Project: HIndex, Lines of code: 38, Source: TestAssignmentManager.java

Example 4: testExistingZnodeBlocksSplitAndWeRollback

import org.apache.hadoop.hbase.zookeeper.ZKAssign; // import the package/class the method depends on
@Test (timeout = 300000) public void testExistingZnodeBlocksSplitAndWeRollback()
throws IOException, InterruptedException, NodeExistsException, KeeperException, ServiceException {
  final TableName tableName =
      TableName.valueOf("testExistingZnodeBlocksSplitAndWeRollback");

  // Create table then get the single region for our new table.
  HTable t = createTableAndWait(tableName, HConstants.CATALOG_FAMILY);
  List<HRegion> regions = cluster.getRegions(tableName);
  HRegionInfo hri = getAndCheckSingleTableRegion(regions);

  int tableRegionIndex = ensureTableRegionNotOnSameServerAsMeta(admin, hri);

  RegionStates regionStates = cluster.getMaster().getAssignmentManager().getRegionStates();

  // Turn off balancer so it doesn't cut in and mess up our placements.
  this.admin.setBalancerRunning(false, true);
  // Turn off the meta scanner so it doesn't remove the parent on us.
  cluster.getMaster().setCatalogJanitorEnabled(false);
  try {
    // Add a bit of load to the table so it is splittable.
    TESTING_UTIL.loadTable(t, HConstants.CATALOG_FAMILY, false);
    // Get region pre-split.
    HRegionServer server = cluster.getRegionServer(tableRegionIndex);
    printOutRegions(server, "Initial regions: ");
    int regionCount = ProtobufUtil.getOnlineRegions(server.getRSRpcServices()).size();
    // Insert into zk a blocking znode, a znode with the same name as the
    // region, so it gets in the way of our splitting.
    ServerName fakedServer = ServerName.valueOf("any.old.server", 1234, -1);
    if (useZKForAssignment) {
      ZKAssign.createNodeClosing(TESTING_UTIL.getZooKeeperWatcher(),
        hri, fakedServer);
    } else {
      regionStates.updateRegionState(hri, RegionState.State.CLOSING);
    }
    // Now try splitting.... should fail.  And each should successfully
    // roll back.
    this.admin.split(hri.getRegionNameAsString());
    this.admin.split(hri.getRegionNameAsString());
    this.admin.split(hri.getRegionNameAsString());
    // Wait around a while and assert count of regions remains constant.
    for (int i = 0; i < 10; i++) {
      Thread.sleep(100);
      assertEquals(regionCount, ProtobufUtil.getOnlineRegions(
        server.getRSRpcServices()).size());
    }
    if (useZKForAssignment) {
      // Now clear the zknode
      ZKAssign.deleteClosingNode(TESTING_UTIL.getZooKeeperWatcher(),
        hri, fakedServer);
    } else {
      regionStates.regionOnline(hri, server.getServerName());
    }
    // Now try splitting and it should work.
    split(hri, server, regionCount);
    // Get daughters
    checkAndGetDaughters(tableName);
    // OK, so split happened after we cleared the blocking node.
  } finally {
    admin.setBalancerRunning(true, false);
    cluster.getMaster().setCatalogJanitorEnabled(true);
    t.close();
  }
}
 
Developer: fengchen8086, Project: ditb, Lines of code: 64, Source: TestSplitTransactionOnCluster.java

Example 5: testExistingZnodeBlocksSplitAndWeRollback

import org.apache.hadoop.hbase.zookeeper.ZKAssign; // import the package/class the method depends on
@Test (timeout = 300000) public void testExistingZnodeBlocksSplitAndWeRollback()
throws IOException, InterruptedException, NodeExistsException, KeeperException {
  final byte [] tableName =
    Bytes.toBytes("testExistingZnodeBlocksSplitAndWeRollback");

  // Create table then get the single region for our new table.
  HTable t = createTableAndWait(tableName, HConstants.CATALOG_FAMILY);
  List<HRegion> regions = cluster.getRegions(tableName);
  HRegionInfo hri = getAndCheckSingleTableRegion(regions);

  int tableRegionIndex = ensureTableRegionNotOnSameServerAsMeta(admin, hri);

  // Turn off balancer so it doesn't cut in and mess up our placements.
  this.admin.setBalancerRunning(false, true);
  // Turn off the meta scanner so it doesn't remove the parent on us.
  cluster.getMaster().setCatalogJanitorEnabled(false);
  try {
    // Add a bit of load to the table so it is splittable.
    TESTING_UTIL.loadTable(t, HConstants.CATALOG_FAMILY);
    // Get region pre-split.
    HRegionServer server = cluster.getRegionServer(tableRegionIndex);
    printOutRegions(server, "Initial regions: ");
    int regionCount = server.getOnlineRegions().size();
    // Insert into zk a blocking znode, a znode with the same name as the
    // region, so it gets in the way of our splitting.
    ZKAssign.createNodeClosing(t.getConnection().getZooKeeperWatcher(),
      hri, new ServerName("any.old.server", 1234, -1));
    // Now try splitting.... should fail.  And each should successfully
    // roll back.
    this.admin.split(hri.getRegionNameAsString());
    this.admin.split(hri.getRegionNameAsString());
    this.admin.split(hri.getRegionNameAsString());
    // Wait around a while and assert count of regions remains constant.
    for (int i = 0; i < 10; i++) {
      Thread.sleep(100);
      assertEquals(regionCount, server.getOnlineRegions().size());
    }
    // Now clear the zknode
    ZKAssign.deleteClosingNode(t.getConnection().getZooKeeperWatcher(), hri);
    // Now try splitting and it should work.
    split(hri, server, regionCount);
    // Get daughters
    checkAndGetDaughters(tableName);
    // OK, so split happened after we cleared the blocking node.
  } finally {
    admin.setBalancerRunning(true, false);
    cluster.getMaster().setCatalogJanitorEnabled(true);
    t.close();
  }
}
 
Developer: fengchen8086, Project: LCIndex-HBase-0.94.16, Lines of code: 51, Source: TestSplitTransactionOnCluster.java

Example 6: testExistingZnodeBlocksSplitAndWeRollback

import org.apache.hadoop.hbase.zookeeper.ZKAssign; // import the package/class the method depends on
@Test (timeout = 300000) public void testExistingZnodeBlocksSplitAndWeRollback()
throws IOException, InterruptedException, NodeExistsException, KeeperException, ServiceException {
  final byte [] tableName =
    Bytes.toBytes("testExistingZnodeBlocksSplitAndWeRollback");

  // Create table then get the single region for our new table.
  HTable t = createTableAndWait(tableName, HConstants.CATALOG_FAMILY);
  List<HRegion> regions = cluster.getRegions(tableName);
  HRegionInfo hri = getAndCheckSingleTableRegion(regions);

  int tableRegionIndex = ensureTableRegionNotOnSameServerAsMeta(admin, hri);

  // Turn off balancer so it doesn't cut in and mess up our placements.
  this.admin.setBalancerRunning(false, true);
  // Turn off the meta scanner so it doesn't remove the parent on us.
  cluster.getMaster().setCatalogJanitorEnabled(false);
  try {
    // Add a bit of load to the table so it is splittable.
    TESTING_UTIL.loadTable(t, HConstants.CATALOG_FAMILY, false);
    // Get region pre-split.
    HRegionServer server = cluster.getRegionServer(tableRegionIndex);
    printOutRegions(server, "Initial regions: ");
    int regionCount = ProtobufUtil.getOnlineRegions(server).size();
    // Insert into zk a blocking znode, a znode with the same name as the
    // region, so it gets in the way of our splitting.
    ServerName fakedServer = ServerName.valueOf("any.old.server", 1234, -1);
    ZKAssign.createNodeClosing(TESTING_UTIL.getZooKeeperWatcher(),
      hri, fakedServer);
    // Now try splitting.... should fail.  And each should successfully
    // roll back.
    this.admin.split(hri.getRegionNameAsString());
    this.admin.split(hri.getRegionNameAsString());
    this.admin.split(hri.getRegionNameAsString());
    // Wait around a while and assert count of regions remains constant.
    for (int i = 0; i < 10; i++) {
      Thread.sleep(100);
      assertEquals(regionCount, ProtobufUtil.getOnlineRegions(server).size());
    }
    // Now clear the zknode
    ZKAssign.deleteClosingNode(TESTING_UTIL.getZooKeeperWatcher(),
      hri, fakedServer);
    // Now try splitting and it should work.
    split(hri, server, regionCount);
    // Get daughters
    checkAndGetDaughters(tableName);
    // OK, so split happened after we cleared the blocking node.
  } finally {
    admin.setBalancerRunning(true, false);
    cluster.getMaster().setCatalogJanitorEnabled(true);
    t.close();
  }
}
 
Developer: tenggyut, Project: HIndex, Lines of code: 53, Source: TestSplitTransactionOnCluster.java


Note: The org.apache.hadoop.hbase.zookeeper.ZKAssign.deleteClosingNode method examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets are selected from open-source projects contributed by various developers; copyright of the source code belongs to the original authors, and distribution and use should follow the corresponding project's License. Do not reproduce without permission.