

Python Executor.ncores Method Code Examples

This article collects typical usage examples of the Python method distributed.Executor.ncores. If you are wondering what Executor.ncores does, how to call it, or what it looks like in real code, the curated examples below should help. You can also explore further usage examples of distributed.Executor itself. (Note that in modern versions of the distributed library, Executor has been renamed Client.)


The following presents 2 code examples of Executor.ncores, sorted by popularity by default.
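Before the examples, it helps to know the method's return shape: `Executor.ncores()` returns a dict mapping each worker's address to its core count, so the cluster's total core count is the sum of the values. A minimal sketch of that pattern, with a hand-written dict standing in for a live call (the worker addresses and counts are made up for illustration):

```python
# ncores() returns {worker_address: core_count}; this dict mimics its
# return value. The addresses and counts here are hypothetical.
ncores_by_worker = {
    "tcp://10.0.0.1:45321": 4,
    "tcp://10.0.0.2:45322": 8,
}

# Total cores available across the cluster, computed exactly as both
# examples below compute it: sum(client.ncores().values())
total_cores = sum(ncores_by_worker.values())
print(total_cores)  # → 12
```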

Example 1: Executor

# Required import: from distributed import Executor [as alias]
# Or: from distributed.Executor import ncores [as alias]
        logging.basicConfig(level="DEBUG")
    out_fds = [sys.stdout]
    if args.out:
        out_fds.append(open(args.out, 'w'))

    client = None
    if args.dask_scheduler:
        client = Executor(args.dask_scheduler)
    else:
        client = Executor()

    print(dir(client))
    logging.info(
        "Running with dask scheduler: %s [%s cores]" % (
            args.dask_scheduler,
            sum(client.ncores().values())))

    if args.jobs_range is not None:
        for i in range(*args.jobs_range):
            command = args.scale_command % i
            logging.info("Running: %s" % command)
            subprocess.check_call(command, shell=True)
            while True:
                cores = sum(client.ncores().values())
                logging.info(
                    "Cores: %d. Waiting for %d cores." % (cores, i))
                if cores == i:
                    break
                time.sleep(1)
            go(client, args, cores, out_fds)
    else:
Author: hammerlab, Project: mhcflurry-cloud, Lines: 33, Source: benchmark.py
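The polling loop in Example 1 (check `ncores()`, compare against the expected count, sleep, repeat) can be factored into a small helper. The sketch below injects the `ncores`-style callable instead of using a live client, so it runs without a scheduler; the helper name and the simulated snapshots are illustrative, not part of the original project:

```python
import logging
import time


def wait_for_cores(get_ncores, expected, poll_interval=1.0):
    """Block until the cluster reports exactly `expected` total cores.

    `get_ncores` stands in for `client.ncores` and must return a dict
    mapping worker address -> core count, as Executor.ncores() does.
    """
    while True:
        cores = sum(get_ncores().values())
        logging.info("Cores: %d. Waiting for %d cores.", cores, expected)
        if cores == expected:
            return cores
        time.sleep(poll_interval)


# Simulate workers joining one at a time: each call to the injected
# callable yields the next cluster snapshot.
snapshots = iter([{}, {"w1": 1}, {"w1": 1, "w2": 1}])
print(wait_for_cores(lambda: next(snapshots), 2, poll_interval=0))  # → 2
```

Note that, like the original loop, this waits for an *exact* match (`cores == expected`); if the cluster overshoots the target, the loop never exits, so `cores >= expected` may be the safer condition in practice.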

Example 2: DistributedContext

# Required import: from distributed import Executor [as alias]
# Or: from distributed.Executor import ncores [as alias]
class DistributedContext(object):
    io_loop = None
    io_thread = None

    def __init__(self,
                 ip="127.0.0.1",
                 port=8787,
                 spawn_workers=0,
                 write_partial_results=None,
                 track_progress=False,
                 time_limit=None,
                 job_observer=None):
        """
        :type ip: string
        :type port: int
        :type spawn_workers: int
        :type write_partial_results: int
        :type track_progress: bool
        :type time_limit: int
        :type job_observer: JobObserver
        """

        self.worker_count = spawn_workers
        self.ip = ip
        self.port = port
        self.active = False
        self.write_partial_results = write_partial_results
        self.track_progress = track_progress
        self.execution_count = 0
        self.timeout = TimeoutManager(time_limit) if time_limit else None
        self.job_observer = job_observer

        if not DistributedContext.io_loop:
            DistributedContext.io_loop = IOLoop()
            DistributedContext.io_thread = Thread(
                target=DistributedContext.io_loop.start)
            DistributedContext.io_thread.daemon = True
            DistributedContext.io_thread.start()

        if spawn_workers > 0:
            self.scheduler = self._create_scheduler()
            self.workers = [self._create_worker()
                            for i in xrange(spawn_workers)]
            time.sleep(0.5)  # wait for workers to spawn

        self.executor = Executor((ip, port))

    def run(self, domain,
            worker_reduce_fn, worker_reduce_init,
            global_reduce_fn, global_reduce_init):
        size = domain.steps
        assert size is not None  # TODO: Iterators without size

        workers = 0
        for name, value in self.executor.ncores().items():
            workers += value

        if workers == 0:
            raise Exception("There are no workers")

        batch_count = workers * 4
        batch_size = max(int(round(size / float(batch_count))), 1)
        batches = self._create_batches(batch_size, size, domain,
                                       worker_reduce_fn, worker_reduce_init)

        logging.info("Qit: starting {} batches with size {}".format(
            batch_count, batch_size))

        if self.job_observer:
            self.job_observer.on_computation_start(batch_count, batch_size)

        futures = self.executor.map(process_batch, batches)

        if self.track_progress:
            distributed.diagnostics.progress(futures)

        if self.write_partial_results is not None:
            result_saver = ResultSaver(self.execution_count,
                                       self.write_partial_results)
        else:
            result_saver = None

        timeouted = False
        results = []

        for future in as_completed(futures):
            job = future.result()
            if result_saver:
                result_saver.handle_result(job.result)
            if self.job_observer:
                self.job_observer.on_job_completed(job)

            results.append(job.result)

            if self.timeout and self.timeout.is_finished():
                logging.info("Qit: timeouted after {} seconds".format(
                    self.timeout.timeout))
                timeouted = True
                break

#......... remainder of the code omitted .........
Author: Kobzol, Project: pyqit, Lines: 103, Source: distributedcontext.py
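The batch-sizing arithmetic in `DistributedContext.run` is worth isolating: it sums the per-worker values from `Executor.ncores()` to get the total core count, targets four batches per core, and divides the domain size accordingly (with a floor of 1). A standalone sketch of that arithmetic, using a mocked `ncores()` dict (the function name and worker address are hypothetical):

```python
def plan_batches(ncores_by_worker, size):
    """Reproduce the batch sizing from DistributedContext.run.

    `ncores_by_worker` mimics the dict returned by Executor.ncores().
    Returns (batch_count, batch_size).
    """
    workers = sum(ncores_by_worker.values())
    if workers == 0:
        raise Exception("There are no workers")
    batch_count = workers * 4          # four batches per core
    batch_size = max(int(round(size / float(batch_count))), 1)
    return batch_count, batch_size


# A 2-core worker splitting a 1000-step domain into 8 batches of 125.
print(plan_batches({"tcp://10.0.0.1:45321": 2}, 1000))  # → (8, 125)
```

Oversubscribing by a factor of four is a common load-balancing choice: with more batches than cores, faster workers pick up extra batches instead of idling while slower ones finish.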


Note: the distributed.Executor.ncores examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are drawn from open-source projects contributed by their respective developers, and copyright remains with the original authors. Consult each project's license before distributing or using the code; do not reproduce without permission.