This article collects typical usage examples of the Python method joblib.Parallel.extend. If you are wondering how Parallel.extend is used in practice, the curated code samples below may help. You can also read further about the enclosing class, joblib.Parallel.
The following shows 1 code example involving Parallel.extend, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Python code samples.
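Note that, to the best of my knowledge, joblib.Parallel itself does not define an extend method; in the example below, extend is called on the ordinary Python list that collects the results returned by Parallel. As background, a minimal sketch of the basic Parallel/delayed pattern (square is a toy stand-in function, not from the example):

```python
from joblib import Parallel, delayed

def square(x):
    # toy stand-in for an expensive per-item computation
    return x * x

# run the computation across 2 worker processes;
# joblib preserves the input order in the returned list
results = Parallel(n_jobs=2)(delayed(square)(i) for i in range(5))
print(results)  # [0, 1, 4, 9, 16]
```

Parallel returns a plain list, so list operations such as extend apply to the collected results rather than to the Parallel object itself.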
Example 1: process_point
# Required import: from joblib import Parallel [as alias]
# Alternative: from joblib.Parallel import extend [as alias]
    nnc.sync()
    tides = process_point(i, nnc)
    write_point(nnc, i, tides)
else:
    from joblib import Parallel, delayed
    from tqdm import tqdm  # progress bar (assumed import, used below)
    import pickle          # used below to save the result

    elev = ncv['elev'][:]
    # timeseries version or chunked version
    # (hand over subarray instead of timeseries)
    if False:
        result = Parallel(n_jobs=n_jobs)(
            delayed(process_timeseries_parallel)(elev[:, i])
            for i in tqdm(range(inum), ascii=True))
    else:
        chunksize = 100
        chunked_result = Parallel(n_jobs=n_jobs)(
            delayed(process_chunk_parallel)(elev[:, i*chunksize:min(inum, (i+1)*chunksize)])
            for i in tqdm(range(int(inum/chunksize) + 1), ascii=True))
        # flatten the per-chunk result lists into one list
        result = []
        for res in chunked_result:
            result.extend(res)
    # save result as pickle
    with open('result.pickle', 'wb') as f:
        pickle.dump(result, f)
    # write to netcdf
    write_result(result, nnc)
# close netcdf
nnc.close()
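The chunking arithmetic above (splitting inum points into blocks of chunksize and flattening the per-chunk results with extend) can be checked in isolation. A sketch with a hypothetical stand-in for process_chunk_parallel that simply returns one value per column:

```python
chunksize = 100
inum = 250  # hypothetical number of points

def process_chunk_parallel(cols):
    # stand-in for the real per-chunk analysis:
    # return one result per column index in the chunk
    return list(cols)

# same index arithmetic as the example: int(inum/chunksize)+1 chunks,
# with min() clamping the last chunk to the array bound
chunks = [range(i * chunksize, min(inum, (i + 1) * chunksize))
          for i in range(int(inum / chunksize) + 1)]
chunked_result = [process_chunk_parallel(c) for c in chunks]

result = []
for res in chunked_result:
    result.extend(res)  # flatten chunk results into one flat list

assert len(result) == inum  # every point processed exactly once
```

When inum is an exact multiple of chunksize, the final chunk is empty and extend adds nothing, so the flattened list still has exactly inum entries.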