

Python SparkConf.setExecutorEnv Method Code Examples

This article collects typical usage examples of the Python method pyspark.SparkConf.setExecutorEnv. If you are unsure what SparkConf.setExecutorEnv does or how to use it, the selected code examples below may help. You can also explore further usage examples of the pyspark.SparkConf class that this method belongs to.


Two code examples of the SparkConf.setExecutorEnv method are shown below, ordered by popularity by default.
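Before the full examples, here is a minimal, self-contained sketch of the method itself; the app name, variable names, and values are purely illustrative and are not taken from the examples below. setExecutorEnv(key, value) sets a single environment variable for the executor processes, and the pairs keyword argument sets several at once.

from pyspark import SparkConf, SparkContext

conf = SparkConf().setAppName('setExecutorEnv-demo')
# Set one environment variable on every executor
conf.setExecutorEnv('MY_ENV_VAR', 'some-value')
# Or set several variables at once via the pairs argument
conf.setExecutorEnv(pairs=[('VAR_A', '1'), ('VAR_B', '2')])

sc = SparkContext(conf=conf)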

Example 1: SparkConf

# Required import: from pyspark import SparkConf [as alias]
# Alternatively: from pyspark.SparkConf import setExecutorEnv [as alias]

import argparse

from pyspark import SparkConf, SparkContext

# Parse the AWS credentials and the copy-local flag from the command line
parser = argparse.ArgumentParser()
parser.add_argument('-a', '--access_key')
parser.add_argument('-s', '--secret_access_key')
parser.add_argument('-l', '--copy_local', action='store_true')

config = parser.parse_args()

download = False

spark_config = None
if config.access_key and config.secret_access_key:
    download = True
    spark_config = SparkConf()
    # Expose the AWS credentials to every executor as environment variables
    spark_config.setExecutorEnv('AWS_ACCESS_KEY_ID', config.access_key)
    spark_config.setExecutorEnv('AWS_SECRET_ACCESS_KEY', config.secret_access_key)


# Build up the context, using the master URL
sc = SparkContext('spark://ulex:7077', 'mean', conf=spark_config)
local_data_path = '/media/bitbucket/pr_amon_BCSD_rcp26_r1i1p1_CONUS_bcc-csm1-1_202101-202512.nc'
data_path = local_data_path
data_url = 'https://nasanex.s3.amazonaws.com/NEX-DCP30/BCSD/rcp26/mon/atmos/pr/r1i1p1/v1.0/CONUS/pr_amon_BCSD_rcp26_r1i1p1_CONUS_bcc-csm1-1_202101-202512.nc'

if download:
    data_path = data_url

# Download the file onto each node
if download or config.copy_local:
    sc.addFile(data_path)
Developer: OpenGeoscience, Project: nex, Lines of code: 32, Source file: timestep_mean.py
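As a follow-up to example 1, the sketch below (not part of the original project) shows how executor-side code could locate the file distributed with sc.addFile; pyspark.SparkFiles resolves such files by their base name, and data_path refers to the variable defined in the example above.

import os
from pyspark import SparkFiles

# Resolve the local path of the file that was shipped to this node via sc.addFile
local_file = SparkFiles.get(os.path.basename(data_path))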

Example 2: SparkConf

# Required import: from pyspark import SparkConf [as alias]
# Alternatively: from pyspark.SparkConf import setExecutorEnv [as alias]
from pyspark import SparkConf, SparkContext

# Field indices of interest for each internal event type; EVENT_NAME is
# defined earlier in the original source and is truncated from this excerpt.
eventFields = {
    'INTERNAL_PROC_ERAB_SETUP': [12, 13, 19, 20],
    'INTERNAL_PROC_INITIAL_CTXT_SETUP': [12, 13, 20, 21],
    'INTERNAL_PROC_UE_CTXT_RELEASE': [17, 21, 22, 23],
    'INTERNAL_PROC_HO_PREP_S1_IN': [17, 18, 19],
    'INTERNAL_PROC_HO_PREP_X2_IN': [18, 19, 20],
    'INTERNAL_PROC_RRC_CONN_SETUP': [12, 13],
    'INTERNAL_PROC_S1_SIG_CONN_SETUP': [13]}

#os.environ['PYSPARK_PYTHON'] = '/usr/bin/python'
#py2.7 timedelta.total_seconds()

#NUM_PARTITIONS = 2000
path = '/user/mfoo/20160318tmp/seqFile.seq'
conf = SparkConf()
# Ship the PySpark libraries to the YARN workers and put them on the executors' PYTHONPATH
conf.set('spark.yarn.dist.files', 'file:/home/wfoo/install/spark1.4/python/lib/pyspark.zip,file:/home/wfoo/install/spark1.4/python/lib/py4j-0.8.2.1-src.zip')
conf.setExecutorEnv('PYTHONPATH', 'pyspark.zip:py4j-0.8.2.1-src.zip')
#conf.set("dynamicAllocation.enabled", "true")
conf.set("spark.yarn.executor.memoryOverhead", 8192)
conf.set("spark.yarn.driver.memoryOverhead", 8192)
#conf.set("spark.executor.memory", "6g")
#conf.set("spark.driver.memory", "6g")
conf.set("spark.rdd.compress", "true")
conf.set("spark.storage.memoryFraction", 1)
conf.set("spark.core.connection.ack.wait.timeout", 600)
conf.set("spark.akka.frameSize", 50)
#conf.set("spark.local.dir","/data1/hadoop")
#conf.set("spark.driver.maxResultSize","32g")
#conf.setMaster("yarn-client")
sc = SparkContext(appName="hpa_stats", conf=conf)
# Broadcast the event name and its field indices to all executors
evt = sc.broadcast(EVENT_NAME)
fld = sc.broadcast(eventFields[EVENT_NAME])
Developer: eagle9, Project: palgo, Lines of code: 33, Source file: hpa.py


Note: The pyspark.SparkConf.setExecutorEnv examples in this article were compiled by 纯净天空 from GitHub, MSDocs, and other open-source code and documentation platforms; the snippets were selected from open-source projects contributed by various developers. Copyright of the source code belongs to the original authors; refer to the corresponding project's license before distributing or using it, and do not repost without permission.