

Java SparseSampling.setForgetPreviousPlanResults Method Code Examples

This article collects typical usage examples of the Java method burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling.setForgetPreviousPlanResults. If you are asking yourself what SparseSampling.setForgetPreviousPlanResults does, how to call it, or where to find real-world uses of it, the hand-picked examples below should help. You can also browse further usage examples of its declaring class, burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling.


The following presents 2 code examples of SparseSampling.setForgetPreviousPlanResults, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Java code examples.
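Before diving into the examples, a quick orientation: SparseSampling is BURLAP's finite-horizon sparse sampling planner, and setForgetPreviousPlanResults(true) tells it to discard the value estimates cached by earlier planning calls whenever planning begins again. This matters when a GreedyQPolicy queries the planner at every state of a long rollout, as both examples below do. The minimal sketch below isolates that pattern; it is not taken from either example, assumes the newer BURLAP API used in Example 2, and uses GridWorldDomain purely as a stand-in toy domain.

import burlap.behavior.policy.GreedyQPolicy;
import burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling;
import burlap.domain.singleagent.gridworld.GridWorldDomain;
import burlap.domain.singleagent.gridworld.state.GridAgent;
import burlap.domain.singleagent.gridworld.state.GridWorldState;
import burlap.mdp.singleagent.SADomain;
import burlap.statehashing.simple.SimpleHashableStateFactory;

public class ForgetSketch {

	public static void main(String[] args){

		// Stand-in domain (assumption: any SADomain would work the same way here).
		SADomain domain = new GridWorldDomain(11, 11).generateDomain();

		// Discount 0.99, search horizon 2, 3 sampled transitions per state-action pair.
		SparseSampling ss = new SparseSampling(domain, 0.99, new SimpleHashableStateFactory(), 2, 3);

		// true: wipe previously cached value estimates at the start of each planning
		// call, keeping memory bounded when the planner is queried from many states.
		ss.setForgetPreviousPlanResults(true);

		// GreedyQPolicy triggers a fresh planning call for each state it is asked about.
		GreedyQPolicy p = new GreedyQPolicy(ss);
		System.out.println(p.action(new GridWorldState(new GridAgent(0, 0))));
	}
}

With the flag left false instead, the planner would retain results across calls, reusing computation at the cost of memory.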

Example 1: IPSS

import burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling; // import required for this method's declaring class
public static void IPSS(){

		InvertedPendulum ip = new InvertedPendulum();
		ip.physParams.actionNoise = 0.;
		Domain domain = ip.generateDomain();

		// Reward and terminal functions are both defined by a pole-angle threshold of pi/8 radians.
		RewardFunction rf = new InvertedPendulum.InvertedPendulumRewardFunction(Math.PI/8.);
		TerminalFunction tf = new InvertedPendulum.InvertedPendulumTerminalFunction(Math.PI/8.);
		State initialState = InvertedPendulum.getInitialState(domain);

		// Sparse sampling with discount 1, search horizon 10, and 1 sampled transition per state-action pair.
		SparseSampling ss = new SparseSampling(domain, rf, tf, 1, new SimpleHashableStateFactory(), 10, 1);
		ss.setForgetPreviousPlanResults(true); // discard cached results each time planning begins anew
		ss.toggleDebugPrinting(false);
		Policy p = new GreedyQPolicy(ss);

		// Roll out the greedy policy for up to 500 steps and visualize the episode.
		EpisodeAnalysis ea = p.evaluateBehavior(initialState, rf, tf, 500);
		System.out.println("Num steps: " + ea.maxTimeStep());
		Visualizer v = InvertedPendulumVisualizer.getInvertedPendulumVisualizer();
		new EpisodeSequenceVisualizer(v, domain, Arrays.asList(ea));

	}
 
Developer ID: f-leno, Project: DOO-Q_BRACIS2016, Lines: 21, Source: ContinuousDomainTutorial.java

Example 2: IPSS

import burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling; // import required for this method's declaring class
public static void IPSS(){

		InvertedPendulum ip = new InvertedPendulum();
		ip.physParams.actionNoise = 0.;

		// Reward and terminal functions are both defined by a pole-angle threshold of pi/8 radians;
		// in this newer BURLAP API they are attached to the domain generator before generation.
		RewardFunction rf = new InvertedPendulum.InvertedPendulumRewardFunction(Math.PI/8.);
		TerminalFunction tf = new InvertedPendulum.InvertedPendulumTerminalFunction(Math.PI/8.);
		ip.setRf(rf);
		ip.setTf(tf);
		SADomain domain = ip.generateDomain();

		State initialState = new InvertedPendulumState();

		// Sparse sampling with discount 1, search horizon 10, and 1 sampled transition per state-action pair.
		SparseSampling ss = new SparseSampling(domain, 1, new SimpleHashableStateFactory(), 10, 1);
		ss.setForgetPreviousPlanResults(true); // discard cached results each time planning begins anew
		ss.toggleDebugPrinting(false);
		Policy p = new GreedyQPolicy(ss);

		// Roll out the greedy policy against the domain model for up to 500 steps and visualize the episode.
		Episode e = PolicyUtils.rollout(p, initialState, domain.getModel(), 500);
		System.out.println("Num steps: " + e.maxTimeStep());
		Visualizer v = CartPoleVisualizer.getCartPoleVisualizer();
		new EpisodeSequenceVisualizer(v, domain, Arrays.asList(e));

	}
 
Developer ID: jmacglashan, Project: burlap_examples, Lines: 24, Source: ContinuousDomainTutorial.java
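A note on the design choice shared by both examples: GreedyQPolicy computes no policy up front; it calls back into SparseSampling for Q-value estimates at each state encountered during the 500-step rollout, so planning runs many times. setForgetPreviousPlanResults(true) makes each of those planning calls discard the previous call's cached results, which (per the BURLAP documentation) is more memory efficient at the cost of recomputation. The two snippets also show the same tutorial across two generations of the BURLAP API: in Example 1 the reward and terminal functions are passed to the SparseSampling constructor and the rollout uses Policy.evaluateBehavior, while in Example 2 they are attached to the domain generator and the rollout uses PolicyUtils.rollout against the domain's model.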


Note: The burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling.setForgetPreviousPlanResults method examples on this page were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are drawn from open-source projects contributed by their respective developers, and copyright in the source code remains with the original authors. Consult each project's license before distributing or using the code; do not reproduce this page without permission.