

Java MAValueIteration Class Code Examples

This article collects typical usage examples of the Java class burlap.behavior.stochasticgames.madynamicprogramming.dpplanners.MAValueIteration. If you are wondering what the MAValueIteration class is for, how to use it, or where to find working examples, the hand-picked code samples below should help.


The MAValueIteration class belongs to the burlap.behavior.stochasticgames.madynamicprogramming.dpplanners package. Four code examples of the class are shown below, ordered by popularity by default.
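
Before diving into the examples, the typical pattern is: construct an MAValueIteration planner for a stochastic-games domain, let it compute joint Q-values, and then derive a joint policy from those values. The fragment below is a minimal sketch of that pattern, not a standalone program: it assumes BURLAP's planFromState entry point and reuses the variable names (domain, rf, tf, hashingFactory, s) from the examples that follow.

import burlap.behavior.stochasticgames.madynamicprogramming.dpplanners.MAValueIteration; //import the required package/class

// Minimal sketch: domain, rf, tf, hashingFactory, and s are assumed to be set up as in
// Examples 2 and 4 below. Arguments mirror those examples: discount 0.99, Q-value
// initialization 0.0, CoCoQ backup operator, max Bellman delta 0.00015, at most 50 iterations.
MAValueIteration vi = new MAValueIteration(domain, rf, tf, 0.99, hashingFactory, 0., new CoCoQ(), 0.00015, 50);

// Planning can be run explicitly from an initial state; otherwise the planning agents used
// in the examples below trigger it automatically when a game starts.
vi.planFromState(s);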

Example 1: getPlannerInstance

import burlap.behavior.stochasticgames.madynamicprogramming.dpplanners.MAValueIteration; //import the required package/class
@Override
public MADynamicProgramming getPlannerInstance() {
	return new MAValueIteration(domain, agentDefinitions, jointReward, terminalFunction, discount, hashingFactory, qInit, backupOperator, maxDelta, maxIterations);
}
 
Author: f-leno, project: DOO-Q_BRACIS2016, lines: 5, source file: MADPPlannerFactory.java
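
Such a factory is convenient when several agents should each plan with their own planner instance. The fragment below is a hypothetical usage sketch, not part of the original project: plannerFactory and initialState are illustrative names, and it assumes the returned MADynamicProgramming planner exposes planFromState, as MAValueIteration does.

//hypothetical usage sketch: obtain a fresh planner from the factory and plan from a chosen state
MADynamicProgramming planner = plannerFactory.getPlannerInstance();
planner.planFromState(initialState);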

Example 2: VICoCoTest

import burlap.behavior.stochasticgames.madynamicprogramming.dpplanners.MAValueIteration; //import the required package/class
public static void VICoCoTest(){

		//grid game domain
		GridGame gridGame = new GridGame();
		final OOSGDomain domain = gridGame.generateDomain();

		final HashableStateFactory hashingFactory = new SimpleHashableStateFactory();

		//use the grid game version of prisoner's dilemma as the initial state
		final State s = GridGame.getPrisonersDilemmaInitialState();

		//define joint reward function and termination conditions for this game
		JointRewardFunction rf = new GridGame.GGJointRewardFunction(domain, -1, 100, false);
		TerminalFunction tf = new GridGame.GGTerminalFunction(domain);

		//both agents are standard: access to all actions
		SGAgentType at = GridGame.getStandardGridGameAgentType(domain);

		//create our multi-agent planner
		MAValueIteration vi = new MAValueIteration(domain, rf, tf, 0.99, hashingFactory, 0., new CoCoQ(), 0.00015, 50);

		//instantiate a world in which our agents will play
		World w = new World(domain, rf, tf, s);


		//create a greedy joint policy from our planner's Q-values
		EGreedyMaxWellfare jp0 = new EGreedyMaxWellfare(0.);
		jp0.setBreakTiesRandomly(false); //don't break ties randomly

		//create agents that each follow their end of the computed joint policy
		MultiAgentDPPlanningAgent a0 = new MultiAgentDPPlanningAgent(domain, vi, new PolicyFromJointPolicy(0, jp0), "agent0", at);
		MultiAgentDPPlanningAgent a1 = new MultiAgentDPPlanningAgent(domain, vi, new PolicyFromJointPolicy(1, jp0), "agent1", at);

		w.join(a0);
		w.join(a1);

		//run some games of the agents playing that policy
		GameEpisode ga = null;
		for(int i = 0; i < 3; i++){
			ga = w.runGame();
		}

		//visualize results
		Visualizer v = GGVisualizer.getVisualizer(9, 9);
		new GameSequenceVisualizer(v, domain, Arrays.asList(ga));


	}
 
Author: jmacglashan, project: burlap_examples, lines: 49, source file: GridGameExample.java
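
Note that VICoCoTest never invokes the planner directly; the two MultiAgentDPPlanningAgent instances trigger planning when the world's first game begins. If you want to pay the planning cost up front, for example to time it separately from game execution, a plausible variant (again assuming MAValueIteration's planFromState entry point) is to plan before the agents join the world:

//hypothetical variant: run multi-agent value iteration once, before the first w.runGame() call
vi.planFromState(s);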

Example 3: getPlannerInstance

import burlap.behavior.stochasticgames.madynamicprogramming.dpplanners.MAValueIteration; //import the required package/class
@Override
public MADynamicProgramming getPlannerInstance() {
	return new MAValueIteration(domain, agentDefinitions, jointRewardFunction, terminalFunction, discount, hashingFactory, qInit, backupOperator, maxDelta, maxIterations);
}
 
Author: jmacglashan, project: burlap, lines: 5, source file: MADPPlannerFactory.java

Example 4: VICorrelatedTest

import burlap.behavior.stochasticgames.madynamicprogramming.dpplanners.MAValueIteration; //import the required package/class
public static void VICorrelatedTest(){

		GridGame gridGame = new GridGame();
		final OOSGDomain domain = gridGame.generateDomain();

		final HashableStateFactory hashingFactory = new SimpleHashableStateFactory();

		final State s = GridGame.getPrisonersDilemmaInitialState();

		JointRewardFunction rf = new GridGame.GGJointRewardFunction(domain, -1, 100, false);
		TerminalFunction tf = new GridGame.GGTerminalFunction(domain);

		//standard agents with access to all actions; plan with a correlated-Q backup operator using the utilitarian objective
		SGAgentType at = GridGame.getStandardGridGameAgentType(domain);
		MAValueIteration vi = new MAValueIteration(domain, rf, tf, 0.99, hashingFactory, 0., new CorrelatedQ(CorrelatedEquilibriumSolver.CorrelatedEquilibriumObjective.UTILITARIAN), 0.00015, 50);

		World w = new World(domain, rf, tf, s);


		//for correlated Q, use a correlated equilibrium policy joint policy
		ECorrelatedQJointPolicy jp0 = new ECorrelatedQJointPolicy(CorrelatedEquilibriumSolver.CorrelatedEquilibriumObjective.UTILITARIAN, 0.);


		//the boolean flag synchronizes the two agents' sampling of the stochastic correlated joint action
		MultiAgentDPPlanningAgent a0 = new MultiAgentDPPlanningAgent(domain, vi, new PolicyFromJointPolicy(0, jp0, true), "agent0", at);
		MultiAgentDPPlanningAgent a1 = new MultiAgentDPPlanningAgent(domain, vi, new PolicyFromJointPolicy(1, jp0, true), "agent1", at);

		w.join(a0);
		w.join(a1);

		GameEpisode ga = null;
		List<GameEpisode> games = new ArrayList<GameEpisode>();
		for(int i = 0; i < 10; i++){
			ga = w.runGame();
			games.add(ga);
		}

		Visualizer v = GGVisualizer.getVisualizer(9, 9);
		new GameSequenceVisualizer(v, domain, games);


	}
 
Author: jmacglashan, project: burlap_examples, lines: 41, source file: GridGameExample.java


Note: The burlap.behavior.stochasticgames.madynamicprogramming.dpplanners.MAValueIteration examples in this article are collected from open-source projects hosted on platforms such as GitHub. The code snippets remain the property of their original authors; when reusing or redistributing them, follow the corresponding project's license.