Journal on Policy & Complex Systems Vol. 2, Issue 2, Fall 2015 | Page 125

Agents

Agents are assigned one of two possible types: clustering or spreading. An agent is a clustering type with probability p. An agent's type dictates how its utility is calculated at a given time step. Clustering agents earn higher utility (K_C) the more other agents are in their neighborhood (defined below); spreading agents earn higher utility (K_S) the fewer agents are in their neighborhood. Over a 100-agent lattice, utility for each type is

K_C = n / 100
K_S = (100 − n) / 100

where n is the number of agents in each agent's neighborhood at that time step. Individual utility is evaluated at each time step of the model, and the maximum score any agent can earn is 1. Group utility (K_G) is measured by averaging across all agents of each type at each time step, then evaluating the averages cumulatively over 50 time steps, which constitutes one run. The maximum K_G that can be earned by a group over a run is 50 (a score of K_G = 1 at each of 50 time steps).
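The utility rules above can be sketched directly. This is a minimal illustration of the rules as stated, with clustering utility rising in n and spreading utility falling in n; the function names are illustrative, not from the original model code:

```python
def clustering_utility(n):
    """Clustering agents earn more the more other agents share their neighborhood."""
    return n / 100

def spreading_utility(n):
    """Spreading agents earn more the fewer other agents share their neighborhood."""
    return (100 - n) / 100

def group_utility(scores):
    """Group utility K_G at one time step: the average over all agents of one type."""
    return sum(scores) / len(scores)
```

One consequence of these rules is that the two types' incentives are exact mirrors: for any neighborhood count n, the clustering and spreading utilities sum to 1.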
Action

Action in the model proceeds as follows. At each time step, every agent has the opportunity to move to a neighboring cell if doing so will increase its utility. There can be more than one agent per cell; in principle, all 100 agents could end up in one cell. The two types move simultaneously around the lattice to reflect uncertainty in real-life decisions: we know the current state of affairs (at best), but we do not know what the other actors around us will do, even if we know their type and number of neighbors, or even if they engage in signaling.
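The simultaneous-move step described above can be sketched as follows. This is a minimal sketch, assuming a 10x10 wrapping lattice and a Moore (eight-cell) neighborhood; the grid size, neighborhood definition, and data structures are illustrative assumptions, not the paper's implementation:

```python
SIZE = 10  # assumed lattice dimension (illustrative)

def neighbors_in(others, cell):
    """Count how many of the `others` positions fall in the eight cells around `cell`."""
    x, y = cell
    ring = {((x + dx) % SIZE, (y + dy) % SIZE)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)}
    return sum(1 for pos in others if pos in ring)

def utility(kind, n):
    """Clustering agents prefer crowded neighborhoods; spreading agents, empty ones."""
    return n / 100 if kind == "clustering" else 1 - n / 100

def step(agents):
    """One time step: every agent picks its best adjacent cell using only the
    *current* state, then all agents move at once (simultaneous moves)."""
    positions = [pos for pos, _ in agents]
    planned = []
    for i, (pos, kind) in enumerate(agents):
        others = positions[:i] + positions[i + 1:]  # n counts *other* agents
        best_pos = pos
        best_u = utility(kind, neighbors_in(others, pos))
        x, y = pos
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                cand = ((x + dx) % SIZE, (y + dy) % SIZE)
                u = utility(kind, neighbors_in(others, cand))
                if u > best_u:  # move only if utility strictly improves
                    best_pos, best_u = cand, u
        planned.append((best_pos, kind))
    return planned
```

Because every agent plans against the current configuration before anyone moves, two agents can still end up worse off after moving, which is the uncertainty the simultaneous-move design is meant to capture.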

Higher computational complexity could allow for more sophisticated agents that also consider their neighbors and calculate what those neighbors are likely to do given their type and situation. There are two reasons why I do not include this here. First, as we will see ahead, interesting results emerge from this simpler version, so I begin there and leave more strategic play for future work. Second, while agents may be able to calculate the moves of other agents based on their current type and number of neighbors, each agent also holds private information about its own vision and willingness to change types (modeled as a probability), which would likely render agents' calculations incorrect. Again, I leave this added complexity, as well as complexity that includes signaling, for future work.
In real life, of course, we also rarely know beyond a guess what our neighbors will choose to do in the future. In the model, the present moment captures this to some extent: if a cell is very full at present, it is more likely to have agents