```
from mesa import Model, Agent # core mesa classes
from mesa.space import NetworkGrid
from mesa.time import BaseScheduler
from mesa.datacollection import DataCollector
import networkx as nx # for the grid
import numpy as np # computations
from matplotlib import pyplot as plt # visualizing output
```

# 7 Agent-Based Modeling on Networks

Many of the problems that interest us in networks relate to agents making actions or decisions on network structures. While in some cases we can develop relatively complete mathematical descriptions of systems like these, in other cases we need to perform computational simulations and experiments. In this set of notes, we’ll focus on basic techniques for *agent-based modeling* (ABM) in Python.

In agent-based modeling, we construct a model by defining a set of agents and the rules by which those agents interact. There are many good software packages for agent-based modeling, perhaps the most famous of which is NetLogo. In this class, we’ll use one of several agent-based modeling frameworks developed for Python, called mesa. Mesa includes a number of useful tools for constructing, analyzing, and visualizing agent-based models. You can install Mesa using

`pip install mesa`

at the command line or by searching for and installing it in the Environments tab of Anaconda Navigator. Once you’ve installed Mesa, you are ready to use its tools.

## 7.1 Components of an Agent-Based Model

Let’s start with some vocabulary. A Mesa model has several components:

- An **agent** is a representation of the individuals who make decisions and perform actions. Agents have a `step()` method that describes their behavior.
- The **grid** is a representation of relationships between individuals. The grid can be, say, a 2d rectangle, in which case we could imagine it representing space. In this class, we’ll of course use a *network grid*, in which we can use a network to specify relationships.
- The **scheduler** determines the order in which agents act. In a *synchronous* model, all agents act simultaneously. In an *asynchronous* model, agents act one at a time, in either a fixed or a random order. The scheduler also has a `step()` method that calls the `step()` method of the agents according to the schedule.
- The **data collector** helps us gather data on our simulation.
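To make these pieces concrete, here is a minimal, Mesa-free sketch of how the components fit together. The names (`ToyAgent`, `ToyScheduler`) are illustrative inventions, not part of the Mesa API; Mesa’s real classes handle these responsibilities for us.

```
import random

class ToyAgent:
    def __init__(self, uid):
        self.uid = uid
        self.steps_taken = 0

    def step(self):
        self.steps_taken += 1  # the agent's behavior goes here

class ToyScheduler:
    """Asynchronous activation: agents act one at a time, in random order."""
    def __init__(self, rng):
        self.agents = []
        self.rng = rng

    def add(self, agent):
        self.agents.append(agent)

    def step(self):
        order = self.agents[:]
        self.rng.shuffle(order)  # new random activation order each timestep
        for a in order:
            a.step()

rng = random.Random(0)
schedule = ToyScheduler(rng)
for i in range(5):
    schedule.add(ToyAgent(i))

data = []  # "data collector": record total steps after each timestep
for t in range(3):
    schedule.step()
    data.append(sum(a.steps_taken for a in schedule.agents))

print(data)  # [5, 10, 15]: each timestep, every agent steps exactly once
```

In Mesa, the scheduler and data collector are supplied for us, so our job reduces to writing the agents’ `step()` methods and wiring the components together in a model class.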

## 7.2 First Example: Simple Random Walk

For our first agent-based model, we are going to code up an agent-based implementation of the simple random walk. There are lots of reasonable ways to do this, and Mesa is actually a bit of overkill for this particular problem. Still, we’ll learn some important techniques and concepts along the way.

Let’s start by importing several tools that we’ll use.

Since this is a networks class, we’ll use a network-based grid. We imported the capability to do that above as the `mesa.space.NetworkGrid` class. Of course, we need a network to use. For this example, we’ll use the Les Misérables character co-occurrence graph, which is built in to NetworkX:

```
G = nx.les_miserables_graph()
```

We’ll soon use this to create our model.

### The Model Class

To specify an ABM in Mesa we need to define two classes: a class describing the model and a class describing each individual agent. The main responsibilities of the model class are to describe:

- How the model is initialized, via the `__init__()` method. This includes:
    - Creating any agents needed.
    - Placing those agents on the grid and adding them to the schedule.
    - Defining any data collection tools.
- What happens in a single time-step of the model, via the `step()` method.

The model class actually has a lot more functionality than this. Fortunately, we don’t usually need to define this functionality ourselves, because the model class we create inherits it from `mesa.Model` (which we imported above). Here’s our `RWModel` class. The syntax can look a little complicated whenever we work with a new package, but what’s going on is fundamentally simple.

```
class RWModel(Model):
    # model setup
    def __init__(self, G, agent_class, **kwargs):
        self.schedule = BaseScheduler(self)  # time structure
        self.grid = NetworkGrid(G)           # space structure

        # create a single agent who will walk around the graph
        # (we haven't defined SRWAgent yet)
        # the agent has a name and is associated to the model
        agent = agent_class("Anakin Graphwalker", self, **kwargs)

        # place the agent at a random node on the graph
        node = self.random.choice(list(G.nodes))
        self.grid.place_agent(agent, node)

        # place the agent into the schedule
        self.schedule.add(agent)

        # data collection: here we're just going to collect the
        # current position of each agent
        self.collector = DataCollector(
            agent_reporters = {
                "node" : lambda a: a.pos
            }
        )

    # this is where a timestep actually happens
    # once we've set up the model's __init__() method
    # and the step() method of the agent class,
    # this one is usually pretty simple
    def step(self):
        self.schedule.step()
        self.collector.collect(self)
```

### The Agent Class

Now we’re ready to define what the agent is supposed to do! In the SRW, the agent looks at all nodes adjacent to theirs, chooses one of them uniformly at random, and moves to it. We need to implement this behavior in the `step()` method. While there are a few more Mesa functions involved that you may not have seen before, the approach is very simple.

```
class SRWAgent(Agent):

    def step(self):
        # find all possible next steps
        # include_center determines whether or not we count the
        # current position as a possibility
        options = self.model.grid.get_neighbors(self.pos,
                                                include_center = False)

        # pick a random one and go there
        new_node = self.random.choice(options)
        self.model.grid.move_agent(self, new_node)
```

Note that, in order to get information about the possible locations, and to move the agent, we needed to use the `grid` attribute of the `RWModel` that we defined above. More generally, the grid handles all “spatial” operations that we usually need to do.

### Experiment

Phew, that’s it! Once we’ve defined our model class, we can then run it for a bunch of timesteps:

```
model = RWModel(G, SRWAgent)

for i in range(100000):
    model.step()
```

We can get data on the behavior of the simulation using the `collector` attribute of the model. We programmed the collector to gather only the position of the walker; there are lots of other possibilities we could have chosen instead.

```
walk_report = model.collector.get_agent_vars_dataframe()
walk_report.head()
```

| Step | AgentID | node |
|---|---|---|
| 1 | Anakin Graphwalker | Gillenormand |
| 2 | Anakin Graphwalker | Marius |
| 3 | Anakin Graphwalker | Joly |
| 4 | Anakin Graphwalker | Gavroche |
| 5 | Anakin Graphwalker | Montparnasse |

Now let’s ask: does the simulation we just ran line up with what we know about the theory of the simple random walk? Recall that the *stationary distribution* \(\pi\) of the SRW describes the long-term behavior of the walk, with \(\pi_i\) giving the limiting probability that the walker is on node \(i\). Recall further that the stationary distribution of the SRW is known in closed form: it’s \(\pi_i = k_i / 2m\), where \(k_i\) is the degree of node \(i\) and \(m\) is the number of edges. So, we would expect this to be a good estimate of the fraction of time that the walker spent on node \(i\). Let’s check this!
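Before comparing against the simulation, we can sanity-check the closed form itself with a quick numpy computation. The SRW transition matrix is \(P = D^{-1}A\), and a stationary distribution must satisfy \(\pi P = \pi\). The small adjacency matrix below is an arbitrary hand-made example, not the Les Misérables network:

```
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]])    # adjacency matrix of a small graph

k = A.sum(axis = 1)             # degree of each node
m = A.sum() / 2                 # number of edges
pi = k / (2 * m)                # claimed stationary distribution

P = A / k[:, None]              # P[i, j] = A[i, j] / k_i

print(np.allclose(pi @ P, pi))  # True: pi is stationary
print(np.isclose(pi.sum(), 1))  # True: pi is a probability distribution
```

The check works for any undirected graph: \((\pi P)_j = \sum_i \frac{k_i}{2m}\frac{A_{ij}}{k_i} = \frac{k_j}{2m} = \pi_j\).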

First, we can compute the fraction of time that the agent spent on each node:

```
counts = walk_report.groupby("node").size()
freqs = counts / sum(counts)
freqs.head()
```

```
node
Anzelma 0.00599
Babet 0.01967
Bahorel 0.02346
Bamatabois 0.01613
BaronessT 0.00397
dtype: float64
```

Now we can compute the degree sequence and stationary distribution of the underlying graph:

```
degs = [G.degree(i) for i in freqs.index]
stationary_dist = degs / np.sum(degs)
```

Finally, we can plot and see whether the prediction lines up with the observation:

```
plt.plot([0, .12],
         [0, .12],
         color = "black", label = "prediction")

plt.scatter(stationary_dist,
            freqs,
            zorder = 100, label = "ABM")

plt.gca().set(xlabel = r"$\frac{k_i}{2m}$",
              ylabel = "% of time spent on node")

plt.legend()
```


That’s a match!

### Variation: PageRank

The reason that we parameterized the `RWModel`

class with the argument `agent_class`

is that we can now implement PageRank just by modifying the agent behavior. Let’s now make a new kind of agent that does the PageRank step:

```
class PageRankAgent(Agent):

    def __init__(self, agent_id, model, alpha):
        super().__init__(agent_id, model)
        self.alpha = alpha

    def step(self):
        if np.random.rand() < self.alpha:  # teleport to a uniformly random node
            options = list(self.model.grid.G.nodes.keys())
        else:                              # standard RW step
            options = self.model.grid.get_neighbors(self.pos,
                                                    include_center = False)

        # pick a random one and go there
        new_node = self.random.choice(options)
        self.model.grid.move_agent(self, new_node)
```

That’s all we need to do in order to implement PageRank on this graph. Let’s go ahead and run it:

```
pagerank_model = RWModel(G, PageRankAgent, alpha = 0.15)

for i in range(100000):
    pagerank_model.step()
```

That’s it! Now we could check the match with the stationary distribution like we did last time. Instead, let’s simply draw the graph.

```
walk_report = pagerank_model.collector.get_agent_vars_dataframe()

counts = walk_report.groupby("node").size()
freqs = counts / np.sum(counts)

nx.draw(G,
        node_size = [2000*freqs[i] for i in G.nodes],
        edge_color = "grey")
```
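As a cross-check on the walk’s long-run behavior, we can compute the stationary distribution of the teleporting walk directly by power iteration with numpy. This is a sketch on a small hand-made graph (not the Les Misérables network); here \(\alpha = 0.15\) is the teleport probability, matching `PageRankAgent` above, and teleportation lands on a uniformly random node.

```
import numpy as np

alpha = 0.15
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]])   # adjacency matrix of a small graph

n = A.shape[0]
k = A.sum(axis = 1)

# teleporting transition matrix:
# with prob. alpha, jump to a uniform node; otherwise take a SRW step
P = alpha / n + (1 - alpha) * A / k[:, None]

pi = np.ones(n) / n
for _ in range(200):            # power iteration converges quickly here
    pi = pi @ P

print(np.isclose(pi.sum(), 1))  # True: pi is a probability distribution
print(np.allclose(pi @ P, pi))  # True: pi is stationary for P
```

For the simulation above, the empirical visit frequencies `freqs` should approximate the analogous stationary vector of the Les Misérables graph.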

## 7.3 Multi-Agent Models

Now let’s consider our first multi-agent model, the *voter model*. In the voter model, every node hosts an agent with an opinion, here either 0 or 1. At each timestep, each agent picks one of its neighbors uniformly at random and adopts that neighbor’s current opinion. We’ll implement this as an instance of a more general *compartmental model*, in which each agent carries a state that it updates based on the states of its neighbors.

```
from mesa.time import RandomActivation

class CompartmentalModel(Model):
    # model setup
    def __init__(self, G, agent_class, possible_states = [0, 1], state_density = [0.5, 0.5]):
        self.schedule = RandomActivation(self)  # time structure
        self.grid = NetworkGrid(G)              # space structure

        # create an agent on each node, with a random initial state
        for node in list(G.nodes):
            state = np.random.choice(possible_states, p = state_density)
            agent = agent_class(node, self, state)
            self.grid.place_agent(agent, node)
            self.schedule.add(agent)

        self.collector = DataCollector(
            agent_reporters = {
                "state" : lambda a: a.state
            }
        )

    def step(self):
        self.schedule.step()
        self.collector.collect(self)
```

```
class CompartmentalAgent(Agent):

    def __init__(self, agent_id, model, state):
        super().__init__(agent_id, model)
        self.state = state

    def step(self):
        # find the nodes adjacent to this agent's position...
        neighbor_locs = self.model.grid.get_neighbors(self.pos,
                                                      include_center = False)

        # ...and the agents who live there
        neighbors = self.model.grid.get_cell_list_contents(neighbor_locs)

        # adopt the state of a uniformly random neighbor
        adopt_from = np.random.choice(neighbors)
        self.state = adopt_from.state
```

```
for run in range(10):
    voter_model = CompartmentalModel(G, CompartmentalAgent, [0, 1], [0.5, 0.5])
    for i in range(50):
        voter_model.step()
    report = voter_model.collector.get_agent_vars_dataframe()
    plt.plot(report.groupby("Step").mean())

plt.gca().set(xlabel = "Timestep", ylabel = "% of nodes with opinion 1")
```

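The runs drift toward the absorbing states at 0 and 1: on a finite connected graph, the voter model eventually reaches consensus. We can see this in a minimal, Mesa-free sketch of the same dynamics on a small hand-coded graph (the adjacency list below is an arbitrary example):

```
import random

neighbors = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}

rng = random.Random(3)
state = {v: rng.choice([0, 1]) for v in neighbors}

for t in range(10_000):                    # cap iterations for safety
    if len(set(state.values())) == 1:      # consensus reached
        break
    order = list(neighbors)
    rng.shuffle(order)                     # random activation, as in Mesa
    for v in order:
        u = rng.choice(neighbors[v])       # pick a random neighbor...
        state[v] = state[u]                # ...and copy their opinion

print(len(set(state.values())) == 1)       # True: everyone agrees
```

On this 4-node graph consensus arrives within a handful of sweeps; on larger graphs like Les Misérables, 50 timesteps may or may not be enough, which is why some of the plotted runs have not yet hit 0% or 100%.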