API Reference
Vectorised Model
Vectorised implementation of the OOP model.
- class abm_project.vectorised_model.VectorisedModel(num_agents: int = 100, width: int = 10, height: int = 10, memory_count: int = 1, env_update_fn: EnvUpdateFn | None = None, rng: Generator = None, rationality: float = 1.0, max_storage: int = 1000, moore: bool = True, simmer_time: int = 1, neighb_prediction_option: str = 'linear', severity_benefit_option: str = 'adaptive', radius_option: str = 'single', prop_pessimistic: float = 0, pessimism_level: float = 1, randomise: bool = True, b_1: ndarray[tuple[Any, ...], dtype[float64]] | None = None, b_2: ndarray[tuple[Any, ...], dtype[float64]] | None = None, gamma_s: float = 0.01)[source]
Vectorised base model for agent-based simulations.
This model comprises a 2D lattice of agents who repeatedly choose between cooperation (pro-environmental behaviour) and defection, based on the state of their local environment and the social norms imposed by their direct neighbors.
Agents have heterogeneous attributes which weight the respective contributions of environmental concern and social norms in the decision-making process.
- action
2D array of agents’ actions with shape (time, agent).
- Type:
npt.NDArray[np.int64]
- environment
2D array of agents’ environments with shape (time, agent).
- Type:
npt.NDArray[np.int64]
- s
2D array of agents’ support for cooperation with shape (time, agent).
- Type:
npt.NDArray[np.float64]
- b
2D array of agents’ decision-making weights, shape (attributes, agent).
- Type:
npt.NDArray[np.float64]
- rationality
Homogeneous rationality coefficient for all agents.
- Type:
float
- adj
Normalised adjacency matrix with shape (agent, agent).
- Type:
npt.NDArray[np.float64]
- time
Current time step in the simulation.
- Type:
int
- num_agents
Total number of agents in the grid.
- Type:
int
- width
Width of the grid.
- Type:
int
- height
Height of the grid.
- Type:
int
- simmer_time
Number of agent adaptation steps between environment updates.
- Type:
int
- rng
Random number generator for stochastic processes.
- Type:
np.random.Generator
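A minimal usage sketch, assuming only the constructor defaults shown above. Whether run() performs initialisation internally, and the exact indexing of the stored action history, are assumptions rather than documented behaviour:

```python
import numpy as np

from abm_project.vectorised_model import VectorisedModel

# Reproducible construction; all other parameters keep their defaults.
rng = np.random.default_rng(42)
model = VectorisedModel(num_agents=100, width=10, height=10, rng=rng)

model.initialise()   # set initial actions, environment, and support levels
model.run(steps=50)  # alternate decision, adaptation, and environment updates

# `action` has shape (time, agent); averaging over agents gives a
# population-level cooperation trajectory (assumed indexing convention).
cooperation = model.action.mean(axis=1)
```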
- action_probabilities() ndarray[tuple[Any, ...], dtype[float64]] [source]
Calculate the probability of each possible action.
The probabilities are calculated using a logit softmax function over the utilities of each action. The formula is:
\[P(a_i(t) = a) = \frac{\exp(\lambda \cdot V_i(a))}{\exp(\lambda \cdot V_i(C)) + \exp(\lambda \cdot V_i(D))}\]
where \(V_i(a)\) is the representative utility of action \(a\) for agent \(i\).
- Returns:
An array of probabilities for each action, with shape (2, agent).
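As an illustration, the choice rule can be reproduced standalone. The helper below is a hypothetical sketch over per-agent utility arrays, not the library's internal implementation:

```python
import numpy as np

def action_probabilities(v_cooperate, v_defect, rationality=1.0):
    """Logit softmax over two actions, per agent.

    v_cooperate, v_defect: representative utilities, each of shape (agent,).
    Returns an array of shape (2, agent): row 0 is P(C), row 1 is P(D).
    """
    v = np.stack([v_cooperate, v_defect])  # shape (2, agent)
    z = rationality * v
    z -= z.max(axis=0)                     # stabilise exp() against overflow
    expz = np.exp(z)
    return expz / expz.sum(axis=0)

p = action_probabilities(np.array([0.8, 0.1]), np.array([0.2, 0.5]), rationality=2.0)
```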
- adapt(n: ndarray[tuple[Any, ...], dtype[float64]])[source]
Update agents’ support for cooperation.
Agents’ support for cooperation changes as a function of the current environment. It decreases when the environment is either particularly healthy (no reason to act) or particularly unhealthy (no point in acting). It increases when the environment is not at either of these extremes.
We write the change in support as a derivative:
\[\frac{ds_i}{dt} = \alpha_i \sigma(n_i) (1 - s_i(t)) - \beta_i (1 - \sigma(n_i)) s_i(t)\]
where \(\sigma(n_i) = 4n_i(1 - n_i)\).
- Parameters:
n – Current state of the environment, with shape (agent,)
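A forward-Euler step of this derivative can be written directly from the formula. The function below is a hypothetical sketch; the step size and the clipping of s to [0,1] are assumptions:

```python
import numpy as np

def adapt_step(s, n, alpha, beta, dt=1.0):
    """One Euler step of ds/dt = alpha*sigma(n)*(1 - s) - beta*(1 - sigma(n))*s,
    with sigma(n) = 4n(1 - n). Hypothetical helper, not the library method."""
    sigma = 4 * n * (1 - n)
    ds = alpha * sigma * (1 - s) - beta * (1 - sigma) * s
    return np.clip(s + dt * ds, 0.0, 1.0)  # clipping is a safety assumption
```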
- calculate_individual_preference(action: int) ndarray[tuple[Any, ...], dtype[float64]] [source]
Calculate agents’ individual preference for an action.
- calculate_social_pressure(action: int) ndarray[tuple[Any, ...], dtype[float64]] [source]
Calculate agents’ social pressure when taking a given action.
- decide(i: int)[source]
Select a new action for each agent.
The probability of selecting each action is given by each agent’s logit model, based on their current environment and the social norms imposed by their neighbors.
To select a new action, we sample a random number in [0,1] for each agent; the agent cooperates if it does not exceed the probability of cooperation, and defects otherwise.
- Parameters:
i – Simmer step index.
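The sampling rule reads directly as vectorised code. The sketch below assumes the {-1, +1} action coding used by Agent.ACTIONS elsewhere in this reference:

```python
import numpy as np

def sample_actions(p_cooperate, rng):
    """Sample +1 (cooperate) or -1 (defect) per agent from P(C).

    p_cooperate: array of cooperation probabilities, shape (agent,).
    """
    r = rng.random(p_cooperate.shape)         # one uniform draw per agent
    return np.where(r <= p_cooperate, 1, -1)  # cooperate if r <= P(C)
```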
- initialise(zero: bool = False)[source]
Initialise agents’ and environment state.
Optionally sets agents’ initial actions to zero (defection), and environments to one (healthy).
After initialising the environment, runs agent adaptation for 100 steps to reach stable support levels (for cooperation) given agent heterogeneity.
- Parameters:
zero – If True, initialise the environment as healthy (one) and agents’ actions to zero (defection).
- mean_local_action(memory: int = 1) ndarray[tuple[Any, ...], dtype[float64]] [source]
Calculate the average action in each agent’s local neighborhood.
- Parameters:
memory – Number of previous neighbors’ actions to consider.
- Returns:
The mean local action for each agent, reflecting the perceived social norm for each agent at the current timestep. Shape is (agent,).
- pred_neighb_action() ndarray[tuple[Any, ...], dtype[float64]] [source]
Predict the average action of peers based on their recent actions.
The number of previous steps used and the prediction method (“linear” or “logistic”) are configured by the model’s memory_count and neighb_prediction_option settings; the method itself takes no arguments.
- Returns:
Predicted average action of neighbors for each agent.
- Return type:
npt.NDArray[np.float64]
- representative_utility(action: int) float [source]
Calculate the representative utility of an action for each agent.
Representative utility is a linear combination of the support for cooperation and the social norms imposed by an agent’s neighbors:
\[V_i(a) = b_1 \cdot [a^* \cdot s_i(t) + (1 - a^*) \cdot (1 - s_i(t))] + b_2 (a^* - \overline{A^*}_i(t))^2\]
where \(a^* = (a + 1)/2\) is a transformation of the action to the set \(\{0,1\}\).
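The formula translates to a short vectorised function. This is a sketch, not the library method; in particular, applying the same {-1, +1} to {0, 1} transform to the neighborhood mean \(\overline{A^*}_i\) is an assumption:

```python
def representative_utility(a, s, mean_local_action, b1, b2):
    """V_i(a) per the formula above, for a in {-1, +1}.

    s and mean_local_action are per-agent arrays; b1, b2 are weights.
    """
    a_star = (a + 1) / 2                      # map {-1, +1} -> {0, 1}
    preference = a_star * s + (1 - a_star) * (1 - s)
    a_bar_star = (mean_local_action + 1) / 2  # assumed transform of the mean
    return b1 * preference + b2 * (a_star - a_bar_star) ** 2
```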
- run(steps: int = 20)[source]
Run simulation for specified number of steps.
- Parameters:
steps – Number of steps to iterate.
- simmer()[source]
Simulate a number of agent decision-making and adaptation steps.
A step comprises the following processes:
- Agents choose an action based on their current support for cooperation and the social norms imposed by their neighbors.
- Agents adapt their support for cooperation based on the current state of the environment.
Note that the environment is fixed during this process. As such, a longer simmer time reflects a faster rate of behavioural change relative to the rate of environmental change.
Mean-field
Mean-field model calculations and simulation.
- class abm_project.mean_field.FixedpointResult(lower: float | None = None, middle: float | None = None, upper: float | None = None)[source]
Solutions to a mean-action fixed point problem.
- roots() list[float] [source]
Retrieve the fixed points.
- Returns:
A list of floats representing each fixed point, both stable and unstable, in increasing value order.
- abm_project.mean_field.compute_s_from_p(p: float, b: float, c: float) float [source]
Calculate mean preference for cooperation given mean P(C).
- abm_project.mean_field.f_dm_dt(rationality: float, b: float, c: float, alpha: float, beta: float, rate: float = 0.001)[source]
Construct parameterised mean-field dm/dt derivative function.
Calculates rate of change in the average action, given the current average state of the environment and the average action in a mean-field model.
- Parameters:
rationality – Controls how rational agents are. Larger is more rational (deterministic). 0 is random.
b – Utility function weight for the ‘individual action preference’ term.
c – Utility function weight for the ‘peer pressure’ term.
alpha – How quickly agents increase support for climate mitigation when the environment is non-extreme.
beta – How quickly agents decrease support for climate mitigation when the environment is particularly good (no reason to act) or particularly bad (action is meaningless).
rate – Scale coefficient for the derivative, controls the general rate of change in preference for cooperation.
- abm_project.mean_field.f_dn_dt(recovery: float, pollution: float, rate: float = 0.01)[source]
Construct parameterised mean-field dn/dt derivative function.
Calculates rate of change in the mean environmental state, given the expected probability of cooperation in a mean-field model.
- Parameters:
recovery – How quickly the environment recovers under positive action.
pollution – How quickly the environment degrades due to negative action.
rate – Scale coefficient for the derivative, controls the general rate of change in the environment.
- abm_project.mean_field.f_ds_dt(alpha: float, beta: float, rate: float = 0.001)[source]
Construct parameterised mean-field ds/dt derivative function.
Calculates rate of change in the average preference for cooperation, given the current average state of the environment in a mean-field model.
- Parameters:
alpha – How quickly agents increase support for climate mitigation when the environment is non-extreme.
beta – How quickly agents decrease support for climate mitigation when the environment is particularly good (no reason to act) or particularly bad (action is meaningless).
rate – Scale coefficient for the derivative, controls the general rate of change in preference for cooperation.
- abm_project.mean_field.fixedpoint_mean_action(s: float, c: float, rationality: float = 1, ignore_warnings: bool = False) FixedpointResult [source]
Find all possible mean-action values, given mean preference for cooperation.
- Parameters:
s – Average preference for climate mitigation in the mean-field model.
c – Utility function weight for the ‘peer pressure’ term.
rationality – Controls how rational agents are. Larger is more rational (deterministic). 0 is random.
ignore_warnings – Don’t print warning messages when fixed-point solver doesn’t converge.
- Returns:
A FixedpointResult object containing all possible values which the mean action can take.
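For example (parameter values chosen arbitrarily):

```python
from abm_project.mean_field import fixedpoint_mean_action

result = fixedpoint_mean_action(s=0.6, c=1.0, rationality=2.0)
print(result.roots())  # all fixed points, stable and unstable, in increasing order
```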
- abm_project.mean_field.solve(b: float, c: float, alpha: float, beta: float, pollution: float, recovery: float, n_update_rate: float, s_update_rate: float, n0: float | int, m0: float | int, num_steps: int, rationality: float = 1.0)[source]
Simulate a mean-field model run.
- Parameters:
b – Utility function weight for the ‘individual action preference’ term.
c – Utility function weight for the ‘peer pressure’ term.
alpha – How quickly agents increase support for climate mitigation when the environment is non-extreme.
beta – How quickly agents decrease support for climate mitigation when the environment is particularly good (no reason to act) or particularly bad (action is meaningless).
pollution – How quickly the environment degrades due to negative action.
recovery – How quickly the environment recovers under positive action.
n_update_rate – Scale coefficient for dn/dt, controls the general rate of change in the (average) environment.
s_update_rate – Scale coefficient for ds/dt, controls the general rate of change in preference for cooperation.
n0 – Initial average environmental state.
m0 – Initial average action.
num_steps – Number of time-steps to simulate.
rationality – Controls how rational agents are. Larger is more rational (deterministic). 0 is random.
- Returns:
A tuple (t, results), where t is a vector of real-valued time points at which the model state is measured, and results is a tuple (n, s, sp, m, p): n is the mean environment state, s is the mean preference for climate mitigation, sp is the mean social pressure experienced at each timestep, m is the mean action, and p is the probability of choosing climate mitigation.
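A call unpacking the documented return structure might look as follows (parameter values are arbitrary placeholders):

```python
from abm_project.mean_field import solve

t, (n, s, sp, m, p) = solve(
    b=1.0, c=1.0, alpha=1.0, beta=1.0,
    pollution=1.0, recovery=1.0,
    n_update_rate=0.01, s_update_rate=0.001,
    n0=0.5, m0=0.0, num_steps=500,
    rationality=1.0,
)
```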
- abm_project.mean_field.solve_for_equilibria(b: float, c: float, rationality: float, recovery: float, pollution: float, alpha: float = 1, beta: float = 1) tuple[ndarray[tuple[Any, ...], dtype[float64]], ndarray[tuple[Any, ...], dtype[float64]]] [source]
Identify mean-field equilibrium points for a model.
Equilibria are characterised by the state of the environment and the mean action. A point (n, m) is an equilibrium if dn/dt = dm/dt = 0.
- Parameters:
b – Utility function weight for the ‘individual action preference’ term.
c – Utility function weight for the ‘peer pressure’ term.
rationality – Controls how rational agents are. Larger is more rational (deterministic). 0 is random.
alpha – How quickly agents increase support for climate mitigation when the environment is non-extreme.
beta – How quickly agents decrease support for climate mitigation when the environment is particularly good (no reason to act) or particularly bad (action is meaningless).
recovery – How quickly the environment recovers under positive action.
pollution – How quickly the environment degrades due to negative action.
- Returns:
A tuple (N, M) containing pairs of equilibrium points (environment state, action).
Metrics
Functions to measure model observables.
- abm_project.metrics.pluralistic_ignorance(model: VectorisedModel) ndarray[tuple[Any, ...], dtype[float64]] [source]
Measure agents’ pluralistic ignorance at the end of simulation.
Pluralistic ignorance is a phenomenon which occurs when individuals underestimate public support for a particular action, leading them to behave in a manner which does not reflect their own beliefs, even when true public support is high.
To measure an agent’s pluralistic ignorance at the end of simulation, we consider their expected actions:
- Under perceived social norms, \(\mathbb{E}[a_i]_\text{perceived}\)
- In the absence of social norms, \(\mathbb{E}[a_i]_\text{individual}\)
- When observing their neighbors’ true preferences, \(\mathbb{E}[a_i]_\text{true}\)
An agent \(i\)’s pluralistic ignorance is calculated as:
\[\psi_i = \max\{0, |\mathbb{E}[a_i]_\text{perceived} - \mathbb{E}[a_i]_\text{individual}| - |\mathbb{E}[a_i]_\text{true} - \mathbb{E}[a_i]_\text{individual}|\}\]
i.e., it is large when knowing the true social norm would allow an agent to behave in a manner more consistent with their individual preferences.
- Parameters:
model – A VectorisedModel which has been run for at least k timesteps.
- Returns:
A 1D Numpy array containing the measured pluralistic ignorance for each agent.
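For example, measured on a model that has already been run (see the VectorisedModel sketch above):

```python
from abm_project.metrics import pluralistic_ignorance

psi = pluralistic_ignorance(model)  # model: a VectorisedModel after run()
print(psi.mean(), psi.max())        # population average and worst case
```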
Kraan Model
Implementation of Kraan 2D lattice model.
- class abm_project.kraan.KraanModel(width: int, height: int, c: float, seed: int, n_update_fn)[source]
Agent-based decision model for energy transition.
Agents decide to ‘act’ or ‘not act’ at each timestep based on the state of the environment and the social pressures of their neighbors.
- choose(p_active) ndarray[tuple[Any, ...], dtype[int64]] [source]
Sample a decision for each agent.
After sampling a random number \(r \in [0,1]\) for each agent, the decision is taken to be +1 (active) if \(r < p_\text{active}\), and -1 (inactive) otherwise.
- Parameters:
p_active – 1D numpy array with length equal to the number of agents, containing the probability for each agent choosing to act.
- Returns:
A 1D numpy array containing the sampled action for each agent.
- decide()[source]
Simulate a single decision step for each agent.
Agents’ decisions are sampled probabilistically according to a logistic model, with the representative utility of an action \(V_i(a)\) balancing the agent’s current perception of the environment against their social pressures.
- run(update_steps: int, simmer_steps: int)[source]
Run model for a number of environmental updates.
Runs the model for a number of steps, each comprising two parts: (1) update the environment; (2) simulate a number of decision steps, so as to reach equilibrium. After each step, control is yielded to the caller along with the current timestep.
- Parameters:
update_steps – Number of times to update the environment.
simmer_steps – Number of decision steps to simulate after each environment update.
- Yields:
The current timestep, measured in number of environmental updates (minus 1).
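Since run() is a generator, a typical driver loop consumes it step by step. Pairing exogenous_env() (documented below) with n_update_fn is an assumption that matches the two signatures:

```python
from abm_project.kraan import KraanModel, exogenous_env

model = KraanModel(width=20, height=20, c=1.0, seed=0, n_update_fn=exogenous_env())
for t in model.run(update_steps=80, simmer_steps=10):
    pass  # inspect or record model state at environmental update t
```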
- simmer(steps)[source]
Simulate a number of decisions for each agent.
This is a convenience method for allowing the model to reach equilibrium after an environment update.
- Parameters:
steps – Number of decisions to simulate.
- property u_active: float
Calculate the utility for an agent to be active in the current state.
The environment is defined by
\[h = \frac{U(\text{active}) + U(\text{inactive})}{2}\]
Since the utility for inaction is fixed, we calculate the utility for action by rearranging this formula.
- Returns:
The current (homogeneous) utility for action.
- abm_project.kraan.exogenous_env(n0: float = -1, n_max: float = 1, increment_steps: int = 40, decrement_steps: int = 40)[source]
Construct an exogenous environment update function.
Comprises a sequence of linear increments to the environment, followed by a sequence of linear decrements, returning to the original value.
Default arguments are as specified in Kraan 2019.
- Parameters:
n0 – Initial state of the environment at time \(t=0\).
n_max – Maximum value of the environment, to be reached after increments.
increment_steps – Number of steps to take when increasing environment state to n_max.
decrement_steps – Number of steps to take when decrementing environment state from n_max to n0.
- Returns:
A constructed function to execute the specified update strategy.
OOP Model
Agent-based model base class for simulations.
This module defines a base class for agent-based models, providing a framework for initializing agents, calculating neighbor actions, and managing the simulation environment.
- class abm_project.oop_model.BaseModel(width: int = 10, height: int = 10, radius: int = 1, memory_count: int = 1, env_update_option: str = 'linear', adaptive_attr_option: str = None, neighb_prediction_option: str = 'linear', peer_pressure_learning_rate: float = 0.1, rationality: float = 1.0, rng: Generator = None, env_status_fn=None, peer_pressure_coeff_fn=None, env_perception_coeff_fn=None, results_save_name: str = None)[source]
Base model for agent-based simulations.
This class initializes a grid of agents and provides methods for running the simulation, updating the environment, and calculating neighbor actions.
- num_agents
Total number of agents in the grid.
- Type:
int
- width
Width of the grid.
- Type:
int
- height
Height of the grid.
- Type:
int
- radius
Radius for neighbor calculations.
- Type:
int
- memory_count
Number of past actions to remember for each agent.
- Type:
int
- env_update_option
Method to update the environment status.
- Type:
str
- adaptive_attr_option
Option for adaptive attributes.
- Type:
str
- rng
Random number generator.
- Type:
np.random.Generator
- agents
2D array of Agent objects representing the grid.
- Type:
np.ndarray
- agent_action_history
History of agent actions.
- Type:
list
- agent_env_status_history
History of environment status.
- Type:
list
- agent_peer_pressure_coeff_history
History of peer pressure coefficients.
- Type:
list
- agent_env_utility_history
History of environment utilities.
- Type:
list
- time
Current time step in the simulation.
- Type:
int
- DEFAULT_ADAPTIVE_ATTR_OPTION = None
- DEFAULT_ENV_UPDATE_OPTION = 'linear'
- DEFAULT_HEIGHT = 10
- DEFAULT_LEARNING_RATE = 0.1
- DEFAULT_MEMORY_COUNT = 1
- DEFAULT_NUM_AGENTS = 100
- DEFAULT_PREDICTION_OPTION = 'linear'
- DEFAULT_RADIUS = 1
- DEFAULT_RATIONALITY = 1.0
- DEFAULT_WIDTH = 10
- ave_neighb_action(x: int, y: int, memory: int = 1) float [source]
Calculate the average action of peers based on their recent actions.
This method computes the average action of neighboring agents, considering each neighbor’s most recent memory actions. If memory=1, only the most recent action is used; if memory > 1, the mean of each neighbor’s last memory actions is used.
- Parameters:
x (int) – X-coordinate of the agent.
y (int) – Y-coordinate of the agent.
memory (int) – Number of most recent actions to consider.
- Returns:
Average action of neighbors.
- Return type:
float
- get_agent_attribute_at_time(attribute: str, time: int) ndarray [source]
Get the values of a specific agent attribute at a given time step.
- Parameters:
attribute (str) – The attribute history to retrieve (e.g., ‘agent_action_history’).
time (int) – The time step to retrieve values for.
- Returns:
A 2D array of the attribute values at the specified time.
- Return type:
np.ndarray
- get_agent_grid_attribute(attribute: str) ndarray [source]
Get a 2D grid of a specific agent attribute.
This method retrieves the most recent values of a specified attribute from all agents in the grid and returns it as a 2D array.
- Parameters:
attribute (str) – The attribute to retrieve from agents.
- Returns:
A 2D array of the specified attribute values across the grid.
- Return type:
np.ndarray
- get_neighbor_attribute_values(x: int, y: int, attribute: str) ndarray [source]
Get the values of a specific attribute from neighboring agents.
This method retrieves the most recent values of a specified attribute from all neighbors of the agent at position (x, y).
- Parameters:
x (int) – X-coordinate of the agent.
y (int) – Y-coordinate of the agent.
attribute (str) – The attribute to retrieve from neighbors.
- Returns:
Array of attribute values from neighboring agents.
- Return type:
np.ndarray
- get_neighbors(x: int, y: int) list[Agent] [source]
Get the neighbors of an agent at position (x, y).
Neighbors are defined as agents within a Moore neighborhood of the given radius.
- Parameters:
x (int) – X-coordinate of the agent.
y (int) – Y-coordinate of the agent.
- Returns:
List of neighboring agents.
- Return type:
List[Agent]
- pred_neighb_action(x: int, y: int) float [source]
Predict the average action of peers based on their recent actions.
This method predicts the average action of neighboring agents based on their most recent actions, using linear regression.
- Parameters:
x (int) – X-coordinate of the agent.
y (int) – Y-coordinate of the agent.
- Returns:
Predicted average action of neighbors.
- Return type:
float
- run(steps: int = 20) None [source]
Run the model for a specified number of steps.
- Parameters:
steps (int) – Number of steps to run the model.
Agent class for an agent-based model simulation.
This class represents an agent that interacts with its environment and peers. It includes methods for decision-making based on peer actions and environmental status.
- class abm_project.agent.Agent(id: int, memory_count: int = 1, rng: Generator = None, env_update_option: str = 'linear', adaptive_attr_option: str = None, peer_pressure_learning_rate=0.2, rationality=1.0, env_status_fn=None, peer_pressure_coeff_fn=None, env_perception_coeff_fn=None)[source]
Agent class for an agent-based model simulation.
This class represents an agent that interacts with its environment and peers. It includes methods for decision-making based on peer actions and environmental status.
- ACTIONS = [-1, 1]
- DEFAULT_ADAPTIVE_ATTR_OPTION = None
- DEFAULT_ENV_UPDATE_OPTION = 'linear'
- DEFAULT_MEMORY_COUNT = 1
- DEFAULT_RATIONALITY = 1.0
- calculate_action_probabilities(ave_peer_action: float) ndarray [source]
Calculate the probability of each possible action.
The probabilities are calculated using a logit softmax function over the utilities of each action:
\[P(a_i(t) = a) = \frac{\exp(V_i(a))}{\sum_{a'} \exp(V_i(a'))}\]
where \(V_i(a)\) is the utility of action \(a\) for agent \(i\).
- Parameters:
ave_peer_action (float) – The average action of peers.
- Returns:
An array of probabilities for each action.
- Return type:
np.ndarray
- calculate_action_utility(action: int, ave_peer_action: float) float [source]
Calculate the utility of taking a specific action.
The utility is calculated as the perceived severity of the environment multiplied by the action, minus the cost of deviating from the average peer action:
\[V_i(a_i(t)) = a_i(t) \cdot U_i(t) - c \cdot (a_i(t) - A_i(t))^2\]
- Parameters:
action (int) – The action taken by the agent, either -1 or 1.
ave_peer_action (float) – The average action of peers.
- Returns:
The utility of the action.
- Return type:
float
- calculate_deviation_cost(action: int, ave_peer_action: float) float [source]
Calculate the cost of deviating from the average peer action.
The cost is calculated as \(c \cdot (a_i(t) - A_i(t))^2\).
- Parameters:
action (int) – The action taken by the agent, either -1 or 1.
ave_peer_action (float) – The average action of peers.
- Returns:
The cost of deviating from the agent’s neighbors.
- Return type:
float
- calculate_perceived_severity() float [source]
Calculate the perceived severity of the environment.
The perceived severity is a function of the environment status and the agent’s perception coefficient, calculated as env_perception_coeff * env_status * (-1). The negative sign indicates that a higher environment status leads to a lower perceived severity. The perceived severity is used to determine the utility of actions.
- Returns:
The perceived severity of the environment.
- Return type:
float
- decide_action(ave_peer_action: float, all_peer_actions: ndarray) None [source]
Decide on a new action based on peer actions and environment.
- update_env_perception_coeff() float [source]
Update the agent’s environment perception coefficient.
This coefficient is used to calculate the cost of deviating from the agent’s perception of the environment.
- update_environment_status(action_decision: int) None [source]
Update the environment status based on the agent’s action.
The environment status is updated based on the agent’s action and the current environment status, using a sigmoid function, exponential decay, or linear update, depending on the env_update_option specified during initialization. The formula for the update is env_status(t+1) = env_status(t) + delta, where delta is calculated from the action decision and the current environment status. The available options are:
- Sigmoid: Rate of change is higher when the environment status is around 0.5. Lowest delta is at 1, highest delta is at 0.0.
- Sigmoid Asymmetric: Delta is asymmetric based on the action decision. Positive delta is lower when the environment status is low, and negative delta is higher when the environment status is low.
- Exponential: Rate of change decreases as the environment status increases. Lowest delta is at 1, highest delta is at 0.0.
- Linear: Delta is a constant value based on the action decision.
- Bell: Lowest delta at 0 and 1, highest delta at 0.5.
- Bimodal: Highest delta at two peaks, around 0.25 and 0.75, and lowest delta at 0, 0.5, and 1.
- Parameters:
action_decision (int) – The action taken by the agent, either -1 or 1.
- Raises:
ValueError – If the env_update_option is invalid.
Plotting Tools
Plotting functions for agent-based model visualizations.
- abm_project.plotting.animate_grid_states(grid_history, colormap, title, colorbar_label, file_name=None, clim=None)[source]
Animate the grid states over time.
- Parameters:
grid_history (np.ndarray) – 3D array of shape (num_steps, height, width) containing the grid state at each step.
colormap (str or Colormap) – Colormap for the grid visualization.
title (str) – Title of the plot.
colorbar_label (str) – Label for the colorbar.
file_name (str, optional) – Name of the file to save the animation. If None, will display the plot.
clim (tuple, optional) – Tuple (vmin, vmax) to set colorbar limits.
- Returns:
The animation object.
- Return type:
FuncAnimation
- abm_project.plotting.get_data_directory(file_name)[source]
Get the directory for saving data files.
- Parameters:
file_name (str, optional) – Name of the file to save the data. If None, returns the directory without a file name.
- Returns:
Directory path for saving data files.
- Return type:
str
- abm_project.plotting.get_file_basename(suffix, neighb, severity, radius, b2, env)[source]
Generate a standardized file basename based on parameters.
- Parameters:
suffix (str) – Suffix for the file name.
neighb (str) – Neighborhood type (e.g., “linear”, “von_neumann”).
severity (str) – Severity level (e.g., “low”, “medium”, “high”).
radius (int) – Radius of the neighborhood.
b2 (float | None) – Second utility function weight, or None if not applicable.
env (str) – Environment update type (e.g., “static”, “dynamic”).
- Returns:
A standardized file basename.
- Return type:
str
- abm_project.plotting.get_plot_directory(file_name)[source]
Get the directory for saving plots.
- Parameters:
file_name (str, optional) – Name of the file to save the plot. If None, returns the directory without a file name.
- Returns:
Directory path for saving plots.
- Return type:
str
- abm_project.plotting.plot_current_grid_state(grid, colormap, title, colorbar_label, file_name=None, clim=None)[source]
Plot the current state of the grid.
- Parameters:
grid (np.ndarray) – 2D array representing the grid state.
colormap (str or Colormap) – Colormap for the grid visualization.
title (str) – Title of the plot.
colorbar_label (str) – Label for the colorbar.
file_name (str, optional) – Name of the file to save the plot. If None, displays the plot.
clim (tuple, optional) – Tuple (vmin, vmax) to set colorbar limits.
- abm_project.plotting.plot_grid_average_over_time(grid_values, title, xlabel, ylabel, file_name=None)[source]
Plot overall agent values over time.
- Parameters:
grid_values (np.ndarray) – 3D array of shape (num_steps, height, width) representing the values to average over time.
title (str) – Title of the plot.
xlabel (str) – Label for the x-axis.
ylabel (str) – Label for the y-axis.
file_name (str, optional) – Name of the file to save the plot. If None, will display the plot.
- abm_project.plotting.plot_list_over_time(data, title, xlabel, ylabel, file_name=None, legend_labels=None)[source]
Plot a list of data over time.
- Parameters:
data (list[np.ndarray]) – List of 1D arrays to plot.
title (str) – Title of the plot.
xlabel (str) – Label for the x-axis.
ylabel (str) – Label for the y-axis.
file_name (str, optional) – Name of the file to save the plot. If None, will display the plot.
legend_labels (list[str], optional) – Labels for each line in the legend.
- abm_project.plotting.plot_mean_and_variability_array(data: ndarray, title: str, kind: str = 'std', file_name: str | None = None)[source]
Plot the mean and variability of a 2D array over time.
- abm_project.plotting.plot_phase_portrait(c: float, recovery: float, pollution: float, rationality: float = 1, gamma_n: float = 0.01, gamma_s: float = 0.001, equilibria: bool = True, dn_dt_nullcline: bool = True, dm_dt_nullcline: bool = False, critical_points: bool = False, ax: Axes | None = None, b: float | None = None)[source]
Draw a phase portrait for a mean-field model.
- Parameters:
c – Utility function weight for the ‘peer pressure’ term.
recovery – How quickly the environment recovers under positive action.
pollution – How quickly the environment degrades due to negative action.
rationality – Controls how rational agents are. Larger is more rational (deterministic). 0 is random.
gamma_n – Scale coefficient for dn/dt, controls the general rate of change in the (average) environment.
gamma_s – Scale coefficient for ds/dt, controls the general rate of change in preference for cooperation.
equilibria – Display equilibria as red circles.
dn_dt_nullcline – Show dn/dt = 0 as a dashed grey line.
dm_dt_nullcline – Show dm/dt = 0 as orange (stable) and green (unstable) circles.
critical_points – Show critical points where ds/dt diverges (currently unimplemented).
ax – Optional matplotlib Axes object to draw the plot onto. If unspecified, uses the current Axes.
b – Optional utility function weight for the ‘individual preference’ term.
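A typical call drawing onto an explicit Axes (parameter values are arbitrary):

```python
import matplotlib.pyplot as plt

from abm_project.plotting import plot_phase_portrait

fig, ax = plt.subplots()
plot_phase_portrait(c=1.0, recovery=1.0, pollution=1.0, rationality=2.0, ax=ax)
plt.show()
```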
- abm_project.plotting.plot_sobol_indices(Si, time_steps, var_names, output_label, file_name=None)[source]
Plot Sobol sensitivity indices for given time steps and variable names.
- abm_project.plotting.plot_support_derivative(a: float = 1, b: float = 1, savedir: Path | None = None)[source]
Plot the derivative of support for cooperation with respect to environment.
- abm_project.plotting.save_and_plot_heatmap(data, title, suffix, neighb, severity, radius, b2, env_update_type, savedir, rationality_values=None, gamma_s_values=None)[source]
Save and plot a heatmap of the given data.
- Parameters:
data (np.ndarray) – 2D array of data to plot.
title (str) – Title for the heatmap.
suffix (str) – Suffix for the file name.
neighb (str) – Neighborhood type (e.g., “linear”, “von_neumann”).
severity (str) – Severity level (e.g., “low”, “medium”, “high”).
radius (int) – Radius of the neighborhood.
b2 (float | None) – Second utility function weight, or None if not applicable.
env_update_type (str) – Environment update type (e.g., “static”, “dynamic”).
savedir (Path) – Directory to save the heatmap and data.
rationality_values (list[float] | None) – List of rationality values for x-axis ticks.
gamma_s_values (list[float] | None) – List of support update rates for y-axis ticks.
Utilities
Utility functions for agent-based models.
- abm_project.utils.exponential_update(rate: float)[source]
Construct exponential environment update function.
\[n(t+1) = n(t) + a \cdot r \cdot \exp(-n(t))\]
- Parameters:
rate – Multiplicative coefficient for exponential function.
- Returns:
Exponential update function.
- abm_project.utils.lattice2d(width: int, height: int, periodic: bool = True, diagonals: bool = False)[source]
Construct normalised adjacency matrix for a 2D lattice.
Uses networkx to create a 2D lattice with optional periodic boundaries and Moore neighborhoods (diagonals). Converts this to a sparse adjacency matrix with normalised rows, to simplify computing averages over neighborhoods.
- Parameters:
width – Number of nodes along the horizontal span of the lattice.
height – Number of nodes along the vertical span of the lattice.
periodic – Connect nodes at the edges of the lattice with periodic boundary conditions.
diagonals – Connect nodes to their diagonal neighbors, also known as the Moore neighborhood. Default is the von Neumann neighborhood (cartesian neighbors).
- Returns:
Sparse CSR adjacency matrix with shape (width x height, width x height), normalised per row.
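Because rows are normalised, multiplying the matrix by a per-node vector yields each node's neighborhood average, which is how the models compute mean local actions. A small sketch:

```python
import numpy as np

from abm_project.utils import lattice2d

adj = lattice2d(width=10, height=10, periodic=True, diagonals=False)
actions = np.random.default_rng(1).choice([-1, 1], size=10 * 10)
mean_local = adj @ actions  # row-normalised rows give neighborhood averages
```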
- abm_project.utils.linear_update(rate: float)[source]
Construct linear environment update function.
\[n(t+1) = n(t) + a \cdot r\]
The returned value is clipped to the interval [0,1].
- Parameters:
rate – Linear step size.
- Returns:
Linear update function.
- abm_project.utils.piecewise_exponential_update(recovery: float, pollution: float, gamma: float)[source]
Construct piecewise exponential environment update function.
- Parameters:
recovery – Rate of improvement due to good actions
pollution – Rate of degradation due to bad actions
gamma – Step size for environmental change
- Returns:
Piecewise exponential update function.
- abm_project.utils.sigmoid(x, a, b)[source]
Sigmoid function for logistic regression.
This function defines a sigmoid curve for logistic regression fitting, which maps any real-valued number into the range (0, 1).
- Parameters:
x (float or np.ndarray) – Input value(s) to the sigmoid function.
a (float) – Slope of the sigmoid curve.
b (float) – Offset of the sigmoid curve.
- Returns:
Sigmoid-transformed value(s).
- Return type:
float or np.ndarray
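The documented signature matches scipy.optimize.curve_fit's f(x, *params) convention, so the function can be used directly for fitting (assuming SciPy is available; the data below is synthetic):

```python
import numpy as np
from scipy.optimize import curve_fit

from abm_project.utils import sigmoid

x = np.linspace(-3, 3, 50)
rng = np.random.default_rng(0)
y = (x > 0).astype(float) + 0.05 * rng.normal(size=x.size)  # noisy step data
(a_fit, b_fit), _ = curve_fit(sigmoid, x, y, p0=(1.0, 0.0))
```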
- abm_project.utils.sigmoid_update(n: ndarray[tuple[Any, ...], dtype[float64]], a: ndarray[tuple[Any, ...], dtype[int64]]) ndarray[tuple[Any, ...], dtype[float64]] [source]
Update environment according to sigmoid rule.
- Parameters:
n – Each agent’s current environment, shape (agents,).
a – Each agent’s current action, shape (agents,).
- Returns:
Numpy array of new environment values for each agent, with same shape as n.
Batch Run Tools
Batch run tools for ABM project.
- class abm_project.batch_run_tools.UnionFind(size)[source]
Union-Find data structure for efficient connectivity checks.
- abm_project.batch_run_tools.analyze_environment_clusters_periodic(environment: ndarray, width: int, height: int, threshold: float = 0.5, diagonal: bool = False)[source]
Analyze clusters in a 2D environment with periodic boundaries.
- Parameters:
environment – 1D array of environment values per agent.
width – Grid width.
height – Grid height.
threshold – Threshold to binarize the environment (default = 0.5)
diagonal – If True, use 8-connectivity (diagonal neighbors included). If False, use 4-connectivity (adjacent only).
- Returns:
A tuple (num_clusters, cluster_sizes, labels): the number of clusters found, a list with the size of each cluster, and a 2D array of cluster labels.
- Return type:
tuple
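A call on a hypothetical per-agent environment vector, unpacking the tuple in the documented order:

```python
import numpy as np

from abm_project.batch_run_tools import analyze_environment_clusters_periodic

env = np.random.default_rng(2).random(10 * 10)  # hypothetical environment values
num_clusters, cluster_sizes, labels = analyze_environment_clusters_periodic(
    env, width=10, height=10, threshold=0.5, diagonal=False
)
```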
- abm_project.batch_run_tools.attribute_variance_over_time(models, attr)[source]
Calculate the mean variance of a given attribute over time across all models.
- Parameters:
models (list[BaseModel]) – List of BaseModel instances.
attr (str) – Attribute name to compute variance over (e.g., ‘agent_peer_pressure_coeff_history’).
- Returns:
Mean variance at each time step.
- Return type:
list
- abm_project.batch_run_tools.average_metric_over_time(models, attr, inner_mean=False)[source]
Calculate the average of a given attribute over time across all models.
- Parameters:
models (list[BaseModel]) – List of BaseModel instances.
attr (str) – Attribute name to average (e.g., ‘agent_action_history’).
inner_mean (bool) – If True, take mean over inner array before averaging across models.
- Returns:
Average value at each time step.
- Return type:
list
- abm_project.batch_run_tools.clustering_score_over_time(model, attribute: str = 'action', width: int = 50, height: int = 50, radius: int = 1) list[float] [source]
Compute a clustering score at each time step from the per-step 1D action vector.
- Parameters:
model – The model instance.
attribute (str) – Attribute name for the per-timestep 1D agent data.
width (int) – Grid width.
height (int) – Grid height.
radius (int) – Neighborhood radius.
- Returns:
Clustering scores per time step.
- Return type:
list[float]
- abm_project.batch_run_tools.extract_mean_and_variance(models, attribute: str)[source]
Extract the mean and variance of a specified attribute.
- Parameters:
models (list) – List of BaseModel instances.
attribute (str) – The name of the attribute history to extract (e.g., “agent_action_history”).
- Returns:
(means, variances), where each is a list of values per time step.
- Return type:
tuple
- abm_project.batch_run_tools.get_dominant_frequency_and_power(signal: ndarray, dt: float = 1.0) tuple[float, float] [source]
Calculate the dominant frequency and its power in a time series signal.
- Parameters:
signal (np.ndarray) – 1D array of time series data.
dt (float) – Time step between samples in seconds.
- Returns:
Dominant frequency (Hz) and its power.
- Return type:
tuple[float, float]
- abm_project.batch_run_tools.local_action_agreement_score(flat_grid: ndarray, width: int, height: int, radius: int = 1) float [source]
Compute a spatial clustering score based on local action agreement.
- Parameters:
flat_grid (np.ndarray) – 1D array of agent actions.
width (int) – Width of the agent grid.
height (int) – Height of the agent grid.
radius (int) – Neighborhood radius.
- Returns:
Clustering score (0 to 1).
- Return type:
float
- abm_project.batch_run_tools.run_parameter_batch(num_runs: int, model_class, steps: int = 100, **kwargs) list[BaseModel] [source]
Run a batch of agent-based model simulations.
- Parameters:
num_runs (int) – Number of model runs to execute.
model_class (type) – The class of the model to instantiate.
steps (int) – Number of simulation steps to run for each model.
**kwargs – Additional keyword arguments to pass to the model constructor.
- Returns:
List of model instances after running the simulation.
- Return type:
list[BaseModel]
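A batch workflow combining these tools might look as follows (constructor keyword arguments are illustrative):

```python
from abm_project.batch_run_tools import average_metric_over_time, run_parameter_batch
from abm_project.oop_model import BaseModel

models = run_parameter_batch(
    num_runs=10, model_class=BaseModel, steps=100, width=10, height=10
)
mean_action = average_metric_over_time(models, "agent_action_history", inner_mean=True)
```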