zeyus / flamegpu2-prisoners-dilemma-abm

A prisoner's dilemma agent based model simulation for investigating effects of differing strategies on emergent behaviours and spatial patterns with configurable environments.

License: MIT License

Python 73.37% R 26.63%
abm agent-based-modeling agent-based-simulation altruism cooperation cuda flamegpu flamegpu2 flamegpu2-visualiser game-theory


FLAMEGPU2 Based Prisoner's Dilemma ABM

What is it?

A 2D ABM simulation (with a 3D visualisation) executed on the GPU. This ABM models "games" between agents, specifically the Prisoner's Dilemma, in which participants can either cooperate or defect, resulting in a payoff that depends on the combination of decisions.

Using FLAMEGPU2, agent counts can go into the millions without a problem, especially on a decent GPU.

Details

One of the earlier descriptions of the game interactions:

[Figure from Axelrod, R., & Hamilton, W. D. (1981). The Evolution of Cooperation. Science, 211(4489), 1390–1396. https://doi.org/10.1126/science.7466396]

In this setup the default payoff matrix for interactions is:

                They Cooperate   They Defect
I Cooperate          3.0            -1.0
I Defect             5.0             0.0
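Read as a lookup table, the matrix can be sketched in Python (the move encoding and function name are illustrative; the values are the defaults from model.py):

```python
# Default payoffs from model.py: (my move, their move) -> my payoff.
# "C" = cooperate, "D" = defect (this encoding is illustrative).
PAYOFF = {
    ("C", "C"): 3.0,   # both cooperate (PAYOFF_CC)
    ("D", "C"): 5.0,   # I defect against a cooperator (PAYOFF_DC)
    ("C", "D"): -1.0,  # I cooperate against a defector (PAYOFF_CD)
    ("D", "D"): 0.0,   # both defect (PAYOFF_DD)
}

def payoff(my_move: str, their_move: str) -> float:
    """Return the energy I gain (or lose) from one game."""
    return PAYOFF[(my_move, their_move)]
```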

Agents play all of their neighbours; if they have no neighbours to play, they move instead and pay a cost of travel.

After playing or moving, each agent has the opportunity to reproduce if it has sufficient energy and an available space around it; if it reproduces, it pays the energy cost of reproduction.

Each round, a cost of living is imposed on all agents except those just born in the reproduction phase.

Any agent whose energy drops below zero dies.
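The round lifecycle above can be sketched as a single energy update for one (non-newborn) agent. This is a pure-Python illustration with hypothetical names, not the repo's CUDA agent functions; the constants are the model.py defaults:

```python
# model.py defaults
COST_OF_LIVING = 1.0
AGENT_TRAVEL_COST = 0.5 * COST_OF_LIVING
REPRODUCE_MIN_ENERGY = 100.0
REPRODUCE_COST = 50.0

def step_energy(energy: float, played: bool, wants_child: bool,
                round_payoff: float = 0.0) -> float:
    """One round of energy bookkeeping for a single (non-newborn) agent."""
    if played:
        energy += round_payoff       # sum of payoffs from games with neighbours
    else:
        energy -= AGENT_TRAVEL_COST  # no neighbours to play: move instead
    if wants_child and energy >= REPRODUCE_MIN_ENERGY:
        energy -= REPRODUCE_COST     # pay the cost of reproduction
    energy -= COST_OF_LIVING         # cost of living, every round
    return energy                    # the agent dies if this drops below zero
```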

Other features

  • Agent strategy distributions can be configured (more than 4 strategies is currently broken, but a strategy's probability can be set to 0.0)
  • Each agent is assigned a random trait (by default 1 of 4 possible traits; more than 4 traits is currently broken)
  • Strategy mutation can be configured at a specific mutation rate, which applies during reproduction
  • Agents can employ a global strategy (i.e. always cooperate, with any agent), a strategy for agents with the same trait (kin) versus others, or (reporting is broken for this, but it works) a strategy per unique trait
  • Environmental noise can be configured for a chance of miscommunication (i.e. if I choose cooperate, it becomes defect)
  • Can run in a CUDAEnsemble for a whole suite of simulation runs
  • Logging is configured for both single and multi runs; currently it collects agent counts by strategy. For performance, these counts should be skipped when logging is disabled, but at the moment they are still computed.
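The environmental-noise feature amounts to inverting an agent's decision with some probability before the game is scored. A minimal sketch (the helper name and "C"/"D" move encoding are illustrative, not the repo's API):

```python
import random

def apply_noise(move: str, rng: random.Random, env_noise: float = 0.0) -> str:
    """With probability env_noise, the chosen move is miscommunicated."""
    if rng.random() < env_noise:
        return "D" if move == "C" else "C"  # miscommunication flips the move
    return move
```

With the default `env_noise=0.0` (the repo's `ENV_NOISE` default), moves are never flipped.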

Model description

[Figure: Prisoner's Dilemma ABM model flow]

Running the simulation

Prerequisites

  • Python (tested on 3.11)
  • CUDA-capable GPU
  • Windows or Linux (unsure about FLAMEGPU2 macOS support; it might be possible to compile it)
  • NVIDIA CUDA toolkit
  • pyflamegpu, version 2.0.0-rc (or higher), either built from source with whichever CUDA version you like, or a pip wheel downloaded to match your system
  • numpy (for initial agent matrix positioning; I'll see if I can remove this requirement later because it clearly adds significant overhead)

NumPy is included in requirements.txt:

git clone https://github.com/zeyus/FLAMEGPU2-Prisoners-Dilemma-ABM.git
cd FLAMEGPU2-Prisoners-Dilemma-ABM
python3 -m pip install -r requirements.txt

Try it out

From the root directory of the repository, run:

python3 src/model.py

The first section in model.py contains most of the variables you might want to change.

The default settings are defined as follows:

# upper agent limit ... please make it a square number for sanity
# this is essentially the size of the grid
MAX_AGENT_SPACES: int = 2**18
# starting agent limit
INIT_AGENT_COUNT: int = int(MAX_AGENT_SPACES * 0.16)

# you can set this anywhere between INIT_AGENT_COUNT and MAX_AGENT_SPACES inclusive
# carrying capacity
AGENT_HARD_LIMIT: int = int(MAX_AGENT_SPACES * 0.5)

# how long to run the sim for
STEP_COUNT: int = 100
# TODO: logging / Debugging
WRITE_LOG: bool = True
LOG_FILE: str = f"data/{strftime('%Y-%m-%d %H-%M-%S')}_{RANDOM_SEED}.json"
VERBOSE_OUTPUT: bool = False
DEBUG_OUTPUT: bool = False
OUTPUT_EVERY_N_STEPS: int = 1

# rate limit simulation?
SIMULATION_SPS_LIMIT: int = 0  # 0 = unlimited

# Show agent visualisation
USE_VISUALISATION: bool = True and pyflamegpu.VISUALISATION

# visualisation camera speed
VISUALISATION_CAMERA_SPEED: float = 0.1
# pause the simulation at start
PAUSE_AT_START: bool = False
VISUALISATION_BG_RGB: List[float] = [0.1, 0.1, 0.1]

# should agents rotate to face the direction of their last action?
VISUALISATION_ORIENT_AGENTS: bool = False
# radius of message search grid (broken now from hardcoded x,y offset map)
MAX_PLAY_DISTANCE: int = 1

# Energy cost per step
COST_OF_LIVING: float = 1.0

# Reproduce if energy is above this threshold
REPRODUCE_MIN_ENERGY: float = 100.0
# Cost of reproduction
REPRODUCE_COST: float = 50.0
# Can reproduce in dead agent's space?
# @TODO: if time, actually implement this, for now. no effect (always True)
ALLOW_IMMEDIATE_SPACE_OCCUPATION: bool = True
# Inheritance: [0, 1]. If 0.0, start with default energy; if 0.5, start with half of the parent's energy, etc.
REPRODUCTION_INHERITENCE: float = 0.0
# how many children max per step
MAX_CHILDREN_PER_STEP: int = 1

# Payoff for both cooperating
PAYOFF_CC: float = 3.0
# Payoff for the defector
PAYOFF_DC: float = 5.0
# Payoff for cooperating against a defector
PAYOFF_CD: float = -1.0
# Payoff for defecting against a defector
PAYOFF_DD: float = 0.0

# How agents move
AGENT_TRAVEL_STRATEGIES: List[str] = ["random"]
AGENT_TRAVEL_STRATEGY: int = AGENT_TRAVEL_STRATEGIES.index("random")

# Cost of movement / migration
AGENT_TRAVEL_COST: float = 0.5 * COST_OF_LIVING

# Upper energy limit (do we need this?)
MAX_ENERGY: float = 150.0
# How much energy an agent can start with (max)
INIT_ENERGY_MU: float = 50.0
INIT_ENERGY_SIGMA: float = 10.0
# of course this can be a specific value,
# but this allows for 5 moves before death.
INIT_ENERGY_MIN: float = 5.0
# Noise will invert the agent's decision
ENV_NOISE: float = 0.0

# Agent strategies for the PD game
# "proportion" let's you say how likely agents spawn with a particular strategy
AGENT_STRATEGY_COOP: int = 0
AGENT_STRATEGY_DEFECT: int = 1
AGENT_STRATEGY_TIT_FOR_TAT: int = 2
AGENT_STRATEGY_RANDOM: int = 3

# @TODO: fix if number of strategies is not 4 (logging var...)
AGENT_STRATEGIES: dict = {
    "always_coop": {
        "name": "always_coop",
        "id": AGENT_STRATEGY_COOP,
        "proportion": 1 / 4,
    },
    "always_defect": {
        "name": "always_defect",
        "id": AGENT_STRATEGY_DEFECT,
        "proportion": 1 / 4,
    },
    # defaults to coop if no previous play recorded
    "tit_for_tat": {
        "name": "tit_for_tat",
        "id": AGENT_STRATEGY_TIT_FOR_TAT,
        "proportion": 1 / 4,
    },
    "random": {
        "name": "random",
        "id": AGENT_STRATEGY_RANDOM,
        "proportion": 1 / 4,
    },
}

# How many trait variants of agents are there? More will result in more agent colors
AGENT_TRAIT_COUNT: int = 4
# @TODO: allow for 1 trait (implies no strategy per trait)
# AGENT_TRAIT_COUNT: int = 1

# if this is true, agents will just have ONE strategy for all
# regardless of AGENT_STRATEGY_PER_TRAIT setting.
AGENT_STRATEGY_PURE: bool = False
# Should an agent deal differently per variant? (max strategies = number of variants)
# or, should they have a strategy for same vs different (max strategies = 2)
AGENT_STRATEGY_PER_TRAIT: bool = False

# Mutation frequency
AGENT_TRAIT_MUTATION_RATE: float = 0.0


MULTI_RUN = False
MULTI_RUN_STEPS = 10000
MULTI_RUN_COUNT = 1
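Two of the settings above, the strategy "proportion" weights and the mutation rate applied during reproduction, can be illustrated with a small sketch (hypothetical helper names, pure Python rather than the repo's CUDA implementation; the values are the repo defaults):

```python
import random

# Repo defaults: four strategies, equal spawn proportions, no mutation.
STRATEGY_PROPORTIONS = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}
AGENT_TRAIT_MUTATION_RATE = 0.0

def spawn_strategies(n: int, rng: random.Random) -> list:
    """Draw n initial strategies according to their proportions."""
    ids = list(STRATEGY_PROPORTIONS)
    weights = list(STRATEGY_PROPORTIONS.values())
    return rng.choices(ids, weights=weights, k=n)

def inherit_strategy(parent_strategy: int, rng: random.Random) -> int:
    """On reproduction, the child copies the parent's strategy,
    but mutates to a random strategy with the configured probability."""
    if rng.random() < AGENT_TRAIT_MUTATION_RATE:
        return rng.choice(list(STRATEGY_PROPORTIONS))
    return parent_strategy
```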

Screenshot

[Figure: screenshots from the ABM simulation]

References

ABM concepts for tags/traits adapted from:

Environmental pressure/cost of living concepts adapted from:

  • Smaldino, P., Schank, J., & Mcelreath, R. (2013). Increased Costs of Cooperation Help Cooperators in the Long Run. The American Naturalist. https://doi.org/10.1086/669615


flamegpu2-prisoners-dilemma-abm's Issues

Move out CUDA code to separate files

Find a way to refactor so "global" vars (i.e. ones that won't change during a simulation run, or between runs in a multi-run configuration) are generated and put into a .h or .cuh file. This file should also contain generic functions (if possible) for doing some of the calculations.

Allow for customizing strategies

It isn't easy to add a new strategy, and this is probably quite hard to do in a nice way. Using the current base conversion system we could easily support up to 9 strategies (enough?).

Allow for more traits

Locked at 4 at the moment. Using the base conversion logic we could go up to len(32-bit UInt max) - 1 digits, which would probably be sufficient, unless there is a specific need for more.

Change method for spawning agent random grid positions

Currently uses a NumPy array with a shuffled sequence of all the IDs, reshaped to the grid size.
It works fine, but this part seems to add quite an overhead to the initial loading.

  1. evaluate performance overhead (it's only once per simulation, so there might be better places to optimize e.g. stuff inside the sim)
  2. if a problem, refactor, this will benefit initial sim loading and especially multi-run speed.
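The current approach, a shuffled sequence of all cell IDs with the first INIT_AGENT_COUNT used as starting positions, can be sketched as follows (pure Python with a toy grid size; the repo uses a NumPy array reshaped to the grid, and the names here are illustrative):

```python
import random

GRID_SIDE = 16                      # toy grid; the repo default side is 2**9
INIT_AGENT_COUNT = int(GRID_SIDE**2 * 0.16)

def spawn_positions(rng: random.Random) -> list:
    """Pick INIT_AGENT_COUNT distinct random grid cells."""
    cells = list(range(GRID_SIDE * GRID_SIDE))
    rng.shuffle(cells)              # shuffled sequence of all cell IDs
    # convert each flat cell index to (x, y) grid coordinates
    return [divmod(c, GRID_SIDE) for c in cells[:INIT_AGENT_COUNT]]
```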

Remove updating env/macro props when not logging

Those updates are expensive, and so is the calculation in the step/exit function; it should be easy enough to skip them entirely when logging is disabled.

When logging is enabled they are necessary, but maybe there's a better way.

Randomness

It might be possible to use the agent shuffling, or another method, to reduce the number of random number generation calls.
Some are necessary, like for mutation, but there's probably a smarter way to do it.

Fix broken logging when agents have a strategy per trait

Works for a global strategy and for the us/them strategy, but not if agents have 4 strategies (one per trait).

A similar system of IDs could be implemented with base conversion, but it couldn't be stored in an array because that would be mostly wasted.

number of traits: t [1-n] (n should be one digit in length less than 32-bit UInt max)
number of strategies: s [1-9]

Strategy   t1   t2   ...   tn
s1          0    0          0
s2          1    1          1
s3          3    3          3

i.e. like bits, but with base t, e.g. 010, 993, 819233220
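The base-conversion idea can be sketched as packing one strategy digit per trait into a single integer, with the number of strategies as the base (hypothetical helpers, not the repo's code):

```python
def encode_strategies(per_trait: list, n_strategies: int) -> int:
    """Pack per_trait[i] (the strategy used against trait i) into one int."""
    code = 0
    for digit in per_trait:
        code = code * n_strategies + digit
    return code

def decode_strategies(code: int, n_traits: int, n_strategies: int) -> list:
    """Recover the per-trait strategy digits from the packed integer."""
    digits = []
    for _ in range(n_traits):
        code, d = divmod(code, n_strategies)
        digits.append(d)
    return digits[::-1]
```

For example, with 4 strategies, the per-trait assignment `[0, 1, 3]` packs to the single integer `0*16 + 1*4 + 3 = 7` and unpacks back again.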

Maybe swap out entire code blocks for strata and model

Just an idea, might not be smart if you want to use cudaensemble. If they can layer on, so they only need to be included if a particular strategy is used, or maybe if it's not too expensive do a condition test at the model level but it must be cheaper than checking in every game and birth, even if it's an agent var or env prop
