Modelling multi-agent systems with Recurrent Neural Networks



Multi-agent systems with land usage

A multi-agent system is a system that accommodates multiple intelligent agents, each pursuing its own goals or shared community goals. We can observe many such systems in real life: construction workers building a house, an army trying to conquer a country, or population behaviour during the Covid-19 pandemic.

Many data scientists model multi-agent systems with statistical methods, using techniques such as gradient descent to learn the agents' parameters and predict their policies or behaviour.

Another useful tool for such data, particularly in land management, is the Recurrent Neural Network (RNN). RNNs suit settings where actions are not immediate but unfold over a period of time, or, to put it simply, where the data forms a time series in which each time step has a different state from the previous and future ones.
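As a sketch of the idea, here is a minimal RNN in PyTorch that reads a time series of agent states and predicts the next state. The dimensions (4 features per agent, 12 time steps, a batch of 8 agents) are made up for illustration:

```python
import torch
import torch.nn as nn

# Hypothetical setup: each agent state is a 4-feature vector,
# observed over 12 time steps (e.g. months), for a batch of 8 agents.
batch, steps, features, hidden = 8, 12, 4, 16

rnn = nn.RNN(input_size=features, hidden_size=hidden, batch_first=True)
head = nn.Linear(hidden, features)  # map hidden state to a predicted next state

states = torch.randn(batch, steps, features)  # dummy time series
outputs, last_hidden = rnn(states)            # outputs: (batch, steps, hidden)
next_state = head(outputs[:, -1])             # prediction for step t+1
print(next_state.shape)                       # torch.Size([8, 4])
```

The RNN carries a hidden state from one time step to the next, which is exactly what lets it capture how an agent's past states influence its future ones.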

An excellent framework for modelling multi-agent systems and ecosystems is CRAFTY, in which a year is the simulation time step: each year the simulation produces output based on the agents' parameters and their reward (goal), which could be wealth maximisation or maximum food production. The framework contains its own controlled market based on supply and demand. This is a magnificent playground for RNNs.

Another great real-life example of such data is Google's prediction of a Beijing traffic heat map a few minutes or hours into the future.

To create such networks we need a tool. Personally, I recommend PyTorch.

What is PyTorch

Along with its biggest competitor, TensorFlow, PyTorch is a Python scientific computing package targeted at two audiences:

- Those who want the power of NumPy-style computation on GPUs

- Those undertaking deep learning research who want a Pythonic coding style and a much better user experience than TensorFlow offers

Thanks to the PyTorch library we can create neural network models without worrying about computations such as derivatives or any of the tensor operations involved in the process. One problem, however, is that writing those models efficiently requires software engineering skill. This is where PyTorch Lightning takes up the challenge of decoupling engineering from pure science.
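To see how PyTorch takes care of derivatives for us, here is a tiny autograd example (the function and value are arbitrary):

```python
import torch

# PyTorch builds a computation graph and differentiates it for us.
x = torch.tensor(3.0, requires_grad=True)
y = x ** 2 + 2 * x   # y = x^2 + 2x
y.backward()         # compute dy/dx automatically
print(x.grad)        # dy/dx = 2x + 2 = 8 at x = 3
```

This same mechanism is what backpropagates the loss through every layer of a neural network, with no derivative written by hand.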

PyTorch Lightning

PyTorch Lightning is an abstraction over plain PyTorch: you don't have to remember all the tiny details and can instead focus on pure development. Another great feature is that it provides warnings and machine learning tips to help you fully utilise your resources. For example, given 8 CPU cores, if you utilise only 1 of them Lightning will notify you that the model could be made much swifter.

Probably the most useful advantage for any university-based institution with several GPUs at its disposal is that PyTorch Lightning does the job of splitting the load among GPUs for you. There is no need to handle data parallelism manually anymore.

Tutorial

The official PyTorch Lightning repository and tutorials can be found here: https://github.com/PyTorchLightning/pytorch-lightning

As a prerequisite, knowledge of Python is needed: see the Python tutorial.

Let me show you an example of how to convert an existing model to PyTorch Lightning, or, if you are not familiar with PyTorch, how to create a model in PyTorch Lightning straight away.

The first nuance we can observe is that we no longer need to specify whether our model is in training mode or evaluation mode; we can get rid of all the model.train() and model.eval() calls straight away.

As we can see, we no longer need to remember to reset the gradients in each training loop (or even to write the loop itself, as we will see later), nor to backpropagate or step the optimizer. Detaching data from the model, for example to visualise it, is also no longer necessary, as outputs are automatically moved from GPU to CPU. Another bonus is TensorBoard support, which I will not cover in this tutorial.
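For reference, this is the boilerplate that disappears. A plain PyTorch training loop (toy linear model and random data, purely illustrative) looks roughly like this:

```python
import torch
import torch.nn as nn

# A toy regression model and dummy data.
model = nn.Linear(2, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()
x, y = torch.randn(64, 2), torch.randn(64, 1)

model.train()  # mode switching — handled for you in Lightning
for epoch in range(5):           # the epoch loop Lightning writes for you
    optimizer.zero_grad()        # reset gradients each iteration
    loss = loss_fn(model(x), y)
    loss.backward()              # backpropagate
    optimizer.step()             # update parameters
    # detach before e.g. plotting — also automatic in Lightning
    history = loss.detach().cpu().item()
```

In Lightning, everything in the loop body except computing the loss is taken care of by the framework.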

Let me show you a quick example of converting an old PyTorch model into a PyTorch Lightning module.

First, we change our class to inherit from a different base class:

From torch.nn.Module

To pytorch_lightning.LightningModule

From now on we treat the current object as the model, so instead of creating an instance of it we can just use the keyword self.

Also, to call the model we can simply write self(input_data) instead of model(input_data).

Now we have to implement five additional functions for PyTorch Lightning to work: training_step, forward, train_dataloader, val_dataloader and validation_step.

To compare the level of transparency between old PyTorch and Lightning PyTorch, take a look at my repo, where you can see how the model looked before the transformation and notice that we no longer need to take care of a few additional things such as the epoch loop: https://github.com/FamishedHound/pyTorchLightining-/commits/master

Disclaimer

The opinions expressed by our bloggers and those providing comments are personal, and may not necessarily reflect the opinions of Lancaster University. Responsibility for the accuracy of any of the information contained within blog posts belongs to the blogger.

