As mentioned in an earlier blog post, I have been busy outside of university working on a game called OIL with the aim of releasing it before graduation. This week was a big week for OIL: I attended the Pocket Gamer Connects conference in London, where we took part in the Very Big Indie Pitch (and came 2nd!). In addition, we had meetings all week with various people, such as Amazon and Apple. We were also very fortunate to have a playtest session with ustwo games (Monument Valley), who spent time playing OIL with us and giving us feedback.
Unfortunately, this meant that I didn't have access to a powerful PC and had very little time to myself, so I was unable to do any meaningful work on my simulation. I had a little think about potential new solutions, but didn't come up with anything actionable.
The semester has started and development of my simulation continues. Last semester I left the project in a state where I could deploy the simulation using SpatialOS; however, it only ran on one virtual machine, so it served purely as a demonstration of my understanding of the SpatialOS pipeline. I did have a multi-worker deployment, but the lack of communication between the workers was a big issue and became my priority going into this year.
After thinking about the problem, I identified two key needs. The first is the creation of a data structure which would hold all of the information about a body that would be distributed. This structure would hold the minimum amount of data possible, to make the solution scalable without using excess bandwidth. The key pieces of data required are the mass of the body and the position of its centre of mass. Additionally, the data structure could be combined with other local objects to produce a compound structure which holds the combined mass and centre of mass for a group of bodies. This would lower the accuracy of the simulation, as the exact position of each individual body is required for a true simulation of gravity, but the network-usage savings could be enough to outweigh the negative effects.
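As a sketch of that idea (Python for brevity; the actual project is in C#, and all names here are mine, not from the project):

```python
from dataclasses import dataclass

@dataclass
class BodyData:
    """Minimal distributable record for a body: mass and centre of mass."""
    mass: float
    com_x: float
    com_y: float

def combine(bodies):
    """Fold a group of bodies into one compound record holding their
    total mass and mass-weighted centre of mass."""
    total = sum(b.mass for b in bodies)
    cx = sum(b.mass * b.com_x for b in bodies) / total
    cy = sum(b.mass * b.com_y for b in bodies) / total
    return BodyData(total, cx, cy)
```

A compound record like this is all a distant body needs to approximate the group's gravitational pull, which is exactly the trade-off described above.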
The second key need would be a simulation manager, a global object which would control the passing of the data structures so that each body, or group of bodies, has enough information about the rest of the system. This problem is not as easily designed around. The global manager would have to be managed by a worker, which means it does not have continuous access to data originating from other workers. To access this data, queries would need to be sent through the SpatialOS platform. SpatialOS provides Streaming Queries, which are specifically for objects that need regular updates about other objects in the world, as opposed to regular Queries, which are for infrequent requests. However, the update frequency these are designed for is completely different to that required by this simulation: they are intended for periodic (~2 second) updates about newly available information, rather than the frame-by-frame updates needed for the calculations being performed.
Whilst this could still potentially work, all the accuracy and performance of the simulation would be completely lost and it would certainly not be scalable. The aims of this project are to test how SpatialOS can be used to run a performant and accurate scientific simulation. Others have written about getting used to SpatialOS's unique design and programming patterns, and there's lots of helpful discussion on their website. In particular, this blog talks about how to rethink world management so that agents are their own managers and do not rely on any global manager. Fortunately for those developers, they were developing a city simulation, which has great granularity and little need for agents in vastly different physical locations to interact with one another, unlike the n-body gravity simulation.
I came to the realisation that it would probably be best to cut my losses and start work on a different simulation model, something more inherently suited to the way in which SpatialOS is designed. My next tasks are to speak with my supervisor to see if this is feasible at this stage and to see if she has any suggestions for alternate simulations.
This week I continued with the task of implementing the prototype simulation. This began with attempting to adapt the Barnes-Hut algorithm for a SpatialOS application. After failing to get the simulation to run on SpatialOS, I decided to try a fresh Unity project with an implementation of the algorithm and no attempt to incorporate SpatialOS. As a result I was able to fine-tune the numbers and parameters of the algorithm so that it produced a simulation that behaved in the expected manner.
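The actual prototype is a C# Unity project, but the core Barnes-Hut idea is worth illustrating: a quadtree whose nodes store aggregate mass and centre of mass, with an opening-angle parameter θ controlling the accuracy/speed trade-off. Below is a minimal 2D Python sketch of that idea; the scaled units and all names are my own assumptions, not the project's code:

```python
import math

G = 1.0      # gravitational constant in scaled units (assumption)
THETA = 0.5  # opening angle: lower is more accurate but slower

class Node:
    """A square region of space: empty, one body, or four sub-regions."""
    def __init__(self, cx, cy, half):
        self.cx, self.cy, self.half = cx, cy, half  # centre and half-width
        self.mass = 0.0
        self.mx = self.my = 0.0   # mass-weighted position accumulators
        self.body = None          # (x, y, m) when a leaf holds one body
        self.children = None      # quadrant -> Node, created lazily

    def _child(self, x, y):
        key = (x >= self.cx, y >= self.cy)
        if key not in self.children:
            q = self.half / 2
            self.children[key] = Node(self.cx + (q if key[0] else -q),
                                      self.cy + (q if key[1] else -q), q)
        return self.children[key]

    def insert(self, x, y, m):
        if self.mass == 0.0:              # empty leaf: store the body here
            self.body = (x, y, m)
        elif self.children is None:       # occupied leaf: subdivide
            self.children = {}
            bx, by, bm = self.body
            self.body = None
            self._child(bx, by).insert(bx, by, bm)
            self._child(x, y).insert(x, y, m)
        else:                             # internal node: recurse
            self._child(x, y).insert(x, y, m)
        self.mass += m
        self.mx += m * x
        self.my += m * y

    def force_on(self, x, y, m):
        """Force on a body at (x, y); distant regions are approximated
        by a single point mass at their centre of mass."""
        if self.mass == 0.0:
            return 0.0, 0.0
        comx, comy = self.mx / self.mass, self.my / self.mass
        dx, dy = comx - x, comy - y
        dist = math.hypot(dx, dy)
        if dist == 0.0:
            return 0.0, 0.0               # skip the body itself
        if self.children is None or (2 * self.half) / dist < THETA:
            f = G * self.mass * m / dist ** 2
            return f * dx / dist, f * dy / dist
        fx = fy = 0.0
        for child in self.children.values():
            cfx, cfy = child.force_on(x, y, m)
            fx, fy = fx + cfx, fy + cfy
        return fx, fy
```

G and THETA are exactly the kind of "numbers and parameters" that needed tuning before the behaviour looked right.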
Alongside the creation of this initial prototype I looked into the ways the SpatialOS documentation suggested I could carry out neighbour searching. A suggested technique for local queries is Unity's 'FindGameObject' family of functions, which is the method used in this prototype. This negated the need for an overarching simulation manager and allowed all of the functionality of the simulation to be split into the Planet script attached to each body. Therefore, when the prototype is implemented in SpatialOS, the bodies can be under the authority of different workers and the simulation can still run.
From this prototype an initial SpatialOS implementation was produced, which ran as a local deployment entirely on my computer. Building it helped me fully understand the concept of SpatialOS components and the need to use them for any persistent data controlled by the bodies, in particular their world-space coordinates and their velocity.
I also experimented with client-side functionality, implementing a script that is attached to all bodies but runs only on the client. This script queries the Velocity and Position SpatialOS components and maps this data to a colour for each body on the client. The purpose of this was to understand how the client can interact with the simulation without any authority over the bodies, which will pave the way to an interactive simulation.
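The colour mapping itself can be very simple. Here is an illustrative Python sketch of one way to derive a colour from velocity (the real script is C#; the hue mapping and the `max_speed` parameter are my own assumptions):

```python
import math
import colorsys

def body_colour(velocity, max_speed=10.0):
    """Map a body's speed to an RGB colour: slow bodies blue, fast red.
    Read-only: derived from state the client has no authority over."""
    speed = min(math.hypot(velocity[0], velocity[1]), max_speed)
    hue = (1.0 - speed / max_speed) * (2.0 / 3.0)  # 2/3 is blue, 0 is red
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)
```

Because the function only reads component data and never writes it, it can run safely on a client with no authority over the bodies.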
Research was carried out into the launch configuration .json and the worker configuration .json, which control the load balancing of the simulation. The information gathered was used to adjust the project's parameters so that the area in which the simulation occurs could be split between the domains of different workers. By limiting the number of bodies each worker can interact with, the number of workers managing the simulation can also be controlled. It is possible to define the number of workers directly; however, they are unlikely to share their load while each is still controlling fewer bodies than its limit.
After a cloud deployment using the initial parameters (and therefore one worker) was created, a multi-worker implementation was attempted both locally and in the cloud. The simulation did run, but it did not behave in the expected manner: the neighbour-searching method only found bodies attached to the same worker, so the simulation fragmented along worker domains. The next stage of work will be creating a neighbour search which looks to other workers, as well as locally, so that the simulation behaves as if it were managed by one machine.
Below is a video which demonstrates these prototypes:
Outside of my university studies I am working on a game called OIL, which was a winner of the Abertay Dare Academy competition. As part of the prize I went to India on university business for over a week, and immediately before that I had a family event in Sweden which had been booked before we even entered the Dare Academy competition, let alone won it, so I did not expect my term to be as heavily impacted as it was. Throughout the term I have also had to dedicate a not inconsiderable amount of time to the development of OIL, which has slowed my honours progress more than I would have liked, and the travel required stopping honours work altogether for that period. Upon returning to the country I focused on completing my other university work, leaving only my honours, which will be the focus from next week.
I spent a lot of time in the previous week thinking about and researching threading, only to come to the realisation that I should avoid it altogether, which was a fairly big hit to morale. I had been very keen to dive into research without dedicating the time to think about the problem space and whether the approach I was researching was really the best one.
I also realised that my original aims for this semester did not include a functioning version of the simulation anyway, and that establishing a performance test would be a valuable first task, as it could inform the simulation implementation.
I mostly thought about my approach to the project this week, whilst also working on my other university work in preparation for my mid-term break, which will be explained in the next post.
I created a Unity project based on the SpatialOS starter project, which has a small amount of the core SpatialOS-Unity functionality already implemented, and within this project converted the Java algorithm I had found into C# scripts.
My reasoning behind jumping straight into implementation before doing any documented planning of the software architecture was that I would be more likely to identify the areas that would make solid groundwork for the plan. I personally find that it can be quite difficult to make a useful plan without a good understanding of the problem space, and part of that understanding can be gained by having an initial attempt at implementation.
Straight away this proved to be the case, as I came across an issue I had not considered: how I would structure the parallelisation of the algorithm. Most of my work this week was based around researching threading and parallelisation within the contexts of Unity and SpatialOS. Unity is not a thread-safe API and neither is SpatialOS, so only limited calculations can be performed using the System.Threading namespace. The simulation to be performed is not an 'embarrassingly parallel' problem, as there is a lot of shared data access, which does not help with the complexity of the task. The main issue arises from the need for each body to receive data about every other body each frame. The algorithm I had researched uses a controller class which stores all of the bodies and performs all body-body communication. This would not really work within the context of SpatialOS, as a virtual machine or 'worker' should not need that kind of information about all other workers; it would break the core way in which SpatialOS is constructed.
A solution really needs to be found which treats bodies as reasonably self-contained units that perform their own neighbour searching and data acquisition (as well as their calculations) without reliance on a manager. This would mean their method of neighbour acquisition could be modified in a sequence as described in my proposal.
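A rough Python sketch of that design, with the neighbour-acquisition strategy passed in as a pluggable callable so it can be swapped without touching the body itself (all names and the scaled units are my own assumptions):

```python
import math

class Body:
    """A self-contained body: owns its state and pulls the data it needs
    through a neighbour-finding callable, with no global manager."""
    def __init__(self, x, y, mass):
        self.x, self.y, self.mass = x, y, mass
        self.vx = self.vy = 0.0

    def step(self, find_neighbours, dt):
        # find_neighbours(self) yields (x, y, mass) tuples; swapping the
        # callable changes the acquisition strategy (local-only,
        # cross-worker, aggregated) without changing the body.
        ax = ay = 0.0
        for nx, ny, nm in find_neighbours(self):
            dx, dy = nx - self.x, ny - self.y
            d = math.hypot(dx, dy)
            if d == 0.0:
                continue
            a = nm / d ** 2   # G = 1 in scaled units (assumption)
            ax += a * dx / d
            ay += a * dy / d
        self.vx += ax * dt
        self.vy += ay * dt
        self.x += self.vx * dt
        self.y += self.vy * dt
```

In a SpatialOS context, each worker would step only the bodies it has authority over, and the sequence of tests from my proposal becomes a sequence of different `find_neighbours` implementations.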
I researched the n-body simulation to look for existing algorithms, as developing the simulation algorithm from scratch is not the purpose of the project. I found a Java implementation on the Princeton physics department's website, which I analysed and thought would be a good starting point for building a prototype of the simulation.
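The structure of that kind of implementation, as I understand it, is a controller object that owns every body and computes all pairwise forces each step. Here is a rough 2D Python sketch of that pattern (not the Princeton code itself; all names are mine):

```python
import math

G = 6.674e-11  # Newtonian gravitational constant

class Universe:
    """Controller-style brute force: one object owns every body and
    computes all pairwise forces each step, O(n^2)."""
    def __init__(self, bodies):
        self.bodies = bodies  # each body: [x, y, vx, vy, mass]

    def step(self, dt):
        forces = []
        for i, (xi, yi, _, _, mi) in enumerate(self.bodies):
            fx = fy = 0.0
            for j, (xj, yj, _, _, mj) in enumerate(self.bodies):
                if i == j:
                    continue
                dx, dy = xj - xi, yj - yi
                d = math.hypot(dx, dy)
                f = G * mi * mj / d ** 2
                fx += f * dx / d
                fy += f * dy / d
            forces.append((fx, fy))
        for b, (fx, fy) in zip(self.bodies, forces):
            b[2] += fx / b[4] * dt   # velocity from acceleration
            b[3] += fy / b[4] * dt
            b[0] += b[2] * dt        # then position from velocity
            b[1] += b[3] * dt
```

The central `Universe` object here is exactly the kind of all-knowing controller that sits uneasily with SpatialOS's worker model, which is what motivated the search for a more self-contained design.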
I had a really useful meeting with my supervisor, Ruth, as well as with James Bown, in response to an initial draft of my proposal essay which I had created this week.
The first major takeaway was the answer to a question I had come across a couple of weeks ago regarding execution speed of the simulation. I was unsure whether to create a real-time application or one which executed the simulation very quickly to gather more data for statistical analysis. Two points were raised on this matter.
The first point was the scientific approach of using a model. When little is known about how the simulation will perform, it makes more sense to have it execute in real time. This means that adjustments may be made on the fly, and the simulation can be observed while it operates, making it easier to debug. Once an interesting behaviour is noticed and can be replicated, it becomes useful to create a version that executes rapidly, so that the behaviour can be statistically analysed.
The second point was regarding my personal development. I am undertaking this project from a games background, therefore it makes sense to use my ability as a games developer and create a real-time application which can be demonstrated more easily to interested parties, rather than a list of data with accompanying statistical analysis, which is relatively less useful in the context of games development.
In the initial draft of the proposal the methodology was not particularly well fleshed out, as I wasn't really sure what I would be testing for or what to model. Core things which really need to be outlined in a proposal! Ruth and Jim were able to give me great pointers as to which direction to go in. It was already decided that I would be using an agent-based model, so a sequence of tests was suggested which looks at the performance of agent-agent communication, ranging from client-agent communication only to global random agent-agent communication. This could then be extended with a number of features such as:
Environmental interactions, e.g. diffusion of a substance through the simulation
As the simulation itself is not of huge importance, only its performance characteristics, an existing simulation algorithm should be used. For this, n-body simulations were suggested. These are usually models of Newtonian gravity with complex mathematical functionality and agent-agent communication. Two main implementations exist: 'brute force' and 'Barnes-Hut'. They mostly differ in how they gather data from other agents, and my own implementation will likely use them as a basis, as data gathering from agents will be a key component being tested.
This week I spent some time researching development platforms for the creation of the simulation. I had initially decided that I would create example applications for three different environments and would performance test these to decide which one to use. On considering this idea further I realised that this could constitute an honours project by itself, and would not be a good use of time when it is largely inconsequential to my aims. Justification of a chosen environment could be achieved through research instead.
I watched a talk given by Improbable at this year's GDC, where they gave an introduction to SpatialOS and invited a range of developers to give further insight and testimonials. The talk explained that the Unity implementation had been maintained from the beginning and was by far the most developed. This indicated to me that it made more sense to use it over the Unreal implementation, which has only recently been released.
The talk also helped solidify some of the concepts of the platform, which further helped my decision regarding the development environment. All of the simulation complexity will be handled by dedicated workers in the cloud, which means that the rendering of the simulation should not affect its performance at all. All other parts of the game engine can be used as needed, so using a game engine rather than a more bare-bones framework shouldn't introduce any performance issues, especially for a real-time simulation.
This resulted in a decision being made to use Unity as the development environment.