Having already created a simulation and converted it into a SpatialOS deployment once before, the process of doing the same for this simulation was relatively painless. I was quickly able to run a local deployment, and a cloud deployment on a single virtual machine followed without too many issues. I did notice, however, that performance was lower than that of the implementation in a standard Unity application.
I initially began expanding the simulation to multiple workers using a local deployment, but this proved quite difficult. The performance overhead of each local worker meant that even a tiny simulation quickly saturated my laptop's CPU, making debugging slow.
The solution was to upload the deployment to the cloud. Unfortunately, every deployment had to be built, uploaded and processed before it could run in the cloud, a lengthy cycle that made progress extremely slow and frustrating.
The main issue I encountered was with the spatial layout of the workers and how their domains are calculated and assigned. This is managed through the SpatialOS deployment configs, specifically the load balancing configs within them. These configurations essentially control how many workers are active, how large each worker's domain is, where in space each domain sits, and how the domains are arranged relative to one another. They cannot really be tested without deploying, and some of the parameters can have unexpected effects, so getting the load balancing configuration right required multiple uploads.
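To give a sense of the shape of these configurations, the fragment below is a sketch in the general style of a SpatialOS JSON launch config. The exact field names and structure varied between SDK versions, so the layer name, grid strategy and dimensions here are illustrative assumptions, not the values used in this project (JSON permits no comments, so all caveats live in this paragraph).

```json
{
  "world": {
    "dimensions": { "x_meters": 1000, "z_meters": 1000 }
  },
  "load_balancing": {
    "layer_configurations": [
      {
        "layer": "UnityWorker",
        "hex_grid": { "num_workers": 4 }
      }
    ]
  }
}
```

The coupling visible here is exactly the problem described above: the world dimensions, the number of workers and the grid strategy jointly determine each worker's domain size and position, and the consequences of a change only become apparent once the deployment is actually running.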
The best layout discovered for this implementation was, counter-intuitively, random placement with a small maximum domain size. The rate of worker death remained very high because individual workers became overloaded, but the simulation did run. The per-worker overhead and the volume of network communication required to propagate body updates meant that this deployment was still slow in comparison to the standard Unity implementation.
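A rough back-of-envelope model illustrates why the update traffic dominates. The numbers below are assumed for illustration only, not measurements from this project: the point is that traffic scales with the product of body count, update rate and the number of workers interested in each body, so shrinking domains (and thereby increasing overlap and fan-out) raises network cost even as it reduces per-worker load.

```python
def updates_per_second(num_bodies, updates_per_body_hz, interested_workers):
    """Estimate component updates crossing the network per second.

    Assumes each body sends `updates_per_body_hz` position updates, and each
    update is fanned out to `interested_workers` workers on average. These are
    hypothetical parameters for illustration, not SpatialOS API values.
    """
    return num_bodies * updates_per_body_hz * interested_workers

# Assumed figures: 1000 bodies updating at 15 Hz, each visible to 3 workers.
print(updates_per_second(1000, 15, 3))  # 45000 updates/s
```

Even at these modest assumed figures the fan-out reaches tens of thousands of updates per second, which is consistent with the slowdown observed relative to the single-process Unity implementation, where body state never leaves local memory.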