Although the PiGlow visualisation of CPU usage was pretty, we reckoned we could go a couple of steps further and build a much more complete tangible solution: a hardware-driven load monitor dashboard.
Made of cardboard.
This was to be driven by two high-torque servos (Ben had them lying around), which would rotate according to whichever performance indicator we chose. Servos are not, of course, very good pointers, so with a trusty craft knife to the fore we repurposed some Pi packaging into a cardboard user interface.
On the code side, we particularly wanted to monitor the load across the entire cluster, so we ended up writing our own Python HTTP API with Flask and copying client scripts over to all the nodes. The server code, which sits on the top Pi in the stack, does two things: first it listens for updates from the clients and calculates the metrics (mean CPU % and memory %), and then it drives the servos on the dashboard. These rotate (clockwise) to indicate the scale of the metric, as you can see in the video:
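To give a flavour of the server side, here's a minimal sketch of that kind of Flask aggregator. The endpoint names, payload fields, and port are assumptions for illustration, not necessarily what's in our repository:

```python
# Minimal sketch of the aggregation server. Each node is assumed to POST
# JSON like {"host": "pi1", "cpu": 42.0, "mem": 31.5}; the route and
# field names here are illustrative.
from flask import Flask, request, jsonify

app = Flask(__name__)
latest = {}  # most recent (cpu, mem) reading per node, keyed by hostname

@app.route("/update", methods=["POST"])
def update():
    reading = request.get_json(force=True)
    latest[reading["host"]] = (reading["cpu"], reading["mem"])
    return "", 204

@app.route("/metrics")
def metrics():
    if not latest:
        return jsonify(cpu=0.0, mem=0.0)
    cpus, mems = zip(*latest.values())
    # The cluster-wide means are what drive the two servo pointers
    return jsonify(cpu=sum(cpus) / len(cpus), mem=sum(mems) / len(mems))

# On the top Pi this would be started with something like:
#   app.run(host="0.0.0.0", port=5000)
```

The dashboard loop then only has to poll `/metrics` (or hook into the update handler directly) and move the servos accordingly.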
The video shows how the load increased across the swarm as we ran Apache Bench against a simple Node.js application.
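The client side can be surprisingly small. Here's a standard-library-only sketch of the sort of script each node runs; the server URL, payload shape, and the way the stats are read are assumptions for illustration (the real scripts may well use a library such as psutil instead):

```python
# Hypothetical per-node client: read CPU and memory usage, POST to the
# aggregation server. Uses only the standard library; /proc/meminfo is
# Linux-specific, which is fine on a Pi.
import json
import os
import socket
import urllib.request

def cpu_percent():
    """Approximate CPU usage as the 1-minute load average over core count."""
    load1, _, _ = os.getloadavg()
    return min(100.0, 100.0 * load1 / (os.cpu_count() or 1))

def mem_percent():
    """Compute memory usage from /proc/meminfo (values are in kB)."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0])
    used = info["MemTotal"] - info["MemAvailable"]
    return 100.0 * used / info["MemTotal"]

def post_metrics(server_url):
    """Send one reading to the server (URL is an assumed example)."""
    payload = json.dumps({
        "host": socket.gethostname(),
        "cpu": cpu_percent(),
        "mem": mem_percent(),
    }).encode()
    req = urllib.request.Request(
        server_url, data=payload,
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=5)
```

In practice each node would call `post_metrics("http://<top-pi>:5000/update")` in a loop with a short sleep, and the load itself was generated with Apache Bench along the lines of `ab -n <requests> -c <concurrency> http://<node>:<port>/`.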
We had one main problem: how do we stop the servos juddering due to interference on the SPI lines? The best fix we found was to call the detach() method on each of our Python servo objects after every movement. This helped greatly, but as you can see in the video, it could still do with some improvement.
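To illustrate the detach trick, here's a sketch using the pigpio library, where setting the pulse width to 0 stops the pulse train entirely, the equivalent of detach(): with no signal being driven, line noise between updates can't twitch the servo. The pin number, pulse range, and settle time are assumptions, and this isn't necessarily the library our code uses:

```python
# Sketch of moving a dashboard servo and then "detaching" it.
# With pigpio, set_servo_pulsewidth(pin, 0) switches the pulses off,
# which plays the same role as detach() in our servo library.
import time

def percent_to_pulse(pct, lo=1000, hi=2000):
    """Map a 0-100% metric onto a servo pulse width in microseconds."""
    pct = max(0.0, min(100.0, pct))
    return int(lo + (hi - lo) * pct / 100.0)

def point_servo(pi, pin, pct, settle=0.4):
    """Rotate the servo to the metric's position, then stop the pulses."""
    pi.set_servo_pulsewidth(pin, percent_to_pulse(pct))
    time.sleep(settle)                # let the servo reach its target
    pi.set_servo_pulsewidth(pin, 0)   # no pulses, so noise can't cause judder

# Usage on the Pi (with the pigpio daemon running; GPIO 18 is an example):
#   import pigpio
#   pi = pigpio.pi()
#   point_servo(pi, 18, mean_cpu_percent)
```

The trade-off is that a detached servo holds no torque, which is fine for a lightweight cardboard pointer.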
As with the PiGlow code, it's all available in the GitHub repository, so please feel free to check it out, fork it, etc.
My plans for the future of this swarm include distributing a Hadoop cluster across it and running one node as a staging server for a large project of mine. Stay tuned to see what I get up to with the mean, lean, number-crunching Pi machine!