Behind-the-scenes NodeJS application, with restricted RESTful API access, for Carmen Data Ltd
When I joined Carmen Data Ltd in 2007 as the only full-time developer, I took over the maintenance of the single dedicated hardware server that hosted 100+ websites.
As well as actively developing the front and back ends of all our sites and acting as database administrator, I found life much easier once I had learnt the following as well:
- Linux administration
- Bash programming
- Command line MySQL
- Apache server configuration
- ColdFusion server management
Over the years, the scripts and maintenance programs that had been written became fragmented: they were spread across multiple languages and were all triggered independently of one another. Some of the processes were web pages that dealt with large data imports and slow calculations. Stepping back and reviewing the situation made it clear that a solution needed to be designed from the ground up to handle all of these processes.
In 2016 I designed a NodeJS Worker Server to take control of all tasks that were not directly related to serving our websites.
The intention was to port all of our Bash, NodeJS and ColdFusion scripts into a single, central NodeJS application on its own Cloud Server. This tied in nicely with the migration of our Web Server onto another Cloud Server, which would sit behind a Load Balancer.
We built a queue system for tasks to wait in, preventing multiple conflicting processes from running at the same time. We added a priority and inheritance system so tasks could skip to the front of the queue and push their dependents to the back, ensuring the high-priority work finished first.
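The core idea can be sketched in a few lines. This is an illustrative miniature, not the production code: tasks wait in a priority-ordered queue and only one runs at a time, so conflicting jobs never overlap.

```javascript
// Minimal sketch (names are illustrative): a queue where tasks
// wait their turn, ordered by priority, running one at a time.
class TaskQueue {
  constructor() {
    this.tasks = [];      // pending tasks, highest priority first
    this.running = false; // only one task executes at a time
  }

  // priority: lower number = runs sooner
  add(name, priority, fn) {
    this.tasks.push({ name, priority, fn });
    this.tasks.sort((a, b) => a.priority - b.priority);
    this.runNext();
  }

  runNext() {
    if (this.running || this.tasks.length === 0) return;
    const task = this.tasks.shift();
    this.running = true;
    Promise.resolve()
      .then(task.fn)
      .catch((err) => console.error(task.name, 'failed:', err))
      .then(() => {
        this.running = false;
        this.runNext(); // pick up the next waiting task, if any
      });
  }
}
```

A real implementation would also need the inheritance rules described above, so that a jumped-ahead task can demote its dependents; the single `running` flag is the simplest way to guarantee mutual exclusion.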
The NPM Q library is used extensively throughout the codebase: https://www.npmjs.com/package/q
Using Q and promises allowed us to break up the code and execute it in a very modular fashion. This made it easy to maximise code reuse and report and log errors by using the catch feature built into promises.
Despite working hard to catch any potential errors, sometimes things go wrong, as every developer knows. So we make use of pm2 and keymetrics: http://pm2.keymetrics.io/
This allows us to monitor the status of the application, and pm2 will automatically attempt a restart if anything crashes. We also get instant email notifications of uncaught errors and automated server restarts.
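For reference, a pm2 setup along these lines is usually driven by an ecosystem file. This is an illustrative sketch only; the app name, script path and thresholds are assumptions, not our production values.

```javascript
// ecosystem.config.js — illustrative pm2 config, not the real one.
module.exports = {
  apps: [{
    name: 'worker-server',
    script: './worker.js',
    autorestart: true,    // restart automatically on crash
    max_restarts: 10,     // stop retrying after a crash loop
    restart_delay: 5000,  // wait 5s between restart attempts
    env: { NODE_ENV: 'production' }
  }]
};
```

Started with `pm2 start ecosystem.config.js`, pm2 then supervises the process and exposes its status to Keymetrics for the monitoring and alerting described above.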
By logging each task’s start and end points in our own database, we’ve built visual analysis tools to show us when the server is working and when it’s free. This means we can schedule certain heavier tasks during the downtimes, allowing us to balance the work of the server throughout the day.
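The logging side of this can be done with a small wrapper around each task. In this sketch, `logToDb` is a stand-in for the real database write; the wrapper records start, end and duration whether the task succeeds or fails.

```javascript
// Stand-in for the real database insert.
function logToDb(entry) { console.log(JSON.stringify(entry)); }

// Wrap a task so its start and end are recorded — the raw data
// behind the utilisation charts described above.
function withTiming(name, fn) {
  return function () {
    const startedAt = Date.now();
    logToDb({ task: name, event: 'start', at: startedAt });
    return Promise.resolve().then(fn).then(
      (result) => {
        logToDb({ task: name, event: 'end', ms: Date.now() - startedAt });
        return result; // pass the task's result through untouched
      },
      (err) => {
        logToDb({ task: name, event: 'error', ms: Date.now() - startedAt });
        throw err; // re-throw so the queue's error handling still fires
      }
    );
  };
}
```

Because the wrapper passes results and errors straight through, it can be applied to every queued task without changing any task's behaviour.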
In order to schedule tasks manually, we added a RESTful API, following Google’s API design guidelines: https://cloud.google.com/apis/design/
This allows us to add buttons to our administration user interfaces on our Web Server that can quickly add a task to the Worker’s queue and get feedback on its position in the queue and its status while running.
The Worker Server runs around the clock, handling a variety of tasks that keep our websites running smoothly, speed up data imports and make general database maintenance much easier. It runs a range of tasks, from randomising the vehicle manufacturers on a demo site to importing millions of rates into our MySQL HA Group.
It’s now an integral part of our system and I wish it was the first thing I’d built back in 2007.