Performance and Capacity Planning

Since all processing is done on the node, it's important to test your services before you run them in production. This post aims to give you insight into how things work behind the scenes, together with some tips and tricks on how to build your services to scale.

Source code

Although you don't need access to the source code to follow these instructions, you might be curious and want to clone or fork the repo from GitHub anyway. The repos can be found here:

How it works

microServiceBusHost.js is the object responsible for hosting all services. It is also responsible for all management communication with the Hub. By management communication we mean commands such as signing in and out, receiving updates, and other commands like stop/start and enabling/disabling debugging.

After the node has signed in, it receives all the Flows it participates in. Next, it iterates through all the Flows and examines every service to determine whether it should be hosted on this particular Node. If the service is to be hosted on the Node, microServiceBusHost will:

  1. Download the service from the Hub.
  2. Create the service and extend it with an instance of microService.js (*)
  3. Start the service
  4. Start a web hosting process (if there are any inbound REST services)
  • (*) microService is an abstract object with all the helper functions needed to communicate with the Hub's queues, topics and subscriptions.

Each service must implement three methods: Start, Stop and Process. As you might already know, there are three types of services: Inbound, Outbound and Internal.

The Process operation is only relevant to Outbound and Internal services, and is called by microServiceBusHost when a message has been assigned to the service.

Monitor your nodes

As briefly described in the first section, services are scripts managed and created in the portal, which are downloaded to the Node when it signs in. Once downloaded, microServiceBusHost instantiates each service and calls its Start function.

A delegated architecture such as this presents some challenges:

  • Different nodes run different services (scripts)
  • Nodes might interact with sensors and other hardware not available in a developer environment
  • Nodes are remote, and might be difficult to access

Your first source of information about what is happening on your node is the Tracking page in the portal. Tracking lets you drill down into instances of your Flows to examine each message together with its variables, properties and exception messages.

To get deeper insight into what is happening in your service, you can write debug output messages which you can monitor on the Console page in the portal. A normal console.log would have the same effect, but by using the this.Debug(…) operation, nothing is written to the console unless you explicitly enable debugging from the portal.
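The effect of this gating can be shown with a small stand-alone sketch. Note that the `debugEnabled` flag and the free-standing `Debug` function here are hypothetical stand-ins for the portal-controlled switch and the real this.Debug(…) helper:

```javascript
// Hypothetical illustration of how this.Debug(...) gates output:
// messages are only written when debugging has been enabled remotely.
let debugEnabled = false; // in the real host, toggled from the portal

function Debug(message) {
    if (debugEnabled) {
        console.log("[DEBUG] " + message);
    }
}

Debug("not shown");   // swallowed: debugging is off
debugEnabled = true;  // portal enables debugging
Debug("now visible"); // written to the Console page
```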

Profiling

As with any node.js application, your services can have dependencies on other npm packages. When using external resources, such as npm packages, it's important to make sure the service doesn't exhaust your system resources.

Through the settings.json file, you can gain insight into memory consumption by setting the value of trackMemoryUsage to anything but 0. The value represents how often you'd like the current memory consumption to be written to your console.
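For reference, a minimal settings.json fragment might look like the following. Other fields are omitted, and the value shown (assumed here to be an interval) is illustrative; check the documentation for your version for the exact unit:

```json
{
    "trackMemoryUsage": 60000
}
```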

Enabling trackMemoryUsage will print RSS (resident set size), heap total and heap used to your console.
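These three figures correspond to what Node itself reports through process.memoryUsage(). The snippet below shows how to read them yourself; the formatting is an assumption, not what microServiceBus prints verbatim:

```javascript
// Read the same figures the node reports: RSS, heap total and heap used.
const mem = process.memoryUsage();
const toMb = (bytes) => (bytes / 1024 / 1024).toFixed(1) + " MB";

console.log("RSS:        " + toMb(mem.rss));
console.log("Heap total: " + toMb(mem.heapTotal));
console.log("Heap used:  " + toMb(mem.heapUsed));
```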

If you observe a constant increase in memory usage, you might consider taking a heap snapshot. This can be done by pressing “d” in the console (not available remotely). This action creates a memory dump file and stores it in your installation folder. Before you do this, you'll have to install the heapdump npm package.

Heap snapshot files can be loaded in the Chrome developer tools to further analyze issues.

Back to help