How Much IT Infrastructure Do I Need? Capacity Planning for Workflow Automation

Learn about capacity planning for workflow automation

Last published at: March 6th, 2026

Capacity planning is the process of determining the resource capacity needed to meet expected production demands. In most organizations, capacity planning begins with a basic baseline capacity. Use cases covering potential production increases are then considered and factored into expansion planning.

A straightforward manufacturing example is a baseline scenario in which one machine produces 100 items per hour. However, if demand grows to 300 items per hour, how many additional machines (along with the space, workforce, utilities, storage, packaging, and other necessary resources) will be needed? The simple answer is two additional machines, for a total of three. Planning infrastructure for workflow automation similarly requires you to define the following:

  • How many processes do you need to run?
  • How fast do you need to execute them?
  • How much compute do you need (processors, memory, number of machines)?
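The manufacturing arithmetic above translates directly into a sizing formula. A minimal sketch; the numbers come from the example above, and the function name is illustrative:

```python
import math

def machines_needed(demand_per_hour: float, rate_per_machine: float) -> int:
    """Total machines required to meet demand, rounding up to whole machines."""
    return math.ceil(demand_per_hour / rate_per_machine)

# Baseline: one machine produces 100 items/hour; demand grows to 300/hour.
total = machines_needed(300, 100)  # 3 machines in total
additional = total - 1             # 2 more than the baseline machine
```

The same ceiling-division logic applies whether the unit is machines, CPU cores, or application servers.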

If you need to handle more processes in less time than your baseline infrastructure was designed for, you must add capacity to the infrastructure (or improve its speed and efficiency).

The foundational setup for FlowWright consists of two servers:

  • an application server and
  • a database server.

Most organizations manage and share databases on a single database server. If you expect that your primary database server will be overburdened by processing needs, relocate the FlowWright database to a server with adequate resources.

The baseline application server for FlowWright has 8GB of memory, a Quad-core CPU, and 100GB of storage. This setup represents the minimum required configuration for the FlowWright application server. When planning workflow capacity, the number of workflow instances processed, their complexity, and throughput needs will determine whether to scale up the application server's capacity and speed.

Sometimes, a certain amount of work must be completed within a specific timeframe. For example, if 10,000 instances need to be processed in one hour and the application server lacks the resources to handle them all, you can increase processing capacity in two ways.

  1. Resources such as the number of CPUs and memory can be increased in a virtual or cloud environment.
  2. If the application server runs on an on-premises or hosted physical server, FlowWright can be configured with another application server to handle distributed processing and your increased load.
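Either way, it helps to estimate how many servers the deadline implies. A rough sketch, assuming a measured per-instance processing time and per-server concurrency; the 2-second and 4-worker figures are illustrative assumptions, not FlowWright benchmarks:

```python
import math

def servers_needed(instances: int, seconds_per_instance: float,
                   window_hours: float, concurrency_per_server: int) -> int:
    """Servers required to finish `instances` within the time window.

    Each server runs `concurrency_per_server` instances in parallel.
    """
    capacity_per_server = concurrency_per_server * window_hours * 3600 / seconds_per_instance
    return math.ceil(instances / capacity_per_server)

# 10,000 instances in 1 hour, ~2 s each, 4 concurrent workers per server:
# one server handles 4 * 3600 / 2 = 7,200 instances/hour, so 2 servers are needed.
print(servers_needed(10_000, 2.0, 1.0, 4))
```

If the result exceeds one server, that points to option 1 (a larger server) or option 2 (distributed processing across additional application servers).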

Sometimes, capacity planning is difficult because the amount of work and its execution can be complex and unpredictable. For example, imagine a process with 87 steps, where after completing the 3rd step, the workflow often halts — sometimes for weeks or months. In this scenario, launching 1 million instances of this process might result in very little work per individual workflow, as the engine has few tasks to perform for each. The resources needed to process 87 complex steps differ greatly from those required to process just three simple steps. 

Sometimes, the processing load is primarily due to integration, while other systems handle the majority of resource-intensive computing. For example, one FlowWright customer processes millions of prescriptions daily, but most of the work on each prescription occurs on the client’s application server rather than FlowWright's workflow server. One task performed on each prescription is optical character recognition (OCR), which is CPU- and memory-intensive. OCR processes are assigned to an OCR server using FlowWright's asynchronous steps. These steps make a REST API call to other systems to perform the work and then become idle, consuming no resources from the FlowWright infrastructure. When the OCR server finishes its task, it calls FlowWright via the FlowWright API to continue processing the workflow instance. FlowWright manages the process and coordinates between systems, consuming minimal resources directly.

Capacity planning, like data science, can be supported by software solutions and tools designed specifically for this purpose. When performing capacity planning for workflow processes, the following variables must be included in the calculations.

  • Number of workflow instances processed
  • Number of steps processed continuously
  • How much processing specific complex steps perform
  • Number of processes that must complete within a given period
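These variables can be combined into a first-pass load estimate. A minimal sketch; the weighting model and sample numbers are illustrative assumptions, not a FlowWright formula:

```python
def estimated_step_executions(instances: int, steps_per_instance: int,
                              heavy_steps: int, heavy_weight: float) -> float:
    """Rough load estimate: total weighted step executions.

    Resource-intensive steps count `heavy_weight` times more than simple ones,
    reflecting that 87 complex steps cost far more than 3 simple ones.
    """
    simple_steps = steps_per_instance - heavy_steps
    return instances * (simple_steps + heavy_steps * heavy_weight)

# 10,000 instances of an 87-step process with 5 heavy steps weighted 10x:
load = estimated_step_executions(10_000, 87, 5, 10.0)  # 1,320,000 weighted executions
```

Dividing an estimate like this by measured per-step throughput gives a starting point for server sizing, which can then be refined against observed load.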

Sometimes, the size of the processing infrastructure also depends on the number and complexity of process decisions. Some processes may involve complex calculations that determine the workflow's path. For instance, a simple choice in a process might be deciding whether to OCR the incoming prescription, which has significant processing implications. If the process requires OCR, more processing is needed than if it does not.    
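The OCR decision above lends itself to an expected-load calculation across branches. A sketch with illustrative probabilities and per-instance costs (the 30%, 8-second, and 0.5-second figures are assumptions for the example):

```python
def expected_cost_per_instance(p_branch: float, cost_branch: float,
                               cost_other: float) -> float:
    """Expected processing cost per instance for a two-way decision,
    weighted by the probability of taking the expensive branch."""
    return p_branch * cost_branch + (1 - p_branch) * cost_other

# If 30% of prescriptions require OCR (~8 s) and the rest skip it (~0.5 s):
avg_seconds = expected_cost_per_instance(0.3, 8.0, 0.5)  # 2.75 s on average
```

Multiplying the expected per-instance cost by the instance count gives a workload estimate that accounts for decision-driven variability rather than assuming every instance takes the worst-case path.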

Capacity planning is an art. Some customers who automated their back-end server processes have already gone through this exercise, seen their business soar, and had to increase resources on their physical servers and virtual environments. If you need help with resource planning, we are here to help. Our team can analyze your environment, processes, and steps and recommend the proper infrastructure requirements.