Cluster
To improve data throughput and provide basic fault tolerance, you can configure multiple FlowForce Server instances to run as a cluster. This provides the following benefits:
• Load balancing
• Leaner resource management
• Scheduled maintenance
• Reduced risk of service interruption
Note: Cross-system clusters are not supported, which means that a worker-master connection cannot be established between different OS platforms (e.g., between Linux and Windows).
Load balancing
When multiple job instances running simultaneously would exceed the hardware limits of a single machine, you can redistribute the workload to other running instances of FlowForce Server (so-called "workers"). By setting up a cluster consisting of one master machine and multiple worker machines, you can take advantage of all the licensed cores in the cluster.
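The sketch below illustrates this idea only conceptually; it is not FlowForce Server's implementation, and the member names and core counts are invented. A new job instance is simply handed to the cluster member that currently has the most unused licensed cores:

```python
# Illustrative sketch only (not FlowForce Server code): dispatch a new job
# instance to the cluster member with the most unused licensed cores.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Member:
    name: str
    licensed_cores: int
    running_instances: int = 0

    @property
    def free_cores(self) -> int:
        return self.licensed_cores - self.running_instances

def pick_member(cluster: list[Member]) -> Member | None:
    """Return the member with the most spare capacity, or None if all are busy."""
    candidates = [m for m in cluster if m.free_cores > 0]
    return max(candidates, key=lambda m: m.free_cores, default=None)

cluster = [Member("master", 4), Member("worker-1", 8), Member("worker-2", 8)]
target = pick_member(cluster)
if target is not None:
    target.running_instances += 1
    print(f"Job instance dispatched to {target.name}")
```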
Leaner resource management
One of the machines, designated as the master, continuously monitors job triggers and allocates queued items to workers or even to itself, depending on the configuration. You can configure the queue settings and assign a job to a particular queue. For example, you can configure the master machine not to process any job instances at all. This frees up the master's resources and dedicates them exclusively to the continuous provision of the FlowForce service rather than to data processing.
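Conceptually, this behaves as if each global queue carried a "run on" setting that restricts which cluster members may process its instances. The sketch below is purely illustrative; the setting names, values, and member structure are assumptions, not FlowForce Server configuration syntax:

```python
# Conceptual sketch only; "run_on" values and the member structure are invented
# for illustration and are not FlowForce Server configuration keys.
from __future__ import annotations

QUEUE = {
    "name": "heavy-mappings",
    "run_on": "workers",   # illustrative values: "any" | "master" | "workers"
}

MEMBERS = [
    {"name": "ffs-master", "role": "master"},
    {"name": "ffs-worker-1", "role": "worker"},
    {"name": "ffs-worker-2", "role": "worker"},
]

def eligible_members(members: list[dict], queue: dict) -> list[dict]:
    """Filter cluster members according to the queue's 'run on' setting."""
    if queue["run_on"] == "master":
        return [m for m in members if m["role"] == "master"]
    if queue["run_on"] == "workers":
        return [m for m in members if m["role"] == "worker"]
    return list(members)   # "any": the master and all workers are eligible

print([m["name"] for m in eligible_members(MEMBERS, QUEUE)])
# -> ['ffs-worker-1', 'ffs-worker-2']: the master never processes these instances
```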
Scheduled maintenance of workers
You can restart or temporarily shut down any running instance of FlowForce Server that is not the master, without interrupting the provision of service. Note that the master is expected to be available at all times; restarting or shutting it down will still interrupt the provision of service.
Reduced risk of service interruption
In the case of hardware failures, power outages, unplugged network cables, etc., the impact depends on whether the affected machine is a worker or a master:
• If the machine is a worker, any running FlowForce job instances on that worker will be lost. However, the general provision of the FlowForce service will not be lost, because new instances of the same job will be taken over by a different worker (or by the master, if so configured). The execution status of the job, including failure, is reported to the master and is visible in the job log so that an administrator can take appropriate action manually.
• If the machine is the master, the provision of service will be lost. In this case, new job instances cannot start as long as the master is unavailable.
Terminology
The following terminology is used in conjunction with distributed execution and load balancing.
Server instance | A server instance is a running and licensed installation of FlowForce Server. Both services (FlowForce Web Server and FlowForce Server) are assumed to be up and running on the machine.
Cluster | A cluster is a group of FlowForce Server instances, running on different machines, that communicate for the purpose of executing jobs in parallel. A cluster consists of one master FlowForce Server and one or several workers.
Master | A master is a FlowForce Server instance that continuously evaluates job-triggering conditions and provides the FlowForce service interface. The master is aware of worker machines in the same cluster and may be configured to assign job instances to them, in addition to or instead of processing job instances itself.
Worker | A worker is a FlowForce Server instance that is configured to receive jobs from the master instance instead of running any local jobs. A worker can execute only jobs that the master has assigned to it.
Execution queue | A queue is a processor of jobs that controls the number of job instances that can run at a time and the delay between runs. Queue configuration lets you use server resources more efficiently (a minimal sketch of this throttling behavior follows this table).
You can create a queue inside a job (local queue) or define a queue as a standalone object (global queue). A local queue processes only the instances of the job in which it has been configured. A global queue can process instances of one or several different jobs.
Global queues provide a flexible mechanism for managing the server load on a single FlowForce machine as well as on a cluster (see below).
Local and global queues in a cluster environment (Advanced Edition)
Setting up a cluster means that processing is distributed among cluster members: one master machine and one or more worker machines. For a global queue, you can select where its instances run: on the master or any worker, only on the master, or only on a worker. With local queues, jobs can run only on the master machine and not on any other cluster member.
Security considerations
Queues use the same security access mechanism as any other FlowForce Server configuration object. Namely, a user must have the Define execution queues privilege in order to create queues (see also Define Users and Roles). In addition, users can view queues and assign jobs to queues if they have the appropriate container permissions (see also How Permissions Work). By default, any authenticated user gets the Queue - Use permission, which means they can assign jobs to queues.
To restrict access to queues, navigate to the container where the queue is defined and change the permission of the container to Queue - No access for the role authenticated. Next, assign the permission Queue - Use to any roles or users that you need. For more information, see Restrict Access to the /public Container.
To find out more about local and global queues, see Queues.
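The following sketch illustrates the throttling behavior that an execution queue provides, namely a cap on concurrently running instances and a minimum delay between starts. It is an illustration only; the class and parameter names are invented, and this is not FlowForce Server code:

```python
# Illustrative sketch of execution-queue throttling; not FlowForce Server code.
import time
from collections import deque

class ExecutionQueue:
    def __init__(self, max_parallel: int, min_delay_seconds: float):
        self.max_parallel = max_parallel        # instances allowed to run at once
        self.min_delay = min_delay_seconds      # minimum pause between starts
        self.pending = deque()                  # job instances waiting to run
        self.running = 0                        # job instances currently running
        self.last_start = float("-inf")         # time of the most recent start

    def submit(self, job_instance) -> None:
        self.pending.append(job_instance)

    def try_start_next(self):
        """Release the next pending instance if the queue limits allow it."""
        now = time.monotonic()
        if (self.pending
                and self.running < self.max_parallel
                and now - self.last_start >= self.min_delay):
            self.running += 1
            self.last_start = now
            return self.pending.popleft()       # caller actually executes it
        return None

    def instance_finished(self) -> None:
        self.running -= 1
```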