Distributed Execution Terminology
The following terms are used in the context of distributed execution and load balancing.
Server Instance
A server instance is a running and licensed installation of FlowForce Server. Both services (FlowForce Web Server and FlowForce Server) are assumed to be up and running on the machine.
Job Instance
A job instance is not the same as a job. When you configure a FlowForce job on the job configuration page, you are in fact creating a job configuration. Each time the trigger criteria defined for the job are met, an instance of that job starts running. Job instances are distributed within the cluster as defined by the execution queue associated with the job. A job instance always runs in its entirety on a single cluster member.
Cluster
A cluster is a group of FlowForce Server instances that communicate with one another in order to execute jobs in parallel and to redistribute jobs if an instance becomes unavailable. A cluster consists of one "master" FlowForce Server instance and one or more "workers".
Master
A "master" is a FlowForce Server instance that continuously evaluates job-triggering conditions and provides the FlowForce service interface. A master is aware of worker machines in the same cluster and may be configured to assign job instances to them, in addition to (or instead of) processing job instances itself.
Worker
A worker is a FlowForce Server instance that is configured to communicate with a master instance and does not run any local jobs of its own. A worker can execute only the job instances that a master FlowForce Server has assigned to it.
Execution Queue
An execution queue is a "processor" of jobs: it controls how job instances run. In order to run, every job instance is assigned to a target execution queue. The queue controls how many job instances (across all jobs assigned to that queue) can be running at any one time, as well as the delay between runs. By default, the queue settings are local to the job, but you can also define queues as standalone objects shared by multiple jobs. When multiple jobs are assigned to the same execution queue, they share that queue for execution.
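To make the two queue settings concrete, the following sketch models a queue that caps the number of concurrently running instances and enforces a minimum delay between starts. This is not FlowForce code; it is a conceptual Python illustration, and the names ExecutionQueue, max_parallel, and min_delay_seconds are hypothetical, chosen only for this example.

    import threading
    import time

    class ExecutionQueue:
        """Conceptual model of an execution queue: limits how many job
        instances run in parallel and enforces a delay between starts."""

        def __init__(self, max_parallel, min_delay_seconds):
            self._slots = threading.Semaphore(max_parallel)  # caps concurrent instances
            self._min_delay = min_delay_seconds              # pause between consecutive starts
            self._start_lock = threading.Lock()
            self._last_start = 0.0

        def run(self, job_instance):
            """Block until a free slot and the delay window allow this instance to start."""
            with self._slots:
                with self._start_lock:
                    wait = self._min_delay - (time.monotonic() - self._last_start)
                    if wait > 0:
                        time.sleep(wait)
                    self._last_start = time.monotonic()
                job_instance()  # the instance runs to completion in its slot

    # Usage: five instances share one queue; at most two run at a time,
    # and starts are spaced at least one second apart.
    queue = ExecutionQueue(max_parallel=2, min_delay_seconds=1)
    for i in range(5):
        threading.Thread(target=queue.run,
                         args=(lambda i=i: print(f"job instance {i} done"),)).start()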
Queues are subject to the same access-control mechanism as other FlowForce Server configuration objects. Namely, a user must have the "Define execution queues" privilege in order to create queues; see also How Privileges Work. In addition, users can view queues, or assign jobs to queues, only if they have the appropriate container permissions (which are not the same as privileges); see also How Permissions Work.

By default, any authenticated user gets the "Queue - Use" permission, which means they can assign jobs to queues. To restrict access to a queue, navigate to the container where the queue is defined and change the container's permission to "Queue - No access" for the authenticated role. Then assign the "Queue - Use" permission to the specific roles or users that need it. For more information, see Restricting Access to the /public Container.