Altova FlowForce Server 2025 Advanced Edition

At the core of distributed execution lies the concept of execution queues.

 

An execution queue is a processor of jobs: it controls how job instances run. Every job instance must be assigned to a target execution queue in order to run. The queue determines how many job instances (across all jobs assigned to that queue) may be running at any one time, as well as the delay between runs. By default, queue settings are local to the job (local queues), but you can also define queues as standalone objects (global queues) shared by multiple jobs.
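The following is a minimal conceptual sketch (in Python, not FlowForce Server code) of the two controls just described: a cap on the number of job instances running in parallel and a minimum delay between consecutive starts. All class names, parameters and defaults are illustrative assumptions.

```python
import threading
import time


class ExecutionQueue:
    """Caps parallel job instances and enforces a minimum delay between starts."""

    def __init__(self, max_parallel=1, min_delay_seconds=0.0):
        self._slots = threading.Semaphore(max_parallel)  # concurrency cap
        self._start_lock = threading.Lock()              # serializes starts
        self._last_start = float("-inf")
        self._min_delay = min_delay_seconds

    def run(self, job_instance):
        with self._slots:                                # wait for a free slot
            with self._start_lock:                       # enforce delay between runs
                wait = self._min_delay - (time.monotonic() - self._last_start)
                if wait > 0:
                    time.sleep(wait)
                self._last_start = time.monotonic()
            job_instance()                               # execute the job instance


# Usage: at most 2 instances in parallel, at least 1 second between starts.
queue = ExecutionQueue(max_parallel=2, min_delay_seconds=1.0)
threads = [threading.Thread(target=queue.run, args=(lambda: time.sleep(0.5),))
           for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```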

 

Global vs local queues

A local queue is created within the framework of a particular job. A global queue is created external to a job, as a standalone object.

 

Standalone (global) queues let you benefit from distributed processing, in which a cluster is formed from the master machine and one or more worker machines. Distributed processing is supported only in the Advanced Edition. Local queues behave like global queues, with one difference: local queues do not support distributed processing (clusters). This means that a local queue can run job instances only on the master machine, never on worker machines.
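To make the distinction concrete, here is a small illustrative sketch (Python, not FlowForce's internals) of which cluster members a job instance could be dispatched to: a local queue is confined to the master, while a global queue may also be configured to use workers. The names ClusterMember and eligible_members are assumptions for illustration only.

```python
from dataclasses import dataclass


@dataclass
class ClusterMember:
    name: str
    is_master: bool


def eligible_members(is_global_queue, run_on_workers, cluster):
    # A local queue can run job instances only on the master machine;
    # a global queue may additionally be configured to use workers.
    if not is_global_queue or not run_on_workers:
        return [m for m in cluster if m.is_master]
    return list(cluster)


cluster = [ClusterMember("master-1", True),
           ClusterMember("worker-1", False),
           ClusterMember("worker-2", False)]

# Local queue: master only.
print([m.name for m in eligible_members(False, False, cluster)])
# Global queue configured to also run on workers: all cluster members.
print([m.name for m in eligible_members(True, True, cluster)])
```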

 

Security considerations

Queues are subject to the same access-control mechanism as other FlowForce Server configuration objects. Namely, a user must have the Define execution queues privilege in order to create queues (see also Define Users and Roles). In addition, users can view queues and assign jobs to them if they have the appropriate container permissions (see also How Permissions Work). By default, any authenticated user gets the Queue - Use permission and can therefore assign jobs to queues. To restrict access to queues, navigate to the container where the queue is defined and change the container's permission to Queue - No access for the authenticated role. Then grant the Queue - Use permission to the roles or users that need it. For more information, see Restrict Access to the /public Container.
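The sketch below (Python) illustrates the effect of the container settings described above. The permission names mirror the text, but the evaluation logic and the "operators" role are illustrative assumptions, not FlowForce's documented permission-resolution algorithm.

```python
QUEUE_USE = "Queue - Use"
QUEUE_NO_ACCESS = "Queue - No access"


def can_assign_jobs(user_roles, container_permissions):
    """Return True if the user may assign jobs to queues in this container."""
    # Assumption for illustration: a grant to one of the user's specific
    # roles overrides the blanket entry for the built-in "authenticated" role.
    for role in user_roles - {"authenticated"}:
        if container_permissions.get(role) == QUEUE_USE:
            return True
    # Otherwise the "authenticated" entry applies; Queue - Use is the default.
    return container_permissions.get("authenticated", QUEUE_USE) == QUEUE_USE


# Default container: every authenticated user may assign jobs to queues.
print(can_assign_jobs({"authenticated"}, {}))                                 # True
# Restricted container: only members of the hypothetical "operators" role keep access.
restricted = {"authenticated": QUEUE_NO_ACCESS, "operators": QUEUE_USE}
print(can_assign_jobs({"authenticated"}, restricted))                         # False
print(can_assign_jobs({"authenticated", "operators"}, restricted))            # True
```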

 

Global queues

Shared (global) queues provide a flexible mechanism to control server load on a single FlowForce machine or on a cluster. Configuring load balancing is a multi-step process that comprises the following procedures:

 

1. First, create a queue.

2. Second, define the processing settings of each queue. For example, a queue can be configured to run only on the master, only on workers, or on both. It is also possible to define basic fallback criteria: for instance, a queue may be configured to run by default on the master and all its workers, but to fall back to the master only if all workers become unavailable (see the sketch after this list).

3. Third, assign jobs to the queue created previously.
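The following sketch (Python, illustrative names only, not FlowForce's API) shows the fallback idea from step 2: the primary processing setting applies while at least one worker is available; otherwise the queue falls back to a master-only setting.

```python
from dataclasses import dataclass


@dataclass
class ProcessingSetting:
    run_on_master: bool
    run_on_workers: bool
    max_parallel_per_machine: int


def active_setting(primary, fallback, workers_available):
    # Use the primary setting while at least one worker is reachable;
    # otherwise fall back (for example, to master-only execution).
    if primary.run_on_workers and workers_available == 0:
        return fallback
    return primary


primary = ProcessingSetting(run_on_master=True, run_on_workers=True,
                            max_parallel_per_machine=4)
fallback = ProcessingSetting(run_on_master=True, run_on_workers=False,
                             max_parallel_per_machine=2)

print(active_setting(primary, fallback, workers_available=3))  # primary applies
print(active_setting(primary, fallback, workers_available=0))  # fallback applies
```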

 

To find out more about these procedures, see Queues.

 

Note: Cross-system clusters are not supported, which means that a worker-master connection cannot be established between different OS platforms (e.g., between Linux and Windows).

 
