Queues only apply to Asynchronous Service Operations. A queue defines several key aspects of how the publishing and subscribing of messages in the queue are handled.
A good example of several service operations sharing a queue and needing to process in order is the "person" related service operations in HCM.
The default way this is configured is that there is NO partitioning, so events will be processed and published in the order they are received. This makes some sense, although I would argue the default configuration for this queue should probably be partitioned by EMPLID. With the standard setup of this queue, a subscribing system will receive the messages in the order they were published. So let's look at a sequence of events like this:

1. A person is created and assigned a new EMPLID.
2. Email addresses are added for that person.
3. One of those email addresses is updated.
4. A PERSON_DIVERSITY_SYNC message is published for the person.
If these events happen in an HCM instance and need to publish to something like a Campus system, they must be sent in the order they were created. The creation of the person needs to happen and be processed first in the receiving system, so the person exists there before the later updates are processed. You can't process the PERSON_DIVERSITY_SYNC prior to the EMPLID being created, and if you processed the "update email" before the "add emails," the receiving system would have unreliable results.
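To make that ordering dependency concrete, here is a toy Python sketch. The event names are borrowed from the scenario above, but the store and function are hypothetical illustrations, not actual PeopleSoft code: applied in publication order the events all succeed, while applying an update before the person exists fails.

```python
# Toy model of a receiving system applying person events.
# Hypothetical names -- not real PeopleSoft APIs.

def apply_event(db, event):
    """Apply one person-related event to a toy receiving-system store."""
    kind, emplid, payload = event
    if kind == "PERSON_BASIC_SYNC":        # person created: EMPLID now exists
        db[emplid] = {"emails": {}}
    elif kind == "EMAIL_ADD":
        db[emplid]["emails"][payload["type"]] = payload["addr"]
    elif kind == "EMAIL_UPDATE":
        db[emplid]["emails"][payload["type"]] = payload["addr"]
    elif kind == "PERSON_DIVERSITY_SYNC":  # depends on the person existing
        db[emplid]["diversity"] = payload
    return db

events = [
    ("PERSON_BASIC_SYNC", "KU0001", None),
    ("EMAIL_ADD", "KU0001", {"type": "HOME", "addr": "john@example.com"}),
    ("EMAIL_UPDATE", "KU0001", {"type": "HOME", "addr": "jdoe@example.com"}),
]

db = {}
for event in events:
    apply_event(db, event)  # in publication order: every event succeeds

# Out of order, the update fails because the EMPLID was never created:
try:
    apply_event({}, ("EMAIL_UPDATE", "KU0001", {"type": "HOME", "addr": "x"}))
except KeyError:
    print("error: person KU0001 does not exist yet")
```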
This section will explain queue partitioning. Partitioning is defined at the queue level, and it tells the Integration Broker whether messages in a queue can be processed in parallel. This is queue dependent, and setting up proper queues and partitioning can be critical to building effective loosely coupled systems. If you configure it incorrectly, one error in the queue can block everything, which can be disastrous for high-volume message queues. There are times when you want to configure partitioning and other times when you do not; it really depends on the nature of the message and how the receiving system handles the data.
When a queue is partitioned, messages in the queue can be processed in parallel, but any messages with the same "common field" values will still process in the order they were received. Partitioning is best explained with an example.
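As a rough illustration (plain Python as a mental model, not Integration Broker internals), partitioning amounts to splitting the queue into per-key "lanes," where the lanes can be worked in parallel but order is preserved inside each lane:

```python
from collections import defaultdict

def partition(messages, key_field):
    """Split queued messages into per-key 'lanes'. Lanes can be processed
    in parallel, but arrival order is preserved inside each lane."""
    lanes = defaultdict(list)
    for msg in messages:  # iterate in arrival order
        lanes[msg[key_field]].append(msg)
    return dict(lanes)

queue = [
    {"OPRID": "JOHND",   "seq": 1},
    {"OPRID": "SALLIES", "seq": 2},
    {"OPRID": "JOHND",   "seq": 3},
]

lanes = partition(queue, "OPRID")
# JOHND's lane keeps seq 1 ahead of seq 3;
# SALLIES's lane is independent and can process in parallel.
```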
USER_PROFILE Queue Partitioning Example
Let's take the case of the USER_PROFILE_SYNC service operation. This service operation is used to sync PeopleTools user profiles between different PeopleTools databases or other systems, and it is defined in the USER_PROFILE queue.
When you look at the queue setup for the USER_PROFILE queue, you can see that all the common fields are listed on the right-hand side. In this screenshot, the OPRID field is checked for partitioning.
Now suppose a publication message for the JOHND user profile goes to error while messages for SALLIES and ROBERTD are published behind it. Since we have partitioning set up on the OPRID common field, the error on the JOHND message will NOT block the publication messages for SALLIES and ROBERTD; both of those messages go to Success. This is shown in the drawing, where each user basically has their own "lane" in the queue, and the "lane" is established by the partitioned field.
If JOHND decides that he does not like his new password and changes it a minute later, that second password change will sit in "New" status behind the message in error, because the queue is partitioned and the Integration Broker is smart enough to realize that there is a message in error for that user and will not process the second or third message. All messages for the JOHND user are effectively frozen, but other user profiles are not impacted. If someone were to cancel the JOHND message in error, the message behind it would then process; however, cancelling a message this way could lead to unpredictable results in the receiving system.
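That blocking behavior can be sketched the same way. This is a simplified model assuming one lane per OPRID, using "DONE"/"ERROR"/"NEW" as stand-ins for the broker's status values:

```python
# Toy model of how an error freezes one partition lane while other
# lanes keep processing. Not real Integration Broker code.

def process_lane(lane, handler):
    """Process one partition lane in order. After the first error, later
    messages stay in NEW status until the error is resolved or cancelled."""
    statuses = []
    blocked = False
    for msg in lane:
        if blocked:
            statuses.append("NEW")    # held behind the errored message
            continue
        try:
            handler(msg)
            statuses.append("DONE")
        except Exception:
            statuses.append("ERROR")  # freezes the rest of this lane
            blocked = True
    return statuses

def handler(msg):
    if msg.get("bad"):
        raise ValueError("publication error")

johnd_lane = [{"pwd": "v1", "bad": True}, {"pwd": "v2"}, {"pwd": "v3"}]
sallies_lane = [{"pwd": "a"}]

print(process_lane(johnd_lane, handler))    # ['ERROR', 'NEW', 'NEW']
print(process_lane(sallies_lane, handler))  # ['DONE'] -- unaffected lane
```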