When integrating with external systems, developers often encounter scenarios where they need to make multiple outbound API calls as part of a batch process. For example, you may have an App Engine that queries for a set of records and needs to send each one to an external system via REST API. The naive implementation processes each record sequentially: it makes the API call, waits for the response, logs the result, and then moves to the next record. This approach is simple, but it can lead to very long processing times, especially when network latency is involved.
PeopleSoft Application Engine is single-threaded. When your batch process needs to make dozens or hundreds of outbound API calls, each call executes sequentially – the process waits for each HTTP response before moving to the next. For network-bound operations like REST API calls, this serialization is a serious bottleneck. A process that could finish in minutes ends up running for hours.
This case study describes a pattern we have used across multiple client implementations to solve this problem: use Integration Broker’s asynchronous service operations as a concurrent work queue. Instead of processing each unit of work sequentially in App Engine, you publish each work item as an async message. Integration Broker’s subscription handlers pick up those messages and process them in parallel, limited only by your application server thread configuration.
Consider a typical outbound integration scenario. You have an App Engine process that:

1. Queries for a set of records that need to be sent to an external system.
2. Loops through the results one record at a time.
3. Makes a REST API call for each record and waits for the response.
4. Logs the result and moves on to the next record.
This works, but it has two significant constraints:
**Single-threaded execution.** App Engine processes one record at a time. If each API call takes 500ms round-trip and you have 1,000 records, that is over 8 minutes of wall clock time spent just waiting on the network. Add in any API rate limiting, retries, or slow responses and the numbers get worse quickly.

**Process Scheduler slot consumption.** While your App Engine is sitting idle waiting on network responses, it holds a Process Scheduler slot. Most PeopleSoft environments have a limited number of concurrent App Engine slots (often 3-10). A long-running integration process can block other batch jobs from running – payroll, financial posting, reporting – all waiting for a scheduler slot.
The solution splits the work into two phases:

1. **Event creation.** A lightweight process (an App Engine run on a schedule, or an online trigger) queries for work items, records an event for each one in a tracking table, publishes an async message per item, and exits.
2. **Event processing.** Integration Broker subscription handlers pick up the messages and perform the actual work, one work item per message, in parallel.
The key insight is that this is a local-to-local integration. You are not sending messages to another PeopleSoft node. You are using Integration Broker’s queue infrastructure as a work distribution mechanism within the same database. See Understanding Local Integration Broker Routings for background on this concept.
Integration Broker’s application server processes are multi-threaded. When you publish 100 async messages, IB does not process them one at a time. It distributes them across available handler threads. If your application server has 10 handler threads configured, you get up to 10x throughput compared to sequential App Engine processing. In the 1,000-record example above, that cuts the network wait from over 8 minutes to under a minute.
Additionally, async subscription processing does not consume Process Scheduler slots. Your batch queue stays clear for other jobs. The IB application server handles the work independently.
Before building the service operation, create a custom table to track event status. This is critical for monitoring and error recovery. A minimal design:
| Field | Type | Purpose |
|---|---|---|
| EVENT_ID | VARCHAR(36) | Unique identifier (GUID) |
| EMPLID or KEY_FIELD | VARCHAR | Business key for the work item |
| EVENT_STATUS | VARCHAR(4) | NEW, QUE, COMP, ERR, CANC |
| IBTRANSACTIONID | VARCHAR(36) | Links back to IB message monitor |
| EVENT_PAYLOAD | LONG | Raw request/response data for debugging |
| EVENT_LOG | LONG | Detailed processing log |
| CREATED_DTTM | DATETIME | When the event was created |
| UPDATED_DTTM | DATETIME | Last status change |
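For illustration only, the built table might look like the following Oracle-style DDL. In a real implementation you would define this as a record in Application Designer and let the Build process generate the SQL; lengths such as VARCHAR2(11) for EMPLID are assumptions based on standard PeopleSoft field sizes.

```sql
-- Sketch only: in PeopleSoft, define this as an App Designer record.
CREATE TABLE PS_Z_ASYNC_EVENTS (
   EVENT_ID        VARCHAR2(36)  NOT NULL,  -- GUID, e.g. from UuidGen()
   EMPLID          VARCHAR2(11)  NOT NULL,  -- business key for the work item
   EVENT_STATUS    VARCHAR2(4)   NOT NULL,  -- NEW, QUE, COMP, ERR, CANC
   IBTRANSACTIONID VARCHAR2(36),            -- links back to the IB message monitor
   EVENT_PAYLOAD   CLOB,                    -- raw request/response data for debugging
   EVENT_LOG       CLOB,                    -- detailed processing log
   CREATED_DTTM    TIMESTAMP,
   UPDATED_DTTM    TIMESTAMP,
   CONSTRAINT PS_Z_ASYNC_EVENTS_PK PRIMARY KEY (EVENT_ID)
);
```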
Create a non-rowset based message for your service operation. The message carries the minimum payload needed for the subscription handler to do its work – typically just a key identifier. The handler will query for the full data it needs.
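Concretely, the payload that travels on the queue can be as small as this (values are made up; the event creator shown later builds exactly this shape):

```xml
<?xml version="1.0"?>
<request>
   <eventId>3f2504e0-4f89-11d3-9a0c-0305e82c3301</eventId>
   <emplid>KU0001</emplid>
</request>
```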
Create a dedicated queue for your concurrent processing operation. Enable partitioning on the business key field (e.g., EMPLID). This is what enables parallel processing – messages with different partition keys can be processed concurrently, while messages with the same key are processed in order.
See Queues for a detailed explanation of queue partitioning.
Create an asynchronous one-way service operation with a local-to-local routing. The routing should map back to the same default local node, triggering your subscription handler. See Service Operation Routings for details on routing configuration.
Create an Application Package class that implements INotificationHandler. This is the worker that does the actual processing for each message.
The event creation process is straightforward. Query for work items, create an event record for each one, and publish a message.
```
/* Event Creator - runs in App Engine or triggered from a page */
Local Message &msg;
Local XmlDoc &xmlDoc;
Local XmlNode &rootNode;
Local string &emplid, &eventId;

/* Reusable SQL objects - created once, executed per work item */
Local SQL &sqlSelect = CreateSQL("SELECT EMPLID FROM PS_Z_WORK_TABLE WHERE PROCESS_FLAG = 'N'");
Local SQL &sqlInsert = CreateSQL("INSERT INTO PS_Z_ASYNC_EVENTS (EVENT_ID, EMPLID, EVENT_STATUS, CREATED_DTTM) VALUES (:1, :2, 'NEW', %CurrentDateTimeIn)");
Local SQL &sqlUpdate = CreateSQL("UPDATE PS_Z_ASYNC_EVENTS SET EVENT_STATUS = 'QUE' WHERE EVENT_ID = :1");

/* Query for work items that need processing */
While &sqlSelect.Fetch(&emplid)
   /* Create the event record in our tracking table */
   &eventId = UuidGen();
   &sqlInsert.Execute(&eventId, &emplid);

   /* Build the async message with minimal payload */
   &msg = CreateMessage(Operation.CHG_ASYNC_WORKER);
   &xmlDoc = CreateXmlDoc("<?xml version='1.0'?><request/>");
   &rootNode = &xmlDoc.DocumentElement;
   &rootNode.AddElement("eventId").NodeValue = &eventId;
   &rootNode.AddElement("emplid").NodeValue = &emplid;
   &msg.SetXmlDoc(&xmlDoc);

   /* Publish - this returns immediately */
   %IntBroker.Publish(&msg);

   /* Update event status to queued */
   &sqlUpdate.Execute(&eventId);
End-While;

CommitWork();
```
The subscription handler does the real work. Each invocation handles one work item independently.
```
import PS_PT:Integration:INotificationHandler;

class AsyncWorker implements PS_PT:Integration:INotificationHandler
   method OnNotify(&_MSG As Message);
   method CallExternalAPI(&emplid As string) Returns string;
   method LogEvent(&eventId As string, &status As string, &logText As string);
   property string EventId;
   property string Emplid;
end-class;

method OnNotify
   /+ &_MSG as Message +/
   /+ Extends/implements PS_PT:Integration:INotificationHandler.OnNotify +/
   Local XmlDoc &xmlDoc;
   Local XmlNode &rootNode;
   Local string &response;

   /* Parse the message payload */
   &xmlDoc = &_MSG.GetXmlDoc();
   &rootNode = &xmlDoc.DocumentElement;
   %This.EventId = &rootNode.GetElement("eventId").NodeValue;
   %This.Emplid = &rootNode.GetElement("emplid").NodeValue;

   /* Store the IB Transaction ID for cross-referencing with the message monitor */
   Local SQL &sqlTxn = CreateSQL("UPDATE PS_Z_ASYNC_EVENTS SET IBTRANSACTIONID = :1 WHERE EVENT_ID = :2");
   &sqlTxn.Execute(&_MSG.TransactionId, %This.EventId);

   try
      /* Do the actual work - call the external API */
      &response = %This.CallExternalAPI(%This.Emplid);
      /* Mark as complete */
      %This.LogEvent(%This.EventId, "COMP", "Successfully processed. Response: " | &response);
   catch Exception &ex
      /* Mark as error - do not re-throw, or IB will retry and potentially loop */
      %This.LogEvent(%This.EventId, "ERR", "Error: " | &ex.ToString());
   end-try;
end-method;

method CallExternalAPI
   /+ &emplid as String +/
   /+ Returns String +/
   /* Build and execute the outbound REST call. */
   /* This is where your actual integration logic lives. */
   /* See the HTTP Target Connector documentation for details. */
   Local Message &request, &response;
   &request = CreateMessage(Operation.CHG_OUTBOUND_REST, %IntBroker_Request);
   /* ... build request payload ... */
   &response = %IntBroker.SyncRequest(&request);
   If &response.ResponseStatus = %IB_Status_Success Then
      Return &response.GetXmlDoc().GenXmlString();
   Else
      throw CreateException(0, 0, "API call failed with status: " | &response.ResponseStatus);
   End-If;
end-method;

method LogEvent
   /+ &eventId as String, +/
   /+ &status as String, +/
   /+ &logText as String +/
   Local SQL &sql = CreateSQL("UPDATE PS_Z_ASYNC_EVENTS SET EVENT_STATUS = :1, EVENT_LOG = :2, UPDATED_DTTM = %CurrentDateTimeIn WHERE EVENT_ID = :3");
   &sql.Execute(&status, &logText, &eventId);
   CommitWork();
end-method;
```
When your event creator runs on a schedule (e.g., hourly), it may pick up the same records again if they were not yet processed or if they changed again since the last run. Two strategies handle this:
**Duplicate detection in the subscription handler.** Before processing, check whether a newer event exists for the same business key. If so, cancel the current (older) event and let the newer one handle it; the newest event always has the most current data. The check below runs at the top of OnNotify:
```
/* Check for newer events for this same EMPLID. Note the bind values
   passed to CreateSQL - without them the Fetch would fail. */
Local SQL &sqlNewer = CreateSQL("SELECT 'X' FROM PS_Z_ASYNC_EVENTS WHERE EMPLID = :1 AND CREATED_DTTM > (SELECT CREATED_DTTM FROM PS_Z_ASYNC_EVENTS WHERE EVENT_ID = :2) AND EVENT_STATUS IN ('NEW', 'QUE')", %This.Emplid, %This.EventId);
Local string &exists;
If &sqlNewer.Fetch(&exists) Then
   /* A newer event exists - cancel this one */
   %This.LogEvent(%This.EventId, "CANC", "Cancelled - newer event exists for EMPLID " | %This.Emplid);
   Return;
End-If;
```
**Overlap-based scheduling.** When generating events from a “changed since” query, intentionally overlap your time window (e.g., look back 1 hour beyond the last run). Duplicates are cheap to cancel; missed records are expensive to debug. A sketch of this follows.
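As a minimal sketch, assuming a hypothetical control table PS_Z_RUN_CNTL that stores the previous run's start time, the overlap could be computed like this in the event creator:

```
/* Sketch: build a "changed since" window with a one-hour overlap.
   PS_Z_RUN_CNTL and LAST_RUN_DTTM are hypothetical names. */
Local datetime &lastRun, &windowStart;
SQLExec("SELECT LAST_RUN_DTTM FROM PS_Z_RUN_CNTL", &lastRun);

/* Look back one hour beyond the last run; duplicates are cheap to cancel */
&windowStart = AddToDateTime(&lastRun, 0, 0, 0, -1, 0, 0);

/* Then select work items with, e.g.:
   ... WHERE LASTUPDDTTM >= %DateTimeIn(:1)   (bound to &windowStart) */
```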
This pattern shifts monitoring from the Process Scheduler to the Integration Broker and your custom event table. Build a management page that:

- Summarizes event counts by status (NEW, QUE, COMP, ERR, CANC).
- Lets you drill into an individual event’s EVENT_LOG and EVENT_PAYLOAD for debugging.
- Cross-references IBTRANSACTIONID so you can jump from an event to the corresponding entry in the Integration Broker message monitor.
For events in error status, create a PeopleSoft query that finds them:
```sql
-- Oracle syntax: errored events from the last 24 hours
SELECT EVENT_ID, EMPLID, EVENT_STATUS, CREATED_DTTM
  FROM PS_Z_ASYNC_EVENTS
 WHERE EVENT_STATUS = 'ERR'
   AND CREATED_DTTM > SYSDATE - 1
 ORDER BY CREATED_DTTM DESC
```
You can build an App Engine process that runs this query and creates new events for each errored record, effectively retrying them. Schedule this to run periodically for automatic error recovery. This is a simple and effective approach – the new event goes through the same async processing pipeline and gets a fresh attempt.
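A sketch of that retry process, reusing the Oracle-style error query above. Z_PublishEvent is a hypothetical helper that wraps the event-creation and publish logic from the event creator:

```
/* Sketch: requeue errored events from the last 24 hours. */
Local string &emplid;
Local SQL &sqlErr = CreateSQL("SELECT DISTINCT EMPLID FROM PS_Z_ASYNC_EVENTS WHERE EVENT_STATUS = 'ERR' AND CREATED_DTTM > SYSDATE - 1");
While &sqlErr.Fetch(&emplid)
   /* Z_PublishEvent is hypothetical: creates a fresh event row and
      publishes the async message, exactly as in the event creator. */
   Z_PublishEvent(&emplid);
End-While;
/* In production, also flag the old ERR rows as requeued so the same
   failure is not retried indefinitely. */
CommitWork();
```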
Over time, your event table will grow. Add a housekeeping step to your batch process that deletes completed and cancelled events older than a configurable retention period (e.g., 30 days for completed, 7 days for cancelled). Keep errored events longer for analysis.
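A minimal housekeeping sketch using the retention periods above (Oracle date arithmetic, consistent with the earlier queries; adjust to your platform):

```
/* Sketch: purge completed events after 30 days, cancelled after 7.
   Errored events are intentionally kept longer for analysis. */
SQLExec("DELETE FROM PS_Z_ASYNC_EVENTS WHERE EVENT_STATUS = 'COMP' AND CREATED_DTTM < SYSDATE - 30");
SQLExec("DELETE FROM PS_Z_ASYNC_EVENTS WHERE EVENT_STATUS = 'CANC' AND CREATED_DTTM < SYSDATE - 7");
CommitWork();
```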
This pattern is not free. You are trading simplicity for throughput. Be aware of these considerations:

- **More moving parts.** You now own a custom event table, a service operation, a queue, a handler class, a monitoring page, and a housekeeping process instead of a single App Engine.
- **Monitoring changes.** Operations staff must watch the Integration Broker monitor and your event table, not just the Process Scheduler.
- **Ordering is per partition key only.** Messages with different keys complete in no guaranteed order.
- **Throughput is bounded by IB configuration.** Parallelism depends on your application server handler thread settings.

This pattern works well when:

- Work items are independent of each other and only per-key ordering matters.
- Each item is network-bound – the process spends most of its time waiting on external responses.
- Volumes are high enough that sequential processing ties up Process Scheduler slots for extended periods.

It is less appropriate when:

- Records must be processed in a strict global order.
- Volumes are small and a sequential App Engine finishes in an acceptable time.
- The work is CPU- or database-bound rather than network-bound, so parallel handlers would contend for the same resources.
We have used this pattern across several implementations including identity provisioning, HR-to-Campus data synchronization, and account reconciliation workflows. In each case, the pattern delivered:

- Wall clock run times reduced from hours to minutes.
- Process Scheduler slots freed for other batch jobs.
- Per-item status tracking with straightforward, automated error retry.
Chris Malek is a PeopleTools® Technical Consultant with over two decades of experience working on PeopleSoft enterprise software projects. He is available for consulting engagements.
Work with Chris