Case Study - Async Services for Concurrent Processing

When integrating with external systems, developers often encounter scenarios that require multiple outbound API calls as part of a batch process. For example, you may have an App Engine program that queries for a set of records and needs to send each one to an external system via REST API. The naive implementation processes each record sequentially: it makes the API call, waits for the response, logs the result, and then moves to the next record. This approach is simple, but when network latency is involved it leads to very long processing times, and API rate limiting, retries, or slow responses make the numbers worse quickly.

PeopleSoft Application Engine is single-threaded. When your batch process needs to make dozens or hundreds of outbound API calls, each call executes sequentially – the process waits for each HTTP response before moving to the next. For network-bound operations like REST API calls, this serialization is a serious bottleneck. A process that could finish in minutes ends up running for hours.

This case study describes a pattern we have used across multiple client implementations to solve this problem: use Integration Broker’s asynchronous service operations as a concurrent work queue. Instead of processing each unit of work sequentially in App Engine, you publish each work item as an async message. Integration Broker’s subscription handlers pick up those messages and process them in parallel, limited only by your application server thread configuration.

The Problem

Consider a typical outbound integration scenario. You have an App Engine process that:

  1. Queries for a set of records that need to be sent to an external system
  2. For each record, makes an outbound REST API call
  3. Waits for the response
  4. Logs the result
  5. Moves to the next record

This works, but it has two significant constraints:

Single-threaded execution. App Engine processes one record at a time. If each API call takes 500ms round-trip and you have 1,000 records, that is over 8 minutes of wall clock time spent just waiting on the network. Add in any API rate limiting, retries, or slow responses and the numbers get worse quickly.

Process Scheduler slot consumption. While your App Engine is sitting idle waiting on network responses, it holds a Process Scheduler slot. Most PeopleSoft environments have a limited number of concurrent App Engine slots (often 3-10). A long-running integration process can block other batch jobs from running – payroll, financial posting, reporting – all waiting for a scheduler slot.

The Pattern

The solution splits the work into two phases:

  1. Event Creation (App Engine or Component) – Query for work items and publish each one as an individual async message to a local service operation.
  2. Event Processing (Subscription Handler) – Integration Broker’s subscription handlers pick up the messages and process them concurrently based on your application server thread pool configuration.
flowchart TD
    A["App Engine / Component\n(Event Creator)\n\nQueries for work items\nPublishes each as an async message"]
    B["Async Service Operation\n(Local-to-Local Queue)\n\nPartitioned queue enables\nparallel processing"]
    C1["OnNotify Handler\nWork Item 1"]
    C2["OnNotify Handler\nWork Item 2"]
    C3["OnNotify Handler\nWork Item 3"]
    CN["OnNotify Handler\nWork Item N"]
    A -->|"Publishes N messages\n(one per work item)"| B
    B --> C1
    B --> C2
    B --> C3
    B --> CN

The key insight is that this is a local-to-local integration. You are not sending messages to another PeopleSoft node. You are using Integration Broker’s queue infrastructure as a work distribution mechanism within the same database. See Understanding Local Integration Broker Routings for background on this concept.

Why This Works

Integration Broker’s application server processes are multi-threaded. When you publish 100 async messages, IB does not process them one at a time. It distributes them across available handler threads. If your application server has 10 handler threads configured, you get up to 10x throughput compared to sequential App Engine processing.

Additionally, async subscription processing does not consume Process Scheduler slots. Your batch queue stays clear for other jobs. The IB application server handles the work independently.

Setting Up the Infrastructure

Event Tracking Table

Before building the service operation, create a custom table to track event status. This is critical for monitoring and error recovery. A minimal design:

Field              Type          Purpose
EVENT_ID           VARCHAR(36)   Unique identifier (GUID)
EMPLID / KEY_FIELD VARCHAR       Business key for the work item
EVENT_STATUS       VARCHAR(4)    NEW, QUE, COMP, ERR, CANC
IBTRANSACTIONID    VARCHAR(36)   Links back to IB message monitor
EVENT_PAYLOAD      LONG          Raw request/response data for debugging
EVENT_LOG          LONG          Detailed processing log
CREATED_DTTM       DATETIME      When the event was created
UPDATED_DTTM       DATETIME      Last status change

Message Definition

Create a non-rowset based message for your service operation. The message carries the minimum payload needed for the subscription handler to do its work – typically just a key identifier. The handler will query for the full data it needs.
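As an illustration, the payload can be as small as two elements. The structure below matches the event creator code later in this case study; the element names are this example's convention, not a required format:

```xml
<?xml version="1.0"?>
<request>
  <!-- Illustrative values: a GUID from the event tracking table
       and the business key the handler will use to query full data -->
  <eventId>00000000-0000-0000-0000-000000000000</eventId>
  <emplid>KU0001</emplid>
</request>
```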

Queue Configuration

Create a dedicated queue for your concurrent processing operation. Enable partitioning on the business key field (e.g., EMPLID). This is what enables parallel processing – messages with different partition keys can be processed concurrently, while messages with the same key are processed in order.

See Queues for a detailed explanation of queue partitioning.

Service Operation

Create an asynchronous one-way service operation with a local-to-local routing. The routing should map back to the same default local node, triggering your subscription handler. See Service Operation Routings for details on routing configuration.

OnNotify Handler

Create an Application Package class that implements INotificationHandler. This is the worker that does the actual processing for each message.

PeopleCode: Event Creator

The event creation process is straightforward. Query for work items, create an event record for each one, and publish a message.

/* Event Creator - runs in App Engine or triggered from a page */
Local Message &msg;
Local XmlDoc &xmlDoc;
Local XmlNode &rootNode;

/* Query for work items that need processing */
Local SQL &sqlSelect = CreateSQL("SELECT EMPLID FROM PS_Z_WORK_TABLE WHERE PROCESS_FLAG = 'N'");
Local string &emplid;

While &sqlSelect.Fetch(&emplid)

   /* Create the event record in our tracking table */
   Local string &eventId = UuidGen();
   Local SQL &sqlInsert = CreateSQL("INSERT INTO PS_Z_ASYNC_EVENTS (EVENT_ID, EMPLID, EVENT_STATUS, CREATED_DTTM) VALUES (:1, :2, 'NEW', %CurrentDateTimeIn)");
   &sqlInsert.Execute(&eventId, &emplid);

   /* Build the async message with minimal payload */
   &msg = CreateMessage(Operation.CHG_ASYNC_WORKER);

   &xmlDoc = CreateXmlDoc("<?xml version='1.0'?><request/>");
   &rootNode = &xmlDoc.DocumentElement;
   &rootNode.AddElement("eventId").NodeValue = &eventId;
   &rootNode.AddElement("emplid").NodeValue = &emplid;

   &msg.SetXmlDoc(&xmlDoc);

   /* Publish - this returns immediately */
   %IntBroker.Publish(&msg);

   /* Update event status to queued */
   Local SQL &sqlUpdate = CreateSQL("UPDATE PS_Z_ASYNC_EVENTS SET EVENT_STATUS = 'QUE' WHERE EVENT_ID = :1");
   &sqlUpdate.Execute(&eventId);

End-While;

CommitWork();

PeopleCode: Subscription Handler (OnNotify)

The subscription handler does the real work. Each invocation handles one work item independently.

import PS_PT:Integration:INotificationHandler;

class AsyncWorker implements PS_PT:Integration:INotificationHandler
   method OnNotify(&_MSG As Message);
   method CallExternalAPI(&emplid As string) Returns string;
   method LogEvent(&eventId As string, &status As string, &logText As string);

   property string EventId;
   property string Emplid;
end-class;


method OnNotify
   /+ &_MSG as Message +/
   /+ Extends/implements PS_PT:Integration:INotificationHandler.OnNotify +/

   Local XmlDoc &xmlDoc;
   Local XmlNode &rootNode;
   Local string &response;

   /* Parse the message payload */
   &xmlDoc = &_MSG.GetXmlDoc();
   &rootNode = &xmlDoc.DocumentElement;

   %This.EventId = &rootNode.GetElement("eventId").NodeValue;
   %This.Emplid = &rootNode.GetElement("emplid").NodeValue;

   /* Store the IB Transaction ID for cross-referencing with message monitor */
   Local SQL &sqlTxn = CreateSQL("UPDATE PS_Z_ASYNC_EVENTS SET IBTRANSACTIONID = :1 WHERE EVENT_ID = :2");
   &sqlTxn.Execute(&_MSG.TransactionId, %This.EventId);

   try
      /* Do the actual work - call the external API */
      &response = %This.CallExternalAPI(%This.Emplid);

      /* Mark as complete */
      %This.LogEvent(%This.EventId, "COMP", "Successfully processed. Response: " | &response);

   catch Exception &ex
      /* Mark as error - do not re-throw, or IB will retry and potentially loop */
      %This.LogEvent(%This.EventId, "ERR", "Error: " | &ex.ToString());
   end-try;

end-method;


method CallExternalAPI
   /+ &emplid as String +/
   /+ Returns String +/

   /* Build and execute the outbound REST call */
   /* This is where your actual integration logic lives */
   /* See the HTTP Target Connector documentation for details */

   Local Message &request, &response;
   &request = CreateMessage(Operation.CHG_OUTBOUND_REST, %IntBroker_Request);

   /* ... build request payload ... */

   &response = %IntBroker.SyncRequest(&request);

   If &response.ResponseStatus = %IB_Status_Success Then
      Return &response.GetXmlDoc().GenXmlString();
   Else
      throw CreateException(0, 0, "API call failed with status: " | &response.ResponseStatus);
   End-If;

end-method;


method LogEvent
   /+ &eventId as String, +/
   /+ &status as String, +/
   /+ &logText as String +/

   Local SQL &sql = CreateSQL("UPDATE PS_Z_ASYNC_EVENTS SET EVENT_STATUS = :1, EVENT_LOG = :2, UPDATED_DTTM = %CurrentDateTimeIn WHERE EVENT_ID = :3");
   &sql.Execute(&status, &logText, &eventId);
   CommitWork();

end-method;

Handling Duplicates and Flooding

When your event creator runs on a schedule (e.g., hourly), it may pick up the same records again if they were not yet processed or if they changed again since the last run. Two strategies handle this:

Duplicate detection in the subscription handler. Before processing, check if a newer event exists for the same business key. If so, cancel the current (older) event and let the newer one handle it. The newest event always has the most current data.

/* Check for newer events for this same EMPLID */
Local SQL &sqlNewer = CreateSQL("SELECT 'X' FROM PS_Z_ASYNC_EVENTS WHERE EMPLID = :1 AND CREATED_DTTM > (SELECT CREATED_DTTM FROM PS_Z_ASYNC_EVENTS WHERE EVENT_ID = :2) AND EVENT_STATUS IN ('NEW', 'QUE')");
Local string &exists;

If &sqlNewer.Fetch(&exists) Then
   /* A newer event exists - cancel this one */
   %This.LogEvent(%This.EventId, "CANC", "Cancelled - newer event exists for EMPLID " | %This.Emplid);
   Return;
End-If;

Overlap-based scheduling. When generating events from a “changed since” query, intentionally overlap your time window (e.g., look back 1 hour beyond the last run). Duplicates are cheap to cancel; missed records are expensive to debug.
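As a sketch, a changed-since query with a one-hour overlap might look like the following. The table and field names are illustrative, :1 is the timestamp of the last run, and the date arithmetic is Oracle-style:

```sql
-- Illustrative only: pick up rows changed since the last run,
-- minus a one-hour overlap window (1/24 of a day in Oracle)
SELECT EMPLID
  FROM PS_Z_WORK_TABLE
 WHERE LASTUPDDTTM > :1 - (1/24)
```

Any row caught twice by the overlap simply generates a duplicate event, which the handler cancels via the newer-event check above.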

Monitoring and Error Recovery

This pattern shifts monitoring from the Process Scheduler to the Integration Broker and your custom event table. Build a management page that:

  • Shows event counts by status (NEW, QUE, COMP, ERR, CANC)
  • Allows filtering by date range and business key
  • Displays the event log for each record
  • Supports resubmitting errored events (create a new event for the same key)
  • Links to the IB message monitor via the stored Transaction ID

Retry Strategy

For events in error status, create a PeopleSoft query that finds them:

SELECT EVENT_ID, EMPLID, EVENT_STATUS, CREATED_DTTM
FROM PS_Z_ASYNC_EVENTS
WHERE EVENT_STATUS = 'ERR'
AND CREATED_DTTM > SYSDATE - 1
ORDER BY CREATED_DTTM DESC

You can build an App Engine process that runs this query and creates new events for each errored record, effectively retrying them. Schedule this to run periodically for automatic error recovery. This is a simple and effective approach – the new event goes through the same async processing pipeline and gets a fresh attempt.

Housekeeping

Over time, your event table will grow. Add a housekeeping step to your batch process that deletes completed and cancelled events older than a configurable retention period (e.g., 30 days for completed, 7 days for cancelled). Keep errored events longer for analysis.
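A housekeeping step along these lines would implement that retention policy. The retention periods and Oracle-style date arithmetic are illustrative; adjust to your platform and requirements:

```sql
-- Purge completed events older than 30 days and cancelled events
-- older than 7 days; errored events are kept for analysis
DELETE FROM PS_Z_ASYNC_EVENTS
 WHERE (EVENT_STATUS = 'COMP' AND CREATED_DTTM < SYSDATE - 30)
    OR (EVENT_STATUS = 'CANC' AND CREATED_DTTM < SYSDATE - 7)
```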

Trade-offs

This pattern is not free. You are trading simplicity for throughput. Be aware of these considerations:

What You Gain

  • Concurrent processing – Multiple work items processed simultaneously, bounded by IB thread configuration
  • No Process Scheduler impact – Frees batch slots for other jobs
  • Built-in retry infrastructure – IB’s message queue handles redelivery; your event table adds business-level retry
  • Error isolation – One failed record does not block others (with proper partitioning)
  • Near-real-time capability – The same pattern works for event-driven processing from component SavePostChange, not just batch

What Gets Harder

  • Monitoring shifts location. You need to watch the IB message monitor and your event table instead of (or in addition to) Process Scheduler. Train your support team accordingly.
  • Debugging is less linear. With sequential processing you can read the log top to bottom. With concurrent processing, events interleave. Your event log per record becomes essential.
  • IB infrastructure matters more. Your application server handler thread configuration directly affects throughput. Undersized IB infrastructure limits the benefit. Work with your Basis team to ensure adequate handler threads.
  • Error handling requires more thought. In App Engine, an unhandled error stops the process and someone notices. In an async handler, an unhandled error on one message does not stop the others. You need deliberate error tracking and alerting (email notifications, dashboard queries).
  • Commit boundaries change. Each subscription handler invocation is its own unit of work with its own commit. You cannot roll back across multiple messages the way you could in a single App Engine step.

When to Use This Pattern

This pattern works well when:

  • Your batch process makes multiple outbound API calls that are independent of each other
  • Network latency is a significant portion of your processing time
  • You are consuming Process Scheduler slots for work that is mostly waiting on I/O
  • You need near-real-time event processing in addition to batch
  • Individual work items can fail independently without affecting others

It is less appropriate when:

  • Work items must be processed in strict sequence
  • The total volume is small enough that sequential processing is fast enough
  • You need a single transactional commit across all work items
  • Your team does not have experience monitoring Integration Broker

Real-World Results

We have used this pattern across several implementations including identity provisioning, HR-to-Campus data synchronization, and account reconciliation workflows. In each case, the pattern delivered:

  • Processing hundreds of outbound API calls concurrently instead of sequentially
  • Batch jobs that previously ran for hours completing in minutes
  • Process Scheduler slots freed for other critical batch processes
  • Granular error handling and retry capability per record rather than all-or-nothing batch failures
  • The ability to handle both scheduled batch runs and real-time event-driven processing through the same subscription handler code

Author Info
Chris Malek

Chris Malek is a PeopleTools® Technical Consultant with over two decades of experience working on PeopleSoft enterprise software projects. He is available for consulting engagements.
