This case study examines a real-world integration between PeopleSoft Campus Solutions and Mainstay, a SaaS platform used to send SMS and email campaigns to students. As with the D2L case study, this is an outbound integration: PeopleSoft is the HTTP client and pushes data into the vendor. The patterns here apply to any outbound sync where the source of record lives in PeopleSoft and the target is a simple REST API.
Two themes drive most of the design decisions: pre-computing segment-ready values inside PeopleSoft, and getting changes to the vendor in near-real-time.
Before the integration, every SMS campaign population was built by hand: an operator queried PeopleSoft, exported a file, and uploaded it to the vendor for each campaign.
This process relied heavily on the operator’s institutional knowledge. It worked, but it capped how many campaigns could run, how quickly new ones could be launched, and how targeted each one could be. The fundamental goal of the integration was to push PeopleSoft data into the SMS platform as structured custom fields so that the operator could build audiences in the platform’s UI rather than hand-building files.
“Near-real-time” mattered too. A welcome campaign for newly admitted students loses most of its value if it goes out the next day.
The vendor’s contact object has a small set of standard fields (first name, last name, phone, email, CRM ID, preferred name, messaging status) and an open bag of custom fields keyed by name. Custom fields must be pre-defined in the vendor UI and are typed: text, date, number, boolean, phone, email, URL, or multi-choice.
The design principle was to pre-compute in PeopleSoft and push a flat, ready-to-segment value. The audience builder in the SMS platform is simple boolean logic over custom fields, so anything the operator needs must already exist as a field. Examples:
| Custom Field | Type | Computed From |
|---|---|---|
| `psftAcademicProgram` | text | Most-effective-dated ACAD_PROG row |
| `psftAcademicProgramStatus` | text | Most-effective-dated ACAD_PROG.PROG_STATUS |
| `psftMatriculationDate` | date | ACAD_PROG.EFFDT where PROG_ACTION = 'MATR', falling back to min EFFDT if no matriculation row exists |
| `psftHasEnrollmentHold` | boolean | EXISTS against SRVC_IND_DATA joined to service-impact AENR |
| `psftCoursesNeeded` | number | SUM(CRSES_NEEDED) from the academic-advisement results for the student's program and plan |
| `psftIncompleteGradeLapseDate` | date | Lapse deadline for any open incomplete, or the UNSET sentinel if the student has none |
| `psftFirstClassStartDate` | date | First post-matriculation class with non-zero units |
| `psftNextClassStartDate` | date | Next class start on or after today |
| `psftLastSyncDate` | date | Set by the integration on every push |
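To make the "pre-compute and push flat" principle concrete, here is a minimal sketch of what one student's outbound payload might look like. Python is used as a neutral illustration; the field names follow the catalog above, but the payload shape, the sample values, and the `build_contact_payload` helper are assumptions, not the real Mainstay schema.

```python
import json
from datetime import date

def build_contact_payload(emplid: str) -> dict:
    """Collapse effective-dated PeopleSoft rows into flat, segment-ready
    custom fields. Values here are hard-coded samples; the real code
    runs the SQL described in the table above."""
    return {
        "crm_id": emplid,
        "first_name": "Jane",
        "last_name": "Student",
        "phone": "+13125550100",
        "custom_fields": {
            "psftAcademicProgram": "UGRD-CS",
            "psftAcademicProgramStatus": "AC",
            "psftMatriculationDate": "2024-09-01",
            "psftHasEnrollmentHold": False,
            "psftCoursesNeeded": 4,
            # Sentinel string clears a previously-set value on the vendor side.
            "psftIncompleteGradeLapseDate": "UNSET",
            "psftLastSyncDate": date.today().isoformat(),
        },
    }

payload = build_contact_payload("1234567")
print(json.dumps(payload, indent=2))
```

Note that everything the operator will ever filter on is a single flat value; no effective-dated logic leaks into the vendor.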
A few conventions made the field catalog easier to live with:
- Every field name is prefixed with `psft` so the operator can tell at a glance which custom fields came from the SIS versus other sources. If a second source system is added later, its fields get a different prefix.
- Time-sensitive fields go stale on their own. `psftNextClassStartDate` is accurate the day it is computed; a week later it may no longer be the next class. Any field whose correctness drifts with the calendar is a reason to schedule a periodic re-sync regardless of whether the underlying PeopleSoft data changed. See the stale detector below.

No real-world API is clean. The ones that cost time on this project were:
- Trailing slashes matter. `/contacts/123` returns a redirect; `/contacts/123/` returns the resource. Always send the trailing slash.
- Clearing a field requires the sentinel string `"UNSET"` as the value. This is not optional. If a student no longer has an incomplete grade, the integration has to send `"psftIncompleteGradeLapseDate": "UNSET"` to remove the stale date, not omit it and not send `null`.
- There is no true upsert. `POST` on an existing phone returns 400. The contact must be fetched first (by CRM ID or phone), then created with `POST` or updated with `PUT`.

The integration follows the same layered separation described in the D2L case study. Drivers decide what to send; an Application Package SDK handles how to talk to the vendor.
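The fetch-then-create-or-update rule and the UNSET sentinel can be sketched as follows. This is an illustrative Python outline, not the real SDK: `get_contact`, `create_contact`, and `update_contact` are hypothetical stand-ins for the `APIClient` layer.

```python
def upsert_contact(client, emplid: str, payload: dict) -> str:
    """POST on an existing phone returns 400, so always look the
    contact up first (by CRM ID, falling back to phone), then choose
    POST (create) or PUT (update)."""
    existing = client.get_contact(crm_id=emplid)
    if existing is None:
        return client.create_contact(payload)            # POST /contacts/
    return client.update_contact(existing["id"], payload)  # PUT /contacts/{id}/

def clear_field(payload: dict, field: str) -> None:
    """The vendor ignores omitted fields and rejects null, so removing a
    value means sending the literal sentinel string."""
    payload.setdefault("custom_fields", {})[field] = "UNSET"
```

Encoding both rules once at the SDK layer means no driver ever has to remember them.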
All code lives in a single Application Package (Z_SMS_INTEGRATION) with a handful of classes:
- `APIClient` — HTTP, authentication, URL resolution, retry, logging. Nothing above this class knows about JSON.
- `MainstayContact` — Parses a contact JSON payload into a PeopleCode object. Used for GET responses and to diff current state against what the integration is about to send.
- `Student` — The heart of the package. Takes an EMPLID and builds the outbound JSON payload: standard fields, the full set of `psft*` custom fields, and the UNSET sentinel where a previously-populated field no longer applies. 95% of future change requests land here.
- `AsyncHandler` — The OnNotify subscription handler. One message in, one vendor interaction out, one cache row updated.
- `PersonBasicSyncHandler` — Near-real-time trigger on the delivered PERSON_BASIC_SYNC message. Any time a student's name or mobile phone changes in the bio/demo pages, a sync message is published immediately.
- `Utilities` — Shared helpers (URL resolution, date formatting, database-name lookup, et cetera).

A driver, whether a batch App Engine, a page button, or a subscription handler, only sees the `Student` and `APIClient` classes. It never constructs JSON, never calls `%IntBroker.ProvideRestTest`, never touches an HTTP status code.
A custom cache table stores one row per EMPLID that has ever been synced:
| Field | Purpose |
|---|---|
| `EMPLID` | Key. PeopleSoft student ID. |
| `Z_SMS_ID` | Vendor's contact ID, used as a cross-reference to skip lookups on subsequent syncs. |
| `LASTSYNCDATE` | Timestamp of the last push. Drives the "stale" detector. |
| `Z_SMS_JSON_IN` | Last JSON sent to the vendor. |
| `Z_SMS_JSON_OUT` | Last JSON received back. |
| `NU_RETURN_CD` | Last HTTP status code. |
| `NU_MS_ERROR_LOC` | `P` if the failure was a PeopleSoft data issue; `M` if the vendor rejected it. |
| `ERROR_TEXT` | Last error message. |
This table does three jobs at once. It is the cross-reference that avoids an extra GET on every sync, the audit trail for production support, and the source of truth for the stale detector. Putting the JSON bodies on the row is cheap storage and saves hours of debugging later.
Two URL definitions hold the vendor base URL and API token: one for production, one for everything else. At runtime the code inspects the database name and picks the right URL definition. A DEV database refreshed from PRD cannot accidentally point at the production SMS tenant because the decision is made at call time, not cached at refresh time. This is the same pattern used for every other outbound integration in the system.
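The call-time decision is simple enough to sketch in a few lines. This Python outline assumes a hypothetical production database name `CSPRD` and URL-definition names; in PeopleCode the same check would read `%DbName` and pick between the two URL definitions.

```python
PROD_DB = "CSPRD"  # hypothetical production database name

def resolve_url_definition(db_name: str) -> str:
    """Decide at call time, never at refresh time: a DEV database
    freshly copied from PRD still resolves to the non-production
    tenant because the decision keys off the current database name."""
    if db_name == PROD_DB:
        return "Z_SMS_URL_PROD"   # production base URL + token
    return "Z_SMS_URL_TEST"       # everything else
```

Because nothing is cached at refresh time, there is no post-refresh script to forget.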
Change detection is the hard part. PeopleSoft Campus does not publish events for most tables. There is no ACAD_PROG_CHANGED service operation, and most campus records do not carry last-updated timestamps. The integration has to synthesize change detection.
Three strategies combine:
- Audit-table polling. A daemon periodically scans audit tables (delivered and site-specific trigger-fed) for EMPLIDs whose rows changed since the last run.
- A stale detector. Any cache row whose last sync is older than a threshold is re-synced, which also catches deletions and time-sensitive computed fields.
- Near-real-time bio/demo events. PERSON_BASIC_SYNC fires on name and mobile phone changes. A subscription handler on that message publishes the sync message immediately, so the vendor's contact stays current on the fields that matter most for SMS delivery.

All three strategies share a single configuration table that the daemon reads on every run. Each row is a rule:
| Rule Type | What It Is | Example |
|---|---|---|
| Table | A full SQL statement that returns a list of EMPLIDs. Used for one-time conversions and ad-hoc catch-up. | All active students in institution X. |
| Audit | A record name and timestamp field. The daemon generates the SQL automatically: "EMPLIDs from this audit table with audit-timestamp > last run." | ENRL_REQ_DETAIL.DTTM_STAMP_SEC, ACAD_PROG audits, custom per-site audit tables. |
| Stale | A SQL fragment over the cache table itself, parameterized by a date computed from the rule's DAYS value. | `WHERE lastsyncdate < %DATEIN(:1)` and student is active. |
Each rule also carries a row-count limit (both a daemon-run limit and an ad-hoc-run limit). The limits exist to keep a single run from flooding the queue and tripping the vendor’s rate limit.
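One way to picture the rule table is as a small dispatcher that turns each rule row into a SQL statement. This Python sketch is purely illustrative; the rule shape, the cache record name `PS_Z_SMS_CACHE`, and the generated SQL are assumptions, not the real configuration schema.

```python
from datetime import date, timedelta

def rule_to_sql(rule: dict, last_run: str) -> str:
    """Translate one configuration row into the SQL the daemon runs.
    Each statement returns a list of EMPLIDs to publish."""
    if rule["type"] == "TABLE":
        # Operator-supplied statement, used for conversions and catch-up.
        return rule["sql"]
    if rule["type"] == "AUDIT":
        # Generated: EMPLIDs whose audit timestamp is newer than last run.
        return (f"SELECT DISTINCT EMPLID FROM PS_{rule['record']} "
                f"WHERE {rule['ts_field']} > '{last_run}'")
    if rule["type"] == "STALE":
        # Generated from the rule's DAYS value against the cache table.
        cutoff = date.today() - timedelta(days=rule["days"])
        return (f"SELECT EMPLID FROM PS_Z_SMS_CACHE "
                f"WHERE LASTSYNCDATE < '{cutoff.isoformat()}'")
    raise ValueError(f"unknown rule type {rule['type']}")
```

The row-count limits described above would be applied to the result of whichever statement a rule produces.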
The daemon and ad-hoc App Engines do not make HTTP calls. They publish one async message per EMPLID to a local-to-local service operation. A subscription handler picks each message up and does the vendor interaction. This is the pattern described in detail in the Async Concurrent Processing case study; only the integration-specific details are noted here.
Why this split: the publishing loop stays fast and never blocks on a vendor call, IB supplies per-message queuing and parallelism, and a failure on one student's sync marks that student's cache row instead of aborting the whole run.
```
/* Event creator - runs in the daemon App Engine */
Local Message &msg;
Local XmlDoc &xml;
Local string &emplid;

While &sqlChangedEmplids.Fetch(&emplid)
   &msg = CreateMessage(Operation.Z_SMS_SYNC);
   &xml = CreateXmlDoc("<?xml version='1.0'?><sync/>");
   &xml.DocumentElement.AddElement("emplid").NodeValue = &emplid;
   &msg.SetXmlDoc(&xml);
   %IntBroker.Publish(&msg);
End-While;
CommitWork();
```
```
/* OnNotify handler - runs on the IB app server, once per message */
import Z_SMS_INTEGRATION:Student;

method OnNotify
   /+ &_MSG as Message +/
   Local string &emplid = &_MSG.GetXmlDoc().DocumentElement.GetElement("emplid").NodeValue;
   Local Z_SMS_INTEGRATION:Student &s = create Z_SMS_INTEGRATION:Student(&emplid);

   try
      &s.Sync(); /* Builds payload, calls vendor, updates cache */
   catch Exception &ex
      &s.RecordError(&ex); /* Mark the cache row, do not rethrow */
   end-try;
end-method;
```
The handler never re-throws. An unhandled exception would hand control to IB’s own retry machinery and interfere with the cache-driven retry strategy. Error handling lives in the cache table, not in the message monitor. See the warning in the async-concurrent-processing case study for the full rationale.
This one cost real time in production and is worth reading twice.
The async queue was partitioned by EMPLID in non-production. Partitioning is what allows IB to process messages with different partition keys in parallel (see Queues). With one EMPLID per message and a large working set, every message lands in a distinct partition, so the queue effectively becomes “process as many in parallel as there are handler threads.”
In non-production with modest thread counts, this worked perfectly. In production, the application server had more handler threads configured, and the partitioning let IB fan out to so many concurrent vendor calls that the per-tenant in-flight limit (100) was exceeded within seconds. The vendor returned 429s; the integration retried; the retries also 429’d; throughput collapsed.
The fix was counterintuitive: remove partitioning, which forces the queue to process messages strictly in order, effectively single-threaded. Throughput went down per second but stayed well inside the vendor’s limits, so the job as a whole finished faster and without errors.
The general rule is to treat the vendor’s rate limit as the binding constraint and size IB concurrency to match. Partitioning, thread count, and message-per-second throughput all have to fit under that ceiling.
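A back-of-the-envelope way to size handler threads under the vendor's concurrent cap. The numbers and the halving safety factor are illustrative, not from the project.

```python
def max_handler_threads(vendor_concurrent_cap: int,
                        app_servers: int,
                        safety_factor: float = 0.5) -> int:
    """With a fully partitioned queue, every handler thread on every
    app server can hold one vendor request in flight, so worst-case
    concurrency is app_servers * threads. Size threads so that the
    worst case stays well under the vendor's cap."""
    budget = int(vendor_concurrent_cap * safety_factor)
    return max(1, budget // app_servers)

# Example: a 100-request cap, halved for safety, across 4 app servers.
print(max_handler_threads(100, 4))
```

When the arithmetic bottoms out at one thread per server, that is the signal to drop partitioning entirely, which is exactly the fix this project landed on.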
Operations shift from the Process Scheduler to the message monitor plus the cache table. A simple admin page gave support staff everything they needed:
- Look up an EMPLID, see the last payload, the last response, the last status code, and trigger a one-off re-sync.
- Test connectivity to the vendor (the vendor exposes an auth_valid endpoint for this).

A handful of SQL queries against the cache table cover most operational questions:
- Rows whose LASTSYNCDATE is older than the stale threshold, to confirm the daemon is keeping up.
- Failed rows by NU_RETURN_CD and NU_MS_ERROR_LOC, to separate PeopleSoft data issues from vendor rejections.

Storing the full request and response JSON on the cache row is what makes these queries useful. Without the bodies you know something failed; with them you can reproduce the exact API call.
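Two of those operational queries, sketched as SQL strings in Python. The record name `PS_Z_SMS_CACHE`, the numeric comparison on `NU_RETURN_CD`, and the bind-style cutoff are assumptions; adjust to the real record name and your platform's date syntax.

```python
# Failed syncs, split by where the failure occurred (P = PeopleSoft, M = vendor).
FAILED_SYNCS = """
SELECT EMPLID, NU_RETURN_CD, NU_MS_ERROR_LOC, ERROR_TEXT
  FROM PS_Z_SMS_CACHE
 WHERE NU_RETURN_CD >= 400
 ORDER BY NU_MS_ERROR_LOC, NU_RETURN_CD
"""

# Rows the daemon has not touched recently; :stale_cutoff is a bind
# variable, e.g. today minus the stale rule's DAYS value.
STALE_ROWS = """
SELECT EMPLID, LASTSYNCDATE
  FROM PS_Z_SMS_CACHE
 WHERE LASTSYNCDATE < :stale_cutoff
"""
```

Keeping these as saved queries (or on the admin page itself) turns the cache table into a self-service support tool.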
Rate limits are a design input, not an afterthought. Understand the vendor’s per-second, per-hour, and concurrent-request limits before you design concurrency. In this project the concurrent-request cap, not the per-hour budget, was the binding constraint.
Turn partitioning off when the downstream is the bottleneck. Partitioning is a throughput accelerator for internal work. Against an external rate-limited API, it is an easy way to exceed the vendor’s cap. See Queues.
Use whatever change signal you have, and let a stale detector cover the rest. Most campus tables do not publish events and do not have last-updated timestamps. This site happened to have custom database triggers feeding audit tables, which made change detection much easier; that is not typical and should not be assumed. Pair whatever change signal exists (native events, delivered audit flags, site-specific triggers, even a “last batch run” watermark) with a stale-detector that forces periodic re-syncs. Deleted rows and time-sensitive computed fields need the stale detector regardless.
Hook into PERSON_BASIC_SYNC for bio/demo changes. Name and mobile phone edits publish automatically. A subscription handler on that message gives near-real-time updates for the fields that matter most to SMS delivery, with no polling.
Design the custom-field catalog for the audience builder, not for generality. The SMS platform’s audience builder is simple boolean logic over flat fields. Collapse effective-dated PeopleSoft SQL into the single value the operator will actually filter on. Prefix every field with a source namespace so a second system can be added later without collisions.
Handle UNSET and other sentinel values explicitly. Vendor APIs rarely accept null. Work out the deletion protocol during API exploration and encode it in the SDK layer so drivers never have to think about it.
Cache the vendor’s ID and the last JSON payload locally. A small local table with the vendor’s contact ID, last request, last response, last status code, and last sync timestamp covers cross-referencing, production support, and stale detection in one place.
Resolve URLs at runtime from the database name. A DEV environment refreshed from PRD must never talk to the production vendor tenant. Runtime resolution, not a post-refresh script, is the right mechanism. The same principle applies to any outbound integration; see the D2L case study.
Chris Malek is a PeopleTools® Technical Consultant with over two decades of experience working on PeopleSoft enterprise software projects. He is available for consulting engagements.
Work with Chris