Handling Special/Chinese Characters in GID/XID during OTM Cloud-to-Cloud Migration


🩺 The Problem

While migrating data from one OTM Cloud instance to another, we ran into problems with CORPORATION records whose GIDs and XIDs contain special or Chinese characters. These characters triggered validation errors in OTM and, once encoded during CSV extraction, ran up against the 50-character XID limit.

🧪 Steps to Reproduce

  1. In the source OTM, identify a CORPORATION record with special/Chinese characters in its GID or XID

  2. Attempt to export this record using CSV

  3. Notice that the special characters are converted into HTML hash codes (e.g., &#1234;), inflating the actual string length (see the short demonstration after this list)

  4. If the encoded XID exceeds 50 characters, OTM blocks the import due to length violation

  5. Also observe validation failures if the system property doesn't permit special characters in XIDs
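
To make steps 3 and 4 concrete, here is a minimal Python sketch. The company name is made up, and the decimal &#NNNN; entity form mirrors the example above: a six-character Chinese XID balloons to 48 characters once each character becomes a numeric entity, so a slightly longer name already breaks the 50-character limit.

```python
import html

# Hypothetical CORPORATION XID containing Chinese characters.
xid = "北京物流公司"  # 6 characters, comfortably under the 50-character XID limit

# CSV export can emit such characters as decimal HTML entities (&#NNNN;),
# which inflates the string length dramatically.
encoded = "".join(f"&#{ord(ch)};" for ch in xid)

print(encoded)       # &#21271;&#20140;&#29289;&#27969;&#20844;&#21496;
print(len(xid))      # 6
print(len(encoded))  # 48 -- a slightly longer name would exceed 50

# html.unescape() reverses the encoding, restoring the original characters.
assert html.unescape(encoded) == xid
```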

🔍 Investigation Findings

The property glog.ui.validateXid.suppress is used in OTM to bypass validation for XIDs:

  • Setting it to true (lowercase) disables validation globally, which is risky

  • We only needed to suppress validation for the CORPORATION table, not all tables

  • Found that glog.ui.validateXid.suppress.CORPORATION = TRUE (uppercase TRUE) works as a table-specific override

  • CSV export converts special characters into encoded hash codes, exceeding XID limits

  • XML export avoids this but wasn't scalable for 20K+ records

⚙️ Root Cause Analysis

Primary Issues:

  • Special characters in XIDs get encoded during CSV export, resulting in strings longer than the allowed 50 characters

  • OTM validates these XIDs and throws errors unless suppression is configured

Case Sensitivity in GLOG Properties:

  • glog.ui.validateXid.suppress = true (lowercase): applies suppression globally across all tables

  • glog.ui.validateXid.suppress.<TABLE> = TRUE (uppercase): applies suppression only to the specified table (e.g., CORPORATION)

  • Misconfiguration or incorrect casing in the property can cause the setting to be ignored

💡 The Solution

Configuration Approach

Used a scoped suppression approach by adding the following in the custom properties:

glog.ui.validateXid.suppress.CORPORATION = TRUE

✅ This allowed CORPORATION XIDs with special characters to pass validation.

❌ We avoided setting:

glog.ui.validateXid.suppress = true

to prevent validation suppression across all tables.

Data Handling Strategy

To address the CSV export issue, we developed a Python script to:

  • Read the exported CSV

  • Convert HTML-encoded values back to proper characters

  • Transform the data into OTM-compatible XML format

This allowed us to retain valid character data and avoid XID length violations.
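
Here is a minimal sketch of that approach (not the production script linked below). The CSV column names and the XML element names are illustrative assumptions; a real OTM import needs the full GLogXML Corporation structure, but the decode-and-re-serialize step is the same.

```python
import csv
import html
import xml.etree.ElementTree as ET

def csv_to_corporation_xml(csv_path: str, xml_path: str) -> None:
    """Decode HTML-encoded values from an OTM CSV export and write them as XML."""
    root = ET.Element("Corporations")

    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            corp = ET.SubElement(root, "Corporation")
            for column, value in row.items():
                # html.unescape turns &#21271; (and friends) back into real characters.
                ET.SubElement(corp, column).text = html.unescape(value or "")

    # ElementTree escapes XML-reserved characters itself, so the decoded
    # Chinese/special characters are written out as genuine UTF-8 text.
    ET.ElementTree(root).write(xml_path, encoding="utf-8", xml_declaration=True)

if __name__ == "__main__":
    # Hypothetical file names for illustration.
    csv_to_corporation_xml("corporation_export.csv", "corporation_import.xml")
```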

✅ Results Achieved

  • Successfully migrated all ~20,000 CORPORATION records

  • Avoided global suppression and preserved system integrity

  • Ensured clean XML data for import with accurate XID formatting

📘 Key Lessons Learned

  1. Never use global suppression unless absolutely necessary

  2. Always verify whether OTM properties are case-sensitive and behave differently based on casing

  3. Choose XML over CSV for exporting/importing data when special characters are involved

🧠 Why This Matters

In cloud-to-cloud migration projects, precision in how you handle custom properties prevents unintended behavior. Misusing a global property like glog.ui.validateXid.suppress = true can let invalid XIDs into every table and compromise validation across the system.

🔍 Pro Tips

  • Use glog.ui.validateXid.suppress.<TABLE> with TRUE (uppercase) for table-specific XID validation suppression

  • Avoid the global setting unless it has been reviewed and approved

  • For large data sets with encoding issues, automate transformation from CSV to XML using scripts

📌 Key Takeaway

Proper case-sensitive configuration and format-aware data handling are essential for smooth OTM data migration—especially when dealing with special characters in IDs.

💻 Access the Solution Code

You can check out the Python script I used to convert CSV to XML (with proper character handling) here:

🔗 GitHub Link to Script


Complexity Level: ⭐⭐⭐⭐ (High)

This solution required:

  • Deep understanding of OTM property scoping and validation mechanisms

  • Handling of HTML encoding side-effects on data limits

  • Writing robust Python code to convert and clean a large dataset (~20,000 records)

  • Balancing accuracy with automation under migration timelines


Tags: #OTM #OracleTransportationManagement #OTMMigration #GIDXID #SpecialCharacters #CloudToCloud #OTMTips #CSVvsXML #PythonAutomation #XIDValidation #OTMPropertyManagement
