Hello,
I have to design jobs that read and write data from/into many SAP tables. The data in these tables is likely to grow over time. What approach should I consider for achieving the highest possible data throughput?
Additionally, the data in these SAP tables comes from different countries and code pages - some Japanese, some Korean, some European, some English. How best to design the jobs to deal with different or new code pages?
Will really appreciate any help on this.
Thanks,
mpp
SAP and code page questions
You should not need any change to the logic, except where explicit character comparisons (including sorting) are performed. There are NLS character maps at the boundaries between DataStage and all external data sources, so, provided you get these right, everything inside DataStage should be done in Unicode.
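To illustrate the point (this is a Python sketch, not DataStage code - the sample strings and encodings are just illustrative): bytes arriving from different source code pages are decoded once at the boundary, and after that all comparisons and sorting happen on one common Unicode representation.

```python
# The word "data" as raw bytes in three different source code pages:
sources = [
    ("\u30c7\u30fc\u30bf".encode("shift_jis"), "shift_jis"),  # Japanese katakana
    ("\ub370\uc774\ud130".encode("euc-kr"), "euc-kr"),        # Korean hangul
    ("donn\u00e9es".encode("latin-1"), "latin-1"),            # French, accented Latin
]

# Convert at the boundary, exactly once - the equivalent of the NLS map:
unicode_rows = [raw.decode(codepage) for raw, codepage in sources]

# Inside the job everything is plain Unicode, so comparison and
# sorting are well defined across all three origins:
print(sorted(unicode_rows))
```

The same principle applies in reverse on output: encode back to the target's code page only at the final boundary.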
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Are you extracting from SAP R/3 or loading into SAP BW (or both)? I have no experience working with SAP from Parallel jobs; hopefully someone else will be able to answer.
Certainly the SAP Extract PACK and SAP Load PACK are for use with server jobs. Without them you have to negotiate your own way around ABAP code and the RFC mechanism.
If you think there's a business case, get onto Ascential to produce a PX version of SAP connectivity.
(Some places do use multi-instance server jobs with SAP; I know it's not the same thing, but it IS partition parallelism of a kind.)
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Ray, thanks for the response. I am extracting from R/3 (and a few other sources) and loading into BW. The multi-instance approach may not work, as I need to aggregate some of this data differently from the data partitioning.
Does anyone know when Ascential plans to provide a parallel version of the SAP Extract PACK and SAP Load PACK? I looked through the 7.5 documentation and didn't see anything there.