2.1 Database export

- Unsorted export
  - Use the Migration Monitor (MigMon)
  - Use a DDL mapping file
  - Run the unsorted and sorted export in parallel
  - Use R3load at the latest patch level
  - See SAP Note 954268
- Package splitter
  - Use the Java package splitter.
  - For standard migrations running with SAPInst, choose the options for top tables (-top 50-200), table size limit (-tableLimit 500-1000), and package size limit (-packageLimit 500-2000).
  - For more complex migrations, determine the tables with the longest runtimes and put their names into a file (for example, table_names.txt) that is passed to the package splitter (-tableFile). All these tables are split out into separate packages. You can use this option together with the options mentioned above.
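The split options above can be sketched as a single invocation. This is a hedged illustration: the wrapper script name and the exact option spelling depend on your migration kit, and the values shown are mid-range picks from the recommended intervals.

```shell
# Hypothetical invocation of the Java package splitter wrapper script
# (the script name str_splitter.sh is an assumption; check your kit).
# -top:          split out the N largest tables into their own packages
# -tableLimit:   tables larger than this limit (MB) get their own package
# -packageLimit: size limit (MB) for the remaining packages
# -tableFile:    explicit list of long-running tables to split out
./str_splitter.sh -top 100 -tableLimit 700 -packageLimit 1000 -tableFile table_names.txt
```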
- Export scenarios
  - Local
  - Remote
    - Used when the CPU is the bottleneck on the database server
    - Run the R3load processes on an application server
    - Ensure that the network is not the bottleneck
2.2 Advanced migration techniques

- Socket option
  - Requires a fast and stable network connection between the export and import servers
  - Possible also within one server
- Combining with other data transport techniques
  - Use multiple Migration Monitor (MigMon) instances
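A hedged sketch of how the socket option could be started on both sides. The option names (-socket, -host, -port) and which side opens the listener are assumptions to verify against the Migration Monitor guide; the host name and port are placeholders.

```shell
# On the import server: MigMon listens for incoming export data
# (option names are assumptions; see the MigMon guide for your kit)
./import_monitor.sh -socket -port 5000 &

# On the export server: stream packages directly to the import server
# instead of writing dump files to disk
./export_monitor.sh -socket -host impserver -port 5000
```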
- Unicode conversion
  - See SAP Note 971646
- Table splitting
  - When
    - A single table or a few tables dominate the export runtime; for example, cluster tables such as CDCLS during a Unicode conversion
  - What
    - Define WHERE clauses that each select only a subset of the table. You can use the tool R3ta to determine the WHERE clauses, which are stored in a single WHR file. To split the WHERE clauses into separate WHR files that contain one WHERE clause each, you can use the Where Splitter.
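For illustration, one of the per-package WHR files produced by the Where Splitter might contain a single clause like the following. The split column and the boundary values are assumptions; R3ta derives the real ones from your data.

```shell
# Illustrative content of one split WHR file for table CDCLS; the split
# column (CHANGENR) and the range boundaries are placeholders.
cat > CDCLS-1.WHR <<'EOF'
tab: CDCLS
WHERE ("CHANGENR" >= '0000000000') AND ("CHANGENR" < '0000500000')
EOF
```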
  - How
    - Check whether you can create an index on the table column that is used in the WHERE clauses.
    - If you have created an additional index to support table splitting, check whether you can reorganize the table to be clustered by this index.
    - If some of the table pieces show a longer export time than the others, you can introduce additional WHERE conditions to split this table part further.
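A hedged sketch of the two steps above; the schema, index, and column names are placeholders.

```shell
# Supporting index on the column used by the split WHERE clauses
# (placeholder names throughout)
db2 "CREATE INDEX sapr3.zsplit_idx ON sapr3.cdcls (changenr)"

# Optionally cluster the table by this index via a reorganization
db2 "REORG TABLE sapr3.cdcls INDEX sapr3.zsplit_idx"
```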
- Decide between DB2 LOAD and concurrent R3load processes using DB2 INSERT
  - DB2 LOAD for the import
    - Enable the incremental index build: set the environment variable DB6LOAD_INDEXING_MODE=2.
    - Serialize the affected tables using the orderBy entry in the import monitor properties file, or use dedicated Migration Monitor instances.
  - Parallel insert using DB2 INSERT is easier to use.
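The two LOAD-related settings above could be sketched as follows; the orderBy key comes from the Migration Monitor documentation, while the order file name is a placeholder.

```shell
# Incremental index build for the DB2 LOAD API used by R3load
export DB6LOAD_INDEXING_MODE=2

# Serialize the split packages of one table by routing them through the
# file referenced by the orderBy entry of the import monitor properties
# file (the file name order_by.txt is a placeholder)
echo "orderBy=order_by.txt" >> import_monitor_cmd.properties
```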
2.3 Target system: DB2 layout and configuration options

- Table space principles and configuration
  - Ease of use: automatic storage table spaces, managed by DB2's automatic storage management
  - Flexibility: database-managed space (DMS) table spaces in autoresize mode
  - Recommendations
    - Use enough spindles for your I/O.
    - Separate logging I/O from data, index, and temporary I/O.
    - Try to avoid container sizes larger than 20 GB.
    - Switch off file system caching to avoid additional operating system (OS) buffering.
    - Place large tables into separate table spaces and introduce new data classes.
    - Avoid table spaces that are significantly larger than the others, as this may impact backup performance.
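Following the recommendations above, a table space for a large table might be created roughly like this. The table space name is a placeholder following the usual SAP naming pattern, and sizing depends on your layout.

```shell
# Automatic storage table space with file system caching switched off
db2 "CREATE TABLESPACE SID#BTABD MANAGED BY AUTOMATIC STORAGE NO FILE SYSTEM CACHING"

# For an existing table space, caching can be switched off afterwards
db2 "ALTER TABLESPACE SID#BTABD NO FILE SYSTEM CACHING"
```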
- Logging configuration
  - Switch off archival logging and use circular logging during the migration.
  - Provide enough log space. A good starting point is the planned configuration for production. More space may be required when many parallel R3load processes run with inserts.
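As a sketch, circular logging with generous log space can be set up with the following commands; the database name SID and the log sizes are placeholders to be replaced by your planned production values.

```shell
# LOGARCHMETH1 OFF selects circular logging (no log archiving)
db2 update db cfg for SID using LOGARCHMETH1 OFF

# Log space as planned for production; increase it if many parallel
# R3load processes run with inserts
db2 update db cfg for SID using LOGFILSIZ 16380 LOGPRIMARY 60 LOGSECOND 0
```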
2.4 Database import

- Default
  - Use the DB2 LOAD API with default settings for R3load.
  - Specify enough utility heap to ensure parallelism of the DB2 LOAD API; use 200,000 pages as a starting point.
  - Configure SORTHEAP and SHEAPTHRES_SHR to accommodate the large sorts during the index build. A good starting point is 50,000 pages for SORTHEAP and 2 * SORTHEAP * (number of R3load processes) for SHEAPTHRES_SHR.
  - Make sure that file system caching is disabled at the database level during table space creation, or use the ALTER TABLESPACE statement for existing table spaces.
  - Do not use STMM during the final migration test and cutover, to ensure stable runtimes.
  - To avoid failing R3load processes due to wait situations, set LOCKTIMEOUT to -1.
  - Define one buffer pool using the memory that remains after the utility heap and sort memory are configured.
  - Leave all other configuration settings according to the SAP Notes for DB2 standard parameter settings.
  - Create primary and secondary indexes before you load the data.
  - Allocate enough temporary space for the index build to avoid a temp space overflow. Be aware that the amount of data may have increased since the last successful test migration; therefore, increase the temporary space to provide enough reserve for the final productive migration.
  - As a starting point, use as many R3load processes as there are CPU cores on your server. Do not use many more R3load processes than available CPU cores, to avoid hardware (CPU, memory, I/O) resource bottlenecks.
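The starting values above translate into database configuration updates along these lines; the database name SID and the assumed count of 16 R3load processes are placeholders.

```shell
db2 update db cfg for SID using UTIL_HEAP_SZ 200000   # parallelism of DB2 LOAD
db2 update db cfg for SID using SORTHEAP 50000        # pages per sort (index build)

# SHEAPTHRES_SHR = 2 * SORTHEAP * number of R3load processes
# (2 * 50000 * 16 for the assumed 16 processes)
db2 update db cfg for SID using SHEAPTHRES_SHR 1600000

db2 update db cfg for SID using LOCKTIMEOUT -1        # wait instead of failing
db2 update db cfg for SID using SELF_TUNING_MEM OFF   # no STMM during cutover
```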
- Optional: requires more testing effort
  - To determine optimized values, use Self-Tuning Memory Management (STMM) during a test migration.
  - Avoid many parallel index build processes by optimizing the package import order. The goal is to prevent I/O resource overload.
  - Monitor the import process and adjust the configuration of the utility heap, buffer pool, and sorting.
  - Monitor and optimize the configuration for I/O (for example, NUM_IOCLEANERS, DB2_PARALLEL_IO, or disabling file system caching for logging).
  - To optimize the number of parallel processes with respect to evenly distributed resource usage, analyze the resource usage of the system (CPU, I/O, and memory).
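A minimal monitoring setup during a test import could look like this; SID is a placeholder and the intervals are in seconds.

```shell
vmstat 5                   # CPU and memory usage
iostat -x 5                # per-device I/O utilization
db2pd -db SID -utilities   # progress of running DB2 LOAD utilities
```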
- Advanced: very complex and might have side effects
  - Use the application server for the import if the I/O subsystem can handle the additional workload, the target system is short on CPU, and a fast network connection between the application server and the database server is available.
  - You can use a DMS temporary table space to optimize the index rebuild phase, but be aware of the side effects (for example, no parallel REORG processes and the need to create the temporary table space manually). After a successful migration, make sure to switch back to SMS temporary table spaces.
  - Change the order of index creation for selected tables if the index build phase takes significantly longer than for other large tables; note that this is hard to determine.
  - Optimize the CHNGPGS_THRESH parameter together with the buffer pool and index creation to optimize the I/O in this area.
  - Create dedicated buffer pools for one or more table spaces (for example, for temporary table spaces).
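A hedged sketch of the last two points: a dedicated buffer pool for the temporary table space together with a lower page-cleaning threshold. The names, sizes, and the threshold value are placeholders to be tuned per system.

```shell
# Dedicated 16K buffer pool bound to the temporary table space
# (buffer pool and table space names are placeholders)
db2 "CREATE BUFFERPOOL BP_TEMP_16K SIZE 50000 PAGESIZE 16K"
db2 "ALTER TABLESPACE PSAPTEMP16 BUFFERPOOL BP_TEMP_16K"

# Trigger page cleaning earlier during heavy index creation
db2 update db cfg for SID using CHNGPGS_THRESH 40
```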
2.5 BW migrations