1. 2.1 Database export
    1. Unsorted export
      1. Use migration monitor
        1. DDL mapping file
          1. unsorted and sorted export in parallel
      2. R3load with latest patch level
      3. SAP note 954268
    2. Package splitter
      1. Use the JAVA package splitter
      2. Choose the options for top tables (-top 50-200), table size limit (-tableLimit 500-1000), and package size limit (-packageLimit 500-2000) for standard migrations running with SAPInst.
      3. Determine the tables that have the longest runtimes for more complex migrations and put their names into a file (for example, table_names.txt) that is used by the package splitter (tableFile). All these tables will be split out. You can use this option together with the options mentioned above
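The splitter options above can be sketched as a standalone call. The wrapper name (split.sh) is a hypothetical assumption; the option names are the ones listed above, and the values are the suggested starting points.

```shell
# Hypothetical standalone invocation of the Java package splitter.
# -top: split out the largest tables (suggested range 50-200)
# -tableLimit / -packageLimit: size limits (suggested ranges above)
# -tableFile: file listing long-running tables, one name per line
./split.sh \
  -top 100 \
  -tableLimit 700 \
  -packageLimit 1000 \
  -tableFile table_names.txt
```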
    3. Export scenarios
      1. Local
      2. Remote
        1. Used when the CPU of the database server is the bottleneck
          1. Run the R3load processes on an application server
          2. Ensure that the network does not become the bottleneck
  2. 2.2 Advanced migration techniques
    1. Socket option
      1. Requires a fast and stable network connection between the export and import servers
      2. Possible also within one server
      3. Can be combined with other data transport techniques
        1. Use multiple Migmon instances
      4. Unicode conversion
        1. SAP note 971646
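A socket transfer is enabled in the export Migration Monitor properties file. The property names below are illustrative assumptions, and the host and port are placeholders; check the Migration Monitor guide for the exact syntax of your tool version.

```shell
# Sketch: enable socket transfer in the export Migration Monitor
# properties (key names are assumptions, host/port are placeholders).
cat > export_monitor_cmd.properties <<'EOF'
# transfer export data directly to the import server over TCP
socket
host=import-server.example.com
port=5000
EOF
```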
    2. Table splitting
      1. When
        1. A single table or a few tables dominate the runtime, for example the cluster table CDCLS during a Unicode conversion
      2. What
        1. Define WHERE clauses that select only a subset of the table. You can use the tool R3ta to determine the WHERE clauses that are stored in a single WHR file. To split the WHERE clauses into separate WHR files that contain one WHERE clause each, you can use the Where Splitter.
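As an illustration of the WHR format, the snippet below writes a WHR file such as R3ta might produce for CDCLS; the column and boundary values are invented for the example. The Where Splitter then turns each WHERE line into its own WHR file.

```shell
# Illustrative WHR file for a split of CDCLS into three pieces
# (column and boundary values are made up for the example).
cat > CDCLS.WHR <<'EOF'
tab: CDCLS
WHERE "PAGENO" BETWEEN 0 AND 999999
WHERE "PAGENO" BETWEEN 1000000 AND 1999999
WHERE "PAGENO" >= 2000000
EOF
```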
      3. How
        1. Check whether you can create an index that refers to the table column that is used in the WHERE clause.
        2. If you have created an additional index to support table splitting, check whether you can reorganize the table to be clustered by this index
        3. If some of the table pieces show a longer export time compared with the others, you can introduce additional WHERE conditions to further split this table part
        4. Decide between DB2 LOAD and concurrent R3load processes using DB2 INSERT
          1. DB2 LOAD for the import
          2. Enable the incremental index build. Set the environment variable DB6LOAD_INDEXING_MODE=2
          3. Serialize the affected table using the orderBy entry in the import monitor properties file or use dedicated Migration Monitor instances
          4. Parallel insert using DB2 INSERT is easier to use
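The LOAD-specific steps above can be sketched as follows. The environment variable name is taken from the text; the orderBy snippet assumes the Migration Monitor's orderBy property can point to a custom order file, so check the Migration Monitor guide for the exact syntax of your version.

```shell
# Incremental index build for DB2 LOAD; set this in the environment
# of the import Migration Monitor / R3load processes.
export DB6LOAD_INDEXING_MODE=2

# Serialize the split packages of the affected table via the import
# monitor properties (orderBy usage here is an assumption; the order
# file would list the packages in the desired sequence).
echo "orderBy=order_by.txt" >> import_monitor_cmd.properties
```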
  3. 2.3 Target System - DB2 layout and configuration options
    1. Table space principles and configuration
      1. Ease of use
        1. Automatic storage table spaces managed by DB2’s automatic storage management
      2. Flexibility
        1. Database managed space (DMS) table spaces in autoresize mode
      3. Recommendations
        1. Use enough spindles for your I/O.
        2. Separate logging I/O from data, index, and temporary I/O
        3. Try to avoid container sizes larger than 20 GB
        4. Switch the file system caching off to avoid additional operating system (OS) buffering
        5. Place large tables into separate table spaces and introduce new data classes
        6. Avoid table spaces that are significantly larger than the others, as this may impact backup performance
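The recommendations above can be combined in one piece of DDL: an automatic storage table space with file system caching switched off, intended for a large table moved into its own data class. The table space name is illustrative.

```shell
# Example DDL following the recommendations above
# (table space name is illustrative).
db2 "CREATE LARGE TABLESPACE ZFACTD#DATA
       MANAGED BY AUTOMATIC STORAGE
       NO FILE SYSTEM CACHING"
```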
    2. Logging configuration
      1. Switch off archival logging and use circular logging during the migration
      2. Provide enough log space. A good starting point is the planned configuration for production. More space may be required when running many parallel R3load processes with INSERTs
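These two points translate into database configuration updates such as the following; SID is a placeholder for the database name, and the log sizes are illustrative starting values. Setting LOGARCHMETH1 to OFF disables archival logging, which leaves the database in circular logging mode.

```shell
# Circular logging for the duration of the migration (SID is a
# placeholder; remember to re-enable archival logging afterwards).
db2 update db cfg for SID using LOGARCHMETH1 OFF

# Generous log space; sizes here are illustrative starting values.
db2 update db cfg for SID using LOGFILSIZ 65535 LOGPRIMARY 60 LOGSECOND 60
```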
  4. 2.4 Database import
    1. Default
      1. Use the DB2 LOAD API with default settings for R3load
      2. Specify enough utility heap to ensure parallelism of the DB2 LOAD API. Use 200,000 pages as a starting point
      3. Configure SORTHEAP and SHEAPTHRES_SHR to accommodate the large sorts during index build. A good starting point is 50,000 pages for SORTHEAP and 2 * (SORTHEAP) * (number of R3load processes) for SHEAPTHRES_SHR
      4. Make sure that file system caching is disabled at the database level during table space creation, or use the ALTER TABLESPACE statement for existing table spaces
      5. To ensure stable runtimes, do not use STMM during the final test migration and during the cutover
      6. To avoid failing R3load processes due to wait situations, set LOCKTIMEOUT to -1
      7. Define one buffer pool using the remaining memory after utility heap and sort memory are configured
      8. Leave all other configuration settings according to the SAP Notes for DB2 standard parameter settings
      9. Create primary and secondary indexes before you load data
      10. Allocate enough temporary space for the index build to avoid temp space overflow. Be aware that the amount of data may have increased since the last successful test migration. Therefore, increase temporary space to provide enough reserve for the final productive migration
      11. As a starting point, use as many R3load processes as you have CPU cores on your server. Do not use many more R3load processes than available CPU cores to avoid hardware (CPU, memory, I/O) resource bottlenecks
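The memory and locking defaults above can be set with a handful of configuration updates. SID is a placeholder, and SHEAPTHRES_SHR is computed for an assumed 16 R3load processes (2 * 50,000 * 16); adjust it to your process count.

```shell
# Starting values from the list above (SID is a placeholder).
db2 update db cfg for SID using UTIL_HEAP_SZ 200000      # utility heap for LOAD
db2 update db cfg for SID using SORTHEAP 50000           # per-sort memory
db2 update db cfg for SID using SHEAPTHRES_SHR 1600000   # 2 * 50000 * 16 R3loads
db2 update db cfg for SID using LOCKTIMEOUT -1           # avoid failing R3loads
db2 update db cfg for SID using SELF_TUNING_MEM OFF      # stable runtimes
```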
    2. Optional
      1. Requires more testing effort
        1. To determine optimized values, use Self Tuning Memory Management (STMM) during a test migration
        2. Avoid many parallel index build processes by optimizing the package import order. The goal is to prevent I/O resource overload
        3. Monitor the import process and adjust the configuration for the utility heap, buffer pool, and sorting
        4. Monitor and optimize the configuration for I/O (for example, NUM_IOCLEANERS, DB2_PARALLEL_IO, or disable file system caching for logging)
        5. To optimize the number of parallel processes with respect to evenly distributed resource usage, analyze the resource usage of the system (CPU, I/O, and memory)
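For the monitoring steps above, two standard DB2 commands give a quick view of running LOAD utilities and of memory pool consumption during the import (SID is a placeholder for the database name):

```shell
# Show progress of running utilities, including DB2 LOAD
db2 list utilities show detail

# Inspect memory pools (utility heap, sort, buffer pool) for the database
db2pd -db SID -mempools
```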
    3. Advanced
      1. Very complex and might have side effects
        1. Use the application server for import if the I/O subsystem can handle additional workload, the target system is short on CPU, and a fast network connection between the application server and the database server is available
        2. You can use a DMS temporary table space to optimize the index rebuild phase, but be aware of the side effects (for example, no parallel REORG processes and the need to create the temporary table space manually). After a successful migration, make sure to switch back to SMS temporary table spaces
        3. Change the order of index creation for selected tables if their index build phase takes significantly longer than that of other large tables; note that such tables are hard to identify in advance
        4. Optimize the CHNGPGS_THRESH parameter together with buffer pool and index creation to optimize the I/O in this area
        5. Create dedicated buffer pools for one or more table spaces (for example, for temporary table spaces)
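A dedicated buffer pool for a temporary table space could be set up as sketched below; the buffer pool name, size, page size, and table space name are illustrative, and the page sizes of the buffer pool and the table space must match.

```shell
# Sketch: dedicated buffer pool for the temporary table space
# (names, size, and page size are illustrative).
db2 "CREATE BUFFERPOOL BP_TEMP SIZE 50000 PAGESIZE 16K"
db2 "ALTER TABLESPACE PSAPTEMP16 BUFFERPOOL BP_TEMP"
```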
  5. 2.5 BW migrations