Migration Policy Demonstration


Experiment name: Migration Policy Demonstration of IBM Spectrum Scale Lifecycle Management

Experiment content:
This experiment introduces the basic concepts and operations of migration policies in the IBM Spectrum Scale (GPFS) parallel file system.

Experiment resources:
IBM Spectrum Scale 5.0.1 software
Red Hat Enterprise Linux 7.4 (VM)

Migration Policy Demonstration of IBM Spectrum Scale Lifecycle Management

The following instructions are displayed on the same screen as your experiment so that you can refer to them as you work. Start your experiment now!

  1. Log on to the graphical management interface (GUI) of IBM Spectrum Scale (Duration: 3 min)
    Enter "admin" as the user name and "admin001" as the password, and then click the "Sign In" button

    Log onto the Spectrum Scale management platform
  2. View resource pools(Duration: 4 min)
    Navigate to the menu "Storage -> Pools" at the left.

    View all the resource pools currently managed by the system:

    - ssdpool: a resource pool of high-performance disks, mainly used for hot data and data with high storage performance requirements
    - saspool: a resource pool of medium-performance disks, mainly used for data with moderate storage performance requirements
    - nlsaspool: a resource pool of low-performance disks, mainly used for warm data and data that must be retained long term
    Note: Spectrum Scale supports automatic, policy-based migration of data already stored on disk. For example, if many JSON or XML profile files are written into ssdpool, the pool's capacity limit can be reached quickly. In this case, Spectrum Scale can automatically migrate inactive data to other pools such as nlsaspool
    Next, we will quickly configure automatic migration policies in the GUI of Spectrum Scale:
  3. Enter the Information Lifecycle Management page(Duration: 4 min)
    Navigate to the menu "Files -> Information Lifecycle" at the left

    View the list of policies at the left:

    Active Policy: the currently active policy rules
    Policy Repository: the repository of saved, inactive policies
  4. Create a policy(Duration: 5 min)
    - Click into the Policy Repository tab page

    - Click the button "+" to create a new policy and name it as "mypolicy2"
  5. Configure the default placement rules(Duration: 5 min)
    Note: Our purpose here is to have ordinary files, not matched by any other rule, written into the resource pool "saspool" by default
    - Click to select the default rule "Placement default (*)" under mypolicy2
    - Edit the rule at the right as "pool = saspool" (meaning that all unmatched files are placed in saspool by default)
    - Click the button "Apply Changes" to save your settings
  6. Create and configure the placement rules for files with high storage performance requirements(Duration: 5 min)
    Note: Our purpose here is to have files in JSON and XML format written into ssdpool
    - Click the button "Add Rules" to create a new placement rule (Rule name: highperf; Rule type: Placement)

    - Edit the rule at the right as "pool = ssdpool"

    - Scroll down and edit the rule (Placement Criteria: Extension IN *.json, *.xml) as shown in the figure

    - Click the button "Apply Changes" at the lower left corner to save your settings
  7. Create and configure the migration rules(Duration: 5 min)
    Note: Our purpose here is that, when the space usage of the resource pool "ssdpool" exceeds 20%, files in JSON and XML format are migrated into the resource pool "nlsaspool" to free ssdpool space until its usage drops to 1% (i.e., 99% free)
    - Click the button "Add Rules" to create a new migration rule (Rule name: freeup; Rule type: Migration)

    - Configure relevant parameters at the right
    - Source=ssdpool, target=nlsaspool,
    - Migration Threshold (start=20%, stop=1%),
    - Migration Criteria (Extension IN *.json, *.xml), as shown in the figure below

    - Click the button "Apply Changes" at the left to save your settings
  8. Adjust the sequence of placement rules(Duration: 5 min)
    - Drag the "Placement default" rule to the bottom

    - Click the button "Apply Changes" to save your settings
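    The GUI rules configured in steps 5 through 8 correspond to statements in the GPFS policy language. Below is a rough, hand-written sketch of what "mypolicy2" might look like in that syntax; the exact text the GUI generates may differ:

```
/* Placement rules are evaluated top to bottom, so the
   extension-specific rule must come before the default */
RULE 'highperf' SET POOL 'ssdpool'
  WHERE LOWER(NAME) LIKE '%.json' OR LOWER(NAME) LIKE '%.xml'

RULE 'default' SET POOL 'saspool'

/* When ssdpool usage exceeds 20%, migrate matching files
   to nlsaspool until usage falls to 1% */
RULE 'freeup' MIGRATE FROM POOL 'ssdpool'
  THRESHOLD(20,1) TO POOL 'nlsaspool'
  WHERE LOWER(NAME) LIKE '%.json' OR LOWER(NAME) LIKE '%.xml'
```

    A policy file like this can also be installed from the command line with `mmchpolicy gpfs <policy-file>`, which is roughly what the GUI's "Apply as Active Policy" action does.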
  9. Activate the policies(Duration: 5 min)
    Note: The newly created "mypolicy2" policy containing the migration rules is only registered in the Policy Repository and does not yet take effect. Next, we activate it.
    At the left, scroll up to the top, right-click mypolicy2, and select "Apply as Active Policy". Then click the "Active Policy" tab page and view the list of active policies
  10. Simulate writing files to trigger migration conditions and verify the migration policies(Duration: 8 min)
    Note: The command-line operations are given below. In the directory "/gpfs/migrationtest" on the GPFS server, we can see that the files test1.json and test2.json are stored in ssdpool by default. We then simulate writing a 1 GB test file "test.json", which pushes the usage of ssdpool past 20% and triggers the migration of the JSON files to nlsaspool. After waiting several minutes, we can see that test1.json and test2.json have been migrated into nlsaspool, proving that the migration policy works.
    - Find the PuTTY client in the taskbar at the bottom of the desktop; it is already logged in to the GPFS server
    - Enter the directory /gpfs/migrationtest

    # cd /gpfs/migrationtest
    - Use Spectrum Scale commands to check which storage pool each test file currently resides in

    # mmlsattr -L test1.json

    # mmlsattr -L test2.json

    View the "storage pool name" value in each output; it should normally read:

    test1.json -> ssdpool

    test2.json -> ssdpool
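    When many files need checking, the "storage pool name" line can be extracted with a short pipeline. Since mmlsattr is only available on a GPFS node, the sketch below feeds in a made-up sample of its output; on the server you would pipe the output of `mmlsattr -L test1.json` directly:

```shell
# Extract the "storage pool name" field from mmlsattr -L style output.
# The here-document is sample text, not real mmlsattr output.
pool=$(awk -F': *' '/storage pool name/ {print $2}' <<'EOF'
file name:            test1.json
metadata replication: 1 max 2
storage pool name:    ssdpool
EOF
)
echo "test1.json -> $pool"
```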
    - Use the command "mmdf gpfs" to view the usage of ssdpool resource pool

    # mmdf gpfs -P ssdpool --block-size auto

    You will see that the remaining space in ssdpool (the "free in full blocks" column) is about 94%
    - Create a test file to trigger the migration conditions (20%)

    Note: We create a 1 GB file named test.json; according to the "highperf" placement rule set earlier, this file is written into ssdpool automatically, pushing usage past the 20% threshold.

    # dd if=/dev/zero of=test.json bs=1M count=1000
    - Use the command "mmdf gpfs" to view the usage of ssdpool resource pool again

    # mmdf gpfs -P ssdpool --block-size auto

    You will see that the remaining space in ssdpool (the "free in full blocks" column) is about 77%, i.e., usage now exceeds the 20% migration threshold
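    The trigger can be checked with simple arithmetic: migration starts once used capacity exceeds 20% of the pool. The sketch below uses made-up numbers (a hypothetical 6 GiB ssdpool at 77% free) rather than live mmdf output:

```shell
# Made-up sample values: a hypothetical 6 GiB pool, 77% free.
# On a real system, total/free come from `mmdf gpfs -P ssdpool`.
total_kb=6291456
free_kb=$((total_kb * 77 / 100))
used_pct=$(( (total_kb - free_kb) * 100 / total_kb ))
threshold=20
if [ "$used_pct" -gt "$threshold" ]; then
  echo "ssdpool ${used_pct}% used: 'freeup' migration rule fires"
else
  echo "ssdpool ${used_pct}% used: below threshold"
fi
```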
    - Wait for about 5-10 min and then view results

    # mmlsattr -L test1.json

    # mmlsattr -L test2.json
    - View the "storage pool name" value in each output; it should normally read:

    test1.json -> nlsaspool

    test2.json -> nlsaspool
    Through these simple tests, we can see that Spectrum Scale supports online data migration with just a quick configuration. The tests above only demonstrate matching conditions based on file extension; you may also test with other criteria, such as user or user group.
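    As a starting point for such experiments, a migration rule keyed on file owner instead of extension might look roughly like this in the GPFS policy language (the uid 1001 is a made-up example):

```
RULE 'byowner' MIGRATE FROM POOL 'ssdpool'
  THRESHOLD(20,1) TO POOL 'nlsaspool'
  WHERE USER_ID = 1001  /* hypothetical uid */
```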