How can I optimize SnapMirror performance, and what are best practices for FSx for NetApp ONTAP?
I want to optimize my SnapMirror performance and use best practices for Amazon FSx for NetApp ONTAP.
Resolution
Follow these best practices for SnapMirror, and use network compression to optimize performance.
Best practices for SnapMirror
- To keep the destination volume the same size as, or slightly larger than, the source volume, run the volume autosize command to activate autogrow on the destination volume, as shown in the example after this list. For more information, see Configure volumes to automatically provide more space when they are full on the NetApp website.
- Make sure that storage efficiency jobs, such as deduplication, data compression, and SnapMirror operations, don't run concurrently. For more information, see Storage efficiency with deduplication and data compression on the NetApp website.
- Don't reuse a destination volume from a previously existing SnapMirror relationship. To start a new SnapMirror relationship, create a new volume.
- Don't delete the snapshot copies that SnapMirror creates in the source volume before the data is copied to the destination. Incremental changes to the destination depend on the newest common snapshot (NCS) copy. If SnapMirror can't find the snapshot copy on the source, then it can't perform incremental changes to the destination.
- Don't restrict the destination volume or take it offline while SnapMirror transfers are configured. When the destination is offline, SnapMirror can't perform updates to the destination.
- Don't schedule SnapMirror updates to occur on the source volume at the same time as other snapshot copy schedules.
- SnapMirror doesn't support network address translation (NAT).
- If the network utilization that the data protocols generate is above 50%, then use a dedicated failover group for inter-cluster communication.
- When you deploy SnapMirror, the round-trip time (RTT) of a packet from the source to the destination storage system might cause write latency.
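The following is a minimal sketch of the autogrow configuration from the first item in the preceding list. The SVM name, volume name, and maximum size are hypothetical placeholders:

FsxIdxxxxxxx::> volume autosize -vserver dest-svm -volume dest_vol -mode grow -maximum-size 2TB

With -mode grow, ONTAP automatically grows the destination volume up to the specified maximum size when the volume is nearly full, so the destination stays at least as large as the source.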
Use network compression to optimize SnapMirror performance
Use SnapMirror network compression to compress the data stream on the source system, and transfer the compressed data stream over the network. The data stream is then decompressed on the destination system before it's written to the disk.
SnapMirror network compression increases resource utilization on the SnapMirror source and destination systems. Before you deploy compression, evaluate the resource usage and benefits. For more information, see SnapMirror network compression on the NetApp website.
To activate SnapMirror network compression, set the -is-network-compression-enabled option to true in a SnapMirror policy. You can activate network compression when you create a new relationship.
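For example, the following sketch creates a new relationship that uses a policy with network compression activated. The source path, destination path, and policy name are hypothetical placeholders:

FsxIdxxxxxxx::> snapmirror create -source-path src-svm:src_vol -destination-path dest-svm:dest_vol -type XDP -policy network-comp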
To activate network compression for an active transfer, you must first stop the existing transfer. Then, set the -is-network-compression-enabled option to true, and resume the transfer.
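The following sketch shows that workflow with hypothetical paths. The snapmirror abort command stops the active transfer, snapmirror policy modify activates compression on the policy, and snapmirror update starts a new transfer:

FsxIdxxxxxxx::> snapmirror abort -destination-path dest-svm:dest_vol
FsxIdxxxxxxx::> snapmirror policy modify -vserver dest-svm -policy network-comp -is-network-compression-enabled true
FsxIdxxxxxxx::> snapmirror update -destination-path dest-svm:dest_vol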
Note: Network compression is turned off by default and isn't available in Clustered Data ONTAP version 8.2 or earlier.
To create a custom policy that has network compression activated, run a command similar to the following one:
FsxIdxxxxxxx::> snapmirror policy create -vserver AD -policy network-comp -tries 8 -transfer-priority normal -ignore-atime false -restart always -is-network-compression-enabled true -type async-mirror -throttle unlimited
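To apply the custom policy to an existing relationship, you can run the snapmirror modify command. The following sketch assumes the same destination path that's used later in this article:

FsxIdxxxxxxx::> snapmirror modify -destination-path AD:AD_dest -policy network-comp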
Example policy:
FsxIdxxxxxxx::> snapmirror policy show -vserver AD -policy network-comp
  Vserver: AD
  SnapMirror Policy Name: network-comp
  SnapMirror Policy Type: async-mirror
  Policy Owner: vserver-admin
  Tries Limit: 8
  Transfer Priority: normal
  Ignore accesstime Enabled: false
  Transfer Restartability: always
  Network Compression Enabled: true
  Create Snapshot: true
  Comment: -
  Total Number of Rules: 1
  Total Keep: 1
  Transfer Schedule Name: -
  Throttle: unlimited
  Rules:
    SnapMirror Label    Keep  Preserve  Warn  Schedule  Prefix  Retention Period
    ------------------  ----  --------  ----  --------  ------  ----------------
    sm_created          1     false     0     -         -       -

FsxIdxxxxxxx::>
To show the compression ratio, run the snapmirror show -instance command:
FsxIdxxxxxxx::> snapmirror show -instance
  Source Path: ADNTFS:testshare
  Destination Path: AD:AD_dest
  Relationship Type: XDP
  Relationship Group Type: none
  SnapMirror Schedule: -
  SnapMirror Policy Type: async-mirror
  SnapMirror Policy: network-comp
  Tries Limit: -
  Throttle (KB/sec): unlimited
  Mirror State: Snapmirrored
  Relationship Status: Idle
  File Restore File Count: -
  File Restore File List: -
  Transfer Snapshot: -
  Snapshot Progress: -
  Total Progress: -
  Percent Complete for Current Status: -
  Network Compression Ratio: -
  Snapshot Checkpoint: -
  Newest Snapshot: snapmirror.273a8f2c-3c33-11ee-b3db-e3ddc9559f8a_2163490733.2023-10-16_011836
  Newest Snapshot Timestamp: 10/16 01:18:36
  Exported Snapshot: snapmirror.273a8f2c-3c33-11ee-b3db-e3ddc9559f8a_2163490733.2023-10-16_011836
  Exported Snapshot Timestamp: 10/16 01:18:36
  Healthy: true
  Unhealthy Reason: -
  Destination Volume Node: FsxId-Example-01
  Relationship ID: dd723540-6bc1-11ee-96b4-7d53af61bc17
  Current Operation ID: -
  Transfer Type: -
  Transfer Error: -
  Current Throttle: -
  Current Transfer Priority: -
  Last Transfer Type: initialize
  Last Transfer Error: -
  Last Transfer Size: 21.20KB
  Last Transfer Network Compression Ratio: 13.7:1
  Last Transfer Duration: 0:0:3
  Last Transfer From: ADNTFS:testshare
  Last Transfer End Timestamp: 10/16 01:18:39
  Progress Last Updated: -
  Relationship Capability: 8.2 and above
  Lag Time: 0:0:21
  Identity Preserve Vserver DR: -
  Volume MSIDs Preserved: -
  Is Auto Expand Enabled: -
  Number of Successful Updates: 0
  Number of Failed Updates: 0
  Number of Successful Resyncs: 0
  Number of Failed Resyncs: 0
  Number of Successful Breaks: 0
  Number of Failed Breaks: 0
  Total Transfer Bytes: 21708
  Total Transfer Time in Seconds: 3
  FabricLink Source Role: -
  FabricLink Source Bucket: -
  FabricLink Peer Role: -
  FabricLink Peer Bucket: -
  FabricLink Topology: -
  FabricLink Pull Byte Count: -
  FabricLink Push Byte Count: -
  FabricLink Pending Work Count: -
  FabricLink Status: -

FsxIdxxxxxxx::>
Note: In the preceding example output, the Last Transfer Network Compression Ratio is 13.7:1.
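To view only the compression ratio instead of the full instance output, you might be able to filter with the -fields parameter. The field name in the following sketch is an assumption based on the instance label, and might differ by ONTAP version:

FsxIdxxxxxxx::> snapmirror show -destination-path AD:AD_dest -fields last-transfer-network-compression-ratio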
Example policy and transfer output with network compression deactivated:
FsxIdxxxxxxx::> snapmirror policy show -vserver AD -policy network-comp
  Vserver: AD
  SnapMirror Policy Name: network-comp
  SnapMirror Policy Type: async-mirror
  Policy Owner: vserver-admin
  Tries Limit: 8
  Transfer Priority: normal
  Ignore accesstime Enabled: false
  Transfer Restartability: always
  Network Compression Enabled: false
  Create Snapshot: true
  Comment: -
  Total Number of Rules: 1
  Total Keep: 1
  Transfer Schedule Name: -
  Throttle: unlimited
  Rules:
    SnapMirror Label    Keep  Preserve  Warn  Schedule  Prefix  Retention Period
    ------------------  ----  --------  ----  --------  ------  ----------------
    sm_created          1     false     0     -         -       -

FsxIdxxxxxxx::> snapmirror update -destination-path AD:AD_dest
Operation is queued: snapmirror update of destination "AD:AD_dest".

FsxIdxxxxxxx::> snapmirror show -instance
  Source Path: ADNTFS:testshare
  Destination Path: AD:AD_dest
  Relationship Type: XDP
  Relationship Group Type: none
  SnapMirror Schedule: -
  SnapMirror Policy Type: async-mirror
  SnapMirror Policy: network-comp
  Tries Limit: -
  Throttle (KB/sec): unlimited
  Mirror State: Snapmirrored
  Relationship Status: Transferring
  File Restore File Count: -
  File Restore File List: -
  Transfer Snapshot: snapmirror.273a8f2c-3c33-11ee-b3db-e3ddc9559f8a_2163490733.2023-10-16_012943
  Snapshot Progress: 0B
  Total Progress: 0B
  Percent Complete for Current Status: -
  Network Compression Ratio: 1:1
  Snapshot Checkpoint: 0B
  Newest Snapshot: snapmirror.273a8f2c-3c33-11ee-b3db-e3ddc9559f8a_2163490733.2023-10-16_012204
  Newest Snapshot Timestamp: 10/16 01:22:04
  Exported Snapshot: snapmirror.273a8f2c-3c33-11ee-b3db-e3ddc9559f8a_2163490733.2023-10-16_012204
  Exported Snapshot Timestamp: 10/16 01:22:04
  Healthy: true
  Unhealthy Reason: -
  Destination Volume Node: FsxId-Example-01
  Relationship ID: dd723540-6bc1-11ee-96b4-7d53af61bc17
  Current Operation ID: 7bf019ca-6bc3-11ee-96b4-7d53af61bc17
  Transfer Type: update
  Transfer Error: -
  Current Throttle: unlimited
  Current Transfer Priority: normal
  Last Transfer Type: resync
  Last Transfer Error: -
  Last Transfer Size: 3.27KB
  Last Transfer Network Compression Ratio: 1:1
  Last Transfer Duration: 0:0:2
  Last Transfer From: ADNTFS:testshare
  Last Transfer End Timestamp: 10/16 01:22:06
  Progress Last Updated: 10/16 01:29:43
  Relationship Capability: 8.2 and above
  Lag Time: 0:7:41
  Identity Preserve Vserver DR: -
  Volume MSIDs Preserved: -
  Is Auto Expand Enabled: -
  Number of Successful Updates: 0
  Number of Failed Updates: 0
  Number of Successful Resyncs: 1
  Number of Failed Resyncs: 0
  Number of Successful Breaks: 0
  Number of Failed Breaks: 0
  Total Transfer Bytes: 25060
  Total Transfer Time in Seconds: 5
  FabricLink Source Role: -
  FabricLink Source Bucket: -
  FabricLink Peer Role: -
  FabricLink Peer Bucket: -
  FabricLink Topology: -
  FabricLink Pull Byte Count: -
  FabricLink Push Byte Count: -
  FabricLink Pending Work Count: -
  FabricLink Status: -

FsxIdxxxxxxx::>
For more information, see How to activate SnapMirror network compression in clustered Data ONTAP on the NetApp website.
Related information
Why does SnapMirror replication take a long time on my FSx for NetApp ONTAP volume?