Adventures in AFM – Spectrum Scale (GPFS) User Group


Page 1 (source: files.gpfsug.org/presentations/2016/south-bank/D2...)

ddn.com © 2015 DataDirect Networks, Inc. * Other names and brands may be claimed as the property of others. Any statements or representations around future events are subject to change.

Adventures in AFM

2015/11/12

DataDirect Networks UK

Vic Cornell

Page 2

DataDirect Networks, Information in Motion, Silicon Storage Appliance, S2A, Storage Fusion Architecture, SFA, Storage Fusion Fabric, Web Object Scaler, WOS, EXAScaler, GRIDScaler, xSTREAMScaler, NAS Scaler, ReAct, ObjectAssure, In-Storage Processing and SATAssure are all trademarks of DataDirect Networks. Any unauthorized use is prohibited.

2 Please Ask Questions

Page 3

Adventures in AFM

▶  Still feels like a “new” feature.
▶  New features or tunings appearing all the time.
▶  Very much a toolkit, which makes it a bit challenging.
▶  Some sites started with basic modes and are now looking at Async DR as an upgrade.

Page 4

AFM Modes with GRIDScaler Active File Management

▶  AFM is an asynchronous, cross-cluster utility.
▶  File data is kept consistent in some way between the “cache” and the “home” fileset.
▶  Home does not know the cache exists; the cache does all the work (checking home for changes, sending updates to home).
▶  Four modes: Single Writer, Read Only, Local Update, Independent Writer.

Mode summary (cache access / home access, as diagrammed):
•  Single Writer – cache RW, home RO. Use case: data collection, e.g. from a remote sequencer sent to home; limited-scope DR.
•  Read Only – cache RO, home RW. Use case: cache accelerates read access to a remote site.
•  Local Update – cache RW, home RW; after 1st retrieval the cache diverges from home. Use case: SW development; storage migration.
•  Independent Writer – cache RW, home RW. Use case: pseudo-shared namespace; central-site data collection.

Page 5

Imperial College

[Diagram] Two-site tiered storage architecture: client nodes reach each site’s Tier 1 GPFS filesystem through NAS services (NFS/CIFS). Each site has a Tier 2 cache backed by a WOS core object store via WOSBRIDGE, with WOS replication providing the Tier 2 copy between sites, and a Tier 3 cache served by an iRODS service (iRODS pool and iRODS database on block storage) plus a TSM service (TSM database, tape service, and library).

Page 6

AFM over NFS

▶  Two sites ~30 km apart.
▶  10Gb WAN link.
▶  Network latency about 1.3 ms.
▶  4 NFS servers @ Slough DC.
▶  4 gateway servers @ South Kensington.
▶  Single Writer caches at South Kensington for “homes” in Slough.
▶  No NFS tuning as yet.

Page 7

Why Not AFM over GPFS

▶  NFS is simple and easy to set up.
▶  Networking is easier to separate.
▶  We didn’t have 4.1 at the time.
▶  Want complete independence between sites, and Multicluster doesn’t seem to be as “independent” as it might be.
▶  Don’t need Multicluster parallelism, as we will have a large number of AFM relationships, so multiple NFS streams will saturate the link.

Page 8

AFM over NFS

▶  Starting to hit the link throughput limit.
▶  Not quite as fast as possible, as assignment to gateway nodes is a bit arbitrary.
▶  Doesn’t seem to like 2 of the gateway servers.

Fileset Name  Fileset Target                   Cache State  Gateway Node  Queue Length  Queue numExec
------------  -------------------------------  -----------  ------------  ------------  -------------
lollipop      nfs://mbslafm/SL-Tier1/lollipop  Active       mbsknas02-ib  0             1680053
BSS           nfs://mbslafm/SL-Tier1/BSS       Active       mbsknas04-ib  0             728071
test1         nfs://mbslafm/SL-Tier1/test1     Active       mbsknas02-ib  0             90021
NPC           nfs://mbslafm/SL-Tier1/NPC-SL    Active       mbsknas02-ib  0             542825694

▶  Recently hit a memory limit with “afmHardMemThreshold”.
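Output in this shape lends itself to simple scripting. The sketch below is a hypothetical helper, not part of the deck: it parses whitespace-separated rows like the table above and flags filesets whose pending queue length exceeds a threshold. The column layout is assumed from the slide; real `mmafmctl getstate` output may differ by Spectrum Scale version.

```python
def parse_afm_state(text, queue_warn=1000):
    """Parse rows of (fileset, target, state, gateway, queue_len, num_exec);
    also return the names of filesets whose queue length exceeds queue_warn."""
    rows, warn = [], []
    for line in text.strip().splitlines():
        parts = line.split()
        # Skip headers, separators, and anything that isn't a 6-column data row.
        if len(parts) != 6 or parts[0].startswith("-") or parts[0] == "Fileset":
            continue
        name, target, state, gateway, qlen, nexec = parts
        rows.append((name, target, state, gateway, int(qlen), int(nexec)))
        if int(qlen) > queue_warn:
            warn.append(name)
    return rows, warn

sample = """\
lollipop nfs://mbslafm/SL-Tier1/lollipop Active mbsknas02-ib 0 1680053
NPC      nfs://mbslafm/SL-Tier1/NPC-SL   Active mbsknas02-ib 0 542825694
"""
rows, warn = parse_afm_state(sample)
print(rows[0][0], rows[0][4], warn)  # lollipop 0 []
```

In practice you would feed it the captured command output rather than a literal string.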

Page 9

AFM Performance for NFS over 10Gb WAN

[Chart] Throughput (MB/s, axis 0–800) against number of threads (1–4).

Page 10

AFM – GPFS Multi-cluster Testing

[Diagram] Home and Cache test clusters, with latency between them generated using the tc command (55 msec each way, 110 msec total delay):
•  Home: SFA7700X SS8460 with 120 x 6 TB NL-SAS drives; 12 LUNs in total, one vdisk per pool (Pool0, Pool1, … Pool10, Pool11), each RAID6 (8d+2p); NSDs nsd00, nsd01, nsd10, nsd11 at 43 TiB each.
•  Cache: SFA7700X with 40 x 4 TB NL-SAS drives; 4 LUNs (Pool0–Pool3), each RAID6 (8d+2p); NSDs nsd12–nsd15 at 29 TiB each; 6 AFM gateway servers.
•  Networking: 40GbE uplinks with bonded links (LACP), a 10 GbE network and a 1 GbE network, and 8 Gb FC to the storage.

Page 11

AFM Data Transfer Performance, Cache to Home (40GbE Uplink)

Filesystem block size 16 MB. Throughput falls as injected latency rises:

Latency    0 ms  2 ms  4 ms  8 ms  16 ms  32 ms  48 ms  64 ms  92 ms
MB/sec      958   787   738   457    275    135     63     48     35
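The fall-off above is what a fixed amount of data in flight would produce: if a single stream is window-limited, throughput ≈ window / RTT. As a back-of-envelope check (my assumption, not a claim from the slides), multiplying the measured throughput by the RTT gives the implied in-flight data, which comes out roughly constant once latency dominates:

```python
# Measured points from the chart above: latency (s) -> throughput (MB/s).
measured = {0.016: 275, 0.032: 135, 0.048: 63, 0.064: 48, 0.092: 35}

# If throughput ~ window / RTT, then window ~ throughput * RTT.
for rtt, mbps in sorted(measured.items()):
    window_mb = mbps * rtt
    print(f"RTT {rtt * 1000:>4.0f} ms -> implied in-flight data ~ {window_mb:.2f} MB")
```

The implied window sits around 3–4.4 MB per stream, which is one way to motivate the socket-buffer tuning shown later in the deck.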

Page 12

Multi-Cluster IOzone Performance (40GbE Uplink)

6 x 10GbE GPFS clients, IOzone, 1 thread per client, to the other cluster’s filesystem; FS block size 16 MB. Two series as plotted:

Latency           0 ms   2 ms   4 ms   8 ms   16 ms  32 ms  64 ms  92 ms
Write (MB/sec)   4,503  2,630  2,169  1,680  1,206    952    151     38
Write (MB/sec)   2,744  2,742  2,693  1,978  1,662    858     82     35

Page 13

AFM - No buffer space available

AFM recovery failing with E_NOBUFS usually means that gateway-node memory usage crossed the afmHardMemThreshold config value. Run the following command to see the current AFM queue memory usage on the gateway node:

mmfsadm dump afm | grep QMem

Try increasing the memory and verify that recovery progresses:

mmchconfig afmHardMemThreshold=10G -i

Page 14

AFM - No buffer space available

On average, each message queued at a gateway node takes around 350 bytes of memory. The mmafmctl getstate command provides the number of messages in the queue. When queue memory usage approaches afmHardMemThreshold, the gateway node starts flushing the queue without waiting for the async delay (default 15 seconds). Current queue memory usage (approximately) = number of messages in queue x 350 bytes – for 5 GB that’s ~16 million files. Sometimes queue memory usage keeps growing because replication has stopped (unmounted or disconnected). When afmHardMemThreshold is reached, queues are dropped; in this case AFM will run recovery on the next fileset access.
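The rule of thumb above is simple enough to sketch. The helper names below are illustrative, not from Spectrum Scale; only the ~350 bytes/message figure comes from the slide (which rounds 5 GB to "~16 million files"):

```python
GiB = 1024 ** 3

def afm_queue_mem_bytes(n_messages, bytes_per_msg=350):
    """Approximate gateway queue memory: messages * ~350 bytes each."""
    return n_messages * bytes_per_msg

def messages_for_threshold(threshold_bytes, bytes_per_msg=350):
    """How many queued messages fit before hitting a memory threshold."""
    return threshold_bytes // bytes_per_msg

# A 5 GiB afmHardMemThreshold accommodates roughly 15.3 million queued messages.
print(messages_for_threshold(5 * GiB))
```

This makes it easy to sanity-check whether a planned threshold can absorb a realistic backlog before queues are dropped and recovery kicks in.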

Page 15


Questions? [email protected]

Page 16

Tokyu Bancho Building 8F, 6-2 Yonbancho, Chiyoda-ku, Tokyo 102-0081

TEL:03-3261-9101 FAX:03-3261-9140

company/datadirect-networks

@ddn_limitless

[email protected]

Thank You! Keep in touch with us

Page 17

Imperial Config

[57]root@mbskwb01 /root> mmlsconfig
Configuration data for cluster MedBio-SKen.mbskgs01-ib:
-------------------------------------------------------
clusterName MedBio-SKen.mbskgs01-ib
clusterId 2175721959885733063
dmapiFileHandleSize 32
verbsRdma enable
maxMBpS 22400
healthCheckInterval 20
afmHashVersion 1
minReleaseLevel 4.1.0.4
cipherList AUTHONLY
subnets 10.0.0.0
verbsPorts mlx4_1/1 mlx4_1/2
[mbskicat01-ib,mbskirods01-ib]
verbsPorts mlx4_0/1
[common]
worker1Threads 512
maxFilesToCache 64000
maxStatCache 128000
nsdMinWorkerThreads 64
nsdMaxWorkerThreads 1280
nsdThreadsPerDisk 16
nsdThreadsPerQueue 6
nsdSmallThreadRatio 3
maxInodeDeallocPrefetch 32
stealAggressiveThreshold 6
prefetchThreads 200
flushedDataTarget 1000
flushedInodeTarget 1000
maxblocksize 16M
autoload yes
pagepool 8G
[nas]
pagepool 32G
[common]
adminMode central

File systems in cluster MedBio-SKen.mbskgs01-ib:
------------------------------------------------
/dev/SK-Tier1
/dev/SK-Tier3
/dev/test

Page 18

# mmlscluster
GPFS cluster information
========================
  GPFS cluster name:         Home.gs01
  GPFS cluster id:           7808808277142161757
  GPFS UID domain:           Home.gs01
  Remote shell command:      /usr/bin/ssh
  Remote file copy command:  /usr/bin/scp
  Repository type:           CCR

 Node  Daemon node name  IP address    Admin node name  Designation
--------------------------------------------------------------------
   1   gs01              10.10.10.141  gs01             quorum-manager-gateway
   2   gs02              10.10.10.142  gs02             quorum-manager-gateway
   5   gs05              10.10.10.145  gs05             quorum-manager-gateway
   6   gs06              10.10.10.146  gs06             quorum-manager-gateway
   7   gs07              10.10.10.147  gs07             gateway
  10   gs08              10.10.10.148  gs08             gateway
  11   gs09              10.10.10.149  gs09             gateway
  12   gs10              10.10.10.150  gs10             gateway

# mmlscluster
GPFS cluster information
========================
  GPFS cluster name:         Cache.gs03
  GPFS cluster id:           10590991901404442788
  GPFS UID domain:           Cache.gs03
  Remote shell command:      /usr/bin/ssh
  Remote file copy command:  /usr/bin/scp
  Repository type:           CCR

 Node  Daemon node name  IP address    Admin node name  Designation
--------------------------------------------------------------------
   1   gs03              10.10.10.143  gs03             quorum-manager-gateway
   2   gs04              10.10.10.144  gs04             quorum-manager-gateway
   3   r21-gs            10.10.10.196  r21-gs           gateway
   4   r22-gs            10.10.10.197  r22-gs           gateway
   5   r23-gs            10.10.10.198  r23-gs           gateway
   6   r24-gs            10.10.10.199  r24-gs           gateway
   7   r25-gs            10.10.10.200  r25-gs           gateway
   8   r26-gs            10.10.10.201  r26-gs           gateway

Page 19

Configuration data for cluster Home.gs01:
-----------------------------------------
dmapiFileHandleSize 32
minReleaseLevel 4.1.1.0
ccrEnabled yes
autoload yes
cnfsNFSDprocs 256
flushedDataTarget 1024
flushedInodeTarget 1024
logBufferCount 20
logWrapThreads 16
maxBufferCleaners 1024
maxFileCleaners 1024
maxFilesToCache 12000
maxGeneralThreads 1280
maxInodeDeallocPrefetch 128
maxMBpS 10800
maxStatCache 512
maxReceiverThreads 32
nsdbufspace 50
nsdMaxWorkerThreads 1024
nsdMinWorkerThreads 1024
nsdThreadsPerDisk 16
nsdThreadsPerQueue 16
prefetchPct 60
prefetchThreads 288
scatterBufferSize 262144
worker1Threads 1024
worker3Threads 32
tiebreakerDisks nsd00
maxblocksize 16384K
pagepool 32G
clusterName Home.gs01
clusterId 7808808277142161757
cipherList AUTHONLY
adminMode central

Configuration data for cluster Cache.gs03:
------------------------------------------
dmapiFileHandleSize 32
minReleaseLevel 4.1.1.0
ccrEnabled yes
autoload yes
cnfsNFSDprocs 256
flushedDataTarget 1024
flushedInodeTarget 1024
logBufferCount 20
logWrapThreads 16
maxBufferCleaners 1024
maxFileCleaners 1024
maxFilesToCache 12000
maxGeneralThreads 1280
maxInodeDeallocPrefetch 128
maxMBpS 10800
maxStatCache 512
maxReceiverThreads 32
nsdbufspace 50
nsdMaxWorkerThreads 1024
nsdMinWorkerThreads 1024
nsdThreadsPerDisk 16
nsdThreadsPerQueue 16
pagepool 32G
prefetchPct 60
prefetchThreads 288
scatterBufferSize 262144
worker1Threads 1024
worker3Threads 32
maxblocksize 16M
tiebreakerDisks nsd12
clusterName Cache.gs03
clusterId 10590991901404442788
cipherList AUTHONLY
adminMode central

Page 20

# mmlsfs gpfs
flag                value                    description
------------------- ------------------------ -----------------------------------
 -f                 524288                   Minimum fragment size in bytes
 -i                 4096                     Inode size in bytes
 -I                 32768                    Indirect block size in bytes
 -m                 1                        Default number of metadata replicas
 -M                 2                        Maximum number of metadata replicas
 -r                 1                        Default number of data replicas
 -R                 2                        Maximum number of data replicas
 -j                 cluster                  Block allocation type
 -D                 nfs4                     File locking semantics in effect
 -k                 all                      ACL semantics in effect
 -n                 8                        Estimated number of nodes that will mount file system
 -B                 16777216                 Block size
 -Q                 user;group;fileset       Quotas accounting enabled
                    user;group;fileset       Quotas enforced
                    none                     Default quotas enabled
 --perfileset-quota Yes                      Per-fileset quota enforcement
 --filesetdf        Yes                      Fileset df enabled?
 -V                 14.23 (4.1.1.0)          File system version
 --create-time      Wed Oct 28 13:24:31 2015 File system creation time
 -z                 No                       Is DMAPI enabled?
 -L                 16777216                 Logfile size
 -E                 Yes                      Exact mtime mount option
 -S                 No                       Suppress atime mount option
 -K                 whenpossible             Strict replica allocation option
 --fastea           Yes                      Fast external attributes enabled?
 --encryption       No                       Encryption enabled?
 --inode-limit      134422528                Maximum number of inodes in all inode spaces
 --log-replicas     0                        Number of log replicas
 --is4KAligned      Yes                      is4KAligned?
 --rapid-repair     Yes                      rapidRepair enabled?
 --write-cache-threshold 0                   HAWC Threshold (max 65536)
 -P                 system                   Disk storage pools in file system
 -d                 nsd00;nsd01;nsd02;nsd03;nsd04;nsd05;nsd06;nsd07;nsd08;nsd09;nsd10;nsd11  Disks in file system
 -A                 yes                      Automatic mount option
 -o                 none                     Additional mount options
 -T                 /gpfs                    Default mount point
 --mount-priority   0                        Mount priority

# mmlsfs cache
flag                value                    description
------------------- ------------------------ -----------------------------------
 -f                 524288                   Minimum fragment size in bytes
 -i                 4096                     Inode size in bytes
 -I                 32768                    Indirect block size in bytes
 -m                 1                        Default number of metadata replicas
 -M                 2                        Maximum number of metadata replicas
 -r                 1                        Default number of data replicas
 -R                 2                        Maximum number of data replicas
 -j                 cluster                  Block allocation type
 -D                 nfs4                     File locking semantics in effect
 -k                 all                      ACL semantics in effect
 -n                 8                        Estimated number of nodes that will mount file system
 -B                 16777216                 Block size
 -Q                 user;group;fileset       Quotas accounting enabled
                    user;group;fileset       Quotas enforced
                    none                     Default quotas enabled
 --perfileset-quota yes                      Per-fileset quota enforcement
 --filesetdf        yes                      Fileset df enabled?
 -V                 14.23 (4.1.1.0)          File system version
 --create-time      Mon Nov 9 14:43:41 2015  File system creation time
 -z                 no                       Is DMAPI enabled?
 -L                 16777216                 Logfile size
 -E                 yes                      Exact mtime mount option
 -S                 no                       Suppress atime mount option
 -K                 whenpossible             Strict replica allocation option
 --fastea           yes                      Fast external attributes enabled?
 --encryption       no                       Encryption enabled?
 --inode-limit      120078336                Maximum number of inodes in all inode spaces
 --log-replicas     0                        Number of log replicas
 --is4KAligned      yes                      is4KAligned?
 --rapid-repair     yes                      rapidRepair enabled?
 --write-cache-threshold 0                   HAWC Threshold (max 65536)
 -P                 system                   Disk storage pools in file system
 -d                 nsd12;nsd13;nsd14;nsd15  Disks in file system
 -A                 yes                      Automatic mount option
 -o                 none                     Additional mount options
 -T                 /cache                   Default mount point
 --mount-priority   0

Page 21

# mmlsfileset cache iw -X
Filesets in file system 'cache':
Attributes for fileset iw:
===========================
Status                                  Linked
Path                                    /cache/iw
Id                                      1
Root inode                              524291
Parent Id                               0
Created                                 Mon Nov 9 23:16:32 2015
Comment
Inode space                             1
Maximum number of inodes                102400
Allocated inodes                        102400
Permission change flag                  chmodAndSetacl
IAM mode                                off
afm-associated                          Yes
Target                                  gpfs:///gpfs/iwhome
Mode                                    independent-writer
File Lookup Refresh Interval            30 (default)
File Open Refresh Interval              30 (default)
Dir Lookup Refresh Interval             60 (default)
Dir Open Refresh Interval               60 (default)
Async Delay                             15 (default)
Last pSnapId                            0
Display Home Snapshots                  no
Number of Read Threads per Gateway      64
Number of Gateway Flush Threads         1024
Prefetch Threshold                      0 (default)
Eviction Enabled                        yes (default)
Number of Write Threads per Gateway     64

# /etc/sysctl.conf
net.core.netdev_max_backlog = 250000
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 16777216
net.core.wmem_default = 16777216
net.core.optmem_max = 16777216
net.ipv4.tcp_mem = 16777216 16777216 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_sack = 1
net.ipv4.tcp_fack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_low_latency = 0
net.ipv4.tcp_moderate_rcvbuf = 0
vm.swappiness = 60
vm.dirty_expire_centisecs = 1000
vm.dirty_writeback_centisecs = 500
vm.dirty_background_ratio = 5
vm.dirty_ratio = 80
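The 16777216-byte (16 MiB) socket-buffer ceilings in that sysctl fragment can be sanity-checked against the bandwidth-delay product (BDP) of the links described earlier in the deck. This is my own back-of-envelope sketch, not from the slides; the link/latency figures are taken from the text (10GbE at 1.3 ms, and the 92 ms test-rig extreme):

```python
def bdp_bytes(link_bps, rtt_s):
    """Bandwidth-delay product: bytes a single stream must keep in flight
    to fill a link of link_bps bits/s at round-trip time rtt_s."""
    return link_bps / 8 * rtt_s

buf = 16 * 1024 ** 2  # net.core.rmem_max / wmem_max from the sysctl above

for name, bps, rtt in [("10GbE WAN, 1.3 ms", 10e9, 0.0013),
                       ("10GbE test rig, 92 ms", 10e9, 0.092)]:
    bdp = bdp_bytes(bps, rtt)
    verdict = "buffer OK" if buf >= bdp else "buffer smaller than BDP"
    print(f"{name}: BDP {bdp / 1e6:.1f} MB -> {verdict}")
```

At 1.3 ms the 16 MiB buffers are ample (BDP ~1.6 MB), but at 92 ms a single 10GbE stream would need ~115 MB in flight, which is one reason multiple parallel NFS streams are relied on to saturate the link.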