This post was originally published at http://www.binss.me/blog/use-nvme-ssd-on-synology/; please credit the source when reposting.

With the PS5's storage filling up fast, I picked up a 1 TB SanDisk Ultra NVMe SSD during this year's 618 sale, intending to use it to expand the console. Then in August, Sony officially announced that only PCIe 4.0 SSDs would work in the PS5. After reluctantly trying it anyway and confirming that this was indeed the case, I was left, tearfully, with a drive in need of a new home.

After some thought: the only device in the house with M.2 slots is my Synology DS918, which has two. Officially, these slots are meant for SSD cache to speed up storage reads and writes, which I personally find of little use. Could the drive be used as a storage volume instead of a cache? After some searching and tinkering, I got it working.

Procedure

Environment: Synology DS918, DSM 7

  1. Insert the NVMe drive into an M.2 slot
  2. Run ls /dev/nvme* to check whether the NVMe device is detected

    binss@binss-NAS:~$ ls /dev/nvme0
    /dev/nvme0
  3. View the device info

    binss@binss-NAS:~$ sudo fdisk -l /dev/nvme0n1
    Password:
    Disk /dev/nvme0n1: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
    Disk model: SanDisk Ultra 3D NVMe
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
  4. Create the partitions

    binss@binss-NAS:~$ sudo synopartition --part /dev/nvme0n1 12
    
     Device   Sectors (Version7: SupportRaid)
     /dev/nvme0n11   4980480 (2431 MB)
     /dev/nvme0n12   4194304 (2048 MB)
    Reserved size:    262144 ( 128 MB)
    Primary data partition will be created.
    
    WARNING: This action will erase all data on '/dev/nvme0n1' and repart it, are you sure to continue? [y/N] y
    Cleaning all partitions...
    Creating sys partitions...
    Creating primary data partition...
    Please remember to mdadm and mkfs new partitions.
  5. Check the partitions

    binss@binss-NAS:~$ sudo fdisk -l /dev/nvme0n1
    Disk /dev/nvme0n1: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
    Disk model: SanDisk Ultra 3D NVMe
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0xb0561e30
    
    Device         Boot   Start        End    Sectors  Size Id Type
    /dev/nvme0n1p1          256    4980735    4980480  2.4G fd Linux raid autodetect
    /dev/nvme0n1p2      4980736    9175039    4194304    2G fd Linux raid autodetect
    /dev/nvme0n1p3      9437184 1953520064 1944082881  927G fd Linux raid autodetect
  6. For DSM to recognize the drive, create a RAID1 device on it

    binss@binss-NAS:~$ sudo -s
    sh-4.4# cat /proc/mdstat
    Personalities : [raid1]
    md3 : active raid1 sdb3[0]
          13667560448 blocks super 1.2 [1/1] [U]
    
    md5 : active raid1 sdd3[0]
          11714063360 blocks super 1.2 [1/1] [U]
    
    md4 : active raid1 sdc3[0]
          9761614848 blocks super 1.2 [1/1] [U]
    
    md2 : active raid1 sda3[0]
          239376512 blocks super 1.2 [1/1] [U]
    
    md1 : active raid1 sda2[0] sdd2[3] sdc2[2] sdb2[1]
          2097088 blocks [4/4] [UUUU]
    
    md0 : active raid1 sda1[0] sdd1[3] sdc1[2] sdb1[1]
          2490176 blocks [4/4] [UUUU]
    
    unused devices: <none>

    There are already six md devices (md0 through md5), so create the new one as md6

    sh-4.4#  mdadm --create /dev/md6 --level=1 --raid-devices=1 --force /dev/nvme0n1p3
    mdadm: Note: this array has metadata at the start and
        may not be suitable as a boot device.  If you plan to
        store '/boot' on this device please ensure that
        your boot-loader understands md/v1.x metadata, or use
        --metadata=0.90
    Continue creating array? y
    mdadm: Defaulting to version 1.2 metadata
    mdadm: array /dev/md6 started.
    
    sh-4.4# cat /proc/mdstat
    Personalities : [raid1]
    md6 : active raid1 nvme0n1p3[0]
          972040384 blocks super 1.2 [1/1] [U]
  7. Format the array

    sh-4.4# mkfs.btrfs -f /dev/md6
    btrfs-progs v4.0
    See http://btrfs.wiki.kernel.org for more information.
    
    Performing full device TRIM (927.01GiB) ...
    Label:              (null)
    UUID:               23791464-5f5e-4bc4-8396-cb9919a42ea4
    Node size:          16384
    Sector size:        4096
    Filesystem size:    927.01GiB
    Block group profiles:
      Data:             single            8.00MiB
      Metadata:         DUP               1.01GiB
      System:           DUP              12.00MiB
    SSD detected:       no
    Incompat features:  extref, skinny-metadata
    Number of devices:  1
    Devices:
       ID        SIZE  PATH
        1   927.01GiB  /dev/md6
  8. Reboot the Synology
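In step 6 I picked md6 by eyeballing /proc/mdstat. If you want to script that choice, a small helper can compute the lowest unused md index. This is a sketch of my own (next_free_md is a hypothetical function, not a DSM tool); it reads mdstat-format text on stdin:

```shell
# next_free_md: print the lowest /dev/mdN name not already listed
# in mdstat-format input. Hypothetical helper, not part of DSM.
next_free_md() {
    local used n=0
    # Collect the N from every line like "md6 : active raid1 ..."
    used=$(awk '/^md[0-9]+ :/ {sub(/^md/, "", $1); print $1}')
    # Count upward until we hit an index not in the used list
    while echo "$used" | grep -qx "$n"; do
        n=$((n + 1))
    done
    echo "md$n"
}
```

On my box, `next_free_md < /proc/mdstat` prints md6, matching the manual count above.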

Afterwards, create a shared folder on the new volume from the web UI.
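For reference, the whole procedure condenses to three commands plus a reboot. The sketch below is my own wrapper around the steps above (the device path and md number are assumptions; check /proc/mdstat for your own next free md). Since these commands destroy all data on the target disk, the script only prints them unless RUN=1 is set:

```shell
#!/bin/sh
# Sketch of the NVMe-as-volume procedure from this post.
# DESTRUCTIVE when RUN=1: wipes the target disk. Dry-run by default.
DEV=${DEV:-/dev/nvme0n1}   # target NVMe disk (assumption: first NVMe device)
MD=${MD:-/dev/md6}         # next free md device (assumption; check /proc/mdstat)

run() {
    echo "+ $*"
    [ "${RUN:-0}" = "1" ] && "$@"
}

run synopartition --part "$DEV" 12   # Synology partition layout, version 7
run mdadm --create "$MD" --level=1 --raid-devices=1 --force "${DEV}p3"   # will prompt for confirmation
run mkfs.btrfs -f "$MD"              # DSM storage pools here use btrfs
echo "Now reboot DSM and create a shared folder on the new volume."
```

In dry-run mode it just echoes each command prefixed with "+", so you can review before committing.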

Performance evaluation

Conclusion: the board's I/O is the bottleneck, so an NVMe drive is largely wasted here.

NVMe SSD 1 TB

binss@binss-NAS:/volume5/Nvme$ dd if=/dev/zero of=test bs=1M count=10000 oflag=direct
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 32.8041 s, 320 MB/s

sh-4.4# dd if=test of=/dev/null bs=1M count=10000 iflag=direct
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 26.2564 s, 399 MB/s

hdparm -Tt /dev/nvme0n1
/dev/nvme0n1:
 Timing cached reads:   3898 MB in  2.00 seconds = 1949.05 MB/sec
 Timing buffered disk reads: 1294 MB in  3.00 seconds = 430.90 MB/sec

SATA SSD 256 GB

sh-4.4# dd if=/dev/zero of=test bs=1M count=10000 oflag=direct
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 43.338 s, 242 MB/s

sh-4.4# dd if=test of=/dev/null bs=1M count=10000 iflag=direct
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 34.0311 s, 308 MB/s

hdparm -Tt /dev/sda
/dev/sda:
 Timing cached reads:   4100 MB in  2.00 seconds = 2050.70 MB/sec
 Timing buffered disk reads: 1030 MB in  3.01 seconds = 342.62 MB/sec

SATA HDD 14 TB

sh-4.4# dd if=/dev/zero of=test bs=1M count=10000 oflag=direct
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 59.5322 s, 176 MB/s

sh-4.4# dd if=test of=/dev/null bs=1M count=10000 iflag=direct
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 57.2196 s, 183 MB/s

hdparm -Tt /dev/sdb
/dev/sdb:
 Timing cached reads:   3768 MB in  2.00 seconds = 1884.16 MB/sec
 Timing buffered disk reads: 640 MB in  3.00 seconds = 213.12 MB/sec
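Note that dd reports decimal megabytes (10^6 bytes) per second. To sanity-check any of the figures above, divide bytes copied by elapsed seconds; the tiny helper below (mbps is my own convenience function, nothing dd-specific) reproduces dd's numbers:

```shell
# mbps BYTES SECONDS: print throughput in decimal MB/s, the unit dd reports.
mbps() {
    awk -v b="$1" -v s="$2" 'BEGIN { printf "%.0f MB/s\n", b / s / 1000000 }'
}
```

For example, `mbps 10485760000 32.8041` reproduces the 320 MB/s NVMe write figure.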

References

https://www.reddit.com/r/synology/comments/a7o44l/guide_use_nvme_ssd_as_storage_volume_instead_of/?utm_source=amp&utm_medium=&utm_content=comments_view_all