File Systems On Solid State Drives Benchmark

13 Sep 2009 20:57

Recently I bought a solid state drive for my laptop. Everyone has heard that SSDs are faster and better and make your OS boot faster. While that's all true, you have to face the fact that no mature file system available for Linux is really ready for (optimized for) SSDs.

Over the last 20 years, file systems (and kernel-side logic like I/O schedulers and disk buffers) were designed to make the best use of hard disks, that is, magnetic hard disks. Those disks are now quite fast when you read or write a 1 GB file, but when you need to read 2000 small files scattered across different physical locations on the disk, their performance degrades.

Rotating disks

Over the years, a lot of effort (both in software and in hardware) went into making as few disk seeks as possible. I've heard there are optimizations in the kernel along the lines of: if you need to read a block that is 20 sectors past the current position, don't seek; read the 19 useless sectors and then the meaningful one. I don't know whether this is true, and if so, whether it's an optimization in the Linux kernel or in some particular file system.

Solid State Drives

OK, so the file systems out there perform really well on rotating magnetic disks. What about solid state, non-rotating disks? Do the optimizations described above help them too? No. Solid state drives are not as fast as magnetic disks for sequential reads and writes, but they are much, much better at random access. The seek time is approximately zero (actually well under 0.1 ms, while magnetic disks have seek times of about 5 ms).

SSD + File Systems?

On the other hand, there are some new file systems that are meant to run on SSDs, or are at least SSD-aware: namely NILFS2 and btrfs. Unfortunately, neither of them has a stable on-disk format yet. This means that partitions created with the current version of NILFS2 or btrfs may not be readable by future versions of these file systems, so keeping important data on them is not a great idea.

Looking for a benchmark

So here I am with my expensive SSD in hand, facing the serious problem of choosing the right file system to keep my data safe and fast to access. I searched for benchmarks comparing different file systems: the old XFS, ReiserFS and ext2/3, the new ext4, and the SSD-oriented btrfs and NILFS2. The number of benchmarks of these file systems on SSDs that I found is zero. So it's time to make my own.

My benchmark

I want to benchmark the following file systems:

  • ext2, ext3
  • ext4
  • xfs
  • reiserfs, reiser4
  • nilfs
  • btrfs
  • zfs (via fuse)

I will use my new OCZ Agility 120 GB SSD disk for this.

Since this option is useful even on magnetic disks, I will add noatime to the mount options for every file system I test.
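
For example (device and mount point names here are only placeholders; the SSD may show up under a different name):

    mount -o noatime /dev/sdb1 /mnt/bench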

Also, I will try the following tweaks suggested by various sites (a combined sketch of applying them follows the list):

  • using noop scheduler for disk IO: echo noop > /sys/block/sdb/queue/scheduler
  • turning on and off write-back caching: hdparm -W1 /dev/sda, hdparm -W0 /dev/sda
  • discourage swapping (so that application data stays in RAM and is not swapped out in favour of disk buffers; reading from the SSD is damn fast anyway): sysctl -w vm.swappiness=1
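
Applying all three tweaks before a run might look like this (assuming the SSD is /dev/sdb; adjust the device name to match your setup):

    echo noop > /sys/block/sdb/queue/scheduler   # no request reordering; seeks are nearly free on an SSD
    hdparm -W1 /dev/sdb                          # enable the drive's write-back cache
    # hdparm -W0 /dev/sdb                        # ...or disable it for the comparison run
    sysctl -w vm.swappiness=1                    # prefer keeping application pages in RAM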

For each setting, I'll run a test with bonnie++ and try to make nice graphs out of the data.
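
One possible bonnie++ invocation (the parameters are just an illustration, not necessarily the exact command I will use): -d picks the test directory on the mounted file system, -s sets the file size for the sequential tests (it should exceed RAM so the page cache doesn't hide the disk), -n sets the number of small files, in multiples of 1024, for the create/stat/delete tests, and -u is needed when running as root:

    bonnie++ -d /mnt/bench -s 8g -n 128 -u nobody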

If you want me to test more settings or file systems or have some suggestion about the benchmarking tools, feel free to leave a comment.

UPDATE: I will also perform one more test: the time it takes to launch Firefox, Claws Mail and Pidgin from encfs-encrypted .mozilla, .purple and .claws-mail directories on top of the different file systems. This is a very common task for me (they all start automatically after I log in).
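
As a very rough proxy for what those applications do at startup (paths here are only examples, not my actual layout, and this is not the measurement itself), one could time a cold-cache read of the decrypted profile through encfs:

    encfs ~/.encfs/mozilla ~/.mozilla            # mount the encrypted directory over the profile
    sync
    echo 3 > /proc/sys/vm/drop_caches            # drop page cache so the read really hits the disk
    time find ~/.mozilla -type f -exec cat {} + > /dev/null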
