NFS
NFS is commonly used in one of these ways:
- Dovecot is run on a single computer.
- Dovecot is run on multiple computers, and users are redirected more or less randomly to different computers.
- Dovecot is run on multiple computers, and each user is assigned a specific computer that is used whenever possible.
You should try to avoid the second setup; see below.
Dovecot configuration
Multi-server setup that tries to flush NFS caches:
  mmap_disable = yes
  dotlock_use_excl = no # only needed with NFSv2; NFSv3+ supports O_EXCL and it's faster
  mail_nfs_storage = yes # v1.1+ only
  mail_nfs_index = yes # v1.1+ only
High-performance NFS setup with indexes on a local disk (see below for benefits):
  mmap_disable = no
  dotlock_use_excl = yes # You most likely have NFSv3
  mail_nfs_storage = yes # v1.1+ only
  mail_nfs_index = no # v1.1+ only
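For example, assuming the maildirs live under /nfs/mail and each server has a local /var/indexes directory (both paths are only placeholders), the indexes can be pointed at the local disk with the INDEX parameter of mail_location:
  mail_location = maildir:/nfs/mail/%u/Maildir:INDEX=/var/indexes/%u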
Common issues
Clock synchronization
Run ntpd on the NFS server and on all the NFS clients to make sure their clocks stay synchronized. If the clocks are more than about one second apart from each other and multiple computers access the same mailbox simultaneously, you may get errors.
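To verify synchronization, the peer offsets can be checked on each host (this assumes the ntpq utility from the ntp package is installed; offsets are shown in milliseconds and should stay far below 1000):
  ntpq -p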
NFS caching problems
NFS caching is a big problem when multiple computers are accessing the same mailbox simultaneously. The best fix is to prevent it from happening: configure your setup so that a user is always redirected to the same server (unless it's down). This also means that mail deliveries must be done by the same server, or alternatively the delivery agent shouldn't update the index files.
Dovecot v1.1+ flushes NFS caches when needed if you set mail_nfs_storage=yes, but unfortunately this doesn't work 100% reliably, so you can still get random errors.
Disabling the NFS attribute cache helps a lot in getting rid of caching-related errors, but it makes performance MUCH worse and increases the load on the NFS server. It can usually be done with the actimeo=0 or noac mount option.
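For example, on a Linux client the attribute cache can be turned off with a mount option like this (server name and paths are only placeholders):
  # Disable the NFS attribute cache for this mount; expect a performance hit
  mount -t nfs -o actimeo=0 nfsserver:/export/mail /var/mail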
Index files
As described above, it's better to redirect users to the same server whenever possible. If you do this, it might also be a good idea to keep the index files stored locally on that server. If a user occasionally gets redirected to another server, the indexes are simply created locally there; this isn't a problem. However, you might want to create a cronjob to delete old index directories.
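A minimal sketch of such a cronjob, assuming the local indexes live under /var/indexes and that 30 days is an acceptable age limit (Dovecot recreates deleted index files transparently):
  # Delete index files that haven't been modified in 30 days
  0 4 * * * find /var/indexes -type f -mtime +30 -delete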
If you choose to keep the index files on NFS, you'll need to set mmap_disable=yes. Both mmap_disable and storing the indexes on NFS result in a notable performance hit. If you're not running lockd you'll also have to set lock_method=dotlock, which degrades performance further. Note that many NFS installations have problems with lockd. If you start getting all kinds of locking-related errors, check whether the problems go away with dotlocking.
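As a sketch, an indexes-on-NFS configuration without a working lockd might look like this (combine it with the NFS settings shown earlier):
  mmap_disable = yes
  lock_method = dotlock # only if lockd isn't available or is unreliable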
Indexes and POP3 (v1.0)
The issue below is fixed in v1.1 and later:
If you keep index files on local disks and you move a large number of POP3 users with a lot of mails to a different server, you'll most likely see a huge increase in disk I/O. This is because Dovecot needs to get the "virtual size" of all the messages, which requires reading the full contents of every message. The sizes are then stored in the dovecot.index.cache file so that the messages don't have to be read again the next time.
To avoid this disk I/O increase, you can copy the users' dovecot.index and dovecot.index.cache files to the new server before the actual move is done.
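For example, a single user's index files could be pre-copied with rsync before the switchover (the hostnames and the /var/indexes layout are only placeholders):
  # Pre-copy one user's index files to the new server before the move
  rsync -a /var/indexes/user1/ newserver:/var/indexes/user1/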
With Maildir you can also use the special ,W=<vsize> field in the filename. See MailboxFormat/Maildir.
Single computer setup
This doesn't really differ from keeping the mails stored locally. For better performance you should keep the index files on a local disk.
Random redirects to multiple servers
You should avoid this setup whenever possible. Besides the NFS caching problems described above, mailbox contents can't be cached as effectively in memory either. This is more problematic with mbox than with maildir, but in both cases, if a client is redirected to a different server when reconnecting, the new server has to read data over NFS into memory that the original server might already have had cached.
If you choose to use this setup, at the very least try to make connections from a single IP address get redirected to the same server. This avoids the biggest problems with clients that use multiple connections.
Per-user redirects to multiple servers
This method performs a lot better than random redirects. It maximizes the caching possibilities and prevents the problems caused by simultaneous mailbox access.
New mail deliveries are often still handled by a different computer. This isn't a problem with maildir as long as you're not using Dovecot LDA (i.e. the dovecot-uidlist file and the index files shouldn't get updated). It shouldn't be a problem with mbox either, as long as you're using fcntl locking.
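As a sketch, fcntl locking for mbox is selected with the mbox_*_locks settings (this assumes lockd works reliably in your NFS environment):
  mbox_read_locks = fcntl
  mbox_write_locks = fcntl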
NFS clients
Here's a list of kernels and operating systems that have been tried as NFS clients:
- FreeBSD: has a caching bug which causes problems when a mailbox is accessed from different computers at the same time.
- Linux 2.6.16: utime() is buggy, but a fix exists. With the fix applied, utime() seems to work perfectly. High-volume systems may experience VFS lock sync issues; for these, the complete patchset at http://www.linux-nfs.org/Linux-2.6.x/2.6.16/linux-2.6.16-NFS_ALL.dif is suggested and appears to work well in production.
- Linux 2.6.18: Seems to have intermittent caching issues. The same .config with 2.6.20.1 has been tested and appears to work well.
- Linux 2.4.8: Has caching problems; it's not known whether they can be solved.
- Solaris: If it's completely broken, see http://dovecot.org/list/dovecot/2006-December/018145.html
The Connectathon test suite is very useful for verifying a healthy NFS setup; see http://www.connectathon.org/nfstests.html
Misc notes
- Dovecot doesn't care about the root_squash setting; all the root-owned files are typically in /var/run, which is not on NFS.
- In an environment using Debian (2.6.18) clients with Isilon NFS cluster nodes, the following mount options were found to be the most successful: rsize=32768,wsize=32768,hard,fg,lock,nfsvers=3,tcp,retrans=0,actimeo=0 (see the example fstab entry at the end of this list).
- As explained above, actimeo=0 will make performance bad. With v1.1, use the mail_nfs_* settings instead.
- To learn more about NFS caching and other issues, mostly from a programmer's point of view, see the NFS Coding HOWTO.
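As a sketch, the Isilon mount options mentioned above could appear in /etc/fstab roughly like this (server name and mount point are only placeholders):
  isilon:/ifs/mail /var/mail nfs rsize=32768,wsize=32768,hard,fg,lock,nfsvers=3,tcp,retrans=0,actimeo=0 0 0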