Distributed File Systems can be incredibly useful in virtually any organization, but they’re often implemented without much planning. The result is a range of issues, from duplication errors to a large replication backlog. The following best practices will help keep your system streamlined.
What is a Distributed File System?
As the name suggests, a Distributed File System (DFS) is a file system spread across multiple locations or file servers. With a DFS, a program can access or store remote files as if they were local, meaning users can reach them from any computer on the network.
5 Best Practices When Setting Up a DFS
Setting up a streamlined DFS is harder than it looks, but if you deploy the right configuration from the start, you’ll save yourself a lot of time, money, and stress when running it in your organization.
1. DFS Server Sizing
A DFS server can be physical or virtual, but virtual is typically the best option for a small to medium-sized business. Either way, the CPU should be 64-bit with at least four cores (2.5 GHz), and at least 8 GB of memory should be installed, increased as needed.
RAID 10 with 15K RPM disks is recommended for storage, and data volumes should have enough space to hold the replicated data plus staging data. Finally, Gigabit Ethernet is best. If you’re setting up a DFS namespace server only, rather than a replication server, it can run on lower-capacity hardware.
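For the staging-data part of that sizing, Microsoft’s rule of thumb is to make each replicated folder’s staging quota at least as large as its 32 largest files combined. A minimal sketch of that calculation (the folder path is a hypothetical example):

```python
from pathlib import Path

def staging_quota_mb(folder: str, largest_n: int = 32) -> int:
    """Suggest a DFSR staging quota (in MB) as the combined size of the
    N largest files in the replicated folder -- Microsoft's rule of thumb
    for avoiding staging-area churn. Returns at least 1 MB."""
    sizes = sorted(
        (f.stat().st_size for f in Path(folder).rglob("*") if f.is_file()),
        reverse=True,
    )
    total_bytes = sum(sizes[:largest_n])
    return max(1, -(-total_bytes // (1024 * 1024)))  # round up to whole MB

# Example (hypothetical path on the replication member):
# staging_quota_mb(r"D:\Data")
```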
2. Replication Schedule
Businesses often run into errors when checking their DFS Replication status. Many replication errors happen because replication spans a long distance or the deployment has outgrown its scale. Most of these common problems can be solved with a replication schedule.
For WAN locations, initial replication should only be allowed during off-business hours, and read-only (R/O) replicas over the WAN should be scheduled to replicate after business hours at full bandwidth. If you must replicate during business hours, consider throttling the bandwidth for that window.
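DFSR represents a day’s schedule as 96 fifteen-minute slots, each assigned a bandwidth level from 0 (no replication) up to full. A rough sketch of building such a schedule string, full bandwidth off-hours and throttled during business hours; the hour boundaries and the throttled level are illustrative assumptions:

```python
def dfsr_day_schedule(business_start: int = 8, business_end: int = 18,
                      business_level: str = "4",
                      off_hours_level: str = "F") -> str:
    """Build a DFSR-style daily schedule: 96 quarter-hour slots, each a
    hex digit from 0 (no replication) to F (full bandwidth). Here we
    throttle during business hours and open up the rest of the day."""
    slots = []
    for quarter in range(96):
        hour = quarter // 4
        in_business_hours = business_start <= hour < business_end
        slots.append(business_level if in_business_hours else off_hours_level)
    return "".join(slots)

# dfsr_day_schedule() returns a 96-character string such as "FFF...444...FFF"
```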
3. DFS Configuration
It’s a good idea to design your DFS topology around your storage requirements, available network bandwidth, data access patterns, backup strategy, and user base. DFS supports multiple topologies, such as full mesh and hub and spoke, and mesh is often the most useful.
With a mesh topology, all servers can replicate with each other, which is helpful when failover or data redundancy is required. However, if you want tighter security and control, a hub-and-spoke topology is best: spoke servers replicate with the hub server but not with each other.
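The trade-off between the two topologies shows up directly in connection counts: a full mesh of n servers maintains n × (n − 1) one-way connections, while hub and spoke needs only 2 × (n − 1). A quick illustration:

```python
def one_way_connections(servers: int, topology: str) -> int:
    """Number of one-way replication connections to maintain.
    Full mesh: every server replicates with every other -> n * (n - 1).
    Hub and spoke: each spoke replicates only with the hub -> 2 * (n - 1)."""
    if topology == "mesh":
        return servers * (servers - 1)
    if topology == "hub-and-spoke":
        return 2 * (servers - 1)
    raise ValueError(f"unknown topology: {topology}")

# For 10 servers: mesh -> 90 connections, hub and spoke -> 18.
```

The gap widens quickly, which is why hub and spoke scales better for large branch-office deployments, at the cost of the hub being a single point of failure.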
4. Source Data Permissions
To start, DFS isn’t dependent on file system permissions. It runs under the NT AUTHORITY\SYSTEM account, which carries the SeBackupPrivilege and SeRestorePrivilege rights. With those rights, DFS can read and copy data from one location and write it to another.
The benefit of this is that replication works regardless of the access rights set on folders and files (open files being the exception). For a brand-new implementation, though, the root folder must first be set up with the right Share and NTFS permissions.
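As a sketch of that last step, the share can be created with `net share` and the NTFS rights granted with `icacls`. The share name, path, and group below are hypothetical placeholders, and this sketch only builds the command lines so they can be reviewed before running them on the file server:

```python
import subprocess

def permission_commands(share: str, path: str, group: str) -> list[list[str]]:
    """Commands to publish the DFS root folder with matching Share and
    NTFS permissions: `net share` creates the share and grants the group
    change access; `icacls` grants the group modify rights that inherit
    to subfolders and files via the (OI)(CI) flags."""
    return [
        ["net", "share", f"{share}={path}", f"/GRANT:{group},CHANGE"],
        ["icacls", path, "/grant", f"{group}:(OI)(CI)M"],
    ]

# On the file server (hypothetical names), uncomment to apply:
# for cmd in permission_commands("Data", r"D:\Data", r"CONTOSO\File Users"):
#     subprocess.run(cmd, check=True)
```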
5. Backup and Restore
In case of accidental deletion, it’s important to have a backup in place. The Active Directory system state stores your configuration and should be backed up to an external hard drive or cloud storage. You should also back up certain registry keys that hold some of the most critical data.
The DFS namespace configuration should be exported, as should the Shares registry key on each DFS namespace server; save each server’s key in a separate location. How you restore these backups when needed depends on the type of backup you have.
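A sketch of the commands those three backups map to, built as strings so they can be reviewed before running on a namespace server; the namespace, backup directory, and drive letter are hypothetical placeholders:

```python
def dfs_backup_commands(namespace: str, backup_dir: str,
                        target_drive: str) -> list[str]:
    """Return the commands for the three backups described above:
    system state, DFS namespace export, and the Shares registry key."""
    shares_key = r"HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Shares"
    return [
        # Active Directory system state to an external drive:
        f"wbadmin start systemstatebackup -backupTarget:{target_drive}",
        # DFS namespace configuration to an XML file:
        f"dfsutil root export {namespace} {backup_dir}\\namespace.xml",
        # Shares registry key on each namespace server:
        f'reg export "{shares_key}" {backup_dir}\\shares.reg /y',
    ]

# Example (hypothetical names):
# dfs_backup_commands(r"\\contoso.com\files", r"C:\dfs-backup", "E:")
```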