What are the differences between using the Gluster Native Client and CIFS/NFS for client access?
The Gluster Native Client can efficiently use a single network, whereas CIFS/NFS deployments should use a front-end client network and a back-end network for Gluster Volume communication.
This is because the Gluster Native Client accesses the Gluster bricks that contain the data directly.
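As a minimal sketch of native-client access, a Gluster volume can be mounted directly from any node that holds the volume layout. The hostnames (`gluster-node1`, `gluster-node2`) and volume name (`gvol`) here are placeholders, not names from this deployment:

```shell
# Mount a Gluster volume named "gvol" with the native (FUSE) client.
# The client fetches the volume layout and then talks to the bricks directly.
mount -t glusterfs gluster-node1:/gvol /mnt/gvol

# Equivalent /etc/fstab entry; backupvolfile-server lets the client fetch
# the volume layout from a second node if the first is unreachable:
# gluster-node1:/gvol  /mnt/gvol  glusterfs  defaults,backupvolfile-server=gluster-node2  0 0
```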
For CIFS/NFS we mount the Gluster volume using the Gluster client on the QuantaStor nodes and provide CIFS/NFS access to that mount via Samba or the nfsd service. In this case you can think of the QuantaStor node you connect to over those protocols as a gateway in front of the Gluster client.
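To illustrate the gateway pattern, a client would mount the export served by a single QuantaStor node rather than the Gluster volume itself. The hostname, share path, and credentials below are hypothetical placeholders:

```shell
# NFS: the client talks only to the gateway node; the gateway's own
# Gluster client then distributes the I/O to the other nodes.
mount -t nfs qs-node1:/export/gvol /mnt/gvol

# CIFS/Samba equivalent (share name and username are placeholders):
mount -t cifs //qs-node1/gvol /mnt/gvol -o username=admin
```

Note that all client traffic funnels through the one gateway node it is mounted from, which is why the second (back-end) network matters for this topology.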
This means there is one I/O stream over the client network to deliver the data to the CIFS/NFS services, and then another I/O stream from the QuantaStor node serving those protocols to distribute and replicate the data to the other QuantaStor nodes.
When using NFS/CIFS, each network interface should be on a separate network. Combining client access (NFS/CIFS) and the Gluster back-end traffic on a single network is not optimal and can cause bottlenecks.
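One way to keep the two traffic types apart is to form the Gluster trusted pool using hostnames that resolve on the back-end network, so replication never crosses the client network. The subnets, hostnames, and brick paths below are illustrative assumptions only:

```shell
# Hypothetical two-network layout (all addresses are placeholders):
#   Client network   10.0.1.0/24 - NFS/CIFS traffic to the gateway nodes
#   Back-end network 10.0.2.0/24 - Gluster replication between nodes
#
# Probing peers by their back-end hostnames pins Gluster volume traffic
# to the back-end network:
gluster peer probe qs-node2-backend

# Create a replicated volume using the back-end hostnames for the bricks:
gluster volume create gvol replica 2 \
    qs-node1-backend:/bricks/b1 \
    qs-node2-backend:/bricks/b1
```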
This is not a concern if you are deploying with GlusterFS native clients on Linux.
Note: Some back-end operations are performed between the Gluster bricks themselves, outside of any client, such as rebuilding data when a brick is offline or replaced, or migrating data when expanding the Gluster Volume.
It can be beneficial to have a separate back-end network for these operations. However, in platter-based configurations they will typically bottleneck first on hard disk throughput, so a 10GbE network would not be the bottleneck for a given node.
We have found that some configurations using NVMe or SSD may need multiple networks; however, for those configurations we typically recommend 40GbE or faster networking.
Please review the link below for additional detail on this subject: