
Ceph or GlusterFS Redundancy

Subject:

What type of redundancy does QuantaStor have when using Ceph or GlusterFS in the example of a storage brick going offline?

Does Ceph or GlusterFS provide redundancy other than replication of the entire Ceph or Gluster "cluster node"?

 

Details:

Our Scale-out File (Gluster) and Scale-out Block (Ceph) technologies provide redundancy that protects against the failure of a hardware component or of a complete node.

Our Scale-out Block (Ceph) solution uses OSDs stored on Storage Pools on the individual nodes. The OSDs are then combined into a Ceph Storage Pool, which can be configured with a redundancy (replica count) of 2 or 3 so that two or three copies of the data are stored across the Ceph Pool. This allows any component within a node, or the entire node, to fail with no impact to client access, because a usable copy of the data remains available.
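
For reference, below is a minimal sketch of what the equivalent replication settings look like at the native Ceph command line. QuantaStor configures this for you through its own management interface, and the pool name and placement-group count used here are hypothetical:

    # Create a replicated pool (128 placement groups is only an example value)
    ceph osd pool create qs-block-pool 128 replicated

    # Keep three copies of every object, spread across the cluster
    ceph osd pool set qs-block-pool size 3

    # Continue serving I/O as long as at least two copies remain available
    ceph osd pool set qs-block-pool min_size 2

    # Show how the OSDs are distributed across the nodes
    ceph osd tree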

The Scale-out File (Gluster) solution uses Bricks stored on Storage Pools on the QuantaStor nodes and likewise provides a replica count of 2 or 3, as well as erasure coding, when you create a Gluster Volume out of those Bricks. These features ensure that a hardware or node failure will not affect data availability or client access.
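
As an illustration, the native Gluster commands below sketch both approaches: a replica 3 volume and a dispersed (erasure-coded) volume built from Bricks on three nodes. The volume names, node names, and Brick paths are hypothetical, and in a QuantaStor deployment the Gluster Volume is created through the QuantaStor interface rather than by hand:

    # Three-way replicated volume: a full copy of the data on each node
    gluster volume create qs-file-vol replica 3 \
        node1:/bricks/brick1 node2:/bricks/brick1 node3:/bricks/brick1
    gluster volume start qs-file-vol

    # Dispersed (erasure-coded) volume that tolerates the loss of one brick
    gluster volume create qs-ec-vol disperse 3 redundancy 1 \
        node1:/bricks/brick2 node2:/bricks/brick2 node3:/bricks/brick2
    gluster volume start qs-ec-vol

    # Verify the volume layout
    gluster volume info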

Note that you can create multiple Storage Pools on the various nodes, so you could have Scale-out File and Scale-out Block solutions co-existing on the same hardware.

 
