Breaking L2 Barriers for Storage Clusters

Taking the scale-out paradigm to the next level, the Datera 2.2 release supports native L3 network integration as part of its scale-out control plane.

Cloud solutions are built on the foundation that resources are aggregated into scalable pools. For scale-out distributed systems, the datacenter network effectively becomes the “backplane.” Typical deployments require the flexibility to scale storage nodes out beyond a single rack, primarily for two reasons: first, to build inter-rack redundancy to handle rack failures, and second, to drive intra-rack compute “locality,” reducing storage traffic across the network core or spine. In practice, a desired deployment option for large enterprises and cloud service providers has been to scale storage nodes across multiple racks and provide “rack-local storage” to try to contain storage traffic within racks. Architecturally, however, nothing prevents compute instances in remote racks from accessing this storage.
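The rack-redundancy idea above can be sketched as a simple placement policy: spread a volume’s replicas across distinct racks so that no single rack failure removes all copies. This is a minimal illustration, not Datera’s actual placement algorithm; the cluster layout and node names are hypothetical.

```python
# Hypothetical sketch of rack-aware replica placement: pick nodes
# round-robin across racks so replicas land in distinct racks.
from itertools import cycle

def place_replicas(nodes_by_rack, num_replicas):
    """Choose num_replicas nodes, rotating across racks."""
    racks = cycle(sorted(nodes_by_rack))
    placement = []
    used = {rack: 0 for rack in nodes_by_rack}   # nodes consumed per rack
    while len(placement) < num_replicas:
        rack = next(racks)
        nodes = nodes_by_rack[rack]
        if used[rack] < len(nodes):              # rack still has free nodes
            placement.append((rack, nodes[used[rack]]))
            used[rack] += 1
    return placement

cluster = {
    "rack-a": ["node-1", "node-2"],
    "rack-b": ["node-3", "node-4"],
    "rack-c": ["node-5"],
}
print(place_replicas(cluster, 3))   # one replica per rack
```

With three replicas and three racks, each rack receives exactly one copy, so losing any one rack still leaves two replicas reachable.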

In cloud deployments, IP-based storage (iSCSI) is becoming ubiquitous. iSCSI is a simple method for compute nodes to connect to storage, and network-based access control enforcement is another commonly desired capability. Datera Elastic Data Fabric provides the notion of virtual IP addresses for the iSCSI target port of a provisioned LUN. These virtual IP addresses have the flexibility to float among the various Datera nodes, either due to cluster-wide load balancing or when node failures occur. From the compute perspective, such activities are transparent and have no impact on the compute environment.
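The floating-VIP behavior can be modeled in a few lines: initiators always dial the virtual IP, while the cluster remaps which node currently hosts it. This is an illustrative sketch under assumed names and addresses, not Datera’s implementation.

```python
# Illustrative model of a floating virtual IP: the initiator-facing
# address never changes; only the owning node does.

class VipManager:
    def __init__(self, vip, nodes):
        self.vip = vip
        self.nodes = list(nodes)     # healthy nodes, in preference order
        self.owner = self.nodes[0]   # node currently hosting the VIP

    def node_failed(self, node):
        """Fail a node; float the VIP to the next healthy node if needed."""
        self.nodes.remove(node)
        if self.owner == node:
            self.owner = self.nodes[0]

    def resolve(self):
        """What the initiator sees: the VIP itself, unchanged."""
        return self.vip

vips = VipManager("10.0.0.100", ["node-1", "node-2", "node-3"])
vips.node_failed("node-1")
assert vips.resolve() == "10.0.0.100"   # initiator config untouched
assert vips.owner == "node-2"           # VIP floated to a survivor
```

The key property is that failover is invisible at the initiator: its configured portal address is stable even as the backing node changes.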

The traditional problem with iSCSI target port failover in classic L2 network topologies is that the virtual IP address can’t “migrate” outside of the rack in which it was originally provisioned, as that would mean crossing outside of the L2 subnet. The native iSCSI protocol addresses this with iSCSI redirects: on a node failure, the connection is pointed to a name service that advertises the appropriate target port IP address, even across different L2 subnets. While this solution is transparent to the compute nodes, network security and access control solutions might be impacted by the “change” in the target port IP address. Datera supports iSCSI redirects for target discovery and failure handling in data center environments where network-based security and access control enforcement are not in play.
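The redirect flow can be sketched as a login exchange: the old portal answers a login with a redirection status and the current target address, and the initiator retries there. In the iSCSI login response this corresponds to Status-Class 1 (redirection) with a `TargetAddress` key (per RFC 7143); the IQN and addresses below are made up, and the in-memory `portal_map` stands in for the cluster’s real target-location state.

```python
# Minimal sketch of the iSCSI login-redirect flow.

REDIRECT = 0x01  # iSCSI login Status-Class "redirection" (RFC 7143)

# Hypothetical cluster state: which portal currently serves each target.
portal_map = {"iqn.2016-01.example:vol1": "10.1.2.10:3260"}

def login(portal, iqn):
    """Simulated login response from a portal."""
    current = portal_map[iqn]
    if portal != current:
        # Target moved: redirect the initiator to the current portal.
        return {"status_class": REDIRECT, "TargetAddress": current}
    return {"status_class": 0x00}    # success

def initiator_connect(portal, iqn):
    """Initiator side: follow a single redirect if one is returned."""
    resp = login(portal, iqn)
    if resp["status_class"] == REDIRECT:
        resp = login(resp["TargetAddress"], iqn)
    return resp

# Target floats to a node in another subnet; the old portal redirects.
portal_map["iqn.2016-01.example:vol1"] = "10.2.7.20:3260"
resp = initiator_connect("10.1.2.10:3260", "iqn.2016-01.example:vol1")
assert resp["status_class"] == 0x00
```

Note the point the article makes: the initiator ends up talking to a different IP address, which is exactly why network-layer ACLs keyed on the target address can be disrupted by this mechanism.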

Some data centers “scale” their L2 cluster across multiple racks by using some form of overlay technology, such as VXLAN, NVGRE, or GENEVE. Such solutions have their advantages and disadvantages, especially regarding how BUM (broadcast, unknown-unicast, and multicast) traffic is handled, and how the overlay and underlay interact at scale. Datera EDF fits seamlessly into such overlay network environments; it doesn’t depend on any unique networking services (such as LLDP, DNS, DHCP, etc.) for normal operations.

In modern flat networks, proliferating “floating” network endpoints necessitate a matching flat network namespace. This effectively requires pulling native L3 network support into the network leaf nodes. For instance, in a container environment, it is common for each node to have its own /31 network with tens to hundreds of containers that can be moved around dynamically (each node is a small data center in its own right). Datera EDF nodes can act as native L3 network endpoints, and thereby seamlessly participate in all route changes.
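The /31 mentioned above is worth unpacking: per RFC 3021, a /31 has no network or broadcast address, so both of its addresses are usable, which suits a point-to-point link between a node and its top-of-rack switch. Endpoints behind the node are then reached via routes rather than a shared L2 segment. A small demonstration with Python’s standard `ipaddress` module (addresses are illustrative):

```python
# RFC 3021 /31 links: both addresses are usable hosts, ideal for
# node-to-switch point-to-point links.
import ipaddress

link = ipaddress.ip_network("192.0.2.0/31")
hosts = list(link.hosts())      # for a /31, both addresses are hosts
assert len(hosts) == 2

# Container/target IPs live *behind* the node and are reached via
# routes, not via the L2 segment, so they can move without
# re-subnetting the fabric.
container_net = ipaddress.ip_network("10.200.5.0/24")
assert ipaddress.ip_address("10.200.5.17") in container_net
```

Because reachability to those endpoints is purely a routing decision, an endpoint (or a storage target IP) can move between nodes with only a route update, not an L2 reconfiguration.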

To implement active L3 network participation, each Datera EDF node acts as a routing protocol speaker, advertising its IP addresses to the L3 network. (A node announcing its own addresses is a route advertiser rather than a route reflector in the strict BGP sense.) When a target port IP address has to “migrate” across racks, e.g., because of node failures or cluster-wide load balancing, the destination Datera node advertises the IP address to the broader L3 network. This way, Datera EDF nodes actively, instantly, and seamlessly participate in all network route changes.
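The mechanism above can be sketched as host-route advertisement: when a target VIP moves, its new owner announces a more-specific route (a /32) for it, and the fabric’s routing table updates the next hop. Real deployments would do this with a routing protocol such as BGP; the dictionary below stands in for the fabric RIB, and all addresses are hypothetical.

```python
# Rough model of host-route advertisement for a floating target VIP.

fabric_routes = {}   # prefix -> next hop (node's link address to the fabric)

def advertise(prefix, next_hop):
    """The node that now owns the VIP announces a host route for it."""
    fabric_routes[prefix] = next_hop

# VIP initially hosted by a node in rack A...
advertise("10.0.0.100/32", "192.0.2.1")
# ...then floats to a node in rack B, which re-advertises it.
advertise("10.0.0.100/32", "198.51.100.1")

assert fabric_routes["10.0.0.100/32"] == "198.51.100.1"
```

Because the fabric converges on the new next hop, initiators keep dialing the same VIP across rack boundaries, with no dependence on L2 adjacency.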