Ceph Storage driving throughput for Analytics (focused on the Telco industry).

Namaskar (Greetings)!

Many in the cloud industry have heard of Ceph Storage as the storage backend for OpenStack environments. It is also gaining popularity as an object store (thanks to its underlying RADOS architecture). Many reference whitepapers are available that describe its architecture and the performance it can deliver in various configurations.

Today's focus is on a different aspect: Ceph Storage used as standalone block storage for Telco network analytics. Following are a couple of use cases for reference.

1) RBD Block Storage for Linux-based App Servers
(standalone/bare-metal Linux servers or VMs)

Some Telco ISVs run Unified Performance Management software to monitor and analyze telco network performance, and they need to store and analyze a lot of data. They therefore need distributed storage as a backend, connected to multiple app servers, to meet high throughput/IOPS requirements.

Figure 1 below gives an architectural view of providing RBD devices as disks to ordinary Linux servers/virtual machines.

Figure 1 

PR01 through PR08 in Figure 1 use the krbd kernel client to attach RADOS Block Devices (RBD) served by the Ceph cluster. NFS can come either from a SAN/NAS or from Ceph itself, as in the second use case below.
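As an illustration, below is a minimal sketch of how such an RBD image might be provisioned using the official Ceph Python bindings (python-rados/python-rbd). The pool name, image name, and size are hypothetical; the app server (e.g., PR01) would then attach the image with krbd and put a local filesystem on it.

```python
# Minimal sketch: provision an RBD image for an app server.
# Assumes the official Ceph Python bindings (python-rados, python-rbd)
# and a reachable cluster config at /etc/ceph/ceph.conf.
# Pool and image names below are hypothetical.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd')            # default RBD pool
    try:
        size_bytes = 100 * 1024 ** 3             # 100 GiB
        rbd.RBD().create(ioctx, 'pr01-data', size_bytes)
        # The app server would then attach it via the krbd client:
        #   rbd map rbd/pr01-data    -> /dev/rbd0
        #   mkfs.xfs /dev/rbd0 && mount /dev/rbd0 /data
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```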


2) Shared Storage (NFS)

NFS servers run on physical or virtual servers, as dictated by the application and its performance requirements. An NFS server over Ceph RBD can serve Telco applications running on OpenStack VMs, or even in containers, that require a POSIX-compliant shared filesystem. Once CephFS reaches general availability, it can be used instead for better scalability and performance (distributed throughput requirements). This applies to the situations mentioned above, as well as anywhere else an application needs a shared filesystem.

Figure 2 gives an architectural view of NFS over RBD.

Figure 2

There are many applications consuming Ceph RBD this way (using krbd on the app servers to attach RBD images from the Ceph cluster). An NFS gateway is also an option for customers who use object storage but need NFS functionality as well.
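As a rough sketch of the NFS-over-RBD pattern, the snippet below inspects and grows the RBD image that backs an NFS export; the surrounding krbd/mkfs/exportfs steps happen outside Python and are noted as comments. The image name and sizes are hypothetical, and the same python-rbd bindings as above are assumed.

```python
# Minimal sketch: grow the RBD image backing an NFS export.
# The image name 'nfs-backing' is hypothetical.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd')
    try:
        with rbd.Image(ioctx, 'nfs-backing') as image:
            current = image.stat()['size']
            print('current size: %d GiB' % (current // 1024 ** 3))
            image.resize(current + 50 * 1024 ** 3)   # add 50 GiB
            # On the NFS server host (outside Python):
            #   rbd map rbd/nfs-backing      -> /dev/rbd0
            #   mkfs.xfs /dev/rbd0 (first time only) and mount it
            #   export the mount via /etc/exports + exportfs -r
            #   after a resize, grow the filesystem (e.g., xfs_growfs)
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```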

Documenting some of the core advantages of Ceph Storage for reference:
  • SCALABILITY
  1. Multi-petabyte support
  2. Hundreds of nodes
  3. CRUSH algorithm – placement/rebalancing
  4. No single point of failure
  • EFFICIENCY
  1. Standard servers and disks
  2. Erasure coding - reduced footprint
  3. Thin provisioning
  4. Traditional and containerized deployment
  • PERFORMANCE
  1. Client-side caching
  2. Server-side journaling
  3. BlueStore (tech preview)
  • DATA SERVICES
  1. Snapshots, cloning, and copy-on-write (see the sketch after this list)
  2. Global clusters for S3/Swift storage
  3. Disaster recovery for block and object storage
  • APIs & PROTOCOLS
  1. S3, Swift, S3A plug-in
  2. Cinder block storage
  3. NFS (using NFS Gateways)
  4. POSIX (tech preview)
  5. iSCSI (tech preview)
  • SECURITY
  1. Pool-level authentication
  2. Active Directory, LDAP, Keystone v3
  3. At-rest encryption with keys held on separate hosts
  • ADVANTAGES FOR OPENSTACK USERS
  1. Instantaneous booting of 1 or 100s of VMs
  2. Instant backups via seamless data migration between Glance, Cinder, Nova
  3. Tiered I/O performance within single cluster
  4. Multi-site replication for disaster recovery or archiving
  • CEPH CONTAINERIZATION
  1. Alternative vehicle for deploying Red Hat Ceph Storage
  2. Single container image of product available on Red Hat Container Registry
  3. Delivers same capabilities as in traditional package format
  4. Supports customers seeking to standardize orchestration and deployment of infrastructure software in containers with Kubernetes
  • MULTISITE CAPABILITIES
  1. Global object storage clusters with single namespace
  2. Enables deployment of clusters across multiple geographic locations
  3. Clusters synchronize, allowing users to read from or write to the closest one
  4. Multi-site replication for block devices
  5. Replicates virtual block devices across regions for disaster recovery and archival
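To make the snapshot/clone/copy-on-write item above concrete, here is a minimal sketch using the same python-rbd bindings. The image and snapshot names are hypothetical, and the parent image is assumed to have the layering feature enabled (required for cloning):

```python
# Minimal sketch: snapshot -> protect -> copy-on-write clone.
# 'golden-image', 'base', and 'vm-disk-01' are hypothetical names;
# cloning requires the layering feature on the parent image.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd')
    try:
        with rbd.Image(ioctx, 'golden-image') as image:
            image.create_snap('base')       # point-in-time snapshot
            image.protect_snap('base')      # required before cloning
        # The clone shares data with the parent until blocks are written.
        rbd.RBD().clone(ioctx, 'golden-image', 'base', ioctx, 'vm-disk-01')
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```

This copy-on-write mechanism is also what the OpenStack advantage above refers to: Glance images can be cloned into Cinder volumes and Nova disks without full copies, which is why booting 1 or 100s of VMs is near-instantaneous.
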
With deeper integration between Permabit and Ceph, compression and deduplication can become a great asset, delivering value for money through a better TCO for unstructured data storage requirements.

Storage is one of the core IT components, and with unstructured data exploding, more and more enterprises need a highly scalable and fault-tolerant storage backend. This is where software-defined storage like Ceph plays a vital role.

Next time, the focus will be on some specific use cases around OpenStack in the Telco segment.

Dhanyavaad (Thank you)!





