Namaskar (Greetings)!
Many in the cloud industry have heard of Ceph Storage as the storage backend for OpenStack environments. It is also gaining popularity as an object store (thanks to its underlying RADOS architecture). Many reference whitepapers are available that explain its architecture and the performance it can deliver in various configurations.
Today's focus is on a different aspect: Ceph Storage used as standalone block storage for Telco Network Analytics. The following are a couple of use cases for reference.
1) RBD Block Storage for Linux-based App Servers
(standalone/bare-metal Linux servers or VMs)
Some Telco ISVs running Unified Performance Management to monitor and analyze telco network performance need to store and analyze large volumes of data. Hence they need distributed storage as a backend, connected to multiple app servers, to meet their high throughput/IOPS requirements.
Figure 1 below gives an architectural view of providing RBD devices as disks to ordinary Linux servers/virtual machines.
*Figure 1*
PR01/PR02 through PR08 in Figure 1 use the krbd kernel module to access RADOS block devices (RBD). NFS can come either from SAN/NAS or from Ceph itself, as in the second use case below.
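As a rough sketch, presenting an RBD image as a disk to one of these app servers typically looks like the commands below. This assumes a running Ceph cluster and a client keyring on the app server; the pool and image names are hypothetical, not taken from the deployment above.

```shell
# On the Ceph admin node: create a pool and a 1 TiB image for an app server
# (pool/image names are examples only)
ceph osd pool create analytics 128
rbd create analytics/pr01-data --size 1T

# On the app server: map the image via the krbd kernel module
rbd map analytics/pr01-data        # prints the device path, e.g. /dev/rbd0

# Use it like any local disk
mkfs.xfs /dev/rbd0
mount /dev/rbd0 /data
```

From here the application sees a normal block device, while Ceph handles replication and rebalancing underneath.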
2) Shared Storage (NFS)
NFS servers run on physical or virtual machines, as dictated by the application and its performance requirements. An NFS server backed by Ceph RBD can serve Telco applications running on OpenStack VMs, or even in containers, that require a POSIX-compliant shared filesystem. Once CephFS reaches General Availability, it can be used instead for better scalability and performance (distributed throughput requirements). This applies to the situations above, and to any other case where an application needs a shared filesystem.
Figure 2 gives an architectural view of NFS over RBD.
*Figure 2*
Many applications consume Ceph RBD this way (using krbd on the app servers to connect to RBD images on the Ceph cluster). An NFS gateway is also an option for customers who use object storage and also need NFS functionality.
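As an illustration of the NFS-over-RBD pattern, the export on the gateway host is just an ordinary NFS export of a directory that sits on a mapped RBD device. The mount point and client subnet below are placeholders.

```shell
# /etc/exports on the NFS server; /data is a filesystem on a mapped RBD device
/data  10.0.0.0/24(rw,sync,no_root_squash)
```

After editing the exports file, `exportfs -ra` reloads it, and the OpenStack VMs or containers mount the share as regular NFS clients.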
Documenting some of the core advantages of Ceph Storage for reference:
- SCALABILITY
- Multi-petabyte support
- Hundreds of nodes
- CRUSH algorithm – placement/rebalancing
- No single point of failure
- EFFICIENCY
- Standard servers and disks
- Erasure coding - reduced footprint
- Thin provisioning
- Traditional and containerized deployment
- PERFORMANCE
- Client-side caching
- Server-side journaling
- BlueStore (tech preview)
- DATA SERVICES
- Snapshots, cloning, and copy-on-write
- Global clusters for S3/Swift storage
- Disaster recovery for block and object storage
- APIs & PROTOCOLS
- S3, Swift, S3A plug-in
- Cinder block storage
- NFS (using NFS Gateways)
- POSIX (tech preview)
- iSCSI (tech preview)
- SECURITY
- Pool-level authentication
- Active Directory, LDAP, Keystone v3
- At-rest encryption with keys held on separate hosts
- ADVANTAGES FOR OPENSTACK USERS
- Instantaneous booting of one or hundreds of VMs
- Instant backups via seamless data migration between Glance, Cinder, Nova
- Tiered I/O performance within single cluster
- Multi-site replication for disaster recovery or archiving
- CEPH CONTAINERIZATION
- Alternative vehicle for deploying Red Hat Ceph Storage
- Single container image of product available on Red Hat Container Registry
- Delivers same capabilities as in traditional package format
- Supports customers seeking to standardize orchestration and deployment of infrastructure software in containers with Kubernetes
- MULTISITE CAPABILITIES
- Global object storage clusters with single namespace
- Enables deployment of clusters across multiple geographic locations
- Clusters synchronize, allowing users to read from or write to the closest one
- Multi-site replication for block devices
- Replicates virtual block devices across regions for disaster recovery and archival
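A couple of the capabilities listed above (erasure coding, snapshots and copy-on-write clones) can be sketched as CLI one-liners. Profile, pool, and image names here are illustrative, and a running cluster is assumed.

```shell
# Erasure coding: 4 data + 2 coding chunks gives ~1.5x footprint instead of 3x
ceph osd erasure-code-profile set ec42 k=4 m=2
ceph osd pool create objstore 64 64 erasure ec42

# Snapshot an RBD image, then create a copy-on-write clone from it
rbd snap create analytics/pr01-data@backup1
rbd snap protect analytics/pr01-data@backup1
rbd clone analytics/pr01-data@backup1 analytics/pr01-clone
```

The clone shares unmodified data with its parent snapshot, which is what makes the "instantaneous booting" of many VMs from a single Glance image practical.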
With deeper integration between Permabit and Ceph, compression and deduplication can become a great asset, delivering value for money through a better TCO for unstructured data storage requirements.
Next time, the focus will be on some specific use cases around OpenStack in the Telco segment.
Dhanyavaad (Thank you)!

