OpenStack Block Storage and Cinder
Steven Walchek, SolidFire
What we’re going to cover in ~20 minutes…
● What do you mean when you say storage?
● What is Cinder?
● Configuring Cinder + a word from our sponsor
Quick Poll:
● How many of you contribute to OpenStack?
● How many of you are end-users of OpenStack?
● How many of you are OpenStack operators?
● How many of you work for vendor organizations that contribute to OpenStack?
● How many are “all of the above”?
What Do You Mean When You Say Storage?
Storage types:
● Ephemeral
  ● Non-persistent
  ● Lifecycle coincides with a Nova instance
● Object
  ● Manages data as, well, an object
  ● Think unstructured data (MP4)
  ● Typically referred to as “cheap and deep”
  ● Usually runs on spinning drives
  ● In OpenStack: Swift
● File Systems
  ● We all love NFS and CIFS…right!?
● Block
  ● Foundation for the other types
  ● Think raw disk
  ● Typically higher performance
  ● In OpenStack: Cinder
Most common question: what’s the difference between Object and Block?

Cinder / Block Storage
● What does it do? Storage for running VM disk volumes on a host; enables an Amazon EBS-like service
● Use cases: ideal for performance-sensitive apps; production applications; traditional IT systems
● Workloads: database-driven apps; messaging / collaboration; dev / test systems; high-change content; smaller, random R/W; higher / “bursty” IO

Swift / Object Storage
● What does it do? Fully distributed, API-accessible; enables a Dropbox-like service
● Use cases: ideal for cost-effective, scale-out storage; well suited for backup, archiving, data retention; VM templates; ISO images; disk volume snapshots; image / video repository
● Workloads: typically more static content; larger, sequential R/W; lower IOPS
Let’s talk Cinder
Cinder Mission Statement
To implement services and libraries to provide on-demand, self-service access to Block Storage resources. Provide Software-Defined Block Storage via abstraction and automation on top of various traditional backend block storage devices.
Huh? In plain terms: an API that lets you dynamically create, attach, and detach disks for your Nova instances.
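From the command line, that lifecycle might look like the following sketch, assuming Juno-era python-cinderclient and python-novaclient; the volume name, sizes, and IDs are illustrative placeholders, not values from the slides:

```shell
# Create a 10 GB volume named my-volume (name and size are placeholders)
cinder create --display-name my-volume 10

# Attach it to a running Nova instance as /dev/vdb
nova volume-attach <instance-id> <volume-id> /dev/vdb

# Later: detach it and delete it
nova volume-detach <instance-id> <volume-id>
cinder delete <volume-id>
```

The same operations are available through the REST API; the CLI clients are thin wrappers over it.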
How it works
● Plugin architecture: use your own vendor’s backend(s), or use the default
● Consistent API regardless of backend
● Expose differentiating features via custom volume-types and extra-specs
Reference Implementation Included
● Includes code to provide a base implementation using LVM
● Just add disks
● Great for POC and getting started; sometimes good enough
● Might be lacking for your performance, H/A, and scaling needs (it all depends)
● Can scale by adding nodes
● The cinder-volume node utilizes its local disks (allocated by creating an LVM VG)
● Cinder volumes are LVM Logical Volumes, with an iSCSI target created for each
➔ Typical max size recommendation per VG / cinder-volume backend: ~5 TB
➔ No redundancy (yet)
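A minimal cinder.conf stanza for the LVM reference backend might look like this; the section name, volume group name, and backend name are assumptions, and the fully-qualified driver path matches the Juno-era source tree:

```ini
[lvm]
# LVM + iSCSI reference driver (Juno-era class path; varies by release)
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
# Volume group created ahead of time on the cinder-volume node's local disks
volume_group=cinder-volumes
volume_backend_name=LVM_iSCSI
```

The `cinder-volumes` VG is created beforehand with pvcreate/vgcreate on the node’s spare disks; cinder-volume then carves logical volumes out of it on demand.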
Sometimes LVM Isn’t Enough
Plugin architecture gives you choices (maybe too many), and you can mix them together:
ceph-rbd, coraid, datera*, emc-vmax, emc-xtremio, eqlx, fujitsu_eternus*, fusionio*, glusterfs, hds, hitachi-hbsd*, hp-3par, hp-lefthand, huawei*, ibm-gpfs, ibm-xiv, lvm, netapp, nexenta, nfs, nimble*, prophetstor*, pure*, scality, sheepdog, smbfs, solidfire, vmware-vmdk, windows-hyperv, zadara, zfssa*
(* new as of the Juno release)
Making choices can be the HARDEST part! Ask yourself:
➔ Does it scale?
➔ Is it tested? Will it really work in OpenStack?
➔ Support?
➔ What about performance and noisy neighbors?
➔ Third-party CI testing?
➔ Active in the OpenStack community?
➔ DIY, services, both/neither (SolidFire AI, Fuel, Juju, Nebula…)
A brief word from our sponsor! SolidFire + Cinder and Extra Specs
SolidFire and Cinder
§ SolidFire led the creation of Cinder (break-out from Nova)
§ Full SolidFire driver integration with every new OpenStack release
§ Set and maintain true QoS levels on a per-volume basis
§ Web-based API exposing all cluster functionality
§ SolidFire integration with OpenStack Cinder can be configured in less than a minute
§ Seamless scaling after initial configuration
§ Full multi-tenant isolation
Configuring the SolidFire Cinder Driver
Edit the cinder.conf file:

volume_driver=cinder.volume.solidfire.SolidFire
san_ip=172.17.1.182
san_login=openstack-admin
san_password=superduperpassword

OpenStack supports multiple back ends. Configured in under a minute.
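Multiple back ends are enabled by listing them in `enabled_backends` and giving each its own section of cinder.conf. A sketch combining the SolidFire settings from the slide with the LVM reference driver; the section names and the fully-qualified driver class paths are assumptions based on the Juno-era tree, not taken from the slides:

```ini
[DEFAULT]
# Each name here refers to a config section below
enabled_backends=lvm,solidfire

[solidfire]
volume_driver=cinder.volume.drivers.solidfire.SolidFireDriver
san_ip=172.17.1.182
san_login=openstack-admin
san_password=superduperpassword
volume_backend_name=SolidFire

[lvm]
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_group=cinder-volumes
volume_backend_name=LVM_iSCSI
```

The cinder-scheduler then routes each create request to a backend by matching the `volume_backend_name` extra-spec on the requested volume type.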
Eliminating Noisy Neighbors with QoS
[Chart: per-volume performance across four tenants; one noisy neighbor decreases the performance of the others]

The Noisy Neighbor Effect
§ An individual tenant impacts other applications
§ Unsuitable for performance-sensitive apps

SolidFire QoS in Practice
§ Create fine-grained tiers of performance
§ Application performance is isolated
§ Performance SLAs enforced
Creating types and extra-specs

griff@stack-1: cinder type-create super
+--------------------------------------+-------+
| ID                                   | Name  |
+--------------------------------------+-------+
| c506230f-eb08-4d4e-82e2-7a88eb779bda | super |
+--------------------------------------+-------+
griff@stack-1: cinder type-create super-dooper
+--------------------------------------+--------------+
| ID                                   | Name         |
+--------------------------------------+--------------+
| 918cf343-1f3d-4508-bb69-cd0e668ae297 | super-dooper |
+--------------------------------------+--------------+
griff@stack-1: cinder type-key super set volume_backend_name=LVM_iSCSI
griff@stack-1: cinder type-key super-dooper set volume_backend_name=SolidFire qos:minIOPS=400 qos:maxIOPS=1000 qos:burstIOPS=2000
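The scoped `qos:` keys above are read directly by the SolidFire driver. Cinder also supports standalone qos-specs that can be associated with a volume type; a hedged sketch, where the spec name and the IDs are illustrative placeholders and the key names follow the SolidFire convention (other backends use their own keys):

```shell
# Create a named QoS spec (name and values are placeholders)
cinder qos-create fast-tier minIOPS=400 maxIOPS=1000 burstIOPS=2000

# Associate the spec with an existing volume type
cinder qos-associate <qos-spec-id> <volume-type-id>
```

Either approach ends up attaching the same per-volume limits; qos-specs just keep the performance policy separate from the type definition so it can be reused across types.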
End user’s perspective

griff@stack-1: cinder type-list
+--------------------------------------+--------------+
| ID                                   | Name         |
+--------------------------------------+--------------+
| 918cf343-1f3d-4508-bb69-cd0e668ae297 | super-dooper |
| c506230f-eb08-4d4e-82e2-7a88eb779bda | super        |
+--------------------------------------+--------------+
griff@stack-1: cinder create --volume-type super-dooper ……