
Open@Blog

Discussion on the state of cloud computing and open source software that helps build, manage, and deliver everything-as-a-service.

Blog posts tagged in automation

Coming back from the CloudStack conference, the feeling that this is not about building clouds got stronger. This is really about what to do with them, how they bring you agility and faster time to market, and how they let you focus on innovation in your core business. A large component of this is culture and a change in how we do IT. The DevOps movement is the embodiment of this change. Over in Amsterdam I was stoked to meet folks that I had seen at other locations throughout Europe in the last 18 months: folks from PaddyPower, SchubergPhilis, and Inuits, who all embrace DevOps. I also met new folks, including Hugo Correia from Klarna (CloudStack users) who came by to talk about the vagrant-cloudstack plugin. His talk and a demo by Roland Kuipers from Schuberg were enough to kick my butt and get me to finally check out Vagrant. I sprinkled a bit of Veewee and of course some CloudStack on top of it all. Have fun reading.

Automation is key to a reproducible, failure-tolerant infrastructure. Cloud administrators should aim to automate all steps of building their infrastructure and be able to re-provision everything with a single click. This is possible through a combination of configuration management, monitoring, and provisioning tools. To get started creating appliances that will be automatically configured and provisioned, two tools stand out in the arsenal: Veewee and Vagrant.

Veewee: Veewee is a tool to easily create appliances for different hypervisors. It fetches the .iso of the distribution you want and builds the machine with a kickstart file. It integrates with providers like VirtualBox so that you can build these appliances on your local machine. It supports the most commonly used OS templates. Coupled with VirtualBox, it allows admins and devs to create reproducible base appliances. Getting started with Veewee is a 10-minute exercise. The README is great, and there is also a very nice post that guides you through your first box build.

Most folks will have no issue cloning Veewee from GitHub and building it; you will need Ruby 1.9.2 or above, which you can get via `rvm` or your favorite Ruby version manager.

git clone https://github.com/jedi4ever/veewee
cd veewee
gem install bundler
bundle install

Setting up an alias is handy at this point: `alias veewee="bundle exec veewee"`. You will need a virtual machine provider (e.g., VirtualBox, VMware Fusion, Parallels, KVM). I personally use VirtualBox, but pick one and install it if you don't have it already. You will then be able to start using `veewee` on your local machine and check the sub-commands available (for VirtualBox).
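As a rough sketch of the typical workflow (the box name and template here are illustrative, and the exact sub-command list depends on your Veewee version):

veewee vbox templates                              # list the available OS templates
veewee vbox define 'mybox' 'ubuntu-12.04.2-server-amd64'
veewee vbox build 'mybox'
veewee vbox export 'mybox'                         # produces a .box file usable by Vagrant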

...

Data Driven Tests in Marvin

Posted in CloudStack Tips

Data driven tests (DDT) are useful for generating more tests with less code. If you have a lot of assertions to be repeated with related data sets or parameters, data driven tests are the way to go, saving a lot of redundant code. The data sets provided as input are used iteratively in the test scripts, in effect running different test cases. I've recently tried to implement this with Marvin, CloudStack's integration test framework, which is written in Python.

Before beginning to code test suites in a data driven fashion, it's best to understand the entities which drive the tests. For example, we may want to run the same set of VM life cycle tests for different network offerings, such as a simple isolated offering, an offering with the persistent network feature, and an offering with NetScaler as a service provider for LB. These offerings then become the data sets which are supplied as input to the test script, and the assertions for the VM life cycle are repeated for each of these network offerings.

DDT in Python

Python provides a powerful and efficient way to achieve this with the ddt library. It comes with a class decorator @ddt and method decorators @data and @file_data. Use the class decorator with the TestCase class and specify the data sets with which the tests need to be run with the @data or @file_data method decorator. While @data takes in the arguments directly to be passed to the test, @file_data will load the data set from a JSON file.
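Here is a minimal sketch of the pattern (the offering names and the assertion are placeholders, not actual Marvin code):

import unittest
from ddt import ddt, data

@ddt
class TestVMLifeCycle(unittest.TestCase):

    @data("isolated", "persistent", "netscaler_lb")
    def test_deploy_vm(self, offering):
        # In a real Marvin suite, 'offering' would select the network
        # offering to create before running the VM life cycle assertions.
        self.assertIn(offering, ("isolated", "persistent", "netscaler_lb"))

if __name__ == "__main__":
    unittest.main()

Each value passed to @data produces a separately named test case, so a failure report tells you exactly which offering broke.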

CloudStack Test Case Example

...

Puppet and CloudStack

Posted in CloudStack Tips


Efficient use of CloudStack really demands configuration management (among other things). I've been a Puppet user for many years, predating my involvement with CloudStack, and thus I have a slight bias for Puppet.

Thus it thrilled me to no end when I saw folks like Jason Hancock doing work around automating the configuration of CloudStack instances. Jason really knows Puppet, and even operates a new hosted Puppetmaster startup. Jason showed this off a few times, at both PuppetCamp LA 2012 and the CloudStack Collaboration Conference in 2012.

It's awesome work, and you should take the time to watch both of his videos and check out his blog.

The gist of what he was presenting is configuring instance metadata within CloudStack at deployment, having the instance read that metadata and set it as a fact, and then using case statements to apply different roles to the instances.
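A minimal sketch of that pattern on the Puppet side might look like this (the $role fact and the class names are hypothetical; the fact is assumed to have been populated by reading the instance's CloudStack metadata):

# site.pp - apply a role based on a custom fact derived from instance metadata
case $role {
  'webserver': { include webserver }
  'dbserver':  { include dbserver }
  default:     { include base }
}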

And then there was a knife.....plugin


Next I learned that the good folks at Edmunds.com had written a CloudStack plugin for knife, the command-line tool for Chef, another configuration management tool. That was exciting in and of itself, especially as a CloudStack person. knife is commonly used to provision machines, and the folks at Edmunds.com had baked in some extra awesomeness: the ability to define an application stack based on a JSON definition of what the stack looked like.

So one could define a Hadoop Cluster like this in JSON, complete with network and firewall configuration:

"name": "hadoop_cluster_a",
"description": "A small hadoop cluster with hbase",
"version": "1.0",
"environment": "production",
"servers": [
  {
    "name": "zookeeper-a, zookeeper-b, zookeeper-c",
    "description": "Zookeeper nodes",
    "template": "rhel-5.6-base",
    "service": "small",
    "port_rules": "2181",
    "run_list": "role[cluster_a], role[zookeeper_server]",
    "actions": [
      { "knife_ssh": ["role:zookeeper_server", "sudo chef-client"] }
    ]
  },
  {
    "name": "hadoop-master",
    "description": "Hadoop master node",
    "template": "rhel-5.6-base",
    "service": "large",
    "networks": "app-net, storage-net",
    "port_rules": "50070, 50030, 60010",
    "run_list": "role[cluster_a], role[hadoop_master], role[hbase_master]"
  },
  {
    "name": "hadoop-worker-a hadoop-worker-b hadoop-worker-c",
    "description": "Hadoop worker nodes",
    "template": "rhel-5.6-base",
    "service": "medium",
    "port_rules": "50075, 50060, 60030",
    "run_list": "role[cluster_a], role[hadoop_worker], role[hbase_regionserver]",
    "actions": [
      { "knife_ssh": ["role:hadoop_master", "sudo chef-client"] },
      { "http_request": "http://${hadoop-master}:50070/index.jsp" }
    ]
  }
  ]
}


And then deploying a Hadoop Cluster is as simple as:

knife cs stack create hadoop_cluster_a


As a CloudStack guy I thought this was awesome: complex applications were suddenly deployable with ease. This is exactly the kind of automation that CloudStack is supposed to enable.

JEALOUSY


As a Puppet aficionado, though, it made me a bit sad; nothing like this existed for folks using Puppet, and I was jealous.

...

Portability: No snowflakes for you!

Posted in Cloud Strategy

An author whose opinion I respect wrote recently about how cloud portability 'is a bit of science fiction' at this point. He is technically right: the idea of moving running machines from a CloudStack cloud to a vCD cloud, another CloudStack cloud, or AWS is still at best 'a bit of black magic'[1], and in other cases simply isn't feasible at all. And doing it at scale? That's just crazy talk. That said, I'd argue (yes, this is largely just my opinion) that in most cases, if you are trying for portability at the Infrastructure-as-a-Service layer, you are doing it wrong.

You see, this 'problem' of portability isn't a new one, and really isn't even cloud related; I think it's far more fundamental than that. I remember, in my days as a Pimply-Faced Youth, keeping exact copies of machines as cold spares, the thought being that we might need to migrate the services living on one piece of hardware. Particularly for a certain non-Unix operating system, restoring a copy of a disk, or even the disks themselves, to different hardware often meant things just wouldn't work (sound familiar?). Sometimes you got lucky and it would, and sometimes you could coerce things into a working state by installing new drivers, updating the initrd, or even adding kernel modules. So when buying hardware, we'd buy two of everything (or at least N+1) for those services that were truly business critical. Yes, it was incredibly expensive and wasteful, but it mitigated risk.

A bit later in my ops lifetime, virtualization became mainstream. Finally we had this pseudo-hardware that would look the same regardless of the underlying physical hardware; hypervisors like VMware and Xen did wonderful things for us. Admittedly we couldn't easily migrate between hypervisor types, but life was still much better. Not only were we able to stop buying so many 'spares', but we were a bit less concerned about machine lifecycle and had much better utilization to boot.

Folks came a bit further along with tools to convert these virtual machines from VMware to KVM or Xen. As a matter of fact, on a previous blog I used to maintain, one of my most-viewed articles detailed the use of qemu-img and other tools to convert VMDK files to raw disk images. I did more of this than I care to admit, and honestly, when I did it I was striving for portability, even if in a kludgy, barely automated way. Portability was the wrong thing to seek, and I should have already known it at that point in my career.
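For the curious, that kind of conversion boils down to a single qemu-img invocation (the file names here are illustrative):

qemu-img convert -f vmdk -O raw myserver.vmdk myserver.img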

Portability is the equivalent of trying to preserve a snowflake. It's difficult, usually messy, and often ineffective. What I didn't realize is that I really shouldn't have any snowflakes; what I was really after was a way to consistently reproduce, not a system to save the only copy the world would ever see. Of course, configuration management already existed; why I didn't see that the solution to my problem was already out there, I don't know. Perhaps I was a latent server-hugger, or feared obsolescence by a few lines of Ruby. But configuration management (coupled with automated provisioning) meant I didn't need portability; I didn't even want portability.

...

Puppet Labs: The Leading Open Source Data Center Automation Solution

One of the things we have found about running clouds at scale is the need for automation. To that end we often conduct training events called Build a Cloud Days with our open source partners. We like Puppet Labs' Puppet for automating the configuration of CloudStack, as well as Zenoss for automating the discovery of cloud infrastructure to bring it under monitoring.

One of the best ways to get up to speed on using Puppet is to attend a Puppet Camp. The next one will be hosted in Atlanta on February 3rd.

"Puppet Camp is a community oriented gathering of Puppet users and developers. You’ll have the opportunity to network with a diverse group of Puppet users, benefit from insightful lectures delivered by prominent community members, and be able to share experiences and discuss potential implementations of Puppet during our attendee generated breakout sessions."

If you can't attend this session, there are many other Puppet Camps being held worldwide, listed on the Puppet Labs website.


Open@Citrix

Citrix supports the open source community via developer support and evangelism. We have a number of developers and evangelists who participate actively in open source communities such as Apache CloudStack, OpenDaylight, the Xen Project, and XenServer. We also conduct educational activities via the Build A Cloud events held all over the world.
