
Open@Blog

Discussion on the state of cloud computing and open source software that helps build, manage, and deliver everything-as-a-service.

ke4qqq

David Nalley is currently employed by Citrix as the Community Manager for the CloudStack project. In addition, he is a long-time contributor to the Fedora Project, where among other things he is currently serving on the Fedora Project Board. He has also contributed in various forms to Cobbler, Zenoss, Opengroupware.org, OLPC Math4, and Sahana. He is a frequent speaker at Free Software conferences around the nation, and writes for a number of technical and open source media publications, including Linux Pro Magazine and OpenSource.com.

I am excited to get to visit Tokyo again and to speak at LinuxCon and CloudOpen, May 20-22.

I was somewhat surprised at the volume of OpenDaylight talks that will be given at the event, but given the relatively advanced state of networking in Asia, perhaps I shouldn't be. There are at least seven OpenDaylight talks alongside other SDN talks. Perhaps the OpenDaylight hype is even bigger in Asia than in North America.

There are lots of cloud talks, as one would expect. I am particularly glad to hear that the Japan CloudStack User Group is going to be meeting on the evening of the 20th, and I am excited that I get to attend.

I hope to see you soon in Tokyo.


I had a recent discussion with some folks wondering why there is now an option for 32- or 64-bit system VMs with CloudStack 4.3. I provided an answer and linked back to some mailing list discussions. I figured this might be of general interest, so I'm documenting it in the short term with a blog post.

For background, system VMs provide services like handling snapshots and image templates, providing network services such as load balancing, and proxying console access to virtual machines. They've historically been 32-bit. The reason is that the 32-bit arch is very efficient with memory usage, and since these VMs are horizontally scalable, it's easy to just spin up another.

But you can have either – which do you pick?

Depending on the workload, you might have a different answer. Some hypervisors work better with one arch than the other, and that might be a factor; but setting hypervisors aside, let's examine the reasons you'd want to use either.

32-bit: 32-bit operating systems are quite efficient in their use of memory compared to 64-bit ones (the same information typically occupies less space in memory). However, there are limits on how much memory they can address. (Yes, you could use PAE with a 32-bit kernel to get more addressable memory, but that carries considerable CPU overhead, which makes it inefficient given that all of this is virtualized.) 32-bit kernels also have a limit on how much memory the kernel itself can use, and that is really where the use case for 64-bit system VMs came from. One of the system VM functions is load balancing: CloudStack orchestrates HAProxy as the default virtual load balancer, and HAProxy in turn relies on conntrack. On a 32-bit kernel the conntrack module had a practical limit of roughly 2.5 million tracked connections, and that left precious little room for the kernel to do anything else. A heavily trafficked web property behind CloudStack's 32-bit virtual load balancer might run into that limitation.

64-bit: Not nearly as efficient with memory usage, but it can address far more of it. You'll tend to need more memory for the same level of functionality; if you need to push the envelope further than a 32-bit machine can go, though, you at least have the option to do so.
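To make the conntrack ceiling above a bit more concrete, here is a minimal sketch of how you might check how much headroom a virtual router has left. It is written as Ansible tasks purely to match the playbook examples in the RPM-building post further down this page; the sysctl keys are the standard netfilter ones, but the assumption that you can run tasks against the system VM directly is mine, made only for illustration.

  # Illustrative only: read current conntrack usage and the configured ceiling
  # on a (hypothetical) 32-bit virtual router to see how much headroom remains.
  - name: Read conntrack count and limit
    shell: sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max
    register: conntrack

  - name: Show conntrack usage versus the ceiling
    debug: var=conntrack.stdout_lines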

...

ApacheCon approaches

Posted in Open Source

It doesn't seem possible, but ApacheCon is less than a week away, and the CloudStack Collaboration Conference follows shortly after. I am excited about ApacheCon this year; the schedule is huge and contains a ton of interesting content. That may actually be a detriment: there is so much content that deciding which talks to see will be difficult.

Particularly interesting to me is the sheer amount of big data content. There are plenty of big data projects at the ASF, and those projects have managed to bring five days' worth of big data content to ApacheCon, coupled with three days' worth of Lucene/Solr content. Keeping in mind that the ASF is home to the majority of open source big data projects, ApacheCon becomes a must-attend event if you care about big data.

But ApacheCon is more than just big data: there are tracks for cloud and mobile development, as well as perennial favorites like Traffic Server and Tomcat; 28 tracks in total.

All of this content is great, and I look forward to learning a lot while I am at ApacheCon, but it leaves out the most valuable reason I am attending: the hallway track. Being able to converse with many members of the various Apache project communities is invaluable.

I hope to see you there.


Thoughts from FOSDEM

Posted in Open Source

I had the pleasure of attending FOSDEM, the Free and Open Source Software Developers' European Meeting, this year. There are always interesting folks in attendance, and getting to meet people you only know via IRC or a mailing list is great. I also always pick up lots of interesting information.

 

In short, if you can make FOSDEM, you should do so. That isn't really what I wanted to write about, though. FOSDEM always has a lot of related conferences; this year they included the CentOS Dojo, CfgMgmtCamp, and Infra.next. Each of those could easily fill several blog posts, but the most poignant moment of my trip this year happened at Infra.next.

 

I often cite John Vincent and his 'Monitoring sucks' tagline; not because monitoring sucks as a practice, but because the tools for monitoring are generally archaic and painful to use. As a recovering sysadmin, I understand the pain behind 'Monitoring Sucks' well, and thanks to a conversation with Kris Buytaert I realized that many of John's complaints about monitoring aren't monitoring-specific at all. That caused me to go back and reread John's original post from three years ago. These truths should be self-evident, but I think it's all too easy to forget them, and I see plenty of software that ignores them. Specifically, any systems management software should keep these items in mind and avoid them:

...

Building CloudStack RPMs with Ansible

Posted in Open Source
I've been hearing a lot about Ansible lately. I've seen folks like Paul Angus building tooling around installing CloudStack with Ansible, and Ansible intrigues me a bit. First, it's largely being shepherded by Michael DeHaan, who originally wrote Cobbler and eased a lot of pain for sysadmins needing to provision machines, so his work carries immediate credibility because of how good Cobbler was to use. Second, the whole decentralized config management angle is interesting; I like how minimalistic it is, and while I don't think that's necessarily a good fit for every environment, it is compelling for some. Finally, I see the blurring of lines between config management and workflow/job automation, and that makes Ansible pretty versatile in my mind.

I tend to learn best when I have a concrete project to apply new tools to, so when the hosted puppet master service I had been using went permanently offline, I decided to recreate some of the tooling around building CloudStack RPMs in Ansible. I started out with a very basic playbook, which worked reasonably well.

Playbooks, in Ansible parlance, are where you define system configuration as well as a known order of operations for workflow automation. I started by just defining a build environment for building CloudStack RPMs.

Then I had the chance to listen to Michael DeHaan show off Ansible at a DevOps DC meetup in November, and one of his code snippets used a built-in variable that was unknown to me and cut my playbook length in half (and, as a result, the number of SSH calls). Specifically, I went from a playbook like:

  - name: Install wget
    yum: pkg=wget state=latest
  - name: Install rpm-build
    yum: pkg=rpm-build state=latest

to:



  - name: Install deps and niceties
    yum: pkg={{ item }} state=latest
    with_items:
     - wget
     - rpm-build
     ...

This made things much more efficient, and I pushed further, from merely configuring the environment to also using Ansible as a bit of a workflow automation tool, so I added the entire build portion as well.
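As a rough illustration of what that workflow portion can look like, here is a minimal sketch in the same style as the snippets above; the repository URL, destination path, and packaging script location are assumptions for the example, not copied from the actual playbook.

  # Hypothetical workflow tasks: check out the source, then run the packaging
  # script. The repo URL, paths, and script name are illustrative only.
  - name: Check out CloudStack source
    git: repo=https://github.com/apache/cloudstack.git dest=/home/build/cloudstack

  - name: Build the RPMs (this is the long-coffee step)
    shell: ./package.sh chdir=/home/build/cloudstack/packaging/centos63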

Things weren't 100% happy, though: a number of the Maven dependency downloads failed, which caused compilation to fail. I really need to set up a Nexus mirror for CloudStack dependencies, both to speed things up and to ensure they are reliably available for building. But that failure isn't Ansible's fault, so I can't really fault it here.

The end result is being able to spin up a fresh machine, point Ansible at it, apply the playbook, and come back after a long cup of coffee (admittedly, building the RPMs takes a long time) to find finished RPMs. If you need to build CloudStack RPMs, or just want to see a very basic Ansible playbook, you can look at my admittedly early attempt here:
https://github.com/ke4qqq/ansible_cloudstack_rpmbuild


Open@Citrix

Citrix supports the open source community via developer support and evangelism. We have a number of developers and evangelists who participate actively in the open source communities of Apache CloudStack, OpenDaylight, the Xen Project, and XenServer. We also conduct educational activities via the Build A Cloud events held all over the world.
