Do-it-yourself CloudWatch-style alarms using Riemann

AWS CloudWatch is a web service that lets you collect, view, and analyze metrics about your AWS resources and applications. CloudWatch alarms send notifications or trigger auto scaling actions based on rules defined by the user. For example, you can get an email from a CloudWatch alarm if the average latency of your web application stays over 2 ms for 3 consecutive 5-minute periods. The variables that CloudWatch lets you set are the metric (average), the threshold (2 ms), and the duration (3 periods).

I became interested in replicating this capability after a recent discussion/proposal about auto scaling on the CloudStack mailing list. It was clearly a Complex Event Processing (CEP) problem, and there appeared to be a number of open source tools out there to tackle it. Among others (Storm, Esper), Riemann stood out as being built for purpose (monitoring distributed systems) and offered the quickest way to try something out.

As the blurb says, “Riemann aggregates events from your servers and applications with a powerful stream processing language”. You use any of a number of client libraries to send events to a Riemann server, and you write rules in the stream processing language (actually Clojure) that the Riemann server executes on your event stream. The result of processing a rule can be another event or an email notification (or any number of other actions, such as sending to PagerDuty, Logstash, Graphite, etc.).

Installing Riemann is a breeze; the tricky part is writing the rules. If you are a Clojure noob like me, then it helps to browse through a Clojure guide first to get a feel for the syntax. You append your rules to the etc/riemann.config file. Let’s say we want to send an email whenever the average web application latency exceeds 6 ms over 3 periods of 3 seconds. Here’s one solution:
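
The sketch below assumes a mailer has been set up via Riemann's mailer, that the web application's events are tagged "http", and that every 3-second window contains at least one event; an atom keeps count of consecutive windows whose mean is over the threshold:

(def email (mailer {:from "riemann@onemorecoolapp.net"}))

(let [over (atom 0)]                        ; consecutive windows over the threshold
  (streams
    (tagged "http"
      ;; chop the event stream into fixed 3-second windows
      (fixed-time-window 3
        ;; summarize each window by the mean of its metrics
        (combine folds/mean
          (where (> metric 6.0)
            (fn [event]
              (when (>= (swap! over inc) 3)
                (reset! over 0)
                ((with {:state "threshold crossed"
                        :description "mean latency crossed 6.0 over 3 windows of 3 seconds"}
                   (email "itguy@onemorecoolapp.net"))
                 event)))
            (else
              (fn [_] (reset! over 0)))))))))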

The keywords fixed-time-window, combine, where, email, etc. are Clojure functions provided out of the box by Riemann. We can write our own function tc to make the above general purpose:
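
A sketch of tc (the window length, number of windows, and threshold become parameters; the mean and the > comparison stay hardcoded for now):

;; window-secs seconds per window, num-windows consecutive windows,
;; threshold to compare each window's mean against
(defn tc
  [window-secs num-windows threshold & children]
  (let [over (atom 0)]
    (fixed-time-window window-secs
      (combine folds/mean
        (fn [event]
          (if (and (:metric event) (> (:metric event) threshold))
            (swap! over inc)
            (reset! over 0))
          (when (>= @over num-windows)
            (reset! over 0)
            (let [event' (assoc event
                                :state "threshold crossed"
                                :description (str "service " (:service event)
                                                  " crossed the value of " threshold
                                                  " over " num-windows " windows of "
                                                  window-secs " seconds"))]
              (doseq [child children]
                (child event')))))))))

The alarm above then reduces to (tc 3 3 6.0 (email "itguy@onemorecoolapp.net")).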

We can make the function even more general purpose by letting the user pass in the summarizing function and the comparison function, as in:

;; left as an exercise
(tc 3 3 6.0 folds/std-dev <
    (email "itguy@onemorecoolapp.net"))

Read that as: if the standard deviation of the metric falls below 6.0 for 3 consecutive windows of 3 seconds, send an email.

To test this functionality, here's the client-side script I used:

[~/riemann-0.2.4 ]$ irb -r riemann/client
1.9.3-p374 :001 > r = Riemann::Client.new
1.9.3-p374 :002 > t = [5.0, 5.0, 5.0, 4.0, 6.0, 5.0, 8.0, 9.0, 7.0, 7.0, 7.0, 7.0, 8.0, 6.0, 7.0, 5.0, 7.0, 6.0, 9.0, 3.0, 6.0]
1.9.3-p374 :003 > for i in (0..20)
1.9.3-p374 :004?> r << {host: "www1", service: "http req", metric: t[i], state: "ok", description: "request", tags:["http"]}
1.9.3-p374 :005?> sleep 1
1.9.3-p374 :006?> end

Running this generated the following event:

{:service "http req", :state "threshold crossed", :description "service  crossed the value of 6.0 over 3 windows of 3 seconds", :metric 7.0, :tags ["http"], :time 1388992717, :ttl nil}

While this is more awesome than CloudWatch alarms (you can define your own summary function, and durations can be specified at second granularity rather than in minutes), it lacks the precise semantics of CloudWatch alarms:

  • The event doesn’t contain the actual measured summary in the periods the threshold was crossed.
  • There needs to be a state for the alarm (in CloudWatch it is INSUFFICIENT_DATA, ALARM, OK). 

This is indeed fairly easy to do in Riemann; hopefully I can get to work on this and update this space. 

This does not constitute a CloudWatch alarm service though: there is no web services API to insert metrics or to CRUD alarm definitions, and there is no multi-tenancy. Perhaps one could use a Compojure-based API server and embed Riemann, but how to do that is not terribly clear to me at this point. Multi-tenancy, sharding, load balancing, high availability, etc. are topics for a different post.

Completing the auto scaling solution for CloudStack would also require Riemann to send notifications to the CloudStack management server about threshold crossings. Using CloStack (a Clojure client for the CloudStack API), this shouldn't be too hard. More about auto scaling in a separate post.

Is AWS S3 the CDO of the Cloud?

The answer: not really, but the question needs examination.

One of the causes of the financial crisis of 2008 was the flawed ratings of complex financial instruments by supposed experts (ratings bodies such as S&P). Instruments such as CDOs commingled mortgages of varying risk levels, yet managed to get the best (AAA) ratings. The math (simplified) was that the likelihood of all the component mortgages failing at the same time was very low, and hence the CDO (more accurately, the AAA-rated portion of the CDO) was itself safe. For example, if the probability of a single mortgage defaulting is 5%, then the probability of 5 mortgages defaulting at the same time is about 1 in 3 million. The painful after-effects of the bad assumptions underlying the math are still being felt globally, 5 years later.

If there is an AAA equivalent in the Cloud, it is AWS S3, which promises “11 9s of durability” (99.999999999%), or:

“if you store 10,000 objects with Amazon S3, you can on average expect to incur a loss of a single object once every 10,000,000 years”.

However, AWS does not give us the mathematical reasoning behind it. We have some clues:

“Amazon S3 redundantly stores your objects on multiple devices across multiple facilities in an Amazon S3 Region. The service is designed to sustain concurrent device failures by quickly detecting and repairing any lost redundancy”.

Let’s assume S3 stores 3 copies of each object in 3 datacenters that do not share the same failure domains. For example, the datacenters can each have different sources of power and be located in different earthquake fault zones and flood zones. So, by design, the possibility of 3 simultaneous failures is presumably low. Intuitively, if the probability of losing one copy is 10^-4, then the probability of losing all 3 is (10^-4)^3, or 10^-12. This is an oversimplification; there are other factors in the calculation, such as the time to recover a failed copy. This graph (courtesy of Mozy) shows that the probability of losing an object is far lower with 3 copies (blue) than with 2 copies (yellow).


The fatal flaw with the CDO ratings was that the failures of the component mortgages did chillingly correlate when the housing bubble burst: something the ratings agencies had not considered in their models. Perhaps the engineers and modelers at Amazon are far better at prediction science. But remember that the Fukushima nuclear disaster was also the result of a failure to account for an earthquake larger than magnitude 8.6. What we do know is that, at least in the US Standard region, AWS stores the object bi-coastally, so the odds of a natural disaster simultaneously wiping out everything must indeed be quite low. There are still man-made disasters (bugs, malicious attacks), of course. But given the uncertainty of such events (known unknowns), it is likely that the 10^-11 figure ignores such eventualities. Other AWS regions (EU, Japan, etc.) do not store objects with such wide geographic separation, yet there are no caveats on the 10^-11 figure for those regions, so I'm guessing that the geographic separation in the US Standard region does not figure in the 10^-11 calculation.

Interestingly, none of S3’s competitors make similar claims: I can't find Google Storage or Azure Storage quoting any durability number at all. This riposte from the Google Storage team says:

“We don’t believe that quoting a number without hard data to back it up is meaningful to our customers … we can’t share the kind of architectural information necessary to back up a durability number”.

The Windows Azure storage SLA quotes an “availability” number of 99.9%. This availability number is the same as that of Amazon S3.

Waitaminnit. What happened to the 11 9’s we saw above?

The difference between availability and durability is that while the object (at least 1 copy) might exist in AWS S3, it may not be available via the S3 APIs. So there’s a difference of 8 nines between the availability and durability of an S3 object.

Given the actual track record of AWS S3, it is perhaps time to revise the durability estimates of this amazing service. If I were an enterprise or government organization considering moving my applications to the AWS cloud, I would certainly want to examine the math and modeling behind the figures. During congressional testimony, the ratings agencies defended their work by saying that it was impossible to anticipate the housing downturn; but clearly lots of people had anticipated it and made a killing by shorting those gold-plated CDOs. The suckers who trusted the ratings agencies blindly failed to do their own due diligence.

I do believe that storing your data in AWS S3 is less risky than storing it in your own data center, and probably cheaper than storing it across your own multiple data centers. But it behooves one to keep a backup, preferably with another provider such as Google Storage. As chief cloud booster Adrian Cockcroft of Netflix says:

The SDN behemoth hiding in plain sight

Hint: it is Amazon Web Services (AWS). Let’s see it in action:

Create a VPC with 2 tiers: one public (10.0.0.0/24) and one private (10.0.1.0/24). These are connected via a router. Spin up 2 instances, one in each tier (10.0.0.33 and 10.0.1.168).

[ec2-user@ip-10-0-0-33 ~]$ ping 10.0.1.168 -c 2
    PING 10.0.1.168 (10.0.1.168) 56(84) bytes of data.
    64 bytes from 10.0.1.168: icmp_seq=1 ttl=64 time=1.35 ms
    64 bytes from 10.0.1.168: icmp_seq=2 ttl=64 time=0.412 ms

The sharp-eyed might have noticed an oddity: the hop count (TTL) does not decrement despite the presence of a routing hop between the 2 subnets. So clearly this isn't a commercial router or any standard networking stack; AWS calls it an 'implied router'. What is likely happening is that the VPC is realized by creating overlays. When the Ethernet frame (the ping packet) exits 10.0.0.33, the virtual switch on the hypervisor sends the frame directly to the hypervisor running 10.0.1.168 inside the overlay. The vswitches do not bother to decrement the TTL, since that will cause an expensive recomputation of the checksum in the IP header. Note that AWS introduced this feature in 2009, well before Open vSwitch even had its first release.

One could also argue that security groups and elastic IPs at the scale of AWS's datacenters also bear the hallmarks of Software Defined Networking: clearly it required AWS to innovate beyond standard vendor gear to provide such to-date-unmatched L3 services. And these date back to the early days of AWS (2007 and 2008).

It doesn’t stop there. Elastic Load Balancing (ELB) from AWS orchestrates virtual load balancers across availability zones — providing L7 software defined networking. And that’s from 2009.

Last month’s ONS 2013 conference saw Google and Microsoft (among others) presenting facts and figures about the use of Software Defined Networking in their data centers. Given the far-and-away leadership of AWS in the cloud computing infrastructure space, it is interesting (to me) that AWS is seldom mentioned in the same breath as the pixie dust du jour “SDN”.

In CloudStack's native overlay controller, implied routers have been on the wish list for some time. CloudStack also has a scalable implementation of security groups (although scaling to hundreds of thousands of hypervisors might take another round of engineering), and it uses virtual appliances to provide L4-L7 services such as site-to-site IPsec VPN and load balancing. While the virtual networking feature set in CloudStack is close to that of AWS, the AWS implementation is likely an order of magnitude bigger in scale.

Stackmate: execute CloudFormation templates on CloudStack

AWS CloudFormation provides a simple-yet-powerful way to create 'stacks' of cloud resources with a single call. The stack is described in a parameterized template file; creation of the stack is a simple matter of providing stack parameters. The template includes descriptions of resources such as instances and security groups, and provides a language to describe the ordering dependencies between the resources.

CloudStack doesn't have any such tool (although it has been discussed). I was interested in exploring what it takes to provide stack creation services to a CloudStack deployment. As I read through various sample templates, it was clear that the structure of the template imposes an ordering of resources. For example, an 'Instance' resource might refer to a 'SecurityGroup' resource; this means that the security group has to be created successfully before the instance can be created. Parsing the LAMP_Single_Instance.template, for example, the following dependencies emerge:

WebServer depends on ["WebServerSecurityGroup", "WaitHandle"]
WaitHandle depends on []
WaitCondition depends on ["WaitHandle", "WebServer"]
WebServerSecurityGroup depends on []
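
One rough way to extract these dependencies (a sketch, not stackmate's actual parsing code) is to walk each resource's JSON for Ref and Fn::GetAtt references and add anything listed under DependsOn:

require 'json'

# collect resource names referenced via "Ref" or "Fn::GetAtt" anywhere in a node
def referenced_names(node, names = [])
  case node
  when Hash
    names << node['Ref'] if node['Ref']
    names << node['Fn::GetAtt'].first if node['Fn::GetAtt']
    node.each_value { |v| referenced_names(v, names) }
  when Array
    node.each { |v| referenced_names(v, names) }
  end
  names
end

template  = JSON.parse(File.read('LAMP_Single_Instance.template'))
resources = template['Resources']

deps = resources.each_with_object({}) do |(name, body), h|
  explicit = Array(body['DependsOn'])                 # "DependsOn" entries, if any
  implicit = referenced_names(body) & resources.keys  # Ref/GetAtt targets that are resources
  h[name]  = (explicit + implicit - [name]).uniq
end

deps.each { |name, d| puts "#{name} depends on #{d.inspect}" }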

This can be expressed as a Directed Acyclic Graph (DAG); what remains is to extract an ordering by performing a topological sort. Once sorted, we need an execution engine that can take the schedule and execute it. Fortunately for me, Ruby has both: the TSort module performs topological sorts, and the wonderful Ruote workflow engine by @jmettraux can execute the steps. Given the topological sort produced by TSort:

["WebServerSecurityGroup", "WaitHandle", "WebServer", "WaitCondition"]

You can write a process definition in Ruote:

pdef = Ruote.define :name => 'my_stack' do
  sequence do
    participant 'WebServerSecurityGroup'
    participant 'WaitHandle'
    participant 'WebServer'
    participant 'WaitCondition'
  end
end

What remains is to implement the 'participants' inside the process definition. For the most part this means making API calls to CloudStack to create the security group and the instance. Here, the freshly minted CloudStack Ruby client from @chipchilders came in handy.
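
A participant for the security group resource could look roughly like this (a sketch, not stackmate's actual code; the client constructor and the create_security_group call are best-effort guesses at the CloudStack Ruby client's API):

require 'ruote'
require 'cloudstack_ruby_client'   # assumption: @chipchilders' client gem

class SecurityGroupParticipant
  include Ruote::LocalParticipant

  def consume(workitem)
    # build a client from the environment (endpoint and key variables are illustrative)
    client = CloudstackRubyClient::Client.new(
      ENV['CLOUDSTACK_API_URL'], ENV['CLOUDSTACK_API_KEY'], ENV['CLOUDSTACK_SECRET_KEY'])

    # createSecurityGroup is the underlying CloudStack API command
    response = client.create_security_group('name' => workitem.participant_name)

    # stash the result so later participants (e.g. WebServer) can refer to it
    workitem.fields[workitem.participant_name] = response
    reply_to_engine(workitem)
  end
end

# registered with the engine along the lines of:
#   dashboard.register_participant 'WebServerSecurityGroup', SecurityGroupParticipant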

Stackmate is the result of this investigation; satisfyingly, it is just 350-odd lines of Ruby.

Ruote gives a nice split between defining the flow and the actual work items. We can ask Ruote to roll back (cancel) a process that has launched but not finished. We can create resources concurrently instead of in sequence. There are a lot more workflow patterns here. The best part is that writing the participants is relatively trivial: just pick the right CloudStack API call to make.
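
For example, the two resources with no dependencies (WebServerSecurityGroup and WaitHandle) could be created concurrently using Ruote's concurrence expression (a sketch):

pdef = Ruote.define :name => 'my_stack' do
  sequence do
    concurrence do
      participant 'WebServerSecurityGroup'
      participant 'WaitHandle'
    end
    participant 'WebServer'
    participant 'WaitCondition'
  end
end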

While prototyping the design, I had to make a LOT of instance creation calls to my CloudStack installation. Since I don't have a ginormous cloud in my back pocket, the excellent CloudStack simulator filled the role.

Next Steps

  • As it stands today, stackmate is executed on the command line and the workflow executes on the client side (the server being CloudStack). This mode is good for CloudStack developers performing a pre-checkin test or QA developers writing automated tests. For a production CloudStack, however, stackmate needs to be a web service and provide a user interface to launch CloudFormation templates.
  • TSort generates a topologically sorted sequence; this can be further optimized by executing some steps in parallel.
  • There are more participants to be written to implement templates with VPC resources.
  • Implement rollback and timeout

Advanced

Given Ruote's power, Ruby's flexibility, and the generality of CloudFormation templates:

  • We should be able to write CloudStack-specific templates (e.g., to take care of things like network offerings)
  • We should be able to execute AWS templates on clouds like Google Compute Engine
  • QA automation suddenly becomes a matter of writing templates rather than error-prone API call sequences
  • Templates can include custom resources such as third-party services: for example, after launching an instance, make an API call to a monitoring service to start monitoring port 80 on the instance, or, for QA automation, make a call to a testing service
  • Even more general-purpose complex workflows: can we add approval workflows, exception workflows, and so on? For example, a manager has to approve before the stack can be launched. Or, if the launch fails due to resource limits, trigger an approval workflow from the manager to temporarily bump up resource limits.

Conference Highlights of 2012


2012 was a conference-filled year for me. I presented or exhibited at 10 conferences and attended 3 more. I have never spoken publicly so extensively before: here's a recap.

Open Networking Summit April 2012, Santa Clara, CA

CloudStack has a built-in controller that handles the setup and tear-down of GRE tunnels in response to VM provisioning events. We demo'ed this feature at the conference to great interest. The feature spec (for those inclined to code) is here.

CloudStack Developer Bootcamp, May 2012, Santa Clara, CA

In April 2012, Citrix donated the CloudStack codebase to the Apache Software Foundation. To introduce developers to the project and build a community, Citrix sponsored a developer bootcamp at its Santa Clara office. As one of the architects, I introduced the CloudStack architecture and its networking concepts.

Citrix Synergy May 2012, San Francisco, CA

This was a first for me as a Citrix employee. There was a last-minute scramble to demo the HDFS integration into CloudStack, but it didn't happen.

Carrier Cloud Summit June 2012, Paris

I was invited to speak on scalable networking and I prepared this talk on the scalability challenges in CloudStack networking. The talk was titled Making a million firewalls sing. It was well received.

Hadoop Summit June 2012, San Jose, CA

I attended this one purely to get a good grasp of Hadoop, HDFS in particular, and sat in on several talks around scaling HDFS and making it reliable.

OSCON July 2012, Portland, OR

I presented a tutorial on Hacking on Apache CloudStack. I can't say it was a great success, since several audience members could not get their DevCloud appliances to start (and I had some problems too) due to variances in VirtualBox dependencies. DevCloud is much improved now, thanks to efforts by the community.

Cloudcon Expo and Conference Oct 2012, San Francisco, CA

As part of the Build-a-Cloud-Day, I presented a talk on CloudStack networking concepts.

Cloud Tech III Oct 2012, Mountain View, CA

I presented a talk on the Evolution of CloudStack Architecture, chronicling the journey from 5-person startup to a force to reckon with. Cloud Tech III had some great speakers from Google and great insights from Andy Bechtolsheim.

Ricon Oct 2012, San Francisco, CA

I presented a lightning talk on how Basho’s Riak CS is integrated into Apache CloudStack. Riak CS is an S3 clone.

Citrix Synergy Oct 2012, Barcelona

As part of the CTO super-session that involved the Citrix and Cisco CTOs, I demo’ed CloudStack’s advanced network automation.

AWS re:Invent Nov 2012, Las Vegas, NV

Great keynotes and insights. I especially enjoyed James Hamilton’s talk on Building Web-Scale Applications with AWS.

CloudStack Collab Nov 2012, Las Vegas, NV

I reprised my talk on the Evolution of CloudStack Architecture and continued the theme by talking about the Future of Apache CloudStack.

Usenix LISA Dec 2012, San Diego CA

I presented a 3-hour tutorial Networking in the Cloud Age for cloud operators.

2013

I hope to present at more conferences next year. I have 2 talks lined up for ApacheCon in Portland in February 2013.