Public Cloud or Private Cloud: an analogy


The usual morning dilemma: spend the time to make (and drink) a coffee at home, or grab one from the cafe at the office? Then the cloud analogy hit… go “private coffee” or “public coffee”! Go with me on this one… :)

- Public coffee: grabbing a coffee at a cafe and paying the $3.

- Private coffee: having the beans, the grinder, the espresso machine, the milk and the time/skill to make a coffee at home.

If someone asked me to make a choice of only ever having public coffee or private coffee, I would struggle with the answer.

I like private coffee. Spending time selecting the right beans, ensuring the grind setting is spot on, the machine primed, the perfectly warmed and frothed milk with the mandatory attempt at artistic imagery. It tastes awesome. There is pride in every step of the process. It’s one of the things I look forward to every Sunday morning.

I also like public coffee. The sight and awe of the industrial grade espresso machine, imported from Italy, the special blend roast, the trained barista and that perfect pour and first sip. There’s an efficiency gained, both in time and quality, ending up with that consistent perfect nectar every time. Not to mention the time to chat or glance at the paper.

So what’s better? In terms of coffee, for me, there’s a time and a place for both public coffee and private coffee. The day usually begins with a private coffee, and rapidly evolves into public coffee for the majority. Public coffee is available everywhere, provides a consistently great product, lets me pay for only what I consume, and means I can get on with the business of life.

Bringing this rather stretched analogy back to cloud and business, hopefully it helps somewhat. Private cloud is about acquiring the equipment, skills and ingredients, and having the time to produce the output. With private cloud, you need to be the expert at cloud. Public cloud is about getting the same wonderfully awesome output (if not better) on a more efficient, on-demand basis. You can focus on the business and let someone else be the expert at cloud. Public cloud is also available anywhere, and in many different forms.

Stretched analogy? Possibly… but hey… I like good coffee, in any form :)

Written for Ninefold on 10/01/2013: http://ninefold.com/blog/cloud-computing/coffee-and-the-cloud

Government headed toward a Cloud Tipping Point


How awesome would it be to see the Australian Government ramp up its adoption of public cloud in Australia! There are so many reasons and benefits associated with this, especially when considering the business agility and innovation it would encourage across the country.

There are three things that – in combination – suggest we could be in a unique position to see a tipping point for Government adoption of public cloud in Australia.

  1. Firstly, procurement via DCaaS – today Ninefold was accepted onto the Federal Government’s Data Centre as a Service Multi Use List (DCaaS MUL). Whilst a rather long acronym, it is a great thing: it means government agencies can now use the Ninefold public cloud for projects with a budgeted contract value of up to $80k. But getting onto the DCaaS supplier list is not the most exciting bit. What’s most exciting is that the Federal Government has made it easier for its agencies to start using public cloud services without the usual drawn-out traditional procurement process.
  2. Secondly, the National Cloud Computing Strategy was released last month by Communications Minister Stephen Conroy, which, amongst other things, “requires federal agencies to consider cloud services”. Ok, so maybe it could have used stronger language and been a bit bolder. But it is encouraging to read the following report, stating that as a result of the new strategy, “Government agencies will move their public-facing websites to the cloud as part of the federal government’s new push to increase the use of cloud computing in both the public and private sectors.” There is a mandate.
  3. And thirdly, the Australian Government CTO John Sheridan this month talked about some very interesting Government procurement statistics in his blog. Two of the statistics stood out as particularly relevant:
    • 71% of Government contracts have a contract value below $80k. This gives a bit more weight to the importance of something like the DCaaS MUL procurement process.
    • 98% of the services procured by the Government are sourced in Australia. This is great for Australian owned and operated cloud providers like Ninefold.

So… if government agencies have been mandated to move their public-facing apps to the cloud – AND – there is the DCaaS procurement model available today that allows them to procure cloud services easily – AND – most projects fall under the $80k threshold – then it could be that we have the perfect conditions right now to see the tide rise, to witness a tipping point in the ramp-up & adoption of Australian public cloud by federal & local government agencies.

Only time will tell, but wouldn’t it be awesome!

One thing is for sure, Ninefold is working hard to make self service public cloud simple & intuitive across multiple secure Australian Data Centres, with global scale and infrastructure performance to energise the most demanding of applications. Not to mention our 24×7 local support team.

Written for Ninefold on 25/06/2013: http://ninefold.com/blog/cloud-hosting/government-headed-towards-a-cloud-tipping-point/

Cloudstack Graduates at Apache – Three Observations

Ever hear the story of the startup that got acquired for $200M, only to then be donated to the open source community? Not your everyday story. But then, the current online-cloud-mobile-software revolution is not your everyday event – our world is changing – business is changing.

This week saw some open source awesomeness, the next chapter of that story: the announcement that Apache CloudStack has graduated to an Apache Top Level Project (TLP) at the Apache Software Foundation (ASF). This signifies that the project’s community and products have now come under the Foundation’s meritocratic process and principles.

The story and the evolution of CloudStack to an Apache TLP provide some cool insight into this changing world of cloud:

  1. The power of community – it was reported that over the last 12 months Apache CloudStack received input from 700+ contributors and close to 60 committers – each bringing valuable insights, knowledge & experience. During this time CloudStack went from being a Citrix-led development to a Foundation-led development, with now only 32% of the developers coming from Citrix. An impressive demonstration of the power of community. “When CloudStack first became an Apache Incubator project, it was a well-established cloud management platform with a mature codebase,” said Chip Childers, Vice President of Apache CloudStack. “Our work in the Incubator has focused on growing a really strong community around the code and establishing the governance practices expected of a top-level project within The Apache Software Foundation.” (source)
  2. Cloud is bigger than any one vendor – there are a few developing standards out there in the land of cloud, some open source, some vendor de facto. Could CloudStack become the “Apache Web Server of the Cloud”? The success of the Apache Web Server rates pretty high; it became a key pillar of the internet as we know it today. And it continues to enjoy dominance as the most widely used web server, with 54% market share as at March 2013. Why did it achieve such adoption? Among the many reasons, the Apache Web Server became bigger than any one vendor. It’s neutral, it’s open source and it’s reliable. It will be interesting to see if the same successful ASF model will produce a ‘cloud stack’ of similar proportion. “We believe that Infrastructure-as-a-Service is the next generation of IT infrastructure, and that people will demand open standards and open governance for such an important layer in their IT stack,” explained Childers. “That is why having the CloudStack project meet the rigorous standards of ASF governance is so significant.” (source)
  3. Do “Cloud” like it’s 2013 – in 2010 the disruptive powers of cloud were self service, usage-based virtual servers, with API & hardware abstraction. No small feat back in the day! That was the essence of CloudStack v2.0. Fast forward to 2013 and we’re talking about CloudStack v4.x, where those things are a given (isn’t everyone doing self service now?). Cloud in 2013 is about so much more: regions, persistence, autoscale, interoperability, app & database frameworks and the like. If you’re not already doing cloud like it’s 2013, get your skates on, 2014 is right around the corner :)

Competitive forces are still very much ripe for disruption and innovation in the cloud, and there’s never a dull moment here at Ninefold as we strive to be true doers of cloud. Congratulations to the ASF & CloudStack community from all the crew here at Ninefold!

Written for Ninefold on 28/03/2013 : http://ninefold.com/blog/innovation/cloudstack-graduates-at-apache-three-observations/

Apps that are “of the cloud”

There’s a new wave of applications being built… Apps built “of the cloud”, not for the cloud, or on the cloud, but “of the cloud”. They are born with cloud. They depend less on underlying service levels, preset capacity, or the frequent deployment of changes attributed to agile development. Not that these items are any less important, quite the opposite! Instead, applications “of the cloud” see the world differently, taking control of these items and overcoming them – more specifically, they leverage the true benefit of cloud to overcome them.

The National Australia Bank said it well in a recent ZDNet article, saying they wanted to ensure that the apps they deploy to the cloud are “not just virtualised incarnations of apps from the old IT infrastructure”. Sure, there are lower cost & better manageability benefits in deploying any app to the cloud, but the true benefits of cloud are found in the applications written of the cloud. The NAB went on to say that they want to “rethink how we run solutions to make sure that, in the future, they are designed and deployed for the cloud”.

What does this mean? How does an application “of the cloud” overcome these challenges?

  1. They leverage multiple availability zones. An application “of the cloud” is able to work across cloud availability zones. It is distributed. A disruption to a single availability zone does not affect the application, the application carries on serving traffic and performing critical business functions & calculations.
  2. They auto-scale. An application “of the cloud” is able to spin up more capacity when load demands it, and spin down capacity if no longer required. No intervention required. The application takes control of its own capacity, ensuring maximum efficiency and a reduction in risk caused by a lack of capacity.
  3. The cloud becomes code. One of the core definitions of cloud is self service, with almost every cloud offering a self service portal for provisioning cloud infrastructure. An application “of the cloud”, however, leverages the cloud’s API; it does not need a portal. Instead it embeds cloud provisioning functions within the application itself, in the code. The application takes charge of provisioning its own cloud infrastructure using defined logic that makes sense to it as an application – there is no need for a person using a self service portal UI.
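Points 2 and 3 can be sketched in a few lines of code. This is a minimal, hypothetical illustration only: `CloudAPI`, `deploy_server` and `destroy_server` are stand-ins for a real provider’s SDK or REST API, not actual calls.

```python
import math

# Hypothetical sketch: CloudAPI and its methods are illustrative
# stand-ins for a real provider's provisioning API, not an actual SDK.
class CloudAPI:
    """Pretend provisioning API; a real app would make signed HTTP calls."""
    def __init__(self):
        self.servers = []

    def deploy_server(self, zone):
        server_id = f"{zone}-vm-{len(self.servers) + 1}"
        self.servers.append(server_id)
        return server_id

    def destroy_server(self, server_id):
        self.servers.remove(server_id)


def autoscale(api, current_load, zone="zone-1", target_per_server=100):
    """The app rules its own capacity in code: grow under load, shrink when idle."""
    needed = max(1, math.ceil(current_load / target_per_server))
    while len(api.servers) < needed:
        api.deploy_server(zone)              # spin up more capacity
    while len(api.servers) > needed:
        api.destroy_server(api.servers[-1])  # spin down what's no longer required
    return len(api.servers)


api = CloudAPI()
print(autoscale(api, current_load=350))  # grows the fleet to 4 servers
print(autoscale(api, current_load=90))   # shrinks back to 1
```

The point is the shape, not the detail: provisioning decisions live in application logic, behind an API, with no human at a portal.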

In short, when thinking about deploying to the cloud, think about two categories of applications: traditional workloads & new “of the cloud” workloads. Both are candidates for cloud, just in different ways and for different benefits. For the former, there are major cost and manageability benefits of deploying to the cloud. For the latter, the benefits of cloud become unique and disruptive – in essence, abstracting away the cloud infrastructure and ruling it via code.

Written for Ninefold on 07/12/2012: http://ninefold.com/blog/cloud-programming/apps-that-are-of-the-cloud/

Multiple Cloud availability zones become ‘the norm’

You know disruption is afoot in an industry when a “value added” service becomes “the norm” – that point where a premium service feature or an “optional extra” becomes available to the masses and customer expectations of the standard product evolve. Remember when air-conditioning in the humble automobile was an “optional extra”? Not that long ago! These days air conditioning is standard in what a car provides, part of the standard product. It’s at this point that an industry is ripe for innovation, the next wave of product differentiation.

With the rapid rise and adoption of cloud computing, what was once an “optional extra” for IT is quickly becoming part of the standard customer expectation – self service and usage-based billing, for example. But what is next from the cloud is even more exciting, because it challenges one of the most fiercely competed areas of differentiation: the service level (the SLA).

Multiple Availability Zones in the cloud will challenge the status quo of IT when it comes to service levels and SLAs. Is running your application in the cloud across Multiple Availability Zones an “optional extra”, or now part of the standard customer expectation?

  • Quick definition:  An Availability Zone is a physically separated set of infrastructure designed to be isolated from failures in other Availability Zones. Virtual Servers run in an Availability Zone.

The Holy Grail of IT is probably pretty close to being able to seamlessly run an application across multiple separate infrastructures at the same time – in unison. But the cost and effort involved in attaining this has historically proved prohibitive for the majority, making it a “nice to have” – an “optional extra” – with the cost-risk analysis usually parking large scale adoption. So we focus on service levels (SLAs), rebates, and terms and conditions to mitigate risk.

But disruption is afoot – technology and cloud have started to shift the customer expectation away from a sole focus on the Service Level, and onto true web scale application design and hosting via Multiple Availability Zones. The Cloud has made developing and running an application across Multiple Availability Zones easy and affordable for the masses. The tools are now in the customer’s hands, with developer API access to automate.  You only have to Google “cloud” to see some of the high profile cloud casualties last month copping bad press for downtime caused by not running their application across multiple availability zones. Expectations are changing; “the norm” is changing!

Today Ninefold is publicly kicking off its multiple Availability Zone strategy, announcing the release of Availability Zone 2 and the launch of Availability Zone 3 a little later in the year. Ninefold customers can now provision cloud services in one, or many, of the Ninefold Availability Zones via the simple self service portal or the Ninefold API. Even better, Ninefold customers are able to snapshot and move their cloud services between Availability Zones via inter-zone private networking. Check out the details here: ninefold.com/cloud-architecture/cloud-availability-zones/

Implementing your application in the cloud across Multiple Availability Zones is no longer an optional extra – it’s fast becoming the norm, it’s easier to do, it’s affordable and it’s in Australia with Ninefold!

Written for Ninefold on 11/07/2012 http://ninefold.com/blog/cloud-architecture/multiple-cloud-availability-zones-become-the-norm/

Data Upload Faster than a Speeding Pigeon

It takes 2 hours, 6 minutes and 57 seconds to transfer and upload data over a distance of 80km via pigeon – and no, that isn’t a typo. In 2009, Unlimited IT in South Africa strapped a data card to Winston the carrier pigeon’s leg and sent him on his way to their Durban office in a race against data uploaded at the same time via the local internet service provider. By the time Winston’s special delivery had arrived and been downloaded, only 4% of the same data had been successfully transferred over the internet connection.

Sure, internet speeds and connectivity in South Africa may be significantly behind what we are used to here in Australia. However, Winston might still beat your data upload if you are transferring TBs of data, even post-NBN.

Don’t worry – Ninefold isn’t announcing a carrier pigeon service. We don’t have the roof space. But when thinking about the transfer of large amounts of data (e.g. terabytes), it may be more practical to get those envelopes out and use the internet to find a courier to deliver your devices to us instead.

Let’s consider something more practical at the centre of many overnight courier services – a Boeing 747-400. Theoretically, if you packed it full of 3TB hard drives and flew it from LAX to JFK (flight time 15,254 seconds) it would transfer roughly 98 terabits per second.
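The arithmetic behind that figure is worth a quick back-of-envelope check. The drive count below is my assumption for illustration (roughly 62,300 drives, which at well under a kilogram each sits comfortably inside a 747-400 freighter’s payload); the flight time is from the example above:

```python
# Back-of-envelope "jumbo jet bandwidth" check.
# The drive count is an assumption for illustration, not a published figure.
DRIVES = 62_300          # 3TB hard drives packed on board
DRIVE_BYTES = 3e12       # 3 TB per drive (decimal terabytes)
FLIGHT_SECONDS = 15_254  # LAX -> JFK flight time from the example above

bits_moved = DRIVES * DRIVE_BYTES * 8
throughput_tbps = bits_moved / FLIGHT_SECONDS / 1e12
print(f"~{throughput_tbps:.0f} Tbit/s")  # ~98 Tbit/s
```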

Again… Ninefold isn’t about to move into the airfreight business. But I merely bring up pigeons and jumbo jets to illustrate that data transfer needn’t remain a hurdle for those businesses looking to adopt cloud storage. Storing your data in the cloud makes a lot of sense on so many levels: accessible from anywhere, massively scalable, no upfront investment, pay for what you use, secure data centre location, etc. But if you already have a large volume of data, how do you get it into the cloud in the first place?

There are plenty of tools available for mirroring and synchronising data, or uploading files into the cloud. These are great, because not a whole lot of data may change on a day-to-day basis. Where they struggle (from an efficiency perspective) is with the initial upload of ‘big’ data.

Depending on your office or home internet connection speed, it could take days, if not weeks, to transfer a terabyte of data into the cloud. The many online bandwidth calculators on the web show that it will take anywhere from around 9 days (10Mbps link) to over 60 days (1.5Mbps link) to transfer a TB of data (assuming the maximum throughput rate). And this doesn’t take into consideration factors such as latency, or other items consuming your bandwidth at the same time. Plus there’s the impact that committing your upload capacity to getting your data into the cloud can have on your other online business activities.
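The arithmetic is simple enough to check yourself, assuming the link runs flat-out at its full rate and a decimal terabyte (10^12 bytes); real calculators vary a little with the throughput they assume:

```python
# Time to push data through a link at full line rate (no latency, no contention).
def transfer_days(link_mbps, terabytes=1):
    bits = terabytes * 1e12 * 8        # decimal TB -> bits
    seconds = bits / (link_mbps * 1e6)
    return seconds / 86_400            # seconds in a day

print(f"{transfer_days(10):.1f} days")   # ~9.3 days on a 10 Mbps link
print(f"{transfer_days(1.5):.1f} days")  # ~61.7 days on a 1.5 Mbps link
```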

Sneakernet is the common term for the transfer of data by courier and other traditional methods, instead of relying on an internet last mile connection that – let’s be realistic – still struggles with unfeasibly large data transfers.

And yes, this is what Ninefold has just announced. A bit more practical than building pigeon coops or adding a runway to our offices. Now you can just send us your hard drives and we’ll pump the data straight into the cloud. No bandwidth hogging, far less hassle, and everything ready to use within 3-5 days. Oh, and we’ll even send your hard drives back, because that’s the kind of chaps we are.

Officially, the service is called Sneakernet. Personally I’m calling it Winston, after a certain pigeon that proved a point.

Written for Ninefold on 22/09/2011 : http://ninefold.com/blog/cloud-storage/data-upload-faster-than-a-speeding-pigeon/