UPDATED 15:25 EDT / OCTOBER 07 2009

SALabs’ October Silicon Valley Cloud Club Report [Part 1]

[Editor’s Note: Our San Francisco Cloud Club (in partnership with the Silicon Valley one) meeting that took place Monday was a rousing success, with a pretty impressive turnout for our first meeting. James Watters and Rich Miller both took extensive notes on the event.

James Watters you probably know as our Managing Director of Cloud Computing, but Rich Miller you may not. He’s very active on Twitter with the cloud computing usual suspects and serves as CEO of Replicate Technologies, a provider of predictive fault management for virtualized datacenters.

Below are the slightly edited and cleaned-up notes from part of the event – there are more agenda items and Q&As we’ll be posting over the course of the next few days and weeks. All questions were input by attendees during the first free-form hour of the event.

Keep your eye on this space to know when the next SF Cloud Club is congregating! – mrh]

Q: Will cloud growth be driven primarily by new applications and companies, or by migration of existing applications and workloads, including the cloud technologies coming from VMware? (James Watters)

James Watters: Web application suitability for the cloud is nearly taken for granted, according to the crowd last night – and this is a big step. Web and collaboration workloads have been the fastest-growing segment of the server market since at least 2002. If you aren’t building or deploying your web applications in a cloud, you are either rich enough to build out your own private web cloud (Twitter, Facebook) or being pessimistic about your growth opportunities.

Rich Miller: In addition, the economic argument for small and medium sized companies puts a good deal of pressure on those companies — even the established ones with established / legacy apps — to make significant use of cloud.  However, they tend not to be ‘migrating’ or ‘porting’ existing enterprise applications to the cloud, but rather transferring their use to SaaS.

The problem for mid-size and enterprise-class companies comes from a recognition that no one SaaS vendor has put in place the ‘perfect suite’ of applications, nicely integrated with one another and ready for adoption by the new customer.  The fact is that companies want to pick the best combination of cloud-oriented applications — independent of who provides them, they want ‘best of breed’ or ‘most suited to my problem.’  If they pursue a philosophy of cherry-picking SaaS, or a combination of on-premise and SaaS applications, they’re confronted with the problem of application integration.

The integration of independently provided applications, residing in the cloud but offered by completely different commercial providers, presents a problem that no one is prepared to solve.  In some cases, it’s possible to provide ‘SaaSy EAI’ by throwing a lot of money at integrators… thereby defeating one of the main purposes for going to cloud apps.  In many cases, the range of independent, third-party cloud-based services that permit SaaS integration with one another, AND with those applications that remain on-premise, simply does not yet exist.  What kinds of services?  I (rhm) gave the examples of:

–  trust broker services that act on an enterprise’s behalf to manage authentication and authorizations that are required when doing multi-SaaS application integration

– the accounting and settlement intermediaries that act both as clearinghouses for the exchange of payments among/between SaaS vendors, and as managers of chargeback within the corporate customer’s operations

The ‘consensus’ on this question — if there was one — is that growth in cloud-based applications will hit a barrier estimated at between 10% and 20% penetration.  It will remain there for the foreseeable future (2-3 years?), until some of these SaaS-integration issues are addressed.

Q: What is the impact of internal private clouds on both enterprises and external cloud service providers? (Randy Bias)

James Watters: I got the ‘scrunch face’ from Randy Bias and James Urquhart when I suggested that private clouds need to adhere to public cloud standards to be really useful.  I believe this is important because it keeps both the economics and usability innovations of the public cloud proximal to how users evaluate their internal private clouds – or as /Hoff said once, it allows public cloud to be the forcing function for change.

If private or internal clouds get really exotic, with proprietary, in-house-created management, deployment and consumption functions, they won’t play as easily with the coming wealth of interesting solutions created on top of public cloud standards.

The other point is simple: this is what really smart companies already have today. If you sit down with the top investment banking firms in the country, many of them have highly sophisticated, JeOS-optimized application deployment, scaling, patching, and management functions for autonomic computing – but it’s expensive to create this kind of in-house IP.

Amazon sources tell me that over 40% of their revenues are driven by third party applications built directly atop their API. If you build an internal cloud not compliant to public standards you may be left without access to this increasingly important ecosystem of innovation.

Rich Miller: For better or worse, the adoption of cloud-oriented computing by the enterprise and SMB will start as a transition from ‘the way things are done now’ to in-house, on-premise clouds.  IT organizations will get religion … in part through the widespread adoption of server virtualization … and start operating their in-house IT organizations like utilities:  lots of self-service, pay-as-you-go, multi-tenancy. (Remember: cloud is an operating model, not just a technology model.)

But, in order to get there in an orderly fashion, the path will be evolutionary.  And, in order to get there, some of the internal clouds will be mixed-bags of infrastructure-cloud offerings (especially in-house data clouds), platform-cloud offerings and application-cloud offerings.

To your point, James, one way in which coordination and compatibility with public cloud offerings may come about is if the management systems that the enterprise purchases to operate, administer and manage their in-house operations are built to recognized ‘standards’… those offered by the most powerful service providers (e.g. Amazon AWS) or technology providers (e.g. VMware).  Over a reasonable period of time, the management of an in-house, on-premise cloud will morph easily into managing hybrids (both on- and off-prem).

Q: What is the future of hybrid microsecond SSD clouds? (Karriem @ak2consulting)

James Watters: I don’t know of any large public cloud explicitly offering a pool of SSD-specific storage. We did not get a chance to address this question last night. SSDs have a future in the cloud – but it’s not clear to me yet whether they will be explicitly exposed, or used opaquely to speed up a storage service’s performance. My preference would be for any implementation and management of this mix to fall under the responsibility of the cloud provider.

Rich Miller: This question of SSD in the large public cloud wasn’t addressed.  However, there was some conversation that expressed the notion that, by placing SSD in-house / on-premise, and incorporating it into a new architecture of data federation that extended to the cloud, the nature of cloud storage would change significantly.  No one delved deeply enough into the questions of how things would change when SSD-based data-stores extended from internal, on-premise resources into network-resident data clouds.

Q: What is the future of Database as a Service? (Srini V. Srinivasan)

James Watters: This is one of the most important questions in cloud computing (data structure attributes and control).  A major online virtual world company was at the meetup last night, looking into using a next-generation NoSQL architecture for their growing application. This is a huge topic, worthy of a whole club session to discuss.

I think the future is bright and amazing – but the isolation of ‘states’ is currently a big problem. The good news is that Microsoft is taking a different approach from Google, Amazon and others, so we should have an interesting adoption war ahead of us.

I wonder what DB structure Amazon really uses internally to run their commercial transactions? They haven’t exposed that service yet… see the next question as well for more.

Rich Miller: In another context, this issue of a ‘collision’ of NoSQL and SQL-based database services showed up.  And, as a practical matter, the group was split between ‘true believers’ in the NoSQL / key-value database services and those who were more ecumenical.  (I fall into the latter camp.)

The less dogmatic camp can characterize its argument like this:  There are legitimate problems that are well-suited to relational DBMS.  Not only that, there are tons of DBMS apps — particularly transactional applications — that simply are not going to be reworked from first principles in order to be ‘pure’ in their rejection of relational DBMS and wholesale adoption of key-value stores.  As a practical matter, it just can’t be an ‘either/or’ argument.

At the same time, they believe that the key-value database for ‘Big Data’ applications is increasingly available and increasingly useful, both for new applications and for a slew of well-recognized applications that have had to endure unnatural acts to run on relational DBMS.  This is great, and it’s liberating… use the database technology that is most appropriate to the problem being addressed.
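The tradeoff being debated can be sketched in a few lines of Python – a toy illustration of my own, using the stdlib sqlite3 for the relational side and a plain dict as a stand-in key-value store (the schema and data are invented):

```python
import sqlite3

# Relational side: a schema and an aggregate query -- the kind of workload
# that is awkward to rework on top of a pure key-value store.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")
db.executemany("INSERT INTO orders VALUES (?, ?, ?)",
               [(1, "acme", 120.0), (2, "acme", 80.0), (3, "globex", 50.0)])
# Summing across rows is one line of SQL.
rows = db.execute(
    "SELECT customer, SUM(total) FROM orders GROUP BY customer ORDER BY customer"
).fetchall()
print(rows)  # [('acme', 200.0), ('globex', 50.0)]

# Key-value side: lookups by key are trivial and scale out easily,
# but the aggregate above would have to be computed in application code.
kv = {"order:1": {"customer": "acme", "total": 120.0},
      "order:2": {"customer": "acme", "total": 80.0}}
print(kv["order:2"]["total"])  # 80.0
```

Neither model is ‘wrong’ here – which is exactly the ecumenical point: the join/aggregate fits SQL naturally, while the per-key access pattern is what the key-value stores optimize for.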

Q: Must transactions be kept out of the cloud? If so, does cloud computing serve as nothing more than a platform for social services and websites? (DISCUSSED)

James Watters: In the long run, no, but in the shorter term it seems classic ACID DBs are not the ideal initial workload to migrate to the cloud. The real question is whether very large transactional workloads can be put into the cloud, as MS will address small and medium-sized ACID DBs even with their initial release of Azure SQL.

One consistent message from cloud discussions is that people need to get more creative about when they need to use transactions.

Rich Miller: Yeah… this whole question of whether cloud can or cannot be used for transactional data is just wrong on its face.  The idea that a cloud-based or ‘hosted’ DBMS is unsuited for ACID DB just comes across as silly or naive.

Now… Will transactional processing in the cloud change BECAUSE of the new topologies and nature of shared-tenancy, and global scales?  Without a doubt.  I’m putting at least some of my money on the applicability of complex event processing (CEP) and what my friend Duncan Johnston-Watt of CloudSoft refers to as “Content Processing Networks”… environments where data in motion is being operated on as it moves by means of reliable messaging.

(OK… stop me before I spin out of earth orbit here.)

And, as you point out…  we should all just ask @GeorgeReese.  He called BS this past weekend on folks spouting off about ACID DB and relational DBMS in general being unsuitable for cloud computing.

Q: What kinds of apps will be deployed on the cloud?  Is it all about large scale or will “normal” apps be deployed to the cloud? (Derek Henninger)

James Watters: One of my favorite exchanges of the night was when Oren Tiech from Heroku leapt into the conversation to describe the myriad applications perfectly suitable for his PaaS offering. I got the feeling he thought at least 50% of internal IT and external web applications could be written for his platform without a second thought to any infrastructure operations details.

Rich Miller: I don’t think that he’d make that claim.  I’ve had this conversation with him.  What he WOULD claim is that (a) there are an astounding number of applications that have not been built before because of prohibitive cost, and (b) there are a huge number of applications used today by enterprises / internal IT which lend themselves brilliantly to the code-run-manage approach that is characterized by platform offerings like Heroku.

…Speaking with him prior to the moderated session gave me an appreciation for just how complex and evolved the underpinnings of their platform really are. They are solving for a lot of operational dependencies, not merely doing lighter-weight script scaling of resources like a RightScale (sorry, Mike) – and if they can get any sort of continuous-improvement curve going here…

On the other hand, he of course admitted that you wouldn’t want to write any of the top 20+ web applications on his service (I asked about Twitter), as they simply couldn’t scale enough. This is where James Urquhart made a great point about the continued necessity of IT ops involvement with truly scaled applications.

Rich Miller: What Oren’s also going to admit is that there are classes of applications OTHER than the top 20, web-scale behemoths that we don’t know how to build and operate without paying particular ongoing attention to infrastructure operation and administration.

These are crucial issues. The balance of power between developers and operations is at stake in the cloud evolution–and is TBD. I don’t see a massive homogenization looming just yet, consolidation yes, but both abstraction and control still have powerful market forces behind them. As Oren said, there is a lot of power in ‘and’ not just ‘or.’

I think it was at that point that I suggested that, just as the PaaS providers have made extensive use of the ‘development framework’ to accelerate development and reduce on-going operational burden, we may want to start thinking about the creation of the analogous frameworks for infrastructure… something that provides enough variation and richness, but also assures the datacenter architect and ops guys that they can declare their intentions for the infrastructure, and have it automagically remain true to that intent… the ‘infrastructure framework.’
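That ‘declare intent, have it automagically remain true’ idea can be sketched as a toy reconciler – the names, roles, and structure here are entirely my own illustration, not anything proposed at the meetup:

```python
# A minimal sketch of an 'infrastructure framework': the architect declares
# intent (role -> instance count), and a reconciler reports what must change
# to keep the running infrastructure true to that intent.
desired = {"web": 4, "worker": 2, "db": 1}      # declared intent
observed = {"web": 3, "worker": 2, "cache": 1}  # what is actually running

def reconcile(desired, observed):
    """Return the actions needed to bring observed state back to declared intent."""
    actions = []
    for role, want in desired.items():
        have = observed.get(role, 0)
        if have < want:
            actions.append(("start", role, want - have))
        elif have > want:
            actions.append(("stop", role, have - want))
    for role, have in observed.items():
        if role not in desired:           # running, but never declared
            actions.append(("stop", role, have))
    return actions

print(reconcile(desired, observed))
# [('start', 'web', 1), ('start', 'db', 1), ('stop', 'cache', 1)]
```

Run continuously, a loop like this is what lets the ops side declare intentions once and have drift corrected automatically, rather than scripting each change by hand.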

Q: How many years before the majority of IT services are provided by the cloud?  5? 10? Never? (Ron Wolf) (DISCUSSED)

James Watters: The group was pretty unanimous that public cloud becoming the majority of IT will happen over a long, long horizon – seven years at the minimum. The inclusion of a wide variety of different types of internal spending as ‘private clouds’ could greatly affect the answer.

Everyone agreed that after applications.gov and other major initiatives were released, the pressure is on to move.

Rich Miller: This is also where the barrier or threshold shows up: (a) the need for straightforward XaaS integration meets (b) the almost complete dearth of SaaS integration tools and solutions.

Q: What are the customer requirements for cloud infrastructure in markets like telecom and finance?

James Watters: This was not discussed, and at the risk of getting flamed by customer-intimacy advocates, I would suggest we are still defining cloud computing’s horizontal value propositions to the point of usefulness, and that vertical ones will mostly come later. This is probably not applicable to SaaS, but IaaS and PaaS seem to be evolving in a generally horizontal market configuration.

Q: What is the future of storage in the cloud? Will SANs go away? (DISCUSSED)

James Watters: This was a very focused and practical question, and it created a lot of discussion because it compared a comfortable, familiar current technology with a new paradigm.

Generally, very large-scale, high-performance storage is the ‘boat anchor’ of true workload mobility today. James Urquhart made it clear that the trifecta of Cisco, NetApp, and EMC are in the lab now working on this problem, with solutions to come soon.

Brian Bulkowski made a very well-received point in the discussion about data and processing becoming more tightly paired in the cloud for performance reasons, in architectures such as Hadoop. This is a flip of the SAN model. Rich Miller then followed with a pointer to cellular storage architectures as the design pattern of the future. Mapping SANs onto cloud problems was clearly an ‘opportunity’ to work on for most of the attendees.
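The pairing Brian described can be shown in miniature – plain Python, no Hadoop, with an invented two-node layout – to make the flip of the SAN model concrete:

```python
from collections import Counter
from functools import reduce

# Data already lives partitioned across 'nodes'; in the SAN model all of it
# would be pulled across the network to a central compute tier first.
nodes = [
    ["error", "ok", "error"],   # partition resident on node A
    ["ok", "ok", "error"],      # partition resident on node B
]

# Map phase runs WHERE the data is: each node counts its own partition,
# so only small intermediate results ever cross the network.
partials = [Counter(part) for part in nodes]

# Reduce phase merges the small partial counts centrally.
totals = reduce(lambda a, b: a + b, partials)
print(dict(totals))  # {'error': 3, 'ok': 3}
```

The network traffic is proportional to the size of the summaries, not the size of the data – which is exactly why co-locating compute with data flips the economics of the SAN design.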

Rich Miller: Yes, this one went off into some great discussions, and side topics.

Because of the constraints of distance (and the speed of light), as well as the huge volume of both data-at-rest and data-in-motion/streams that must be addressed, those in the group who actually RUN services could only shake their heads in frustration when talking about the ‘elephant in the room.’  Data is a cold-hearted killer.

I lobbed in one of my wild-eyed notions of the moment:  The problem for the individual user, as well as for the enterprise requiring significant storage infrastructure, starts to remind me of the issue faced early in the internet era by websites that had to serve the same data to a huge number of individual consumers across the web.  The answer to that problem became a huge business we think of as Content Delivery Networks.

When I think about the requirements of a ‘customer’ of cloud-class computing with huge data needs, it seems to me that the problem has been stood on its head… or at least on its side.   The solution seems to be how the enterprise consumer of cloud COMPUTE can reduce the delay in starting that processing, using anticipatory data dissemination to the clouds where computation is going to take place – possibly to multiple replicas or near-replicas – so as to reduce latency.  Thus, it’s the enterprise user who’s now distributing data to reside in proximity to the compute processes that are most likely to be ‘consuming’ it.

It’s the topic for a blog post that I’ve been writing and trying to finish for weeks.

Q: When should a cloud NOT be used, and why? (Dave Crocker)

James Watters: Clouds should always be used, under all conditions. Get out of the penthouse, Dave.

This may be one to discuss in the future – but with many of us still figuring out places and methods TO deploy clouds, I don’t think we lack for workloads not yet moved to the cloud en masse. The sticky persistence of mainframe transactional back-end workloads was brought up more than once last night, however – and this is the simplest example of a workload that has eluded every ‘next big thing’ so far.

Q: Should clouds talk to each other?  Why?  How? (Dave Crocker)

James Watters: This was a tighter and briefer conversation than the endless standards debates on Twitter and online might suggest. The room was wise in suggesting that many management interoperability and data sharing problems were in no way unique to the cloud movement. The cloud was seen as an opportunity to improve on this generation-old problem.

Rich Miller: Can’t argue with that… since I was one of the guys flapping on about it.

