Is Red Hat the new Oracle?

Locking in? Trying to replicate its Linux success in the cloud, Red Hat said it will not support Red Hat Enterprise Linux customers who run a non-Red Hat OpenStack distribution.

Everyone knows that Red Hat, the king of enterprise Linux, is banking on OpenStack as its next big opportunity. And most figured it would be aggressive in competing with rival OpenStack distributions — from Canonical, HP, Suse, and others.

What we didn’t necessarily know until the Wall Street Journal (registration required) reported it Tuesday night is that Red Hat, which makes its money selling support and maintenance for its open-source products, would refuse to support users of Red Hat Enterprise Linux who also run non-Red Hat versions of OpenStack.

Since Red Hat accounts for more than 60 percent of the paid enterprise Linux market, that policy could stem adoption of rival OpenStack distributions. It could also irritate customers — many of whom don’t like the specter of vendor lock-in.

In this policy — which the company confirmed to the Journal — Red Hat seems to have ripped a page out of Oracle’s playbook. The database giant, as it expanded into other software areas, decided that it would not support customers running non-Oracle virtualization, non-Oracle Linux and the like, unless the customer could prove that its issue originated in the Oracle part of the stack. That went over like a lead balloon with users. Since Oracle announced its own OpenStack distribution this week — and it also fields its own Linux distribution — the stage is set for dueling single-source OpenStack implementations going forward. Needless to say, that could ding OpenStack’s promise of no vendor lock-in.

The Journal also reported that Red Hat employees were told to stop working with Mirantis, an OpenStack systems integrator that late last year started offering its own OpenStack distribution. Paul Cormier, Red Hat’s president of products and technologies, told the paper that Red Hat would not bring a competitor into its accounts.

Reached late Tuesday for comment, a Red Hat spokeswoman noted that OpenStack “is not simply a layered product on top of Linux — [RHEL] is tightly integrated into and part of OpenStack. It is much more complex and intertwined than, say, Microsoft choosing to run PowerPoint on iOS.” I will update this story with additional Red Hat comment when it becomes available. Mirantis could not be reached for comment.

This news, which came out of the OpenStack Summit in Atlanta, may unsettle corporate customers wary of committing too much of their IT budget to any one vendor. But it’s hardly surprising. All of these vendors, while pledging the open-source goodness of OpenStack, also want to expand their own reach in customers’ shops. OpenStack competitors have been wary of Red Hat for quite some time, expecting it to try to replicate its enterprise Linux dominance in the cloud with OpenStack.

To be sure, this problem is somewhat theoretical for now, as IDC analyst Al Gillen pointed out. “From a practical standpoint, this is a non-issue today since there are precious few OpenStack clouds in production use,” he said via email. He agreed that this smacks of Oracle’s support policies but attributed Red Hat’s action more to the “immaturity and fast release of OpenStack more than anything else.”

Mark Shuttleworth, founder of Canonical, which competes with Red Hat both in Linux and OpenStack, recently acknowledged that the battle front has moved from the single-node Linux server realm, where Red Hat won, to multi-node cloud deployments, where Red Hat’s enterprise software licensing mentality could put off customers. This new policy will test how receptive big customers are to such enterprise sales tactics in the age of cloud.

(Via Gigaom.com)

Eight Cloud and Big Data Predictions for 2014

Prediction has always been a fun exercise. It forces you to take a step back from your day-to-day activities, look at the market “crystal ball” and figure out what the future looks like. This year I found this exercise particularly difficult, as the amount of new innovation and the number of conflicting trends taking place at the same time can be very confusing.

After spending quite some time reading market analyses from different sources (see references at the end) and compiling them with all the things I’ve seen and heard throughout the year, I came to the following observations on how 2014 will look for Cloud and Big Data. I’m happy to exchange thoughts in that regard and learn how you envision 2014.


Public and Private Cloud in 2014

1. Amazon continues to distance itself while Google and Microsoft are catching up

Amazon continues to lead in public cloud and to distance itself from the rest of the market. This year it reached five times the size of the other cloud vendors combined, as noted in a recent Gartner report. Google and Microsoft are slowly closing the gap as the closest alternatives, but at a much slower pace than one would expect given both companies’ significant investment in this area.


2. 2014 will be the year of Enterprise Clouds

Traditional enterprise players, such as IBM, HP, Cisco and Red Hat, continue to fight for the remaining share of the market, mostly around enterprise adoption. Yet enterprises have been slower to embrace and execute on a private cloud strategy. Part of the reason for the slow adoption is the gap between the solutions provided by most of the regular contenders, who are still competing on selling an end-to-end story, and the reality that most enterprises are looking for an open, hybrid cloud strategy, especially given that there is no clear winner.

3. OpenStack will be the most popular choice for Enterprise Clouds in 2014

OpenStack is now high on the radar for Enterprise Cloud, mostly threatening VMware’s strong leadership in that domain. Most of the main contenders have embraced a strong OpenStack strategy, including Red Hat, Ubuntu, Suse, HP, IBM and Cisco, with Cisco taking a surprising leading position over IBM and HP, according to a recent Forrester survey.

VMware’s response of embracing OpenStack is still questionable, as the transition to OpenStack is not only a matter of technical integration but also involves a big shift in the value chain. Enterprises are increasingly reluctant to pay high costs for hypervisor licenses, which are to date the main revenue channel in the VMware pie.

4. Native OpenStack alternatives will disrupt many of the existing cloud solutions

The rapid adoption of OpenStack will also disrupt many of the existing cloud solutions that were built in a pre-OpenStack world. New solutions that take a more native-to-OpenStack approach will pop up and replace many of the current ones. A good example of this is Project Solum, which aims to provide a native alternative to existing PaaS solutions, such as CloudFoundry and OpenShift, as I outlined in this recent post. I expect that we will see more of these disruptions expanding to other frameworks in 2014.

5. Orchestration & Automation will be the next big thing in 2014

Having said all that, the remaining challenge for enterprises is to break the IT bottleneck. This bottleneck is created by IT-centric decision-making processes, a.k.a. the “IaaS First Approach,” in which IT focuses on building a private cloud infrastructure first, a process that takes much longer than anticipated when compared with a more business/application-centric approach. One way to overcome that challenge is to abstract the infrastructure and allow other departments within the organization to take a parallel path toward the cloud, while ensuring future compatibility with new developments in the IT-led infrastructure.
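
To make the abstraction idea concrete, here is a minimal, hypothetical Python sketch: application teams code against a provider-neutral interface, so workloads written today keep working whether IT eventually lands on a public cloud or an in-house OpenStack deployment. The class and method names are illustrative only, not any specific product’s API.

```python
# A minimal sketch of infrastructure abstraction (illustrative names only,
# not a real product API): application teams code against the
# ComputeProvider interface, so workloads stay portable whether IT lands
# on a public cloud or an in-house OpenStack deployment.
from abc import ABC, abstractmethod


class ComputeProvider(ABC):
    @abstractmethod
    def provision(self, name: str, cpus: int, memory_gb: int) -> str:
        """Create a server and return its identifier."""


class PublicCloudProvider(ComputeProvider):
    def provision(self, name, cpus, memory_gb):
        # A real implementation would call the public cloud's SDK here.
        return "public-%s-%dcpu-%dgb" % (name, cpus, memory_gb)


class PrivateOpenStackProvider(ComputeProvider):
    def provision(self, name, cpus, memory_gb):
        # A real implementation would call the OpenStack compute API here.
        return "openstack-%s-%dcpu-%dgb" % (name, cpus, memory_gb)


def deploy_web_tier(provider):
    # The application tier is described once; the provider decides where it runs.
    return [provider.provision("web-%d" % i, cpus=2, memory_gb=4) for i in range(3)]


print(deploy_web_tier(PublicCloudProvider()))
print(deploy_web_tier(PrivateOpenStackProvider()))
```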

DevOps has definitely been a key example of a business-led initiative that determines the speed of innovation and, thus, the competitiveness of many organizations. The move to DevOps forces many organizations to go through both cultural and technological changes to bring the business and applications into closer alignment, not just in goals, but in processes and tools as well.

Enterprises with legacy environments and existing customers tend to take a two-step approach. They first tackle continuous delivery: automating the packaging of their software deliverables and keeping tighter control over new production rollouts. Once they feel comfortable with their process and environment, they automate the entire deployment process into production.

Configuration management, orchestration and workflow automation are becoming key enablers of the enterprise transition to cloud, and will gain much attention in 2014 and 2015. Again, Amazon has already recognized that need, introducing into this space a new offering called OpsWorks. It is only a matter of time until we see similar offerings embedded with other cloud providers in both public and private clouds.
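
As a small illustration of what consuming such an offering programmatically can look like, here is a hedged sketch that kicks off an OpsWorks deployment through boto3, the AWS SDK for Python; the stack and app IDs are placeholders you would take from your own account.

```python
# Sketch: triggering an AWS OpsWorks deployment from code, so a rollout
# becomes a repeatable, scriptable action rather than a manual console task.
# The stack/app IDs are placeholders; AWS credentials must be configured.
import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")

response = opsworks.create_deployment(
    StackId="REPLACE-WITH-YOUR-STACK-ID",
    AppId="REPLACE-WITH-YOUR-APP-ID",
    Command={"Name": "deploy"},  # built-in OpsWorks deploy command
    Comment="Automated rollout kicked off by the delivery pipeline",
)
print("Started deployment:", response["DeploymentId"])
```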

As the focus moves to orchestration, with greater emphasis on standardization of deployments and packages, DSLs and APIs become equally important. Existing standards, such as TOSCA and CAMP, will undergo massive modifications to make them simpler to use and a better fit for cloud environments.
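
To make the DSL idea concrete, here is a toy, TOSCA-inspired (and deliberately not spec-conformant) sketch in Python: the topology is declared as data, and the engine derives a valid deployment order from the declared dependencies.

```python
# Sketch of the core idea behind orchestration DSLs such as TOSCA:
# describe the application topology declaratively, then let the engine
# derive a valid deployment order. This format is illustrative only,
# not the actual TOSCA or CAMP schema.
topology = {
    "db": {"type": "mysql", "depends_on": []},
    "app": {"type": "tomcat", "depends_on": ["db"]},
    "lb": {"type": "haproxy", "depends_on": ["app"]},
}


def deployment_order(nodes):
    """Topologically sort nodes so dependencies deploy first."""
    order, resolved = [], set()
    while len(order) < len(nodes):
        progressed = False
        for name, spec in nodes.items():
            if name not in resolved and all(d in resolved for d in spec["depends_on"]):
                order.append(name)
                resolved.add(name)
                progressed = True
        if not progressed:
            raise ValueError("cycle detected in topology")
    return order


for node in deployment_order(topology):
    print("deploying %s (%s)" % (node, topology[node]["type"]))
# -> deploys db, then app, then lb
```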

Big Data Predictions

There are many advancements happening within the Big Data/NoSQL domain that I’m not going to touch on. I will focus on two main areas that are close to my work.

6. 2014 will see a major increase of Big Data in the Cloud

Cloud infrastructure is closing the gap with the existing data center, as we see the emergence of support for bare metal, high-CPU, high-memory and flash-disk instances. Many cloud providers already offer Big Data as a Service, such as Elastic MapReduce and Redshift, and are continuously expanding their offerings in that regard.
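
As an illustration of how low the barrier has become, the hedged sketch below uses boto3, the AWS SDK for Python, to rent a transient Elastic MapReduce cluster that runs a single job and then terminates; the instance types, IAM roles, release label and script path are all placeholders.

```python
# Sketch: renting a Hadoop/Spark cluster for a single job with AWS EMR.
# All names, paths and sizes are placeholders; requires AWS credentials
# and the default EMR IAM roles already created in the account.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.run_job_flow(
    Name="nightly-analytics",      # placeholder job name
    ReleaseLabel="emr-5.36.0",     # pick a release available in your region
    Instances={
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 3,
        "KeepJobFlowAliveWhenNoSteps": False,  # terminate after the step
    },
    Steps=[{
        "Name": "wordcount",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "s3://my-bucket/jobs/wordcount.py"],
        },
    }],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print("Cluster started:", response["JobFlowId"])
```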

This advancement in cloud infrastructure removes almost all of the technical barriers to running I/O-intensive workloads, such as Big Data analytics, in the cloud. I expect that in 2014, running Big Data analytics in the cloud will become the first choice for any new project, while the use of Big Data in non-cloud environments will shrink to niche use cases with extreme regulatory and security constraints.

7. In-Memory Data backed by flash-disk will become a popular choice for Real-Time Big Data analytics

Flash disk pricing changes the economics behind the cost/performance ratio of disk-based solutions, making it possible to achieve the performance of an in-memory solution at a price closer to that of a disk-based one. So far, attempts have been made to use flash disks as a fast disk alternative to magnetic drives. This path inherits many of the limitations of a disk-based drive and, therefore, doesn’t capture the full potential of flash disks, which, like RAM, can provide highly parallel access to data.

At the same time, memory-based solutions like In-Memory Database or In-Memory Data Grid cover a small niche in the Big Data ecosystem, mostly due to the high cost/performance ratio.

I believe that the combination of flash drives at the back end and In-Memory data grids and databases at the front end changes that dynamic, and will make memory-based solutions backed by flash disk attractive to a much broader set of use cases, specifically real-time analytics of Big Data.
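
A toy version of that tiering idea, greatly simplified: keep a bounded hot set in RAM and a durable copy on a flash-backed store, so reads are served at memory speed when possible and at flash speed otherwise. Real data grids are far more sophisticated; this sketch only illustrates the principle.

```python
# Toy sketch of a RAM-front, flash-backed key/value store: a bounded
# in-memory LRU tier absorbs hot reads, while every write also lands in
# a persistent store standing in for the flash/SSD tier.
import shelve
from collections import OrderedDict


class TieredStore:
    def __init__(self, path, ram_capacity=1024):
        self.ram = OrderedDict()        # hot tier (RAM), LRU-ordered
        self.flash = shelve.open(path)  # cold tier (flash/SSD-backed file)
        self.capacity = ram_capacity

    def put(self, key, value):
        self.flash[key] = value          # durable copy on "flash"
        self.ram[key] = value
        self.ram.move_to_end(key)
        if len(self.ram) > self.capacity:
            self.ram.popitem(last=False)  # evict least-recently-used entry

    def get(self, key):
        if key in self.ram:              # memory-speed hit
            self.ram.move_to_end(key)
            return self.ram[key]
        value = self.flash[key]          # flash-speed miss
        self.put(key, value)             # promote back into the hot tier
        return value

    def close(self):
        self.flash.close()


store = TieredStore("/tmp/flash_tier.db", ram_capacity=2)
store.put("user:1", {"name": "alice"})
print(store.get("user:1"))  # served from RAM
store.close()
```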

In some cases, memory-based solutions are also integrated with existing Big Data frameworks and, thus, provide seamless performance acceleration. Good examples are GigaSpaces’ integration with Storm and GridGain’s integration with Hadoop.

8. Real-time analytics turns mainstream

The most interesting indication that real-time analytics is becoming mainstream is Amazon’s support for real-time analytics, and many of the existing analytics solutions now provide real-time capabilities as built-in parts of their reports. Another good example is the Real-Time view in Google Analytics.
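
The mechanics behind most real-time views are surprisingly small: maintain aggregates over a sliding window as events arrive, instead of querying a batch store later. A simplified sketch:

```python
# Sketch of the sliding-window aggregation at the heart of "real time"
# dashboards: count events per key over the trailing N seconds, updating
# on every arrival rather than in a nightly batch.
import time
from collections import defaultdict, deque


class SlidingWindowCounter:
    """Count events per key over the trailing N seconds."""

    def __init__(self, window_seconds=60):
        self.window = window_seconds
        self.events = defaultdict(deque)  # key -> timestamps inside the window

    def record(self, key, now=None):
        now = time.time() if now is None else now
        self.events[key].append(now)
        self._expire(key, now)

    def count(self, key, now=None):
        now = time.time() if now is None else now
        self._expire(key, now)
        return len(self.events[key])

    def _expire(self, key, now):
        q = self.events[key]
        while q and q[0] < now - self.window:
            q.popleft()


counter = SlidingWindowCounter(window_seconds=60)
counter.record("page:/pricing")
print(counter.count("page:/pricing"))  # visits seen in the last minute
```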

New disruptive forces in Cloud worth watching in 2014

Disruptive technologies change the market landscape in ways that are, by their very nature, difficult to anticipate. Rather than predicting their impact, I felt it would be simpler to list them.

  • Networking - The networking segment of IT has been experiencing major disruption of late, with the move to Software Defined Networking, the OpenStack Neutron project, and Network Function Virtualization. These developments will change networking not only from a technology perspective; they are also driving a completely different business model, based on utilization or subscription.
  • Linux Containers - Linux containers are gaining interest both as lightweight software packaging and as lightweight VMs. The most common use case for Linux containers so far has been as the underlying container technology for PaaS. With the introduction of new projects such as Docker, which makes Linux containers easy to use, we will see wider and more pervasive use of containers: as high-performance virtualization, as packaging tools in continuous deployment scenarios, and more.
  • Bare Metal Cloud - Bare metal cloud has been a small niche in the cloud space, mostly because it is usually offered in a static configuration setup. Emerging bare metal clouds bring the same degree of elasticity to bare metal devices: with OpenStack, for example, you can spawn a new bare metal device just as you would provision any other VM (see the sketch after this list). This development would remove one of the last remaining barriers to bringing mission-critical applications to the cloud.
  • OpenStack as an Innovation Accelerator - Many of the disruptive technologies listed above are not that new and, to a certain degree, have existed for years, as in the case of Linux containers. Often, a disruptive force needs to gain a certain critical mass before it can break into massive adoption. OpenStack creates an ecosystem in which many users can integrate new technologies in a way that end users can consume immediately, and that plays a major role in accelerating the adoption of many of these disruptive technologies.
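
To illustrate the bare metal point above, here is a hedged sketch using python-novaclient; client construction and authentication details vary across novaclient and Keystone versions, and the credentials, image and flavor values are placeholders. The point is that a bare metal node is requested through the same servers API as a VM, just with a bare metal flavor.

```python
# Hedged sketch (python-novaclient; exact client construction varies by
# version). A bare metal node is requested through the same "servers"
# API as a VM, just with a flavor mapped to physical hardware.
# Credentials, endpoint, image and flavor IDs are placeholders.
from novaclient import client as nova_client

nova = nova_client.Client(
    "2",                            # compute API version
    "admin", "secret",              # placeholder username / password
    "demo-project",                 # placeholder tenant/project
    "http://controller:5000/v2.0",  # placeholder Keystone endpoint
)

server = nova.servers.create(
    name="db-node-01",
    image="IMAGE-UUID-PLACEHOLDER",            # any bootable image ID
    flavor="BAREMETAL-FLAVOR-ID-PLACEHOLDER",  # flavor backed by physical hardware
)
print("Provisioning started:", server.id)
```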

Where do we go from here?

There are many individual technologies and trends that have a disruptive force behind them. Having said that, I think it is the intersection of those technologies that holds the most promising potential. Here are a few examples:

  • Big Data and Cloud Application Orchestration - Big Data analytics of operational information could serve as the new “brain” behind a new class of orchestration engine, one that combines artificial-intelligence decision-making, based on trends and historical analysis, with automatic handling of complex failure and scaling scenarios.
  • Putting Networks and Applications Together - Putting the network and applications together holds a lot of promise for the way we scale applications across regions and multiple sites, as well as for how we control application SLAs in a shared environment: for example, giving priority to customer-facing services over batch analytics, or optimizing network routing based on the locality of the data.

(via: Nati Shalom’s Blog)


We Finally Cracked The 10K Problem – This Time For Managing Servers With 2000x Servers Managed Per Sysadmin

In 1999 Dan Kegel issued a big hairy audacious challenge to web servers:

It’s time for web servers to handle ten thousand clients simultaneously, don’t you think? After all, the web is a big place now.

This became known as the C10K problem. Engineers solved the C10K scalability problems by fixing OS kernels and moving away from threaded servers like Apache to event-driven servers like Nginx and Node.
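
For readers who haven’t seen the two models side by side, here is the event-driven pattern in miniature, using Python’s standard selectors module: one thread and one event loop service many sockets, instead of one thread per client.

```python
# Minimal event-driven echo server: a single thread multiplexes many
# connections with an OS-level readiness API (epoll/kqueue via selectors),
# the same pattern Nginx and Node use instead of a thread per client.
import selectors
import socket

sel = selectors.DefaultSelector()


def accept(server_sock):
    conn, _addr = server_sock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, handle)


def handle(conn):
    data = conn.recv(4096)
    if data:
        conn.sendall(data)  # echo back (a production server would buffer writes)
    else:                   # client closed the connection
        sel.unregister(conn)
        conn.close()


server = socket.socket()
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 8080))
server.listen(1024)
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

while True:                      # the event loop
    for key, _mask in sel.select():
        key.data(key.fileobj)    # dispatch to accept() or handle()
```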

Today we are considering an even bigger goal, how to support 10 Million Concurrent Connections, which requires even more radical techniques.

No similar challenge was issued for managing servers in a datacenter, but according to Dave Neary of Red Hat, speaking on a recent FLOSS Weekly episode, we have passed the 10K barrier for server management, with 10,000 or more servers managed per sysadmin.

Should We Let This Milestone Pass Without Mention?

Absolutely not! It’s a stunning accomplishment, representing a 200x-2000x increase in productivity. Dave said he remembered that in the 1990s it took one sysadmin to manage 4 or 5 Windows servers, while a Linux sysadmin could manage 50 to 60 servers.

Now companies are managing over 10,000 servers per sysadmin. This huge change is rooted both in IaaS, which treats the datacenter as an elastic, programmable resource and divorces operations from infrastructure deployment, and in the DevOps revolution, with its emphasis on tools, culture, automation, metrics, sharing of resources, and infrastructure as code.
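
Mechanically, infrastructure as code at that scale usually means declaring desired state once and letting an idempotent convergence loop enforce it across the fleet, so admin effort stops scaling with server count. A drastically simplified sketch of the idea (the server dicts stand in for real remote hosts):

```python
# Drastically simplified sketch of desired-state convergence, the idea
# behind tools like Puppet, Chef and Ansible that makes 10,000 servers
# per sysadmin possible: state is declared once, and an idempotent loop
# reconciles every server against it.
DESIRED_STATE = {
    "packages": {"nginx", "ntp"},
    "services_running": {"nginx"},
}


def converge(server):
    """Bring one server to the desired state; return the actions taken."""
    actions = []
    for pkg in DESIRED_STATE["packages"] - server["packages"]:
        server["packages"].add(pkg)         # stand-in for: apt-get install
        actions.append("install " + pkg)
    for svc in DESIRED_STATE["services_running"] - server["running"]:
        server["running"].add(svc)          # stand-in for: service start
        actions.append("start " + svc)
    return actions


fleet = [
    {"name": "web-%d" % i, "packages": {"ntp"}, "running": set()}
    for i in range(10000)
]
drift = sum(1 for srv in fleet if converge(srv))  # idempotent: safe to re-run
print("corrected drift on %d of %d servers" % (drift, len(fleet)))
```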

What Will It Take To Manage 10 Million Servers Per Sysadmin?

Who might know? Google of course.

As James Hamilton says, Counting Servers is Hard, but Microsoft says they have 1 million servers, and Google is planning for 10 million servers, so it may take a while before we can get to 10 million servers per sysadmin.

But when it does happen, the base will be built on the same foundations: elastic, programmable infrastructure and ever-smarter layers of automation.

At a high level, the approaches to scaling to 10 million connections per server and to 10 million machines per sysadmin are the same: scalability is specialization.

But at a lower level they differ completely. Scaling to 10 million connections is about removing layers and doing the work yourself. Scaling to 10 million servers is all about putting the intelligence into smarter and smarter layers, much like how the human body utilizes trillions of individual components mediated by many autonomous systems, all directed by a parallelized and decentralized brain.

(source: HighScalability.com)