Is Red Hat the new Oracle?

Locking in? Trying to replicate its Linux success in the cloud, Red Hat said it will not support Red Hat Linux customers who run a non-Red Hat OpenStack distribution.

Everyone knows that Red Hat, the king of enterprise Linux, is banking on OpenStack as its next big opportunity. And most figured it would be aggressive in competing with rival OpenStack distributions — from Canonical, HP, Suse, and others.

What we didn’t necessarily know until the Wall Street Journal (registration required) reported it Tuesday night is that Red Hat, which makes its money selling support and maintenance for its open-source products, would refuse to support users of Red Hat Enterprise Linux who also run non-Red Hat versions of OpenStack.

Since Red Hat accounts for more than 60 percent of the paid enterprise Linux market, that policy could stem adoption of rival OpenStack distributions. It could also irritate customers — many of whom don’t like the specter of vendor lock-in.

In this policy, which the company confirmed to the Journal, Red Hat seems to have ripped a page out of Oracle’s playbook. The database giant, as it expanded into other software areas, decided that it would not support customers running non-Oracle virtualization, non-Oracle Linux, etc., unless the customer could prove that its issue originated in the Oracle part of the stack. That went over like a lead balloon with users. Since Oracle announced its own OpenStack distribution this week, and it also fields its own Linux distribution, the stage is set for dueling single-source OpenStack implementations going forward. Needless to say, that could ding OpenStack’s promise of no vendor lock-in.

The Journal also reported that Red Hat employees were told to stop working with Mirantis, an OpenStack systems integrator that late last year started offering its own OpenStack distribution. Red Hat’s president of products and technologies, Paul Cormier, told the paper that Red Hat would not bring a competitor into its accounts.

Reached late Tuesday for comment, a Red Hat spokeswoman noted that OpenStack “is not simply a layered product on top of Linux — [RHEL] is tightly integrated into and part of OpenStack. It is much more complex and intertwined than, say, Microsoft choosing to run PowerPoint on iOS.” I will update this story with additional Red Hat comment when it becomes available. Mirantis could not be reached for comment.

This news, which came out of the OpenStack Summit in Atlanta, may unsettle corporate customers wary of committing too much of their IT budget to any one vendor. But it’s hardly surprising. All of these vendors, while pledging the open-source goodness of OpenStack, also want to expand their own reach in customers’ shops. OpenStack competitors have been wary of Red Hat for quite some time, expecting it to try to replicate its enterprise Linux dominance in the cloud with OpenStack.

To be sure, this problem is somewhat theoretical for now, as IDC analyst Al Gillen pointed out. “From a practical standpoint, this is a non-issue today since there are precious few OpenStack clouds in production use,” he said via email. He agreed that this smacks of Oracle’s support policies but attributed Red Hat’s action more to the “immaturity and fast release of OpenStack more than anything else.”

Mark Shuttleworth, founder of Canonical, which competes with Red Hat both in Linux and in OpenStack, recently acknowledged that the battle front has moved from the single-node Linux server realm, where Red Hat won, to multi-node cloud deployments, where Red Hat’s enterprise software licensing mentality could put off customers. This new policy will test how receptive big customers are to such enterprise sales tactics in the age of cloud.

(Via Gigaom.com)

Eight Cloud and Big Data Predictions for 2014

Prediction has always been a fun exercise. It forces you to take a step back from your day-to-day activities, look at the market “crystal ball” and figure out what the future looks like. This year I found this exercise to be particularly difficult, as the amount of new innovation and the number of conflicting trends taking place at the same time can be very confusing.

After spending quite some time reading market analyses from different sources (see references at the end) and combining them with all the things I’ve seen and heard throughout the year, I came to the following observations on how 2014 will look for Cloud and Big Data. I’m happy to exchange thoughts in that regard and learn how you envision 2014 will look.

 

Public and Private Cloud in 2014

1. Amazon continues to distance itself while Google and Microsoft are catching up

Amazon continues to lead in public cloud and to distance itself from the rest of the market. This year it reached five times the size of the other cloud vendors combined, as noted in a recent Gartner report. Google and Microsoft are slowly closing the gap as the closest alternatives, but at a much slower pace than one would expect given both companies’ significant investments in this area.


2. 2014 will be the year of Enterprise Clouds

Traditional enterprise players, such as IBM, HP, Cisco, and Red Hat, continue to fight for the remaining share of the market, mostly around enterprise adoption. Yet enterprises have been slower to embrace and execute on a private cloud strategy. Part of the reason for the slow adoption is the gap between the solutions provided by most of the usual contenders, who are still competing on selling an end-to-end story, and the reality that most enterprises are looking for an open and hybrid cloud strategy, especially given that there is no clear winner.

3. OpenStack will be the most popular choice for Enterprise Clouds in 2014

OpenStack is now high on the radar for Enterprise Cloud, mostly threatening VMware’s strong leadership in that domain. Most of the main contenders have embraced a strong OpenStack strategy, including Red Hat, Ubuntu, Suse, HP, IBM, and Cisco, with Cisco taking a surprising lead over IBM and HP, according to a recent Forrester survey.
VMware’s embrace of OpenStack is still questionable, as the transition to OpenStack is not only about technical integration; it also involves a big shift in the value chain. Enterprises are less keen on paying a high cost for hypervisor licenses, which are to date the main revenue channel in the VMware pie.

4. Native OpenStack alternatives will disrupt many of the existing cloud solutions

The rapid adoption of OpenStack will also disrupt many of the existing cloud solutions that were built in a pre-OpenStack world. New solutions that take a more native-to-OpenStack approach will pop up and replace many of the current ones. A good example of this is Project Solum, which aims to provide a native alternative to existing PaaS solutions, such as CloudFoundry and OpenShift, as I outlined in this recent post. I expect that we will see more of these disruptions expanding to other frameworks in 2014.

5. Orchestration & Automation will be the next big thing in 2014

Having said all that, the remaining challenge for enterprises is to break the IT bottleneck. This bottleneck is created by IT-centric decision-making processes, a.k.a. the “IaaS First approach,” in which IT is focused on building a private cloud infrastructure, a process that takes much longer than anticipated compared with a more business/application-centric approach. One way to overcome that challenge is to abstract the infrastructure and allow other departments within the organization to take a parallel path toward the cloud, while ensuring future compatibility with new developments in the IT-led infrastructure.

DevOps has definitely been a key example of a business-led initiative that determines the speed of innovation and, thus, the competitiveness of many organizations. The move to DevOps forces many organizations to go through both cultural and technological changes, bringing the business and application sides more closely into alignment, not just in goals, but in processes and tools as well.

Enterprises with legacy environments and customers take a two-step approach. They first tackle continuous delivery, automating the packaging of their software deliverables and keeping tighter control over new production rollouts. Once enterprises feel comfortable with their process and environment, they automate the entire deployment process into production.

Configuration management, orchestration, and workflow automation are becoming key enablers of the enterprise transition to cloud, and they will gain much attention in 2014 and 2015. Again, Amazon has already recognized that need, introducing into the space a new offering called OpsWorks. It is only a matter of time until we see similar offerings embedded with other cloud providers in both public and private clouds.

As the focus moves to orchestration, with greater emphasis on standardization of deployments and packages, DSLs and APIs become equally important. Existing standards, such as TOSCA and CAMP, will undergo massive modifications to make them simpler to use and a better fit for the cloud environment.

Big Data Predictions

There are many advancements happening within the Big Data/NoSQL domain that I’m not going to touch on. I will focus on two main areas that are close to my work.

6. 2014 will see a major increase in Big Data in the Cloud

Cloud infrastructure is closing the gap with the existing data center, as we see the emergence of support for bare metal, high-CPU, high-memory, and flash-disk instances. Many cloud providers already offer Big Data as a Service, such as Elastic MapReduce and Redshift, and are continuously expanding their offerings in that regard.

This advancement in cloud infrastructure removes almost all of the technical barriers to running I/O-intensive workloads, such as Big Data analytics, in the cloud. I expect that in 2014, running Big Data analytics in the cloud will become the first choice for any new project, while the use of Big Data in non-cloud environments will be limited to niche use cases with extreme regulatory and security constraints.

7. In-Memory Data backed by flash-disk will become a popular choice for Real-Time Big Data analytics
Flash disk pricing changes the economics behind the cost/performance ratio of disk-based solutions, making it possible to achieve the performance of an In-Memory-based solution at a price closer to a disk-based solution. So far, attempts have been made to use flash disks as a fast disk alternative to magnetic drives. This path inherits many of the limitations of a disk-based drive and, therefore, doesn’t capture the full potential of flash disk which, like RAM, can provide highly parallel access to data.

At the same time, memory-based solutions like In-Memory Database or In-Memory Data Grid cover a small niche in the Big Data ecosystem, mostly due to the high cost/performance ratio.

I believe that the combination of flash drives with In-Memory data grids and databases at the front end changes that dynamic, making memory-based solutions backed by flash disk attractive well beyond that niche, specifically for real-time analytics of Big Data.

In some cases, the combination of memory-based solutions is also integrated with existing Big Data frameworks and, thus, provides seamless performance acceleration. A good example is GigaSpaces’ integration with Storm and GridGain’s integration with Hadoop.

8. Real-time analytics turns mainstream

The most telling indication that real-time analytics is turning mainstream is Amazon’s support for it, with many of the existing analytics solutions now providing real-time capabilities as built-in parts of their reports. Another good example is Google Analytics Real-Time View.

New disruptive forces in Cloud worth watching in 2014
By their very nature, disruptive technologies change the market landscape in ways that are difficult to anticipate. Rather than predicting their impact, I felt it would be simpler to list them.

  • Networking – The networking segment of IT is experiencing a major disruption as of late with the move to Software Defined Networking, the OpenStack Neutron project, and Network Function Virtualization. These developments will change networking not only from a technology perspective; they are also driving a completely different business model based on utilization or subscription.
  • Linux Containers – Linux containers are gaining interest both as lightweight software packaging and as an alternative to VMs. The most common use case for Linux containers has been as the underlying container for PaaS. With the introduction of new projects such as Docker, which makes Linux containers easy to use, we will see wider and more pervasive use of containers as high-performance virtualization, as packaging tools in continuous-deployment scenarios, etc.
  • Bare Metal Cloud - Bare-metal cloud has been a small niche in the cloud space, mostly because it is usually offered in a static configuration setup. New developments bring the same degree of elasticity to bare-metal devices: with OpenStack, for example, you can spawn a new bare-metal machine just as you would provision any other VM (see the sketch after this list). This development removes one of the last remaining barriers to bringing mission-critical applications to the cloud.
  • OpenStack as an Innovation Accelerator – Many of the disruptive technologies listed above are not that new and, to a certain degree, have existed for years, as in the case of Linux containers. Often, a disruptive force needs to gain a certain critical mass before it can break into massive adoption. OpenStack creates an ecosystem through which many users can integrate new technologies in a way that end users can consume immediately, and that plays a major role in accelerating the adoption of many of those disruptive technologies.
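
To make the bare-metal item above concrete, here is a minimal sketch using python-novaclient; the endpoint, credentials, image, and flavor names are hypothetical placeholders, and the exact provisioning flow depends on the deployment (e.g., an Ironic-backed bare-metal driver). The point is simply that a physical machine is requested through the same servers API as a VM.

from novaclient import client

# Hypothetical credentials and endpoint for an OpenStack deployment.
nova = client.Client("2", "admin", "secret", "demo",
                     "http://controller:5000/v2.0")

# A bare-metal flavor is requested like any VM flavor; the scheduler
# maps it to a physical node instead of a hypervisor guest.
server = nova.servers.create(
    name="bm-node-1",
    image=nova.images.find(name="ubuntu-baremetal"),
    flavor=nova.flavors.find(name="baremetal.general"),
)
print(server.status)  # BUILD, then ACTIVE once the node is provisioned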

Where do we go from here?

There are many individual technologies and trends that have a disruptive force behind them. Having said that, I think it is the intersection between those technologies that holds the most promising potential. Here are a few examples:

  • Big Data and Cloud Application Orchestration - Big Data analytics of operational information could serve as the new “brain” behind a new class of orchestration engines that combine artificial-intelligence decision-making, based on trends and historical analysis, with the ability to handle complex failure and scaling scenarios automatically.
  • Putting Network and Applications together – Putting network and applications together holds a lot of promise for the way we scale applications across regions and multiple sites, as well as how we control application SLAs in a shared environment: for example, giving priority to customer-facing services over batch analytics, or optimizing network routing based on the locality of the data.

(via: Nati Shalom’s Blog)

 

We Finally Cracked The 10K Problem – This Time For Managing Servers With 2000x Servers Managed Per Sysadmin

In 1999 Dan Kegel issued a big hairy audacious challenge to web servers:

It’s time for web servers to handle ten thousand clients simultaneously, don’t you think? After all, the web is a big place now.

This became known as the C10K problem. Engineers solved the C10K scalability problems by fixing OS kernels and moving away from threaded servers like Apache to event-driven servers like Nginx and Node.
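
As a minimal sketch of that event-driven model (illustrative Python, not Nginx or Node internals), a single thread waits on an OS readiness API and services whichever of its many sockets are ready, instead of parking one thread per connection:

import selectors
import socket

sel = selectors.DefaultSelector()

def accept(server_sock):
    conn, _ = server_sock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, echo)

def echo(conn):
    data = conn.recv(4096)
    if data:
        conn.send(data)  # echo back; a web server would parse HTTP here
    else:
        sel.unregister(conn)
        conn.close()

server = socket.socket()
server.bind(("0.0.0.0", 8080))
server.listen(1024)
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

while True:  # the event loop: one thread, many connections
    for key, _ in sel.select():
        key.data(key.fileobj)  # dispatch to the registered callback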

Today we are considering an even bigger goal: how to support 10 Million Concurrent Connections, which requires even more radical techniques.

No similar challenge was issued for managing servers in a datacenter, but according to Dave Neary from Red Hat, in a recent FLOSS Weekly episode, we have passed the 10K barrier for server management, with 10,000 or more servers managed per sysadmin.

Should We Let This Milestone Pass Without Mention?

Absolutely not! It’s a stunning accomplishment, representing a 200x-2000x increase in productivity. Dave said he remembered that in the 1990s it took one sysadmin to manage 4 or 5 Windows servers, while a Linux sysadmin could manage 50 to 60 servers.

Now companies are managing over 10,000 servers per sysadmin. This huge change is rooted both in IaaS, which treats the datacenter as an elastic, programmable resource and divorces operations from infrastructure deployment, and in the DevOps revolution, with its emphasis on tools, culture, automation, metrics, sharing of resources, and infrastructure as code.

What Will It Take To Manage 10 Million Servers Per Sysadmin?

Who might know? Google of course.

As James Hamilton says, Counting Servers is Hard, but Microsoft says it has 1 million servers, and Google is planning for 10 million, so it may take a while before we can get to 10 million servers per sysadmin.

But when it does happen, the foundation will be built on the following:

At a high level, the approach to scaling to 10 million connections per server and the approach to managing 10 million machines per sysadmin are the same: scalability is specialization.

But at a lower level they differ completely. Scaling to 10 million connections is about removing layers and doing the work yourself. Scaling to 10 million servers is all about putting the intelligence into smarter and smarter layers, much like the human body utilizes trillions of individual components mediated by many autonomous systems, all directed by a parallelized and decentralized brain.

(source: HighScalability.com)

Google On Latency Tolerant Systems: Making A Predictable Whole Out Of Unpredictable Parts

In Taming The Long Latency Tail we covered Luiz Barroso’s exploration of the long-tail latency problems (some operations are really slow) generated by large fanout architectures (a request is composed of potentially thousands of other requests). You may have noticed there weren’t a lot of solutions. That’s where a talk I attended, Achieving Rapid Response Times in Large Online Services (slide deck), by Jeff Dean, also of Google, comes in:

In this talk, I’ll describe a collection of techniques and practices lowering response times in large distributed systems whose components run on shared clusters of machines, where pieces of these systems are subject to interference by other tasks, and where unpredictable latency hiccups are the norm, not the exception.

The goal is to use software techniques to reduce variability given the increasing variability in underlying hardware, the need to handle dynamic workloads on a shared infrastructure, and the need to use large fanout architectures to operate at scale.

Two Forces Motivate Google’s Work On Latency Tolerance:

  • Large fanout architectures. Satisfying a search request can involve thousands of machines. The core idea is that small performance hiccups on a few machines cause higher overall latencies, and the more machines involved, the worse the tail latency. The statistics of the situation are highly unintuitive. With a server that has a 1 ms average response time and a one-second 99th-percentile latency, only 1% of requests will take over a second. But if a request has to access 100 such servers, 63% of all requests will take over a second (see the sketch after this list).
  • Resource sharing. One of the surprising aspects of Google’s architecture is that it uses most of its machines as generalized job execution engines. Not even Google has infinite resources, so it shares resources. This also fits with its “the datacenter is the computer” philosophy. Every node runs Linux and a scheduling daemon that schedules work across the cluster. Jobs of various types (CPU intensive, MapReduce, etc.) may be running on the same machine that runs a Bigtable tablet server. Jobs are not isolated on servers; jobs impact each other, and those impacts must be managed.
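
The arithmetic behind those fanout percentages is easy to verify; a quick sketch:

# If 99% of single-server responses are fast, a request that fans out
# to n servers is slow whenever at least one of them is slow.
def fraction_slow(n, p_fast=0.99):
    return 1 - p_fast ** n

print(f"{fraction_slow(1):.0%}")    # 1% for a single server
print(f"{fraction_slow(100):.0%}")  # ~63% for a 100-server fanout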

Others try to reduce variability by overprovisioning or by running only like workloads on machines. Google makes use of all its resources, but sharing disk, CPU, network, etc., increases variability. Jobs will burst CPU, memory, or network usage, so running a combination of background activities on machines, especially with large fanouts, leads to unpredictable results.

You may think complete and total control would solve the problem, but the more you tighten your grip, the more star systems will slip through your fingers. At a small scale careful control can work, but the disadvantages of large fanout architectures and shared resource execution models must be countered with good design.

Fault Tolerant Vs Latency Tolerant Systems

Dean makes a fascinating analogy between creating fault-tolerant and latency-tolerant systems. Fault-tolerant systems make a reliable whole out of unreliable parts by provisioning extra resources. In the same way, latency-tolerant systems can make a predictable whole out of unpredictable parts. The difference is in the time scales:

  • variability: 1000s of disruptions/sec, scale of milliseconds
  • faults: 10s of failures per day, scale of tens of seconds

Tolerating variability requires having a fine trigger so the response can be immediate. Your scheduling system must be able to make decisions within these real-time time frames.

Mom And Apple Pie Techniques For Managing Latency

These techniques are the “general good engineering practices” for managing latency:

  • Prioritize request queues and network traffic. Do the most important work first.
  • Reduce head-of-line blocking. Break large requests into a sequence of small requests. This time-slices a large request rather than letting it block all other requests.
  • Rate limit activity. Drop or delay traffic that exceeds a specified rate (a minimal sketch follows this list).
  • Defer expensive activity until load is lower.
  • Synchronize disruptions. Regular maintenance and monitoring tasks should not be randomized. Randomization means that at any given time there will be slow machines in a computation. Instead, run such tasks at the same time so the latency hit is only taken during a small window.
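
For the rate-limiting item above, a minimal token-bucket sketch (a generic technique, not Google’s implementation): traffic beyond the configured rate is dropped or deferred by the caller.

import time

class TokenBucket:
    def __init__(self, rate, burst):
        self.rate = rate          # tokens added per second
        self.capacity = burst     # maximum burst size
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, cost=1):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False              # caller drops or delays the request

bucket = TokenBucket(rate=100, burst=20)
if not bucket.allow():
    pass  # shed or queue this request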

Cross Request Adaptation Strategies

The idea behind these strategies is to examine recent behavior and take action to improve latency of future requests within tens of seconds or minutes. The strategies are:

  • Fine-grained dynamic partitioning. Partition large datasets and computations. Keep more than one partition per machine (often 10-100 per machine). Partitions make it easy to assign work to machines and react to changing situations, as partitions can be recovered on failure, replicated, or moved as needed.
  • Load balancing. Load is shed in few-percent increments. Shifting load can be prioritized when the imbalance is severe. Different resource dimensions can be overloaded: memory, disk, or CPU. You can’t just look at CPU load, because machines may all have the same CPU load while differing elsewhere. Collect distributions for each dimension, make histograms, and try to even out work across machines by looking at standard deviations and distributions (see the sketch after this list).

  • Selective partitioning. Make more replicas of heavily used items. This works for static or dynamic content. Important documents or Chinese documents, for example, can be replicated to handle greater query loads.
  • Latency-induced probation. When a server is slow to respond, it could be because of interference from other jobs running on the machine. So make a copy of the partition, move it to another machine, and keep sending shadow copies of the requests to the original server. Keep measuring latency, and when it improves, return the partition to service.
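
A toy sketch of the multi-dimensional balancing idea above, with made-up utilization numbers: CPU looks identical across machines, and only the per-dimension distributions reveal which machines should shed load.

import statistics

# Hypothetical utilization per machine and per resource dimension.
machines = {
    "m1": {"cpu": 0.90, "mem": 0.40, "disk": 0.30},
    "m2": {"cpu": 0.90, "mem": 0.85, "disk": 0.35},
    "m3": {"cpu": 0.90, "mem": 0.45, "disk": 0.95},
}

def outliers(dim, z_threshold=1.0):
    values = [m[dim] for m in machines.values()]
    mean, std = statistics.mean(values), statistics.pstdev(values)
    return [name for name, m in machines.items()
            if std > 0 and (m[dim] - mean) / std > z_threshold]

for dim in ("cpu", "mem", "disk"):
    print(dim, outliers(dim))  # shed a few percent of load from these
# cpu [] / mem ['m2'] / disk ['m3']: CPU alone would hide the hotspots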

My notes on this section are really bad, so that’s all I have for this part of the talk. Hopefully we can fill it in as more details become available.

Within-Request Adaptation Strategies

The idea behind these strategies is to fix a slow request as it is happening. The strategies are:

  • Canary requests
  • Backup requests with cross-server cancellation
  • Tainted results

Backup Requests With Cross-Server Cancellation

Backup requests are the idea of sending requests out to multiple replicas, but in a particular way. Here’s an example of a read operation for a distributed file system client:

  • send request to first replica
  • wait 2 ms, and send to second replica
  • each server cancels the outstanding request on the other replica when it starts the read

A request could wait in a queue stuck behind an expensive query, or a packet could be dropped, so if a reply does not return quickly, other replicas are tried. With requests sitting in multiple queues, the fastest response wins.
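
A minimal asyncio sketch of the pattern, with client-side cancellation standing in for the cross-server cancellation described above; read_replica is a hypothetical coroutine that reads the data from a numbered replica.

import asyncio

async def read_with_backup(read_replica, request, backup_after=0.002):
    # Send to the first replica; if no reply within 2 ms, also send to
    # the second, then take whichever answers first.
    first = asyncio.ensure_future(read_replica(0, request))
    try:
        return await asyncio.wait_for(asyncio.shield(first), backup_after)
    except asyncio.TimeoutError:
        second = asyncio.ensure_future(read_replica(1, request))
        done, pending = await asyncio.wait(
            {first, second}, return_when=asyncio.FIRST_COMPLETED)
        for task in pending:
            task.cancel()  # the real system cancels on the server side
        return done.pop().result()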

Cancellation reduces the frequency at which redundant work occurs.

You might think this is just a lot of extra traffic, but remember, the goal is to squeeze down the 99th-percentile distribution, so the backup requests, even with what seems like a long wait time, really bring down the tail latency and standard deviation. Requests in the 99th percentile take so long that the wait time is short in comparison.

Bigtable, for example, used backup requests after two milliseconds, which dramatically dropped the 99th-percentile latency, by 43 percent on an idle system. With a loaded system the reduction was 38 percent, with only one percent extra disk seeks. Backup requests with cancellations give the same distribution as an unloaded cluster.

With these latency tolerance techniques you are taking a loaded cluster with high variability and making it perform like an unloaded cluster.

There are many variations of this strategy. A backup request could be marked with a lower priority so it won’t block real work. A request could be sent to a third cluster after a longer delay. Wait times could be adjusted so requests are sent when the wait time hits the 90th percentile.

Tainted Results – Proactively Abandon Slow Subsystems

  • Trade off completeness for responsiveness. Under load, noncritical subcomponents can be dropped. It’s better, for example, to search a smaller subset of pages or skip spelling corrections than it is to be slow.
  • Do not cache results. You don’t want users to see tainted results.
  • Set cutoffs dynamically based on recent measurements (a minimal sketch follows this list).
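
For the dynamic-cutoff item, a small sketch (illustrative only): derive the abandonment threshold from a recent latency sample instead of a fixed constant.

import statistics

# Hypothetical recent latency sample, in milliseconds.
recent_ms = [12, 15, 11, 240, 14, 13, 16, 12, 18, 14]

def cutoff_ms(sample, percentile=95):
    # statistics.quantiles with n=100 yields the 1st..99th percentiles.
    return statistics.quantiles(sample, n=100)[percentile - 1]

limit = cutoff_ms(recent_ms)
# Abandon any subsystem slower than `limit` and mark the result tainted.
print(f"cutoff = {limit:.0f} ms")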

(via HighScalability.com)

Google wants to use its power to protect small websites from the terror of DDoS attacks

Distributed denial of service attacks are, in their own special way, a violent form of digital censorship. And Google wants to protect the world’s websites from them.

The company is taking the wraps off of Project Shield, a new distributed denial of service (DDoS) mitigation service that it hopes will “protect free expression online” by keeping websites themselves online.

A favored tactic of Anonymous and just about every other online bad guy, DDoS attacks are the Internet equivalent of packing a women’s shoe store with men asking for hats. Attackers flood a site with unwanted traffic, preventing people who need the site from reaching it and, eventually, forcing the site to shut down.

The tactic has been used to take down sites like Reddit, Bitcoin exchange Mt. Gox, WikiLeaks, and many, many others. If DDoS attacks can KO some of the world’s most popular websites, imagine what they can do to smaller ones.

To show you how bad it can get, here are all the attacks happening right now.

Google says it has already used Project Shield to protect a variety of what it calls “trusted testers,” including one Syrian site that gave people early warnings of missile attacks.

One issue, however, should stand out to anyone who is even remotely concerned about censorship online. As Google points out, Project Shield works by “[enabling] websites to serve their content through Google to be better protected from DDoS attacks.” That’s a problem.

While going with Google may keep websites up during attacks, it also could mean putting some of the world’s most important small sites under the protection of a single global corporate entity. Google may have good intentions here, but that’s one reality that’s going to be tough to explain away.

Read more at Venturebeat.com

Facebook’s vs Twitter’s Approach to Real-Time Analytics

Last year, Twitter and Facebook released new versions of their real-time analytics systems.

In both cases, the motivation was relatively similar: they wanted to provide their customers with better insights into the performance and effectiveness of their marketing activities. Facebook’s measurement includes “likes” and “comments” to monitor interactions. For Twitter, the measurement is based on the effectiveness of a given tweet, typically called “reach”: a measure of the number of followers who were exposed to the tweet (sketched below). Beyond the initial exposure, you often want to measure the number of clicks on that tweet, which indicates how many users saw the tweet and also looked into its content.
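
As a toy sketch of the reach computation (hypothetical data, not Twitter’s pipeline): reach is the number of distinct followers across everyone who tweeted the link.

followers = {
    "alice": {"u1", "u2", "u3"},
    "bob":   {"u2", "u4"},
}

def reach(tweeters):
    audience = set()
    for user in tweeters:
        audience |= followers.get(user, set())  # union deduplicates
    return len(audience)

print(reach(["alice", "bob"]))  # 4 distinct users exposed, not 5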

Facebook Real-Time Analytics Architecture – Logging-Centric Approach:


Facebook relies on the Apache Hadoop framework for both real-time and batch (map/reduce) processing; using the same underlying system simplifies its maintenance.

  • Limited real-time processing — the logging-centric approach basically delegates most of the heavy lifting to the backend system. Performing even a fairly simple correlation beyond simple counting isn’t a trivial task.
  • Real-time is often measured in tens of seconds. In many analytics systems, this order of magnitude is more than enough to express a real-time view of what is going on in your system.
  • It is suitable for simple processing. Because of the logging nature of the Facebook architecture, most of the heavy lifting of processing cannot be done in real-time and is often pushed into the backend system.
  • Low parallelization – Hadoop systems do not give you ways to ensure ordering and consistency based on the data. Because of that, Facebook came up with its Puma service, which collects and feeds data into a centralized service, making it easier to process events in order.
  • Facebook collects user click streams from your Facebook wall through an Ajax listener, which then sends those events back into Facebook’s data centers. The info is stored in the Hadoop File System via Scribe and collected by PTail.
  • Puma aggregates logs in memory, batching them in windows of 1.5 seconds, and stores the information in HBase (a minimal sketch of this style of windowed counting follows this list).
  • The Facebook approach puts a hard limit on the volume of events that the system can handle and has significant implications for the utilization of the overall system.
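
A minimal sketch of the Puma-style windowed counting described above (illustrative only; the flush stands in for the HBase write):

import time
from collections import Counter

WINDOW = 1.5  # seconds, as in Puma

counts = Counter()
window_start = time.monotonic()

def on_event(key):
    global counts, window_start
    now = time.monotonic()
    if now - window_start >= WINDOW:
        flush(counts)          # e.g. increment HBase counters
        counts = Counter()
        window_start = now
    counts[key] += 1           # in-memory aggregation within the window

def flush(batch):
    print("flushing", dict(batch))  # stand-in for the HBase write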

 

Twitter Real-Time Analytics Architecture – Event-Driven Approach:


  • Unlike Facebook, Twitter uses Hadoop for batch processing and Storm for real-time processing. Storm was designed to perform fairly complex aggregation of the data that comes through the stream as it flows into the system, before it is sent back to the batch system for further analysis.
  • Real-time can be measured in milliseconds. While second or millisecond latency is not crucial to the end user, it does have a significant effect on the overall processing time and the level of analysis that we can produce and push through the system, as many of those analyses involve thousands of operations to get to the actual result.
  • It is suitable for complex processing. With Storm, it is possible to perform a large range of complex aggregations while the data flows through the system. A good example is calculating trending words. With the event-driven approach, we can assume that we have the current state and just apply each change to that state to update the list of trending words. In contrast, a batch system has to read the entire set of words, re-calculate, and re-order them for every update. This is why those operations are often done in long batches (see the sketch after this list).
  • Extremely parallel – Asynchronous events are, by definition, easier to parallelize, and Storm was designed for extreme parallelization. Ultimately, this determines the speed and level of utilization that we can get per machine in our system. Looking at the bigger picture, it substantially affects the cost of our system and our ability to perform complex analyses.
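
To make the trending-words contrast concrete, a toy sketch (not Storm or Hadoop code): the event-driven path touches one counter per incoming word, while the batch path re-reads and re-sorts everything on each update.

from collections import Counter
import heapq

counts = Counter()

def on_word(word, k=3):
    counts[word] += 1                        # O(1) incremental update
    return heapq.nlargest(k, counts.items(), key=lambda kv: kv[1])

def batch_trending(all_words, k=3):
    return Counter(all_words).most_common(k)  # recompute everything

print(on_word("cloud"))
print(batch_trending(["cloud", "storm", "cloud"]))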

Final Words

Quite often, we get caught in the technical details of these discussions and lose sight of what this all really means.

If all you are looking for is to collect data streams and simply update counters, then both approaches would work. The main difference between the two is felt in the level and complexity of processing that you would like to perform in real time. If you want to continuously update different kinds of sorted lists or indexes, you’ll find that doing so in an event-driven approach, as in the case of Twitter, can be orders of magnitude faster and more efficient than the logging-centric approach. To put some numbers behind that, Twitter reported that calculating reach without Storm took 2 hours, whereas Storm could do the same in less than a second.

Such a difference in speed and utilization has a direct correlation with the business bottom line, as it determines the level and depth of intelligence that a business can run against its data. It also determines the cost of running the analytics systems and, in some cases, their availability: when processing is slower, there are many more scenarios that can saturate the system.

(source: Nati Shalom’s blog)

 

Storm-YARN Released as Open Source

At Yahoo! we have worked on the convergence of Storm with Hadoop, as mentioned in our earlier post. We are pleased to announce that Storm-YARN has been released as open source. Storm-YARN enables Storm applications to utilize the computational resources in a Hadoop cluster along with accessing Hadoop storage resources such as HBase and HDFS.

Motivation
Collocating real-time processing with batch processing offers a number of advantages over segregated clusters.

  • It provides a huge potential for elasticity. Real-time processing will rarely produce a constant and predictable load. As such, Storm needs more resources to keep up with spikes in demand. Collocating Storm with batch processing allows Storm to steal resources from batch jobs when needed and give them back when demand subsides. The Storm-YARN effort lays the groundwork to make this possible.
  • Many applications use Storm for low-latency processing and Map/Reduce for batch processing while sharing data between Storm and Map/Reduce. By placing Storm physically closer to the data source and/or other components in the same pipeline we can reduce network transfers and in turn the total cost of acquiring the data.

Launch Storm Cluster
To launch a Storm cluster managed by YARN, you simply execute:

storm-yarn launch <storm-yarn.yaml>

storm-yarn.yaml is the standard Storm configuration file with additional YARN-specific parameters, including master.initial-num-supervisors (the initial number of supervisors to be launched) and master.container.size-mb (the memory size of the container to be allocated for each supervisor).
Figure 1 illustrates the execution of the storm-yarn command. Storm-YARN asks YARN’s Resource Manager to launch a Storm Application Master. The Application Master then launches a Storm nimbus server and a Storm UI server locally. It also uses YARN to find resources for the supervisors and launch them.

Figure 1: Launch Storm Cluster with Hadoop YARN

Execute Storm Topologies
You can communicate with the Storm cluster the same as with a standalone Storm cluster, through the storm command.

storm jar <topology_jar>

Because nimbus is running on a node picked by YARN, you may need to specify that node on the command line by setting the nimbus.host config.

As illustrated in Figure 2, each Storm supervisor will launch worker processes within its container. These Storm worker processes can access Hadoop datasets stored in HDFS, HBase, etc.

Figure 2 Submit and Execute Storm Topologies

Open Source Release
Yahoo! has decided to release Storm-YARN code under the Apache 2.0 License. The code is available at https://github.com/yahoo/storm-yarn. This alpha release enables members of the community to jointly make Storm-YARN a high-quality product. Please try it out and let us know what you think.

If you are interested in contributing, please feel free to submit proposals as issues, sign an Apache style CLA and contribute your code.

Additional details on Storm-YARN will be shared during our Storm-on-YARN: Convergence of Low-Latency and Big-Data talk at the 2013 Hadoop Summit North America on June 26, 2013, 11:20 am under the Future of Apache Hadoop track. We look forward to seeing you there.

Acknowledgement
Derek Dagit implemented a significant portion of the Storm-YARN release. We thank him for making this early release available as open source.

Biography
Bobby Evans is a software developer at Yahoo! and a Hadoop PMC member at the Apache Software Foundation.

Andy Feng is a Distinguished Architect at Yahoo! and a core contributor to the Storm project. He led the architecture design and development of a next-gen big-data platform that empowers a variety of application patterns (batch, micro-batch, streaming, query).

(via Yahoo.com)