The Apache documentation on the YARN schedulers is good, but it covers how to configure them, not how to choose between them. Here’s the background on why the schedulers are designed the way they are, and how to choose the right one for your cluster.
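For reference, the choice comes down to a single property on the ResourceManager. Below is a minimal Java sketch showing that key and the two stock scheduler classes that ship with Apache Hadoop; on a real cluster the property is set in yarn-site.xml, not in client code.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

// Illustrative only: shows which property selects the scheduler and the two
// standard implementations. On a real cluster this lives in yarn-site.xml
// on the ResourceManager host.
public class SchedulerChoice {
    public static void main(String[] args) {
        Configuration conf = new YarnConfiguration();
        // Capacity Scheduler (the Apache default):
        conf.set("yarn.resourcemanager.scheduler.class",
            "org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler");
        // The Fair Scheduler alternative:
        //   org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler
        System.out.println(conf.get("yarn.resourcemanager.scheduler.class"));
    }
}
```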
What’s So Important About Compression?
HDFS storage is cheap—about 1% of the cost of storage on a data appliance such as Teradata. It takes some doing to use up all the disk space on even a small cluster of, say, 30 nodes. Such a cluster may have anywhere from 12 to 24 TB per node, so a cluster of that size has from 360 to 720 TB of storage space. If there’s no shortage of space, why bother wasting cycles on compression and decompression?
With Hadoop, that’s the wrong way to look at it, because saving space is not the main reason to use compression in Hadoop clusters—minimizing disk and network I/O is usually more important. In a heavily used cluster, MapReduce and Tez, which do most of the work, tend to saturate disk I/O capacity, while jobs that transform bulk data, such as ETL or sorting, can easily consume all available network I/O.
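To make that concrete, here is a minimal sketch of the two places compression pays off in a standard MapReduce job: the intermediate map output that crosses the network during the shuffle, and the final output written to HDFS. Snappy is used here for its low CPU cost; codec choice is a separate trade-off.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.SnappyCodec;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// A sketch of the two compression points in a MapReduce job: the map output
// shuffled across the network, and the final output written to HDFS.
public class CompressionSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // 1. Compress intermediate map output before the shuffle.
        conf.setBoolean("mapreduce.map.output.compress", true);
        conf.setClass("mapreduce.map.output.compress.codec",
                      SnappyCodec.class, CompressionCodec.class);

        Job job = Job.getInstance(conf, "compression-sketch");

        // 2. Compress the job's final output files as well.
        FileOutputFormat.setCompressOutput(job, true);
        FileOutputFormat.setOutputCompressorClass(job, SnappyCodec.class);

        System.out.println(job.getConfiguration().get("mapreduce.map.output.compress"));
    }
}
```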
The YARN Revolution
YARN—the data operating system for Hadoop. Bored yet? They should call it YAWN, right?

Not really—YARN is turning out to be the biggest thing to hit big data since Hadoop itself, despite the fact that it runs deep down in the plumbing, and even some Hadoop users aren’t 100% clear on exactly what it does. In some ways, the technical improvements it enables aren’t even the most important part. YARN is changing the very economics of Hadoop.
Not Hadoop: All about Unicode
Unicode is a subject that trips up even experienced programmers. It’s one of those places where computer science and engineering bump hard into human diversity.
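A tiny Java example of the sort of surprise involved: String.length() counts UTF-16 code units, not characters, so a single musical symbol reports a length of two.

```java
// One classic Unicode pitfall: U+1D11E (MUSICAL SYMBOL G CLEF) lies outside
// the Basic Multilingual Plane, so Java stores it as a surrogate pair and
// "one character" has a length of two.
public class UnicodeGotcha {
    public static void main(String[] args) {
        String clef = "\uD834\uDD1E";  // U+1D11E as a surrogate pair
        System.out.println(clef.length());                          // 2 (code units)
        System.out.println(clef.codePointCount(0, clef.length()));  // 1 (code point)
    }
}
```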
Understanding Hadoop Hardware Requirements
I want my big-data applications to run as fast as possible. So why do the engineers who designed Hadoop specify “commodity hardware” for Hadoop clusters? Why go out of your way to tell people to run on mediocre machines?

Shifting to Hive Part II: Best Practices and Optimizations
This is part two of an extended article. See part one here.

A full listing of Hive best practices and optimizations would fill a book. All we’ll do here is skim the topics that best convey the spirit of Hive and how it is used most successfully. There’s plenty of detail available in the documentation and on the Web at large. Hopefully, these quick run-downs will provide enough background and keywords for a rewarding Google search.
Shifting to Hive Part I: Origins
SQL is the lingua franca of data big and small, but SQL is a language, not a platform—it serves as the conceptual framework for data tasks on many platforms, ranging from blog content management with MySQL, to high-frequency online transaction processing (OLTP) systems, to heavy-duty batch processing on Hadoop and other big-data platforms.
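As a sketch of that idea, the same query text can run unchanged through JDBC against MySQL on one end of the spectrum and Hive on the other. The URLs, table, and credentials below are hypothetical placeholders; only the drivers' URL schemes are real.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// The same SQL text runs against a MySQL OLTP database and a Hive warehouse
// on Hadoop; only the connection URL changes. Table and credentials are
// hypothetical placeholders for illustration.
public class OneQueryTwoPlatforms {
    private static final String QUERY =
        "SELECT category, COUNT(*) AS n FROM sales GROUP BY category";

    public static void main(String[] args) throws Exception {
        run("jdbc:mysql://localhost:3306/shop", "user", "secret");  // milliseconds, OLTP
        run("jdbc:hive2://gateway:10000/default", "user", "");      // batch job on the cluster
    }

    private static void run(String url, String user, String pass) throws Exception {
        try (Connection c = DriverManager.getConnection(url, user, pass);
             Statement s = c.createStatement();
             ResultSet r = s.executeQuery(QUERY)) {
            while (r.next()) {
                System.out.println(r.getString("category") + ": " + r.getLong("n"));
            }
        }
    }
}
```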

