Introduction to Apache YARN

Apache YARN is the prerequisite for Enterprise Hadoop, providing resource management and a central platform to deliver consistent operations, security, and data governance tools across Hadoop clusters.

YARN also extends the power of Hadoop to the incumbent and new technologies found within the data center so that they can take advantage of cost-effective, linear-scale storage and processing. It provides ISVs and developers a consistent framework for writing data access applications that run in Hadoop.

Enabling Multiple Workloads

Hadoop 1.0 was used mainly for MapReduce jobs, and adoption was typically at the project level, with clusters dedicated to specific big data analysis efforts.

Hadoop 2.0, with YARN as its architectural center, makes a mixed-workload data lake a reality by allowing different workloads to run in the same Hadoop cluster.

Different workloads interact with data in different ways, simultaneously and seamlessly.

YARN Architectural Components

The MapReduce framework was decomposed so that resource management is no longer tied to a single workload type. Moving job tracking out of a central daemon and into per-application ApplicationMasters allows clusters to scale beyond the 2000-4000 node range.

YARN Components and Interactions

As the architectural center of Hadoop, YARN enhances a Hadoop compute cluster through multi-tenancy, dynamic utilization, scalability, and compatibility with MapReduce applications developed for Hadoop 1.x.

YARN Resource Management

YARN (Yet Another Resource Negotiator) is the resource management and processing framework for Hadoop. If you think of HDFS as the cluster file system for Hadoop, YARN is the cluster operating system. It is the architectural center of Hadoop.

YARN as an operating system

A computer operating system, such as Windows or Linux, manages access to resources, such as CPU, memory, and disk, for installed applications. In similar fashion, YARN provides a managed framework that allows for multiple types of applications – batch, interactive, online, streaming, and so on – to execute on data across your entire cluster.

Just like a computer operating system manages both resource allocation (which application gets access to CPU, memory, and disk now, and which one has to wait if contention exists) and security (does the current user have permission to perform the requested action), YARN manages resource allocation for the various types of data processing workloads, prioritizes and schedules jobs, and enables authentication and multi-tenancy.
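
The operating system analogy extends to the standard YARN client API. The following Java sketch (illustrative; it assumes it is run on a machine whose classpath contains the cluster’s yarn-site.xml) asks the ResourceManager which applications it is currently scheduling and in which queues they are running.

    import org.apache.hadoop.yarn.api.records.ApplicationReport;
    import org.apache.hadoop.yarn.client.api.YarnClient;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class ListApplications {
        public static void main(String[] args) throws Exception {
            // YarnConfiguration reads yarn-site.xml to locate the ResourceManager
            YarnClient yarn = YarnClient.createYarnClient();
            yarn.init(new YarnConfiguration());
            yarn.start();

            // The ResourceManager tracks every application, its queue, and its state
            for (ApplicationReport app : yarn.getApplications()) {
                System.out.println(app.getApplicationId()
                        + "  queue=" + app.getQueue()
                        + "  state=" + app.getYarnApplicationState());
            }
            yarn.stop();
        }
    }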

Multi-Tenancy

Software multi-tenancy is achieved when a single instance of an application serves multiple groups of users, or “tenants.” Each tenant shares common access to an application, hardware, and underlying resources (including data), but with specific and potentially unique privileges granted by the application based on their identification. A typical example of a multi-tenant application architecture would be SaaS cloud computing, where multiple users and even multiple companies are accessing the same instance of an application at the same time (for example, Salesforce CRM).

This is in contrast with multi-instance architectures, where each user gets a unique instance of an application, and the application then competes for resources on behalf of its tenant. A typical example of a multi-instance architecture would be applications running in virtualized or IaaS environments (for example, applications running in KVM virtual machines).

NOTE: In prior versions of Hadoop, resource management was part of the MapReduce process. In this scenario, you had a single application handling both job scheduling and running data processing jobs at the same time. Starting with Hadoop 2.0, MapReduce is simply another data processing application running on top of the YARN framework.

YARN – The Big Picture View

YARN Master and utility node

The YARN Master node component, the ResourceManager, centrally manages cluster resources for all YARN applications.

YARN Master, utility and worker node

The YARN Worker node component, the NodeManager, manages local resources at the direction of the ResourceManager.
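
The division of labor between the ResourceManager and the NodeManagers can be seen by asking the ResourceManager for its node reports. The Java sketch below (illustrative; it assumes access to the cluster’s yarn-site.xml) prints each running NodeManager along with the resources it advertises and the resources currently in use.

    import org.apache.hadoop.yarn.api.records.NodeReport;
    import org.apache.hadoop.yarn.api.records.NodeState;
    import org.apache.hadoop.yarn.client.api.YarnClient;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class ListNodes {
        public static void main(String[] args) throws Exception {
            YarnClient yarn = YarnClient.createYarnClient();
            yarn.init(new YarnConfiguration());
            yarn.start();

            // Each NodeManager registers its local resources with the ResourceManager
            for (NodeReport node : yarn.getNodeReports(NodeState.RUNNING)) {
                System.out.println(node.getNodeId()
                        + "  total=" + node.getCapability()      // memory/vcores the node offers
                        + "  used=" + node.getUsed()             // currently allocated to containers
                        + "  containers=" + node.getNumContainers());
            }
            yarn.stop();
        }
    }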

YARN Multi-Node Resource Allocation Example

The following graphic illustrates how containers, ApplicationMasters, and job tasks might be spread across a 3-node cluster.

YARN Multi-Node Resource Allocation Example

In this example, the Job1 ApplicationMaster was started on NodeManager 2. The first task for Job1 was started on NodeManager 1, and the second Job1 task was started on NodeManager 2. This completed all the tasks required for Job1.

The Job2 ApplicationMaster was launched on NodeManager 3. The first two Job2 tasks were launched on NodeManager 1, Job2 tasks 3 through 6 were launched on NodeManager 3, and the final Job2 task was launched back on NodeManager 1.

The main point is that the ApplicationMaster can initiate the creation of containers on any appropriate NodeManager in the cluster. The default behavior is to place tasks on the nodes where their data blocks already reside whenever possible, even if more processing power is available on a node without those blocks.
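
The container placement shown above is driven by resource requests that each ApplicationMaster sends to the ResourceManager. The Java sketch below is a simplified fragment, not a complete ApplicationMaster, and the host name node1.example.com is purely illustrative. It uses the AMRMClient API to ask for a container while expressing a preference for the node that holds the data, leaving the scheduler free to relax that preference if the node is busy.

    import org.apache.hadoop.yarn.api.records.Priority;
    import org.apache.hadoop.yarn.api.records.Resource;
    import org.apache.hadoop.yarn.client.api.AMRMClient;
    import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class RequestContainerSketch {
        public static void requestContainer() throws Exception {
            AMRMClient<ContainerRequest> rmClient = AMRMClient.createAMRMClient();
            rmClient.init(new YarnConfiguration());
            rmClient.start();
            // A real ApplicationMaster registers itself before requesting containers
            rmClient.registerApplicationMaster("", 0, "");

            // Ask for a 1 GB / 1 vcore container, preferring the node that holds the data block
            Resource capability = Resource.newInstance(1024, 1);
            String[] preferredNodes = { "node1.example.com" };   // illustrative host name
            ContainerRequest request = new ContainerRequest(
                    capability, preferredNodes, null /* racks */, Priority.newInstance(0));
            rmClient.addContainerRequest(request);

            // Containers are granted through subsequent allocate() calls, possibly on other
            // nodes if the preferred node is busy (locality is relaxed by default).
        }
    }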

YARN Features

YARN’s original purpose was to split the two major responsibilities of the JobTracker/TaskTracker, resource management and job scheduling/monitoring, into separate entities:

  • A global ResourceManager
  • A per-application ApplicationMaster
  • A per-node slave NodeManager
  • A per-application Container running on a NodeManager

Resource Manager High Availability

In HDP prior to version 2.1, the ResourceManager was a single point of failure. The entire cluster would become unavailable if the ResourceManager failed or became unreachable. Even maintenance events such as software or hardware upgrades on the ResourceManager machine would result in periods of cluster downtime.

The YARN ResourceManager High Availability (HA) feature eliminates the ResourceManager as a single point of failure. It enables a cluster to run one or more ResourceManagers in an Active/Standby configuration.

ResourceManager HA enables fast failover to the Standby ResourceManager in response to a failure, or a graceful administrator-initiated failover for planned maintenance.

There are two ways of configuring ResourceManager HA. Using Ambari is the easiest way. Manually editing the configuration files and starting or restarting the necessary daemons is also possible. However, manual configuration of ResourceManager HA is not compatible with Ambari administration. Any manual edits to the yarn-site.xml file would be overwritten by information in the Ambari database when the YARN service is restarted.
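
For reference, ResourceManager HA is controlled by a handful of yarn-site.xml properties. The Java sketch below sets the same property names on a YarnConfiguration object simply to document them; the host names, ZooKeeper quorum, and cluster ID are illustrative, and on a real cluster these values are written into yarn-site.xml (by Ambari or by hand) so that both the daemons and their clients know about the Active/Standby pair.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class RmHaSettings {
        public static Configuration haConf() {
            Configuration conf = new YarnConfiguration();
            // Enable the Active/Standby pair and name its members
            conf.setBoolean("yarn.resourcemanager.ha.enabled", true);
            conf.set("yarn.resourcemanager.ha.rm-ids", "rm1,rm2");
            conf.set("yarn.resourcemanager.hostname.rm1", "master1.example.com");
            conf.set("yarn.resourcemanager.hostname.rm2", "master2.example.com");
            // ZooKeeper coordinates leader election and stores ResourceManager state
            conf.set("yarn.resourcemanager.zk-address", "zk1:2181,zk2:2181,zk3:2181");
            conf.set("yarn.resourcemanager.cluster-id", "yarn-cluster");
            return conf;
        }
    }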

Multi-Tenancy with Capacity Scheduler

YARN Capacity Scheduler

Traditionally, organizations have had their own compute resources with sufficient capacity to meet SLAs under peak or near peak conditions. This often results in poor average utilization and the increased overhead of managing multiple independent clusters.

While the concept of sharing clusters is logically a cost-effective manner of running large Hadoop installations, individual organizations may have concerns about sharing a cluster, fearing other organizations may use resources critical to their SLAs.

The CapacityScheduler is designed to enable sharing of computing resources in a large cluster while guaranteeing each organization a minimum capacity. Available resources in the cluster are partitioned between multiple organizations based on computing needs. In addition, any organization can access excess capacity at any given point in time. This provides cost-effective elasticity.

Implementation of the CapacityScheduler is based on the concept of queues which are set up by administrators to reflect the organizational economics of the shared cluster.
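
The resulting queue hierarchy can be inspected through the YARN client API. The Java sketch below (illustrative; it assumes access to the cluster’s yarn-site.xml) lists every CapacityScheduler queue along with its guaranteed and maximum capacity.

    import org.apache.hadoop.yarn.api.records.QueueInfo;
    import org.apache.hadoop.yarn.client.api.YarnClient;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class ListQueues {
        public static void main(String[] args) throws Exception {
            YarnClient yarn = YarnClient.createYarnClient();
            yarn.init(new YarnConfiguration());
            yarn.start();

            // getAllQueues() walks the queue hierarchy defined in capacity-scheduler.xml
            for (QueueInfo queue : yarn.getAllQueues()) {
                System.out.println(queue.getQueueName()
                        + "  capacity=" + queue.getCapacity()            // guaranteed share
                        + "  maxCapacity=" + queue.getMaximumCapacity()  // elasticity ceiling
                        + "  state=" + queue.getQueueState());
            }
            yarn.stop();
        }
    }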

Resource Isolation

Resource isolation is provided on Linux by Control Groups (CGroups) and on Windows through Job Control.

CGroups

Cgroups enable an administrator to allocate resources among processes running on a system.

Administrators can:

  • Monitor cgroups
  • Deny cgroups access to certain resources
  • Reconfigure cgroups dynamically on a running system

Cgroups can be made persistent across reboots by configuring the cgconfig service to run at boot time to reestablish cgroups.
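
On Linux, CGroup enforcement is wired in through the NodeManager configuration. The Java sketch below sets the relevant yarn-site.xml property names on a Configuration object only to document them; the hierarchy path and values are illustrative, and on a real cluster the same keys are applied to each NodeManager, typically through Ambari.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class CgroupSettings {
        public static Configuration cgroupConf() {
            Configuration conf = new YarnConfiguration();
            // LinuxContainerExecutor launches containers inside cgroups
            conf.set("yarn.nodemanager.container-executor.class",
                    "org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor");
            conf.set("yarn.nodemanager.linux-container-executor.resources-handler.class",
                    "org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler");
            // Where the NodeManager creates (or finds) its cgroup hierarchy
            conf.set("yarn.nodemanager.linux-container-executor.cgroups.hierarchy", "/yarn");
            conf.setBoolean("yarn.nodemanager.linux-container-executor.cgroups.mount", false);
            return conf;
        }
    }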

Windows Job Control

Isolation controls similar to Linux CGroups have been implemented on Windows to perform default job control actions. Job control messages can only be processed by customized applications.

Managing Queue Limits with Ambari

Ambari is the Apache project that provides single-pane-of-glass administration of Hadoop clusters, including multiple clusters at once. Ambari offers a standard set of tools, APIs, and processes that can be leveraged across Hadoop instances.

To configure a queue in Ambari, click on the queue name. From here you can set the following queue parameters (the underlying capacity-scheduler.xml property names are sketched after this list):

  • Capacity
    • Capacity: The percentage of cluster resources available to the queue. For a sub-queue, the percentage of parent queue resources.
    • Max Capacity: The maximum percentage of cluster resources available to the queue. Setting this value tends to restrict elasticity, as the queue will be unable to utilize idle cluster resources beyond this setting.
    • Enable Node Labels: Select this check box to enable node labels for the queue.
  • Access Control and Status
    • State: Running is the default state. Setting this to Stopped lets you gracefully drain the queue of jobs (for example, before deleting a queue).
    • Administer Queue: Clicking Custom will restrict administration of the queue to specific users and groups.
    • Submit Applications: Clicking Custom will restrict the ability to run applications in the queue to specific users and groups.
  • Resources
    • User Limit Factor: Controls the maximum share of resources any single user can occupy, expressed as a multiple of the queue’s configured capacity. Setting this value to “1” means no user can consume more than the queue’s configured capacity. The setting is used to prevent any one user from monopolizing resources across all queues in a cluster.
    • Minimum User Limit: A measure of the minimum percentage of resources allocated to each queue user. For example, to enable equal sharing of the queue capacity among four users, you would set this property to 25%.
    • Maximum Applications: Enables the administrator to override the Scheduler Maximum Applications setting.
    • Maximum AM Resource: Enables an administrator to override the Scheduler Maximum AM Resource setting
    • Ordering Policy: Sets how applications within the queue are scheduled: FIFO (First In, First Out), the default, or Fair (applications receive a fair share of the queue’s capacity regardless of the order in which they were submitted).
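
Each of the Ambari fields above maps to a property in capacity-scheduler.xml. The Java sketch below uses a hypothetical root.dev queue and illustrative values, and sets the keys on a Configuration object only to show the mapping; Ambari writes the same property names for you when you save the queue.

    import org.apache.hadoop.conf.Configuration;

    public class DevQueueSettings {
        public static Configuration devQueueConf() {
            Configuration conf = new Configuration(false);   // holds capacity-scheduler.xml keys
            String q = "yarn.scheduler.capacity.root.dev";   // hypothetical queue named "dev"
            conf.set(q + ".capacity", "40");                   // Capacity (% of parent queue)
            conf.set(q + ".maximum-capacity", "60");           // Max Capacity
            conf.set(q + ".state", "RUNNING");                 // State
            conf.set(q + ".user-limit-factor", "1");           // User Limit Factor
            conf.set(q + ".minimum-user-limit-percent", "25"); // Minimum User Limit
            conf.set(q + ".maximum-applications", "5000");     // Maximum Applications
            conf.set(q + ".ordering-policy", "fair");          // Ordering Policy: fifo or fair
            return conf;
        }
    }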

Policy-Based Use of Computing Resources

The use of queues limits access to resources. Sub-queues are possible, allowing capacity to be shared within a tenant. Each queue has ACLs that associate it with specific users and groups. Capacity guarantees provide minimum resource allocations, and soft and hard limits can be placed on queues.
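
For example, granting a tenant its own sub-queues and restricting who may use them comes down to a few more capacity-scheduler.xml entries. The Java sketch below is illustrative only; the queue, user, and group names are hypothetical, and the keys are set on a Configuration object simply to show the property names involved.

    import org.apache.hadoop.conf.Configuration;

    public class TenantQueueSettings {
        public static Configuration tenantConf() {
            Configuration conf = new Configuration(false);
            // A "marketing" tenant queue with two sub-queues sharing its capacity
            conf.set("yarn.scheduler.capacity.root.queues", "default,marketing");
            conf.set("yarn.scheduler.capacity.root.default.capacity", "50");
            conf.set("yarn.scheduler.capacity.root.marketing.capacity", "50");
            conf.set("yarn.scheduler.capacity.root.marketing.queues", "adhoc,reports");
            conf.set("yarn.scheduler.capacity.root.marketing.adhoc.capacity", "30");
            conf.set("yarn.scheduler.capacity.root.marketing.reports.capacity", "70");
            // ACL format: comma-separated users, a space, then comma-separated groups
            conf.set("yarn.scheduler.capacity.root.marketing.adhoc.acl_submit_applications",
                    "alice,bob marketing-analysts");
            conf.set("yarn.scheduler.capacity.root.marketing.adhoc.acl_administer_queue",
                    "carol marketing-admins");
            return conf;
        }
    }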
