Introduction

Apache Pig is a scripting platform for processing and analyzing large data sets.
Pig was designed to perform a long series of data operations, making it ideal for three categories of Big Data jobs:

  • Extract-transform-load (ETL) data pipelines
  • Research on raw data
  • Iterative data processing

Pig’s design goals can be described as:

  • Pigs eat anything: Pig can process any data, structured or unstructured
  • Pigs live anywhere: Pig can run on any parallel data processing framework, so Pig scripts do not have to run just on Hadoop
  • Pigs are domestic animals: Pig is designed to be easily controlled and modified by its users
  • Pigs fly: Pig is designed to process data quickly

Pig Latin

The language of the platform is called Pig Latin, which abstracts the programming from the Java MapReduce idiom into a higher-level form similar to SQL. While SQL is designed to query data, Pig Latin allows you to write a data flow that describes how your data will be transformed (through operations such as aggregate, join, and sort).
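
As a sketch of this data-flow style (the file name, field names, and schema below are made up for illustration), a short Pig Latin script might look like this:

    -- hypothetical input: tab-separated records of (user, site, seconds spent)
    visits  = LOAD 'visits.tsv' AS (user:chararray, site:chararray, seconds:int);
    by_site = GROUP visits BY site;
    totals  = FOREACH by_site GENERATE group AS site, SUM(visits.seconds) AS total_seconds;
    sorted  = ORDER totals BY total_seconds DESC;
    DUMP sorted;

Each statement names a new relation, so the script reads as a series of transformations rather than a single declarative query.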

Pig executes in a unique fashion:

  • During execution, each statement is processed by the Pig interpreter
  • If a statement is valid, it gets added to a logical plan built by the interpreter
  • The steps in the logical plan do not actually execute until a DUMP or STORE command is used, as the example after this list shows
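
For example, in the hypothetical script below, the LOAD and FILTER statements are validated and added to the logical plan as they are interpreted, but no data moves until the final DUMP:

    -- interpreted and added to the logical plan, but not yet executed
    raw  = LOAD 'events.log' AS (level:chararray, msg:chararray);
    errs = FILTER raw BY level == 'ERROR';

    -- DUMP triggers execution of the whole plan
    DUMP errs;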

Since Pig Latin scripts can be graphs (instead of requiring a single output) it is possible to build complex data flows involving multiple inputs, transforms, and outputs. Users can extend Pig Latin by writing their own functions, using Java, Python, Ruby, or other scripting languages. Pig Latin is sometimes extended using UDFs (User Defined Functions), which the user can write in any of those languages and then call directly from the Pig Latin.
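
As a sketch of how a UDF is wired in (the JAR name and Java class name here are hypothetical), a compiled UDF is registered, optionally given a short alias with DEFINE, and then called like a built-in function:

    REGISTER myudfs.jar;
    DEFINE TO_UPPER com.example.pig.ToUpper();  -- hypothetical Java class extending EvalFunc

    names   = LOAD 'names.txt' AS (name:chararray);
    shouted = FOREACH names GENERATE TO_UPPER(name);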

The user can run Pig in two modes, using either the “pig” command or the “java” command:

  • MapReduce Mode: This is the default mode, which requires access to a Hadoop cluster.
  • Local Mode: With access to a single machine, all files are installed and run using the local host and file system.
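
With the “pig” command, the mode is selected with the -x flag (the script name is illustrative):

    pig -x mapreduce myscript.pig   # MapReduce mode (the default)
    pig -x local myscript.pig       # local mode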

Why Use Pig

Apache Pig allows Apache Hadoop users to write complex MapReduce transformations using a simple scripting language called Pig Latin. Pig translates the Pig Latin script into MapReduce so that it can be executed within YARN, with access to a single dataset stored in the Hadoop Distributed File System (HDFS).

The Grunt Shell

Grunt is Pig’s interactive shell. Users can enter Pig Latin statements and HDFS commands interactively; a sample session follows the list below. Grunt provides:

  • A command-line history
  • Editing
  • Tab completion
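
A short hypothetical Grunt session, mixing an HDFS command with Pig Latin statements:

    grunt> fs -ls /user/hadoop
    grunt> logs = LOAD 'logs.txt' AS (line:chararray);
    grunt> DUMP logs;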

Ambari Pig View

The Pig View provides a web-based interface to compose, edit, and submit Pig scripts, download results, and view logs and the history of job submissions.

You can use Pig View to:

  • Write Pig scripts
  • Execute Pig scripts
  • Add user-defined functions (UDFs) to Pig scripts
  • View the history of all Pig scripts run by the current user

Pig Latin Commands
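
Core Pig Latin relational operators include LOAD, FILTER, GROUP, JOIN, FOREACH ... GENERATE, ORDER ... BY, STORE, and DUMP. A small illustrative pipeline (the file and field names are made up):

    users  = LOAD 'users.csv' USING PigStorage(',') AS (id:int, country:chararray);
    orders = LOAD 'orders.csv' USING PigStorage(',') AS (user_id:int, amount:double);
    usa    = FILTER users BY country == 'US';
    joined = JOIN usa BY id, orders BY user_id;
    STORE joined INTO 'us_orders';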

DataFu Library

The DataFu Library is a collection of Pig UDFs for data analysis on Hadoop. Started by LinkedIn, it is now open source under the Apache 2.0 license.

DataFu includes functions for:

  • Bag and set operations
  • PageRank
  • Quantiles
  • Variance
  • Sessionization

To use the functions in the DataFu library, you need to register the DataFu JAR file, just like you would with any other Pig UDF library:

    register datafu-1.2.0.jar;
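
Once the JAR is registered, individual functions are bound with define. A hedged example using the Quantile UDF (the input file and field names are illustrative):

    define Quartiles datafu.pig.stats.Quantile('0.25','0.5','0.75');

    temps   = LOAD 'temps.txt' AS (city:chararray, temp:double);
    by_city = GROUP temps BY city;
    -- Quantile expects its input bag to be sorted
    stats   = FOREACH by_city {
        sorted = ORDER temps BY temp;
        GENERATE group AS city, Quartiles(sorted.temp);
    }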

HCatalog

Apache™ HCatalog is a table management layer that exposes Hive metadata to other Hadoop applications. HCatalog’s table abstraction presents users with a relational view of data in the Hadoop Distributed File System (HDFS) and ensures that users need not worry about where or in what format their data is stored. HCatalog displays data from RCFile format, text files, or sequence files in a tabular view. It also provides REST APIs so that external systems can access these tables’ metadata.

HCatalog:

  • Frees the user from having to know where the data is stored, with the table abstraction
  • Enables notifications of data availability
  • Provides visibility for data cleaning and archiving tools
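
For example, Pig can load an HCatalog-managed table by name with HCatLoader instead of hard-coding a file path and schema. The table name below is illustrative, and on older releases the loader class lives in the org.apache.hcatalog.pig package instead:

    pig -useHCatalog

    grunt> visits = LOAD 'web_visits' USING org.apache.hive.hcatalog.pig.HCatLoader();
    grunt> DUMP visits;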
