
hadoop-fieldformat (beta)

Map-Reduce utilities for flat tables.

RATIONALE

Hadoop is built to process semi-structured data; however, in many use cases we found we still need the data to be more structured, for example when querying data, running ETL pipelines, or doing data science. Popular Hadoop tools propose different solutions: Pig and Cascading use a runtime schema layout to determine fields; Hive keeps header metadata in an external database (typically MySQL). In contrast, this project tries another approach: make the Map-Reduce API able to process fields without any external source. The header information is attached to the data itself, and the field mapping can be read and written by mappers and reducers. This is done by rewriting the TextInputFormat and TextOutputFormat classes, so you don't need to change any of your Map-Reduce code; just use different input/output classes and you're done. Currently it is only available for the raw Map-Reduce API, but it shouldn't be difficult to integrate into other batch processing tools like Hive, Pig, and Cascading.

SYNOPSIS

Before you use the classes, you'll need to know how the header is stored in HDFS. The trick is to store the header in <source directory>/_logs/header.tsv. When running Map-Reduce jobs, anything under _logs is not read as input, so the layout stays compatible with the rest of the Map-Reduce ecosystem.
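
For example, with the fields used throughout this README, the header file is just a single tab-separated line (the /data/weblogs path below is hypothetical):

$ hadoop fs -cat /data/weblogs/_logs/header.tsv
ip	user_agent	cookie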

Reading fields in Mapper

To use this library, just set job.setInputFormatClass(FieldInputFormat.class) instead of the default TextInputFormat.class.

import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
// plus FieldInputFormat and FieldWritable from this library

// run() lives in a class that extends Configured and implements Tool
public int run (String[] args) throws Exception {
  Job job = new Job(getConf());

  FileInputFormat.addInputPaths(job, args[0]);
  job.setInputFormatClass(FieldInputFormat.class); // instead of TextInputFormat
  job.setMapperClass(ExampleMapper.class);
  job.setNumReduceTasks(0);                        // map-only job
  FileOutputFormat.setOutputPath(job, new Path(args[1]));

  return job.waitForCompletion(true) ? 0 : 1;
}

public static class ExampleMapper extends Mapper<LongWritable, FieldWritable, Text, NullWritable> {

  @Override
  public void map (LongWritable key, FieldWritable fields, Context context) throws IOException, InterruptedException {
    // Look up values by field name instead of splitting the line by column index
    String ip = fields.get("ip");
    String user_agent = fields.get("user_agent");
    String cookie = fields.get("cookie");

    context.write(new Text(ip + "\t" + user_agent + "\t" + cookie), NullWritable.get());
  }
}

If you use wildcard characters in your input path, FieldInputFormat will read a separate header for each matched path.
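
For example, to read a month of daily partitions, each directory carrying its own _logs/header.tsv (the path below is hypothetical):

FileInputFormat.addInputPaths(job, "/data/weblogs/2014-01-*");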

Write header information to output

Output is as simple as input: just set the output format class to FieldOutputFormat.class.

job.setOutputFormatClass(FieldOutputFormat.class);

// inside map() or reduce(): build a record that carries its own header
String [] header = {"ip", "user_agent", "cookie"};
String [] body = {ip, user_agent, cookie};
FieldWritable new_fields = new FieldWritable(header, body);

context.write(new_fields, NullWritable.get());
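
Depending on your job wiring, you may also need to declare matching output key/value classes (a sketch using the standard Job setters; the FieldWritable/NullWritable pairing follows the output constraint noted under TODOS below):

job.setOutputKeyClass(FieldWritable.class);
job.setOutputValueClass(NullWritable.class);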

Update fields

String [] header = {"ip", "user_agent", "cookie"};
String [] defaults = {"10.3.1.1", "Safari", "123456"}; // note: `default` is a reserved word in Java
FieldWritable field = new FieldWritable(header, defaults);

field.set("ip", "123.12.3.4");
field.set("user_agent", "Chrome");

context.write(field, NullWritable.get());

INSTALL

Hadoop-FieldFormat uses Maven for dependency management. To use it in your project, add the following to your pom.xml file:

  <repositories>
    <repository>
        <id>hadoop-fieldformat-mvn-repo</id>
        <url>https://raw.github.com/dryman/hadoop-fieldformat/mvn-repo/</url>
        <snapshots>
            <enabled>true</enabled>
            <updatePolicy>always</updatePolicy>
        </snapshots>
    </repository>
  </repositories>

  <dependencies>
    <dependency>
      <groupId>org.apache.hadoop-contrib</groupId>
      <artifactId>hadoop-fieldformat</artifactId>
      <version>0.4.4</version>
    </dependency>
  </dependencies>

ADVANTAGES

  1. Useful for long-term aggregation, because Map-Reduce jobs refer to fields by name rather than by column number (which may change over time).
  2. Adds more semantics at the data level.

TODOS

  1. Test in a YARN environment
  2. Set up Travis CI or Jenkins CI
  3. Integrate with other Hadoop tools
  4. Write more complete documentation
    • FieldWritable usage and limitations
    • Output constraint (key/value must be FieldWritable, NullWritable)
    • Javadoc

LICENSE

Copyright (c) 2014 Felix Chern

Distributed under the Apache License Version 2.0
