
bundle-apache-processing-mapreduce

Overview

The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using a simple programming model.

It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, thus delivering a highly available service on top of a cluster of computers, each of which may be prone to failure.

This bundle provides a complete deployment of the core components of the Apache Hadoop 2.7.1 platform to perform distributed data analytics at scale. These components include:

  • NameNode (HDFS)
  • ResourceManager (YARN)
  • Slaves (DataNode and NodeManager)
  • Client (an example node from which to manually run jobs)
    • Plugin (collocated on the Client)

This bundle also includes the following services for monitoring and analytics:

  • Ganglia (monitoring UI)
  • Flume Syslog & Flume HDFS (collect logs and ingest into HDFS for analysis)

Deploying this bundle gives you a fully configured and connected Apache Hadoop cluster on any supported cloud, which can be easily scaled to meet workload demands.

Deploying this bundle

In this bundle, the components listed above are deployed to separate units. To deploy it, simply run:

juju quickstart apache-processing-mapreduce

See juju quickstart --help for deployment options, including machine constraints and how to deploy a locally modified version of bundle.yaml.
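For example, a locally modified copy of the bundle can be deployed by passing its path directly (the path below is a placeholder):

juju quickstart /path/to/bundle.yaml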

The default bundle deploys three slave nodes and one node of each of the other services. To scale the cluster, use:

juju add-unit slave -n 2

This will add two additional slave nodes, for a total of five.
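Units can likewise be removed if the workload shrinks; as a sketch (the unit name below is illustrative):

juju remove-unit slave/4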

Conjure Up

conjure-up is an interactive, terminal UI deployment tool for Juju bundles.

After installing conjure-up, you can deploy the apache-processing-mapreduce bundle and tweak config values with one command:

sudo apt install conjure-up
conjure-up apache-processing-mapreduce

Refer to the conjure-up documentation to learn more.

Verify the deployment

The services provide extended status reporting to indicate when they are ready:

juju status --format=tabular

This is particularly useful when combined with watch to track the ongoing progress of the deployment:

watch -n 0.5 juju status --format=tabular

The charms for the master components (namenode, resourcemanager) each provide a smoke-test action that can be used to verify that the component is functioning as expected. You can run them all and then watch the action status list:

juju action do namenode/0 smoke-test
juju action do resourcemanager/0 smoke-test
watch -n 0.5 juju action status

Eventually, all of the actions should settle to status: completed. If any instead end up in status: failed, that component is not working as expected. You can get more information about a failed smoke test with:

juju action fetch <action-id>

Monitoring is provided by Ganglia, which is available at http://<ganglia-host>/ganglia, where <ganglia-host> can be discovered from juju status. The logs from all nodes are also ingested into HDFS and can be analyzed using YARN MapReduce jobs.
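As a sketch, one way to discover the Ganglia host, assuming the service keeps its default name of ganglia:

juju status ganglia --format=yaml | grep public-address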

Deploying in Network-Restricted Environments

The Apache Hadoop charms can be deployed in environments with limited network access. To deploy in such an environment, you will need a local mirror to serve the packages and resources required by these charms.

Mirroring Packages

You can set up a local mirror for apt packages using squid-deb-proxy. For instructions on configuring Juju to use this, see the Juju Proxy Documentation.
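As a minimal sketch, once a squid-deb-proxy instance is reachable (the address below is a placeholder; 8000 is the squid-deb-proxy default port), the environment can be pointed at it via the standard apt proxy setting:

juju set-env apt-http-proxy=http://10.0.0.1:8000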

Mirroring Resources

In addition to apt packages, the Apache Hadoop charms require a few binary resources, which are normally hosted on Launchpad. If access to Launchpad is not available, the jujuresources library makes it easy to create a mirror of these resources:

sudo pip install jujuresources
juju-resources fetch --all /path/to/resources.yaml -d /tmp/resources
juju-resources serve -d /tmp/resources

This will fetch all of the resources needed by a charm and serve them via a simple HTTP server. The output from juju-resources serve will give you a URL that you can set as the resources_mirror config option for that charm. Setting this option will cause all resources required by the charm to be downloaded from the configured URL.
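For example, if juju-resources serve reported a URL of http://10.0.0.1:8080/ (address and port are illustrative), the namenode charm could be pointed at the mirror with:

juju set apache-hadoop-namenode resources_mirror=http://10.0.0.1:8080/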

You can fetch the resources for all of the Apache Hadoop charms (apache-hadoop-namenode, apache-hadoop-resourcemanager, apache-hadoop-slave, apache-hadoop-plugin, etc) into a single directory and serve them all with a single juju-resources serve instance.
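A minimal sketch of doing this, assuming a local copy of each charm with its resources.yaml at the charm root (the paths are illustrative):

for charm in apache-hadoop-namenode apache-hadoop-resourcemanager apache-hadoop-slave apache-hadoop-plugin; do
    juju-resources fetch --all $charm/resources.yaml -d /tmp/resources
done
juju-resources serve -d /tmp/resources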

