Udaru

Udaru is a Policy Based Access Control (PBAC) authorization module. It supports Organization, Team and User entities, which are used to build the access model. The policies attached to these entities define the 'Actions' an entity can perform on various 'Resources'.
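
For example, a single policy statement has the following shape (the action and resource names are illustrative):

{
  "Effect": "Allow",
  "Action": "finance:ReadBalanceSheet",
  "Resource": "database:pg01:balancesheet"
}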

See the Udaru website for complete documentation on Udaru.

This repository is home to Udaru's three main modules:

Module Package
@nearform/udaru-core ./packages/udaru-core
@nearform/udaru-hapi-plugin (for Hapi v17 and above) ./packages/udaru-hapi-plugin
@nearform/udaru-hapi-16-plugin (for Hapi v16) ./packages/udaru-hapi-16-plugin
@nearform/udaru-hapi-server (for Hapi v16) ./packages/udaru-hapi-server

Database support

Udaru requires an instance of Postgres (version 9.5+) to function correctly. For simplicity, a preconfigured docker-compose file has been provided. To run:

docker-compose up
  • Note: Ensure you are using the latest version of Docker for your platform (Linux/OSX/Windows)
  • Note: Udaru needs PostgreSQL >= 9.5
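
The bundled file is roughly equivalent to the following sketch (illustrative, not the verbatim file; it assumes the postgres/postgres credentials and port 5432 used elsewhere in this README):

version: '2'
services:
  postgres:
    image: postgres:9.5
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres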

Populate the database

The Authorization database, system user and initial tables can be created by executing:

npm run pg:init

Test data can be added with:

npm run pg:load-test-data
  • Note: Running a test or coverage command will automatically run these commands

Volume data set installation and bench tests

The Authorization database can be further initialized with a larger volume of data, which can then be exercised with autocannon bench tests to demonstrate the potential throughput of the authorization API.

To populate the database with volume data, execute the following command:

npm run pg:init-volume-db
  • Note: Running this command also runs the standard database population commands

All volume data sits under the organization 'CONCH' and has the following default setup:

  • 500 teams
  • 100 users per team (the first user of every 100 being the parent of the subsequent 99)
  • 10 policies per team

After loading the data, the autocannon bench tests can be run by executing:

npm run bench:volume

This will run 15-second autocannon tests, which fire multiple concurrent requests at two frequently used endpoints. The database is queried randomly across the entire data set, giving a good indication of average end-to-end latency and potential requests per second for a database containing 50K users.

pgAdmin database access

Because the PostgreSQL Docker container forwards port 5432 to the local machine, the database can be accessed with pgAdmin.

To connect with pgAdmin, supply the host address along with the database name and access credentials: use 127.0.0.1 as the host and postgres as both username and password. The container's port mapping can be verified with docker ps.
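
Any other Postgres client can connect with the same parameters, for example psql:

psql -h 127.0.0.1 -p 5432 -U postgres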

Migrations

We use postgrator for database migrations. You can find the sql files in the database/migrations folder. To run the migrations manually:

node packages/udaru-core/database/migrate.js --version=<version>
  • Note: Running the tests or init commands will automatically bring the db to the latest version.

For more information, see the Service API documentation.

Setup SuperUser

The init script needs to be run in order to set up the SuperUser: node packages/udaru-core/scripts/init

If you want to specify a custom SuperUser id (the default is SuperUserId), you can prefix the script as follows:

UDARU_SERVICE_authorization_superUser_id=myComplexId12345 node packages/udaru-core/scripts/init
  • Note: if you have already run some tests or loaded the test data, you will need to run npm run pg:init again to reset the db.

Load policies from file

Run the following script to load policies:

Usage: node packages/udaru-core/scripts/loadPolicies --org=FOO policies.json

JSON structure:

{
  "policies": [
    {
      "id": "unique-string", // <== optional
      "version": "",
      "name": "policy name",
      "organizationId": "your_organization" // <== optional, if present will override the "--org=FOO" parameter
      "statements": [
        {
          "Effect": "Allow/Deny",
          "Action": "act",
          "Resource": "res"
        },
        { /*...*/ }
      ]
    },
    { /*...*/ }
  ]
}

Documentation

The Udaru documentation site can be found at nearform.github.io/udaru.

Swagger API Documentation

The Swagger API documentation describes the exposed API. It can be found at nearform.github.io/udaru/swagger/.

It is also possible to access the Swagger documentation from Udaru itself. Simply start the server:

npm run start

and then go to http://localhost:8080/documentation

The Swagger documentation also gives the ability to execute calls to the API and see their results. If you're using the test database, you can use 'ROOTid' as the required authorization parameter and 'WONKA' as the organisation.
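
For example, an access check can also be exercised from the command line; a sketch, where the endpoint path follows the access-check route listed in the Swagger docs and the placeholders must be replaced with values from the test data:

curl -H "authorization: ROOTid" -H "org: WONKA" \
  "http://localhost:8080/authorization/access/<userId>/<action>/<resource>"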

ENV variables to set configuration options

There are three default configuration files, one per "level": packages/udaru-core/config.js, packages/udaru-hapi-16-plugin/config.js and packages/udaru-hapi-server/config.js.

They are cumulative: when running Udaru as a standalone server, all three files will be loaded; when using it as a Hapi plugin, the plugin and core files will be loaded.

This configuration is the one used in the dev environment, and we are quite sure the production one will be different :) To override this configuration you can:

  • provide a config object when using Udaru as a standalone module or Hapi server
  • set ENV variables on the server/container/machine you will run Udaru on

Config object

Standalone module

const buildUdaru = require('@nearform/udaru-core')

// dbPool is a pre-built pg connection pool
const udaru = buildUdaru(dbPool, {
  logger: {
    pino: {
      level: 'warn'
    }
  }
})
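
The returned instance exposes the same operations as the REST API (organizations, users, teams, policies, authorization). A usage sketch; verify the exact method signatures against the udaru-core documentation:

udaru.organizations.list({ page: 1, limit: 10 }, (err, res) => {
  if (err) throw err
  console.log(res)
})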

Hapi plugin

const Hapi = require('hapi')
const UdaruPlugin = require('@nearform/udaru-hapi-plugin')

async function start () {
  const server = Hapi.server()

  await server.register({
    plugin: UdaruPlugin,
    options: {
      dbPool,
      config: {
        api: {
          servicekeys: {
            private: ['123456789']
          }
        }
      }
    }
  })

  await server.start()

  return server
}

Hapi 16 plugin

const Hapi = require('hapi')
const UdaruPlugin = require('@nearform/udaru-hapi-16-plugin')

const server = new Hapi.Server()
server.connection({ port: 8080 }) // hapi 16 needs a connection; the port is illustrative

server.register({
  register: UdaruPlugin,
  options: {
    dbPool,
    config: {
      api: {
        servicekeys: {
          private: ['123456789']
        }
      }
    }
  }
}, (err) => {
  if (err) throw err
})

ENV variable override

UDARU_SERVICE_security_api_servicekeys_private_0=jerfkgfjdedfkg3j213i43u31jk2erwegjndf

To achieve this we use the reconfig module.
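
Assuming reconfig's env naming convention, the variable name maps mechanically onto the nested config path, with underscores separating keys and a trailing number indexing into an array:

UDARU_SERVICE_security_api_servicekeys_private_0 -> config.security.api.servicekeys.private[0]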

Testing, benching & linting

Before running tests, ensure a valid Postgres database is running. The simplest way to do this is via Docker. Assuming docker is installed on your machine, in the root folder, run:

docker-compose up -d

This will start a Postgres database. Running test or coverage runs will automatically populate the database with the information it needs.

  • Note: you can tail the Postgres logs if needed with docker-compose logs --tail=100 -f

To run tests:

npm run test
  • Note: running the tests will output duplicate key errors in the Postgres logs; this is expected, as the handling of those errors is part of what is tested.

To lint the repository:

npm run lint

To fix (most) linting issues:

npm run lint -- --fix

To run a bench test on a given route:

npm run bench -- "METHOD swagger/route/template/path"
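
For example, to bench the organization listing route (an illustrative route template; use the templates exactly as they appear in the Swagger docs):

npm run bench -- "GET /authorization/organizations"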

To create coverage reports:

npm run coverage

To populate the database with large volume of data:

npm run pg:init-volume-db

To run the bench tests against the populated volume data (two endpoints):

npm run bench:volume

For convenience, you can load the volume db and run the bench tests with a single command:

npm run bench:load-volume

This command will:

  • initialise the db & migrate to latest db schema
  • load the standard test fixtures
  • load the volume fixtures
  • spawn an instance of udaru server
  • run the autocannon tests & display results
  • shut down

Security

Udaru has been thoroughly evaluated against SQL injection; a detailed description can be found in the SQL Injection document.

To automatically run sqlmap injection tests run:

npm run test:security
  • Note: before running this, make sure you have a version of Python 2.x installed in your path.

These tests are not included in the main test suite. The security test spawns a hapi.js server exposing the Udaru routes; it only requires the database to be running and initialized with data.

The injection tests can be configured in the sqlmap config. A few configuration options that can be changed:

  • level can be set to 5 for more aggressive testing
  • risk can be set to 3 for more testing options. Note: this level might alter the DB data
  • verbose can be set to level 1-5. Level 1 displays info about the injections tried

See the sqlmap repository for more details.

Udaru also has additional security-related (penetration) testing available through npm commands, based on the OWASP Zed Attack Proxy. The end results of the scans are stored as HTML reports in the Udaru documentation and should be reviewed manually after execution.

Note: before running this, make sure you have Docker installed; the weekly Zed Attack Proxy image is large (1.5 GB+) and may take a while to download. Also note that the API scan is very thorough and extensive and takes quite some time to complete (45+ minutes).

To run the baseline scan:

npm run test:security:pentest:baseline

To run the API attack scan:

npm run test:security:pentest:api

To run both:

npm run test:security:pentest

License

Copyright nearForm Ltd 2017-2018. Licensed under MIT license.

udaru's Issues

suggestion: refactor use of the pg pool

In https://github.com/nearform/labs-authorization/blob/master/service/lib/userOps.js, a good chunk of code is used to get a connection from the pool.

If you use https://github.com/mcollina/with-conn-pg, you can have all of this code handled automatically.

From

function listAllUsers (pool, args, cb) {
  pool.connect(function (err, client, done) {
    if (err) return cb(err)
    client.query('SELECT * from users ORDER BY name', function (err, result) {
      done() // release the client back to the pool
      if (err) return cb(err)
      return cb(null, result.rows)
    })
  })
}

to

var connString = 'postgres://localhost/with_conn'
var withConn = require('with-conn-pg')(connString)

var listAllUsers = withConn(function listAllUsers (client, args, cb) {
  client.query('SELECT * from users ORDER BY name', function (err, result) {
    if (err) return cb(err)
    return cb(null, result.rows)
  })
})

Less code, fewer things that can break.

EDIT: updated to with-conn-pg v2.0.0.

Ensure that organisations are correctly implemented

Current implementations of users, teams and policies do not take much account of organisations, as that feature has yet to be fully implemented, although key columns exist in the database.

The current codebase should be updated to fully take organisations into account - effectively limiting users, teams and policies to a single organisation, with the organisation determined by the admin using the system. This also needs to cope with a superadmin working on a particular organisation's information.

This includes adjusting the test data

Part of Epic #3

Database Technology Spike

As a developer
I want to be able to choose from multiple database technologies for the back-end Authorizations database and the technology to connect to it
So that I can meet customer or project standards or requirements

Standardise on a single test framework

Currently the front-end and back-end use different test frameworks. Only a single framework should be used, namely the one used by the boilerplate.

Ensure consistent API response codes and return values

The different resources within the API need to be consistent in their use of response codes and return values.

The user resource is the reference example - compare the policy and team resources to ensure that they match and change them as necessary
e.g. DELETE can return 204 with a result, or 410 or 500 with an error

The user API can also be changed if it is found to be incorrect/inadequate but please check first with Michael that this won't break anything in the UI

How should the nesting of teams be modelled in the database?

[If this proves hard, then consider the simpler mechanism of implementing a 'copy team' function]

The teams structure needs to work for the UI - for any team find its children and its parent.

It also needs to work for authorisation resolution - for any user, find all the policies which apply to their teams (TBD whether the level of the team hierarchy at which the policy applies makes a difference - i.e. does a leaf policy override a branch one?)

Mihai's findings:

From what I read, it seems to me that we have 3 main options:

  • with the current adjacency model (each node has a reference to its parent) we could build the JSON tree in JS. Here I must check with Nathan whether the client needs a JSON structure
  • there is a complex nested set model: https://en.wikipedia.org/wiki/Nested_set_model. I wouldn't use it
  • adjacency with lineage: http://www.sqlteam.com/article/more-trees-hierarchies-in-sql. This solution seems simple and can handle complexity. It needs to be reviewed against our use cases and how it handles our CRUD operations

Actually I would go with the adjacency model but do the processing on the client

additional simplified link: http://skillfulness.blogspot.ro/2010/11/my-approach-to-hierarchal-data-in-sql.html
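
For reference, a minimal sketch of the adjacency model in Postgres, with a recursive CTE to fetch a team's subtree (hypothetical table and column names):

CREATE TABLE teams (
  id        SERIAL PRIMARY KEY,
  parent_id INTEGER REFERENCES teams (id), -- NULL for root teams
  name      TEXT NOT NULL
);

-- all descendants of team 42, starting from the team itself
WITH RECURSIVE subtree AS (
  SELECT * FROM teams WHERE id = 42
  UNION ALL
  SELECT t.* FROM teams t JOIN subtree s ON t.parent_id = s.id
)
SELECT * FROM subtree;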

Create better policy test data

For the demo on 27/10 we need a half-decent set of policy test data that will allow us to clearly demonstrate that the authorization service can give accurate responses to permission checks

It is currently loaded in scripts/init/database/testdata/loadPolicies.js

The data set needs externalising into a separate file (it is currently directly in the code)

It should be edited and extended so that users have policies that make logical sense and can be demonstrated to give different permissions to each other

Only one organisation needs to be involved, i.e. all users and policies can belong to the same org

Ensure that authorization look-up will be fast

Make sure that the two main authorization functions will not be unduly slow and that they will lend themselves to being optimised for performance, e.g. ensure policies element of database structure will be readily cacheable if required in future.

Part of Epic #75

How can we authorise superusers?

Work out how to authorise the superusers

Can we 'eat our own dogfood' and use the 'Authorisation' module for this?

What is the bootstrap method, i.e. how to get the first superadmin in, who can then add other superadmins?

Part of Epic #3

As a developer I need to ensure that data is validated...

...so that the database is not compromised, e.g. by SQL injection attacks

Validate user-supplied and external data at service wrapper layer and possibly also at API layer, to make sure it meets the known data format. (bearing in mind that the UI will also have to validate, to give user feedback)

Part of Epic #65
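
For illustration, validation at the service wrapper layer could look like the following Joi sketch (hypothetical schema and field names, not Udaru's actual validation code):

const Joi = require('joi')

// hypothetical schema for a user payload
const userSchema = Joi.object().keys({
  id: Joi.string().max(128),
  name: Joi.string().required(),
  organizationId: Joi.string().required()
})

// validate before the data reaches the database layer
const { error } = Joi.validate({ name: 'Mary', organizationId: 'WONKA' }, userSchema)
if (error) {
  // reply with a 400 instead of running the query
}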

Switch to mu-error for Error passing

  1. check that mu-error is suitable for Authorization and can be used for general error handling
  2. replace the current error handling and passing - currently partially implemented, with a workaround using strings where needed

https://github.com/apparatus/mu-error

  • this wraps the hapi Boom object - so it's 100% compatible with hapi reply
  • it's going to be used internally in mu
  • the idea is that the error will eventually propagate to a reply handler but with additional mu error context

Each module in the service should export a function

As recommended by Matteo:

Each module in the service should export a function, which you can pass options to, that will act as a factory for that module. In this way, you can avoid passing a logger instance (and pool) to each function call.
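
A minimal sketch of that factory pattern, reusing the listAllUsers example from the pg pool issue above (hypothetical module layout):

// userOps.js - hypothetical factory module: shared deps are injected once
function buildUserOps ({ pool, logger }) {
  function listAllUsers (args, cb) {
    pool.connect(function (err, client, done) {
      if (err) return cb(err)
      client.query('SELECT * FROM users ORDER BY name', function (err, result) {
        done() // release the client back to the pool
        if (err) return cb(err)
        cb(null, result.rows)
      })
    })
  }

  return { listAllUsers }
}

module.exports = buildUserOps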
