
govflow's Introduction

Gov Flow


An open, modular work order and workflow management system for local governments and resident satisfaction.

What we are building

Our aim is to build an open source solution that addresses the following aspects of government service delivery:

  • 311 request management ("there is a pothole in my street")
  • general questions and comments from the public ("what time does the library open")
  • "internal" request management ("there is a water leak on the 4th floor of town hall")
  • centralized views of request management status and analytics ("what is our work volume at present and how do we perform over time?")

For smaller cities, we'd like to be able to meet all their work management needs with Gov Flow. For larger cities, we intend to build out integrations with existing CRMs and ticket management software.

We have a focus on expanding "input channels" for requests. The API server currently supports API calls for new requests (used by our web form), and an endpoint for inbound emails. We plan to expand to SMS, chatbots, and potentially other entry points such as audio messages, and social media apps. If you have specific interest or use cases in this area, please talk to us by starting a discussion.

How we are building it

We are currently a small team within Zencity. We have a high-level roadmap, and we prioritize features based on real use cases with our early adopter users.

The API server (this codebase) is open source. We have some UI components that we have not yet open sourced. If you are interested in Gov Flow and need some pointers on building UI please open a ticket or a discussion and talk to us about your needs.

Get involved

  • Discussions: We discuss new features, architecture and so on in our Discussions forum. Feel free to take part in existing discussions or start a new one.
  • Issues: If you find a bug, or have some input on the codebase, feel free to open an issue.

Getting started

Using Gov Flow

Install Gov Flow from npm:

npm install @govflow/govflow

If you are not modifying Gov Flow with plugins, or embedding Gov Flow into an existing application, you can run the default server. First, you'll need to have a database to connect to and run migrations against:

Ensure database:

createdb govflow

Ensure your new database is declared in an environment variable:

# similar to the following
DATABASE_URL=postgres://<YOUR_USER>@localhost:5432/govflow

Ensure object storage:

GovFlow stores files on an S3-compatible object storage backend. Install minio and set the STORAGE_ environment variables appropriately.

Run migrations:

npx govflow-migrate up

# migrate backwards with npx govflow-migrate down
# govflow-migrate is a wrapper around Umzug so see:
# https://github.com/sequelize/umzug

Run the default server:

npx govflow-start

You can then visit localhost:3000/ in your browser to see the base API endpoint.

Other CLI commands:

There are some other CLI commands available when Gov Flow is installed. The API for the CLI will change, but for now the following are available:

npx govflow-start
npx govflow-migrate
npx govflow-generate-fake-data
npx govflow-send-test-email
npx govflow-send-test-sms
npx govflow-send-test-dispatch

Create a custom entrypoint:

If you plan to modify Gov Flow with plugins or any custom configuration or integrations, create your own entrypoint based on the following:

// my-govflow-extension/config.ts
import { MyServiceRepositoryPlugin } from './repositories';

export const plugins = [MyServiceRepositoryPlugin]; // your plugins here.
export const config = {}; // your config here.

// my-govflow-extension/index.ts
import type { Server } from 'http';
import { createApp } from '../index';
import logger from '../logging';

async function defaultServer(): Promise<Server> {
    process.env.CONFIG_MODULE_PATH = './my-govflow-extension/config.ts';
    const app = await createApp();
    const port = app.config.appPort;
    return app.listen(port, () => {
        logger.info(`application listening on ${port}.`)
    });
}

export default defaultServer;

Note: See src/servers for examples of ready-to-go server configurations.

Note: createApp is a factory function that takes custom configuration, and returns an Express.js app instance. See Customization for further information on this and other configuration entry points.

Developing Gov Flow

Clone the codebase:

git clone https://github.com/govflow/govflow.git

View the primary runnable tasks:

make

Install dependencies:

make install

Also ensure that you have Postgres running, and create a database called govflow for use.

Lint code:

make lint

Run tests:

make test

Customization

Configuration

Provide a path to your custom configuration module via process.env.CONFIG_MODULE_PATH. See src/config for how this is read. You can provide new config and overwrite existing config.

Entry point

The primary entry point is the createApp factory function defined in src/index.ts.

createApp calls initConfig which initializes the system correctly. All tools, such as the migration and fake data generator scripts, need to call initConfig as part of their initialization flow.

createApp returns an Express.js Application instance, app. app is provisioned with easy access to configuration at app.config, registered plugins at app.plugins, and repositories at app.repositories (repositories are used for all data access). The database is also configured via createApp, and is usable after createApp has been called by importing databaseEngine from src/db.

Extensibility

Gov Flow is designed to be shaped for specific use cases and system integrations. Existing behavior can be modified or extended via plugins. Over time, such customizations will be available as extensions, downloadable via npm, contributed by the core maintainers and the wider community of users.

Plugins

Gov Flow exposes a number of interfaces that can have their behaviour customized via plugins. All interface abstractions that can be customized in this way are prefixed with I and are bound to a concrete implementation via IoC containers instantiated in the registry module. The default concrete implementations are part of the src/core module - see the Module overview below for further information on the organization of the codebase.

In the current release, repository interfaces can be customized via plugins. Repositories are a data access abstraction layer - see the tests for examples of how they are used, and how custom implementations can be provided.

Future releases may see interfaces that can be customised via plugins for routes, models, and so on.

Providing a plugin

Provide a path to your custom configuration module via process.env.CONFIG_MODULE_PATH, and from that module export a member plugins which is an array of Plugin types. See examples in the test suite, and see where implementations are bound in src/registry.
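As a hedged sketch, a config module that provides a plugin might look like the following. The Plugin shape shown here (a service identifier plus an implementation) is an assumption based on the Inversify-backed registry described above; check src/registry and src/types for the real contract.

```typescript
// config.ts for a Gov Flow deployment.
// The Plugin fields below are illustrative, not the shipped type.
interface Plugin {
    serviceIdentifier: string;
    implementation: unknown;
}

// A hypothetical custom repository overriding the default
// service request data access.
class MyServiceRequestRepository {
    // ... custom queries here
}

export const plugins: Plugin[] = [
    { serviceIdentifier: 'IServiceRequestRepository', implementation: MyServiceRequestRepository },
];

export const config = {}; // any config overrides here
```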

Models

Providing a model

See the tests for examples of modifying a core model and/or providing an additional model.

Note: It is likely that the Plugin concept will be expanded in the future to encapsulate the provision of custom models, and other aspects of the system like routers, so this current API is likely to change.

Modules

This is a short overview of the modules under src in the codebase to help you get oriented, especially while documentation is sparse. Please also review the tests; all functionality of the codebase is on display in the test suite.

config

Provides a configuration object to store various config for the system. The configuration object is backed by nconf.

core

This provides the core functionality of the system. Here you will find a submodule for each core entity, with its routes, repositories, models, and any business logic.

db

Provides a database engine object, with verification of the connection, syncing of tables, migration of schema changes, and registration of custom models. Sequelize is used.

logging

Provides a logger. Winston is used.

migrations

Stores migration directives for Umzug, Sequelize's tool for programmatic migrations.

registry

Provides registration and implementation of custom components for the system. Implementation is managed via Inversify IoC containers.

servers

Provides server configurations for the app.

types

Provides all types that the system declares.

default

Provides the default server configuration to run the system with no customization.

index

The entry point for the system, providing the createApp factory function to initialize and configure an app.

Communications

An important part of any workflow management or 311 system is communication with public users who submit requests, and staff users who handle requests. Gov Flow currently supports email and SMS for such transactional messaging.

SendGrid is used as the email backend, and Twilio is used as the SMS backend. You will need to setup accounts and credentials at both providers to send messages from GovFlow.

Configuration

The following configuration variables need to be set on the environment:

  • SENDGRID_API_KEY
  • SENDGRID_FROM_EMAIL
  • TWILIO_ACCOUNT_SID
  • TWILIO_AUTH_TOKEN
  • TWILIO_FROM_PHONE

The following configuration variables are not required for messaging to work, but are required for message templates to be meaningful:

  • APP_CLIENT_URL
  • APP_PUBLIC_EMAIL

An additional configuration variable allows bypassing the backends and sending messages to console for testing and development (see below):

  • COMMUNICATIONS_TO_CONSOLE

Manual messaging to verify the backends

make send-email, make send-sms, and make send-dispatch can be used to send test messages. You will need to set the TEST_TO_EMAIL and TEST_TO_PHONE environment variables to receive these messages. All credentials will need to be properly set for these manual tests to work. make send-dispatch sets COMMUNICATIONS_TO_CONSOLE to undefined to force usage of the backend provider, and it uses the higher-level dispatchMessage function that is used by the dispatch handler in the app, rather than the low-level sendSms and sendEmail functions.

Messages to console

Important: if the COMMUNICATIONS_TO_CONSOLE environment variable is set to any truthy value, then SMS and email messages will be logged to the console and will not be sent to the backend service providers. We highly recommend setting this environment variable for local development, and in particular for running tests. Even if all backend provider credentials are set, if COMMUNICATIONS_TO_CONSOLE is truthy, then they will not be used to send messages.

File support

Gov Flow supports files (usually images) being added to service requests. File support is a client/server architecture where the actual file storage is on an S3-compatible object storage (for object storage solutions that are not S3-compatible, namely Azure Blob Storage, Minio can be used as a gateway to provide the required API).

The flow

  • A client (such as a service request submission form, or an administration dashboard for 311 work management) provides functionality to allow users to (i) upload, or (ii) view, a file.
  • The client submits the file name to the appropriate Gov Flow storage endpoint to either (i) get a presigned PUT url, or (ii) get a presigned GET url.
  • The server responds with a URL, time limited in usage, to perform the appropriate PUT or GET request from the client.
  • The client performs the appropriate action with the new url.
  • In the case of allowing users to upload files, for example for a new service request, the client submits the service request payload with an array of image URLs, not actual images.
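The steps above can be sketched from the client side. The endpoint path and query parameters here are assumptions for illustration, not the actual Gov Flow API:

```typescript
// Build the URL a client would call to request a presigned URL.
// 'put' is for uploads, 'get' for viewing; the path is hypothetical.
type PresignAction = 'get' | 'put';

export function presignEndpoint(baseUrl: string, action: PresignAction, filename: string): string {
    const params = new URLSearchParams({ action, filename });
    return `${baseUrl}/storage/presign?${params.toString()}`;
}

// With a presigned URL in hand, the client would then (pseudo-code):
//   const { url } = await (await fetch(presignEndpoint(base, 'put', name))).json();
//   await fetch(url, { method: 'PUT', body: fileBytes });
// and finally submit the service request payload with the object URL(s).
```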

Configuration

The following environment variables are required:

  • STORAGE_BUCKET (default 'govflow_uploads')
  • STORAGE_REGION (default 'us-east-1')
  • STORAGE_SSL (default 1, which is cast to true)
  • STORAGE_PORT
  • STORAGE_ENDPOINT
  • STORAGE_ACCESS_KEY
  • STORAGE_SECRET_KEY
  • STORAGE_SIGNED_GET_EXPIRY (minutes until presigned GET urls expire)
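For example, a local setup pointing at a minio instance might export values like the following (all values are illustrative placeholders, not recommendations):

```shell
# Example local storage config for development against minio
export STORAGE_BUCKET=govflow_uploads
export STORAGE_REGION=us-east-1
export STORAGE_SSL=0                 # plain http for local minio
export STORAGE_PORT=9000
export STORAGE_ENDPOINT=localhost
export STORAGE_ACCESS_KEY=minioadmin
export STORAGE_SECRET_KEY=minioadmin
export STORAGE_SIGNED_GET_EXPIRY=15  # minutes until presigned GET urls expire
```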

Testing and development

The simplest solution is to run an instance of minio locally. See the GitHub workflow, which runs a minio instance for the test suite, and see the Minio documentation at https://min.io

govflow's People

Contributors

amirozer, pwalsh


govflow's Issues

Preventing spam requests

Spam is very likely to occur very early, and I’d suggest we integrate a form of human validation from the beginning for all anonymous users (so, all users, as we will not have logged-in users from the beginning). That can be one of:

  • reCaptcha
  • hCaptcha (I’m using this in a couple of places as I prefer not to use google)
  • email or phone ping (get the user to click a verify link sent via SMS or email, depending on the data she has added to the form).

The third option is the most complex, but it also plays into our value proposition of facilitating communication between the city and citizens, and, when the requester is anonymous, it is the only way we know the details of the requester are valid (so we meet two needs - verify humanness and verify the human is contactable).

Key modules

We are currently proposing the following key modules for this system. Dropping this here for general knowledge, and we'll update as we go along.

  • Authentication
    • Pluggable
  • Authorization (permissions and roles)
    • Pluggable
  • Services
    • Configurable (templates + user-generated)
  • Service Request
    • Core only?
  • Request Storage
    • Pluggable
  • Event Log
    • Core only?
  • Analytics
    • Pluggable
  • Input
    • Several core modules
    • Pluggable
  • Management (super user etc.)
    • Core
  • Workflow
    • Configurable (templates + user-generated)
  • Preprocessing
    • Several core modules
    • Pluggable
  • Communication (responders, channels, backends)
    • Pluggable
  • Geocoding + Mapping
    • Pluggable

Explicitly create log of events related to each request

We have a number of state changes to requests that do not have data about the change persisted.

For example: change status, change assignee.

We should have a record of every change made to a request, including what the change was, and who made the change.

We can do this in a simple manner by "abusing" the RequestComment data and writing additional events. This maps well with how we display such data in the UI we have.

For these new "automated" comments we will add as records of change, we can write them to a request comment using one or several templates.

Naming interfaces and types

It is not idiomatic TypeScript to prefix interfaces with I (see here and here, and in general, any of the types from @types), and so on creation of this codebase, I didn't carry over the general usage of I-prefixed interfaces from the internal codebase it is based on.

As documented, I did retain I prefixes for a special meaning - as a way to indicate interfaces that can be overridden by plugins.

Some recent additions used the I prefix for non-pluggable interfaces. Let's discuss, so that the codebase remains consistent.

Initial auth implementation

Relations

The Client object represents a jurisdiction (we may in future change this so that a client can have many jurisdictions, for the county > many cities use case).

Clients have one or many StaffUser models, where each represents a user with admin rights (at this stage, we have no permission system for more complex rules). And, each StaffUser only belongs to one Client.

Authorization

A StaffUser logs in, and once verified, we therefore know both the user and client objects for each request (putting aside for now whether we store this in a session, in some other type of token like a JWT, or pass the username and password in the header of each request to verify identity, and therefore the user and the client for the request).

Headless integration

Currently, we are running an API that talks to an internal frontend, and that frontend can query other systems to get both the client and the user objects. So, the issue for the near term is: what is the most direct way to do the integration? We can then build on this to contain proper auth logic in this app.

Suggestion:

  • The client makes requests with two custom headers:
    • X-Zen311-Client
    • X-Zen311-Username

The user is already logged in so we do not need to auth again on the API, as we trust the execution environment for the short term.

From these headers, we query the database and set req.user, which will have the full user object, including the client accessible at req.user.client. This will be done in middleware here: https://github.com/pwalsh/zen311/blob/main/src/core/accounts/middlewares.ts#L3
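A hedged sketch of that middleware logic, reduced to a pure function so the lookup is swappable (findStaffUser-style lookups and the StaffUser shape are illustrative; Express lowercases incoming header names, hence the lowercase keys):

```typescript
// Resolve the acting user from the two trusted custom headers.
interface StaffUser {
    username: string;
    client: string; // the client (jurisdiction) the user belongs to
}

type HeaderMap = Record<string, string | undefined>;
type StaffUserLookup = (client: string, username: string) => StaffUser | null;

export function resolveUser(headers: HeaderMap, lookup: StaffUserLookup): StaffUser | null {
    // Express lowercases incoming header names.
    const client = headers['x-zen311-client'];
    const username = headers['x-zen311-username'];
    if (!client || !username) return null; // no headers, no user
    return lookup(client, username);
}
```

Real middleware would call this with a repository-backed lookup, set req.user from the result, and pass control to next().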

Next steps

The above approach should enable us to get working with the current frontend.

After this, once we are moving, we properly implement, via passport.js:

  • http basic auth where the current client can correctly authenticate on each request, with user credentials from the execution environment
  • OAuth with tokens login for a proper login system that supports the API more generally and future UI work we do, including authorization for actions, as well as authentication

@amirozer WDYT?

SMS communication

Just as we currently support both inbound email for service requests and two-way communication around service requests, we are now implementing the same feature set and UX (to the extent reasonable) via SMS.

This integration is possible due to the extensive APIs that Twilio provides for SMS and phone number management (similar to how we leverage SendGrid, also a Twilio product, for email). Therefore, SMS support requires a direct dependency on Twilio - we are not designing an abstraction for potential use with other SMS backends.

At a high level, we plan to expose the following features:

  • Enable submission of new service requests via SMS. The submitter sends a message to a dedicated Phone Number for a given Jurisdiction.
    • We can support an arbitrary number of Phone Numbers for a Jurisdiction, and Phone Numbers are routed into the system using the existing Routing Table with the same routing rules that are currently available for Inbound Emails (ref). This means that, if needed, there can be Phone Numbers for routing to a specific Department, a specific Service, a specific Assignee, and any combination thereof.
  • When a Service Request is made via SMS, and, when a Service Request is submitted with Phone Number information, allow two-way communication around the Service Request with the Submitter.
    • Still deciding: Issue a dedicated Phone Number per Service Request - provides identical UX and data management flows to the current email integration, OR, have a "single" Phone Number, and a (more complex) disambiguation process to ensure two way communication is routed correctly to a given existing Service Request (will write about this a bit further below) Moved to #68
  • All the same communication flows that currently exist for Email can apply for SMS - for the server, messages are dispatched via essentially the same data flow, and the difference is "only" the broadcast channel. This provides further basis for the development of more broadcast and input channels (WhatsApp, etc.)
  • Jurisdictions can control if and how messages are sent to submitters, and if two-way communication is enabled. The flags for this on the Jurisdiction Model that currently exist, work in the same way for SMS as they do for email (and any other future communication channel).
  • Two-way SMS is not only with Submitters. GovFlow should support the exact same flow with Staff Users (again, a near 1:1 feature parity with email). However, we are still a bit undecided on this for our current use cases. Will update as we progress with implementation - may not fully expose this, or, hide it via a Jurisdiction config. - agreed we are not doing this for now, but technically, the server supports Staff Users the same as submitters
  • Unsubscribe management - Just like we integrate with SendGrid to keep up to date on which recipients have unsubscribed, bounced, etc., we need the same or similar routines for SMS, using the ChannelStatus model. - moved to #69

References

Some links to docs on the inbound email, and two-way email, implementation:

Queues (for communications and other)

We don't have a queue for dispatching tasks. Currently, the only area where we could really use one is dispatching email and SMS messages via API to the backend providers. The initial implementation of that uses Node's event emitters to dispatch these jobs, but there is no management of such jobs, like retries.

It would be good if we could have a simple queue framework that uses Postgres, to reduce the out-of-the-box dependencies for the codebase, but probably we'll go for a RabbitMQ- or Redis-backed queue framework.
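The missing piece the issue describes, job management such as retries, can be sketched in-process. This is only an illustration: a real implementation would persist jobs (e.g. in Postgres, claiming rows with SELECT ... FOR UPDATE SKIP LOCKED) or hand them to a RabbitMQ/Redis-backed framework.

```typescript
// Run a job, retrying on failure up to maxAttempts times.
// In-process only: nothing is persisted, so a crash loses the job.
export function runWithRetry<T>(job: () => T, maxAttempts = 3): T {
    let lastError: unknown;
    for (let attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
            return job();
        } catch (err) {
            lastError = err; // remember the failure and retry
        }
    }
    throw lastError;
}
```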

Modelling Services

What we did have:

Until 0.0.15-alpha services were modeled in a parent-child hierarchy, the idea being that:

  • This allows public users to assign requests at different levels of the hierarchy (don't make the public user think too much if they are reporting a 15L bin or a 30L bin, just let them report a rubbish issue)
  • This allows us to model service types that are not strictly 311 related (have different branches for 311 and non-311 queries)

There were tests at the API endpoint and repository levels that showed serialization to/from open311 group to this parent-child data model was working (for some definition of working).

I came to this model from:

  • my reading of the Open311 spec (which has no hierarchy, but has a "group" as a container for services that can't itself be used as a service), and looking at example services from data we have, which seems to me to have an implicit or explicit hierarchy
  • the point that we want to accept requests that are not strictly 311 related, and having a branched and hierarchical service entity supports that

What we now have:

This was changed in 0.0.15-alpha back to a model that directly follows the Open311 approach of a "flat" container group. I don't really know why - perhaps there was an edge case, or some behavior in the existing client that was not identical to the test cases we had.

Whatever the case, I think this highlights that we need to discuss and design the Service entity a bit to ensure we model it in a way that supports Open311 but is not restricted by it.

**Some thoughts:**

  • Open311 is a data interoperability format - we should be able to read and write Open311, but there is no reason that we need to restrict our data model to what Open311 dictates (especially as the standard is over 10 years old and we want Gov Flow to be 311 + more). That means we should be able to losslessly ingest Open311, and we should be able to export our data as Open311 with an acceptable degree of loss (example: as we currently have, we support multiple images for service requests, but Open311 supports a single image per service request)
  • After thinking about it further since the change, I still think the parent-child data model, or a version of it, is close to the correct way to model Service data, primarily because of my first two points above, being (i) this allows public users to assign requests at different levels of the hierarchy, and (ii) this allows us to model service types that are not strictly 311 related (have different branches for 311 and non-311 queries, for example). It would be great to discuss different approaches.
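The lossy-export point can be sketched like this. The internal request shape is illustrative; the output fields follow the Open311 GeoReport v2 naming, where a service request carries a single media_url:

```typescript
interface InternalRequest {
    id: string;
    images: string[]; // we support multiple images per request
}

interface Open311Request {
    service_request_id: string;
    media_url?: string; // Open311 allows a single media URL
}

// Export to Open311 with an acceptable degree of loss:
// only the first image survives the export.
export function toOpen311(req: InternalRequest): Open311Request {
    return {
        service_request_id: req.id,
        media_url: req.images[0],
    };
}
```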

@amirozer @idoivri be great to get your thoughts here, and/or discuss this further in person.

Transactional email and SMS

We need to message with the public, and also with admin users, via email and optionally via SMS.

We will do this by integration with SendGrid and Twilio. Feature flags will prevent certain functionality if SendGrid and/or Twilio SMS is not configured.

Support request submissions by email

We now have clear user use cases for submitting requests by email.

As a first iteration, we will use an inbound email processing service provided by SendGrid, which we use for transactional messaging:

https://docs.sendgrid.com/for-developers/parsing-email/setting-up-the-inbound-parse-webhook

Email addresses of users of Gov Flow can be configured to forward to email addresses we designate; these addresses will take and process the emails (in the simplest way), which will then be ingested into Gov Flow, specifically into the request inbox, where they can be further processed manually.

Possible enhancements for message disambiguation

Two-way SMS requires some manual disambiguation message flows between the server and the submitter. See dfc6214 and https://github.com/govflow/govflow/blob/main/docs/two-way-email-and-sms.md

With the way this works now, if the submitter simply never responds to a disambiguation message, then her original message never becomes a service request in any way (the request message IS persisted in the MessageDisambiguationModel, though, along with other data about the state of the disambiguation process).

Two ways (non-exclusive) that we could enhance this current implementation are:

  1. Send scheduled reminders, possibly with a limit on total reminders, to the submitter until she responds.
  2. Provide user interfaces for staff users to inspect the messages in the disambiguation queue, and process them manually.

The second option would involve a large amount of work at the UI level, but would leverage the existing data flows that support the submitter disambiguation process.

The first option would be simpler and just require us to keep track of the # of times we sent a reminder.

Managing communication status

As per #5 and specifically #5 (comment) we currently rely on the communication backends (Twilio and SendGrid) to manage unsubscribes, and Gov Flow is not aware of a given user's subscription status.

Next step around this feature would be one of:

  • Manage all unsubscribe state in Gov Flow itself
  • Poll those backends periodically to update our own tables with subscription state, but just as a copy of what is held by the backend service provider

I prefer the first option (less coupling with the service provider), but either should be fine.

Needs a bit more scoping to see what is best and when we do it.

Migrations

Currently we just call Sequelize's sync to create the tables if they don't exist.

Umzug, the Sequelize tool for programmatic migrations, is installed and loosely set up, but not working at present.

Unfortunately, with Sequelize we need to write up/down migrations by hand and maintain them in addition to model definitions.

And, because we support custom models we also need to support user provided migrations.

This is all possible with the tools we have, and we will implement it after a few more model changes are made to the current alpha codebase.
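A hand-written up/down pair of the kind Umzug would run might look like the following sketch. The QueryInterface here is reduced to the two calls used, and the table/column names are hypothetical; real migrations would use Sequelize's actual queryInterface and DataTypes.

```typescript
// Minimal stand-in for the parts of Sequelize's QueryInterface we use.
interface QueryInterfaceLike {
    addColumn(table: string, column: string, spec: { type: string; allowNull: boolean }): void;
    removeColumn(table: string, column: string): void;
}

// One migration: add a nullable column going up, drop it going down.
export const migration = {
    name: '0001-add-department-to-service-request',
    up: (qi: QueryInterfaceLike) =>
        qi.addColumn('ServiceRequest', 'department', { type: 'STRING', allowNull: true }),
    down: (qi: QueryInterfaceLike) =>
        qi.removeColumn('ServiceRequest', 'department'),
};
```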

Allow editing of a service for a service request

We want to allow UIs to have an editable service for a given request, as the public user who submitted a request may have mis-assigned it, for example.

We need to add a new action endpoint, like with editing status or assignee, for editing service.

Indicating that a ServiceRequest is associated with a department

We've seen that some existing systems being used for 311-type requests allow "assigning" or "associating" a request with a government department, for example, "Public Works".

Initial implementation

We want to model this, starting with a really simple use case, being, the ability to associate a ServiceRequest with a department.

This initial implementation will support the following scenario:

  • When a new service request hits the inbox, the staff users running the inbox can optionally associate a request with a department
  • The StaffUser who is assigned to the request does not have to belong to the department - she is just a staff user who is overseeing the request in the Gov Flow dashboard, not the person actioning the fix for the request
  • The department for the service request can be added/changed at any point as long as a ticket is not closed
  • The view of service requests can be filtered by department (and, filtered by those not assigned to a department)

For each jurisdiction, we require:

  • A list of department names

The initial implementation essentially annotates a service request with a department from a closed list of departments for a given jurisdiction. This annotation can be used in user interfaces.

Implementing this requires the following changes to the code:

  • A new model / repository for "Department"
  • An association between service requests and departments, where each service request can be associated with a department
  • Allow, but don't require, associating a service request with a department
  • If there are no departments for a given jurisdiction (i.e.: the jurisdiction does not provide one), then, don't show the department option in the UI (no server code paths need to change for this)
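The closed-list rule above is small enough to sketch as a pure check (the function name and shapes are illustrative, not the shipped API):

```typescript
// A department annotation is optional, and when present must come from
// the jurisdiction's closed list of department names.
export function canSetDepartment(
    jurisdictionDepartments: string[],
    department: string | null,
): boolean {
    if (department === null) return true; // association is never required
    return jurisdictionDepartments.includes(department);
}
```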

Probable future use cases

All of these would require additional logic and relations not present in the initial implementation.

  • Assigning tickets to staff users who belong to a department
  • Direct mapping between a request's "service" and a department(s?) that handles that service
  • Reassignment to new department/associated staff users
  • Workflow for todo/doing/done that is department specific

Support email as an inbound channel

We have started with a standard webform, but the goal is to support many inbound request channels.

We have a need now to support email, so, email is next!

We already depend on SendGrid for transactional email, so, we will build this feature using SendGrid's inbound parse API:

https://docs.sendgrid.com/for-developers/parsing-email/setting-up-the-inbound-parse-webhook

At a high level, this feature will require the following:

  1. The GovFlow instance administrators set up all the MX config as per SendGrid, and any other SendGrid specific config
  2. We expose a new endpoint on GovFlow for SendGrid to post inbound emails to.
  3. Based on the domain configuration, we establish a pattern of issuing inbound email addresses per jurisdiction, and (i) for parsing association with existing requests from the inbound payload (is an issue new or further correspondence on an existing ticket), and (ii) for parsing department or service information from an inbound payload (ideally via the email name)
  4. New inbound requests go to the inbox
  5. Inbound requests associated with an existing request go to the comment thread of the existing request
  6. Support incoming images by uploading them into the image storage
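Step 3 above, deriving routing information from the email name, could be sketched as follows. The &lt;jurisdiction&gt;.&lt;department&gt; naming pattern and the function name are illustrations, not the shipped convention:

```typescript
export interface InboundRoute {
    jurisdiction: string;
    department?: string;
}

// Parse e.g. 'springfield.publicworks@inbound.example.com'
// into routing information for the inbound handler.
export function parseInboundAddress(address: string): InboundRoute | null {
    const [localPart] = address.split('@');
    if (!localPart) return null;
    const [jurisdiction, department] = localPart.split('.');
    if (!jurisdiction) return null;
    return department ? { jurisdiction, department } : { jurisdiction };
}
```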

Add user-readable Ticket ID field

The Need

  1. Humans need an easy-to-read ticket ID/number to reference requests (for example, when calling the hotline to ask for the status of a ticket)
  2. Ticket IDs can be added to Email subject lines to group communication around a specific request together (both for residents and for staff members)
  3. We will want to allow importing tickets from external systems, and will need to store their ticket ID, so we can update an existing ticket in following imports. Unlike our IDs, these IDs are unique to a jurisdiction but not to an instance (the same ID can appear in multiple jurisdictions)

The Solution

  1. Add a new field for Ticket Id
  2. By default, it will be an auto-incrementing number with a year and month prefix. Each month we start counting from 1. For example, the 271st ticket in June 2022 will have the ID "2206271"
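The proposed scheme can be sketched as a pure function (makeTicketId is a hypothetical helper name; in practice the monthly sequence would come from the database):

```typescript
// Build a human-readable ticket ID: two-digit year, two-digit month,
// then the position of the ticket within that month.
export function makeTicketId(date: Date, monthlySequence: number): string {
    const yy = String(date.getFullYear() % 100).padStart(2, '0');
    const mm = String(date.getMonth() + 1).padStart(2, '0');
    return `${yy}${mm}${monthlySequence}`;
}

// The 271st ticket in June 2022 is "2206271".
```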

Multiple status stages which map to "closed"

As per @amirozer

For example, based on the current implementation, which is just a simple, linear workflow:

inbox [open] | todo [open] | doing [open] | blocked [stalled] | done [closed] | invalid [closed] | moved [closed]
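The workflow above could be represented as a small status-to-type map, so that several terminal statuses all count as "closed" (a sketch; the names are illustrative):

```typescript
type StatusType = "open" | "stalled" | "closed";

// Example mapping for the linear workflow described above.
const statusTypes: Record<string, StatusType> = {
  inbox: "open",
  todo: "open",
  doing: "open",
  blocked: "stalled",
  done: "closed",
  invalid: "closed",
  moved: "closed",
};

function isClosed(status: string): boolean {
  return statusTypes[status] === "closed";
}
```

Any logic keyed on "is this request closed?" then checks the type rather than a single hard-coded status value.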

More pluggable interfaces

Currently, repositories (data access abstraction) are pluggable, supporting user-provided plugins. Using the same internal infrastructure, next candidates to make pluggable are:

  • models (currently customizable, but we should standardize them via the same plugin interface)
  • route collections (pass a mount point and a set of routes to the Router, overriding existing mount points and adding/removing endpoints)
  • middlewares (provide a stack of app-level middleware; at some point we will need to allow users to order these in relation to the core middlewares. The assumption is that Router-level middleware can be provided via custom routes)

Other areas, such as input processing stacks, are candidates for future iterations.

Error building projects using Govflow as dependency: migrate cli source code not found

When building a project using 0.0.18-alpha, I'm getting the following error:

#8 16.85 npm ERR! syscall chmod
#8 16.85 npm ERR! path /app/node_modules/@govflow/govflow/cli/migrate
#8 16.85 npm ERR! errno -2
#8 16.86 npm ERR! enoent ENOENT: no such file or directory, chmod '/app/node_modules/@govflow/govflow/cli/migrate'
#8 16.86 npm ERR! enoent This is related to npm not being able to find a file.
#8 16.86 npm ERR! enoent 

Not sure whether this is an issue with the Govflow project/package, or whether some adaptation is required in my project to support this.

Two-way SMS implementation

Broken off from #66

Originally in #66 we wrote:

When a Service Request is made via SMS, and, when a Service Request is submitted with Phone Number information, allow two-way communication around the Service Request with the Submitter.
Still deciding: Issue a dedicated Phone Number per Service Request - provides identical UX and data management flows to the current email integration, OR, have a "single" Phone Number, and a (more complex) disambiguation process to ensure two way communication is routed correctly to a given existing Service Request (will write about this a bit further below)

We since decided to implement a happy path where a Jurisdiction has a single Phone Number for Service Requests, and GovFlow manages a disambiguation flow to ensure incoming messages are understood correctly. Even if multiple phone numbers are in use for a Jurisdiction, via the creation of InboundMap instances, we still run this disambiguation process (within the context of the matching InboundMap for a Number).

So, that disambiguation process is:

  • GovFlow sees an incoming message
    • The incoming message is automatically associated with a Jurisdiction due to a matching InboundMap, and optionally with deeper context (eg, a department) based on the InboundMap configuration
  • If the submitter has no OPEN requests for this InboundMap context, then, create a new Service Request (we disambiguated the incoming message without user involvement)
  • If the submitter has a SINGLE open request for this InboundMap context, then, create a new comment on the existing Service Request (we disambiguated the incoming message without user involvement)
    • This is maybe too error-prone if we have a submitter who submits many tickets, in which case we should perhaps fall back to manual disambiguation, as in the next case
  • Otherwise, we disambiguate the new message by asking the user in response:
    • case/ user has multiple tickets: ask her to select from a numbered list, where each entry is an existing service request (# and a snippet of text), and the last entry is "This is a new request"
    • case/ user has one ticket: ask her to select from a numbered list, where one entry is the existing service request (# and a snippet of text), and the other entry is "This is a new request"

To support the manual disambiguation flow, we need a new model, let's call it a MessageDisambiguation. Here, we persist user messages and system responses, and probably references to existing objects like Service Requests, until the message from the user has been disambiguated. When it is disambiguated, we take the data in the MessageDisambiguation object and create either a new Service Request or a new Service Request comment from it.
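The decision points above can be sketched as a pure function (the names and shapes here are illustrative, not the actual GovFlow models; the flag for auto-commenting on a single open request reflects the open question noted above):

```typescript
interface OpenRequest { ticketId: string; snippet: string; }

type Disambiguation =
  | { action: "create" }                      // no open requests: make a new one
  | { action: "comment"; ticketId: string }   // exactly one: append a comment
  | { action: "prompt"; choices: string[] };  // otherwise: ask the user

function disambiguate(openRequests: OpenRequest[], autoCommentOnSingle = true): Disambiguation {
  if (openRequests.length === 0) return { action: "create" };
  if (openRequests.length === 1 && autoCommentOnSingle) {
    return { action: "comment", ticketId: openRequests[0].ticketId };
  }
  // Build a numbered list; the last entry always means "this is a new request".
  const choices = openRequests.map(
    (r, i) => `${i + 1}. #${r.ticketId}: ${r.snippet}`
  );
  choices.push(`${openRequests.length + 1}. This is a new request`);
  return { action: "prompt", choices };
}
```

The "prompt" branch is where a MessageDisambiguation record would be persisted until the user picks an entry.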

Verify contact details

When a citizen submits a service request, sometimes she would like to be notified of the status of the request (let's say specifically, when it is resolved, but in general a notification could be sent based on a number of events in a service request lifecycle).

To support this, we can verify that the citizen's contact channel (email, or phone for SMS) is valid. We can do this via Twilio/SendGrid (see #5).

We can do this for:

  • Logged in citizens
  • Citizens not logged in, but who also have not opted in to an anonymous request

Pluggable auth

Ok, I've been looking at the auth use case we have in our managed hosting environment (where authentication is handled outside the scope of GovFlow, and we just push a middleware into GovFlow to check that each request is authorized), to see what we can learn from it about implementing pluggable auth in GovFlow.

I think it is pretty simple, and breaks down to:

  • authentication
    • login
    • logout
  • authorization
    • can this user use this API (and we don't currently have complex auth requirements - it is just: "is this user authenticated")

Implementation

Default

  • GovFlow will use Passport.js for auth
  • GovFlow will use Passport's openid-connect plugin by default, configured for Auth0 integration (ref.)
  • GovFlow will provide login and logout routes and handlers as per the above tutorial (which means that Auth0 provides the actual UI for those flows)
  • GovFlow will have sessions and inspect the session for auth, redirecting to login when there is no authenticated session (again, this is the vanilla implementation in the Passport.js Auth0 tutorial)

Plugin

Auth will have its own plugin type: AuthPlugin

  • AuthPlugin.login: callable | null | undefined. If undefined, use the default implementation (passport.authenticate('openidconnect')); if null, disable login; if callable, use it as the middleware on the login route
  • AuthPlugin.logout: callable | null | undefined. If undefined, use the default implementation (req.logout); if null, disable logout; if callable, use it as the middleware on the logout route
  • AuthPlugin.verify: callable | null | undefined. If undefined, use the default implementation (passport.authenticate('session')); if null, disable verification; if callable, use it as the middleware on all routes that require authentication
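The undefined/null/callable convention shared by all three hooks boils down to a small resolver. This is a sketch with a simplified middleware type standing in for Express's RequestHandler, not the actual GovFlow types:

```typescript
// Simplified middleware type standing in for an Express RequestHandler.
type Handler = (req: unknown, res: unknown, next: () => void) => void;

interface AuthPlugin {
  login?: Handler | null;
  logout?: Handler | null;
  verify?: Handler | null;
}

// undefined -> fall back to the default; null -> the route/check is disabled;
// a callable -> use it in place of the default.
function resolveAuthHandler(
  provided: Handler | null | undefined,
  fallback: Handler
): Handler | null {
  return provided === undefined ? fallback : provided;
}
```

The Router would call this once per hook at startup, mounting the returned middleware or skipping the route entirely when the result is null.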

Different types of "services" or "request types"

We have a clear set of use cases where Gov Flow users definitely want to use it as the backend for a general workflow management system, and not just as the workflow management system for publicly submitted 311 enquiries.

We need to consider how we model services for this purpose ( #30 ), and how this impacts the way data is read and analyzed.

Key use case now:

Internal requests - gov staff make requests of people from other gov departments, and the request types may be different to those that are publicly available.

Repositories and Services

Repositories are a data access abstraction and specifically for us we are using them to provide access to the core entities of the system. There should be basically a 1:1 relation between a repository and an entity in our domain.

Services are for logic that operates over multiple data entities, or for external interactions, for example dispatch methods for email and SMS.

Originally, communications dispatch methods were implemented on the communications repository. In 7f8a41b they are refactored into a service, and services are provided to the app via our registry which uses IoC. So, like Repositories, Services are also pluggable.

This is not in a release yet, I'm just recording the refactor here for reference.

cc @amirozer

Expose proper CLI for commands

In the core codebase, there are a bunch of commands in package.json and also in the Makefile - primarily based on my preference of using Makefiles as documentation. This approach is fine for developing the Gov Flow codebase itself.

But the way these commands are implemented is useless for the primary use case of having Gov Flow as a dependency of another codebase.

We need a CLI that is exposed when Gov Flow is in your node_modules, and also can be used when developing Gov Flow. Specifically, right now we need to be able to conveniently run database migrations.

In development I commonly run make initdb && make migrate && make test. We could probably expose:

govflow initdb
govflow migrate
govflow serve # maybe ....
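For exposure from node_modules, npm's standard mechanism is a `bin` entry in package.json, which links the named executables into node_modules/.bin. The file path here is an assumption; it just needs to point at an executable script with a `#!/usr/bin/env node` shebang:

```json
{
  "bin": {
    "govflow": "./cli/index.js"
  }
}
```

The single entry point can then dispatch on the subcommand (initdb, migrate, serve), which also works when developing Gov Flow itself via `npx govflow ...`.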

Add explicit close date for service requests

We could use updatedAt, but a request might be updated after it is closed, so it is better to add a closeDate field and set it when the request moves to a status of the closed type.

Couple Open311 repository to ServiceRequest and Service repositories

Currently, the Open311 repositories for Service and ServiceRequest are decoupled from the "main" Service and ServiceRequest repositories. Given they have similar logic, it might make sense to have the Open311 repositories depend on their "main" equivalents, rather than having their own logic and querying the database directly.

Not an urgent refactor, but possibly useful in scenarios where the datastore for the Open311, Service, or ServiceRequest repositories is pluggable and outside the core system.

cc @amirozer

Typing repositories and injecting their dependencies

At the moment, we attach additional settings, models, and repositories to concrete implementations of repositories by adding a property after they are bound to the IoC container, and suppressing TypeScript errors. It works, but it is neither good TypeScript nor good dependency injection.

Rather, we can get type safety and add these dependencies properly using dependency injection, either with factories for concrete implementations or by injecting constant values (models and settings are immutable by the time they are assigned to repositories).

Here is a description of an approach https://stackoverflow.com/questions/37439798/inversifyjs-injecting-literal-constructor-parameters (I like the "injecting the literal as a constant value" approach as it is simple)
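Independent of the container specifics, the shape of the fix is typed constructor injection instead of post-bind property assignment; with InversifyJS the dependencies would then be supplied via constant-value bindings. A dependency-free sketch (all names here are illustrative):

```typescript
interface AppSettings { databaseUrl: string; }
interface Models { [name: string]: unknown; }

// Before: the repository is bound to the container, then `repo.models = ...`
// is assigned afterwards with a @ts-ignore. After: dependencies are typed
// constructor parameters, which a container can supply as constant values
// since models and settings are immutable by the time repositories exist.
class ServiceRequestRepository {
  constructor(
    private readonly models: Models,
    private readonly settings: AppSettings
  ) {}

  describe(): string {
    return `repository over ${Object.keys(this.models).length} model(s) at ${this.settings.databaseUrl}`;
  }
}
```

With this shape, the compiler enforces that a repository cannot be constructed without its dependencies, which is exactly what the post-bind assignment approach loses.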

File storage for images

People who submit service requests also need to optionally be able to submit images, to show the thing they are requesting to be addressed.

There are many ways to provide image support in the application: different ways to use cloud object storage, or upload to a filesystem, etc.

An ideal solution probably looks something like:

  • in browser, the user selects to add one or many images to their service request
  • the client code reads over the files and gets a hash for each
  • the client code hits an API endpoint and gets an upload URL for each file (verified by the hash)
  • the client code initiates the upload of each file
  • the client code handles a callback for each file that the upload is complete, and gets with that a URL to the file (the URL will not be publicly accessible, it will only be accessible to admin users of the app)
  • the client submits the form, with the images being an array of URLs
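The per-file hash in the flow above could be computed like this on the server side with Node's crypto module (SHA-256 is an assumption; in the browser the equivalent would be crypto.subtle.digest):

```typescript
import { createHash } from "node:crypto";

// Hash the file contents so the server can issue an upload URL tied to
// exactly this payload (content-addressed verification of the upload).
function hashFile(contents: Buffer | string): string {
  return createHash("sha256").update(contents).digest("hex");
}
```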

This pattern, or a variation of it, is pretty straightforward for file system storage or AWS S3, and I think also for MinIO. We need to check whether this pattern translates to Azure Blob Storage, as we do want to support that (but we can also assume a MinIO gateway over Azure Blob Storage as an option).

After some discussion with @amirozer it looks like the simplest, first solution will be:

  • The client code does what it does to upload images and get back a URL that the image will be available to admin users at
  • The client submits an array of URLs for images

This solution probably works in the short term, as the current client runs in a privileged environment with access to non-public image upload APIs.

Workflow enhancement - assignment and "ownership" logic to users via departments

In #26 ( implemented in #29 ) we added basic support for departments in a jurisdiction, being the ability to associate a service request with a department. As noted in #26 (comment) there are several ways this feature can develop based on input we have from current users and potential users.

The current implementation optionally allows assigning a request to a department. This assignment currently has no impact on, or relation to, the person assigned to the request. The department itself does have a primary contact name and email as metadata, but this is not a StaffUser object, and it is just for display purposes, not for any use in the assignment of requests.

We have a new potential user with key user scenarios that we can support by iterating on the initial base.

Our current understanding of the user scenarios is as follows:

  1. Departments have one or more staff users who are "team" or "department" leads, for the purposes of service delivery.
  2. Assignees for tickets should come from a list of staff users who are explicitly associated with a department. This could be one of the department leads, or, another staff user who is not a lead.
  3. Department leads will communicate with each other around any given request, and, communicate with the assigned person, and, communicate with the person who submitted the request.

Supporting this in the GovFlow API server requires:

  1. A configuration setting for enforcing assignment via department (I think this will be a common use case, but not one we should enforce going forward for all users of govflow): ENFORCE_ASSIGNMENT_VIA_DEPARTMENT
  2. Adding a relationship from StaffUser to Department - this relationship will be required for non-admin staff users when ENFORCE_ASSIGNMENT_VIA_DEPARTMENT is true.
  3. Allow StaffUsers to belong to multiple Departments.
  4. Don't allow StaffUsers to be removed from a Department if they have non-closed tickets assigned via the department.
  5. Allow one or more staff users in a department to be leads for that department - we can implement this via a Group model, for more flexibility around other "group types", or just as a boolean on the staff user / department relation for now.
  6. Various CRUD actions for relationships between staff users and department exposed via dedicated endpoints.

Probable client changes required:

  1. Get some configuration from the GovFlow server via a "client config endpoint", so the client can configure itself appropriately (I talked about this previously here in regard to captcha; as we add more configurable features, it makes more sense to centralize this config on the server and have clients configure themselves from it, rather than maintaining the same config in multiple places)
  2. new screens for managing staff user / department / lead relations
  3. In the workflow for requests, when ENFORCE_ASSIGNMENT_VIA_DEPARTMENT is true require assigning a department before assigning a staff user, and, if department changes, clear user assignment
  4. At any time, given a request, have a list of department leads for the request as well as the assignee - this may or may not display in the UI, but will be required for further communication use cases.
