
pyro-api's Introduction

PyroNear Logo


Pyrovision: wildfire early detection

The increasing adoption of mobile phones has significantly shortened the time required for firefighting agents to be alerted of a starting wildfire. In less densely populated areas, minimizing this delay remains critical to preserving forest areas.

Pyrovision aims at providing the means to create a wildfire early detection system with state-of-the-art performance at minimal deployment costs.

Quick Tour

Automatic wildfire detection in PyTorch

You can use the library like any other Python package to detect wildfires as follows:

from pyrovision.models import rexnet1_0x
from torchvision import transforms
import torch
from PIL import Image


# Init
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

tf = transforms.Compose([transforms.Resize(size=(448)), transforms.CenterCrop(size=448),
                         transforms.ToTensor(), normalize])

model = rexnet1_0x(pretrained=True).eval()

# Predict
im = tf(Image.open("path/to/your/image.jpg").convert('RGB'))

with torch.no_grad():
    pred = model(im.unsqueeze(0))
    is_wildfire = torch.sigmoid(pred).item() >= 0.5

Setup

Python 3.6 (or higher) and pip/conda are required to install PyroVision.

Stable release

You can install the latest stable release of the package from PyPI as follows:

pip install pyrovision

or using conda:

conda install -c pyronear pyrovision

Developer installation

Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source:

git clone https://github.com/pyronear/pyro-vision.git
pip install -e pyro-vision/.

What else

Documentation

The full package documentation is available here for detailed specifications.

Demo app

The project includes a minimal demo app using Gradio

(demo app screenshot)

You can check out the live demo, hosted on 🤗 Hugging Face Spaces 🤗, over here 👇

Docker container

If you wish to deploy containerized environments, a Dockerfile is provided for you to build a Docker image:

docker build . -t <YOUR_IMAGE_TAG>

Minimal API template

Looking for a boilerplate to deploy a model from PyroVision with a REST API? Thanks to the wonderful FastAPI framework, you can do this easily. Follow the instructions in ./api to get your own API running!

Reference scripts

If you wish to train models on your own, we provide training scripts for multiple tasks! Please refer to the ./references folder if that's the case.

Citation

If you wish to cite this project, feel free to use this BibTeX reference:

@misc{pyrovision2019,
    title={Pyrovision: wildfire early detection},
    author={Pyronear contributors},
    year={2019},
    month={October},
    publisher = {GitHub},
    howpublished = {\url{https://github.com/pyronear/pyro-vision}}
}

Contributing

Please refer to CONTRIBUTING to help grow this project!

License

Distributed under the Apache 2 License. See LICENSE for more information.


pyro-api's Issues

Discussion: Media Upload

A draft PR #20 has been opened to upload media received by the API to S3.

Let's discuss the pros and cons of the different solutions, along with the following topics.

  • Define whether we want to use S3 or another service
  • Decide how to handle credentials (right now they are hardcoded)
  • See if the upload works from a Raspberry Pi (work to be done on the client side to send a file)
  • Decide how to adapt the work on the client side

TODO:

  • Add a key field to media (#20)
  • Add sterilized routes for media upload (only modifying the key) (#20)
  • Define basic bucket/storage interaction (#20)
  • Add actual Cloud Service Provider support (AWS or other)
  • Test with mock tables (#20)
  • Test with PostgreSQL duplicates

[Client] Create routes to create an event/alert/media linked to a fire.

From Mateo's request:

Basically, I'd like us to set up a small presentation where I film a fire starting and ping the API when I detect one, so that you can then follow up with the presentation of the API. What should I use for that?

It seems we lack the proper client routes (and potentially API routes) to achieve that.

[risk] How to update & store risk

How to:

  • trigger the pyro-risk update (on a daily basis with the current data source)
  • store the results: where, how long, who can have access, etc.
  • orchestrate pyro-platform <-> pyro-api <-> pyro-risk

Need to coordinate with all teams

[test] Some bugs aren't caught by our unit tests

In #13, a small issue was introduced that wasn't caught by the unit tests.
Fortunately, @jeanpasquier75 noticed it and it's being fixed in #16. But this raises the question of how we should design our unit tests.

Any thoughts @florianriche @jeanpasquier75 @fe51 ?

  • Update the current unit tests to get a testing situation as close as possible to production (#33)
  • Add a way to create a super-user (#26)

[client][devices] Multiple devices with only one exposed to the internet

Feature

Have a convenient solution available to communicate from a site

Motivation

For a site equipped with the Pyronear solution, we are going to have several devices with cameras and one central device performing the inference (all connected on a local network). Only the device doing inference will be exposed to the internet, for security reasons (to avoid entry points into our system) and to limit the number of actions to be carried out on the camera devices, which will be exposed to the sun and to quite high temperatures.

Thus, only this device will communicate through the API, and we need to be able to handle communication smoothly (that is to say, to correctly route alerts from the various devices through the single device using the API client).

Two solutions seem feasible:

  • The central device holds the credentials of all the other devices; via the API client, one client instance is created and authenticated per camera device, which makes it possible to manage the instances and properly route communication.
    It's a bit DIY though, and not very practical, as the API is supposed to facilitate device communication.

  • The API evolves so as to manage a group of devices on a site, with only one of them connected to the internet and a single authentication.

What do you think @florianriche @jeanpasquier75 @frgfm ? There might be other ways to solve this issue.
I will be happy to discuss this, and to organize a quick meeting if necessary. Do not hesitate to ask if you want me to clarify the issue!

Discussion: Client-side API

We need to develop a client-side component (on the device) in order to send information to the API.

The devices will need to:

  • Ping the API
  • Send alerts
  • Send media
  • Send positions (if using GPS)

Those messages will rely on a token that should be stored on the client side.

Now comes the question: how to develop that client-side capability?

  • Do we alter the CV component to have a main that sends messages to the API when it detect something. This could end up being a bit of spaghetti code and make the purpose of the package less clear
  • Do we create a totally new component whose goal is to deploy on a device. This client would communicate with the API and call the CV component to look for fires. This would increase the number of repos.

[datetime format] Time Zone handling

Our data is supposed to be stored as UTC datetimes.

However, this information is implicit and we have no constraint enforcing it.

What do you think of storing datetimes with time zone information?

As an example, this format could be convenient: "2020-04-01T12:34:56.000+0100".

That being said, to facilitate the use of our API, it would be better to store, or at least return, all datetimes in UTC (i.e. +0000). Thus, even if a user gets data that was uploaded with different timezones, they get all data with the same tz info.

So, once we agree on the following points, we can open PRs and work on it :)

  • defining the datetime format (maybe we could use the timestamptz datatype from PostgreSQL? I am not sure how easily it plays with SQLAlchemy though)

  • deciding if we store all datetimes with the same time zone information (always UTC, i.e. +0000)

  • returning all datetimes with the same time zone info (always UTC, i.e. +0000) (one day, we might add, if needed, an option to get data in a specific TZ or to infer the expected tz from the location...)

Looking for feedback :)
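
For illustration, here is a minimal sketch of the normalization that could be applied before storing or returning datetimes (not taken from the codebase; treating naive datetimes as UTC is an assumption):

from datetime import datetime, timezone

def to_utc(dt: datetime) -> datetime:
    # Normalize any datetime to an aware UTC datetime before storing / returning it
    if dt.tzinfo is None:
        # assumption: naive datetimes coming from clients are already UTC
        return dt.replace(tzinfo=timezone.utc)
    return dt.astimezone(timezone.utc)

# "2020-04-01T12:34:56.000+01:00" becomes "2020-04-01T11:34:56+00:00"
print(to_utc(datetime.fromisoformat("2020-04-01T12:34:56.000+01:00")).isoformat())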

[media] It is possible to upload multiple images for one media and lose the bucket keys

When one uploads an image linked to a media (route upload_media), we:
1: Generate the CSP bucket key
2: Upload the image
3: If successful, update the bucket key field in the DB

If one calls this route twice for the same media, the first uploaded image will stay in the CSP and we will lose its "key": it won't be linked to anything anymore. If for some reason we do this too often, it will put a load on the bucket storage.
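
One way to avoid orphaning objects would be to delete the previous object once the new upload succeeds. A rough sketch of that guard, where crud, media_table, bucket_service and generate_bucket_key are purely hypothetical names standing in for the actual helpers:

async def upload_media(media_id, file):
    entry = await crud.get_entry(media_table, media_id)
    new_key = generate_bucket_key(media_id, file.filename)
    await bucket_service.upload_file(new_key, file.file)
    # if an image was already attached to this media, drop the now-orphaned object
    if entry["bucket_key"] is not None and entry["bucket_key"] != new_key:
        await bucket_service.delete_file(entry["bucket_key"])
    return await crud.update_entry(media_table, media_id, {"bucket_key": new_key})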

[media] Reduce image download latency on client

Currently, when requesting an image through the client:

  • the API accesses the media table to get the bucket path, and adds the CSP creds
  • it downloads the image on the API server
  • the client then downloads the byte content from the API server

So we get at least double the latency on content transfer. A more optimal solution would be:

  • the API resolves the URL + creds for a CSP file download
  • the client uses this information to directly download from CSP

We could expect a ~50% drop in latency by doing so!
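
Assuming the CSP ends up being S3-compatible (still under discussion elsewhere), the API could hand out a short-lived presigned URL instead of proxying the bytes; a minimal boto3 sketch:

import boto3

s3 = boto3.client("s3")  # credentials stay on the API side

def get_media_url(bucket: str, key: str, expires: int = 3600) -> str:
    # the client downloads directly from the bucket using this URL
    return s3.generate_presigned_url(
        "get_object", Params={"Bucket": bucket, "Key": key}, ExpiresIn=expires
    )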

cc @florianriche @Akilditu

[devices] Add a route to update the location of the current device

Currently, the API supports updating the location information of a given device, provided the access has admin scope. However, a device with GPS capabilities should be able to update its own location.

This might be a good occasion to add a route to update the location of the current_device!

[Client] Allow to create two client instances

The way a client is instantiated right now is buggy (guilty here) because every time we create a client instance, we alter the class variable routes.
As a consequence, if we want to create two client instances (for whatever reason), the second instance will have incorrect routes, as it will use the already altered class variable.

class Client:

    routes = {"token": "/login/access-token",
              ...
             }

    def __init__(self, api_url, credentials_login, credentials_password):
        self.api = api_url
        self._add_api_url_to_routes()
        ...

    def _add_api_url_to_routes(self):
        for k, v in self.routes.items():
            self.routes[k] = urljoin(self.api, v)
    ...

Suggested solutions:

  • change the current routes class variable to "_routes" and create a deep copy self.routes in the constructor before calling _add_api_url_to_routes.
  • Refactor the whole class construction.
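
A minimal sketch of the first suggestion (only the relevant parts are shown, the rest of the constructor is elided):

from copy import deepcopy
from urllib.parse import urljoin

class Client:

    _routes = {"token": "/login/access-token",
               ...
              }

    def __init__(self, api_url, credentials_login, credentials_password):
        self.api = api_url
        # instance-level copy: other Client instances keep the pristine relative routes
        self.routes = deepcopy(self._routes)
        self._add_api_url_to_routes()
        ...

    def _add_api_url_to_routes(self):
        for k, v in self.routes.items():
            self.routes[k] = urljoin(self.api, v)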

[test] Add an end-to-end test script

Initially mentioned in #17, an end-to-end testing script would allow us to easily make sure that the API can perform all required tasks for a specific scope. Several "interaction maps" could be designed, but here is a suggestion for a basic one:

  • create a user, a site
  • login with this user
  • update the password
  • create a device of her/his own
  • create an installation with this device and the site
  • the installation creates an event and a related wildfire alert (+ a related media)
  • the installation throws later a second alert that terminates the event

What do you think?

[Alerts] Acknowledge and close an alert

We have no route, for now, to specify that an alert has been acknowledged, and no way for a user to mark an alert as complete.

The routes to do that should be quite straightforward.
One should also add functions to access those routes in the client.

Side note:
It also raises the question of how to interact with events. I am not sure I clearly understand what they are supposed to do.

[test] Access check does not work correctly in unittests

While I was changing scopes on some routes, I noticed something:

  • changes to the scope requirements do impact the API behaviour as expected
  • however, in the unit tests, even if get_current_access yields an access with an insufficient scope, the request is still processed

We would need to investigate and fix this to be able to check scopes in the unit tests.

[alerts] Reference to installation_id rather than device_id

When we first designed all the table fields, we added device_id as the emitter of the alert. I'm starting to think it would make much more sense to replace it with installation_id. There are pros and cons.

Pitch

Here is how the change would play out:

Triggering the alert

This would mean that a device per se cannot send an alert without being an installation itself i.e. linked to a site (the biggest con). The API client will handle the alert sending by resolving the installation from the device. Additionally:

  • We could either revise our notion of installation: imagine an individual positioning the device in her/his garden, there is no site to be linked to necessarily. But it is an "installation" since the device will stay positioned over there.
  • or accept the inconvenience

People receiving the alert

Before, you were receiving the device_id:

  • GET method to read the device position & specs
  • a more thorough resolution of the installation if you want to get the site it was sent from

Now you would receive the installation_id:

  • direct GET on site information
  • using site information, you get the same device information

One last option would be to have both but I feel like this would be a dangerous path later on. What do you think @pyronear/back-end ?

[init] Add a warning/error to verify that the DB is not initialized with None credentials

Even after #47, there was still a bug on deployment. Here are the logs:

2020-11-22T07:58:34.305128+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/starlette/routing.py", line 526, in lifespan
2020-11-22T07:58:34.305129+00:00 app[web.1]: async for item in self.lifespan_context(app):
2020-11-22T07:58:34.305129+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/starlette/routing.py", line 467, in default_lifespan
2020-11-22T07:58:34.305131+00:00 app[web.1]: await self.startup()
2020-11-22T07:58:34.305131+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/starlette/routing.py", line 502, in startup
2020-11-22T07:58:34.305132+00:00 app[web.1]: await handler()
2020-11-22T07:58:34.305132+00:00 app[web.1]: File "src/app/main.py", line 18, in startup
2020-11-22T07:58:34.305133+00:00 app[web.1]: await init_db()
2020-11-22T07:58:34.305133+00:00 app[web.1]: File "src/app/db/init_db.py", line 16, in init_db
2020-11-22T07:58:34.305134+00:00 app[web.1]: hashed_password = await hash_password(cfg.SUPERUSER_PWD)
2020-11-22T07:58:34.305134+00:00 app[web.1]: File "src/app/api/security.py", line 31, in hash_password
2020-11-22T07:58:34.305134+00:00 app[web.1]: return pwd_context.hash(password)
2020-11-22T07:58:34.305135+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/passlib/context.py", line 2258, in hash
2020-11-22T07:58:34.305135+00:00 app[web.1]: return record.hash(secret, **kwds)
2020-11-22T07:58:34.305136+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/passlib/utils/handlers.py", line 777, in hash
2020-11-22T07:58:34.305136+00:00 app[web.1]: validate_secret(secret)
2020-11-22T07:58:34.305137+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/passlib/utils/handlers.py", line 122, in validate_secret
2020-11-22T07:58:34.305190+00:00 app[web.1]: raise exc.ExpectedStringError(secret, "secret")
2020-11-22T07:58:34.305190+00:00 app[web.1]: TypeError: secret must be unicode or bytes, not None
2020-11-22T07:58:34.305191+00:00 app[web.1]: 
2020-11-22T07:58:34.305283+00:00 app[web.1]: ERROR:    Application startup failed. Exiting.

It was related to https://github.com/pyronear/pyro-api/blob/master/src/app/config.py#L24-L25, so I updated the environment variables on the deployment server and it's OK now.

We might need to raise an error if the environment variables are not set!
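
A minimal sketch of such a guard in the config module; SUPERUSER_PWD comes from the traceback above, while SUPERUSER_LOGIN is only an assumption about the other variable defined there:

import os

SUPERUSER_LOGIN = os.getenv("SUPERUSER_LOGIN")
SUPERUSER_PWD = os.getenv("SUPERUSER_PWD")

if SUPERUSER_LOGIN is None or SUPERUSER_PWD is None:
    # fail fast at import time rather than at the first hashing call
    raise ValueError("SUPERUSER_LOGIN and SUPERUSER_PWD environment variables must be set")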

[devices] Add angle_of_view field to device table

This is important information for a comprehensive understanding of the visual situation of a device raising an alert.

Using this information + visual data + device orientation (yaw), the device can compute the azimuth of the alert/wildfire.

[installations] add boolean field to set the relevance of a device

Feature

Add a boolean field "is_relevant" to the installation table.

Motivation

In the case of a device that points at a no_alert site, it needs to be flagged with is_relevant set to false so that alerts coming from this device are not taken into account.

Question

If the related PR is merged after the Alembic migration one (#112), this feature will need to be implemented as an Alembic migration, won't it?

[docs] Adding summary and description for each route

Hey there,

I ended up discovering that we can add a summary and description using the route decorator + its docstring (cf. https://fastapi.tiangolo.com/tutorial/path-operation-configuration/#description-from-docstring). I believe it would be a good idea to document each route (one PR for each separate router file in https://github.com/pyronear/pyro-api/tree/master/src/app/api/routes for easier reviewing):

Suggestions for documentation improvements are welcome!
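
For reference, a route documented this way could look like the snippet below (borrowed from the /me route discussed in another issue): the summary goes in the decorator and the description is taken from the docstring.

@router.get("/me", response_model=UserRead, summary="Get information about the current user")
async def get_my_user(me: UserRead = Security(get_current_user, scopes=["admin", "me"])):
    """
    Retrieves information about the current user
    """
    return me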

Fetching past fires via the API

Hello everyone,

Here is a non-urgent issue related to this one in the Pyro-Platform repository!

In order to display past fires on the map, we are currently reading a local file stored within the repository. The goal would be to remove it and instead fetch this information via the Pyro-API. If I understand correctly, there is no table dedicated to past fires for now (which could be useful not only for the platform but also for the data science team, typically), and this issue is meant to discuss its addition to the database.

To be more precise about the historic_fires.csv file, it was built out of a raw dataset of satellite fire detections which we preprocessed, notably in this Colab. We added several fields like the department and the "commune" where the fire occurred (as well as the population, area and density of the locality).

We wanted to use the latter to compute the population density in the area, so as to approximately filter urban fires out of the dataset. However, we have not yet implemented this filtering on the platform and, for now, department is the only added field which is really required. I mention this in case we are limited in the number of fields.

Last remark: at some point, I guess we might want to add fires detected by our own devices and acknowledged by firemen to the past fires table in the database.

All that being said, here are my question(s):

  • Should we add the historic_fires.csv file as is to the database?
  • Or do you prefer that we script the preprocessing steps and use the raw dataset?

Thanks a lot for your help, I hope that everything is clear!

Add default example values to schemas

Add example values to the schemas that are used as input (see https://fastapi.tiangolo.com/tutorial/schema-extra-example/).
This will speed up testing when using the Swagger UI.
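
For instance, with Pydantic's Field you can attach an example to each input field, and Swagger UI will prefill the request body with it (the schema and field names below are hypothetical):

from pydantic import BaseModel, Field

class AlertIn(BaseModel):
    device_id: int = Field(..., example=1)
    lat: float = Field(..., example=44.765181)
    lon: float = Field(..., example=4.514880)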

[client] The documentation is clearly lacking some explanations about payloads

The current state of the client documentation is not really helpful: most of the payloads are not detailed in the docstrings, and thus not in the documentation either.

This is the documentation for the get_site_devices method:
(screenshot of the generated documentation)

The worst thing about it: in this case, the payload is really simple (site_id). I think we should do something about that soon 😅

[route][table][db] New routes to record time-related metrics from devices

🚀 Feature

  • Add a table to record metrics (CPU and memory usage, temperature) from the Pi devices

  • Create the related routes

  • Update the client accordingly

Motivation

In order to design a suitable system, some metrics should be recorded.
After the test phases, the output can be analyzed to understand how to improve the system or why it crashed.
As an example, in order to know whether fans or a heatsink are needed, the CPU temperature can be recorded; or, some figures about CPU usage or available memory could help select the Raspberry Pi model best fitting our needs.

Moreover, it will also help us monitor the systems.

Pitch

Hence, a new table needs to be created; it could look like this:

device_id | datetime | measure_name | value | units

It will hold time-related metrics related only to devices.

Hence, for the sake of simplicity, it can be stored in our PostgreSQL database (otherwise it would imply a dedicated DB system, what do you think?).

The granularity of the metrics might be around one measurement every 10 minutes.
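
A rough SQLAlchemy sketch of what such a table could look like (metadata is assumed to be the project's existing MetaData instance, and the devices table name is an assumption):

from sqlalchemy import Column, DateTime, Float, ForeignKey, Integer, String, Table

metrics = Table(
    "metrics",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("device_id", Integer, ForeignKey("devices.id"), nullable=False),
    Column("datetime", DateTime, nullable=False),
    Column("measure_name", String(50), nullable=False),
    Column("value", Float, nullable=False),
    Column("units", String(20)),
)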

This is completely open to discussion and all suggestions are welcome.

Some unspecified fields should not be updated to null

Currently, when a PUT request is made and an optional field is not specified, that field is updated to null.

A solution would be to add a boolean option to crud.put or crud.update_entry; if set to True, it would only update the non-None fields.
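
Assuming the payloads are Pydantic models, an alternative to the boolean option would be to only pass along the fields the client actually provided, e.g. (the exact crud signature is assumed):

# only the fields present in the request body end up in the UPDATE statement
update_data = payload.dict(exclude_unset=True)
entry = await crud.update_entry(table, entry_id, update_data)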

[scope] The "me" scope is supposedly useless

Having refactored most of the unit tests, I came to the conclusion that the "me" scope is useless.

Simply put, here is a route using it now:

@router.get("/me", response_model=UserRead, summary="Get information about the current user")
async def get_my_user(me: UserRead = Security(get_current_user, scopes=["admin", "me"])):
    """
    Retrieves information about the current user
    """
    return me

Now if we use the user scope instead:

@router.get("/me", response_model=UserRead, summary="Get information about the current user")
async def get_my_user(me: UserRead = Security(get_current_user, scopes=["admin", "user"])):
    """
    Retrieves information about the current user
    """
    return me

Assuming all entries in the user table with "me" are replaced by "user", we get the exact same behaviour.

There is no difference in "access level" between a "user" and "me" (or "device" and "me" depending on the situation). So I suggest that we remove it to avoid unnecessary complication. We will have to discuss access levels later on to distinguish a bare user from a "supervisor" or a "manager".

I suggest waiting for #138 approval before starting a PR to avoid conflicts.

[alerts] Add azimut field

Feature

Add an azimut field to the alerts table

Motivation

In order to draw a vision cone on the platform, it is necessary to know the azimuth of an alert in relation to the device that raised it.

Once we are able to identify the GPS position of an alert, we can extrapolate the azimuth from the GPS position of the alert and that of the device.
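
As a side note, a minimal sketch of that extrapolation (standard initial-bearing formula, not part of the codebase):

from math import atan2, cos, degrees, radians, sin

def azimuth(device_lat, device_lon, alert_lat, alert_lon):
    # initial bearing, in degrees clockwise from North, from the device towards the alert
    phi1, phi2 = radians(device_lat), radians(alert_lat)
    dlon = radians(alert_lon - device_lon)
    x = sin(dlon) * cos(phi2)
    y = cos(phi1) * sin(phi2) - sin(phi1) * cos(phi2) * cos(dlon)
    return (degrees(atan2(x, y)) + 360) % 360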

[scopes] Defining scopes for all routes

Hey there 👋

Playing around with the PRs, I noticed that some routes are protected by scopes while others aren't. Some of those choices may be correct, but I just want to make sure we consider them carefully for each router:

  • alerts: should it be exposed to everyone on Read? admin & device for create, update, delete? (#99, #122)
  • devices: "admin" by default for all routes, "me" added to self-ID routes (#121)
  • events: should it be exposed to everyone on Read? admin & device for create, update, delete? (#59, #99)
  • installations: should it be exposed to everyone on Read? admin only for create, update, delete? (#99, #120)
  • media: we need some scope but can't be admin only, since a logged user whose scope includes a given device, should have access to its media (#99, #118)
  • sites: should it be exposed to everyone on Read? admin only for create, update, delete? (#58, #99)
  • users: "admin" by default for all routes, "me" added to self-ID routes (#15, #44)

Those where I have no clue whether we should add any:

  • accesses: "admin" by default for all routes (#104)
  • login: no scope

cc @florianriche @jeanpasquier75 @fe51 @martin1tab

Changing query filters from list of tuples to dict

While designing new unit tests, I realized I wasn't sure what to expect when we use query_filters with multiple filters on a single value. Since this only involves strict equality (no order relationships), I suggest changing it to a dictionary to prevent duplicated keys.

Here is an example on the fetch_all crud function:

what a call currently looks like:

result_entries = crud.fetch_all(user_table, [('username', 'fg')])

my suggestion:

result_entries = crud.fetch_all(user_table, {'username': 'fg'})

I think this would imply:

[devices] Implement an update mechanism for devices

In order to avoid requiring ssh access to our devices, it would be handy for them to check whether they need to update and automatically do it if necessary.

Context

  • the device needs to be able to check which software/docker version it needs
  • depending on the result of the comparison with its current version, it might need to go download the software (outside of the scope of this repo)

Implementation suggestion

I gave it some thoughts and here is what I suggest:

  • similarly to the specs field (which we should rename to hardware_specs), we introduce a software_hash field.
  • the corresponding value will be a deterministic result for a given software config (let's say the docker image hash, or the commit hash)
  • just like devices that will ping regularly, the device will come check the software_hash. If it's different from the one it uses, then it updates.

I strongly suggest going for the commit hash, at least to validate the mechanism.
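
On the device side, the check could boil down to something like this sketch (get_my_device is a hypothetical client call returning the device entry):

import subprocess

def needs_update(api_client) -> bool:
    expected = api_client.get_my_device()["software_hash"]  # hash advertised by the API
    # commit currently deployed on the device
    current = subprocess.check_output(["git", "rev-parse", "HEAD"]).decode().strip()
    return expected != current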

What do you think @pyronear/back-end ?

Add a route for user password update

Currently, once a user is created by the admin, that person won't be able to update their password.
A specific route should be created to tackle this!

[users] Implementing user_group

Following up on #45, I think it is sound to have a discussion about the multiple access levels in the API. With the current design of scopes, we cannot efficiently separate accesses of large user groups.

Requirements

  • We need to distinguish "superadmin" accesses from common users
  • Users need to be segmented into groups, where each group only has access to a restricted scope (i.e. a local firefighter, while having a "user" scope, should not have access to the same data as someone from another region who is also a "user")
  • Groups need to be easily editable
  • Security on routes will have to combine those to provide proper and secure access to each route

Design suggestion

Simply put, I believe two fields are required:

  • access_type: admin, user, device, me
  • access_group: ID that would reference a group (we could add a groups table with group_id, and group_name)

Password hashing is currently non-deterministic

I was writing some unit tests and apparently the context we use for hashing seems to include a timestamp or something similar. As per https://github.com/pyronear/pyro-api/blob/master/src/app/api/security.py, here is how you can reproduce the behaviour:

from passlib.context import CryptContext

pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")

prev_hash = pwd_context.hash('hello')

print(prev_hash == pwd_context.hash('hello'))

which yields

False

I might be missing something here though
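
If it helps: bcrypt generates a new random salt for every hash (the salt is embedded in the output), so two hashes of the same password differ by design; comparisons should go through the context's verify method instead:

print(pwd_context.verify('hello', prev_hash))  # True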

[alerts] Add a callback on alert creation to provide developers with a webhook

In order to send notifications to multiple systems when an alert is raised on the API side, we need to provide developers with a webhook option. Another front-end service of Pyronear already needs this feature, cf. pyronear/pyro-platform#38

Implementation suggestion

Fortunately, FastAPI callbacks seem quite appropriate for this use case. However, I'm not exactly sure how to make a webhook available using them.
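
Note that FastAPI's callbacks argument only documents the callback in the OpenAPI schema; the actual dispatch still has to be implemented. A minimal sketch of what it could look like once an alert is created (how webhook_urls get registered is exactly the open question here):

import httpx

async def dispatch_alert_webhooks(alert: dict, webhook_urls: list):
    # notify every registered webhook with the freshly created alert payload
    async with httpx.AsyncClient() as client:
        for url in webhook_urls:
            await client.post(url, json=alert)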

[scripts] Add script to create super user

Feature

Add script to create super user

Motivation

After a possible deletion of the DB volume, it would be convenient to have a script to create a super user.

Pitch

We might add a new folder called scripts, with a first script adding a super user with admin scope.

[storage] Set up automatic deletion of old media

For now, the media uploaded to the bucket are stored indefinitely. But in the long term, we need to erase them to avoid filling up the storage.

If that's alright with everyone, I suggest adding a MEDIA_PERSISTENCY value in the config (default 30-60 days).

What do you think?
cc @jeanpasquier75 @florianriche @fe51

[client] Extend API client features for platform demo

Following up on pyronear/pyro-platform#19, the following features need to be added to the client:

  • Read all devices within the scope of the user
  • Read the sites
  • Read the list of device IDs of a given site at a given time (ongoing on the API side in #49)
  • Read the alerts (while we don't have a webhook)
  • Download the frame (media) linked to an alert

Considering the scopes of several routes are going to become more restricted with #45, this is all the more important: even if GET currently works on some routes without authentication, this might not be true in the future!

[sites] No alert sites

Feature

  • Add no_alert as SiteType in Site table
  • Create a new route to allow non-admin users to create a no_alert Site

Motivation

Some sites can be a source of smoke (like a quarry or a plant), and those specific locations need to be recorded in order not to raise useless alerts. That could be done by adding a new site type, "no_alert".

Moreover, users without the admin scope should be able to create this new type of site. Hence a new route is needed!

Any feedback is welcome!

[alerts][events] How to define the event life cycle with alerts

Let's discuss here how we should create events and link alerts to them.

Context

  • An alert is: a signal sent by a device that localizes a wildfire. Natively, it holds: a timestamp, the localization information, the device_id
  • an event holds: a type, a localization, a starting timestamp and an end timestamp

Problem statement

  • How do we determine the event an alert refers to?
  • How do we update the starting & end timestamps of the event?
