

FI-WARE Live Demo Application

The FI-WARE Smart City Live Demo (aka "FI-WARE LiveDemo application") is a proof of concept that illustrates how the FI-WARE Generic Enablers can be used to easily build rich applications for smart cities. The Smart City Live Demo is an application for managing the street lamp maintenance workforce in a city. It is hosted and executed on the cloud capabilities of FI-Lab: all the GEs are deployed and run on FI-Lab virtual machines, and several of them are offered secured and "as a Service".

The application relies on a set of sensors deployed in the Santander city center to gather electric parameters, presence and lighting measures from various street lamps and other devices. This information is collected through IoT GEs and published in FI-Lab through the Context Broker GE. Based on the measures gathered from the sensors, the Complex Event Processing GE analyzes the information and triggers issues (alarms) under certain circumstances (e.g. sustained low battery metrics). Moreover, the position of the technicians' mobile phones is also gathered and published through the Location GE.

The application allows an operator to watch the measures from the sensors on the city map, check the triggered issues and their severity, and assign technicians from the workforce (for instance, the idle technician closest to the street lamp). It also shows historical information gathered from the sensors and, eventually, allows sending remote commands to the devices. On the other hand, a technician can update the information about an issue by taking a picture of the street lamp and uploading it to the cloud storage capabilities.

This repository contains different pieces of code used in the FI-WARE LiveDemo application, opened to developers worldwide under the AGPL license so they can have a look at how applications are built using the FI-WARE platform. If you have never seen the FI-WARE LiveDemo application running, we suggest you have a look at this video: http://www.youtube.com/watch?v=Wh_zPsLUg-8

This repository contains the pieces of code developed by Telefónica I+D; other pieces (developed by other partners in the project) are available in the following locations:

Familiarity with the LiveDemo architecture in general is required in order to fully understand this documentation. In addition, knowledge of the following FI-WARE GEis is assumed:

  • Orion Context Broker
  • CEP
  • Cosmos
  • LOCS
  • Wirecloud
  • Store

Eventually we will provide that information (or links to that information) on this page.

LiveDemo architecture

View 1 (high-level functional view)

LiveDemo app view 1

View 2 (deployment view including connection to FI-LAB ContextBroker and Cosmos):

LiveDemo app view 2

Content

This repository contains the following Python modules in the packages/ directory:

  • event2issue: a process to receive CEP-generated events and register/update the corresponding Issues in Orion Context Broker
  • location2cb: tools to init the LOCS GEi, schedule van routes and regularly update that information in Orion Context Broker
  • ngsi2cosmos: a process that receives notification updates from Orion Context Broker and writes them to the HDFS Cosmos cluster. Warning: since March 2014 this component is deprecated. Thus, you are highly encouraged to use its successor, Cygnus, available at https://github.com/telefonicaid/fiware-connectors/tree/develop/flume.

The dependencies needed to run the Python modules are listed in the requirements.txt file in the repository root; you can install them with pip.

In addition, this repository includes several scripts to automate management tasks related to the LiveDemo application. They are located in the scripts/ directory. The examples/ directory contains several examples used to program LOCS simulations.

Finally, you can find in the repository root a script named ld-watchdog.sh that can be used to check that the LiveDemo application environment is correctly set up.

More detailed information on the different pieces follows in the next sections.

event2issue

This process listens for updates in the CEP singleton entity (sent by Orion Context Broker as NGSI10 notifyContext requests to the callback URL), processes the information published by CEP in that entity and generates (or updates) the corresponding Issue back in Orion Context Broker (using an NGSI10 updateContext request).

Run it with:

./event2issue.py

You can specify as arguments the listening port, the Orion Context Broker URL and the Store URL:

./event2issue.py 5000 http://localhost:1026 http://localhost:80

This process logs to event2issue.log and uses the accounting_token.json file to store the credentials used to interact with the Store. It exports the following REST operations (see details in the source code):

  • POST /notify, callback that Orion Context Broker invokes whenever a new CEP event occurs
  • POST /set_accounting, to set the accounting token
  • POST /new_issue/[affected_entity_id]/[type]/[severity], to create a new Issue programmatically (as an alternative to CEP-generated Issues through Orion Context Broker), e.g. from a third-party application interacting with the LiveDemo application backend
  • POST /set_counter/, to set the issue counter, so that the next Issue created is named using that number
  • POST /set_correlation/, to set the correlation token (used in interactions with the Store)
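For illustration, the following hypothetical sketch shows the shape of an NGSI10 updateContext request that a process like event2issue could send to Orion Context Broker when registering an Issue. The Issue attribute names here are illustrative assumptions, not necessarily the ones used by the application.

```python
# Hypothetical sketch: the shape of an NGSI10 updateContext payload for
# registering an Issue entity. Attribute names are illustrative only.
def build_issue_update(issue_id, affected_entity, issue_type, severity):
    return {
        'contextElements': [{
            'type': 'Issue',
            'isPattern': 'false',
            'id': issue_id,
            'attributes': [
                {'name': 'AffectedEntity', 'type': 'string', 'value': affected_entity},
                {'name': 'IssueType', 'type': 'string', 'value': issue_type},
                {'name': 'Severity', 'type': 'string', 'value': severity},
            ],
        }],
        'updateAction': 'APPEND',
    }

payload = build_issue_update('Issue27', 'OUTSMART.NODE_3508', 'LowBatteryAlert', 'Warning')
```

Such a payload would then be POSTed to the updateContext endpoint of the broker (e.g. http://localhost:1026).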

location2cb

The LiveDemo simulates 4 vans moving around the city. This module includes several tools to deal with the simulation.

  • init_vans.py, to init the vans simulation in the LOCS GEi
./init_vans.py
  • get_vans.py, to print van locations
./get_vans.py [period (default = 5 seconds)] [times (default once) (0 = forever)]
#e.g.: ./get_vans.py 10 0
  • stop_vans.py, to stop the van simulation in the LOCS GEi
./stop_vans.py
  • move_van.py, to program LOCS with a van movement from one point to another. Only vans A and B are allowed to move this way: from A1 to Ex (and back) for van A, and from B1 to Ex (and back) for van B. Look at the points.csv file for the coordinates associated with each point (they are in Santander city, but you could adapt this file to use your own). The default velocity is 20 km/h (to change it you need to edit template.xml).
./move_van.py [van_msisdn] [from] [to]
#e.g.: ./move_van.py 34621898316 A1 E7
  • location2cb.py, deals with the Orion Context Broker interactions. This tool can be used in two ways. If arguments are provided, it works without interacting with LOCS (this mode is intended for situations when LOCS is not available or failing), moving a van from one point to another (much the same as move_van.py described above). If no arguments are provided, it just queries the van locations from LOCS and updates the corresponding entities in Orion Context Broker.
# autonomous mode
./location2cb.py 34621898316 A1 E7
# not autonomous mode
./location2cb.py

ngsi2cosmos

Warning: since March 2014 this component is deprecated. Thus, you are highly encouraged to use its sucessor: Cygnus, available at https://github.com/telefonicaid/fiware-connectors/tree/develop/flume.

This process listens to NGSI10 notifyContext requests sent by Orion Context Broker to the callback URL, then appends the values of each entity attribute to a file in the HDFS filesystem used by Cosmos (a different file is used for each entity-attribute pair). The attribute value is timestamped with the current time. This way, historical entity-attribute information can be used in Cosmos map-reduce jobs.
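As a rough sketch (assumed names and layout, not the actual ngsi2cosmos code), the per entity-attribute file naming and the timestamped record described above could look like this:

```python
from datetime import datetime

def record_filename(entity_id, attr_name):
    # One file per entity-attribute pair (the exact layout is an assumption)
    return '%s-%s.txt' % (entity_id, attr_name)

def build_record(attr_value, when=None):
    # Prefix the attribute value with a timestamp (current time by default)
    when = when or datetime.now()
    return '%s|%s' % (when.isoformat(' '), attr_value)

print(record_filename('OUTSMART.NODE_3506', 'batteryCharge'))
print(build_record('10', datetime(2014, 2, 20, 10, 31, 38)))
```

Each record would then be appended to the corresponding file in HDFS, so map-reduce jobs can process the historical series.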

Run it with:

./ngsi2cosmos.py

You can specify as arguments the listening port and the Cosmos namenode URL:

./ngsi2cosmos.py 1028 http://localhost:14000

As additional optional arguments you can specify the HDFS directory to use (default is base_dir), the HDFS user (default is cosmos_user) and disable logging (using "log_off"; otherwise logging is activated):

./ngsi2cosmos.py 1028 http://localhost:14000 /user/fermin fermin log_off

This process logs to ngsi2cosmos.log. It supports two HDFS backends: HttpFS and WebHDFS (the first is preferred and used by default, given that it doesn't require exposing the complete cluster; it only needs access to the namenode). It exports only one REST operation (see details in the source code):

  • POST /notify, callback that Orion Context Broker invokes whenever a new notifyContext request is sent

In addition, this package includes a helper script named list_status_pretty.py that can be used to print a status report of the files in the HDFS backend. This script lists the files in the default HDFS directory (base_dir), but a different one can be specified as a script argument.

For more information on how to connect Orion to Cosmos, check this link: https://forge.fi-ware.eu/plugins/mediawiki/wiki/fiware/index.php/How_to_persist_Orion_data_in_Cosmos
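As a side note on the write path: HttpFS exposes the same REST API as WebHDFS, so a file operation boils down to an HTTP request against the namenode. A minimal sketch of building such a URL (the path and user below are illustrative):

```python
def hdfs_op_url(base_url, hdfs_path, operation, user):
    # HttpFS and WebHDFS share the same REST API: a file operation is an
    # HTTP request to <base>/webhdfs/v1<absolute path>?op=<OP>&user.name=<user>
    return '%s/webhdfs/v1%s?op=%s&user.name=%s' % (base_url, hdfs_path, operation, user)

print(hdfs_op_url('http://localhost:14000', '/user/cosmos_user/base_dir/node.txt',
                  'APPEND', 'cosmos_user'))
```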

management scripts

  • iptables/, this directory contains scripts to turn on/off reporting from the IDAS platform by manipulating iptables rules. Disabling IDAS updates can be useful for debugging, in order to avoid the "noise" introduced by information coming from real sensors during a testing session. Scripts in this directory require superuser privileges to run.

  • bootstrapping/, this directory contains a non-comprehensive set of scripts used to "bootstrap" the LiveDemo. In particular:

    • 00_register_idas_entities/, contains scripts for creating Nodes, AMMS and Regulator in Orion Context Broker
    • 01_createCepEntity.sh, creates the CEP singleton entity (this entity is used by the event2issue.py process). This script is very similar to clear-cep-singleton.sh (using APPEND as action instead of UPDATE)
    • 02_subscribeEvent2Issue.sh, subscribes the event2issue callback for notifications
    • 03_subscribeCep.sh, subscribes CEP to changes in Nodes, AMMS and Regulator, so that CEP is notified each time a change occurs in these entities (these changes can in turn trigger rules whose results are events published in the CEP singleton entity)
    • 04_setTechnicians.sh, creates and sets the technicians information. It requires four arguments: the phone numbers to use for the technicians.
    • 05_vansInit.sh, creates the four van entities. It can also be used to reset the vans to their initial positions
    • 06_subscribeCygnus.sh, subscribes the Cygnus callback for notifications
    • 07_subscribeFederatedCB-sensors.sh, subscribes a federated CB (orion2 in the file) to sensor notifications
    • 08_subscribeFederatedCB-vans.sh, subscribes a federated CB (orion2 in the file) to van notifications
    • 09_subscribeFederatedCB-issues.sh, subscribes a federated CB (orion2 in the file) to issue notifications
  • get-from-amms.py: pretty-prints a given attribute for all AMMS (attribute name passed as argument). It relies on query-amms.sh script, which encapsulates the actual NGSI request issued to Orion Context Broker.

./get-from-amms.py ActivePower
  • get-from-nodes.py: pretty-prints a given attribute for all Nodes (attribute name passed as argument). It relies on query-node.sh script, which encapsulates the actual NGSI request issued to Orion Context Broker.
./get-from-nodes.py batteryCharge
  • get-from-regulator.py: pretty-prints a given attribute for the Regulator (attribute name passed as argument). It relies on query-regulator.sh script, which encapsulates the actual NGSI request issued to Orion Context Broker.
./get-from-regulator.py ActivePower
  • get-issues.py: pretty-prints a list with all issues. It relies on query-issue.sh script, which encapsulates the actual NGSI request issued to Orion Context Broker.

  • get-technician.py: pretty-prints a list with all technicians. It relies on query-technician.sh script, which encapsulates the actual NGSI request issued to Orion Context Broker.

  • get-van.py: pretty-prints a list with all vans. It relies on query-van.sh script, which encapsulates the actual NGSI request issued to Orion Context Broker.

  • last-times.py: prints the last time Nodes, AMMS and Regulator were modified. It relies on query-all.sh script, which encapsulates the actual NGSI request issued to Orion Context Broker.

  • set-amms.sh: sets a given attribute of a given AMMS to a given value, all passed as arguments.

./set-amms.sh 06E1E5B2100394784 electricalPotential -2
  • set-node.sh: sets a given attribute of a given Node to a given value, all passed as arguments.
./set-node.sh 3501 batteryCharge
  • erase-node-date.sh: erases the TimeInstant of a given Node, setting it to "None"
./erase-node-date.sh 3512
  • set-regulator.sh: sets a given attribute of the Regulator to a given value, passed as arguments.
./set-regulator.sh electricalPotential -7
  • close-issue.sh: sets the closingDate attribute to the current time on a given Issue (whose number is passed as argument), which means "closing the issue" according to the LiveDemo application semantics.
./close-issue.sh 23
  • get-cep-singleton.py: pretty-prints the attributes of the CEP singleton entity. It relies on query-cep-singleton.sh script, which encapsulates the actual NGSI request issued to Orion Context Broker.

  • cep-start.sh, cep-stop.sh, cep-status.sh: use them to start/stop CEP or report its status

  • clear-cep-singleton.sh: clears the CEP singleton entity, setting all its attributes to the value passed as argument.

./clear-cep-singleton.sh foo
  • mongo-remove-all-issues.sh: removes all issues in MongoDB

  • mongo-remove-expired-subs.sh: removes all expired subscriptions in MongoDB (after running the garbage-collector.py program that comes with the Orion Context Broker RPM).

  • mongo-remove-id.sh: removes a given registration/entity in MongoDB, identified by its ID.

./mongo-remove-id.sh Issue27
  • new-cep-event.sh: emulates a CEP event directly updating the CEP singleton entity (instead of using CEP). This script takes the following parameters: entity ID, entity type, event type and severity. Only for debugging purposes.

  • register-issue.sh: emulates a direct Issue registration (instead of using event2issue). Only for debugging purposes.

  • renew-cb-log.sh: rotates the Orion Context Broker log. This script has not been tested much, so it may fail.

  • simulation-tool-restart.sh: wrapper of the REST operation to start/restart the simulation tool (part of LOCS).

  • simulation-tool-status.sh: wrapper of the REST operation to get the status of the simulation tool.

  • event2issue-test/, this directory contains some scripts to test event2issue process. Not too interesting, by the way.

Putting it all together: a typical sequence of commands running the LiveDemo app

This section shows a sequence of commands corresponding to a typical test execution of the LiveDemo application, for illustration purposes.

# First of all, run the needed scripts in bootstrapping/ directory

# remove all issues to start clean
./mongo-remove-all-issues.sh

# stop IDAS reporting
sudo iptables/turn_off_idas.sh

# init vans
python init_vans.py

# create some initial issues
curl -X POST localhost:5000/new_issue/OUTSMART.NODE_3508/LowBatteryAlert/Warning
curl -X POST localhost:5000/new_issue/OUTSMART.NODE_3501/BrokenLamp/Critical

# create issue with mobile on 3500

# create an issue in Regulator due to problems with electricPotential
# (commented lines are the alternative in case CEP-generated issues are not working)
./get-from-regulator.py electricPotential  # to know the previous level
./set-regulator.sh electricPotential -2
#curl -X POST localhost:5000/new_issue/OUTSMART.RG_LAS_LLAMAS_01/LowElectricPotential/Warning
./set-regulator.sh electricPotential -7
#curl -X POST localhost:5000/new_issue/OUTSMART.RG_LAS_LLAMAS_01/LowElectricPotential/Critical

# create an issue in Node due to problems with batteryCharge
# (commented lines are the alternative in case CEP-generated issues are not working)
./get-from-nodes.py batteryCharge # to know the previous level
./set-node.sh 3506 batteryCharge 10 ; date
#curl -X POST localhost:5000/new_issue/OUTSMART.NODE_3506/LowBatteryAlert/Warning
./set-node.sh 3506 batteryCharge 3 ; date
#curl -X POST localhost:5000/new_issue/OUTSMART.NODE_3506/LowBatteryAlert/Critical

# At this moment we have the following issues on the map:
# 3508
# 3501
# 3500
# Regulator
# 3506

#move Marcos to repair 3501
python move_van.py 34604872235 B1 E7
#python location2cb.py 34604872235 B1 E7  # in the case LOCS is not working

#move Marcos back to home
python move_van.py 34604872235 E7 B1
#python location2cb.py 34604872235 E7 B1  # in the case LOCS is not working

#move Jacinto to repair 3506
python move_van.py 34669079467 A1 E1
#python location2cb.py 34669079467 A1 E1  # in the case LOCS is not working

#move Jacinto back to home
python move_van.py 34669079467 E1 A1
#python location2cb.py 34669079467 E1 A1  # in the case LOCS is not working

#restore values previous to manipulation
./set-regulator.sh electricPotential <prev_level>
./set-node.sh 3506 batteryCharge <prev_level>

#turn on IDAS again
sudo iptables/turn_on_idas.sh

#stop vans
python stop_vans.py

Security consideration

For security reasons, the URLs in the code, the configuration and this documentation itself do not use actual IPs or DNS names; they are all set to localhost. Of course, replace them with the right ones (in the FI-WARE GEi global or dedicated instances you are using) before running the software.

To ease the task, all the parameters you need to configure for shell scripts (.sh files) are in the scripts/ENV.sh file. Just edit that file and load it in your environment using:

. ENV.sh

Parameters:

  • CEP_HOST and CEP_PORT where the CEP runs
  • CB_HOST and CB_PORT where the Orion Context Broker runs
  • FED_CB_HOST and FED_CB_PORT where the federated Orion Context Broker runs
  • E2I_HOST and E2I_PORT where the event2issue runs
  • CYGNUS_HOST and CYGNUS_PORT where Cygnus runs
  • IDAS_HOST where IDAS runs

In addition, for Python code, you need to modify env.py files in the following places:

  • In package/event2issue/env.py, set cb_url and store_url to the actual URLs
  • In package/location2cb/env.py, set locs_host to the LOCS actual host IP/name
  • In package/ngsi2cosmos/env.py, set cosmos_url properly to the URL where COSMOS HttpFs is listening, cosmos_user to the proper HDFS user and base_dir to the proper directory within the HDFS directory. Warning: since March 2014 this component is deprecated. Thus, you are highly encouraged to use its sucessor: Cygnus, available at https://github.com/telefonicaid/fiware-connectors/tree/develop/flume.

Contact

For any question, bug report, suggestion or feedback in general, please contact Fermín Galán (fermin at tid dot es). If I don't know the answer I will redirect you to the right contact :)

License

This code is licensed under GNU Affero General Public License v3. You can find the license text in the LICENSE file in the repository root.

Cosmos Demo Applications

Cosmos is the reference implementation of the Big Data GE, and its Global Instance (also called in this document the Cosmos cluster, or simply the cluster) in FI-LAB holds several public datasets regarding certain Spanish Smart Cities.

Plague Tracker

Conceptually speaking, this is an application running on top of the Cosmos Global Instance in FI-LAB. The Plague Tracker accesses and processes the historical data about the plagues affecting the Spanish city of Malaga. More details on the nature, representation formats, location, etc. of the data can be found at:

http://forge.fi-ware.eu/plugins/mediawiki/wiki/fiware/index.php/M%C3%A1laga_open_datasets#Plagues_tracking

Under the above concept there is a Java-based Hive client querying the Cosmos cluster through TCP port 10000, where a Hive server listens for incoming connections. This Hive client is governed by a Web application exposing a GUI (a map of the city of Malaga and a set of controls) that the final user operates in order to get certain visualizations of the data. These visualizations/operations are:

  • Current focuses. The map shows the neighbourhoods affected by the selected type of plague.
  • Infection forecast. The map shows a forecast of the neighbourhoods that will probably be infected by the selected type of plague.

The plague types the user can select are:

  • Rats
  • Mice
  • Pigeons
  • Cockroaches
  • Bees
  • Wasps
  • Ticks
  • Fleas

In addition to the map, three charts show the correlation index between the selected type of plague and three environmental parameters: temperature, rainfall and humidity. These environmental parameters are obtained from another dataset related to the city of Malaga:

http://forge.fi-ware.eu/plugins/mediawiki/wiki/fiware/index.php/M%C3%A1laga_open_datasets#Weather

Already deployed instances of this application

http://130.206.81.65:8080/plague-tracker/


fiware-livedemoapp's Issues

ngsi2cosmos management API for FI-LAB users

Currently, ngsi2cosmos is configured by FI-LAB staff in a "static" way. So if a FI-LAB user wants to store the data he/she publishes at the CB into Cosmos (as happened during the Santander hackathon), he/she has to talk with FI-LAB staff to do that configuration. This is a quite inflexible and non-scalable approach.

Thus, the ngsi2cosmos component should provide an API so users can configure CB->Cosmos rules. The processing of a request in this API will involve all the underlying actions in the CB, Cosmos and ngsi2cosmos components:

  • In Cosmos: creating a dataset for the user
  • In the CB: setting up the proper subscribeContext for the user's entities
  • In ngsi2cosmos: configuring the logic so that the notifyContext requests corresponding to the above subscription end up in Cosmos through its HttpFS or WebHDFS API.

Tool to subscribe ngsi2cosmos

Configuring the Orion-Cosmos integration currently needs an NGSI10 subscribeContext operation so that ngsi2cosmos receives Orion notifications. Currently, this subscription has to be done "manually".

However, it would be great to provide a simple script tool for users, so that such a tool does the subscription on behalf of the user.

Hive does not like datetimes in ISO format

The datetime string persisted in HDFS contains the 'T' character as field separator between the date and time parts:

>>> from datetime import datetime
>>> date = datetime.now()
>>> print date.isoformat()
2014-02-20T10:31:38.297875

This is not what Hive expects; Hive expects a ' ' field separator instead (it manages datetimes as strings in the form "%Y-%m-%s %h:%i:%s.%f").
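On the Python side a one-line fix is possible, since datetime.isoformat() accepts a separator argument:

```python
from datetime import datetime

d = datetime(2014, 2, 20, 10, 31, 38, 297875)
# Default isoformat() uses 'T', which Hive rejects as a datetime string
print(d.isoformat())      # 2014-02-20T10:31:38.297875
# Passing a space as separator yields the Hive-friendly form
print(d.isoformat(' '))   # 2014-02-20 10:31:38.297875
```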

Improve watchdog script

Taking into account the following information from Alain's email on 15/Oct/2013 to improve the ld-watchdog.sh script (LOCS section):

To retrieve the current scenario simulation status, you can send the following GET request [wrapped in a script]: scripts/simulation-tool-status.sh

that will return the following response:

<scenarioStatus>
  <selectedScenario>applied scenario name|none</selectedScenario>
  <running>false|true</running>
</scenarioStatus>

For example, after a restart of the Fleet mobile management tool (on demand or after a VM reboot), you will obtain:

<scenarioStatus>
  <selectedScenario>none</selectedScenario>
  <running>false</running>
</scenarioStatus>

You can check this return status before initializing the simulation, then set the scenario and the mobile paths, and finally start the simulation with the relevant PUT requests.

When running your Live Demo simulation in a nominal situation, you will obtain:

<scenarioStatus>
  <selectedScenario>LocationGE-LocationQuery</selectedScenario>
  <running>true</running>
</scenarioStatus>

Include ngsi2cosmos in the Cosmos Platform software catalogue

The Cosmos Platform (Cosmos v0.x | x > 9) exposes a catalogue of Hadoop related software available for being automatically installed in a new private cluster. Currently, that catalogue includes:

  • Hive
  • Pig
  • Oozie
  • SQOOP

ngsi2cosmos must be exposed within that catalogue so that it is possible to connect Orion Context Broker with the cluster, as was done in Cosmos 0.9.

This issue may imply refactoring the way this software is packaged and/or distributed.

get_van.py: output sorted by entity

Currently, the order is the one returned by query-van.sh, which is random (it depends on how the entities are created and updated at the MongoDB layer).

location2cb/get_vans.py: improve error reporting

Currently, the script doesn't handle error conditions due to LOCS unavailability, e.g.:

$ ./get_vans.py
Usage: python get_vans.py [period (default = 5 seconds)] [times (default once) (0 = forever)]
Technician A (34669079467):
Traceback (most recent call last):
  File "./get_vans.py", line 59, in <module>
    main()
  File "./get_vans.py", line 54, in main
    print_positions()
  File "./get_vans.py", line 28, in print_positions
    locs.get_location(locs.technician_A)
  File "/home/test2/fiware-livedemoapp/package/location2cb/locs_sim.py", line 260, in get_location
    longitude = location["terminalLocation"][0]["currentLocation"]["latitude"]
KeyError: 'currentLocation'

In this case, an error should be printed, e.g.:

$ ./get_vans.py
Usage: python get_vans.py [period (default = 5 seconds)] [times (default once) (0 = forever)]
LOCS service unavailable

ngsi2cosmos not dealing with '&'

Found during Campus Party Brazil 2013:

We have found that when the updateContext to Orion uses & in an attribute value (e.g. "H&M"), the notifyContextRequest sent to ngsi2cosmos breaks the program at some point (a 500 error is returned by the Flask stack).

ngsi2cosmos: flexible classification of data into many HDFS directories

Use a text file as ngsi2cosmos configuration in the following format:

<id_pattern1>|<id_type1>|<dataset1>
<id_pattern2>|<id_type2>|<dataset2>
<id_pattern3>|<id_type3>|<dataset3>
...

So, each time a new context element is received, ngsi2cosmos checks that table (from top to bottom) to find which HDFS directory to send the data to.

Eg.:

OUTSMART.NODE.*|Node|/user/opendata/smartcities/santander/llamas
OUTSMART.AMMS.*|AMMS|/user/opendata/smartcities/santander/llamas
OUTSMART.RG.*|Regulator|/user/opendata/smartcities/santander/llamas
urn:smartsantander:testbed:.*|Sensor|/user/opendata/smartcities/santander/smart
<not known yet :)>|<not known yet :)>|/user/opendata/smartcities/santander/magdalena
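The rule lookup this configuration implies could be sketched as follows (a hypothetical helper, assuming the first matching row from top to bottom wins):

```python
import re

# Rules in the proposed <id_pattern>|<id_type>|<dataset> format
RULES = """\
OUTSMART.NODE.*|Node|/user/opendata/smartcities/santander/llamas
OUTSMART.AMMS.*|AMMS|/user/opendata/smartcities/santander/llamas
urn:smartsantander:testbed:.*|Sensor|/user/opendata/smartcities/santander/smart
"""

def hdfs_directory(entity_id, entity_type, rules=RULES):
    # Check the rules from top to bottom; the first matching row wins
    for line in rules.strip().splitlines():
        id_pattern, id_type, dataset = line.split('|')
        if id_type == entity_type and re.match(id_pattern + '$', entity_id):
            return dataset
    return None

print(hdfs_directory('OUTSMART.NODE_3506', 'Node'))
```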

Check special characters in entity and attribute names or types

As experienced with the FINESCE people, if an entity or attribute name or type contains the "/" character, then the persisted file name for this entity-attribute pair will contain the "/" character. When creating that file, WebHDFS/HttpFS understands that a subdirectory must be created and then a file.

E.g. entity=mycar, entity_type=car, attribute_name=speed, attribute_type=km/h will be persisted as mycar-car-speed-km/h.txt, which in the end is the directory /user/myuser/mystorage/mycar-car-speed-km containing an h.txt file.

Special characters usage must be checked before creating the file name.
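One possible approach (a hypothetical helper, not part of the current code) is to sanitize each name component before composing the file name:

```python
import re

def safe_file_name(entity, entity_type, attr_name, attr_type):
    # Replace any character that WebHDFS/HttpFS could interpret as a path
    # separator (or any other problematic character) with '_'
    parts = [re.sub(r'[^A-Za-z0-9_.:-]', '_', p)
             for p in (entity, entity_type, attr_name, attr_type)]
    return '%s.txt' % '-'.join(parts)

print(safe_file_name('mycar', 'car', 'speed', 'km/h'))  # mycar-car-speed-km_h.txt
```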

notifyContextRequest JSON support in ngsi2cosmos

Subscribing ngsi2cosmos using a JSON subscribeContext (which is possible from Orion 0.9.0 on) will cause JSON notifications to arrive at ngsi2cosmos. The current ngsi2cosmos code is not able to process JSON, so an internal error will happen (an HTTP 500 return code is shown in the ngsi2cosmos Flask trace), probably due to an attempt to use an XML processing method on a payload that is not an XML document.

As a workaround while JSON support is implemented in the ngsi2cosmos code, you can change the "format" field in the csub document corresponding to the subscription in the Orion Mongo database from "XML" to "JSON" (see https://forge.fi-ware.eu/plugins/mediawiki/wiki/fiware/index.php/Publish/Subscribe_Broker_-_Orion_Context_Broker_-_Installation_and_Administration_Guide#csubs_collection), causing new notifications to be sent in JSON.

ngsi2cosmos.py: selective escaping of the delimiter character to avoid "Cosmos injection"

ngsi2cosmos.py should parse the contextValue before writing it to Cosmos, escaping the delimiter (usually "|"). Otherwise, a user could "inject several columns in a single field", potentially breaking the schema defined by tools such as Hive.

This escaping should be an optional feature (typically, a flag in the CLI or configuration file), given that in some cases it could be useful to allow this injection to simplify NGSI model definition.
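A minimal sketch of such escaping (the backslash escape character is an assumption; the actual choice would be part of the configuration):

```python
ESCAPE = '\\'
DELIMITER = '|'

def escape_field(value, delimiter=DELIMITER, escape=ESCAPE):
    # Escape the escape character first, then the delimiter, so that a
    # value such as "H|M" cannot inject an extra column into the record
    return value.replace(escape, escape * 2).replace(delimiter, escape + delimiter)

record = DELIMITER.join(escape_field(v) for v in ('2014-02-20 10:31:38', 'H|M'))
print(record)
```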

@frbattid is also having a look at this.

ngsi2cosmos: ability to set the per file consolidation level

Currently, each attribute always goes to a different file. However, a more flexible approach would be to use a selector (in the process configuration) to choose between different consolidation levels:

  • Per-attribute files (as it is now)
  • Per-entity files: all the attributes of a given entity go to the same file
  • One file: all attributes of all entities go to the same file.

The naming of the files would be adjusted accordingly.

README.md: include a section about deploying LiveDemo from scratch

Raised during Sevilla March 2012 workshop:

It would be desirable to have a section in the LiveDemo documentation (the README.md file) describing how to deploy the app from scratch, starting with deploying the VMs in the cloud, then explaining how to interconnect the different GEis and finally explaining how to deploy the value added by the application.
