openvisualcloud / smart-city-sample

The smart city reference pipeline shows how to integrate various media building blocks, with analytics powered by the OpenVINO™ Toolkit, for traffic or stadium sensing, analytics and management tasks.

License: BSD 3-Clause "New" or "Revised" License

Languages: CMake 2.23%, Dockerfile 3.44%, Shell 7.90%, Python 44.45%, CSS 11.24%, HTML 0.57%, JavaScript 20.97%, M4 7.59%, Awk 0.46%, Smarty 1.15%

Topics: ffmpeg, gstreamer, analytics, openvino, object-detection, openvisualcloud, traffic-monitoring, stadium-management, people-counting, crowd-counting

smart-city-sample's People

Contributors

avenkats, cgdougla, dahanhan, djie1, dpatel257, fkhoshne, huornlmj, jhou5, monkuta, tedlu2021, tobiasmo1, ttao1, xwu2git, xwu2intel, yimm0815


smart-city-sample's Issues

m4: command not found

Got an error while running make:

m4: command not found

Had to install m4 manually through yum.

Can GST detection select a subset of object types?

We need people-detection results for queue counting, but currently we use an SSD model that detects any object type. Note that we use the object-detection pipeline for queue counting, so we don't have a custom transform in the pipeline. What's the best way to constrain the results to people only?
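One option, absent a custom transform, is to post-filter the detection results before counting. This is an illustrative sketch, not the project's API: it assumes each detection arrives as a dict carrying a "label" key, as a typical model-proc label mapping would produce.

```python
# Hypothetical post-filter: keep only "person" detections before queue
# counting. Field names ("label", "confidence") are illustrative.

def filter_people(detections, wanted_label="person"):
    """Return only the detections whose label matches wanted_label."""
    return [d for d in detections if d.get("label") == wanted_label]

detections = [
    {"label": "person", "confidence": 0.91},
    {"label": "car", "confidence": 0.88},
    {"label": "person", "confidence": 0.64},
]
people = filter_people(detections)
print(len(people))  # → 2
```

Alternatively, trimming the "labels" list in the model-proc file so that only "person" is mapped would confine the reported results at the source, at the cost of affecting every pipeline that shares the model.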

Camera numbers cannot be set to zero

I tried to build the smart stadium with 0 people-counting, 1 crowd-counting and 0 queue-counting cameras, so I set:
cmake -DNCAMERAS=0,1,0 -DNANALYTICS=0,1,0 ..
This leads to a crash at make start_kubernetes. Error as below:

Error from server (Invalid): error when creating "/home/vcse/github/yimm/Smart-City-Sample/deployment/kubernetes/camera-s2o1.yaml": Service "stadium-office1-cameras-service" is invalid: spec.ports: Required value
Error from server (Invalid): error when creating "/home/vcse/github/yimm/Smart-City-Sample/deployment/kubernetes/camera-s2o1.yaml": Service "stadium-office1-cameras-crowd-service" is invalid: spec.ports: Required value
deployment/kubernetes/CMakeFiles/start_kubernetes.dir/build.make:57: recipe for target 'deployment/kubernetes/CMakeFiles/start_kubernetes' failed
make[3]: *** [deployment/kubernetes/CMakeFiles/start_kubernetes] Error 1
CMakeFiles/Makefile2:557: recipe for target 'deployment/kubernetes/CMakeFiles/start_kubernetes.dir/all' failed
make[2]: *** [deployment/kubernetes/CMakeFiles/start_kubernetes.dir/all] Error 2
CMakeFiles/Makefile2:564: recipe for target 'deployment/kubernetes/CMakeFiles/start_kubernetes.dir/rule' failed
make[1]: *** [deployment/kubernetes/CMakeFiles/start_kubernetes.dir/rule] Error 2
Makefile:274: recipe for target 'start_kubernetes' failed

How to pass an array as an input parameter into a custom transform

In crowd counting, I currently hard-code the polygons in crowd_counting.py as below:
self.polygon[0] = [865,210,933,210,933,227,968,227,968,560,934,560,934,568,865,568,865,210]
self.polygon[1] = [830,49,861,49,893,56,922,71,946,93,960,122,967,151,967,228,934,228,934,211,899,211,867,209,864,183,854,165,836,149,814,144,759,144,759,114,795,114,795,84,830,83,830,49]
...

Now I would like to pass the polygon array from the sensor database to the custom transform, in order to calculate the crowd number for each zone.

More specifically, how do I define an array in pipeline.json? I know a single variable can be defined as below:
"width": {
    "element": "detection",
    "type": "integer",
    "minimum": 0,
    "maximum": 4096
},
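Since the "width" example above follows JSON Schema conventions, a plausible sketch is to declare the parameter with "type": "array" and an "items" constraint. Whether VA Serving actually forwards array parameters to a custom transform needs verifying; the "polygon" parameter name here is an assumption, not the project's schema.

```python
import json

# Sketch of how an array parameter *might* be declared in pipeline.json,
# assuming parameters follow JSON Schema as the "width" example does.
# "polygon" and its constraints are illustrative.
schema_fragment = """
{
    "polygon": {
        "element": "detection",
        "type": "array",
        "items": {"type": "integer", "minimum": 0}
    }
}
"""
params = json.loads(schema_fragment)  # verify the fragment is valid JSON
print(params["polygon"]["type"])  # → array
```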

Detailed design pages should be accessible from the smart city wiki as well

Some design is shared between ad insertion and smart city (such as Kafka, NGINX, Tornado, etc.), but I only see it on the ad insertion wiki page. Since customers don't know the two projects are connected, the OVC team should at least provide links between the two projects; providing a separate design diagram would also be fine.

Can multiple offices work with the "stadium" scenario?

Hi,

I encountered a problem when deploying the "stadium" scenario, but my environment no longer exists, so I'll describe the problem in words. : )

When in "traffic" scenario:
cmake -DPLATFORM=XXX -DSCENARIO=traffic -DNOFFICES=2 -DNCAMERAS=2 -DNANALYTICS=2 -DFRAMEWORK=XXX
make
SCOPE=office1 CONNECTOR_CLOUD=test@cloudserver make start_helm
SCOPE=office2 CONNECTOR_CLOUD=test@cloudserver make start_helm

Both office nodes' helm installations report "deployed", and kubectl get pod and svc show "running".

But in "stadium" scenario:
cmake -DPLATFORM=XXX -DSCENARIO=stadium -DNOFFICES=2 -DNCAMERAS=2 -DNANALYTICS=2 -DFRAMEWORK=XXX
make
SCOPE=office1 CONNECTOR_CLOUD=test@cloudserver make start_helm
SCOPE=office2 CONNECTOR_CLOUD=test@cloudserver make start_helm

Office1 can be deployed, but office2 always fails to install, regardless of whether office1 has already been deployed.

The office2 helm debug result is "helm Error: no objects visited".

It seems the helm chart resources are not created from the files in the template folder; if I run helm uninstall stadium-office2-*, it says "0 resources have been deleted". Looking at helm/smtc/template/YAML_FILES, there are some if-condition checks at the beginning of each file; could these conditions be the cause?

If I instead use the "traffic" scenario to start office2, it starts. Is this related to some other configuration? Thanks!

How to use multioffice in Kubernetes?

I am trying multioffice in the traffic scenario. The db gets initialized for both offices, but the other pods are unable to connect to it; their logs show 'waiting for db'. This was all on the same PC. Please suggest what I am doing wrong here.

Cannot see any office/cameras on Web UI after setup with openness20.06

I deployed SMTC based on openness 20.06 with the CIR build. All pods reach running status without error, but no cameras/offices are shown in the web UI.
openness version: https://github.com/open-ness/openness-experience-kits/tree/d12eda43b93b6789f8bdfb23d85c45569227cd28
SMTC branch: openness-k8s
Cluster environment: openness controller: 10.166.30.126, cloud master: 10.166.30.50
Logs:

[root@av09-09-wp ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
traffic-office1-alert-c7759b46-2k8rr 1/1 Running 0 3m46s
traffic-office1-analytics-traffic-78646d66d-5rnw7 1/1 Running 0 3m46s
traffic-office1-analytics-traffic-78646d66d-xwm26 1/1 Running 0 3m46s
traffic-office1-camera-discovery-86f669d8cf-z22ct 1/1 Running 0 3m46s
traffic-office1-db-7d85f6c8b4-xxrbk 1/1 Running 0 3m45s
traffic-office1-db-init-mqbld 1/1 Running 0 3m45s
traffic-office1-mqtt-799c64c7b8-6xc8v 1/1 Running 0 3m45s
traffic-office1-mqtt2db-ffbcdf95c-kpq4h 1/1 Running 1 3m45s
traffic-office1-relay-c9f985fc-lz8qj 1/1 Running 0 3m44s
traffic-office1-smart-upload-64599744c6-xwspj 1/1 Running 0 3m43s
traffic-office1-storage-68578c78d9-9dwbr 1/1 Running 0 3m44s
traffic-office1-where-indexing-6fb84d489d-dqklm 1/1 Running 0 3m43s
traffic-office2-alert-64b54d44dd-4xmdh 1/1 Running 0 3m46s
traffic-office2-analytics-traffic-67d79b49f8-9v9zt 1/1 Running 0 3m46s
traffic-office2-analytics-traffic-67d79b49f8-bdf7f 1/1 Running 0 3m46s
traffic-office2-camera-discovery-578bfb7dd9-kfpt6 1/1 Running 0 3m45s
traffic-office2-db-db6598bd9-bvrhm 1/1 Running 0 3m44s
traffic-office2-db-init-hd2mk 1/1 Running 0 3m45s
traffic-office2-mqtt-6cdc85699d-vtx44 1/1 Running 0 3m45s
traffic-office2-mqtt2db-ccc75b8c8-ffw4f 1/1 Running 1 3m45s
traffic-office2-relay-59bfd84796-hgvqc 1/1 Running 1 3m43s
traffic-office2-smart-upload-778676d7cc-8t24h 1/1 Running 0 3m43s
traffic-office2-storage-8447fb8d58-74rpx 1/1 Running 0 3m44s
traffic-office2-where-indexing-6d8468f494-ss5bb 1/1 Running 0 3m43s

[root@av09-09-wp ~]# kubectl logs traffic-office1-db-7d85f6c8b4-xxrbk
........
[2020-08-12T08:10:02,762][INFO ][o.e.t.TransportService ] [traffic-office1] publish_address {10.166.30.50:30301}, bound_addresses {0.0.0.0:9300}
[2020-08-12T08:10:02,770][INFO ][o.e.b.BootstrapChecks ] [traffic-office1] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2020-08-12T08:10:32,785][WARN ][o.e.n.Node ] [traffic-office1] timed out while waiting for initial discovery state - timeout: 30s
[2020-08-12T08:10:32,793][INFO ][o.e.h.n.Netty4HttpServerTransport] [traffic-office1] publish_address {10.16.0.47:9200}, bound_addresses {0.0.0.0:9200}
[2020-08-12T08:10:32,793][INFO ][o.e.n.Node ] [traffic-office1] started
[2020-08-12T08:10:36,105][INFO ][o.e.d.z.ZenDiscovery ] [traffic-office1] failed to send join request to master [{cloud-db}{q5Kru1cDS4KTx52i0gib3g}{eRn_HSzUSfWh_sLshr3tgg}{10.166.30.50}{10.166.30.50:30300}{zone=cloud}], reason [RemoteTransportException[[cloud-db][10.244.1.22:30300][internal:discovery/zen/join]]; nested: ConnectTransportException[[traffic-office1][10.166.30.50:30301] handshake_timeout[30s]]; ]

How to export the model resolution and import it into the custom transform for crowd counting

@dahanhan Hi, Dahan,

I tried to get the model resolution (128x96) from the model and resize the zonemap bitmask to align with the VA pipeline's crowd-counting output size.

My guess was to change the CSRNet_2019R3_model_proc.json file to add "model_width": 128 and "model_height": 96 inside the "output_postproc" block, then call tensor.model_width() and tensor.model_height() in crowd_counting.py to get the resolution.

However, the modification hit an error as below:

{"levelname": "ERROR", "asctime": "2020-01-02 16:32:39,597", "message": "Error on Pipeline 3: element: detection : gst-resource-error-quark: base_inference plugin intitialization failed (2), /home/gst-video-analytics/gstts/base/gva_base_inference.c(438): gva_base_inference_set_caps (): /GstPipeline:pipeline5/GstGvaInference:detection:\n\n[json.exception.parse_error.101] parse error at line 11, column 18: syntax error while parsing objecring literal; expected '}'\n[json.exception.parse_error.101] parse error at line 11, column 18: syntax error while parsing object - unexpected string literal; expected '}'\n[json.exception.parse_error.101] parse error at18: syntax error while parsing object - unexpected string literal; expected '}'\n[json.exception.parse_error.101] parse error at line 11, column 18: syntax error while parsing object - unexpected string literal; expectedtion.parse_error.101] parse error at line 11, column 18: syntax error while parsing object - unexpected string literal; expected '}'\n", "name": "GSTPipeline"}
on_created: /tmp/itfQaG8BCrLsWhJDZ3yJ/2020/01/02/1578011559492217514_253003953.mp4
{'id': 3, 'state': 'ERROR', 'avg_fps': 0, 'start_time': 1578011558.9943552, 'elapsed_time': 0.6035614013671875}
pipeline ended with ERROR
exiting va pipeline
Exception in connect: VA exited. This should not happen.

So the syntax is not right yet. Do you know where to find the correct syntax? Any thoughts?
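The error above is a JSON parse error at line 11, column 18, which usually means the added fields broke the file's syntax (e.g. a missing or stray comma) rather than the field names themselves being rejected. A quick check before redeploying is to round-trip the edited model-proc file through json.loads. The structure below is illustrative, not the exact CSRNet model-proc content.

```python
import json

# Validate an edited model-proc fragment before deployment; json.loads raises
# json.JSONDecodeError with line/column info on bad syntax. Field values and
# the "converter" name here are placeholders, not the real CSRNet file.
candidate = """
{
    "json_schema_version": "1.0.0",
    "output_postproc": [
        {
            "converter": "tensor_to_text",
            "model_width": 128,
            "model_height": 96
        }
    ]
}
"""
doc = json.loads(candidate)
print(doc["output_postproc"][0]["model_width"])  # → 128
```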

thanks,
Mo

VCAC-A setup not working properly

Hi @xwu2git ,

We are facing trouble running the VCAC-A based Smart-City-Sample. We have followed all the installation steps given on GitHub for VCAC-A. All pods are running fine except storage and smart-upload, which are stuck printing "Searching"; as a result the UI is not working properly.
We run the analytics pods only on the VCAC-A, by making it a worker node; the rest run on the host machine as master.
Screenshots are attached for better clarity (running pods, smart-upload logs, storage logs, UI screen, analytics container logs); please help us resolve the issue.

Information of database used for storing

There are several questions related to how the data is stored and fetched.

1. I find that data is ingested into the database (here dbhost) using the index and the office location combined together. Example: 'http://db:9200/algorithms$45.539626$-122.929569/_doc' from detect-object.py.

2. When ingesting bulk data, the generated URL is 'http://db:9200/_bulk'. The ingested payload includes the index: {"_index": "analytics$45.539626$-122.929569", "_type": "_doc"}.

3. Apart from this, a copy of all the data is transmitted again from upload.py, also to 'http://db:9200/_bulk'.

Can you please clarify what dbhost actually specifies? What database software is being used? Is dbhost storage the same as cloud storage? How is video streamed during offline simulation?
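For context, port 9200 and the _bulk endpoint in the URLs above are Elasticsearch conventions. A _bulk body is newline-delimited JSON in which each document is preceded by an action line naming its target index, which is why the index (algorithm name plus office lat/lon) appears inside the payload rather than in the URL. A sketch of building such a body:

```python
import json

# Build an Elasticsearch _bulk body: one action line plus one document line
# per record, newline-delimited, with a trailing newline. The index name
# mirrors the "analytics$lat$lon" pattern from the question.
def build_bulk_body(index, docs):
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index, "_type": "_doc"}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"

body = build_bulk_body("analytics$45.539626$-122.929569",
                       [{"count": 3}, {"count": 5}])
print(body.count("\n"))  # → 4 (two action lines + two document lines)
```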

[SMTC]Offices and cameras are lost once reboot worker node after k8s setup

Test commit: latest master branch,
0d2ceb2

Steps:
1. Enable "SMTC" based on k8s.
2. Check all pods' status and open the web UI at https://master_node_ip. Ensure everything works as expected.
3. Run "reboot" on the worker node, and after all pods are created and running again, repeat step 2.
You'll find the web UI shows only the map, with no office or camera info.
This issue is not reproduced when rebooting the master node.

"make update" pin wrong worker ip address for kubernetes setup

when set up kubernetes with one master and one worker, the "make update" always pin wrong worker ip address thus generate error as below:
Unable to negotiate with 10.233.182.20 port 22: no matching cipher found. Their offer: aes128-cbc,3des-cbc

but if comment the docker swarm part in update-image.sh makes kubectl grab the correct worker ip and works fine. thanks @ttao1 to analyze and assign to him to fix.

Sending local index to cloud-db

Hi @xwu2git, in the multioffice scenario we have 4 indexes in cloud-db-0 (sensors, recordings, offices and analytics); similarly, I want an algorithms index in cloud-db-0. So far I know that runva.py, and for the traffic scenario detect-object.py, ingest data via DBIngest by importing db_ingest.py. I tried a few things around them, and I now have the algorithms index initialized in cloud-db-0, but it contains no docs.

Please help me in this. Thanks

Attaching one live stream camera to two different usecases in stadium scenario

I am not able to feed one live camera stream from sensor-info.json into two different use cases, such as Entrance and svcq, in the stadium scenario.
I entered the same IP camera details into sensor-info.json for both use cases, but Smart City does not route the same feed to both; one use case shows N/A. The mapping appears to be a strict 1:1 ratio between cameras and use cases.

Crowd counting UI should show results separately for multiple zones surveilled by one sensor

In sensor-info.json, I modified "East Wing" to cover zone 0 and zone 1 as below; however, I only see the sum of the two zones' data on the main UI. Is it possible to show the crowd-counting data separately for each zone?

},{
    "address": "East Wing",
    "location": {
        "lat": 37.38865,
        "lon": -121.95405
    },
    "algorithm": "crowd-counting",
    "theta": 270.0,
    "zones": [0,1],
    "zonemap": [
        {"zone": 0,
        "polygon": [[1080,200],[1166,200],[1166,216],[1211,216],[1211,523],[1167,523],[1167,530],[1080,530],[1080,200]]},
        {"zone": 1,
        "polygon": [[1036,51],[1075,51],[1115,58],[1153,72],[1182,92],[1200,119],[1209,146],[1209,216],[1167,216],[1167,201],[1123,201],[1083,199],[1079,175],[1066,159],[1044,144],[1016,139],[946,139],[946,111],[991,111],[991,84],[1036,83],[1036,51]]}
    ],
    "simsn": "cams2o1w0"
},{
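For what it's worth, the zonemap polygons above are enough to aggregate per zone on the analytics side. This is an illustrative sketch, not the project's implementation: given head positions from the crowd model, it assigns each point to a zone with a ray-casting point-in-polygon test (the UI would additionally need to read and display the separate counts).

```python
# Per-zone aggregation sketch using the zonemap structure from
# sensor-info.json. The sample polygons below are simplified squares,
# not the real "East Wing" coordinates.

def point_in_polygon(x, y, polygon):
    """Ray-casting test; polygon is a list of [x, y] vertices."""
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

def count_per_zone(points, zonemap):
    """Count how many (x, y) points fall inside each zone's polygon."""
    counts = {z["zone"]: 0 for z in zonemap}
    for x, y in points:
        for z in zonemap:
            if point_in_polygon(x, y, z["polygon"]):
                counts[z["zone"]] += 1
                break
    return counts

zonemap = [{"zone": 0, "polygon": [[0, 0], [10, 0], [10, 10], [0, 10]]},
           {"zone": 1, "polygon": [[10, 0], [20, 0], [20, 10], [10, 10]]}]
print(count_per_zone([(5, 5), (15, 5), (6, 2)], zonemap))  # → {0: 2, 1: 1}
```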

Live rtsp stream does not contain bounding boxes and analytics

I am using live CCTV coverage in the Smart City demo. Bounding boxes are visible on the recording page, in the form of 1-minute video snippets. But on the home page the preview shows the direct live stream from the camera, not the processed output containing the analytics and bounding boxes for the objects.

What I want is analytics (bounding boxes and labels) on the live-stream preview template, or processed video in the home page preview section.

Please help me in this issue.

Thanks

VCAC-A Card build not working

Hi, recently we tried to deploy the Smart City framework with a VCAC-A card. In the analytics container we got an error related to vaapipostproc. We switched from Xeon to VCAC-A by changing cmake to cmake -DPLATFORM=VCAC-A, and this was reflected in the cmake output.
With the VCAC-A build we get the error below (Screenshot 162).
With Xeon in the same environment it works fine; the screenshot for cmake -DPLATFORM=Xeon is below (Screenshot 163).
Kindly look into this issue

No rule to make target 'start_openness_camera'

Hi Team,

While running the command make start_openness_camera for the camera simulator, I got this error: "make: *** No rule to make target 'start_openness_camera'. Stop."

When I run make help, I see there is no target named "start_openness_camera" (see the make_help screenshot).

Latest update shows problem with recognition and reidentification models

Hi @xwu2git,
We recently encountered a few issues with the latest update of Smart-City-Sample:

1. The entrance part does not work and shows an error with the person-reidentification model.
2. The emotion recognition model, when put in the pipeline, shows the same error.
3. Both were working fine with a 2-3 month old version.

For reference, screenshots of both are attached (Screenshot 86 and Screenshot 85).

Fail to run webrtc pods and no video recording for cameras

Commit: 3ce4ad2
Run the deployment, then check the pods and the URL. The following pods fail. The map/offices appear if you open the URL, but no "recording" video is shown when clicking on a camera.

traffic-office1-webrtc-748c54c75d-vzv2s 2/3 CrashLoopBackOff 11 34m
traffic-office2-webrtc-74d495b64-6dfng 2/3 CrashLoopBackOff 11 34m
Then check the pod log:
kubectl logs traffic-office2-webrtc-5d77694c84-lr8nq -c webrtc
{
"databases" : [
{
"name" : "admin",
"sizeOnDisk" : 8192,
"empty" : false
},
{
"name" : "local",
"sizeOnDisk" : 8192,
"empty" : false
},
{
"name" : "owtdb",
"sizeOnDisk" : 16384,
"empty" : false
}
],
"totalSize" : 32768,
"ok" : 1
}
mongodb connected successfully
Initializing ManagementAPIServer configuration...
superServiceId: 5f840524d636910017835cd1
superServiceKey: ZemX/1wCvjj+ms0l9546B9GZu/ZOk6063kAWIz8s+lBIpW9u5J56LKERUW03oUvpbnlXwUBnb6c5CtyCT5Xf41plSKdy6IJcR2SoWERFN3wEj6xCPImR3zXTJHkCF4sN0Larc2u8GGQ5SJPm4cpt9MdPVqje/tzhPgPLgMPryi0=
sampleServiceId: 5f840524d636910017835cd2
sampleServiceKey: OK5iaQUOvJRei/5qz8vLTHiz3EIKsq/zO/Xez63K/7EiUJXTxmTQTCdPQrUstuDg9dgv0N30yHIMdHkuW1mbL/IAk0dP4s5aqKA8vum0Ls7lNsQ0NtsCWHPiNzoByipMew7CEHoCSZklgVHG2PdOo0lvzEAML/YIy0wZr7NIxlA=
Error in saving configuration: { Error: EACCES: permission denied, open '/home/owt/management_api/management_api.toml'
errno: -13,
code: 'EACCES',
syscall: 'open',
path: '/home/owt/management_api/management_api.toml' }
{ Error: EACCES: permission denied, open '/home/owt/extras/basic_example/samplertcservice.js'
errno: -13,
code: 'EACCES',
syscall: 'open',
path: '/home/owt/extras/basic_example/samplertcservice.js' }
sed: couldn't open temporary file /home/seduWjzcK: Permission denied

reliability with certain IP camera

The recording and analytics pipeline may break after streaming for a certain period of time on some IP cameras. This needs investigation.

analytics ahead of video playback in Opera

During playback in the Opera browser, analytics (bounding boxes) show up ahead of the corresponding video frames. Chrome, Firefox and Edge work fine with browser-specific timestamp compensation; Opera does not work even with the timestamp compensation.

Changing inference model

I have been trying to replace the current person-vehicle-bike-detection-crossroad-0078 model with person-vehicle-bike-detection-crossroad-1016, as both have a similar use. But the output is not showing any bounding boxes or class labels; instead it shows an "undefined" label with a confidence percentage where an object is expected.
Screenshot from 2020-07-20 01-17-38

Can you please guide me on how to deal with the GStreamer pipeline, or point me to proper documentation for similar model-swapping scenarios?
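One plausible cause of an "undefined" label (an assumption worth checking, not a confirmed diagnosis) is that the model-proc "labels" list used for the old model does not cover the class indices emitted by the new model, so the index-to-name lookup falls through. A minimal illustration of that failure mode, with hypothetical field names rather than the exact gvadetect internals:

```python
# If the swapped-in model emits a class index outside the model-proc
# "labels" list, a defensive lookup yields a placeholder like "undefined".

def label_for(class_id, labels):
    """Map a detected class index to its name, or 'undefined' if unmapped."""
    return labels[class_id] if 0 <= class_id < len(labels) else "undefined"

# Illustrative label list; the real one lives in the model's model-proc file.
labels_0078 = ["background", "person", "vehicle", "bike"]
print(label_for(2, labels_0078))  # → vehicle
print(label_for(7, labels_0078))  # → undefined
```

So a first step would be comparing the new model's output layout and class count against the model-proc file currently referenced by the pipeline.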

cannot build image with V20.4 branch

Branch: v20.4
OS: CentOS 7.6
Build failed with following log:

Step 6/14 : RUN git clone ${VA_SERVING_REPO} -b ${VA_SERVING_BRANCH} --depth 1
---> Running in c1b819f011cb
Cloning into 'video-analytics-serving'...
warning: Could not find remote branch v0.3_preview to clone.
fatal: Remote branch v0.3_preview not found in upstream origin
The command '/bin/sh -c git clone ${VA_SERVING_REPO} -b ${VA_SERVING_BRANCH} --depth 1' returned a non-zero code: 128
make[2]: *** [analytics/common/CMakeFiles/build_smtc_analytics_common] Error 1
make[1]: *** [analytics/common/CMakeFiles/build_smtc_analytics_common.dir/all] Error 2
make: *** [all] Error 2

Error while loading Pipeline - YoloV3 model

Hello all,

I tried to load YoloV3 (a converted IR model) on CPU, network precision FP32.

Open_model_zoo: https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/yolo-v3-tf
Tensorflow model: https://download.01.org/opencv/public_models/022020/yolo_v3/yolov3.pb
Json file: https://download.01.org/opencv/public_models/022020/yolo_v3/yolo_v3_new.json

IR_Conversion_Script: python3 /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo_tf.py --input_shape [1,416,416,3] --input input_1 --scale_values input_1[255] --reverse_input_channels --transformations_config ./OMZ_YOLOV3_TF_Model/yolo_v3_new.json --input_model ./OMZ_YOLOV3_TF_Model/yolov3.pb --output_dir ./IR/

I removed the INT8 model and forcibly kept only the FP32 model.

Error log from OVC:

Generating LALR tables
Searching...
[INFO] sensor msg: rtsp
Connected to BF9S4XoBjrHmACLjMiHW...
testing mqtt connection
mqtt connected
{"levelname": "INFO", "asctime": "2021-07-26 11:10:14,599", "message": "========================", "module": "vaserving"}
{"levelname": "INFO", "asctime": "2021-07-26 11:10:14,599", "message": "Options for vaserving.py", "module": "vaserving"}
{"levelname": "INFO", "asctime": "2021-07-26 11:10:14,599", "message": "========================", "module": "vaserving"}
{"levelname": "INFO", "asctime": "2021-07-26 11:10:14,599", "message": "port == 8080", "module": "vaserving"}
{"levelname": "INFO", "asctime": "2021-07-26 11:10:14,599", "message": "framework == gstreamer", "module": "vaserving"}
{"levelname": "INFO", "asctime": "2021-07-26 11:10:14,600", "message": "pipeline_dir == /home/pipelines", "module": "vaserving"}
{"levelname": "INFO", "asctime": "2021-07-26 11:10:14,600", "message": "model_dir == /home/models", "module": "vaserving"}
{"levelname": "INFO", "asctime": "2021-07-26 11:10:14,600", "message": "network_preference == {'CPU': 'INT8,FP32'}", "module": "vaserving"}
{"levelname": "INFO", "asctime": "2021-07-26 11:10:14,600", "message": "max_running_pipelines == 1", "module": "vaserving"}
{"levelname": "INFO", "asctime": "2021-07-26 11:10:14,600", "message": "log_level == INFO", "module": "vaserving"}
{"levelname": "INFO", "asctime": "2021-07-26 11:10:14,600", "message": "config_path == /home/vaserving/..", "module": "vaserving"}
{"levelname": "INFO", "asctime": "2021-07-26 11:10:14,600", "message": "ignore_init_errors == False", "module": "vaserving"}
{"levelname": "INFO", "asctime": "2021-07-26 11:10:14,600", "message": "========================", "module": "vaserving"}
{"levelname": "INFO", "asctime": "2021-07-26 11:10:14,601", "message": "==============", "module": "model_manager"}
{"levelname": "INFO", "asctime": "2021-07-26 11:10:14,601", "message": "Loading Models", "module": "model_manager"}
{"levelname": "INFO", "asctime": "2021-07-26 11:10:14,601", "message": "==============", "module": "model_manager"}
{"levelname": "INFO", "asctime": "2021-07-26 11:10:14,601", "message": "Loading Models from Path /home/models", "module": "model_manager"}
{"levelname": "INFO", "asctime": "2021-07-26 11:10:14,653", "message": "Loading Model: object_detection_2020R2 version: 1 type: IntelDLDT from {'FP32': '/home/models/object_detection_2020R2/1/FP32/yolov3.xml', 'model-proc': '/home/models/object_detection_2020R2/1/yolov3.json'}", "module": "model_manager"}
{"levelname": "INFO", "asctime": "2021-07-26 11:10:14,654", "message": "========================", "module": "model_manager"}
{"levelname": "INFO", "asctime": "2021-07-26 11:10:14,654", "message": "Completed Loading Models", "module": "model_manager"}
{"levelname": "INFO", "asctime": "2021-07-26 11:10:14,654", "message": "========================", "module": "model_manager"}
{"levelname": "INFO", "asctime": "2021-07-26 11:10:14,655", "message": "=================", "module": "pipeline_manager"}
{"levelname": "INFO", "asctime": "2021-07-26 11:10:14,655", "message": "Loading Pipelines", "module": "pipeline_manager"}
{"levelname": "INFO", "asctime": "2021-07-26 11:10:14,655", "message": "=================", "module": "pipeline_manager"}
{"levelname": "INFO", "asctime": "2021-07-26 11:10:19,135", "message": "Loading Pipelines from Config Path /home/pipelines", "module": "pipeline_manager"}
{"levelname": "INFO", "asctime": "2021-07-26 11:10:20,823", "message": "Loading Pipeline: object_detection version: 4 type: GStreamer from /home/pipelines/object_detection/4/pipeline.json", "module": "pipeline_manager"}
{"levelname": "INFO", "asctime": "2021-07-26 11:10:21,108", "message": "Loading Pipeline: object_detection version: 2 type: GStreamer from /home/pipelines/object_detection/2/pipeline.json", "module": "pipeline_manager"}
{"levelname": "INFO", "asctime": "2021-07-26 11:10:21,207", "message": "Loading Pipeline: object_detection version: 1 type: GStreamer from /home/pipelines/object_detection/1/pipeline.json", "module": "pipeline_manager"}
{"levelname": "INFO", "asctime": "2021-07-26 11:10:21,263", "message": "Loading Pipeline: object_detection version: 3 type: GStreamer from /home/pipelines/object_detection/3/pipeline.json", "module": "pipeline_manager"}
{"levelname": "INFO", "asctime": "2021-07-26 11:10:21,263", "message": "===========================", "module": "pipeline_manager"}
{"levelname": "INFO", "asctime": "2021-07-26 11:10:21,263", "message": "Completed Loading Pipelines", "module": "pipeline_manager"}
{"levelname": "INFO", "asctime": "2021-07-26 11:10:21,264", "message": "===========================", "module": "pipeline_manager"}
{"levelname": "INFO", "asctime": "2021-07-26 11:10:21,264", "message": "Creating Instance of Pipeline object_detection/1", "module": "pipeline_manager"}
{"levelname": "INFO", "asctime": "2021-07-26 11:10:21,296", "message": "Device preferred network INT8 not found", "module": "model_manager"}
PipelineStatus(avg_fps=0, avg_pipeline_latency=None, elapsed_time=6.866455078125e-05, id=1, start_time=1627278021.299261, state=<State.QUEUED: 1>)
on_created: /tmp/rec/BF9S4XoBjrHmACLjMiHW/2021/07/26/1627278024244428192_1474343959.mp4
PipelineStatus(avg_fps=0, avg_pipeline_latency=None, elapsed_time=3.0013587474823, id=1, start_time=1627278021.299261, state=<State.QUEUED: 1>)
... (the same PipelineStatus line repeats every 3 seconds with state QUEUED, elapsed_time growing from 6 s to 78 s) ...
PipelineStatus(avg_fps=0, avg_pipeline_latency=None, elapsed_time=78.01087641716003, id=1, start_time=1627278021.299261, state=<State.QUEUED: 1>)
{"levelname": "ERROR", "asctime": "2021-07-26 11:11:40,513", "message": "Error on Pipeline 1: gst-library-error-quark: base_inference plugin intitialization failed (3): /opt/build/gst-video-analytics/gst/inference_elements/base/inference_singleton.cpp(137): acquire_inference_instance (): /GstPipeline:pipeline4/GstGvaDetect:detection:\nFailed to load model '/home/models/object_detection_2020R2/1/FP32/yolov3.xml'\n\tCannot create Gather layer up_sampling2d/Shape/GatherNCHWtoNHWC id:400 from unsupported opset: opset7\n", "module": "gstreamer_pipeline"}
PipelineStatus(avg_fps=0, avg_pipeline_latency=None, elapsed_time=79.21747660636902, id=1, start_time=1627278021.299261, state=<State.ERROR: 4>)
Pipeline object_detection Version 1 Instance 1 Ended with ERROR

Traceback (most recent call last):
  File "/home/detect-object.py", line 32, in connect
    raise Exception("VA exited. This should not happen.")
Exception: VA exited. This should not happen.

Model-proc JSON file used:

{
    "json_schema_version": "2.0.0",
    "input_preproc": [],
    "output_postproc": [
        {
            "converter": "RegionYolo",
            "iou_threshold": 0.5,
            "classes": 80,
            "anchors": [
                10.0, 13.0, 16.0, 30.0, 33.0, 23.0, 30.0, 61.0, 62.0,
                45.0, 59.0, 119.0, 116.0, 90.0, 156.0, 198.0, 373.0, 326.0
            ],
            "masks": [6, 7, 8, 3, 4, 5, 0, 1, 2],
            "bbox_number_on_cell": 3,
            "cells_number": 13,
            "labels": [
                "person", "bicycle", "car", "motorbike", "aeroplane", "bus", "train",
                "truck", "boat", "traffic light", "fire hydrant", "stop sign",
                "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep",
                "cow", "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella",
                "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard",
                "sports ball", "kite", "baseball bat", "baseball glove", "skateboard",
                "surfboard", "tennis racket", "bottle", "wine glass", "cup", "fork",
                "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange",
                "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair",
                "sofa", "pottedplant", "bed", "diningtable", "toilet", "tvmonitor",
                "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave",
                "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase",
                "scissors", "teddy bear", "hair drier", "toothbrush"
            ]
        }
    ]
}
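The "Cannot create Gather layer ... from unsupported opset: opset7" message typically indicates a version mismatch: the IR was exported with a newer Model Optimizer (emitting opset7 operations) than the OpenVINO runtime inside the analytics container can load. One quick way to see which opsets an IR actually uses is to read the `version` attribute of its `<layer>` elements. A minimal sketch, using an inline stand-in fragment for the real `/home/models/object_detection_2020R2/1/FP32/yolov3.xml`:

```python
import xml.etree.ElementTree as ET

# Hypothetical IR fragment standing in for yolov3.xml; with a real file,
# use ET.parse(path).getroot() instead of ET.fromstring(...).
ir_xml = """
<net name="yolo-v3" version="10">
  <layers>
    <layer id="0" name="input" type="Parameter" version="opset1"/>
    <layer id="400" name="up_sampling2d/Shape/GatherNCHWtoNHWC" type="Gather" version="opset7"/>
  </layers>
</net>
"""

root = ET.fromstring(ir_xml)
# Collect the distinct opset versions referenced by the model's layers.
opsets = sorted({layer.get("version") for layer in root.iter("layer")})
print(opsets)  # ['opset1', 'opset7']
```

If the IR references an opset the container's OpenVINO build does not support, re-convert the model with the Model Optimizer version matching the runtime, or rebuild the analytics image against a newer OpenVINO release.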

It would be much appreciated if you could tell me where I am going wrong.

Scenarios not working

I'm trying to run any of the available scenarios, stadium or traffic, using Docker. But after installing and deploying the Docker containers and services (using make start_docker_swarm), when I open https://<hostname> only the interface and background image are loaded; nothing else appears:

(Screenshots: CROWD, CROWD2)

Does anyone have the same problem?

Unable to add a simulated sensor

I was able to build and run on Xeon with Ubuntu 18.04 using git master. The UI shows the map and one office with five cameras. When I click on a camera while it is green, I see the traffic video (sometimes, if I am lucky). I don't see any of the simulated sensors or the simulated sensor service. I am using docker swarm. Could I get a complete cheat-sheet on how to build those simulated sensors and how to add them? The existing readme files do not provide enough info.
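For context, sensors in this sample appear on the map only after a record for them exists in the database service; a camera that streams but was never registered will not show up in the UI. The sketch below illustrates what such a registration document might look like. The field names (sensor, location, url, status) are assumptions for illustration only, not the project's exact schema; check the sensor registration code in the repo for the real fields.

```python
import json

def make_sensor_doc(name, lat, lon, rtsp_url):
    """Build a hypothetical sensor-registration document (assumed schema)."""
    return {
        "sensor": name,                       # display name on the map
        "location": {"lat": lat, "lon": lon}, # where the pin is drawn
        "url": rtsp_url,                      # stream the recorder/analytics consume
        "status": "idle",                     # would flip once a pipeline attaches
    }

doc = make_sensor_doc("camera-sim-1", 37.398, -121.977,
                      "rtsp://camera-sim-1:8554/stream")
print(json.dumps(doc))
```

If no such records exist, the likely causes are that the simulated-camera service did not start, or the registration step failed before the database was reachable.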

Stadium scenario doesn't work

I am deploying this sample with both the traffic and stadium scenarios to a Kubernetes cluster. The traffic scenario works perfectly, but the stadium scenario doesn't show anything: no sensor is available and no video is captured. Do you have any idea?

(Screenshots: 2019-11-06 at 10:35:04 and 10:34:51)

Issue in make

I get the following error when I run make. Can you please help me resolve it? Thanks!

Step 5/15 : COPY --from=smtc_common /home/*.py /home/ invalid from flag value smtc_common: pull access denied for smtc_common, repository does not exist or may require 'docker login': denied: requested access to the resource is denied

start_docker_compose startup with corporate proxy.

The sample does not work with docker-compose on a CentOS 7.6 host if any proxy settings are present. What happens is that docker-compose automatically imports any proxy settings into the running containers, which interrupts container-to-container communication.

There is no good solution to this issue other than disabling http_proxy (system-wide) before running start_docker_compose.

Alternatives tried:

  • Added http_proxy='' to docker-compose.yml. docker-compose worked but docker swarm failed; docker swarm somehow interprets this as a request to import the full http_proxy string into the container.
  • Added no_proxy='database' to docker-compose.yml. Accessing the database worked, but accessing web_local or the simulated cameras failed (those use container hostnames, which are unavailable at configuration time).

Hacky Solution:

  • Specifically code the database and camera access paths to bypass any proxy settings. Not desirable; it might have unintended consequences.
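The "bypass proxy in code" workaround can be sketched as follows: with Python's standard library, an empty ProxyHandler makes urllib ignore any http_proxy/https_proxy environment variables entirely, so in-cluster calls (for example to the database service) never reach the corporate proxy. The database URL shown in the comment is an assumption for illustration.

```python
import urllib.request

# An empty ProxyHandler disables proxying regardless of the environment,
# so requests go directly to in-cluster hostnames like 'database'.
no_proxy_handler = urllib.request.ProxyHandler({})
opener = urllib.request.build_opener(no_proxy_handler)
# opener.open("http://database:9200/_cluster/health")  # assumed URL; not executed here
print(no_proxy_handler.proxies)  # {} -> no proxy will be consulted
```

The downside noted above still applies: any request made through this opener also bypasses the proxy when it genuinely needs one (e.g. reaching the public internet from behind the firewall).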

specify and/or check docker version installed

There is no check for a specific Docker version. The node I tried was on an older Docker release, and I ended up getting this error:

Error parsing reference: "centos:7.6.1810 as build" is not a valid repository/tag: invalid reference format

Removing the old Docker release and upgrading to 1.19 fixed this problem. There should be a Docker version check to validate that you are on the expected one. I don't remember seeing a required version number in the readme (my bad if it's there).
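The "centos:7.6.1810 as build" parse error is the signature of a Docker release that predates multi-stage builds (the `FROM ... AS build` syntax), which arrived in Docker 17.05. A minimal version-gate sketch; in a real check you would feed it the output of `docker version --format '{{.Server.Version}}'`:

```python
def supports_multistage(docker_version: str) -> bool:
    """Multi-stage builds (FROM ... AS build) require Docker 17.05 or newer."""
    # Compare only the major.minor components; suffixes like '-ce' are ignored.
    major, minor = (int(part) for part in docker_version.split(".")[:2])
    return (major, minor) >= (17, 5)

for v in ("1.13.1", "17.05.0-ce", "19.03.8"):
    print(v, supports_multistage(v))
# 1.13.1 False
# 17.05.0-ce True
# 19.03.8 True
```

A check like this at the top of the build scripts would turn the cryptic parse error into an actionable "please upgrade Docker" message.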
