
geoman's Introduction

GeoMAN

GeoMAN: Multi-level Attention Networks for Geo-sensory Time Series Prediction.

An easy implementation of GeoMAN in TensorFlow, tested on CentOS 7 and Windows Server 2012 R2.

→ PyTorch version

Paper

Yuxuan Liang, Songyu Ke, Junbo Zhang, Xiuwen Yi, Yu Zheng, "GeoMAN: Multi-level Attention Networks for Geo-sensory Time Series Prediction", IJCAI, 2018.

If you find this code and dataset useful for your research, please cite our paper:

@inproceedings{ijcai2018-476,
  title     = {GeoMAN: Multi-level Attention Networks for Geo-sensory Time Series Prediction},
  author    = {Yuxuan Liang and Songyu Ke and Junbo Zhang and Xiuwen Yi and Yu Zheng},
  booktitle = {Proceedings of the Twenty-Seventh International Joint Conference on
               Artificial Intelligence, {IJCAI-18}},
  publisher = {International Joint Conferences on Artificial Intelligence Organization},
  pages     = {3428--3434},
  year      = {2018},
  month     = {7},
  doi       = {10.24963/ijcai.2018/476},
  url       = {https://doi.org/10.24963/ijcai.2018/476},
}

Dataset [Click here]

The datasets we use for model training are detailed in Section 4.1 of our paper; they are still being processed for release. In the meantime, you can use the sampled data (100 instances) in the "sample_data" folder. You can also test the code with the air quality data from 2014/5/1 to 2015/4/30 used in our previous research, i.e., KDD-13 and KDD-15. For more datasets, visit the homepage of the Urban Computing Lab at JD Group.

Code Usage

Preliminary

GeoMAN uses the following dependencies:

  • TensorFlow >= 1.5.0
  • numpy and scipy.
  • CUDA 8.0 or a later version; cuDNN is highly recommended.
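Assuming a standard pip setup (the package names below are the usual PyPI ones, not confirmed by the repository; the GPU build additionally needs a matching CUDA/cuDNN install), the dependencies can be installed with:

```shell
# TensorFlow 1.x API (the code predates TF 2), plus numpy and scipy
pip install "tensorflow>=1.5.0,<2.0" numpy scipy
```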

Code Framework

Model Input

The model has the following inputs:

  • local_inputs: the input of local spatial attention, shape->[batch_size, n_steps_encoder, n_input_encoder]
  • global_inputs: the input of global spatial attention, shape->[batch_size, n_steps_encoder, n_sensors]
  • external_inputs: the input of external factors, shape->[batch_size, n_steps_decoder, n_external_input]
  • local_attn_states: shape->[batch_size, n_input_encoder, n_steps_encoder]
  • global_attn_states: shape->[batch_size, n_sensors, n_input_encoder, n_steps_encoder]
  • labels: ground truths, shape->[batch_size, n_steps_decoder, n_output_decoder]
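To make the shapes above concrete, here is a sketch that builds zero-filled placeholder arrays; all dimension values are illustrative (the sample data appears to use 35 sensors and 19 local series, but the rest are made up for this example):

```python
import numpy as np

# Hypothetical dimensions, for illustration only
batch_size = 4
n_steps_encoder = 12     # length of the encoder input window
n_steps_decoder = 6      # length of the prediction horizon
n_input_encoder = 19     # local time series per sensor
n_sensors = 35           # number of sensors in the network
n_external_input = 8     # external factors (weather, time of day, ...)
n_output_decoder = 1     # predicted series

local_inputs = np.zeros((batch_size, n_steps_encoder, n_input_encoder))
global_inputs = np.zeros((batch_size, n_steps_encoder, n_sensors))
external_inputs = np.zeros((batch_size, n_steps_decoder, n_external_input))
# note: the attention states are time-last (feature-major) layouts
local_attn_states = np.zeros((batch_size, n_input_encoder, n_steps_encoder))
global_attn_states = np.zeros((batch_size, n_sensors, n_input_encoder, n_steps_encoder))
labels = np.zeros((batch_size, n_steps_decoder, n_output_decoder))
```

Note the transposed layout of the attention states relative to the inputs: the time axis comes last, so each row is one series over the whole encoder window.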

Guide

The model implementation mainly lies in "GeoMAN.py" and "base_model.py", both of which are well commented. To train or test our model, please follow the provided notebooks.

License

GeoMAN is released under the MIT License (refer to the LICENSE file for details).

geoman's People

Contributors

castleliang, yoshall


geoman's Issues

Meanings about local state, global inputs, global state

Hi yoshall,

Thanks for your contribution. I have some questions about the code and the work; I hope you can give me a hand.

Part 1: According to the sample_data, I think there are 35 nodes, and each node generates 19 time series. However, I am a little confused about the meaning of the input files:

  1. I notice that the code reads 7 files and processes them to generate the training inputs. While the meanings of local_inputs, external_inputs and decoder_gts are obvious, I don't know what global_attn_state and local_attn_state are. Why not generate them from the raw data (for example, from the local inputs and global inputs)?
  2. What is global_inputs.npy? Why is its shape 500×35?
  3. Is global_attn_state_indics the same as global_inputs_indics? I noticed that in the get_batch_feed_dict() function, train_global_inp = training_data[1], which is in fact global_attn_index.

Part 2: Besides, I think the model generates predictions for each node separately. Do you train a separate model for each node, or one unified model?

Part 3: A suggestion: I hope you could add some explanation of the input files, as they differ from the raw inputs, and perhaps also publish the code that processes your raw data (http://urban-computing.com/data/Data-1.zip).

Looking forward to your answer, and thank you for your patience.

How are POIs processed?

I am using the air quality data from JD Lab's Data_1, which contains no POI data. Mainly, I want to know: are POIs used only in the external module? Are they category-level statistics of the POI data for all of Beijing? If so, aren't they just fixed values? Would they then have any bearing on the PM2.5 prediction?
In short, I'd like to know exactly how the POIs are processed and used.

'HParams' object has no attribute 'override_from_dict'

Hi, yoshall:

I've installed tensorflow-gpu 1.2.1 and CUDA 8.0. An error occurred at line 30 while running **train_model.py**. Line 30 is **hps.override_from_dict(hps_dict)** and the error is **'HParams' object has no attribute 'override_from_dict'**. I've tried some other versions of TensorFlow, but the error persists. Can you give me advice on how to solve this problem?

Jack
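The simplest fix is to upgrade TensorFlow to >= 1.5.0 as the README requires, since `override_from_dict` was only added to `HParams` in later releases. Failing that, a generic fallback (a sketch, not from the repository) is to set the hyperparameters one by one via `setattr`:

```python
def override_from_dict(hps, hps_dict):
    """Fallback for params objects that lack override_from_dict:
    overwrite each hyperparameter attribute individually."""
    for name, value in hps_dict.items():
        setattr(hps, name, value)
    return hps

# usage with any attribute-style params object (Params is a stand-in here)
class Params(object):
    pass

hps = Params()
override_from_dict(hps, {'n_steps_encoder': 12, 'dropout_rate': 0.3})
```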

Model not saved in train_model.py

Hi yoshall,

I'd like to run your code using sample_data. However, I ran into some trouble. You can see this output:
total parameters: 554054 ----------epoch 0----------- ----------epoch 1----------- ----------epoch 2----------- ----------epoch 3----------- ----------epoch 4----------- ----------epoch 5----------- ----------epoch 6----------- ----------epoch 7----------- ----------epoch 8----------- ----------epoch 9----------- ----------epoch 10----------- ----------epoch 11----------- ----------epoch 12----------- ----------epoch 13----------- ----------epoch 14----------- stop training !!!
Maybe this code did not run:

if valid_loss < min(valid_losses[:-1]):
    print('iter {}\tvalid_loss = {:.6f}\tmodel saved!!'.format(iter, valid_loss))
    saver.save(sess, model_dir + 'model_{}.ckpt'.format(iter))
    saver.save(sess, model_dir + 'final_model.ckpt')
else:
    print('iter {}\tvalid_loss = {:.6f}\t'.format(iter, valid_loss))

So, the previously saved model cannot be loaded when running the test_model.py file:
NotFoundError (see above for traceback): Unsuccessful TensorSliceReader constructor: Failed to find any matching files for ./logs/GeoMAN-12-6-2-128-0.30-0.001/saved_models/final_model.ckpt [[Node: save/RestoreV2 = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_INT32, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]
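Since train_model.py only saves when the validation loss improves, no checkpoint exists if that branch never ran. Before restoring in test_model.py, it is safer to check that a checkpoint file is actually present. A sketch (the directory path and helper name are hypothetical, not from the repository):

```python
import os

def find_checkpoint(model_dir, name='final_model.ckpt'):
    """Return the checkpoint prefix if TensorFlow wrote one, else None.
    TF >= 0.12 saves 'final_model.ckpt.index' / '.data-*' files, so we
    test for the index file as well as the bare prefix."""
    prefix = os.path.join(model_dir, name)
    if os.path.exists(prefix + '.index') or os.path.exists(prefix):
        return prefix
    return None

ckpt = find_checkpoint('./logs/saved_models/')
if ckpt is None:
    print('no checkpoint found: train first, or relax the early-stopping condition')
```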
Look forward to receiving your kind reply soon. Thank you very much.
best wishes,
wendong

In the code, are the cell state and hidden state concatenated?

In the GeoMAN.py, Line 262 to Line 273:

# multiply attention weights with the original input
local_x = local_attn * local_inp
global_x = global_attn * global_inp

# Run the BasicLSTM with the newly input
cell_output, state = cell(tf.concat([local_x, global_x], axis=1), state)

# Run the attention mechanism.
with tf.variable_scope('local_spatial_attn'):
    local_attn = local_attention(state)
with tf.variable_scope('global_spatial_attn'):
    global_attn = global_attention(state)
attn_weights.append((local_attn, global_attn))

Does this only consider the cell state when calculating attention? In Equation (1) of the paper, the cell state and hidden state are concatenated.
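For reference, Equation (1) of the paper scores each local series against the concatenation [h_{t-1}; s_{t-1}] of the previous hidden and cell states. A numpy sketch of that scoring step (all dimension names and the random weights W, U, v are illustrative, not the repository's values):

```python
import numpy as np

def local_attention_scores(h_prev, s_prev, x_local, W, U, v):
    """Eq. (1)-style scoring: e_k = v^T tanh(W [h; s] + U x_k),
    followed by a softmax over the n_input local series."""
    hs = np.concatenate([h_prev, s_prev])   # concatenated [h; s], [2*n_hidden]
    e = np.array([v @ np.tanh(W @ hs + U @ x_k) for x_k in x_local])
    e = e - e.max()                         # shift for numerical stability
    a = np.exp(e)
    return a / a.sum()                      # attention weights, sum to 1

n_hidden, n_steps, n_input = 8, 12, 5
rng = np.random.RandomState(0)
attn = local_attention_scores(
    rng.randn(n_hidden), rng.randn(n_hidden),       # h_{t-1}, s_{t-1}
    rng.randn(n_input, n_steps),                    # local series over the window
    rng.randn(n_steps, 2 * n_hidden),               # W
    rng.randn(n_steps, n_steps),                    # U
    rng.randn(n_steps))                             # v
```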

Input files in GeoMAN/utils.py?

Hi Yoshall,

I'd like to apply your project to my own data. However, I ran into some trouble with the metadata of the input files. From the uploaded code, I can only see the input file names in GeoMAN/utils.py; could you please upload a small sample of your input files for reference?

Look forward to receiving your kind reply soon. Thank you very much.

Gabby

Dataset release day

Hello, I was curious when you plan to release the datasets; I'm interested in using them as well.

Thank you in advance.

The performance increased with time

Hi yoshall,

We ran your project on our own dataset and have a question about the test errors. We set n_encoder_steps and n_decoder_steps to T = τ = 12 during the training phase. Generally, when testing such a model, the test error increases over the 12 prediction timesteps. In our case, however, the errors decreased, and the 12th timestep had the lowest test error.
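To inspect this behaviour, it helps to compute the error per decoder step rather than a single aggregate. A sketch, assuming predictions and labels shaped [n_samples, n_steps_decoder] (the toy values below are made up):

```python
import numpy as np

def rmse_per_step(preds, labels):
    """RMSE at each decoder timestep: average over samples but keep the
    time axis, so the error curve over the horizon is visible."""
    return np.sqrt(np.mean((preds - labels) ** 2, axis=0))

# toy check: constant per-step errors that grow with the horizon
labels = np.zeros((100, 12))
preds = np.arange(1, 13) * 0.1 * np.ones((100, 12))
print(rmse_per_step(preds, labels))  # 0.1, 0.2, ..., 1.2
```

Plotting this curve for each trained model makes it easy to see whether the error really decreases with the prediction step or whether the aggregation is hiding something.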

Looking forward to your reply and thank you very much.
