
TADW


An implementation of Network Representation Learning with Rich Text Information. Text Attributed Deep Walk (TADW) is a node embedding algorithm which learns an embedding of nodes and fuses the node representations with node attributes. The procedure places nodes in an abstract feature space where information about a fixed-order proximity is preserved, and the attributes of neighbours within that proximity are also part of the representation. TADW learns the joint feature-proximal representations using regularized non-negative matrix factorization. Our implementation assumes that the proximity matrix used in the approximation is sparse, hence the solution runtime can be linear in the number of nodes for low proximity orders. For a large proximity order (larger than the graph diameter) the runtime is quadratic. The model can assume that the node-feature matrix is either sparse or dense, which changes the runtime considerably.
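For reference, the factorization objective from the paper, in the paper's notation, is shown below; M is the node-proximity matrix, T is the text-feature matrix, and the final representation of a node concatenates the corresponding columns of W and HT:

    \min_{W,H} \; \left\| M - W^{\top} H T \right\|_F^{2} + \frac{\lambda}{2} \left( \left\| W \right\|_F^{2} + \left\| H \right\|_F^{2} \right)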

The model is now also available in the package Karate Club.
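A minimal usage sketch with Karate Club follows (assuming the current karateclub API; constructor defaults and signatures may differ across versions):

    import networkx as nx
    import numpy as np
    from karateclub import TADW

    # A toy graph with nodes indexed 0..n-1, as Karate Club expects.
    graph = nx.newman_watts_strogatz_graph(100, 10, 0.2)

    # A random dense node-feature matrix standing in for text features.
    features = np.random.uniform(0, 1, (100, 200))

    model = TADW()
    model.fit(graph, features)
    embedding = model.get_embedding()  # one row of representations per node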

This repository provides an implementation for TADW as described in the paper:

Network Representation Learning with Rich Text Information. Cheng Yang, Zhiyuan Liu, Deli Zhao, Maosong Sun, and Edward Y. Chang. IJCAI, 2015. https://www.ijcai.org/Proceedings/15/Papers/299.pdf


The original MATLAB implementation is available [here], while another Python implementation is available [here].

Requirements

The codebase is implemented in Python 2.7. The package versions used for development are listed below.

networkx          2.4
tqdm              4.28.1
numpy             1.15.4
pandas            0.23.4
texttable         1.5.0
scipy             1.1.0
argparse          1.1.0

Datasets

The code takes an input graph in a csv file. Every row indicates an edge between two nodes separated by a comma. The first row is a header. Nodes should be indexed starting with 0. Sample graphs for the `Wikipedia Chameleons` and `Wikipedia Giraffes` datasets are included in the `input/` directory.
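For illustration, a minimal loader for this edge list format might look as follows (the helper name is hypothetical, not the repository's exact code):

    import networkx as nx
    import pandas as pd

    def read_graph(edge_path="input/chameleon_edges.csv"):
        # Skip the header, then treat each row as a (node, node) edge pair.
        edges = pd.read_csv(edge_path).values.tolist()
        return nx.from_edgelist(edges)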

The feature matrix can be stored two ways:

If the feature matrix is sparse and binary, it is stored as a JSON file. Nodes are the keys of the JSON and features are the values: for each node, the ids of its active feature columns are stored as elements of a list. The feature matrix is structured as:

{ 0: [0, 1, 38, 1968, 2000, 52727],
  1: [10000, 20, 3],
  2: [],
  ...
  n: [2018, 10000]}
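A hedged sketch of turning this JSON layout into a scipy sparse matrix (the helper name and the inferred shape are illustrative assumptions):

    import json
    import numpy as np
    from scipy.sparse import coo_matrix

    def read_sparse_features(feature_path="input/chameleon_features.json"):
        # Keys are node ids, values are lists of active feature column ids.
        features = json.load(open(feature_path))
        pairs = [(int(node), int(col))
                 for node, cols in features.items() for col in cols]
        rows = [node for node, _ in pairs]
        cols = [col for _, col in pairs]
        shape = (max(rows) + 1, max(cols) + 1)
        return coo_matrix((np.ones(len(pairs)), (rows, cols)), shape=shape)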

If the feature matrix is dense, it is assumed to be stored as a csv with comma separators. It has a header, the first column contains the node identifiers, and the rows are sorted by these identifiers. It should look like this:

NODE ID   Feature 1   Feature 2   Feature 3   Feature 4
0         3           0           1.37        1
1         1           1           2.54        -11
2         2           0           1.08        -12
3         1           1           1.22        -4
...       ...         ...         ...         ...
n         5           0           2.47        21
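A corresponding sketch for the dense layout (again a hypothetical helper, assuming the node id column comes first):

    import numpy as np
    import pandas as pd

    def read_dense_features(feature_path="input/giraffe_features.csv"):
        features = pd.read_csv(feature_path)
        # Sort by the node id column, then drop it, keeping features only.
        features = features.sort_values(features.columns[0])
        return np.array(features)[:, 1:]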

Options

Learning of the embedding is handled by the src/main.py script which provides the following command line arguments.

Input and output options

  --edge-path      STR      Input graph path.           Default is `input/chameleon_edges.csv`.
  --feature-path   STR      Input features path.        Default is `input/chameleon_features.json`.
  --output-path    STR      Embedding path.             Default is `output/chameleon_tadw.csv`.

Model options

  --dimensions     INT        Number of embedding dimensions.                    Default is 32.
  --order          INT        Order of adjacency matrix powers.                  Default is 2.
  --iterations     INT        Number of gradient descent iterations.             Default is 200.
  --alpha          FLOAT      Learning rate.                                     Default is 10**-6.
  --lambd          FLOAT      Regularization term coefficient.                   Default is 1000.0.
  --lower-control  FLOAT      Overflow control parameter.                        Default is 10**-15.
  --features       STR        Structure of the feature matrix.                   Default is `sparse`.

Examples

The following commands learn a graph embedding and write the embedding to disk. The node representations are ordered by node ID.

Creating a sparse TADW embedding of the default dataset with the default hyperparameter settings. Saving the embedding at the default path.

$ python src/main.py

Creating a TADW embedding of the default dataset with 128x2 dimensions (the output concatenates two 128-dimensional halves) and approximation order 1.

$ python src/main.py --dimensions 128 --order 1

Creating a TADW embedding with high regularization.

$ python src/main.py --lambd 2000

Creating an embedding of another dataset with dense features, the Wikipedia Giraffes. Saving the output in a custom folder.

$ python src/main.py --edge-path input/giraffe_edges.csv --feature-path input/giraffe_features.csv --output-path output/giraffe_tadw.csv --features dense


tadw's Issues

node embeddings are all zero

Hi, thanks for the Python implementation of TADW. I tried to evaluate the result of TADW on the giraffe dataset. I made the following changes to the code:

  • change the file paths of --edge-path, --feature-path and --output-path to the giraffe dataset.
  • change --features to dense.

All other parameters were kept unchanged.

Here are embeddings I get:

id,X_0,X_1,X_2,X_3,X_4,X_5,X_6,X_7,X_8,X_9,X_10,X_11,X_12,X_13,X_14,X_15,X_16,X_17,X_18,X_19,X_20,X_21,X_22,X_23,X_24,X_25,X_26,X_27,X_28,X_29,X_30,X_31,X_32,X_33,X_34,X_35,X_36,X_37,X_38,X_39,X_40,X_41,X_42,X_43,X_44,X_45,X_46,X_47,X_48,X_49,X_50,X_51,X_52,X_53,X_54,X_55,X_56,X_57,X_58,X_59,X_60,X_61,X_62,X_63
0.0,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,17.5561640193,19.0393369854,17.9668565593,17.8095968136,15.7288687625,18.2842039046,18.8672905625,19.0908401457,17.3428634989,17.9861489809,19.2221152214,18.8880063305,18.9347242007,16.3956388766,18.0626110202,17.7801313876,19.6118153724,19.6522067374,15.5831001956,17.9466391123,18.9412024245,18.6910995146,18.4319755594,17.7823144487,16.1617267138,16.632035156,17.7868636472,19.144048575,16.2448227439,15.8837429851,16.8948444098,20.7434825928
1.0,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,19.418857302,21.1776514057,16.7330740863,20.7961903049,17.4019876294,21.4364204382,20.5935195948,20.8804407443,19.4307565643,21.2201610956,19.8779685732,21.658861326,20.3617437599,18.0326615246,19.5013417891,20.4816495277,20.1702035628,19.1156188389,17.2827884159,20.5683601985,22.6392520365,20.0903648231,19.9298201164,21.3580573765,18.7907836501,19.7752476486,19.9652307795,20.5689666432,18.7515709792,17.0995186355,18.5041452107,20.1793805727
2.0,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,15.8655501853,15.2614582941,15.1237360692,14.5464554457,15.518018525,15.1216947038,14.9469130927,15.2269697516,14.2152368637,15.2137422453,16.0391038165,15.1560322943,16.9188702633,15.134795549,14.7140644724,14.9903461376,16.4724884343,15.785330147,14.6277017312,16.4522976424,16.0744194761,15.5419878179,15.7469767053,14.1617998808,14.5053627042,14.7402181088,15.7792771715,14.8755418723,15.1000829619,13.7796906747,15.21955654,16.8494631693
3.0,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,25.2035641617,25.7037428377,24.1046696175,23.1220193021,23.7414492738,23.8769181891,23.2346393145,24.6325339183,23.9417420937,25.4818110078,26.0248145741,26.4453890377,25.7432762261,23.3298515663,24.8734525125,24.0201878555,24.1759821777,25.4479371142,23.0832849934,25.8504544394,25.8230451379,26.0038499866,25.2155392266,24.2239254842,24.2414188799,24.0078842097,25.1042005095,24.3209910274,24.6705904816,21.8627201866,24.0299475081,24.0038341944
4.0,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,1e-15,10.4560723562,12.5196933928,10.6806982438,13.076437942,9.72079032744,8.94961917529,12.7237842975,10.9228435878,10.3742308348,12.0424505048,12.2636346198,12.6850735313,9.37093654152,7.06472852324,13.0849199387,12.7714280378,11.8251953826,10.8370930474,10.0865815703,11.7727932304,11.2289073837,10.3189405835,12.5184795748,14.4196606615,11.7215457371,10.2559107664,11.5965745013,12.1475858113,9.22081082777,11.1381676022,12.4477135617,12.3033616545
...

It seems that the first half of the embedding dimensions are nearly zero. Can you help me solve this problem? Thanks.
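The constant 1e-15 matches the default --lower-control value (10**-15), which suggests one of the two concatenated 32-dimensional halves collapsed toward zero and was clipped at the overflow floor. A minimal illustration of how such clipping pins entries at exactly 1e-15, assuming the clipping is an elementwise maximum:

    import numpy as np

    lower_control = 10**-15
    # A factor block that has underflowed toward zero during gradient descent.
    W = np.full((5, 32), -1e-20)
    # Elementwise clipping at the overflow floor yields the 1e-15 values above.
    W_clipped = np.maximum(W, lower_control)
    print(W_clipped[0, :3])  # [1.e-15 1.e-15 1.e-15]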

Where is the text-feature matrix? What is the JSON file?

X = read_features(args.feature_path)

def read_features(feature_path):
    """
    Method to get dense node features.
    :param feature_path: Path to the node features.
    :return features: Node features.
    """
    features = pd.read_csv(feature_path)
    # Drop the node id column and transpose to a (features x nodes) layout.
    features = np.array(features)[:, 1:].transpose()
    return features

I think this should be the module that reduces the text TF-IDF information. According to the comment it returns dense node features; according to the README, the JSON file is the feature matrix. But the feature matrix of what: the node feature matrix or the text feature matrix? If it holds the network information, where is the text feature matrix? If it is a text feature matrix, why is it not the TF-IDF of the text data? How was the JSON file produced? Could you show the code for producing the JSON file?

Good work! But one question.

Hi @benedekrozemberczki,

Why can't I train similar nodes to similar embeddings?
What I did was train the node embedding twice on the same dataset. The same node should produce similar embeddings, right? I experimented with the given samples; it did not work. Then I created an even simpler dataset crafted by myself; that did not work either.

Leo
