
Comments (13)

snf commented on July 21, 2024

--deepest should only be used with the already-trained network when testing. The fractal paths should be used during training instead, but something in the random column selection is failing.
I have a GPU again, so I will try to debug it and fix the code for the latest Keras.
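For context, FractalNet's drop-path samples which columns survive at each join (local drop-path) or keeps a single column for the whole network (global drop-path). A minimal numpy sketch of that idea, with hypothetical helper names rather than the repo's actual code:

```python
import numpy as np

def sample_drop_path(num_cols, p_drop=0.15, rng=None):
    """Local drop-path: drop each column independently with
    probability p_drop, but always keep at least one column.
    (Illustrative helper, not the repo's implementation.)"""
    if rng is None:
        rng = np.random.default_rng()
    mask = rng.random(num_cols) >= p_drop
    if not mask.any():
        mask[rng.integers(num_cols)] = True  # never drop every column
    return mask

def sample_global_column(num_cols, rng=None):
    """Global drop-path: keep exactly one randomly chosen column."""
    if rng is None:
        rng = np.random.default_rng()
    mask = np.zeros(num_cols, dtype=bool)
    mask[rng.integers(num_cols)] = True
    return mask
```

If a mask like this ever ends up all-False, or the surviving activations are not rescaled consistently at the join, the merged output can collapse or go NaN, which matches the symptoms described in this thread.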

from keras-fractalnet.

snf commented on July 21, 2024

How are you running it and with which versions of Keras, Theano/TF?


sixsamuraisoldier commented on July 21, 2024

TensorFlow 0.11 and the latest Keras version. That shouldn't make a difference though, right?


snf commented on July 21, 2024

Maybe. I know that the master version of Theano was breaking this code: they changed something in the random generator, so one of the calculations was producing NaN. I don't know about TF.
Unfortunately I don't have access to a CUDA-enabled PC to run further tests.
My last working environment for this code was Theano 0.8.2 with Keras 1.0.6.
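For anyone trying to reproduce that environment, a hypothetical way to pin it (assuming those versions are still installable from PyPI):

```shell
# Pin the last known-working versions mentioned above
pip install Theano==0.8.2 Keras==1.0.6
# then set "backend": "theano" in ~/.keras/keras.json
```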


chayitw commented on July 21, 2024

With TF 1.0.0 and Keras 1.2.2, I hit the issue mentioned by sixsamuraisoldier.

My "~/.keras/keras.json" is:
{
    "image_dim_ordering": "th",
    "image_data_format": "channels_first",
    "epsilon": 1e-07,
    "floatx": "float32",
    "backend": "tensorflow"
}

and part of the training-loss output is below:
($ python cifar10_fractal.py)

Using TensorFlow backend.
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcurand.so.8.0 locally
(50000, 3, 32, 32)
... ...
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
name: GeForce GTX 1080
major: 6 minor: 1 memoryClockRate (GHz) 1.759
pciBusID 0000:01:00.0
Total memory: 7.92GiB
Free memory: 7.75GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0: Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0)
100/50000 [..............................] - ETA: 2020s - loss: 3.2465 - acc: 0.1400I tensorflow/core/common_runtime/gpu/pool_allocator.cc:247] PoolAllocator: After 3240 get requests, put_count=3165 evicted_count=1000 eviction_rate=0.315956 and unsatisfied allocation rate=0.362654
2300/50000 [>.............................] - ETA: 463s - loss: 2.6815 - acc: 0.1148I tensorflow/core/common_runtime/gpu/pool_allocator.cc:247] PoolAllocator: After 3283 get requests, put_count=3547 evicted_count=1000 eviction_rate=0.281928 and unsatisfied allocation rate=0.231191
50000/50000 [==============================] - 457s - loss: 2.3915 - acc: 0.0998 - val_loss: 2.3504 - val_acc: 0.1000
Epoch 2/400
50000/50000 [==============================] - 450s - loss: 2.3458 - acc: 0.0987 - val_loss: 2.3234 - val_acc: 0.1000
Epoch 3/400
50000/50000 [==============================] - 450s - loss: 2.3364 - acc: 0.1001 - val_loss: 2.3187 - val_acc: 0.1000
Epoch 4/400
50000/50000 [==============================] - 449s - loss: 2.3288 - acc: 0.0993 - val_loss: 2.3081 - val_acc: 0.1000
Epoch 5/400
50000/50000 [==============================] - 449s - loss: 2.3204 - acc: 0.1002 - val_loss: 2.3078 - val_acc: 0.1000
... ... (epochs 6-115 omitted; loss drifts from ~2.32 down to ~2.303 and val_acc stays at 0.1000) ... ...
Epoch 116/400
50000/50000 [==============================] - 454s - loss: 2.3028 - acc: 0.0980 - val_loss: 2.3027 - val_acc: 0.1000
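Side note: a loss pinned near 2.3026 together with 0.1000 accuracy is the signature of a model making effectively uniform guesses over CIFAR-10's 10 classes, since the cross-entropy of a uniform prediction is ln(10):

```python
import math

# Cross-entropy of a uniform prediction over 10 classes: -ln(1/10) = ln(10)
num_classes = 10
print(round(-math.log(1.0 / num_classes), 4))  # 2.3026
```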


snf commented on July 21, 2024

I don't have an ML box right now to check it again. I'm pretty sure all the problems are in the random column selection, as it's a little hackish.
I hope to have one soon and will try to fix the code :)


WayneZww commented on July 21, 2024

Actually I hit this problem too, but when I add --deepest to the command it works fine (my backend is TensorFlow). I ran 40 epochs and reached an accuracy of 0.69 by epoch 20 of 40, but I did not run further since it costs a lot of time.


WayneZww commented on July 21, 2024

I am thinking there may be a problem in the dropout step, but I am still not very familiar with it, so I will look into that today as well.
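One quick way to sanity-check the dropout hypothesis: inverted dropout should preserve the expected activation scale at train time. A minimal numpy sketch (the drop rate is assumed here, and this is not the repo's code):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.ones(100_000)
keep_prob = 0.85  # assumes a drop rate of 0.15, as in the FractalNet paper
mask = rng.random(x.shape) < keep_prob
y = x * mask / keep_prob  # inverted dropout rescales so E[y] == E[x]
print(abs(y.mean() - 1.0) < 0.02)  # True
```

If the implementation drops units without the 1/keep_prob rescale (or rescales at the wrong time), activation magnitudes shift between training and testing, which can produce exactly this kind of non-learning plateau.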


aicentral commented on July 21, 2024

I see the same with Keras/TF, in addition to a couple of bugs related to using TF as a backend; some parts of the code will run only with Theano as the backend. I can work on fixing those TF bugs.


snf commented on July 21, 2024

I guess this was fixed by #4 and #5, so I'm closing it. Please reopen if that wasn't the case.
Edit: all credit goes to @aicentral


ftyuuu commented on July 21, 2024

@WayneZww @aicentral I have the same problem: when I set global_p, the loss is low but the accuracy doesn't increase. Did you solve this problem?


EigenvectorOfFate commented on July 21, 2024

I have the same problem with Keras/TF: the loss does not decrease and the model seems to predict the same label for everything, so validation accuracy is 0.1. @snf


snf commented on July 21, 2024

Hi @EigenvectorOfFate, unfortunately too much has changed in Keras and TF since I wrote this code.
I don't have time to look into it, but I'm very happy to accept a PR that gets it working again.

