
livescan3d's Introduction

LiveScan3D

LiveScan3D is a system designed for real-time 3D reconstruction using multiple Azure Kinect or Kinect v2 depth sensors simultaneously. The code for working with Kinect v2 is in the master branch and the v1.x.x releases. If you want to work with Azure Kinect, please use the appropriately named branch.

For both sensors the produced 3D reconstruction is in the form of a coloured point cloud, with points from all of the Kinects placed in the same coordinate system. The point cloud stream can be visualized, recorded or streamed to a HoloLens or any Unity application. The code for streaming to Unity and HoloLens is available in the LiveScan3D-Hololens repository.

Possible use scenarios of the system include:

  • capturing an object’s 3D structure from multiple viewpoints simultaneously,
  • capturing a “panoramic” 3D structure of a scene (extending the field of view of one sensor by using many),
  • streaming the reconstructed point cloud to a remote location,
  • increasing the density of a point cloud captured by a single sensor, by having multiple sensors capture the same scene.

You will also find a short presentation of LiveScan3D in the video below (click to go to YouTube): YouTube link

In our system each sensor is governed by a separate instance of a client app, which is connected to a server. The client apps can either run on separate machines or all on the same machine (only for Azure Kinect). The server allows the user to perform calibration, filtering, synchronized frame capture, and to visualize the acquired point cloud live.
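As a rough illustration of this topology (not the project's actual wire protocol), each client essentially holds one TCP connection to the server; a minimal sketch, with placeholder host and port, could look like this:

    // Sketch only: each sensor's client keeps a single TCP connection to the
    // server, over which calibration commands and frames travel. The port and
    // protocol details here are assumptions for illustration.
    using System.Net.Sockets;

    class SensorClient
    {
        TcpClient connection;

        public void Connect(string serverIp, int port)
        {
            connection = new TcpClient(serverIp, port);
        }
    }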

How to use it

To start working with our software you will need a Windows machine and at least one Kinect device. You can either build LiveScan3D from source, which requires Visual Studio 2019, or download the binary release. Both the binary and source distributions contain a manual (in the docs directory) with the steps necessary to get started (setup won't take more than a couple of minutes).

Where to get help

If you have any problems feel free to contact us: Marek Kowalski [email protected], Jacek Naruniec [email protected]. We usually answer emails quickly (our timezone is CET).

For details regarding the methods used in LiveScan3D you can take a look at our article: LiveScan3D: A Fast and Inexpensive 3D Data Acquisition System for Multiple Kinect v2 Sensors.

Licensing

While all of our code is licensed under the MIT license, the third-party libraries it uses come with their own licenses.

If you use this software in your research, then please use the following citation:

Kowalski, M.; Naruniec, J.; Daniluk, M.: "LiveScan3D: A Fast and Inexpensive 3D Data Acquisition System for Multiple Kinect v2 Sensors". in 3D Vision (3DV), 2015 International Conference on, Lyon, France, 2015

Authors

Marek Kowalski, Jacek Naruniec, Michał Daniluk

livescan3d's People

Contributors

blackghost1987, hexx2bin, jaceknaruniec, marekkowalski, vinjn


livescan3d's Issues

Cannot calibrate

When I try to calibrate the Kinect, it always shows a green border, even though I have already added the marker ID. Also, on the client side, the video captured by the Kinect is mirrored. I am stuck here.

Supported platforms and OS?

Hello,

Great project!

I'm getting the impression that this project is currently mainly developed for use with HoloLens - does it support any other OS, say Android or iOS?

What I ultimately want to do is implement this system on, e.g., an Android or Apple device, in a similar way to what people can see through HoloLens, i.e., the mobile device will use its camera to display the real-world image together with the point cloud read from the LiveScan3D server. Do you think this is possible?

Thanks in advance!
Chang

Read .bin

Hi, Marek!

I want to get the timestamp of each frame. Does it need to be obtained by reading the .bin file? I added a timestamp to the PLY writer, but that time is the time after processing, and I want the original capture time. Can you give me some suggestions? Thanks in advance.

How to convert the KinectServer C# code to C++ using Winsock2

Currently I'm working on passing Kinect v2 streams between different systems. My existing system, both client and server, was developed using C++ and OpenGL. I think this repository is better than my existing system, so I am willing to migrate everything to my codebase. Please help me convert the KinectServer C# code to C++, and suggest what steps are needed to create the Winsock receiver side.

Thanks ,
Kirubha

Symmetry Calibration Issue

Hey Marek, great project! I'm having a slight issue with the calibration. I am using two Kinects and two markers. I have followed the calibration instructions, but the first Kinect seems to be mirrored, so I don't have a full 360° view of a person. The first marker had 0 for all values in the calibration marker panel. I set the translation Y value of the second marker to 180, as any other alteration to the calibration marker panel would render two separate streams, whereas the 180 value on Y seemed to render them as one stream. I also changed the Z value of the translation to equal the length in meters of the box that the markers were on. It seemed that even if I removed that Z value and made it 0, it had no bearing on the calibration. Any help would be greatly appreciated!

Problems with calibration

Hi, we are having problems calibrating multiple Kinects: when we follow the steps in the documentation, the result is two point clouds with different calibrations. What are we doing wrong?
Could you please describe, step by step, how to do it?
Thanks!!
[screenshot: livescan3d-calibration]

Problem with colors in saved .bin files

I tried this app with good results, but I had trouble with the saved point cloud.
When I captured a human with the Azure Kinect, the .ply point cloud files were generated correctly, but in the .bin files human faces were saved in blue.
How can I fix it?

How are the clients synchronized?

Dear Marek,
As far as I understand, the clients acquire frames simultaneously and then send them to the server. The server waits until all clients have captured, then receives the information.
Now, I run the system with two Kinects: one client in Release mode (up to 30 fps) and one in Debug (5 fps).
Thus, the server will receive the latest frame from the faster client (Release) and the only frame from the slower Kinect (Debug).
How can I make sure that the two Kinect frames are quite similar? The observed results between the two cases (two Release Kinects vs. one Release and one Debug) are not much different; in both cases the system is quite well synchronized.
My only guess is that the Debug camera captures images at 5 fps but processes them very quickly, so the latest frame captured by the Release camera is almost the same as the frame captured by the Debug camera. Am I right?

Thank you,
Mark

Reduction of the streamed frame size.

The depth data is currently not being streamed in an efficient manner.
If its size were reduced, the live view would have a better framerate and the frame download after recording would be quicker.
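
As an illustration of one possible direction (not the project's actual protocol), 16-bit depth frames compress well with a general-purpose stream compressor; a minimal C# sketch using .NET's DeflateStream:

    using System;
    using System.IO;
    using System.IO.Compression;

    static class DepthCompression
    {
        // Sketch: pack a 16-bit depth frame into bytes and Deflate-compress it.
        // Depth images are mostly smooth, so this typically shrinks them a lot.
        public static byte[] CompressDepthFrame(ushort[] depth)
        {
            var raw = new byte[depth.Length * sizeof(ushort)];
            Buffer.BlockCopy(depth, 0, raw, 0, raw.Length);

            using (var ms = new MemoryStream())
            {
                using (var ds = new DeflateStream(ms, CompressionLevel.Fastest))
                    ds.Write(raw, 0, raw.Length);
                // MemoryStream.ToArray is still valid after the stream closes.
                return ms.ToArray();
            }
        }
    }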

A problem with settings

Hi,
I am using LiveScan3D with 4 Azure Kinects. When I set up the system I ran into some problems. Could you please tell me the meaning of orientation and translation in the settings menu?

Thanks!

Multiple Azure Kinects save recording problem

Thanks for sharing your project, it is great. I have a problem when trying to record the point cloud from multiple Azure Kinects. The cameras were calibrated and show up in the live view, and "merge scans" was enabled in the settings. After recording the point clouds, I checked the files in MeshLab: only one camera's point cloud was saved. Is this a bug, or am I missing something?

Replay saved stream (bin file) from a database

Hi, great project Marek! I was trying to store the .bin file of the recording in a database and then replay it in the iOS application I made. Since the project is only set up to work with a socket, do you have any suggestions on how to replay the recorded stream on demand (such as storing it in a database and calling to download/play it)? Do I need to extract the frames from the .bin file and replay them in a readable file format? Any help would be greatly appreciated!

Multiple Marker Calibration Problem

Hi,

Recently, I have tried to calibrate three Kinect v2 sensors placed around a small office, positioned roughly in an equilateral triangle.

Obviously, it is impossible to use only one marker to calibrate all of the Kinects, so I have to use two markers. But after adding two calibration markers in the LiveScanServer settings, I cannot get a correct calibration of the three Kinects.

I noticed that there are Orientation and Translation slots in the Calibration Marker settings. Does this mean I should provide the transformation between the different markers?

By the way, if I use the default Orientation and Translation (i.e., all six slots are 0), the calibration looks ambiguous, because two different markers are simultaneously located at the global coordinate origin.

Could you tell me what the standard procedure for multiple-marker calibration is? I cannot find this information in the manual.

LiveScanClient build from source code failed for Azure Kinect.

Hi @MarekKowalski,

I am trying to build the LiveScanClient binaries for Azure Kinect, and the build gives the error "Kinect.h" not found.

Also, when using the pre-compiled binaries with an Azure Kinect, the LiveScanClient gives the error "Capture Device failed to initialize!"

Could you please suggest how to use the Azure Kinect with this project and solve these issues?

Client is not connecting to the server

Hello,

I'm trying to use the application. I am able to run the LiveScanClient application successfully on two laptops, but I'm facing an issue while connecting the client to the server: even after starting the server, it shows the error "Failed to connect. Did you start the server?"

Could you please look into this?

Thanks a lot for such a great work!

Shamini

[screenshot: WhatsApp image, 2018-06-08]

Two Kinects on the same PC

Hi,
LiveScan3D is a great tool, thanks for sharing. I have two Kinects v2 connected to the same computer, and they run with some apps (like "iPi Recorder 4") without problems, thanks to UsbDk, a driver that allows plugging two (or more) Kinect v2 sensors into the same PC.

When I connect the second Kinect, LiveScan3D only recognizes the first one, regardless of whether they are on different ports/controllers. It would be useful (for showing at demos) not to have to work with two computers and two clients. Should LiveScan3D recognize the other Kinect? If not, is it possible (and how) to modify the source to acquire from different Kinects and run two instances of LiveScanClient? I'm not a C programmer, only basic stuff. Perhaps capturing with one machine would be slow, but it is interesting enough to try.

Thanks!

Low FPS on decent hardware

Hi,

I'm getting 2-4 fps on my i7 machine with a 1080Ti.

Steps I took:

  1. Fresh build using Visual Studio 2017
  2. Start the LiveScanServer
  3. Start LiveScan Client and connect
  4. Show Live from server window

Other Kinect apps get 30 fps on the same machine. CPU, GPU, and network activity all look normal. Am I missing something?

A few other issues I noticed: it seems the server doesn't fully shut down after closing. At least the lights on the Kinect remain on, and I see KinectService running in the background. I can close them in the task manager, no problem.

I was able to stream to Unity using your HoloLens project, which is really cool, but the low fps makes it not really usable for me.

Really cool project though, and tons of potential. Thanks for posting!

calibration.txt

Thank you for sharing this great work! It has been a great help for me.
I have obtained a translation vector and rotation matrix from the sensor coordinate system to the landmark coordinate system, but I don't know how to convert them into a Matrix4x4 in Unity3D.
Could you help me get the Matrix4x4 in Unity3D, or the rotation angles about the x, y, and z axes?
I would appreciate any suggestions you could make.
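
For reference, a common way to assemble such a matrix in Unity is sketched below. It assumes a row-major 3x3 rotation R and a translation t as read from calibration.txt (hypothetical names, not the LiveScan3D API), and ignores any handedness difference between the coordinate systems:

    using UnityEngine;

    // Sketch: build a Unity Matrix4x4 from a 3x3 rotation R (row-major) and a
    // translation t. Names are placeholders, not LiveScan3D types.
    public static class CalibrationUtil
    {
        public static Matrix4x4 ToMatrix4x4(float[,] R, Vector3 t)
        {
            var m = Matrix4x4.identity;
            for (int i = 0; i < 3; i++)
                for (int j = 0; j < 3; j++)
                    m[i, j] = R[i, j];
            m[0, 3] = t.x;
            m[1, 3] = t.y;
            m[2, 3] = t.z;
            return m;
        }

        // The rotation angles (in degrees) can then be read off the matrix
        // (Matrix4x4.rotation is available in Unity 2017.2+).
        public static Vector3 ToEulerAngles(Matrix4x4 m)
        {
            return m.rotation.eulerAngles;
        }
    }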

Linux makefiles and docs

You mentioned that this also runs on Linux, and that having multiple clients on one machine is only possible when using Linux. Would it be possible to make makefiles available for building this on Linux?

I managed to build LiveScanServer using MonoDevelop; however, it crashes when started.

Thanks!

Very slow

Hello, when I use LiveScan3D on my PC, the status bar shows an FPS of just 0.5-2. How can I make it faster? I have enabled "Stream only bodies" and disabled filtering. The client and server run on the same PC. I have watched the video on YouTube and it looks very smooth; how can I make it run smoothly on my PC?
Thanks.

Memory Leak in C++ CallBack Function

Currently I am writing code for a callback function to continuously get byte arrays from a CLI wrapper. My code:

    C++ side:

        // Declaration:
        void ReceivedSensor1ByteArray(unsigned char values[], int length);

        // Registering the callback:
        GetSensor1ColorsFromCsharp(&ReceivedSensor1ByteArray);

        // Definition:
        byte* sensor1bytevalues;

        void ReceivedSensor1ByteArray(unsigned char values[], int length)
        {
            if (length > 0)
            {
                // A fresh buffer is allocated on every callback; the previous
                // one is never freed.
                sensor1bytevalues = new byte[length];
                for (int i = 0; i < length; i++)
                {
                    sensor1bytevalues[i] = values[i];
                }
            }
        }

    CLI wrapper:

        // Declaration:
        public ref class SampleWrapper
        {
            SampleWrapper(void)
            {
                kinectSocketwrapperObj->ReadBytesValues +=
                    gcnew CLIWrapperClass::ByteValuesReady(this, &Wrapper::SampleWrapper::ByteArrayReadyMethod);
            }
        public:
            CLIWrapperClass ^ kinectSocketwrapperObj;
            static SampleWrapper ^ Instance = gcnew SampleWrapper();
            void ByteArrayReadyMethod(array<Byte> ^ values);
        };

        // Definition:
        GetByteArrayCallback byteArrayCallback;

        __declspec(dllexport) void GetSensor1ColorsFromCsharp(GetByteArrayCallback cb)
        {
            byteArrayCallback = cb;
            CLIWrapperClass ^ KinectServerWrapper = SampleWrapper::Instance->kinectSocketwrapperObj;
            KinectServerWrapper->ReceiveSensor1colors();
        }

        void SampleWrapper::ByteArrayReadyMethod(array<Byte> ^ values)
        {
            // Another per-call allocation that is never released.
            Byte *nativeValues = new Byte[values->Length];
            copyManagedByteToUnfloatArray(nativeValues, values);
            byteArrayCallback(nativeValues, values->Length);
        }

        void copyManagedByteToUnfloatArray(Byte target[], array<Byte> ^ values)
        {
            int maxSize = values->Length;
            if (maxSize > 0)
            {
                for (int index = 0; index < maxSize; index++)
                {
                    target[index] = values[index];
                }
            }
        }
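
        // (Hypothetical fix sketch, not part of the original question: both
        // `new byte[length]` and `new Byte[values->Length]` above are never
        // paired with a `delete[]`, so every callback leaks one buffer.
        // Releasing the native buffer once the callback has consumed it, e.g.
        //
        //     byteArrayCallback(nativeValues, values->Length);
        //     delete[] nativeValues;
        //
        // and freeing `sensor1bytevalues` before each reallocation, stops the
        // memory growth described below.)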

Actually, I receive byte data from C# through the CLI wrapper class and pass it to the C++ application in order to display the image frame. When I call the GetSensor1VerticesFromCSharp function continuously, the system memory keeps increasing, and after 10 minutes the system hangs. Please suggest how to solve this issue.

Thanks,
Kirubha

Azure Kinect calibration problem

Thanks for sharing this great work! I have compiled the Azure Kinect branch and am testing with two Azure Kinects. In the calibration step, both clients can detect the same marker id 0 (a green border appears around the marker in both clients), but after adding marker id 0 in the settings, calibration never works: the green border doesn't disappear and no calibration file is generated. Do I need to add the orientation and translation info? I would really appreciate any suggestions.

Skeleton handling

It would be very useful to include skeleton handling; this would allow for:

  • segmenting people inside the merged point cloud,
  • merging multiple skeletons to avoid occlusions,
  • calibrating the rig using skeleton instead of marker data (no need for marker printing, a user simply walks through the scene and the server infers relative sensor positions based on skeleton data).

I am trying the app, but have issues

The live view is simply a dark screen, nothing in it. I tried using the Kinect SDK 2D body sample, and it can capture my skeleton normally. I noticed that when the LiveScan server starts, the Kinect camera LED (white) is not activated; when using the Kinect SDK, the LED is activated as soon as the camera is connected.

Also, do I need to specify the server IP for the player?

Do I need a marker to calibrate the Kinect in order to use the server? And what does the bounding box mean here?

Thanks guys for this amazing job!!

Azure data corruption, scan merging, color problems, player crashing

Hello! First, of all, thank you so much for the work you have done on this, Marek! It was really fun to find this tool and get a chance to play with it. I've run into several issues that I'd like to note, two of which have already been posted in previous issues but seemingly are still an issue. I'm not sure if you're interested in still actively working on this software, but let me know!

  1. The only method for recording that has worked for me is ASCII ply with merged scans enabled. If I use binary ply and/or disable merge scans, then it fails to save all of the frames, and the frames it does save are badly corrupted. Many of the points are missing and their colors have turned to noise. Interestingly, when I save with ASCII ply and merge scans enabled, and then view the recording in the player and enable the save frames option, it appears to correctly export them as binary ply to the outplayer folder without corruption.

  2. When I do record with ASCII ply, I'm able to open the saved frames in both the LiveScan3DPlayer and MeshLab to check them (although MeshLab complains about an empty line in the header of the ASCII file, which can be easily deleted). However, even though I have "merge scans" enabled, it's clearly only saving the points from one of the 3 cameras I'm using. I can't adequately test what happens when I disable "merge scans," as noted above. This appears to be the same as Issue #41, if I'm understanding what they said correctly.

  3. As reported in Issue #47, there's a color problem (at least when saving as ASCII ply with merge scans). Just as they said, the .ply and .bin files seem to have a color channel problem, with the vertex colors appearing very blue. I can fix this in individual files by going into the ASCII ply and reordering the properties in the header as red, blue, green instead of the default red, green, blue (a sketch of such a header reorder appears after this issue). So, it seems like a simple fix that I could do in the source or in post.

  4. The LiveScan3DPlayer always crashes after playing. I have to close it from the task manager. I haven't done any work to track down exactly why, but it has happened so far in every circumstance that I've ever run it.

For me, fixing the second issue of the scan merging would be enough for me to be able to use this software for my project, so I'd love to know if you have any thoughts! Perhaps I'm doing something wrong or misunderstanding, which would be the best possible scenario!

Again, thanks very much,
Zach
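
For reference, the header reorder described in item 3 can be scripted; below is a minimal sketch (not part of LiveScan3D) that swaps the green and blue property declarations in an ASCII PLY header, so the file reads as red, blue, green. The paths and exact property strings are assumptions about the files at hand:

    using System;
    using System.IO;

    class FixPlyColors
    {
        // Usage: FixPlyColors <input.ply> <output.ply>
        static void Main(string[] args)
        {
            var lines = File.ReadAllLines(args[0]);
            // Only touch the header; data after "end_header" is left alone.
            for (int i = 0; i < lines.Length && lines[i] != "end_header"; i++)
            {
                if (lines[i] == "property uchar green")
                    lines[i] = "property uchar blue";
                else if (lines[i] == "property uchar blue")
                    lines[i] = "property uchar green";
            }
            File.WriteAllLines(args[1], lines);
        }
    }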

When will the AzureKinect branch be finalized?

This branch seems to be in an incomplete state. It compiles properly, but the server-client pair does not produce any RGBD data in the viewer, and upon saving point clouds it streams thousands of 1 KB PLY files to the out directory; it is impossible to stop this save process by clicking on "Stop Saving". The program is unresponsive and must be manually shut down. Without looking through the code, it appears you are missing the libraries for the Azure Kinect SDK v2.0 (specifically k4a and k4arecord).

Using LiveScan3D with Realsense D435

Hi,
I wanted to inquire whether LiveScan3D would work with the RealSense D400 series cameras.

Moving this issue to LiveScan3D-Hololens git.

Thanks and Best Regards

Using AzureKinect, PointCloud is upside down

Hi.

I am running with one Azure Kinect.
The communication with the server went well, but the point cloud displayed in the live view is upside down and mirrored left-right.
In AzureKinectViewer, it appears correctly.

I tried modifying the for statement in LiveScanServer.OpenGLWindow.OnUpdateFrame, but there was no change.

Any help would be greatly appreciated.

How can we use information about body joints?

I have come across variables like:

    public List<Body> lBodies = new List<Body>();
    tempBody.lJoints = new List<Joint>(nJoints);
    tempBody.lJointsInColorSpace = new List<Point2f>(nJoints);

How can I draw this information over the live view?
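
One rough way to do this (a sketch, not LiveScan3D's actual drawing code) is to paint the color-space joint positions onto the color frame; the Body type, lJointsInColorSpace, and their members are assumptions based on the snippet above:

    using System.Collections.Generic;
    using System.Drawing;

    // Sketch: overlay joints that are already projected into color-image
    // coordinates onto a color frame. The Bitmap source is up to the caller.
    static class SkeletonOverlay
    {
        public static void DrawBodies(Bitmap colorFrame, List<Body> bodies)
        {
            using (var g = Graphics.FromImage(colorFrame))
            {
                foreach (var body in bodies)
                    foreach (var joint in body.lJointsInColorSpace)
                        g.FillEllipse(Brushes.Red, joint.X - 4, joint.Y - 4, 8, 8);
            }
        }
    }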

send skeleton data to server?

Hello Marek

First, I want to thank you for this amazing project.
I noticed the software can handle skeleton data now, but I didn't see a skeleton in the live view window; I only saw the skeleton in the client window.
Could you please tell me how to send skeleton data to the server?

Thanks

How can I use your server application from C++?

Currently I am trying to pass Kinect v2 streams between different machines. I very much like your way of sending and receiving Kinect streams between machines, but your client application is in C++ and the server application is in C#.

Is there a version of the server application that you have developed in C++?

Please suggest how I can use your server application from C++.

Thanks ,
Kirubha

Binary PLY saving

The recorded files should be saved as binary PLY instead of text PLY. This will greatly improve the saving speed and reduce the size of the saved data.
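
For context, a binary PLY file differs from the ASCII variant only in the header's format line and in writing raw values after end_header; a minimal sketch of such a writer (not the project's actual exporter, and the Vertex struct is a placeholder):

    using System.IO;
    using System.Text;

    static class BinaryPlyWriter
    {
        // Placeholder vertex layout: position plus 8-bit color.
        struct Vertex { public float X, Y, Z; public byte R, G, B; }

        static void Write(string path, Vertex[] vertices)
        {
            using (var fs = new FileStream(path, FileMode.Create))
            {
                // Standard PLY header for per-vertex position and color.
                string header =
                    "ply\n" +
                    "format binary_little_endian 1.0\n" +
                    "element vertex " + vertices.Length + "\n" +
                    "property float x\nproperty float y\nproperty float z\n" +
                    "property uchar red\nproperty uchar green\nproperty uchar blue\n" +
                    "end_header\n";
                byte[] headerBytes = Encoding.ASCII.GetBytes(header);
                fs.Write(headerBytes, 0, headerBytes.Length);

                // BinaryWriter emits little-endian values, matching the header.
                using (var bw = new BinaryWriter(fs))
                {
                    foreach (var v in vertices)
                    {
                        bw.Write(v.X); bw.Write(v.Y); bw.Write(v.Z);
                        bw.Write(v.R); bw.Write(v.G); bw.Write(v.B);
                    }
                }
            }
        }
    }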

.bin file in sync while ply files out of sync for two camera recordings

Hello,

I encountered a problem while using LiveScan3D with two Azure Kinect cameras. When I play the .bin file in the player, the recording shows the two Kinect cameras in sync, like this: https://youtu.be/aFD6q2zI6XU However, when I choose to play the .ply sequence of the same recording in the player, the two Kinect cameras appear to be out of sync, like this: https://youtu.be/Ophyb6914p0
It appears that all my recordings are like this. I made sure before recording that both cameras were calibrated and temporal sync was enabled. Would you know why this is?

Thank you.


Timestamp Problem

Hello Marek.
Thanks for your code, it runs perfectly! However, I have a problem: how can I get the timestamp of the captured frame? I'm confused about where to modify the code. Please give me some suggestions! Thanks in advance.

Place the calibrated cameras in Unity

Hey!

I am using LiveScan3D to calibrate 2 Azure Kinects, and afterwards I wanted to place both cameras in Unity. To do this I extracted the rotation matrix and translation vector from the .txt files and made the necessary computations to define the rotation and translation of the camera objects in Unity (I attach the code). But the position and rotation in Unity didn't correspond to reality.

    // rotationMatrixCV = 3x3 rotation matrix; translation = translation vector
    var rotationMatrix = new Matrix4x4();
    for (int i = 0; i < 3; i++)
    {
        for (int j = 0; j < 3; j++)
        {
            rotationMatrix[i, j] = rotationMatrixCV[i][j];
        }
    }
    rotationMatrix[3, 3] = 1f;

    var localToWorldMatrix = Matrix4x4.Translate(translation) * rotationMatrix;

    Vector3 position;
    position.x = localToWorldMatrix.m03;
    position.y = localToWorldMatrix.m13;
    position.z = localToWorldMatrix.m23;
    transform.position = position;

    Vector3 forward;
    forward.x = localToWorldMatrix.m02;
    forward.y = localToWorldMatrix.m12;
    forward.z = localToWorldMatrix.m22;

    Vector3 upwards;
    upwards.x = localToWorldMatrix.m01;
    upwards.y = localToWorldMatrix.m11;
    upwards.z = localToWorldMatrix.m21;

    transform.rotation = Quaternion.LookRotation(forward, upwards);

I think the problem may be that the coordinate systems of Unity and LiveScan3D are different. Any suggestions would be appreciated!

Thank you in advance!
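
On the coordinate-system point: Unity uses a left-handed frame while most camera calibrations are right-handed, so a handedness conversion is often the missing step. The sketch below assumes the Y axis is the one to flip, which may not match LiveScan3D's actual convention:

    using UnityEngine;

    // Sketch: convert a right-handed pose into Unity's left-handed frame by
    // conjugating with an axis flip. Which axis to flip (Y here) is an
    // assumption, not a documented LiveScan3D convention.
    static class Handedness
    {
        public static Matrix4x4 RightToLeftHanded(Matrix4x4 rightHanded)
        {
            // F flips the Y axis; since F is its own inverse, the change of
            // basis is F * M * F.
            Matrix4x4 F = Matrix4x4.Scale(new Vector3(1f, -1f, 1f));
            return F * rightHanded * F;
        }
    }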

Manual needs finishing

The manual is still not finished; all of the functionalities need to be described in detail.

Calibration issue about calibration.txt

Hi,

Thanks for your nice work! I am using two Kinects and one marker to calibrate, and then get the file 'calibration.txt' on the server computer. I want to know the extrinsics and intrinsics, but I think the file only provides the extrinsics (T, R); I don't know if I have understood it wrong. Could you tell me the meaning of the parameters inside, and how to get the intrinsics?

Connecting to a VM on the Azure Cloud

I'm trying to connect the LiveScanClient to a server that is running within a VM hosted in the cloud with a public IP Address. When I attempt to connect from the LiveScanClient I get a "Failed to connect. Did you start the server?" message. The VM is running Windows Server 2008 and has Windows Firewall configured to allow any messages through to the LiveScanServer application.

Is connecting to an outside IP Address currently supported? Are there any specific requirements for running the Server application?

problems with server

I can't open the Show Live window if I use the released server binary, but if I run the server in Debug from Visual Studio, Show Live opens fine. How can I solve this?

File format saving problem

Hi, we have a question about saving files. When we choose ASCII_ply or binary_ply, the saved file format is .bin, but we want to save it as .ply. Perhaps we have not really understood the paper, so we have two questions: first, can the point cloud be saved after multi-view registration, and second, can the registered point cloud be saved in .ply format?

Unable to get client to run from source

I need to make some changes to the client. I've been able to compile it, and it actually seems to run just fine. Connects to the Kinect, and I can connect to the server and send data across.

However, if I run the calibrate command from the server, the client crashes in MarkerDetector::GetMarker at:

    cv::findContours(img3, contours, CV_RETR_CCOMP, CV_CHAIN_APPROX_NONE);

I get this exception:

    Exception thrown at 0x0F6A0770 (opencv_core248.dll) in LiveScanClientD.exe: 0xC0000005: Access violation reading location 0x0406A0DC.
    If there is a handler for this exception, the program may be safely continued.

Any thoughts?

If I just run the included client, everything works fine.
