
seoul's Introduction

Seoul

Seoul is an x86 Virtual Machine Monitor including device models and an instruction emulator. It is a standalone version of the Vancouver VMM included in NUL, developed mainly by Bernhard Kauer ([email protected]) at TU Dresden. Please direct questions regarding Seoul to Julian Stecklina <[email protected]>.

This repository contains a frontend for Seoul, implemented in unix/, that runs on Linux, FreeBSD and perhaps other UNIX-likes. This frontend is currently work in progress and not intended for anything except further development.

The goal is to keep this UNIX frontend merely a showcase of how to get Seoul running on your platform. No other part of Seoul invokes platform-specific functions on its own, so the rest should be reusable on most platforms without change.

This repository also contains drivers developed for NUL in host/.

Building the UNIX demo

The UNIX frontend builds with GCC 4.7 on Linux. It builds and runs on both 32-bit and 64-bit hosts, although it will only emulate IA-32. If you are ready to give it a go, execute scons in unix/ to start the build process. Help regarding build options can be obtained via scons -h.

We currently only support booting Multiboot compliant kernels. Execute seoul -h to get usage information.

Serial output is redirected to standard output. VGA is available to the VM, but currently not displayed to the user.

seoul's People

Contributors

blitz, hrniels, nils-tud, udosteinberg, wentasah


seoul's Issues

NOVA doesn't tick

Steps to reproduce:
./unix/seoul hypervisor-x86_32 spinner

What happens:
NOVA hangs after detecting that there is no root task.

What should happen:
NOVA keeps receiving timer IRQs.

Profiling stuff is not 64-bit ready

When building Seoul for the NRE x86_64 target, the profiling macros won't work, because the assembler code in include/service/profile.h is not ready for 64 bit. Are there plans to fix this?

Discussion: MP topology information

As far as I could see, at the moment the guest is not provided with any core topology information at all. Booting a bare NOVA shows that package, core and thread are always 0. For future use with SMP and hyperthreads, it might be good to think about whether this should be changed, and if so, how. For example, if a VM with 4 vCPUs is assigned to physical CPUs 0:0:0, 0:1:1, 0:3:0 and 0:3:1, what should the guest see as topology? And how should it be implemented? @tfc, what should happen to a migrated guest? Hotplugging? Another question is: could certain guest operating systems have a problem with the current situation?

Passing configuration to the VMM

Currently we parse strings to find out which modules to start with what parameters. I would rather see a simple data structure. This would remove the need for most of service/string.h, and each frontend (NRE, Genode, ...) could use its native configuration mechanism to configure Seoul.

Don't exit with abort()

Currently seoul exits with abort() in all cases. This fails to restore the terminal screen and is generally annoying.

Race condition in PIC model

During MP safety tests I stumbled across spurious interrupts which I think are caused by a race condition between triggering and EOI. The scenario leading to an error is the following:

    Trigger Thread                Receiver Thread
    ------------------------------------------------------
    Set IRR
    Prioritize -> yes
    INTR
                                  INTA
                                  DEASS
                                  EOI: Clear ISR
    Set IRR
    Prioritize -> yes
                                  EOI: Prioritize -> yes
                                  INTR
                                  INTA
                                  DEASS
                                  EOI
    INTR
                                  INTA -> spurious!

This effect can be emphasized by adding some artificial delay between the trigger thread's prioritize function and the actual INTR message. In a very basic standalone PIC model I created to simplify hunting this issue, I found that the optimization setting is also important (with -O0 I have to add the delay to see anything at all).

Now I have basically two questions:

  1. Does the above scenario make sense or am I missing something?
  2. What is an appropriate solution to it? I don't think this is an MP-related problem, because the same course of events could occur between different threads on the same CPU. It is just highly unlikely to happen.

@vmmon, @blitz, @udosteinberg, any suggestions as to how this can be solved elegantly are very much appreciated.

CLFLUSH unimplemented

NOVA boot triggers emulation failure:
c0001d59: 0f ae 3a clflush BYTE PTR [edx]

unimpl GRP case 0fae3a at 4009
decode fault 3
unimplemented at line 4010 eip c0001d59

We have to check whether Seoul advertises CLFLUSH support in CPUID (EAX=1, EDX bit 19).
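
If the emulator does not implement CLFLUSH, one option would be not to advertise the feature bit to the guest in the first place. A minimal sketch of that idea, assuming a hypothetical CPUID filter hook (this is not Seoul's actual CPUID code):

// Hypothetical sketch: hide the CLFLUSH feature (CPUID leaf 1, EDX bit 19)
// from the guest as long as the instruction emulator lacks CLFLUSH support.
enum { CPUID_1_EDX_CLFSH = 1u << 19 };

void filter_cpuid_leaf(unsigned leaf, unsigned &eax, unsigned &ebx, unsigned &ecx, unsigned &edx)
{
  if (leaf == 1)
    edx &= ~CPUID_1_EDX_CLFSH;   // do not advertise CLFLUSH until it is emulated
}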

Reorganization discussion

Thanks @blitz. Currently, for Genode, the following files are missing in Seoul which were included in NUL:

sys/desc.h
sys/hip.h
sys/utcb.h

base/lib/runtime/string.cc
base/service/hostsink.cc

@Nils-TUD: Does NRE require these files or does it reimplement them?

NOVA timer calibration is broken

NOVA uses a 10ms window to calibrate its clock: lapic.cc

In seoul, NOVA ends up with extremely high clock frequencies (12 GHz for me). A workaround is to change that code to delay(1000) (and adapt the freq_* calculations).

Seoul's wallclock (rdtsc) / timeout handling might need some love.

[mig2] TCP/IP communication for the VMM

To migrate a running VM over network, the VMM needs a TCP/IP interface.

NUL/Vancouver does not provide a usable and performant abstraction layer for TCP sockets/IP networking at this time. Therefore I implemented a socket abstraction based on LwIP and parts of the code which Alexander Boettcher wrote to integrate LwIP into NUL.

Interface

I used this as a chance to try my own (MPI-like) interpretation of how a socket should work, since my use case profits heavily from non-blocking send operations that interleave network transmission with other work.
My current implementation provides a TcpSocket class with connect(...), close(...), receive(...), etc. calls which behave like usual Berkeley-style sockets. The send(...) functions are special and have different semantics; I designed them to be MPI-like:

  • bool TcpSocket::send(void *buffer, unsigned bytes)

    Berkeley-style socket send(...) procedures copy the data from the user buffer into the send buffer and return. The user cannot be sure whether the data has been ACKed, and in case of a full send buffer the procedure blocks. The implementation at hand does not copy the data into a buffer. Instead, it enqueues the pointer and the associated number of bytes onto a send queue and blocks (using semaphores) until the last buffer enqueued in this queue has been sent by LwIP (which has its own thread) and ACKed by the receiver. This aims to have the same semantics as, for example, MPI_Send(...).

  • bool TcpSocket::send_nonblocking(void *buffer, unsigned bytes)

    This procedure does the same as the send(...) procedure above, but without blocking. It just enqueues the buffer for LwIP and returns immediately. It aims to have the same semantics as MPI_Isend(...).

  • bool TcpSocket::wait_complete()

    After enqueuing arbitrarily many buffers with send_nonblocking(...), the program can do other work. To synchronize with the receiver, wait_complete() blocks until the last byte is sent and ACKed by the receiver. It aims to have the same semantics as MPI_Wait(...).

To avoid spending unnecessary execution time in send() calls, LwIP is configured not to use the TCP_WRITE_FLAG_COPY flag. This is what makes synchronization via TcpSocket::wait_complete() so important.
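
To summarize the interface, here is a minimal sketch of the class described above. Only the calls mentioned in this issue are taken from the description; the parameter types beyond those shown are assumptions, and this is not the actual implementation.

// Sketch of the MPI-like TcpSocket interface described above (illustrative only).
class TcpSocket {
public:
  bool connect(unsigned long ip, unsigned short port); // signature assumed
  bool listen(unsigned short port);                    // signature assumed
  bool close();
  bool receive(void *buffer, unsigned bytes);

  // Enqueue the buffer (no copy) and block until LwIP has sent it and the
  // receiver has ACKed it -- same idea as MPI_Send().
  bool send(void *buffer, unsigned bytes);

  // Enqueue the buffer and return immediately -- same idea as MPI_Isend().
  bool send_nonblocking(void *buffer, unsigned bytes);

  // Block until all enqueued buffers are sent and ACKed -- same idea as MPI_Wait().
  bool wait_complete();
};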

Embedding into the VMM code

All the actual sending is embedded into the already existing do_network thread. Currently it handles network traffic for both host and guest.

The implementation is embedded using the _mb->bus_network, just like it is used for forwarding packets to VMs:

  • Currently there is one autonomous LwIP stack running in every VMM.
  • Another autonomous LwIP stack runs in sigma0 to obtain an IP address via DHCP for the whole machine and all other processes. Any VMM will ask Sigma0 via a new MessageNetwork type for the machine's IP address. (No TCP is involved here.)
  • Every VMM will see every packet that is sent over the _mb->bus_network bus. Packets which don't belong to this process (distinguished by the destination/source port) are dropped, roughly as in the sketch below. (This means: two user space processes using the same port == bad.)
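
A rough sketch of that per-VMM port filter (the frame parsing shown here is generic and is not Seoul's actual MessageNetwork handling code):

// Sketch: decide whether a TCP packet seen on bus_network belongs to this VMM.
static unsigned short read_be16(const unsigned char *p) { return (unsigned short)((p[0] << 8) | p[1]); }

bool packet_is_for_us(const unsigned char *frame, unsigned len, unsigned short our_port)
{
  if (len < 14 + 20 + 20) return false;               // Ethernet + IPv4 + TCP minimum
  if (read_be16(frame + 12) != 0x0800) return false;  // not IPv4
  const unsigned char *ip = frame + 14;
  if (ip[9] != 6) return false;                       // not TCP
  unsigned ihl = (ip[0] & 0xf) * 4;                   // IPv4 header length in bytes
  if (len < 14 + ihl + 4) return false;               // ports must be inside the frame
  const unsigned char *tcp = ip + ihl;
  return read_be16(tcp + 2) == our_port               // destination port matches, or
      || read_be16(tcp + 0) == our_port;              // source port matches (replies)
}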

Measurements have shown that this socket abstraction achieves the same throughput as GNU/Linux networking applications running inside a guest VM (> 8 MB/s on my test machine).

I implemented it all this way because my time as an intern is limited and I wanted a TCP/IP networking setup running quickly, so I can spend more time on the actual migration code. I think the best solution might be a proper TCP/IP implementation running within an external user space service. But I am not a microkernel userland architect. :-)

All my code depending on the socket abstraction can be easily ported to another socket library, if there is a new one in the future, since it only uses these few connect, close, listen, send and receive calls.

Virtual LPC Controller

Some guest VM drivers for direct-assigned (pass-through) PCI devices attempt to identify the hardware chipset type by looking at the LPC controller. Passing the physical LPC controller through to VMs is not recommended for security reasons, especially because multiple VMs may want to access it.

Seoul should implement a virtual LPC controller for each VM that has direct-assigned PCI devices and should configure the virtual LPC controller as follows:

  • PCI vendor and device ID should match that of the physical LPC controller
  • Device type should be ISA bridge
  • B:D:F should be 0:1f:0
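
A rough sketch of the identity such a virtual LPC controller would present, based on the three points above (the struct is illustrative; Seoul's device models use their own config-space infrastructure):

// Sketch: PCI identity of the virtual LPC controller at 0:1f:0.
struct VirtualLpcIdentity {
  unsigned short vendor_id;   // copied from the physical LPC controller (e.g. 0x8086)
  unsigned short device_id;   // copied from the physical LPC controller
  unsigned char  prog_if;     // 0x00
  unsigned char  subclass;    // 0x01 -> ISA bridge
  unsigned char  class_code;  // 0x06 -> bridge device
};

// The model answers PCI config cycles for bus 0, device 0x1f, function 0 only.
static const unsigned VLPC_BDF = (0u << 8) | (0x1fu << 3) | 0u;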

Fault error 6

I managed to trigger the following message from executor/instcache.h:

fault: 80000b0e old 0 error 6 cr2 aaaad000 at eip b71e9cf6 line 78 aaaad000

This happens on Genode with a custom device written by me. Thus, I am not sure if this is a bug in seoul at all.
I do not know how this message gets triggered. Error code 6 says that it was a write fault (from a user space process) because the page was not present, but what is encoded in 80000b0e?

[mig5] Device Serialization/Deserialization

Devices need to be serialized before sending them over Ethernet and deserialized after receiving them. Some devices need further configuration on the target machine (the VGA model device, for example). In the underlying implementation the MotherBoard class has an additional bus, namely the RestoreBus, which transports messages of type MessageRestore.

A MessageRestore contains the following fields: devtype, bytes, id1, id2, a write flag and a byte pointer space. The devtype field contains an enumeration value identifying which action shall be performed on which type of device on the bus when the message is received. The other fields are explained in the following subsections.

Using a separate bus for this makes it easy to add new devices to the migration process without touching other message protocols. This turned out to be quite convenient.

Making a Device Migrateable

The devices in question need to be attached to the motherboard’s restore bus. This is done in their constructors. Furthermore they need to implement a receive(MessageRestore &msg) method.

The receive method does the following:

  1. if (msg.devtype == MessageRestore::RESTORE_RESTART) Set internal processed bit to false, increment msg.bytes by the size in bytes this device will consume later, and return. (The bus is reset on all devices)

  2. if (msg.devtype != MessageRestore::RESTORE_<MY_TYPE> || processed == true) return false (The message is for a different device type, or this device has already been processed.)

  3. if (msg.write == true) { save internal state to msg.space and set msg.bytes to the number of bytes consumed for this. Optionally use the id1 and id2 fields for further internal identification needs. }
    else { Do the opposite: load device state from msg.space }

    Most devices just do a memcpy(...) over their member variables. If any custom reinitialization etc. needs to be done before/after serialization/deserialization, this can also be done here.

Devices only react via their receive function; the only sender of MessageRestore messages is always the migration process.
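
A minimal sketch of such a receive method for a hypothetical device (the device type RESTORE_EXAMPLE, the member layout and the attachment details are assumptions; real devices follow the steps listed above):

// Sketch: a hypothetical migrateable device on the restore bus.
class ExampleDevice {
  unsigned _reg0, _reg1;   // device state to migrate (assumed members)
  bool     _processed;     // "already handled in this round" flag

public:
  bool receive(MessageRestore &msg)
  {
    // 1. Restart: announce our size and reset the processed flag.
    if (msg.devtype == MessageRestore::RESTORE_RESTART) {
      _processed = false;
      msg.bytes += sizeof(_reg0) + sizeof(_reg1);
      return false;                      // let the restart reach every device
    }
    // 2. Not our type, or we already handled this round.
    if (msg.devtype != MessageRestore::RESTORE_EXAMPLE || _processed) return false;

    if (msg.write) {
      // 3a. Serialize: copy our state into msg.space.
      memcpy(msg.space, &_reg0, sizeof(_reg0));
      memcpy(msg.space + sizeof(_reg0), &_reg1, sizeof(_reg1));
      msg.bytes = sizeof(_reg0) + sizeof(_reg1);
    } else {
      // 3b. Deserialize: load our state back from msg.space.
      memcpy(&_reg0, msg.space, sizeof(_reg0));
      memcpy(&_reg1, msg.space + sizeof(_reg0), sizeof(_reg1));
    }
    _processed = true;
    return true;
  }
};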

Serializing Devices for Take-Off

The execution flow in control of the migration process first sends a MessageRestore::RESTORE_RESTART over the bus. Since this message passes through all serializable devices currently running in the underlying VMM, and every device adds its size to msg.bytes during the restart event, the process knows how many bytes have to be allocated to buffer them.

Then the serialization itself follows:
The process sends restore messages over the bus (with the earlyout parameter set to true). The first device which receives such a message saves its state into the buffer and returns true. The same device will never return true again (unless another MessageRestore::RESTORE_RESTART is sent to reset its processed bit).
This way the migration process has to know neither which devices nor how many of them it serializes: the send function of the bus returns false for the first time exactly when all devices have been serialized into the buffer.
The buffer, containing a dense queue of restore messages each followed by its serialized data, can now be sent over the network.
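
A sketch of that serialization loop on the migration side. The bus member name, the MessageRestore constructor and the RESTORE_EXAMPLE device type are assumptions; in reality the loop would be repeated per device type in serialization order.

// Sketch: serialize all devices of one type into a buffer for transmission.
MessageRestore restart(MessageRestore::RESTORE_RESTART);
restart.bytes = 0;
_mb->bus_restore.send(restart);                  // every device adds its size here

unsigned header_room = 32 * sizeof(MessageRestore);   // rough guess for this sketch
char *buffer = new char[restart.bytes + header_room];
char *pos = buffer;

for (;;) {
  MessageRestore save(MessageRestore::RESTORE_EXAMPLE,
                      pos + sizeof(MessageRestore), /*write=*/true);
  if (!_mb->bus_restore.send(save, /*earlyout=*/true))
    break;                                       // no device answered: all serialized
  memcpy(pos, &save, sizeof(save));              // store the restore message header...
  pos += sizeof(save) + save.bytes;              // ...followed by its serialized data
}
// 'buffer' now holds the dense queue described above and can go out over the network.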

Deserializing Devices after Arrival

After parsing the queue of restore messages and serialized device strings, the migration process pushes all the restore messages onto the restore bus. The devices will identify their restore message and deserialize themselves.

Special Restore Messages

While most enumeration values for msg.devtype identify a serialize/deserialize action for a specific device type (which is actually split into multiple specific types to allow implementing a serialization/deserialization order, should this ever become relevant), there are special enumeration values identifying custom actions. MessageRestore::RESTORE_RESTART is a first example of this. Currently there are two other values which identify actions like “return the video mode currently displayed” or “display a string on the user’s framebuffer”, which are relevant at different stages of the migration process.

Discussion: Dedicated I/O Thread

There was the idea of implementing a dedicated I/O thread as a replacement for the synchronization among vCPUs and host events. In order to do this, all requests of vCPUs (port I/O, memory access) and of the VMM (timeouts, IRQs, network, disk, ...) would have to be intercepted and placed in the I/O thread's work queue. This thread would then process and distribute the requests in a serial fashion.

My concept was to hook into the DBus template object and redirect message sending to an iothread callback function under certain conditions. Basically, there are a number of "claim" callbacks which determine whether some listener on the bus claims the message directly. If no claim is received, the message gets queued. In the I/O thread, I then wanted to extract the original messages from the queue objects and send them directly through the appropriate bus one by one. Although this mechanism turned out to work in an early experiment with only MessageTimer, MessageTimeout and MessageIOOut, I stumbled across what I deem a major design flaw in this concept.

The MessageTimer for allocating a timer slot requires an immediate response, so queueing this request would not work; I had to bypass the queue for this message to get things working. On second thought, every single request that gets a reply is in fact a problem. Take an I/O port read: the vCPU must not resume operation until the request is finished and the read data is present. When such a message is redirected and queued, one would not only have to preserve the reference to the original message and put the response there, but also, which is much more complicated, stall the vCPU until the request has been processed by the worker.

In my opinion, what it boils down to is this:

  • Replacing the global lock by an I/O thread would mean lots of dedicated waiting mechanisms for the device models or a very complicated system of signals/callbacks/whatever to make sure the vCPU continues only after the request is finished (if the response is needed)
  • At the moment I cannot see any sane concept for integrating such a full-blown I/O thread into the current structure. It might mean an entire redesign of the message passing concept...
  • I think I would rather have the I/O thread be responsible only for some posted-write optimizations, where appropriate. The rest of the synchronization should be done in the device models with as little locking as possible.
  • The initial concept of hijacking the DBus objects would then be such that the default is direct delivery in the context of the vCPU and only specific messages get redirected to the queue. The claim-callback would then become a queue-callback (a rough sketch follows at the end of this issue).
  • Maybe such an I/O thread is not a good idea after all. It would have been nice to prove this with numbers instead of thoughts, though...

Now I am posting this here because I wanted to discuss this topic with people who have deeper insight into how the current message passing structure works. I have already talked about some of it with @udosteinberg, but maybe @vmmon, @blitz or @alex-ab also have something to say about it. Perhaps I am just missing a critical point, totally going the wrong way, or whatnot :) I am open to any kind of feedback.
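
To make the queue-callback idea above a bit more concrete, here is a very rough sketch under heavy assumptions: the wrapper class, the IoThread type and its methods are invented for illustration and are not Seoul's actual DBus interface.

// Sketch: posted writes go to the I/O thread's queue, everything else is
// delivered synchronously in the vCPU context.
template <class M>
class QueueingBus {
  DBus<M>   &_bus;                     // the real bus
  IoThread  &_iothread;                // owns the work queue (invented type)
  bool     (*_queue_callback)(M &);    // decides what may be deferred

public:
  bool send(M &msg, bool earlyout = false)
  {
    // Messages without a reply (posted writes) may be queued for the I/O thread.
    if (_queue_callback && _queue_callback(msg)) {
      _iothread.enqueue_copy(msg);     // replayed later, one by one, in the I/O thread
      return true;
    }
    // Anything that needs a response is delivered directly.
    return _bus.send(msg, earlyout);
  }
};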

clang support

Investigate compilation failure with clang and fix it.

[MIG1] Migration Feature Merge - Preliminary Discussion

Hi there,

I am currently working on a new live migration feature for Vancouver as a student intern at Intel Labs in Brunswick. Udo Steinberg is my supervisor for this project. After he came back from FOSDEM, he told me that he talked to you NUL/Vancouver/Seoul people and there was agreement on the idea that I present how I integrated my migration code into the VMM code. Discussing my interfaces and integration here and now would make a later merge easier, so I can adapt/fix my code before it vanishes in the company's legal machinery and reappears for merging. :-)

At first I wrote a PDF document describing all interfaces and giving an overview of how they work together. But eventually we came up with the idea of having separate discussions about every distinct subsystem, so I will open five more threads here.

The current thread explains the more general parts to help get a big picture of the implemented migration solution. I present the general workflow from the perspective of the actual user as well as an overview of the migration protocol exchange between the two machines involved in a migration process.

The last section describes how migration is actually handled in the sender and the receiver. I reference the other subsystem threads at specific points of the description to help explain the design decisions.

General Workflow

The Live Migration Process - User Experience

  1. The user runs VMs with Vancouver as usual.
  2. At an arbitrary point in time the user (or maybe some program) decides that a particular VM shall be migrated to another physical computer running NUL. Currently this event can be raised by typing $ migrate <destination IP> in the locally running guest VM.
  3. Depending on network speed and guest size, the guest VM is running on the other physical machine some seconds or minutes later. Ideally this happens without pausing the VM's execution flow very often during the migration.

More details about the VMM’s execution flow during migration

  1. Migration to another machine is initiated, however this happens…
  2. The VMM connects over ethernet to a listener process running on the destination machine. This listener process receives the configuration of the source VMM and tries to start a new VMM instance with a matching configuration to function as the destination VMM. The new VMM instance is additionally configured to listen on a new TCP port rather than booting guest code. If this procedure succeeds, the listener tells the source VMM the new destination VMM’s TCP port.
  3. The source VMM now connects to the destination VMM and sends all relevant guest state data.
  4. To make the migration process an actual live migration, the guest VM which is migrated has to run during transfer of its machine state. To make this possible, the VMM has to track all guest memory write accesses.
  5. The transfer process is based on rounds. During every round the guest memory pages which were changed in the preceding round are sent away. The guest VM is frozen for a very short time at the check point between two rounds.
  6. The VMM will eventually decide to freeze the guest system for a very last transfer round. It will send all remaining guest pages and guest device states as well as VCPU registers.
  7. The destination VMM can now continue the freshly migrated guest’s VCPU execution flow.
  8. The choice might be up to the user whether to kill the source VM or continue its execution flow (which would make the live migration feature a clone feature).

The Migration Process

Send Part

  1. Migration is initiated.
  2. A new execution context (thread) is started, which allocates an instance of the migration code class and calls its main send procedure.
  3. Perform port-negotiation procedure with destination (Thread #13)
  4. Connect to destination VMM’s port and send header information.
  5. [ VCPU freeze ] (Thread #10)
  6. Make a list of all memregions to send
  7. [ Start page tracking ] (Thread #11)
  8. [ VCPU release ]
  9. Send the page index with checksums and the corresponding memory pages from the list
  10. [ VCPU freeze ]
  11. [ Stop page tracking (and generate new list) ]
  12. if (network transfer rate > page dirtying rate) goto 7
  13. Send the last round of dirty pages
  14. Send device states (Thread #12)
  15. Migration complete. Either kill the guest or let it execute again.

Receive Part

Since a VMM with exactly the same configuration string as the source VMM is started as the destination end point, receiving all serialized data, writing it back into guest memory and pushing restore messages onto the restore bus is trivial. Therefore this section explains what is done to prepare a freshly started VMM instance to listen on the Ethernet for data arrival.
The listener service (Thread #13) starts a VMM instance with an additional command line argument for Vancouver: retrieve_guest:<port>. This argument instructs the VMM to set itself into restore mode and then continue initialization as usual.
After model initialization and the start of VCPU execution, the VCPU will eventually fault on its first execution step. The execution flow of the VMM then arrives in the VirtualBiosMultiboot class, which is asked to initialize register state and load binary ELF modules into guest RAM. Just before it does this, a MessageHostOp::OP_RESTORE_GUEST is sent to the Vancouver main class. If Vancouver recognizes that it is currently executing in restore mode, it will call the migration class to initiate the network connection and restore the guest memory as well as the device states. In this case, VirtualBiosMultiboot skips its own initialization procedure and lets the VCPU simply return to the guest execution flow, which is already completely restored by then (see the sketch below).
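
A condensed sketch of that hook. The entry point shown, the helper name and the way the result is reported are assumptions for illustration; only MessageHostOp::OP_RESTORE_GUEST and the overall control flow come from the description above.

// Sketch: skip normal multiboot initialization when running as a migration target.
bool VirtualBiosMultiboot::receive(MessageBios &msg)      // entry point assumed
{
  // Ask the Vancouver main class whether we are in restore mode.
  MessageHostOp restore_op(MessageHostOp::OP_RESTORE_GUEST, 0UL);  // ctor assumed
  if (_mb->bus_hostop.send(restore_op) && restore_op.value) {
    // Guest memory, device state and VCPU registers were restored over the
    // network; just let the VCPU return to the already-restored guest.
    return true;
  }
  // Otherwise initialize registers and load the ELF modules as usual.
  return init_registers_and_load_modules(msg);             // helper name assumed
}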

[mig6] Listener Service (/Flight Control)

The task of the listener service is to listen on a fixed port and start VMMs on request which will act as migration destination end points. Currently the protocol works as follows (with LST, SRC and DST being the listener service, the source VMM and the destination VMM):

  1. SRC initially connects to LST and sends SRC’s complete command line string.
  2. LST cuts off all binaries from this string and adds a retrieve_guest:<port> argument for the new Vancouver instance. Then it tries to start DST with this command line string. (port is a running number)
  3. If the start of DST was successful, LST propagates DST’s listen port to SRC.
  4. SRC can now connect to DST directly and start the migration procedure.

It turned out to be very handy to have a listener service, because this way a running machine is always ready for migration requests. Furthermore, it is easy to start VMM instances with the same configuration string as the source VMM.

Cutting the binaries out of a configuration string

The listener service receives a string like the following:

sigma0::mem:64 sigma0::dma name::/s0/log name::/s0/timer name::/s0/fs/rom ||
rom:///nul/vancouver.nul.gz PC_PS2 ||
rom:///tools/munich ||
rom:///linux/bzImage clocksource=tsc ||
rom:///linux/initrd1.lzma

…and transforms it to the following…

sigma0::mem:64 sigma0::dma name::/s0/log name::/s0/timer name::/s0/fs/rom ||
rom:///nul/vancouver.nul.gz PC_PS2 tsc_offset rdtsc_exit retrieve_guest:40000

Obtaining the source VMM's configuration string

The source VMM has to obtain its own configuration string before being able to send it to the listener service.

Because only sigma0 knows the full configuration string of each running application, I gave MessageConsole some additional semantics: if the new read bit in a message of type MessageConsole::TYPE_START is set, sigma0 will copy the desired configuration string (of the requesting application) into the memory pointed to by cmdline.

The application provides the length of the given string memory area in the mem field. If this number is not sufficient for storing the whole configuration, sigma0 returns the message with the first byte of the configuration string set to '\0' and the actual configuration length in the mem field. The application can then retry the request with a sufficiently large buffer allocated.
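
A sketch of that request from the application's side. The constructor is the one shown further below for launching; the read and mem members and the cmdline buffer are taken from the description, while the retry handling is simplified.

// Sketch: ask sigma0 for our own configuration string via MessageConsole.
unsigned size = 256;
char *cmdline = new char[size];
MessageConsole msg(MessageConsole::TYPE_START, cmdline);
msg.read = true;                          // "copy my configuration back" semantics
msg.mem  = size;
Sigma0Base::console(msg);

if (cmdline[0] == '\0') {                 // buffer too small: sigma0 reported the
  size = msg.mem;                         // required length in the mem field
  delete [] cmdline;
  cmdline = new char[size];
  MessageConsole retry(MessageConsole::TYPE_START, cmdline);
  retry.read = true;
  retry.mem  = size;
  Sigma0Base::console(retry);
}
// cmdline now holds the full configuration string of this VMM.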

Launching the new VMM

The listener service has to start the destination VMM from a configuration string. Since only sigma0 can start applications, I added another interface for this. The same semantics as for obtaining the configuration string are used with the message type MessageConsole::TYPE_START, but this time the read bit is set to false and mem contains the value 0xdeadbeef. Sigma0 will try to launch the new application, reply accordingly in the res field of the message and return.

Launching an application this way will typically look like this:

MessageConsole msg(MessageConsole::TYPE_START, configuration_string);
unsigned res = Sigma0Base::console(msg);

[mig3] Freezing VCPUs

To freeze VCPUs, new message types were added, namely two: CpuMessage::TYPE_FREEZE and MessageHostOp::OP_VCPU_FREEZE. The workflow is the following:

  1. The application emits one CpuMessage::TYPE_FREEZE for every VCPU. (At the current development state this is just one VCPU).
  2. The VirtualCpu-instance receiving this message sets an internal blocked state variable to true and emits a MessageHostOp::OP_VCPU_FREEZE to the Vancouver main class.
  3. Vancouver receives this message and calls nova_recall(...) to trigger a VCPU-recall.
  4. The corresponding execution context arrives in the do_recall handler and calls a migration class method iff its blocked state variable is set to true. This method contains a blocking vcpu_sem.down() semaphore call.

This way, all running VCPUs can be caught using the following code:

CpuMessage cpumsg(CpuMessage::TYPE_FREEZE);
for (VCpu *vcpu = _mb->last_vcpu; vcpu; vcpu = vcpu->get_last())
  vcpu->executor.send(cpumsg, true);

To release frozen VCPUs, the application only needs to call vcpu_sem.up().
I chose this kind of interface because the do_recall handler procedure provides a convenient TLS pointer to the VCpu object representing the VCPU in question. This makes it easy to have the VCPU block on a semaphore allocated in the migration class code, and also to hand over a pointer to the VCPU's UTCB containing the register state. The first message lets the VCPU know that it has to stop when it arrives at the recall handler. The second message isolates the NOVA-specific nova_recall() call within vancouver.cc from the rest of the code.

Use case of MessageIrqNotify

The PIC and I/O-APIC models feature a notification mechanism for acked IRQs. If I understand it correctly, the PIT model uses this notification to rearm the timer. Is there a specific reasoning behind that decision? Why not just rearm on every MessageTimeout? @vmmon, please enlighten me ;)

VGA output

We only have a dummy VGA output. Implement a real VGA or at least something to see text mode.

[mig4] Page Tracking after Guest VM Write Accesses

To track which memory pages are changed by the guest VM, the VMM has to define both the beginning and the end of a tracking period. I implemented another type of MessageHostOp operation to provide an interface for this:

Vancouver now accepts messages of type MessageHostOp::OP_TRACK_PAGES. This seems to be the natural place for it, since the memory mapping procedure map_memory_helper in vancouver.cc needed to be extended to support write-access tracking. The new message type instructs Vancouver to switch between the usual guest memory mapping mode and page tracking mode.

Workflow

Starting a Tracking Interval

To start a tracking interval, the application emits a MessageHostOp of the mentioned type with the msg.value field set to 1. Vancouver will then activate its tracking mechanism and return the message after adding the actual number of bytes needed to store a bitmap containing the desired information. This bitmap should be allocated before the end of the tracking period.

Stopping the Tracking Interval

The tracking interval is stopped by emitting a MessageHostOp with msg.value set to 0 and msg.dirtymap pointing to the destination buffer for the bitmap. Vancouver will stop the tracking period (and the tracking mechanism altogether), copy its bitmap to the destination buffer and also write the number of dirty pages to the msg.dirtypages field.
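
Putting both steps together, one round of page tracking in the migration code might look roughly like this. The MessageHostOp constructor and the assumption that the bitmap size is returned in the value field are mine; the value, dirtymap and dirtypages fields come from the description above.

// Sketch: one page-tracking round as described above.
MessageHostOp start(MessageHostOp::OP_TRACK_PAGES, 1UL);   // value = 1: start tracking
_mb->bus_hostop.send(start);
char *dirtymap = new char[start.value];    // bytes needed for the dirty-page bitmap

// ... the guest keeps running, memory is transferred, pages get dirtied ...

MessageHostOp stop(MessageHostOp::OP_TRACK_PAGES, 0UL);    // value = 0: stop tracking
stop.dirtymap = dirtymap;                  // destination buffer for the bitmap
_mb->bus_hostop.send(stop);
unsigned long dirty = stop.dirtypages;     // number of pages to send in the next round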
