
ai2thor's Introduction

A Near Photo-Realistic Interactable Framework for Embodied AI Agents


🏡 Environments

iTHOR: A high-level interaction framework that facilitates research in embodied common-sense reasoning.
ManipulaTHOR: A mid-level interaction framework that facilitates visual manipulation of objects using a robotic arm.
RoboTHOR: A framework that facilitates Sim2Real research with a collection of simulated scene counterparts in the physical world.

🌍 Features

🏡 Scenes. 200+ custom-built, high-quality scenes. The scenes can be explored on our demo page. We are working on rapidly expanding the number of available scenes and the domain randomization within each scene.

🪑 Objects. 2600+ custom-designed household objects across 100+ object types. Each object is heavily annotated, which allows for near-realistic physics interaction.

🤖 Agent Types. Multi-agent support, a custom-built LoCoBot agent, a Kinova 3-inspired robotic manipulation agent, and a drone agent.

🦾 Actions. 200+ actions that facilitate research in a wide range of interaction- and navigation-based embodied AI tasks.

🖼 Images. First-class support for many image modalities and camera adjustments. Modalities include ego-centric RGB images, instance segmentation, semantic segmentation, depth frames, normals frames, top-down frames, orthographic projections, and third-person camera frames. Users can also easily change camera properties, such as the size of the images and the field of view.

🗺 Metadata. After each step in the environment, a large amount of sensory data about the state of the environment is available. This information can be used to build highly complex custom reward functions.
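As a sketch of what a metadata-driven reward might look like: the key names below ('objects', 'objectType', 'visible', 'isOpen') follow the iTHOR object-metadata schema, but the task itself is invented for illustration, and the dict at the bottom is a stand-in for `event.metadata` so the example runs without the simulator.

```python
# Hypothetical reward function driven purely by step metadata. The keys used
# ('objects', 'objectType', 'visible', 'isOpen') follow the iTHOR object
# metadata schema; the task is made up for illustration.

def fridge_opened_reward(metadata):
    """+1.0 once a visible Fridge is open, otherwise a small step penalty."""
    for obj in metadata.get("objects", []):
        if obj["objectType"] == "Fridge" and obj["visible"] and obj.get("isOpen"):
            return 1.0
    return -0.01

# Stand-in for the event.metadata dict returned by controller.step(...):
metadata = {"objects": [{"objectType": "Fridge", "visible": True, "isOpen": True}]}
print(fridge_opened_reward(metadata))  # 1.0
```

In a real training loop, the dict would come from `event.metadata` after each `controller.step(...)` call.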

📰 Latest Announcements

Date Announcement
5/2021 RandomizeMaterials is now supported! It enables a massive amount of realistic-looking domain randomization within each scene. Try it out on the demo!
4/2021 We are excited to release ManipulaTHOR, an environment within the AI2-THOR framework that facilitates visual manipulation of objects using a robotic arm. Please see the full 3.0.0 release notes here.
4/2021 RandomizeLighting is now supported! It includes many tunable parameters to allow for vast control over its effects. Try it out on the demo!

2/2021 We are excited to host the AI2-THOR Rearrangement Challenge, RoboTHOR ObjectNav Challenge, and ALFRED Challenge, held in conjunction with the Embodied AI Workshop at CVPR 2021.
2/2021 AI2-THOR v2.7.0 announces several massive speedups to AI2-THOR! Read more about it here.
6/2020 We've released 🐳 AI2-THOR Docker, a mini-framework to simplify running AI2-THOR in Docker.
4/2020 Version 2.4.0 of the framework is here. All sim objects that aren't explicitly part of the environmental structure are now movable with physics interactions. New object types and many new actions have been added. Please see the full 2.4.0 release notes here.
2/2020 AI2-THOR now includes two frameworks: iTHOR and RoboTHOR. iTHOR includes interactive objects and scenes and RoboTHOR consists of simulated scenes and their corresponding real world counterparts.
9/2019 Version 2.1.0 of the framework has been released. New object types and new Initialization actions have been added. Segmentation image generation has been improved in all scenes.
6/2019 Version 2.0 update of the AI2-THOR framework is now live! We have over quadrupled our action and object states, adding new actions that allow visually distinct state changes such as broken screens on electronics, shattered windows, breakable dishware, liquid fillable containers, cleanable dishware, messy and made beds and more! Along with these new state changes, objects have more physical properties like Temperature, Mass, and Salient Materials that are all reported back in object metadata. To combine all of these new properties and actions, new context sensitive interactions can now automatically change object states. This includes interactions like placing a dirty bowl under running sink water to clean it, placing a mug in a coffee machine to automatically fill it with coffee, putting out a lit candle by placing it in water, or placing an object over an active stove burner or in the fridge to change its temperature. Please see the full 2.0 release notes here to view details on all the changes and new features.

💻 Installation

With Google Colab

AI2-THOR Colab can be used to run AI2-THOR freely in the cloud with Google Colab. Running AI2-THOR in Google Colab makes it extremely easy to explore functionality without having to set AI2-THOR up locally.

With pip

pip install ai2thor

With conda

conda install -c conda-forge ai2thor

With Docker

🐳 AI2-THOR Docker can be used, which adds the configuration for running an X server to be used by Unity 3D to render scenes.

Minimal Example

Once you've installed AI2-THOR, you can verify that everything is working correctly by running the following minimal example:

from ai2thor.controller import Controller

controller = Controller(scene="FloorPlan10")   # starts Unity and loads a kitchen scene
event = controller.step(action="RotateRight")  # each step returns an Event
metadata = event.metadata                      # dict describing the environment state
print(event, metadata.keys())

Requirements

Component Requirement
OS Mac OS X 10.9+, Ubuntu 14.04+
Graphics Card DX9 (shader model 3.0) or DX11 with feature level 9.3 capabilities.
CPU SSE2 instruction set support.
Python Versions 3.5+
Linux X server with GLX module enabled

💬 Support

Questions. If you have any questions on AI2-THOR, please ask them on our GitHub Discussions Page.

Issues. If you encounter any issues while using AI2-THOR, please open an Issue on GitHub.

🏫 Learn more

Section Description
Demo Interact and play with AI2-THOR live in the browser.
iTHOR Documentation Documentation for the iTHOR environment.
ManipulaTHOR Documentation Documentation for the ManipulaTHOR environment.
RoboTHOR Documentation Documentation for the RoboTHOR environment.
AI2-THOR Colab A way to run AI2-THOR freely on the cloud using Google Colab.
AllenAct An Embodied AI framework built at AI2 that provides first-class support for AI2-THOR.
AI2-THOR Unity Development A (sparse) collection of notes that may be useful if editing on the AI2-THOR backend.
AI2-THOR WebGL Development Documentation on packaging AI2-THOR for the web, which might be useful for annotation based tasks.

📢 Citation

If you use AI2-THOR or iTHOR scenes, please cite the original AI2-THOR paper:

@article{ai2thor,
  author={Eric Kolve and Roozbeh Mottaghi and Winson Han and
          Eli VanderBilt and Luca Weihs and Alvaro Herrasti and
          Daniel Gordon and Yuke Zhu and Abhinav Gupta and
          Ali Farhadi},
  title={{AI2-THOR: An Interactive 3D Environment for Visual AI}},
  journal={arXiv},
  year={2017}
}

If you use 🏘️ ProcTHOR or procedurally generated scenes, please cite the following paper:

@inproceedings{procthor,
  author={Matt Deitke and Eli VanderBilt and Alvaro Herrasti and
          Luca Weihs and Jordi Salvador and Kiana Ehsani and
          Winson Han and Eric Kolve and Ali Farhadi and
          Aniruddha Kembhavi and Roozbeh Mottaghi},
  title={{ProcTHOR: Large-Scale Embodied AI Using Procedural Generation}},
  booktitle={NeurIPS},
  year={2022},
  note={Outstanding Paper Award}
}

If you use the ManipulaTHOR agent, please cite the following paper:

@inproceedings{manipulathor,
  title={{ManipulaTHOR: A Framework for Visual Object Manipulation}},
  author={Kiana Ehsani and Winson Han and Alvaro Herrasti and
          Eli VanderBilt and Luca Weihs and Eric Kolve and
          Aniruddha Kembhavi and Roozbeh Mottaghi},
  booktitle={CVPR},
  year={2021}
}

If you use RoboTHOR scenes, please cite the following paper:

@inproceedings{robothor,
  author={Matt Deitke and Winson Han and Alvaro Herrasti and
          Aniruddha Kembhavi and Eric Kolve and Roozbeh Mottaghi and
          Jordi Salvador and Dustin Schwenk and Eli VanderBilt and
          Matthew Wallingford and Luca Weihs and Mark Yatskar and
          Ali Farhadi},
  title={{RoboTHOR: An Open Simulation-to-Real Embodied AI Platform}},
  booktitle={CVPR},
  year={2020}
}

👋 Our Team

AI2-THOR is an open-source project built by the PRIOR team at the Allen Institute for AI (AI2). AI2 is a non-profit institute with the mission to contribute to humanity through high-impact AI research and engineering.


ai2thor's People

Contributors

alvarohg, anikem, apoorvkh, bzinberg, d-val, danielgordon10, drschwenk, ehsanik, elimvb, guhur, jiasenlu, jonborchardt, jsanmiya, kuohaozeng, lucaweihs, mattdeitke, model-patching, mohitshridhar, raejeong, roozbehm, synapticarbors, winthos, wozsax


ai2thor's Issues

Draw 2D map

Hi, I was wondering if there's a way to plot a 2D map with the agent navigating in it? Thanks
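One lightweight approach is to rasterize the agent's reachable positions onto a character grid. In AI2-THOR the positions would come from the GetReachablePositions action; the hard-coded list below is a stand-in so this sketch runs without the simulator.

```python
# Sketch: rasterize reachable (x, z) positions into a text map, 'A' = agent.
# With the simulator, `positions` would come from GetReachablePositions; the
# hard-coded list below is a stand-in so the example runs anywhere.

def ascii_map(positions, agent, grid=0.25):
    cells = {(round(p["x"] / grid), round(p["z"] / grid)) for p in positions}
    ax, az = round(agent["x"] / grid), round(agent["z"] / grid)
    xs = [x for x, _ in cells]
    zs = [z for _, z in cells]
    rows = []
    for z in range(max(zs), min(zs) - 1, -1):  # draw +z toward the top
        rows.append("".join(
            "A" if (x, z) == (ax, az) else "." if (x, z) in cells else " "
            for x in range(min(xs), max(xs) + 1)))
    return "\n".join(rows)

positions = [{"x": x * 0.25, "z": z * 0.25} for x in range(4) for z in range(3)]
print(ascii_map(positions, {"x": 0.0, "z": 0.0}))  # bottom-left cell shows 'A'
```

Swapping the string grid for a matplotlib scatter of the same (x, z) pairs gives a graphical version of the same map.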

Typo in example complex actions

Hello

I just found a typo in this example. Instead of

event = controller.step(dict(
    action='PutObject',
    receptacleObjectid=receptacle_object_id,
    objectId=mug_object_id), raise_for_failure=True)

it should be

event = controller.step(dict(
    action='PutObject',
    receptacleObjectId=receptacle_object_id,
    objectId=mug_object_id), raise_for_failure=True)

the id in receptacleObjectId should be uppercase.

In addition, I could not run this example; maybe the microwave is not visible at the final position?

'Digest mismatch' with controller.start()

Ubuntu16.04
Following the installation guide, I installed ai2thor with pip; then a 'Digest mismatch' exception was raised in controller.start().

thor-201712211442-Linux64: [|||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| 100% 1.4 MiB/s] of 0.0MB
Exception Traceback (most recent call last)
in ()
----> 1 controller.start()
/usr/local/lib/python2.7/dist-packages/ai2thor/controller.pyc in start(self, port, start_unity, player_screen_width, player_screen_height, x_display)
513 env['DISPLAY'] = ':' + x_display
514
--> 515 self.download_binary()
516
517 self.server = ai2thor.server.Server(
/usr/local/lib/python2.7/dist-packages/ai2thor/controller.pyc in download_binary(self)
487 url,
488 self.build_name(),
--> 489 BUILDS[platform.system()]['sha256'])
490
491 z = zipfile.ZipFile(io.BytesIO(zip_data))
/usr/local/lib/python2.7/dist-packages/ai2thor/downloader.pyc in download(url, build_name, sha256_digest)
25 pbar.finish()
26 if m.hexdigest() != sha256_digest:
---> 27 raise Exception("Digest mismatch for url %s" % url)
28
29 return b''.join(file_data)
Exception: Digest mismatch for url http://s3-us-west-2.amazonaws.com/ai2-thor/builds/thor-201712211442-Linux64.

creating new object in ai2thor

Hi, I want to create a new object in ai2thor and modify some objects' properties, but I am not very familiar with Unity. I am catching up right now, but how do I create/modify objects?

Applying a learning agent

Hello,
Thank you for sharing this excellent simulation tool. I have used Unity before so it feels familiar. I noticed that in the machine learning code referenced here: https://github.com/caomw/icra2017-visual-navigation they used hdf5 dumps of the simulated scenes in Thor. In the hdf5 file they can detect when collisions occur. I did not see a method for detecting collisions in the controller code for Thor. Am I missing something?
Thanks again for a very interesting simulation environment. Are you planning on releasing a sample learning agent with the environment?

How can I collect expert trajectories to run imitation learning?

I want to move my agent inside the environment more like a human and collect visual frames and the actions in each frame. Let's say a coffee-making task in a kitchen, where the agent interacts with the fridge, coffee machine, sink, and mug. How can I collect this data?

Any way to add sound?

Hello,

Thanks for a nice simulator. I am wondering if there is any way/plan to add sound to the simulation?

Disable rendering

How can we disable the rendering of the game during the training phase?

Open ai2thor with unity in linux

Hi, I was trying to open ai2thor with Unity on Linux (unfortunately I don't have macOS). When I run the command invoke local-build, it throws an error and I was not able to get it to work. I am wondering if anyone happens to know the solution to this?

Physics broken in room 223

Hi, some of the walls are not physical entities in FloorPlan223, and as a result the agent can move freely through the wall without any obstacle. This happens when the agent keeps moving ahead right after default initialization.

Minimum code to reproduce the error (on my ubuntu machine):

import ai2thor.controller

GRID_SIZE=0.5

controller = ai2thor.controller.Controller()
controller.start()
controller.reset('FloorPlan223')
controller.step(dict(action='Initialize', gridSize=GRID_SIZE))
for i in range(10):
    controller.step(dict(action='MoveAhead'))

reset scene

Hi, I'm trying to reset the scene so that object positions are randomized at the start of each episode. I'm using controller.random_initialize(random_seed, randomize_open=True, unique_object_types=False, exclude_receptacle_object_pairs=[]) and then controller.initialize_scene. Somehow this always fails and shuts down the program. Help would be appreciated.

Is there a way to access actions besides open, close, pick up, put down on actionable objects?

Hi Eric,

I was able to quickly get AI2Thor up and running and write some code that lets me explore the worlds via keyboard presses. Thank you for the simple tutorials and easy install!

I have not been able to figure out how to use the diverse set of actions listed here: object with new actions. Do I need to clone a specific branch of the repo, or is this a complex change I would need to make using Unity? If so, how involved is it? I'm a robotics researcher, and having the diverse set of actions would really help in my experiments.

Thanks,
Angel

Controller gets stuck with 'No handlers could be found for logger "werkzeug"'

I am using THOR with multiprocessing.
After reset and initialization were done and it was about to step, the controller got stuck and read

No handlers could be found for logger "werkzeug"

When I press Ctrl+C, it shows the info below

Traceback (most recent call last):
File "/home/elizabeth/ai2thor.py", line 90, in forward_action
controller.step(dict(action='MoveAhead', moveMagnitude=move_mag, snapToGrid=False))
File "/usr/local/lib/python2.7/dist-packages/ai2thor/controller.py", line 537, in step
self.last_event = queue_get(self.request_queue)
File "/usr/local/lib/python2.7/dist-packages/ai2thor/server.py", line 39, in queue_get
res = que.get(block=True, timeout=0.5)
File "/usr/lib/python2.7/Queue.py", line 177, in get
self.last_event = queue_get(self.request_queue)
File "/usr/local/lib/python2.7/dist-packages/ai2thor/server.py", line 39, in queue_get
self.not_empty.wait(remaining)
File "/usr/lib/python2.7/threading.py", line 359, in wait
res = que.get(block=True, timeout=0.5)
File "/usr/lib/python2.7/Queue.py", line 177, in get
self.not_empty.wait(remaining)
File "/usr/lib/python2.7/threading.py", line 359, in wait
_sleep(delay)
KeyboardInterrupt

How to solve this? Many thanks!!

How can I change the camera's distance threshold?

Hello
An object is said to be visible if it is in camera view and within a threshold of distance.
Default value is 1 meter.
I want to change the threshold of distance.
How can I change the camera's distance threshold?
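In recent AI2-THOR versions the cutoff is exposed as the visibilityDistance initialization parameter. Independent of that, the distance half of the visibility test can be re-checked from metadata alone, as in this standalone sketch (the position dicts mirror the metadata's {'x', 'y', 'z'} layout; the 1.5 m default here is an arbitrary choice):

```python
import math

# Standalone sketch of the distance half of the visibility test. Position
# dicts use the same {'x', 'y', 'z'} layout as AI2-THOR metadata; the 1.5 m
# default threshold is an arbitrary choice for illustration.

def within_distance(agent_pos, obj_pos, threshold=1.5):
    d = math.dist((agent_pos["x"], agent_pos["y"], agent_pos["z"]),
                  (obj_pos["x"], obj_pos["y"], obj_pos["z"]))
    return d <= threshold

print(within_distance({"x": 0, "y": 0, "z": 0}, {"x": 1.0, "y": 0, "z": 0}))  # True
```

(`math.dist` requires Python 3.8+; on older versions, compute the Euclidean distance by hand.)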

Is there a way to change agent's discrete action to continuous action?

First, thank you all for providing such a splendid simulation environment!

I realized that agent actions like MoveAhead, MoveBack, and MoveRight are discrete, so at each step the agent can only move a fixed distance (e.g., gridSize) or turn by a fixed angle, e.g., step(dict(action='MoveLeft')).
I find specific definition of MoveLeft in unity/Assets/Scripts/DiscreteRemoteFPSAgentController.cs
public void MoveLeft(ServerAction action) { moveCharacter (action, 270); }
So I wonder if there is a way to change these fixed discrete actions into continuous? And how? Thanks.

adding more object

Hi,

I would like to add more objects into the environment, so how do I do that? (generating unique id for simobj, etc.)

Best

build unity executable in ubuntu 16.04

Hi,
I am trying to build the Unity executable for ai2thor on Ubuntu 16.04, and I am confused about what I should modify to get invoke local-build to work.

Best

unable to install ai2thor

Hello,

I am using Ubuntu 14.04 and Python 2.7. Whenever I try 'sudo pip install ai2thor', I get the following error:

Collecting ai2thor
/usr/local/lib/python2.7/dist-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:318: SNIMissingWarning: An HTTPS request has been made, but the SNI (Subject Name Indication) extension to TLS is not available on this platform. This may cause the server to present an incorrect TLS certificate, which can cause validation failures. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/security.html#snimissingwarning.
/usr/local/lib/python2.7/dist-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:122: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/security.html#insecureplatformwarning.
Could not fetch URL https://pypi.python.org/simple/ai2thor/: There was a problem confirming the ssl certificate: [Errno 1] _ssl.c:510: error:14077419:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert access denied - skipping
Could not find a version that satisfies the requirement ai2thor (from versions: )
No matching distribution found for ai2thor

How can i solve it?

Running on gcloud

Hi, I'm trying to run on the google cloud compute engine. I have x-server running remotely by Xvfb :0.0&. When I try to run

import ai2thor.controller
c = ai2thor.controller.Controller()
c.start()

I get the error message

File "/usr/local/lib/python3.6/dist-packages/ai2thor/controller.py", line 572, in _start_unity_thread
    raise Exception("command: %s exited with %s" % (command, returncode))
Exception: command: ['/home/.ai2thor/releases/thor-201804271353-Linux64/thor-201804271353-Linux64', '-screen-width', '300', '-screen-height', '300'] exited with 1

Pivots are visible through doors

All of the pivot locations should be checked (probably by hand) to make sure this never happens. Sometimes it seems like the coordinate systems are flipped in weird ways (x and y are reversed for example) and it makes the pivot locations outside of the object entirely. One example is Floorplan13 the 3 cabinets on the wall starting with Cabinet|-03.04|+00.47|+05.03

connecting different floor

Hi,

I would like to connect different floor so the agent can walk and bring items to different scenes, is there a way to do that?

Best

Bug for receptacle containers?

So in FloorPlan1 there is a Bread on the table in the middle of the room. However, that bread doesn't show up in any of the receptacles of the scene.
Why is the table not considered a receptacle here? Or is this a bug? I've seen this for many different objects not being contained within a receptacle.

Thanks

To reproduce:
import ai2thor.controller
c = ai2thor.controller.Controller()
c.docker_enabled = True
c.start()
c.reset('FloorPlan1')
e = c.step(dict(action='Initialize', gridSize=0.25))
receptacles = [x for x in e.metadata['objects'] if x['receptacle'] == True]
for r in receptacles:
    print(r['objectId'], r['receptacleObjectIds'])

Duplicated grid_points returned by `search_all_closed` method

Hi,

I tried to generate the grid points on FloorPlan28 and got duplicated points. Here is my code.

import ai2thor.controller

controller = ai2thor.controller.BFSController()
controller.start()
controller.search_all_closed('FloorPlan28')

Is this a bug or is it acceptable?

How to change fieldOfView

I realized the field of view is fixed and the interface does not support changing it. Is there any specific reason for this limitation, or is it just not exposed?

How can I get the bounding box of an object in the 2D frame?

Hi. Thanks for a nice simulator.

I want to do object detection on ai2thor, like the object detection in the IQA paper.
That is, I want a dataset consisting of 2D images and the labeled objects (category, bounding box) in each image.
How can I get the bounding box of an object in the 2D frame to build such a dataset?

And I have another question.
I get a 2D image (W, H, C) from event.frame and try to save it using scipy, PIL, or opencv,
but these libraries expect an (H, W, C) array when saving an image.
When I try to transpose (W, H, C) to (H, W, C), the resulting file shows the wrong image;
it is only correct when W == H.
How can I get the frame in (H, W, C) form?

Thank you for reading.
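On the second question: a (W, H, C) array must be converted by swapping axes, not by reshaping. A reshape keeps the flat pixel order and scrambles the image whenever W != H, which matches the symptom described above. A minimal NumPy sketch:

```python
import numpy as np

# Swap the first two axes with transpose; a reshape would reinterpret the
# same flat buffer and scramble pixels whenever W != H.

frame_whc = np.zeros((300, 200, 3), dtype=np.uint8)  # pretend (W, H, C) frame
frame_hwc = frame_whc.transpose(1, 0, 2)             # (H, W, C) view
print(frame_hwc.shape)  # (200, 300, 3)
```

The same `transpose(1, 0, 2)` call works on any frame array before handing it to PIL or OpenCV.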

Is there a way to find the shortest path within this env?

Hello, I'm a student trying to train a navigation agent based on shortest-path algorithms like A*.
How can I use those algorithms within this environment? Any algorithm would be fine.
Moreover, if possible, could I get a simple code sample for BFSController?
Thanks
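Absent an official sample, a grid BFS is easy to hand-roll: treat every reachable position as a node connected to its four axis-aligned neighbours one grid step away. The sketch below hard-codes the node set so it runs without the simulator; with AI2-THOR, the nodes would come from the scene's reachable positions instead.

```python
from collections import deque

# BFS shortest path over grid-aligned positions. Nodes are (x, z) pairs; two
# nodes are adjacent when they differ by exactly one grid step on one axis.
# The node list is hard-coded so the example runs without the simulator.

def bfs_path(nodes, start, goal, grid=0.25):
    cells = {(round(x / grid), round(z / grid)) for x, z in nodes}
    s = (round(start[0] / grid), round(start[1] / grid))
    g = (round(goal[0] / grid), round(goal[1] / grid))
    prev = {s: None}
    queue = deque([s])
    while queue:
        cur = queue.popleft()
        if cur == g:  # unwind the predecessor chain into a path
            path = []
            while cur is not None:
                path.append((cur[0] * grid, cur[1] * grid))
                cur = prev[cur]
            return path[::-1]
        for dx, dz in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dx, cur[1] + dz)
            if nxt in cells and nxt not in prev:
                prev[nxt] = cur
                queue.append(nxt)
    return None  # goal unreachable

nodes = [(x * 0.25, z * 0.25) for x in range(3) for z in range(3)]
print(bfs_path(nodes, (0.0, 0.0), (0.5, 0.5)))  # 5 positions, 4 moves
```

For A* or weighted graphs, the same node/edge construction can be fed into NetworkX instead.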

contact issue

Hi,

I have contacted the ai2thor team multiple times, sending a few emails, yet I never got a response. Does anyone even respond to those emails?

Best

Running failed after update to 0.0.25

After updating to 0.0.25 using sudo pip install --upgrade ai2thor, I can't even run the basic example.
The error message goes like this:

Traceback (most recent call last):
File "test_example.py", line 2, in
import ai2thor.controller
File "/usr/local/lib/python2.7/dist-packages/ai2thor/controller.py", line 461
SyntaxError: Non-ASCII character '\xe2' in file /usr/local/lib/python2.7/dist-packages/ai2thor/controller.py on line 461, but no encoding declared; see http://python.org/dev/peps/pep-0263/ for details

Why did import ai2thor go wrong, and how can I fix it?

Stuck in controller.start()

I am using the Ubuntu 14.04 64-bit version OS.

I ran the following script and it got stuck in the controller.start() function.

import ai2thor.controller
controller = ai2thor.controller.Controller()
controller.start()

controller.reset('FloorPlan28')
event = controller.step(dict(action='Initialize', gridSize=0.25))

It successfully opens the Unity window and enters a kitchen-like place.
However, my Python script just stopped and showed the following message:

Found path: /home/koma/.ai2thor/releases/thor-201802061507-Linux64/thor-201802061507-Linux64
Mono path[0] = '/home/koma/.ai2thor/releases/thor-201802061507-Linux64/thor-201802061507-Linux64_Data/Managed'
Mono path[1] = '/home/koma/.ai2thor/releases/thor-201802061507-Linux64/thor-201802061507-Linux64_Data/Mono'
Mono config path = '/home/koma/.ai2thor/releases/thor-201802061507-Linux64/thor-201802061507-Linux64_Data/Mono/etc'
displaymanager : xrandr version warning. 1.5
client has 4 screens
displaymanager screen (0)(HDMI-0): 1920 x 1080
Using libudev for joystick management

Importing game controller configs

and it just stopped forever.

can anyone help?

Graph representation of scenes

Hello

How can I get the graph representation of each scene? I see there is a class BFSController that has a find-shortest-path method. It needs a NetworkX graph as input. Can I have a working example of how to construct such a graph for these scenes?

Thank you!

Coordinate System

How is the coordinate system (x, y, z) defined? What are the ranges of each axis? Thanks.
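AI2-THOR inherits Unity's left-handed convention: y points up, the agent moves in the x-z plane, and rotation is given in degrees about the y axis, with rotation 0 facing +z (the ranges depend on each scene's floor plan). Under that assumption, a MoveAhead of one grid step updates the position as sketched here:

```python
import math

# Position update for MoveAhead under the Unity-style convention described
# above (y up, x-z ground plane, rotation in degrees about y, 0 deg = +z).

def move_ahead(x, z, rotation_deg, grid=0.25):
    rad = math.radians(rotation_deg)
    return x + grid * math.sin(rad), z + grid * math.cos(rad)

print(move_ahead(0.0, 0.0, 0))   # facing +z: (0.0, 0.25)
print(move_ahead(0.0, 0.0, 90))  # facing +x: (0.25, ~0.0)
```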

Multiple agents

Is it possible to have multiple agents in the same environment and control them separately?

getting more 3d model

Hi,
I would like to add more 3D models into the environment. Can I know where you get the 3D models? (including the different states of each model, such as apple, sliced apple, etc.)

Tutorial Update - How to know the positions of exact objects

Hi,
I am following the tutorial examples. The following code will make the agent look at the mug. But how do we know the positions of other objects, to do imitation learning? (E.g., finding the Fridge.)

controller.step(dict(action='Teleport', x=-1.25, y=1.00, z=-1.5))
controller.step(dict(action='LookDown'))
event = controller.step(dict(action='Rotate', rotation=90))
