
discord-llm-chatbot's Introduction

llmcord.py

Talk to LLMs with your friends!

llmcord.py lets you and your friends chat with LLMs directly in your Discord server. It works with practically any LLM, remote or locally hosted.

Features

Reply-based chat system

Just @ the bot to start a conversation and reply to continue. Build conversations with reply chains!

You can do things like:

  • Build conversations together with your friends
  • "Rewind" a conversation simply by replying to an older message
  • @ the bot while replying to any message in your server to ask a question about it

Additionally:

  • Back-to-back messages from the same user are automatically chained together. Just reply to the latest one and the bot will see all of them.
  • You can seamlessly move any conversation into a thread. Just create a thread from any message and @ the bot inside to continue.

Choose any LLM

Supports remote models from the OpenAI API, Mistral API, Anthropic API, and many more thanks to LiteLLM.

Or run a local model with ollama, oobabooga, Jan, LM Studio or any other OpenAI-compatible API server.

And more:

  • Supports image attachments when using a vision model (like gpt-4-turbo, claude-3, llava, etc.)
  • Customizable system prompt
  • DM for private access (no @ required)
  • User identity aware (OpenAI API only)
  • Streamed responses (turns green when complete, automatically splits into separate messages when too long, throttled to prevent Discord ratelimiting)
  • Displays helpful user warnings when appropriate (like "Only using last 20 messages", "Max 5 images per message", etc.)
  • Caches message data in a size-managed (no memory leaks) and per-message mutex-protected (no race conditions) global dictionary to maximize efficiency and minimize Discord API calls
  • Fully asynchronous
  • 1 Python file, ~200 lines of code
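The throttled streaming behavior described in the feature list can be sketched roughly like this. This is only an illustration of the buffering, splitting, and throttling idea, not the project's actual code; all names (`StreamBuffer`, `should_edit`) are made up:

```python
import time

# Illustrative sketch: buffer streamed LLM chunks, start a new Discord
# message when the 2000-character limit would be exceeded, and throttle
# edits to at most one per `interval` seconds to avoid rate limiting.
DISCORD_CHAR_LIMIT = 2000

class StreamBuffer:
    def __init__(self, interval=1.0):
        self.interval = interval
        self.messages = [""]   # text content of each Discord message so far
        self.last_edit = 0.0
        self.pending = False   # buffered text not yet pushed to Discord

    def add_chunk(self, chunk):
        # Split into a new message when the current one would get too long
        if len(self.messages[-1]) + len(chunk) > DISCORD_CHAR_LIMIT:
            self.messages.append("")
        self.messages[-1] += chunk
        self.pending = True

    def should_edit(self, final=False):
        # Throttle: push an edit at most once per interval,
        # but always push when the stream is finished.
        if not self.pending:
            return False
        if final or time.monotonic() - self.last_edit >= self.interval:
            self.last_edit = time.monotonic()
            self.pending = False
            return True
        return False
```

In the real bot the `should_edit` check would gate calls to `message.edit(...)` inside the async streaming loop.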

Instructions

Before you start, install Python and clone this git repo.

  1. Install Python requirements: pip install -r requirements.txt

  2. Create a copy of .env.example named .env and set it up (see below)

  3. Run the bot: python llmcord.py

Settings
DISCORD_BOT_TOKEN Create a new Discord bot at discord.com/developers/applications and generate a token under the Bot tab. Also enable MESSAGE CONTENT INTENT.
DISCORD_CLIENT_ID Found under the OAuth2 tab of the Discord bot you just made.
LLM For LiteLLM supported providers (OpenAI API, Mistral API, ollama, etc.), follow the LiteLLM instructions for its model name formatting.

For local models (running on an OpenAI compatible API server), set to local/openai/model. If using a vision model, set to local/openai/vision-model. Some setups will instead require local/openai/<MODEL_NAME> where <MODEL_NAME> is the exact name of the model you're using.
LLM_MAX_TOKENS The maximum number of tokens in the LLM's chat completion.
(Default: 1024)
LLM_TEMPERATURE LLM sampling temperature. Higher values make the LLM's output more random.
(Default: 1.0)
LLM_TOP_P LLM nucleus sampling value. Alternative to sampling temperature. Higher values make the LLM's output more diverse.
(Default: 1.0)
CUSTOM_SYSTEM_PROMPT Write practically anything you want to customize the bot's behavior!
CUSTOM_DISCORD_STATUS Set a custom message that displays on the bot's Discord profile. Max 128 characters.
ALLOWED_CHANNEL_IDS Discord channel IDs where the bot can send messages, separated by commas. Leave blank to allow all channels.
ALLOWED_ROLE_IDS Discord role IDs that can use the bot, separated by commas. Leave blank to allow everyone. Specifying at least one role also disables DMs.
MAX_IMAGES The maximum number of image attachments allowed in a single message. Only applicable when using a vision model.
(Default: 5)
MAX_MESSAGES The maximum number of messages allowed in a reply chain.
(Default: 20)
LOCAL_SERVER_URL The URL of your local API server. Only applicable when using a local model.
(Default: http://localhost:5000/v1)
LOCAL_API_KEY The API key to use with your local API server. Only applicable when using a local model. Usually safe to leave blank.
OOBABOOGA_CHARACTER Your oobabooga character that you want to use. Only applicable when using oobabooga. Leave blank to use CUSTOM_SYSTEM_PROMPT instead.
OPENAI_API_KEY Only required if you choose an OpenAI API model. Generate an OpenAI API key at platform.openai.com/account/api-keys. You must also add a payment method to your OpenAI account at platform.openai.com/account/billing/payment-methods.
MISTRAL_API_KEY Only required if you choose a Mistral API model. Generate a Mistral API key at console.mistral.ai/api-keys. You must also add a payment method to your Mistral account at console.mistral.ai/billing.

OPENAI_API_KEY and MISTRAL_API_KEY are provided as examples. Add more as needed for other LiteLLM supported providers.
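For orientation, a minimal .env for a remote OpenAI model might look like the sketch below. All values are placeholders, and .env.example remains the authoritative template since config options are still changing:

```
DISCORD_BOT_TOKEN = your-bot-token-here
DISCORD_CLIENT_ID = your-client-id-here
LLM = gpt-4-turbo
CUSTOM_SYSTEM_PROMPT = You are a helpful Discord chatbot.
MAX_IMAGES = 5
MAX_MESSAGES = 20
OPENAI_API_KEY = sk-your-key-here
```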

Notes

  • Only models from OpenAI API are user identity aware because only OpenAI supports the message "name" property. Hopefully others support this in the future.

  • PRs are welcome :)
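To illustrate the user-identity note above: the OpenAI chat API accepts an optional "name" field on each message, which is roughly how per-user identity can be conveyed. This helper and the usernames are illustrative, not the project's actual code:

```python
# Sketch: attach Discord usernames via the OpenAI-only "name" property.
def build_messages(system_prompt, history):
    """history: list of (username, content) tuples, oldest first."""
    messages = [{"role": "system", "content": system_prompt}]
    for username, content in history:
        # "name" lets the model distinguish speakers in a shared channel
        messages.append({"role": "user", "content": content, "name": username})
    return messages

msgs = build_messages("You are helpful.", [("alice", "hi"), ("bob", "hello")])
```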


discord-llm-chatbot's People

Contributors

jakobdylanc

discord-llm-chatbot's Issues

chat/completions HTTP 404

I pinged gpt "Hello", which crashed the bot:

$ python gpt-discord.py
2023-11-14 16:12:39.884 INFO: logging in using static token
2023-11-14 16:12:40.679 INFO: Shard ID None has connected to Gateway (Session ID: some-id).
2023-11-14 16:12:44.978 INFO: Generating response for prompt:
you alive?
2023-11-14 16:12:45.189 INFO: HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 404 Not Found"
2023-11-14 16:12:45.192 ERROR: Ignoring exception in on_message
Traceback (most recent call last):
  File "/home/someUser/.local/lib/python3.8/site-packages/discord/client.py", line 441, in _run_event
    await coro(*args, **kwargs)
  File "gpt-discord.py", line 83, in on_message
    async for part in await openai_client.chat.completions.create(model=os.environ["GPT_MODEL"], messages=msgs, max_tokens=MAX_COMPLETION_TOKENS, stream=True):
  File "/home/someUser/.local/lib/python3.8/site-packages/openai/resources/chat/completions.py", line 1191, in create
    return await self._post(
  File "/home/someUser/.local/lib/python3.8/site-packages/openai/_base_client.py", line 1474, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
  File "/home/someUser/.local/lib/python3.8/site-packages/openai/_base_client.py", line 1275, in request
    return await self._request(
  File "/home/someUser/.local/lib/python3.8/site-packages/openai/_base_client.py", line 1318, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: Error code: 404 - {'error': {'message': 'The model `gpt-4-1106-preview` does not exist or you do not have access to it. Learn more: https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}

Encountering Error with OpenAi key

New to all this. Trying to run the script but keep getting this:

Traceback (most recent call last):
  File "C:\Users\Chase\PythonProjects\DiscordLLM\Discord-LLM-Chatbot\llmcord.py", line 19, in <module>
    "api_key": os.environ["OPENAI_API_KEY"],
               ~~~~~~~~~~^^^^^^^^^^^^^^^^^^
  File "<frozen os>", line 685, in __getitem__
KeyError: 'OPENAI_API_KEY'


My .env file matches your spec exactly, with my recently generated OpenAI key: OPENAI_API_KEY = sk-cswxxxxxxxxxxxxxxxxxxxxxxxxxx

Fresh Install - Trying to use ooba api

I did a fresh install and have the ooba api running, but can not run as anything I try different in the .env I keep getting this error

Traceback (most recent call last):
  File "/home/aaron/Documents/GitHub/llmcord/llmcord.py", line 18, in <module>
    LOCAL_LLM: bool = env["LLM"].startswith("local/")
  File "/usr/lib/python3.10/os.py", line 680, in __getitem__
    raise KeyError(key) from None
KeyError: 'LLM'

Any suggestions on things I can try? I've installed all requirements.

My env

LLM = local/openai/model
CUSTOM_SYSTEM_PROMPT = You are a snarky Discord chatbot.
CUSTOM_DISCORD_STATUS = 

ALLOWED_CHANNEL_IDS = 
ALLOWED_ROLE_IDS = 
MAX_IMAGES = 5
MAX_MESSAGES = 20

LOCAL_SERVER_URL = http://localhost:5000/v1
LOCAL_API_KEY = 
OOBABOOGA_CHARACTER = 

# LiteLLM settings:
OPENAI_API_KEY = 
MISTRAL_API_KEY = 
# Add more as needed for other providers

Not sure what I am missing here, I used it for weeks up to this point without any issue.
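Both KeyError reports above ('OPENAI_API_KEY' and 'LLM') have the same shape: os.environ is missing a variable, which usually means the .env file was never loaded, e.g. it is still named .env.example or lives in a different working directory than the one the bot is run from. A minimal stdlib sketch of what that loading step amounts to (the project itself uses python-dotenv; this loader is only illustrative):

```python
import os

# Illustrative .env loader: reads KEY = value lines into os.environ.
# A KeyError like the ones above usually means this step never happened
# for the file you think it did.
def load_env(path=".env"):
    if not os.path.isfile(path):
        raise SystemExit(f"{path} not found - copy .env.example to .env first")
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and malformed lines
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```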

Suggestion: Consider adding a conda environment.yml file and containerizing it using Docker

Hey there,

Thank you for helping us to set up your chatbot on our Discord server. It's awesome! Very good job. I am also very glad that you started having traction. I am also trying to spread the word myself.

I would like to suggest the following:

  • Adding a conda environment.yml file (or a similar virtual environment file)
  • Adding a Dockerfile to use a Docker container

Both are good practices, because they ensure that other people who install it do not run into the "but it works on my computer" problem.

For example, how would someone know what version of Python he should be using to install this? Also, what version of pip should he be using? Yes, you could write it in your README file, but that's suboptimal. A virtual environment like conda helps eliminate or at least mitigate this problem.

Here's an example of an environment.yml from one of my repositories (using Apache License 2.0). I am not saying that it's the best, but it should be good enough to get you started:
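The example file itself did not survive here, so the following is only a generic sketch of what such an environment.yml could look like; the Python version and layout are assumptions, not tested pins:

```yaml
# Illustrative sketch only - pin to whatever versions the project actually tests against
name: llmcord
channels:
  - conda-forge
dependencies:
  - python=3.11
  - pip
  - pip:
      - -r requirements.txt
```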

Happy to talk more about it over Discord, if you wish. 🙂

Feature req: Please integrate apipie.ai

Users want access to as much AI as they can get; they don't want to manage 50 accounts. They want the fastest AI and the cheapest AI, and you can provide all of that for them with this update.

In addition to, or in place of, integrating with any other aggregators, please integrate APIpie so devs can access them all from one place/subscription. It also provides:

-The most affordable, reliable and fastest AI available
-One API to access ~500 Models and growing
-Language, embedding, voice, image, vision and more
-Global AI load balancing, route queries based on price or latency
-Redundancy for major models, providing the greatest uptime possible
-Global reporting of AI availability, pricing and performance

It's the same API format as OpenAI: just change the domain name and your API key, and enjoy a plethora of models without changing any of your code other than how you handle the models list.

This is a win-win for everyone; any new AIs from any providers will be automatically integrated into your stack with this one integration. Not to mention all the other advantages.

Suggestion: Sampler settings

Not sure if I'm missing something, but having some measure of control over sampler settings would be nice. At a minimum, being able to control top_p and temperature would make this a lot more usable. I'm currently using this with Mixtral through OpenRouter and it's rambling like mad.

Does ollama support images?

Is sending images to the bot supported using ollama?

This is what I get from the logs using llava model:

2024-04-10 21:25:09.161 INFO: Message received (user ID: 161792098901688320, attachments: 1, reply chain length: 1):
describe it
2024-04-10 21:25:09.480 INFO: HTTP Request: POST http://localhost:11434/api/generate "HTTP/1.1 400 Bad Request"
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/dist-packages/litellm/llms/ollama.py", line 270, in ollama_async_streaming
    raise OllamaError(
litellm.llms.ollama.OllamaError: b'{"error":"illegal base64 data at input byte 4"}'
2024-04-10 21:25:09.483 ERROR: Error while streaming response
Traceback (most recent call last):
  File "/root/discord-llm-chatbot/llmcord.py", line 176, in on_message
    async for curr_chunk in await acompletion(**kwargs):
  File "/usr/local/lib/python3.11/dist-packages/litellm/llms/ollama.py", line 284, in ollama_async_streaming
    raise e
  File "/usr/local/lib/python3.11/dist-packages/litellm/llms/ollama.py", line 270, in ollama_async_streaming
    raise OllamaError(
litellm.llms.ollama.OllamaError: b'{"error":"illegal base64 data at input byte 4"}'

Option to send as regular text

Hello, I was messing around with your bot and comparing it to chrisrude's oobabot plugin (when it was functional) for textgen webui. One thing that I think would enhance this software would be giving the bot the option to send as regular text like in oobabot or have the option to send it as embeds like what is currently possible in this software.

msg_nodes dict needs size limit

The msg_nodes dict will keep growing indefinitely while the bot is kept running. Need to find the most elegant way to resolve this.

Probably need to use OrderedDict... or maybe just auto-wipe the entire msg_nodes dict on a configurable schedule (e.g. every 24 hours).

Either way we would also need protection against race conditions (a msg_nodes entry being deleted while simultaneously being accessed).
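A minimal sketch of the OrderedDict idea: cap the dict's size, evict the oldest entry, and give each entry its own asyncio.Lock so eviction can't race an in-flight access. Names and the cap value here are illustrative, not the project's actual code:

```python
import asyncio
from collections import OrderedDict

MAX_NODES = 500  # illustrative cap

msg_nodes = OrderedDict()  # msg_id -> MsgNode, in insertion order

class MsgNode:
    def __init__(self, data):
        self.data = data
        self.lock = asyncio.Lock()  # per-entry mutex against races

async def put_node(msg_id, data):
    msg_nodes[msg_id] = MsgNode(data)
    # Evict oldest entries past the cap; take each entry's lock first
    # so we never delete a node that another task is still using.
    while len(msg_nodes) > MAX_NODES:
        old_id, old_node = next(iter(msg_nodes.items()))
        async with old_node.lock:
            del msg_nodes[old_id]
```

Readers would hold `node.lock` while accessing `node.data`, making deletion wait until the access finishes.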

discord bot

The Discord bot just shows as typing, but it doesn't reply.

Bot real slow to post / edit.

Hi,

I had great success with this project when using ollama.

I swapped to a model that runs on KoboldAI (OpenAI-compatible) and now the bot's responses take ages.

It processes the input quickly and has the answer ready in ~30 seconds max, but when the bot is posting the answer to Discord, python3 starts running at 100% on a single core and the bot takes ages to finish posting.

Any idea what I'm doing wrong?

No image support

gpt-discord does not support images due to OpenAI API/model limitations. Other bots accomplish this by bringing in other APIs/models that have image support. But I want to keep gpt-discord as elegant and simple as possible, prioritizing GPT-4 support for its overall superiority.

Once OpenAI adds support, so will I. Hopefully soon!

"WARNING: Shard ID None heartbeat blocked for more than XXX seconds." has been shown up

Hi guys,

I am using the latest code, but this warning shows up in the log file:

"WARNING: Shard ID None heartbeat blocked for more than XXX seconds."

And then my Discord bot goes offline after several minutes.

I've checked the code around async and asyncio.Lock, it looks fine.

So far I have no idea for this issue.

Please help me if you have any solutions.

Thanks,

Best

Priv'd intents requirements missing from README

On start:

 python gpt-discord.py
2023-11-14 15:59:27.105 INFO: logging in using static token
Traceback (most recent call last):
  File "gpt-discord.py", line 110, in <module>
    if __name__ == "__main__": asyncio.run(main())
  File "/usr/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "gpt-discord.py", line 109, in main
    async def main(): await discord_client.start(os.environ["DISCORD_BOT_TOKEN"])
  File "/home/someUser/.local/lib/python3.8/site-packages/discord/client.py", line 778, in start
    await self.connect(reconnect=reconnect)
  File "/home/someUser/.local/lib/python3.8/site-packages/discord/client.py", line 704, in connect
    raise PrivilegedIntentsRequired(exc.shard_id) from None
discord.errors.PrivilegedIntentsRequired: Shard ID None is requesting privileged intents that have not been explicitly enabled in the developer portal. It is recommended to go to https://discord.com/developers/applications/ and explicitly enable the privileged intents within your application's page. If this is not possible, then consider disabling the privileged intents instead.
2023-11-14 15:59:27.884 ERROR: Unclosed connector
connections: ['[(<aiohttp.client_proto.ResponseHandler object at 0x7f1880b31220>, 36723372.991516)]']
connector: <aiohttp.connector.TCPConnector object at 0x7f187fd64e80>

Which intents do we need aside from "message content intent" (already checked)?

Loading characters?

Is there any way I can load the character I made in oobabooga webui?
Really nice job btw, thanks!

Service does not start

Ubuntu 22, Python 3.10. Can you help me?

root@gptsher:~/gpt-discord# python gpt-discord.py
Traceback (most recent call last):
  File "/root/gpt-discord/gpt-discord.py", line 7, in <module>
    encoding = tiktoken.get_encoding("cl100k_base")
  File "/root/gpt-discord-bot/tutorial-env/lib/python3.10/site-packages/tiktoken/registry.py", line 73, in get_encoding
    enc = Encoding(**constructor())
  File "/root/gpt-discord-bot/tutorial-env/lib/python3.10/site-packages/tiktoken_ext/openai_public.py", line 64, in cl100k_base
    mergeable_ranks = load_tiktoken_bpe(
  File "/root/gpt-discord-bot/tutorial-env/lib/python3.10/site-packages/tiktoken/load.py", line 117, in load_tiktoken_bpe
    return {
  File "/root/gpt-discord-bot/tutorial-env/lib/python3.10/site-packages/tiktoken/load.py", line 119, in <dictcomp>
    for token, rank in (line.split() for line in contents.splitlines() if line)
ValueError: not enough values to unpack (expected 2, got 1)

ollama integration

If I want to use the ollama llama3 model, what should I put in LLM=?

Give me an example, please. :)
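Assuming LiteLLM's ollama naming convention (an `ollama/` prefix followed by the exact tag shown by `ollama list`), the setting would look something like:

```
LLM = ollama/llama3
```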

Oobabooga Support

Hello! Do you have any plans to support Oobabooga when using a local model, other than LM Studio?

Issues when using Oobabooga

Trying to use this with Oobabooga atm, but getting this error:

2024-03-31 23:36:25.227 ERROR: Error while streaming response
Traceback (most recent call last):
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 69, in map_httpcore_exceptions
    yield
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 254, in __aiter__
    async for part in self._httpcore_stream:
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/httpcore/_async/connection_pool.py", line 367, in __aiter__
    raise exc from None
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/httpcore/_async/connection_pool.py", line 363, in __aiter__
    async for part in self._stream:
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/httpcore/_async/http11.py", line 349, in __aiter__
    raise exc
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/httpcore/_async/http11.py", line 341, in __aiter__
    async for chunk in self._connection._receive_response_body(**kwargs):
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/httpcore/_async/http11.py", line 210, in _receive_response_body
    event = await self._receive_event(timeout=timeout)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/httpcore/_async/http11.py", line 220, in _receive_event
    with map_exceptions({h11.RemoteProtocolError: RemoteProtocolError}):
  File "/usr/lib/python3.11/contextlib.py", line 155, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.RemoteProtocolError: peer closed connection without sending complete message body (incomplete chunked read)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/llmcord.py", line 176, in on_message
    async for chunk in await acompletion(**kwargs):
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/litellm/utils.py", line 9864, in __anext__
    raise e
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/litellm/utils.py", line 9748, in __anext__
    async for chunk in self.completion_stream:
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/openai/_streaming.py", line 150, in __aiter__
    async for item in self._iterator:
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/openai/_streaming.py", line 167, in __stream__
    async for sse in iterator:
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/openai/_streaming.py", line 158, in _iter_events
    async for sse in self._decoder.aiter(self.response.aiter_lines()):
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/openai/_streaming.py", line 295, in aiter
    async for line in iterator:
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/httpx/_models.py", line 963, in aiter_lines
    async for text in self.aiter_text():
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/httpx/_models.py", line 950, in aiter_text
    async for byte_content in self.aiter_bytes():
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/httpx/_models.py", line 929, in aiter_bytes
    async for raw_bytes in self.aiter_raw():
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/httpx/_models.py", line 987, in aiter_raw
    async for raw_stream_bytes in self.stream:
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/httpx/_client.py", line 149, in __aiter__
    async for chunk in self._stream:
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 253, in __aiter__
    with map_httpcore_exceptions():
  File "/usr/lib/python3.11/contextlib.py", line 155, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 86, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.RemoteProtocolError: peer closed connection without sending complete message body (incomplete chunked read)

The model I'm using is a GPTQ llama2 model running via Exllama.

Not working anymore for local models (LLM=Local/model LM Studio)

Hi,
I had previous version of Discord-LLM-ChatBot working well with LM Studio locally.

When I try with the newest update, the bot will not respond and I am not seeing anything in the LM Studio logs.

I tried everything; what did I do incorrectly? I can provide error logs if that would help.

I use Python 3.11.7 in a conda env, pip 23.3.1, and LM Studio 0.2.13 on Windows.
(Not a native English speaker, apologies for my English.)

Want a guided install for Oobabooga/OpenAI. A newbie here.

Hey Jakob,

I was trying to install your bot in my server and just could not succeed. I am just an oobabooga user and don't know much about the OpenAI API, but I dearly want to make it work. What do you suggest? Can you help me via a session? Can you add @nickoaiart or @AiModelMaya on Discord to discuss further? Of course it's a paid gig, if you are fine with that.

Try this before submitting an issue

  • Force-update to my latest commit: git fetch && git reset --hard origin/main
  • Update your python packages: pip install -U -r requirements.txt
  • Make sure your .env is up-to-date with .env.example (this project is WIP so config options are still changing)

If the bot errors...

Make sure your .env is up-to-date with .env.example. I've added some new config options.
