gen.nvim's People

Contributors

abayomi185, alaaibrahim, amzd, bartman, danieleliasib, david-kunz, hass-demacia, jmdaly, joseconseco, kamnxt, leafo, mte90, naripok, smjonas, weygoldt, wishuuu

gen.nvim's Issues

Allow for ollama REST API usage

Since I am running Ollama on a different machine, I would like to be able to use this plugin through the REST API instead of starting a new service on my working machine.

Maybe this could be done via a configuration option?
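
A sketch of what this could look like with the existing command and init options (documented in the README snippets quoted further down this page); REMOTE_HOST is a placeholder for the machine running Ollama:

require('gen').setup({
    -- point the curl command at the remote Ollama REST API instead of localhost
    command = "curl --silent --no-buffer -X POST http://REMOTE_HOST:11434/api/generate -d $body",
    -- make init a no-op so the plugin does not try to start a local `ollama serve`
    init = function(options) end,
})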

Feat: Allow to compare with current buffer

Hey there,
It would be great to be able to compare the newly generated code with the previous version when text is selected.
At a high level, it could be an interface similar to vim-fugitive:
(screenshot)
It does not need to be as complex, but I think this would help: we could decide whether to accept some or all of the chunks and compare them with what we had.
Best,

[enhancement] hide diagnostic in the new buffer gen.nvim on split view

The newly created buffer becomes a Christmas tree with LSP and other diagnostics one may have active.

I've tried to set up an autocmd on BufEnter with pattern gen.nvim to trigger vim.diagnostic.hide(), but wasn't successful (it works when exiting and re-entering, though). The other events I tried don't work either.

Is it possible to disable them on the new buffer?

the code in question is here

gen.nvim/lua/gen/init.lua

Lines 121 to 129 in 46ab810

else
vim.cmd("vnew gen.nvim")
M.result_buffer = vim.fn.bufnr("%")
M.float_win = vim.fn.win_getid()
vim.api.nvim_buf_set_option(M.result_buffer, "filetype", "markdown")
vim.api.nvim_buf_set_option(M.result_buffer, "buftype", "nofile")
vim.api.nvim_win_set_option(M.float_win, "wrap", true)
vim.api.nvim_win_set_option(M.float_win, "linebreak", true)
end

this is the result

image
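
A possible workaround, as a sketch (assuming the result buffer keeps the gen.nvim name set by the vnew call above, and that vim.diagnostic.disable is available in your Neovim version):

vim.api.nvim_create_autocmd({ "BufWinEnter", "BufEnter" }, {
    pattern = "gen.nvim",
    callback = function(args)
        -- turn diagnostics off for just this buffer
        vim.diagnostic.disable(args.buf)
    end,
})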

ask for diff mode

Vim/Neovim has the best diff editor, which makes it easy to display and accept chunks from a diff between two texts. However, I haven't found any Neovim plugin that gives me the generated text from ChatGPT in a diff format.
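
For reference, a minimal sketch of how generated text could be diffed against the current buffer using built-in diff mode (diff_against_current is a hypothetical helper, not part of gen.nvim):

-- open `lines` (the generated text) in a scratch split and diff it against the current window
local function diff_against_current(lines)
    vim.cmd("diffthis")     -- mark the original window for diffing
    vim.cmd("vnew")         -- scratch split for the generated text
    vim.bo.buftype = "nofile"
    vim.api.nvim_buf_set_lines(0, 0, -1, false, lines)
    vim.cmd("diffthis")     -- diff the scratch buffer against the original
end

Chunks could then be pulled into the original buffer with the usual do / dp diff commands, which is essentially the requested workflow.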

Cannot override model

I think this is due to these lines:

    local opts = vim.tbl_deep_extend('force', {
        model = 'mistral:instruct',
        command = M.command

Not sure, though. I do not see require('gen').model = 'your_model' being read anywhere; the model seems to be set only in the local opts, and not from M.model...
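
For reference, the two ways the model is overridden elsewhere on this page (a sketch; whether they take effect may depend on the plugin version this issue refers to):

-- via setup
require('gen').setup({ model = 'your_model' })
-- or by assigning the module field directly (used by the model-switch snippets further down)
require('gen').model = 'your_model'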

Blank output with the setup config from the README

I have tried the following

-- Custom Parameters (with defaults)
{
    "David-Kunz/gen.nvim",
    opts = {
        model = "mistral", -- The default model to use.
        display_mode = "float", -- The display mode. Can be "float" or "split".
        show_prompt = false, -- Shows the Prompt submitted to Ollama.
        show_model = false, -- Displays which model you are using at the beginning of your chat session.
        no_auto_close = false, -- Never closes the window automatically.
        init = function(options) pcall(io.popen, "ollama serve > /dev/null 2>&1 &") end,
        -- Function to initialize Ollama
        command = "curl --silent --no-buffer -X POST http://localhost:11434/api/generate -d $body",
        -- The command for the Ollama service. You can use placeholders $prompt, $model and $body (shellescaped).
        -- This can also be a lua function returning a command string, with options as the input parameter.
        -- The executed command must return a JSON object with { response, context }
        -- (context property is optional).
        list_models = '<omitted lua function>', -- Retrieves a list of model names
        debug = false -- Prints errors and the command which is run.
    }
},

and

require('gen').setup({
  -- same as above
})

What seemed to work for me was the setup given here:
#32 (comment)

However the output is just one line with no line breaks or word wrap.

Docker

How can I use this plugin with Ollama running in a Docker container?

Thanks.
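
A sketch of one way this can work, assuming the official ollama/ollama image with its default port published (e.g. docker run -d -p 11434:11434 --name ollama ollama/ollama): the plugin then only needs init disabled so it does not try to start a local ollama serve, while the default command already targets the published port.

require('gen').setup({
    -- Ollama is already running inside the container, so don't start it locally
    init = function(options) end,
    command = "curl --silent --no-buffer -X POST http://localhost:11434/api/generate -d $body",
})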

Output window remains blank

hi there,

thanks for the plugin!

I am trying it out but, unfortunately, cannot get Ollama's output to display inside Neovim.
When I run the commands the plugin uses directly in my terminal, everything works fine.

Here's an mp4:
https://storage.siblanco.dev/gen-nvim.mp4

Neovim version:

NVIM v0.9.4
Build type: Release
LuaJIT 2.1.1692716794

Any idea how I could debug this further?
Please, let me know what else I could provide.

Hard to add this plugin using lazy.nvim with a container setup

I need to use a Docker container to serve my custom model, and it works when using the Ollama Docker images. But when it comes to using it with this plugin, it is difficult to set up the container configuration in the lazy.nvim package manager. Can you provide more code snippets for different config setups? Thanks for your time and effort on this good project.

Some problem "Expected Lua number" when use plugin.

Hello David, Thanks for that great plugin it awesome.

Could please help me fix this problem:

curl --silent --no-buffer -X POST http://localhost:11434/api/generate -d '{"model": "mistral", "stream": true, "prompt": "generate kubernetes manifest file as pod with busybox"}'
...my-user/.local/share/nvim/lazy/gen.nvim/lua/gen/init.lua:292: Expected Lua number
stack traceback:
^I[C]: in function 'nvim_buf_delete'
^I...my-ser/.local/share/nvim/lazy/gen.nvim/lua/gen/init.lua:292: in function <...my-user/.local/share/nvim/lazy/gen.nvim/lua/gen/init.lua:268> function: builtin#1
8 ...my-user/.local/share/nvim/lazy/gen.nvim/lua/gen/init.lua:292: Expected Lua number

Setup:

astronvim: require("astronvim.health").check()

AstroNvim ~
- AstroNvim Version: v3.39.0
- Neovim Version: v0.9.2
- OK Using stable Neovim >= 0.8.0
- OK `git` is installed: Used for core functionality such as updater and plugin management
- OK `open` is installed: Used for `gx` mapping for opening files with system opener (Optional)
- OK `lazygit` is installed: Used for mappings to pull up git TUI (Optional)
- OK `node` is installed: Used for mappings to pull up node REPL (Optional)
- OK `gdu` is installed: Used for mappings to pull up disk usage analyzer (Optional)
- WARNING `btm` is not installed: Used for mappings to pull up system monitor (Optional)
- OK `python` is installed: Used for mappings to pull up python REPL (Optional)

Plugin config:

return {
  "David-Kunz/gen.nvim",
  keys = {
    { "<leader>ga", ":Gen<CR>", desc = "Ollama Generate", mode = { "n", "v" } }, -- This Works
  },
  opts = {
    model = "mistral", -- The default model to use.
    display_mode = "float", -- The display mode. Can be "float" or "split".
    show_prompt = false, -- Shows the Prompt submitted to Ollama.
    show_model = false, -- Displays which model you are using at the beginning of your chat session.
    no_auto_close = false, -- Never closes the window automatically.
    init = function(options) pcall(io.popen, "ollama serve > /dev/null 2>&1 &") end,
    -- Function to initialize Ollama
    command = "curl --silent --no-buffer -X POST http://localhost:11434/api/generate -d $body",
    -- The command for the Ollama service. You can use placeholders $prompt, $model and $body (shellescaped).
    -- This can also be a lua function returning a command string, with options as the input parameter.
    -- The executed command must return a JSON object with { response, context }
    -- (context property is optional).
    -- list_models = "<function>", -- Retrieves a list of model names
    -- debug = false, -- Prints errors and the command which is run.
    debug = true, -- Prints errors and the command which is run.
  },
}

model change doesn't work

With a LazyVim setup:

vim.api.nvim_create_user_command("ChangeModel", function()
  vim.ui.input({ prompt = "Enter new model: " }, function(input)
    if input and input ~= "" then
      require("gen").model = input
      print("Model changed to: " .. input)
    else
      print("No model name provided. Model not changed.")
    end
  end)
end, {})

return {
  "David-Kunz/gen.nvim",
  keys = {
    { "<leader>ai", ":Gen<CR>", mode = { "n", "v", "x" }, desc = "Local AI Gen" },
    { "<leader>am", ":ChangeModel<CR>", mode = { "n" }, desc = "Change Model" },
  },
  config = function()
    local opts = {
      model = "deepseek-coder:33b-instruct-q4_0",
      display_mode = "float",
      show_model = true,
      no_serve = true,
      debugCommand = true,
    }

    local gen = require("gen")

    gen.model = opts.model
    gen.display_mode = opts.display_mode
    gen.show_model = opts.show_model
    gen.no_serve = opts.no_serve
  end,
}

For some reason, after changing the model there is no response, even though it initially works fine. I tried different models; the result is the same blank window.

Any clues?
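
For comparison, a sketch of the same configuration going through setup() instead of assigning module fields directly (setup() is what the README-style configs elsewhere on this page use; that this fixes the blank window here is an assumption):

config = function()
    require("gen").setup({
        model = "deepseek-coder:33b-instruct-q4_0",
        display_mode = "float",
        show_model = true,
    })
end,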

No output if the model is missing

I am on Linux and I installed Ollama (and I can see it running).

When I try the :Gen command, anything I generate produces a tiny, empty window:
(screenshot)

I was also able to generate a big window inside the code, but if the cursor is on the Neo-tree buffer, it opens inside that buffer.
There isn't any parameter to define which filetypes/buffers to exclude.

Fix code with the help of LSP

Thank you for building this plugin and sharing with the community.

Prompt : Write code in golang to send email with attachment. handle content type automatically.

Improvement:

  • We could pass all errors generated by the LSP and send them back to the model when asking it to fix the code.
image

Error: Unexpected EOF

Hi,

this plugin is not working for me and I don't quite understand why (maybe because I'm on Windows?).

I don't get a response from ollama when using this plugin, so the response modal is always empty. Enabling debug output, I get these logs:

curl --silent --no-buffer -X POST http://localhost:11434/api/chat -d "{""messages"": [{""role"": ""user"", ""content"": ""Regarding the following text, hi:\n""}], ""model"": ""mistral"", ""stream"": true}"
Response data:
{ '{"error":"unexpected EOF"}' }
Response data:
{ "" }

When I try to run the same curl command in git-bash, the output is this:

$ curl --silent --no-buffer -X POST http://localhost:11434/api/chat -d "{""messages"": [{""role"": ""user"", ""content"": ""Regarding the following text, hi:\n""}], ""model"": ""mistral"", ""stream"": true}"
{"error":"invalid character 'm' looking for beginning of object key string"}

I don't quite understand why the JSON body contains doubled quotes. When I remove them and surround the body with single quotes, then it works:

curl --silent --no-buffer -X POST http://localhost:11434/api/chat -d '{"messages": [{"role": "user", "content": "Regarding the following text, hi:\n"}], "model": "mis
tral", "stream": true}'
{"model":"mistral","created_at":"2024-03-17T16:18:22.8339767Z","message":{"role":"assistant","content":" Hello"},"done":false}
...

I don't know if this is really the cause of the issue or if the "unexpected EOF" comes from something else, but that's what I was able to figure out so far.

Here's a screenshot of this issue in action:
image

Using Plugged as package manager?

I'm loading Gen using

Plug 'David-Kunz/gen.nvim', {'branch': 'main'}                           

What's the proper way to pass the configuration in this case?

Thanks.
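
Since vim-plug itself does not pass configuration to plugins, one common approach is to call setup() yourself after the plugins are loaded. A minimal sketch (assuming an init.lua or a sourced Lua file):

-- after the plug#end() call, e.g. in init.lua or a required Lua module
require('gen').setup({
    model = 'mistral',
})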

Ollama is not triggered

I only successfully generated some content once.

After that, Ollama no longer seems to be triggered (the RAM usage is normal). The window is empty every time, without any debug message.
image

Other tools using Ollama work fine (like the Raycast Ollama extension).

My setting:

return {
  "David-Kunz/gen.nvim",
  cond = No_vscode,
  event = "VeryLazy",
  config = function()
    require("gen").prompts["Explain_Code"] = {
      prompt = "Explain the following code in $filetype:\n```\n$text\n```",
    }
  end,
  opts = {
    model = "codellama:7b-instruct",
    show_model = true,
    debug = true,
  },
  keys = {
    {
      "<leader>gs",
      "<cmd>Gen<CR>",
      mode = { "n", "x" },
      desc = "[S]tart [G]enrate with llm",
    },
    {
      "<leader>gc",
      "<cmd>Gen Chat<CR>",
      mode = { "n", "x" },
      desc = "[C]ontinue [C]hat with llm",
    },
  },
}

Add parameters table to prompt

In some cases it would be good to tweak model parameters:

  • e.g. for summaries a low temperature is suggested, so that the model does not hallucinate as much.
  • Depending on the user's GPU, we could also set the number of GPU layers used.
  • Some models are not correctly configured; e.g. Mistral supports a context of 8k, but Ollama does not set it, so I think it is just using the default 2k, etc.
    Ollama does seem to support such a parameters feature: https://github.com/jmorganca/ollama/blob/main/docs/api.md#parameters
    My feature request: add an additional parameters input to the prompt (see the example request below).
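
For reference, the linked Ollama API accepts these as an options object in the request body, roughly like this (field names taken from the linked docs; values are illustrative):

curl --silent --no-buffer -X POST http://localhost:11434/api/generate -d '{"model": "mistral", "prompt": "Summarize the following text: ...", "stream": false, "options": {"temperature": 0.2, "num_ctx": 8192, "num_gpu": 35}}'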

And one question (since there is no discussion section).
I'm just slightly confused about how Ollama works:

  • gen.nvim starts 'ollama serve'; no model is loaded in memory yet, right?
  • The user sends a query to Ollama, Ollama loads the model and sends the response, but the model stays loaded in memory?
  • If we send a request with a different gpu-layers parameter, would it unload the previous model and load the new one?

Suggestion: add choice of models to switch to

Not entirely sure where to put this; it could be added to the main list of actions that is invoked with :Gen,

or it could just be added to the README as a suggestion for people who want to conveniently switch
to a different model for a given :Gen invocation. What do you think?

this is how I use it for myself:

local changeModel = function()
  -- take the first column of the output of ollama list after header row and trim whitespace
  -- so that systemlist returns a valid list of models
  local list = vim.fn.systemlist("ollama list | awk 'NR>1' | cut -f1 | tr -d ' '")

  vim.ui.select(list, { prompt = "Select a model: " }, function(selected)
    if selected == nil then
      return
    end

    require("gen").model = selected
    print("Model set to " .. selected)
  end)
end

return {
  "David-Kunz/gen.nvim",
  keys = {
    { "<leader>ai", ":Gen<CR>", mode = { "n", "v", "x" }, desc = "Local [AI]: Menu" },
    { "<leader>am", changeModel, mode = { "n" }, desc = "Local [AI]: Change Model" },
  },
  opts = {
    display_mode = "float",
    show_model = true,
    no_serve = true,
    debugCommand = true,
  },
}

extract not extracting from response

Extract doesn't work anymore; instead, the selection is replaced by the whole response.
Last commit where it still works for me with the default LazyVim setup is this one: 699b4a5

Thanks for this nice tool!

[feature request] dialogue with the model

It would be awesome to be able to converse with the model based on the input instead of it being a fire-once approach.

I'm thinking of something like a "Conversate" prompt that opens a window in which each question and answer can be viewed.

I would love this in order to adjust the question asked or to specify further details for the prompts.

Which-key seems to "steal" the buffer content

I'm using LazyVim, and I have added a keybinding for gen.nvim. It seems that because which-key is triggered, the content of the buffer is empty, and so is the $text parameter (in visual mode).
If I use : to trigger the Gen command, it works as expected.

Formatting in gen.nvim tab

Hi David,

thanks for writing this plugin. I'm just wondering how you are getting the output from Gen to be formatted correctly; is this another plugin you have set up? Right now the response I get from Gen appears as raw Markdown.

Thanks

feature request - Add conversation Support

I would like to be able to have an iterative conversation with the AI. This is supported with the web UIs for ollama already.

Example:

user: Write a lambda in typescript that returns "Hello World"

AI: <lambda code>

user: Can you modify the lambda to get the response message from the process_body method?

AI: <modified lambda code>

Add Telescope Support and prepare for lasting Sessions

Hello Everyone,

First of all thanks for the great plugin @David-Kunz.

I'd like to propose some new features. As I see this as a discussion, I won't make separate topics for each yet, to not spam the issues list. The development of the plugin is very fast, so some things might already be redundant.

  1. Adding Support for Telescope
  2. No static model, either configure a model in the prompt or let the user select
  3. Option for vertical split for comparisons (also prep for chat) (Less focus on temporary floats)
  4. Multi-line prompts
  5. Selective prompts

Adding Support for Telescope

Currently, when selecting a prompt, you get the selector menu and type in a number. This is very fast if you know exactly which prompt you want and the list is short.

However, this becomes unusable as soon as you define a larger number of prompts and/or your screen is smaller than the total set of prompts. Therefore I think adding Telescope as an (optional?) alternative might be better here, as it allows fast selection even with a larger number of prompts.

I've made a quick poc to test this out and see if it's better and I think it is worth adding. I've forked your repo and made the necessary changes: https://github.com/cloud-wanderer/gen.nvim

I'm sorry about the code quality. It's my first time coding in Lua, so I'm not familiar with the standards yet. Also, my focus was on testing it quickly rather than making it perfect.

Once you use :Gen, a Telescope instance opens to select the prompt.

(screenshots)

No static Model

Foundation models are not equally suited to every type of prompt or use case. Depending on the type of question, I found that different models give better results. Therefore I do not think it is advisable to have the concept of a default model.

I suggest having the model as a parameter/attribute inside prompts.lua; if it is not set, a dialogue with the available models opens. I've done this as well in the mentioned repository with Telescope (the default model can of course be a fallback in case of an empty list).

Side remark: I've found some issues with the way the content is extracted from the visual selection when using Telescope, so I had to change the way this is done. However, it's not compatible with replacing text.

(screenshot)

Vertical Splits

Floating windows are nice visually. However, if you want to keep the content and maybe extend it later for chat, these temporary windows are not ideal. Also, most of the results I got from the models are not satisfactory enough to just replace the text as is. Sometimes they have security issues and other times they are simply wrong.

Especially when looking ahead to a future chat option, I suggest focusing on vertical splits (configurable). This way you can compare the output and take whatever you need. That buffer can then be extended with more answers from further chats.

In my PoC I open a new buffer for every new prompt. If a vertical split exists (position 2), the output will be shown there. If not, a new vsplit will be created. This way it is easy to compare the results of the prompt with the original buffer. It would also allow for a new command to "Accept" the result, as well as keeping a record / saving answers for later.

As for chat, I thought about keeping the last buffer as well as the underlying data. The split window can then be extended with further chat answers, while the code can be applied in the other split if satisfactory.

(screenshot)

Multi-line prompts

Currently the input selector is not ideal for adding more content or a longer prompt, especially if it is pasted from another source. I suggest adding multi-line prompts as the default (maybe via a temporary buffer and a command to send the content of this buffer as a prompt, adding """ before and after).

Selective Prompts

Not every prompt is applicable to all filetypes. I suggest preselecting the list of available prompts depending on the current filetype, to narrow it down. For example, "Create a markdown table from this input" should only pop up for a list of filetypes like *.md.

I suggest a new optional attribute holding a list of file endings. If not set, the prompt will apply to all filetypes; a sketch of the idea follows below.
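
A minimal sketch of the proposed attribute (the filetypes key is hypothetical, not an existing gen.nvim option; the prompts table itself is the documented extension point):

require('gen').prompts['Markdown_Table'] = {
    prompt = "Create a markdown table from this input:\n$text",
    filetypes = { "markdown" }, -- hypothetical: only offer this prompt for these filetypes
}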

feat: opt in to default prompts

I'd like to make a PR that implements the option for users to opt in/out of the default prompts.

Would look like this:

require("gen").setup({ load_default_prompts = bool })

Use case:

The default prompts can be "hidden" for users who do not want to use them, or want to modify them into their own version that they use instead of the original defaults.

Users who are content with only the default prompts, or who add a few of their own to complement them, will see no change.

Unable to configure custom keymaps

I use lazy.nvim with the config for gen.nvim set up in the module custom.plugins. I have it in a file called gen.lua and it contains the following:

return {
    "David-Kunz/gen.nvim",
    opts = {
        model = "mistral",      -- The default model to use.
        display_mode = "split", -- The display mode. Can be "float" or "split".
        show_prompt = false,    -- Shows the Prompt submitted to Ollama.
        show_model = true,     -- Displays which model you are using at the beginning of your chat session.
        no_auto_close = false,  -- Never closes the window automatically.
        init = function(options) pcall(io.popen, "ollama serve > /dev/null 2>&1 &") end,
        -- Function to initialize Ollama
        command = "curl --silent --no-buffer -X POST http://<some non-localhost ip>:11434/api/generate -d $body",
        -- The command for the Ollama service. You can use placeholders $prompt, $model and $body (shellescaped).
        -- This can also be a lua function returning a command string, with options as the input parameter.
        -- The executed command must return a JSON object with { response, context }
        -- (context property is optional).
        list_models = '<omitted lua function>', -- Retrieves a list of model names
        debug = false                           -- Prints errors and the command which is run.
    },
    config = function()
        vim.keymap.set({ 'n', 'v' }, '<leader>ia', ':Gen Ask<CR>', { desc = "A[I] [A]sk" })
        vim.keymap.set({ 'n', 'v' }, '<leader>ic', ':Gen Change<CR>', { desc = "A[I] [C]hange" })
        vim.keymap.set({ 'n', 'v' }, '<leader>icc', ':Gen Change_Code<CR>', { desc = "A[I] [C]hange [C]ode" })
        vim.keymap.set({ 'n', 'v' }, '<leader>ih', ':Gen Chat`<CR>', { desc = "A[I] C[h]at" })
        vim.keymap.set({ 'n', 'v' }, '<leader>ie', ':Gen Enhance_Code<CR>', { desc = "A[I] [E]nhance code" })
        vim.keymap.set({ 'n', 'v' }, '<leader>iew', ':Gen Enhance_Wording<CR>', { desc = "A[I] [E]nhace [W]ording" })
        vim.keymap.set({ 'n', 'v' }, '<leader>ieg', ':Gen Enhance_Grammar_Spelling<CR>', { desc = "A[I] [E]nhance [G]rammar" })
        vim.keymap.set({ 'n', 'v' }, '<leader>ig', ':Gen Generate<CR>', { desc = "A[I] [G]enerate" })
        vim.keymap.set({ 'n', 'v' }, '<leader>ir', ':Gen Review_Code<CR>', { desc = "A[I] [R]eview Code" })
        vim.keymap.set({ 'n', 'v' }, '<leader>is', ':Gen Summarize<CR>', { desc = "A[I] [S]ummarize" })
    end
}

If I run it as is, the keymaps will work, but the window will never populate with a response from Ollama. However, if I remove the config section, the keymaps don't work, but the CLI commands do and I start getting text back from the Ollama server.

Admittedly my Lua skills are not amazing, but based on how my other plugins are configured and some Googling around, I believe this should work. Can anyone tell me what's up?
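
For reference, a sketch of a config function that still calls setup with the opts table (in lazy.nvim, defining config replaces the default handler that would otherwise call require('gen').setup(opts), which could explain why the window never populates; treat that diagnosis as an assumption):

    config = function(_, opts)
        require('gen').setup(opts) -- keep the options from the opts table active
        vim.keymap.set({ 'n', 'v' }, '<leader>ia', ':Gen Ask<CR>', { desc = "A[I] [A]sk" })
        -- ...remaining keymaps as above
    end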

Can't get Gen command working

Thanks for the great work.
I'm new to Neovim; I use NvChad.
I installed gen.nvim with the minimal configuration

~/.config/nvim/lua/custom/plugins/plugins.lua
_______
  ...
  {
      "David-Kunz/gen.nvim",
  },
...

After running ollama serve (version 0.1.22), I opened a text file and found the Gen command not working. Here's a screenshot.
image

Closing float window

Sorry if this has been documented; I looked and could not see an answer.

Q. I run Gen Chat -> get a prompt -> send query,

  • Floating window gets updated
  • I cut the snippet out of the floating window
  • But I cannot find a way to close the floating window to get back to my buffer to paste the snippet

Any clues?
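
This is generic Neovim behavior rather than anything gen.nvim-specific, so take it as a sketch: with the cursor inside the floating window, :q or <C-w>q should close it, or it can be done programmatically:

-- close the window the cursor is currently in (e.g. the gen.nvim float)
vim.api.nvim_win_close(0, true)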

`$text` only grab a single character into prompt

When selecting any amount of text (in both v and V mode) and using a prompt that contains $text, only one character seems to be grabbed into the prompt.

In the following example, it seems only , is sent to the prompt.
(screenshot)

The prompt I am using:

prompt = "Explain the meaning of following code:\n```$filetype\n$text\n```",

Lost functionality with output rendered with "termopen"

A few things I noticed from the termopen change with regard to usability:

  • pressing any key will now immediately close the window, which means:
    • You can no longer scroll up and down the output if it's longer than the size of the window
    • You can no longer make selections in the window to copy data from it
  • stderr is now included in the output (not as critical, but can be an annoyance if using an alternate command that outputs debug info)

The change was originally discussed here: #32 (comment)

Unable to get LLM to see my input buffer

Hi David,

Great stuff you have here. I am able to use this to chat with codellama just fine; however, I cannot figure out how to get it to see the current buffer.

Things I have tried:

  1. Using the default settings, by just calling require('gen').setup{}
  2. Trying out the custom options, which I believe are just the defaults from the README.

I am using Packer with the below:

-- Ollama, a local, offline GPT
-- Minimal configuration
use {
	"David-Kunz/gen.nvim",
	-- cmd = "Gen",
	config = function()
		require('gen').setup {
			model='codellama',
			show_model=true,
			init = function(options) pcall(io.popen, "ollama serve > /dev/null 2>&1 &") end,
			command = "curl --silent --no-buffer -X POST http://localhost:11434/api/generate -d $body",
		}
	end
}

When I try to prompt via :Gen Ask or any of the other prompts, it appears that the LLM has no context about my current buffer.

This is very likely a case of PEBCAK, but I wanted to check: how can one diagnose the text given to the LLM?
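
One way to inspect what is actually sent is the debug option from the README snippets quoted earlier on this page; it prints the command that is run, including the substituted prompt. A sketch:

require('gen').setup({
    model = 'codellama',
    show_model = true,
    debug = true, -- prints errors and the full command, so the prompt text sent to the model is visible
})

Note that the documented placeholders are $text (the visual selection), $filetype and similar, so what the model sees depends on the prompt template rather than the whole buffer by default.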

Receiving error from ollama server after first successful prompt and response

Hi David,

I've started seeing an issue when using gen.nvim in the last couple of days. When I make a first request to the ollama server using the plugin, I get a response as expected. However, when I make follow-up requests, I'm not able to get any further responses from the ollama server. If I close the gen.nvim buffer and ask something again, I will get another response back, but the context of the conversation has been lost. These follow-up responses contain an error from what I think is the ollama server. Here is a sample error response:

Response data:
{ '{"error":"json: cannot unmarshal string into Go struct field ChatRequest.messages of type api.Message"}' }

I'll paste an entire example set of requests and responses here, in the hope that it's helpful. In the example, I ask codellama to generate a small C++ function, which it does. I then ask it to explain the code to me, and that's when the error occurs. Here is the debug output:

curl --silent --no-buffer -X POST http://localhost:11434/api/chat -d '{"messages": [{"role": "user", "content": "Please write a C++ function to print the prime numbers from 1 to 100. Do not explain it to me"}], "model": "codella
ma:34b-instruct", "stream": true}'
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:35.009173699Z","message":{"role":"assistant","content":"```"},"done":false}', '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:35.047441015Z","message"
:{"role":"assistant","content":"\\n"},"done":false}', '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:35.085628159Z","message":{"role":"assistant","content":"#"},"done":false}', '{"model":"codellama:34b-instruct","crea
ted_at":"2024-03-05T17:05:35.123695404Z","message":{"role":"assistant","content":"include"},"done":false}', '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:35.198421919Z","message":{"role":"assistant","content":" \\u00
3ciostream"},"done":false}', '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:35.235195017Z","message":{"role":"assistant","content":"\\u003e"},"done":false}', '{"model":"codellama:34b-instruct","created_at":"2024-03-05
T17:05:35.27201077Z","message":{"role":"assistant","content":"\\n"},"done":false}', '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:35.308843027Z","message":{"role":"assistant","content":"\\n"},"done":false}', '{"model
":"codellama:34b-instruct","created_at":"2024-03-05T17:05:35.345662982Z","message":{"role":"assistant","content":"void"},"done":false}', '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:35.382433902Z","message":{"role":
"assistant","content":" print"},"done":false}', '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:35.419245195Z","message":{"role":"assistant","content":"Pr"},"done":false}', '{"model":"codellama:34b-instruct","created_a
t":"2024-03-05T17:05:35.456065413Z","message":{"role":"assistant","content":"ime"},"done":false}', '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:35.492872028Z","message":{"role":"assistant","content":"Numbers"},"done
":false}', '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:35.529735857Z","message":{"role":"assistant","content":"("},"done":false}', '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:35.566470304Z","m
essage":{"role":"assistant","content":"int"},"done":false}', '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:35.603336974Z","message":{"role":"assistant","content":" start"},"done":false}', '{"model":"codellama:34b-ins
truct","created_at":"2024-03-05T17:05:35.640141819Z","message":{"role":"assistant","content":","},"done":false}', '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:35.676956973Z","message":{"role":"assistant","content":"
 int"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:35.713909021Z","message":{"role":"assistant","content":" end"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:35.750783326Z","message":{"role":"assistant","content":")"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:35.787617617Z","message":{"role":"assistant","content":" {"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:35.82453608Z","message":{"role":"assistant","content":"\\n"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:35.86146572Z","message":{"role":"assistant","content":"   "},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:35.898424389Z","message":{"role":"assistant","content":" for"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:35.935363189Z","message":{"role":"assistant","content":" ("},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:35.972173152Z","message":{"role":"assistant","content":"int"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:36.009080314Z","message":{"role":"assistant","content":" i"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:36.045978564Z","message":{"role":"assistant","content":" ="},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:36.08294064Z","message":{"role":"assistant","content":" start"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:36.119884015Z","message":{"role":"assistant","content":";"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:36.156846022Z","message":{"role":"assistant","content":" i"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:36.193699726Z","message":{"role":"assistant","content":" \\u003c="},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:36.232027162Z","message":{"role":"assistant","content":" end"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:36.272238082Z","message":{"role":"assistant","content":";"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:36.309580377Z","message":{"role":"assistant","content":" i"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:36.34643344Z","message":{"role":"assistant","content":"++)"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:36.383336973Z","message":{"role":"assistant","content":" {"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:36.420216072Z","message":{"role":"assistant","content":"\\n"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:36.457033493Z","message":{"role":"assistant","content":"       "},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:36.493881685Z","message":{"role":"assistant","content":" if"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:36.530788352Z","message":{"role":"assistant","content":" ("},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:36.567643942Z","message":{"role":"assistant","content":"i"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:36.604589178Z","message":{"role":"assistant","content":" %"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:36.641386678Z","message":{"role":"assistant","content":" "},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:36.678248218Z","message":{"role":"assistant","content":"2"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:36.71516269Z","message":{"role":"assistant","content":" =="},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:36.75203016Z","message":{"role":"assistant","content":" "},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:36.788871802Z","message":{"role":"assistant","content":"0"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:36.825754667Z","message":{"role":"assistant","content":" ||"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:36.862601768Z","message":{"role":"assistant","content":" i"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:36.89951774Z","message":{"role":"assistant","content":" %"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:36.936406116Z","message":{"role":"assistant","content":" "},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:36.973298694Z","message":{"role":"assistant","content":"3"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:37.010291791Z","message":{"role":"assistant","content":" =="},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:37.047143467Z","message":{"role":"assistant","content":" "},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:37.08410792Z","message":{"role":"assistant","content":"0"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:37.121002733Z","message":{"role":"assistant","content":" ||"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:37.157942703Z","message":{"role":"assistant","content":" i"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:37.194845011Z","message":{"role":"assistant","content":" %"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:37.231778067Z","message":{"role":"assistant","content":" "},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:37.268652855Z","message":{"role":"assistant","content":"5"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:37.305495306Z","message":{"role":"assistant","content":" =="},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:37.342430314Z","message":{"role":"assistant","content":" "},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:37.379275735Z","message":{"role":"assistant","content":"0"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:37.416151905Z","message":{"role":"assistant","content":" ||"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:37.453024455Z","message":{"role":"assistant","content":" i"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:37.489919007Z","message":{"role":"assistant","content":" %"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:37.526748953Z","message":{"role":"assistant","content":" "},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:37.563510857Z","message":{"role":"assistant","content":"7"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:37.600391822Z","message":{"role":"assistant","content":" =="},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:37.637302744Z","message":{"role":"assistant","content":" "},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:37.67418099Z","message":{"role":"assistant","content":"0"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:37.711052939Z","message":{"role":"assistant","content":")"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:37.747898345Z","message":{"role":"assistant","content":" {"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:37.784810681Z","message":{"role":"assistant","content":"\\n"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:37.821686724Z","message":{"role":"assistant","content":"           "},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:37.858459102Z","message":{"role":"assistant","content":" std"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:37.895354448Z","message":{"role":"assistant","content":"::"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:37.932200666Z","message":{"role":"assistant","content":"cout"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:38.005820321Z","message":{"role":"assistant","content":" \\u003c\\u003c i"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:38.07975631Z","message":{"role":"assistant","content":" \\u003c\\u003c \\""},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:38.116880041Z","message":{"role":"assistant","content":" \\";"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:38.154014465Z","message":{"role":"assistant","content":"\\n"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:38.191079665Z","message":{"role":"assistant","content":"       "},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:38.228262956Z","message":{"role":"assistant","content":" }"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:38.265302891Z","message":{"role":"assistant","content":"\\n"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:38.302430778Z","message":{"role":"assistant","content":"   "},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:38.339522085Z","message":{"role":"assistant","content":" }"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:38.376677551Z","message":{"role":"assistant","content":"\\n"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:38.413720326Z","message":{"role":"assistant","content":"}"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:38.450877984Z","message":{"role":"assistant","content":"\\n"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:38.487964601Z","message":{"role":"assistant","content":"```"},"done":false}', "" }
Response data:
{ '{"model":"codellama:34b-instruct","created_at":"2024-03-05T17:05:38.525275496Z","message":{"role":"assistant","content":""},"done":true,"total_duration":3900933048,"load_duration":545483,"prompt_eval_count":28,"prompt_eval_duration":
383055000,"eval_count":96,"eval_duration":3515957000}', "" }
Response data:
{ "" }
curl --silent --no-buffer -X POST http://localhost:11434/api/chat -d '{"messages": ["```", "\n", "#", "include", " <iostream", ">", "\n", "\n", "void", " print", "Pr", "ime", "Numbers", "(", "int", " start", ",", " int", " end",
 ")", " {", "\n", "   ", " for", " (", "int", " i", " =", " start", ";", " i", " <=", " end", ";", " i", "++)", " {", "\n", "       ", " if", " (", "i", " ", " ", "2", " ==", " ", "0", " ||", " i", " ", " ", "3", " ==", " ", "0", " ||",
 " i", " ", " ", "5", " ==", " ", "0", " ||", " i", " ", " ", "7", " ==", " ", "0", ")", " {", "\n", "           ", " std", "::", "cout", " << i", " << \"", " \";", "\n", "       ", " }", "\n", "   ", " }", "\n", "}", "\n", "```", "", {
"role": "user", "content": "Now provide the explanation"}], "model": "codellama:34b-instruct", "stream": true}'
Response data:
{ '{"error":"json: cannot unmarshal string into Go struct field ChatRequest.messages of type api.Message"}' }
Response data:
{ "" }

The logs from the ollama server for those two requests are as follows:

[GIN] 2024/03/05 - 17:05:38 | 200 |  3.901055037s |     127.0.0.1 | POST     "/api/chat"
[GIN] 2024/03/05 - 17:05:59 | 400 |     213.546µs |     127.0.0.1 | POST     "/api/chat"

When I run that last curl command from my shell, I see the same cannot unmarshal... error as a response. I'm unfortunately not a JSON, Go, or Web expert, but I wonder if part of the message isn't getting escaped properly or something like that?

Option to enable word wrap?

Hi and thanks for this excellent plugin!

I noticed that the output of e.g. :Gen Summarize can produce one very long line:

image

If I manually enable word wrap, it becomes much more readable:

image

I think I would prefer to have this word wrapping "always on". Do you think it's possible to add an option to enable this?

-- lazy.nvim example
{
  "David-Kunz/gen.nvim",
  config = function()
    require("gen").setup({
      word_wrap = true
    })
  end,
},

[feature request] prompt-answer history

Sometimes I just want to go back in the history for each prompt I've asked in order to refetch whatever response the model has come up with.

An example use case for me was fixing a Makefile, where I asked for a suggestion about a specific thing. I copied what I needed but ended up having to re-ask in order to fetch the rest. That seemed like unnecessary waiting and compute.

Gen.nvim doesn't work

Hi, when I try to use gen, the plugin doesn't generate anything:
image

I have installed ollama and mistral:instruct

image

I am using lazy.nvim to install it and this is my config for gen.nvim:

return {
	"David-Kunz/gen.nvim",
	config = function()
		require("gen").prompts["DevOps me!"] = {
			prompt = "You Are a senior devops engineer, acting as an assistant. You offer help with cloud technologies like: Terraform, AWS, Kubernetes, Python, Azure DevOps yaml pipelines. You answer with code examples when possible. $input:\n$text",
			replace = true,
		}

		vim.keymap.set("v", "<leader>]", ":Gen<CR>")
		vim.keymap.set("n", "<leader>]", ":Gen<CR>")
	end,
}

I am using a MacBook Pro M2 with Ventura 13.6.

problem with rust

Hi,

I tried this plugin on Go code, and it worked there, but when I try it on some Rust code, I get this:

Error executing vim.schedule lua callback: /Users/jos/.local/share/nvim/lazy/gen.nvim/lua/gen/init.lua:99: invalid capture index
stack traceback:
[C]: in function 'gsub'
/Users/jos/.local/share/nvim/lazy/gen.nvim/lua/gen/init.lua:99: in function 'substitute_placeholders'
/Users/jos/.local/share/nvim/lazy/gen.nvim/lua/gen/init.lua:113: in function 'exec'
/Users/jos/.local/share/nvim/lazy/gen.nvim/lua/gen/init.lua:200: in function 'cb'
/Users/jos/.local/share/nvim/lazy/gen.nvim/lua/gen/init.lua:178: in function 'on_choice'
...are/nvim/lazy/dressing.nvim/lua/dressing/select/init.lua:78: in function ''
vim/_editor.lua: in function <vim/_editor.lua:0>
Press ENTER or type command to continue

On other Rust snippets it did work.

replace = true not replacing

I created a custom prompt with replace = true but it just adds the result at the spot of the cursor without removing the selected part.

require('gen').prompts['Fix_Err'] = {
    replace = true,
    extract = "```$filetype\n(.-)```",
}

vim.keymap.set({'v', "n"}, '<leader>sf', function()
    local lsp = vim.lsp
    local cursor = vim.fn.getcurpos()
    local line_number = cursor[2] - 1
    local diagnostics = vim.diagnostic.get(0, { lnum = line_number })

    local lines = vim.api.nvim_buf_get_lines(0, 0, -1, false)
    local filecontents = table.concat(lines, "\n")
    local line = tostring(line_number)
    local error = diagnostics[1].message

    require('gen').prompts['Fix_Err'].prompt = "This is my code: \n\n```$filetype\n" .. filecontents .. "```\n\n"
        .. "Tell me the replacement for line " .. line 
        .. " that fixes the error \"" .. error 
        .. "\" in format: ```$filetype\n...\n``` without any other text."
    vim.api.nvim_command('Gen Fix_Err')
end)

It usually gets the code that would fix the error but it does not remove the selected part.

Am I doing something obviously wrong? I am new to vim.

error in readme

The following code from the README:

table.insert(require('gen').prompts, {
    Elaborate_Text = {
        prompt = "Elaborate the following text:\n$text",
        replace = true
    },
    Fix_Code = {
        prompt = "Fix the following code. Only ouput the result in format ```$filetype\n...\n```:\n```$filetype\n$text\n```",
        replace = true,
        extract = "```$filetype\n(.-)```"
    }
})

will give the wrong structure in the prompts table, like this:
prompts = { old_prompt = {}, old_prompt2 = {}, { inserted_prompt1 = {}, inserted_prompt2 = {} } }
which then gives an error when running :Gen.
What worked for me was:
require('gen').prompts['my custom prompt'] = { ... }
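
A sketch of the same two prompts from the README added with the working pattern (direct assignment into require('gen').prompts, as described above):

require('gen').prompts['Elaborate_Text'] = {
    prompt = "Elaborate the following text:\n$text",
    replace = true
}
require('gen').prompts['Fix_Code'] = {
    prompt = "Fix the following code. Only output the result in format ```$filetype\n...\n```:\n```$filetype\n$text\n```",
    replace = true,
    extract = "```$filetype\n(.-)```"
}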

Text outside the floating windows

Hi, I'm new to Neovim and I am currently using the Gen plugin. Recently, I noticed an issue when using the AI (Mistral) inside Neovim: when the answer provided by the AI is long and arrives as a single line, it goes beyond the boundaries of the floating window. Does anyone have any advice on how to resolve this issue?

Additionally, I was wondering if it is possible to customize the size and position of the floating windows.

2023-12-15_19-11

Telescope integration

Hey,

first of all thanks for the awesome plugin! It's really fun to tinker around with ollama within neovim.

I've created an extension for telescope to select the prompt with fuzzy search. Are you interested in a pull request that adds the extension directly to gen.nvim?

Here's the repo for the extension: https://github.com/dj95/telescope-gen.nvim

And here's a short demo.

Screen.Recording.2023-11-19.at.12.52.47.mov
