gptel's People

Contributors

akirak, alessandrow, bbigras, cashpw, codeasone, daedsidog, dbactual, fbergroth, fmguerreiro, joaotavora, karimaziev, karthink, marcuskammer, martenlienen, minad, moritzschaefer, mrdylanyin, neilfulwiler, nicehiro, nickanderson, palacechan, r0man, ridaayed, schroedi, tmr08c, tshu-w, yantar92

gptel's Issues

How do I move the cursor to the new prompt after a query response is returned?

Is there a hook variable I can add a hook to, to move the cursor to the new prompt?

I use evil-mode, so after submitting a query I just wait until the response is finished because I've found that pressing escape to enter visual mode cancels the request. After I get a response, I'd like to automatically focus on the next new prompt.
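
Something like this is what I'm imagining, assuming gptel ran a hook such as gptel-post-response-hook after inserting the response (I'm making that name up; it's the hook I'm asking about):

;; Hypothetical: gptel-post-response-hook would run in the chat buffer
;; after the response and the fresh prompt have been inserted.
(add-hook 'gptel-post-response-hook
          (lambda () (goto-char (point-max))))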

Thank you for this package, I've tried more than a few of these clients and this is my favorite by far!

mode-line-process instead of header-line-format

It would be great if we could use different indicators, e.g., the mode line instead of the header line, for process updates via a configurable gptel-status-function. I just started experimenting with your package, and sometimes it takes a long time for ChatGPT to answer. Would it be possible to give more progress updates? In the browser, for example, the answer comes in "live".
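
Concretely, something like this sketch is what I have in mind; gptel-status-function doesn't exist yet, the idea being that gptel would funcall it with the current status string (or nil when idle):

(setq gptel-status-function
      (lambda (msg)
        ;; Show query progress in the mode line instead of the header line.
        (setq mode-line-process (and msg (concat " " msg)))
        (force-mode-line-update)))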

Problem due to transient

Merging transient already wasn't ideal. I use gptel on Emacs 29 and I get the following error:

Error: void-function (transient-prefix)
  mapbacktrace(#f(compiled-function (evald func args flags) #<bytecode 0x1421f3accfae8684>))
  debug-early-backtrace()
  debug-early(error (void-function transient-prefix))
  transient-prefix(:command gptel-send-menu)

Couldn't you avoid relying on the newest features of transient? It would be great if this worked out of the box with the transient version bundled with Emacs 29 (or, even better, the one from Emacs 28).

Is there another way to access openai api?

My company has blocked the OpenAI API, so I can't use this package or any other package like it.

But I can deploy a bot with my own API on Telegram, and Emacs has telega, so I can chat from Emacs.

Org support

I saw that you are considering adding this. What are the obstacles? Could you use some generic regexp-based machinery to support different file types? You may also need a converter for the responses, which I suspect are in markdown.
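
For the converter, I'm imagining something along these lines (a rough, untested sketch of mine that only handles code fences):

(defun my/markdown-code-fences-to-org (text)
  "Naively convert markdown ``` fences in TEXT to Org source blocks.
A fence with a language opens a block; a bare fence closes one."
  (with-temp-buffer
    (insert text)
    (goto-char (point-min))
    (while (re-search-forward "^```\\(.*\\)$" nil t)
      (replace-match (if (string= (match-string 1) "")
                         "#+end_src"
                       (concat "#+begin_src " (match-string 1)))
                     t t))
    (buffer-string)))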

Query can be restarted multiple times

When pressing C-c RET on a headline multiple times, it seems possible to restart the query, which does not seem intended. It would be better if gptel printed an error message when a query is already running and hasn't completed yet.

Include context by header instead of "number of responses"

Thanks for an awesome tool.

It would be neat if the context could be controlled by heading, so you can have a conversation like this (in org-mode):

* Topic A
** Question 1
Response
** Question 2
Response
* Topic B
** Question 1

And new questions in either topic would fetch everything up to the topic heading as context. I think this is what currently happens if I mark that region myself and set the number of responses to nil? I'm not sure, but in any case it would be a nice default behavior (maybe also set the default prompt prefix to * instead of ***).
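
As a sketch of what I mean (my own untested code), the context for a new question could be gathered like this:

(require 'org)

(defun my/gptel-topic-context ()
  "Return the buffer text from the enclosing top-level heading up to point."
  (save-excursion
    (let ((end (point)))
      (org-back-to-heading t)
      ;; Climb from the question heading up to the enclosing topic heading.
      (while (org-up-heading-safe))
      (buffer-substring-no-properties (point) end))))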

gptel-send region in read-only buffer or text

I've added the following code to my init script to have gptel-send write the answer to the end of a buffer *gptel-answer* when the selected region is in a read-only buffer, or when the :position points to read-only text. The current behavior is to error out and leave the curl answer buffer behind.

  (defun vh/gptel-insert-response-advice (old-function response info &rest args)
    "Redirect RESPONSE to the *gptel-answer* buffer when the target is read-only."
    (let ((gptel-buffer (plist-get info :buffer)))
      (with-current-buffer gptel-buffer
        ;; If the destination buffer, or the text at the insertion point,
        ;; is read-only, retarget INFO at the end of *gptel-answer*.
        (when (or buffer-read-only
                  (get-text-property (plist-get info :position) 'read-only))
          (let ((answer-buffer (get-buffer-create "*gptel-answer*")))
            (setq info (plist-put info :buffer answer-buffer))
            (setq info (plist-put info :position
                                  (with-current-buffer answer-buffer
                                    (point-max))))
            (unless (get-buffer-window answer-buffer 'visible)
              (pop-to-buffer answer-buffer))))
        (apply old-function response info args)
        (when buffer-read-only
          (pop-to-buffer "*gptel-answer*")))))
  (advice-add #'gptel--insert-response :around #'vh/gptel-insert-response-advice)
  (advice-add #'gptel-curl--stream-insert-response :around #'vh/gptel-insert-response-advice)

You might consider using this approach or something along these lines in your code.

error in process sentinel

I set up gptel with (straight-use-package 'gptel) and tried to run it in Emacs. It seems like I have successfully submitted the API key, but it gives me an error:

Send your query with C-c RET!
Querying ChatGPT...
error in process sentinel: gptel-curl--parse-response: Search failed: "some abc123 thing"
error in process sentinel: Search failed: "some abc123 thing"

I am able to use GPT-4 on the OpenAI website and can also use the OpenAI API from R.

Any direction for solving this problem would be very helpful.
Thanks.

How do you tweak the model parameters?

It looks like (at least at the moment) it's sending requests with:

"You are a large language model living in Emacs and a helpful assistant. Respond concisely.",

But what if I don't want it to respond concisely? How can I configure it so that the responses in Emacs are virtually the same as in the browser?

I can see that you added gptel--system-message-alist, but it doesn't seem to be used. Also, the model parameter variables seem to be defined as buffer-locals, making them difficult to override. Are you planning to make things more configurable?
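
For instance, I'd like to be able to do something like this (gptel--system-message looks internal, so I'm not sure it's supported):

;; Drop the "Respond concisely" part of the default directive.
;; Caveat: gptel--system-message is an internal variable and may change.
(setq-default gptel--system-message
              "You are a large language model living in Emacs and a helpful assistant.")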

Request failure when prompt has special characters, like accented characters or cedilla

Here's what I get in the *Messages* buffer:

Querying ChatGPT...
error in process sentinel: url-http-create-request: Multibyte text in HTTP request: POST /v1/chat/completions HTTP/1.1
MIME-Version: 1.0
Connection: keep-alive
Extension: Security/Digest Security/SSL
Host: api.openai.com
Accept-encoding: gzip
Accept: */*
User-Agent: URL/Emacs Emacs/27.1 (X11; x86_64-pc-linux-gnu)
Content-Type: application/json
Authorization: Bearer [token]
Content-length: 104

{"model":"gpt-3.5-turbo","messages":[{"role":"user","content":"qual é o melhor editor, vim ou emacs?"}]}
error in process sentinel: Multibyte text in HTTP request: POST /v1/chat/completions HTTP/1.1

Thanks for sharing the package.

Incorrect Model Requested

I got requests working, and decided to ask GPT who it was:

*** Are you GPT4?

As of my current programming, there is no such thing as GPT-4 yet. The most advanced version of OpenAI's GPT (Generative Pretrained Transformer) series is currently GPT-3, which I am built upon.

This is despite having gptel-model set to "gpt-4". I debugged the request and got:

HTTP/2 200 
date: Thu, 13 Apr 2023 04:36:44 GMT
content-type: text/event-stream
access-control-allow-origin: *
cache-control: no-cache, must-revalidate
openai-model: gpt-3.5-turbo-0301
openai-organization: user-qjkgvih2ozde13ryzoiwm7t3
openai-processing-ms: 348
openai-version: 2020-10-01
strict-transport-security: max-age=15724800; includeSubDomains
x-ratelimit-limit-requests: 3500
x-ratelimit-remaining-requests: 3499
x-ratelimit-reset-requests: 17ms
x-request-id: 20bc2c2c51b1dd2d503035a274b95064
cf-cache-status: DYNAMIC
server: cloudflare
cf-ray: 7b71027dc802e005-NRT
alt-svc: h3=":443"; ma=86400, h3-29=":443"; ma=86400

Notice the openai-model field.

Show error message when no connection can be established

Thanks for the great package!

It would be very helpful if we could see the error messages in cases where the connection did not work. For example, when using curl we get a helpful hint:

 curl https://api.openai.com/v1/chat/completions \
   -H "Content-Type: application/json" \
   -H "Authorization: Bearer sk-K6F..." \
   -d '{
     "model": "gpt-3.5-turbo",
     "messages": [{"role": "user", "content": "Say this is a test!"}],
     "temperature": 0.7
   }'

leads to:

{
    "error": {
        "message": "You exceeded your current quota, please check your plan and billing details.",
        "type": "insufficient_quota",
        "param": null,
        "code": null
    }
}

Currently in Emacs it only shows *ChatGPT* Response Error: finished.

"Set directives for chat" failes with error

Hi, great package.

When I run M-x gptel-menu and press h, the following error is thrown and no menu is shown:

    transient-setup: Suffix transient:gptel-system-prompt:Programming is not defined or autoloaded as a command

I installed from MELPA:
Version: 20230413.602
Commit: a5be53d

I call gptel-menu from a normal markdown buffer.

The weird thing is that when I install gptel freshly via list-packages, it works. Only when I restart Emacs and load gptel via require or use-package does it not work.

Here is my use-package code:

  (use-package gptel
    :ensure t
    :config
    (setq gptel-api-key
          (lambda ()
            (plist-get (nth 0 (auth-source-search :Title "Chat GPT"))
                       :API\ Key))))

Similar to #27?

Response error nil

After some conversation, ChatGPT just stopped abruptly:

Moreover, the probabilistic nature of quantum mechanics is often at odds with our classical intuition, and the emergence of a classical world from the underlying quantum mechanics is still not fully understood. The decoherence approach provides a valuable framework for understanding this emergence, but it

Now I am getting "Response error nil" in the header line, and it does not seem possible to recover. Did you observe this issue too? If we get #7, it should be possible to just restart.

Conversation feature and GPT-4 support.

I tried to ask questions based on the preceding context, but it seems that the replies from gptel don't use past exchanges.

Any tips for enabling the conversation feature when using this package?

Regards,
Zhao

gptel-prompt-string should be "* " if gptel uses org-mode

I've set (setq gptel-default-mode 'org-mode). Prompts are in org-mode, but they still begin with '### ', which indicates a markdown heading. It would be better if they began with '* ' or '** ' etc., so that I can collapse and navigate the prompts with the standard org-mode commands.

(Admittedly, this is easy to fix manually with (setq gptel-prompt-string "* ").)

error in process sentinel: Multibyte text in HTTP request: POST /v1/chat/completions HTTP/1.1...

Hi,
It's been working great for me lately, but I just got the error below.
I've fed it an abstract and asked for a summary. Maybe it was too long?

Thanks!

{"model":"gpt-3.5-turbo","messages":[{"role":"system","content":"You are a large language model living in Emacs and a helpful assistant. Respond concisely."},{"role":"user","content":"summarize the following abstract: Sounds can arise from the environment and also predictably from many of our own movements, such as vocalizing, walking, or playing music. The capacity to anticipate these movement-related (reafferent) sounds and distinguish them from environmental sounds is essential for normal hearing1,2, but the neural circuits that learn to anticipate the often arbitrary and changeable sounds that result from our movements remain largely unknown. Here we developed an acoustic virtual reality (aVR) system in which a mouse learned to associate a novel sound with its locomotor movements, allowing us to identify the neural circuit mechanisms that learn to suppress reafferent sounds and to probe the behavioural consequences of this predictable sensorimotor experience. We found that aVR experience gradually and selectively suppressed auditory cortical responses to the reafferent frequency, in part by strengthening motor cortical activation of auditory cortical inhibitory neurons that respond to the reafferent tone. This plasticity is behaviourally adaptive, as aVR-experienced mice showed an enhanced ability to detect non-reafferent tones during movement. Together, these findings describe a dynamic sensory filter that involves motor cortical inputs to the auditory cortex that can be shaped by experience to selectively suppress the predictable acoustic consequences of movement."},{"role":"assistant","content":"The study explores the neural mechanisms that allow us to distinguish sounds that arise from our movements from those that come from the environment. They used mice in an acoustic virtual reality setting to identify the neural circuits responsible for suppressing movement-related sounds and the behavioral consequences of this experience. The findings suggest that motor cortical inputs to the auditory cortex shape a sensory filter that can suppress predictable acoustic consequences of movement and enhance auditory discrimination during movement."},{"role":"user","content":"what is this paper about? Sensory regions of the brain integrate environmental cues with copies of motor-related signals important for imminent and ongoing movements. In mammals, signals propagating from the motor cortex to the auditory cortex are thought to have a critical role in normal hearing and behaviour, yet the synaptic and circuit mechanisms by which these motor-related signals influence auditory cortical activity remain poorly understood. Using in vivo intracellular recordings in behaving mice, we find that excitatory neurons in the auditory cortex are suppressed before and during movement, owing in part to increased activity of local parvalbumin-positive interneurons. Electrophysiology and optogenetic gain- and loss-of-function experiments reveal that motor-related changes in auditory cortical dynamics are driven by a subset of neurons in the secondary motor cortex that innervate the auditory cortex and are active during movement. These findings provide a synaptic and circuit basis for the motor-related corollary discharge hypothesized to facilitate hearing and auditory-guided behaviours."},{"role":"assistant","content":"This paper investigates how motor-related signals from the motor cortex influence auditory cortical activity in mice, and the synaptic and circuit mechanisms underlying this interplay. 
The research findings suggest that a subset of neurons in the secondary motor cortex that innervate the auditory cortex and are active during movement drive motor-related changes in auditory cortical dynamics, likely facilitating hearing and auditory-guided behaviors. The study used in vivo intracellular recordings and optogenetics in mice."},{"role":"user","content":"what is this paper about? In the present study we investigated the neural code of sensory predictions. Grounded on a variety of empirical findings, we set out from the proposal that sensory predictions are coded via the top-down modulation of the sensory units whose response properties match the specific characteristics of the predicted stimulus (Albright, 2012; Arnal and Giraud, 2012). From this proposal, we derive the hypothesis that when the specific physical characteristics of the predicted stimulus cannot be advanced, the sensory system should not be able to formulate such predictions, as it would lack the means to represent them. In different conditions, participant's self-paced button presses predicted either only the precise time when a random sound would be presented (random sound condition) or both the timing and the"}],"temperature":1.0}
error in process sentinel: Multibyte text in HTTP request: POST /v1/chat/completions HTTP/1.1
MIME-Version: 1.0
Connection: keep-alive
Extension: Security/Digest Security/SSL
Host: api.openai.com
Accept-encoding: gzip
Accept: */*
User-Agent: URL/Emacs Emacs/28.1 (Windows-NT; 32bit; x86_64-w64-mingw32)
Content-Type: application/json
Authorization: Bearer sk-c---------------------------------------------pS
Content-length: 4777

Stateless design

Inspired by #7, I had the idea that it would be great if gptel used a "stateless" design. If I understood you correctly, this is also how the GPT API (or LLMs generally) works, since you have to send everything again on every request. More precisely, the idea is that gptel doesn't maintain any internal state and instead takes everything from the current buffer.

  • The last n headings (and the content below them, corresponding to the responses) are taken as the prompt.
  • The model parameters are stored in a :gpt: Org drawer and taken from there (see the sketch after this list). The drawer would only be present when you change the parameters from the defaults. For markdown or other text formats you could use a source block at the very beginning instead, similar to the front matter of static site generators.
  • Model parameter tweaking could still be done via the neat transient UI; the changed parameters would just have to be stored in the drawer. See also #6.
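
A sketch of what such a drawer could look like under a chat heading (the drawer keys are invented for illustration):

* My chat topic
:GPT:
model: gpt-3.5-turbo
temperature: 0.7
max-tokens: 500
:END: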

This change would give you restartable chats for free. Using gptel would then just mean turning on gptel-mode (a local minor mode) in an existing buffer.

authinfo, gpg, password store

After following the usage section I added this config:

           (setq auth-sources '("~/.authinfo"))
           (setq auth-source-pass-filename "~/.password-store")

But gptel still prompts me for an API key.

What do you think I am missing?
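
For reference, the entry format I'd expect gptel to look for is something like this (I'm guessing at the host name; YOUR-KEY is a placeholder):

machine api.openai.com login apikey password YOUR-KEY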

Add auth-source api function

Would it make sense to provide this function, or even to set gptel-api-key to it by default?

(defun gptel-apikey-from-auth-source ()
  "Return the OpenAI API key from the auth-source database."
  (funcall (plist-get (car (auth-source-search :host "openai.com" :user "apikey"))
                      :secret)))
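
One could then do the following, assuming gptel-api-key accepts a function:

(setq gptel-api-key #'gptel-apikey-from-auth-source)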

I get a wrong-type-argument Lisp error when querying ChatGPT

I installed the gptel package from MELPA. It's the latest version as far as I can tell (20230407.32).
I'm using it with a free OpenAI account, and I'm not sure if this package works with a free account.

When I send a query I get the following error:

Debugger entered--Lisp error: (wrong-type-argument integer-or-marker-p nil)
  make-overlay(38 nil)
  pulse-momentary-highlight-region(38 nil)
  gptel-curl--stream-cleanup(#<process gptel-curl> "finished\n")

In the message buffer I have this output:

gptel-curl--sentinel
Send your query with C-c RET!
Mark set
Querying ChatGPT...
HTTP/2 429: Could not parse HTTP response.
Entering debugger...
Mark set

I also evaluated the function in https://github.com/karthink/gptel/issues/10#issuecomment-1463617391 and got this output:

HTTP/2 429 
date: Sat, 08 Apr 2023 11:20:35 GMT
content-type: application/json; charset=utf-8
content-length: 206
vary: Origin
x-request-id: 1d3db88ff14cfd42943bbb25cbafa753
strict-transport-security: max-age=15724800; includeSubDomains
cf-cache-status: DYNAMIC
server: cloudflare
cf-ray: 7b4a1f350d0f194a-BCN
alt-svc: h3=":443"; ma=86400, h3-29=":443"; ma=86400

{
    "error": {
        "message": "You exceeded your current quota, please check your plan and billing details.",
        "type": "insufficient_quota",
        "param": null,
        "code": null
    }
}
(a0ea421e57ee8e0ec12316d692184b48 . 376)

I'm running Emacs 28.2 on Arch Linux. The curl version is 8.0.1.
Although it says my quota is exceeded, I couldn't make it work a single time. Also, I'm not sure if it's relevant, but when I check my keys on the OpenAI website I see that they were never used.

José.

Cannot set directive for a chat with `gptel-send`

When I try to set the directive for a chat with gptel-send (selecting h) I get the following error message:

transient-setup: Suffix transient:gptel-system-prompt:Programming is not defined or autoloaded as a command

Maybe I'm not using it in the intended way?

Usage of aio

I must admit I am a bit ideological about avoiding dependencies in Emacs packages, since they may break the package in the future. Does aio add much value, or could we remove it and just use callbacks? I noticed that skeeto (the author of aio) is no longer actively using Emacs, except for Elfeed, which is not an ideal maintenance situation. I am happy to help out here with PRs, of course; don't feel pressured to act on any of the issues I've opened.

Multi-byte char handling with url-retrieve is broken

The proposed solution for Emacs bug 23750 is to call encode-coding-string on the JSON request and send utf-8 text, but this does not seem to work. This bug and its workaround are also explained here.

(let ((url-request-method "POST")
      (url-request-extra-headers
       '(("Content-Type" . "application/json")))
      (url-request-data
       (encode-coding-string
        (json-encode
         '(:messages [(:role "user"
                       :content "qual é o melhor editor, vim ou emacs?")]))
        'utf-8)))
  (url-retrieve-synchronously "https://api.openai.com/v1/chat/completions"))

url-http-create-request still complains about the message length and message size (in bytes) not lining up.

Underscores `_` in responses are replaced by `/`

Great package, and I enjoy its out-of-the-box integration with the new preview system for Org-mode that you are developing.

This issue is just a small annoyance: the underscores _ in responses are replaced by /, and this messes up some of the LaTeX equation output:

[Screenshot of garbled LaTeX output omitted]

The model gpt-4 does not exist

Selecting the model gpt-4 and sending a prompt results in this message: ChatGPT error: (HTTP/2 404) The model: gpt-4 does not exist.

Selecting the model gpt-3.5-turbo sends the prompt and receives an answer from the API.

If it's a configuration issue on my end, I'm not sure what to do. I added this to my auth store (.authinfo):

machine openai.com login apikey password

and I have a paid OpenAI account as well as ChatGPT Plus.

GPT-4 is listed as a model in the OpenAI docs.

Aborting active query

Sometimes GPT goes running off in the wrong direction. I'd like to be able to abort it and rephrase my prompt, without waiting for it to finish chugging out junk. I'm imagining a gpt-abort or gpt-cancel command which cancels any queries in the current buffer, but I'm open to whatever.
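
As a rough sketch of what I mean (assuming the curl processes are named "gptel-curl"):

(defun my/gptel-abort ()
  "Kill all in-flight gptel curl processes."
  (interactive)
  (dolist (proc (process-list))
    ;; Assumes gptel's curl processes are named "gptel-curl".
    (when (string-prefix-p "gptel-curl" (process-name proc))
      (delete-process proc))))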

Can't get it to work

I'm using the latest --head Emacs:

GNU Emacs 30.0.50 (build 1, aarch64-apple-darwin22.3.0, NS appkit-2299.40 Version 13.2.1 (Build 22D68)) of 2023-03-06

I can't figure out why it's not working. The API key is set, gptel opens the buffer, and it says "Ready". I type something and press C-c RET, and it immediately fails with 'Response Error: nil'.

I'm not sure how to debug aio functions; Edebug doesn't work nicely with them. I'm not sure if it matters, but (featurep 'gptel-transient) always returns nil for me.

Line breaks

Another small issue is that gptel inserts the responses as long single lines. Do you use this package with visual-fill-column or would it make sense to reformat the answer with manual line breaks? Maybe this could break some formatting of code blocks or equations?
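
For now, soft-wrapping works for me; since gptel-mode is a minor mode, this hook should exist:

(add-hook 'gptel-mode-hook #'visual-line-mode)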

ChatGPT error (nil): Malformed JSON in response.

On Ubuntu 22.10, I updated my Emacs to the latest git master version, but found that gptel doesn't work with this version, as shown below:

[Screenshot of the error omitted]

Is this a bug in this package or in Emacs itself?

Regards,
Zhao

The suggested default key binding conflicts with AUCTeX.

I'm using AUCTeX, which already binds the following key, conflicting with the one suggested by this package:

C-c RET (translated from C-c <return>) runs the command
TeX-insert-macro (found in LaTeX-mode-map), which is an interactive
byte-compiled Lisp function in ‘tex.el’.

It is bound to C-c RET.

(TeX-insert-macro SYMBOL)

Insert TeX macro SYMBOL with completion.

AUCTeX knows of some macros and may query for extra arguments, depending on
the value of ‘TeX-insert-macro-default-style’ and whether ‘TeX-insert-macro’
is called with C-u.

Considering that AUCTeX is a very popular tool with a long history, I don't want to change its default key binding convention. Could you suggest an alternative key binding convention for your package?
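
For example, something like this in the user's config would sidestep the conflict, assuming gptel-mode-map is the relevant keymap:

;; Use C-c C-<return> for gptel-send, leaving AUCTeX's C-c RET alone.
(with-eval-after-load 'gptel
  (define-key gptel-mode-map (kbd "C-c C-<return>") #'gptel-send))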

Regards,
Zhao

Symbol’s function definition is void when running query

Hi,
I'm getting the following while running a query:

Querying ChatGPT...
gptel-curl--sentinel: Symbol’s function definition is void: nil

Even after turning on debug-on-error I'm not getting any stack trace, just this error.
Thanks!
