Comments (6)
From the readme it wasn't clear to me that the behavior for no selection is already defined.
Yes, this is intentional since the behavior with no selection is not finalized yet.
The previous part of the buffer up to point is often too much context, and irrelevant.
I agree generally, but it depends on how you're using ChatGPT here. With the current behavior you can have a continuous conversation in any buffer, not just a dedicated one created with M-x gptel. I haven't found the right tradeoff yet. I'll try experimenting with this behavior for a bit.
from gptel.
@tmalsburg Please see the wiki for options. It should be easy to get this behavior using gptel-request. I don't think I'll add sending the current line as the default; it doesn't fit with the rest of the design.
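For illustration, a minimal sketch of a send-current-line command built on gptel-request, in the spirit of the wiki's examples. The command name my/gptel-send-line is made up for this example, and it assumes the :callback receives the response string plus an info plist containing :buffer and :position, as in gptel's documented callback convention:

```emacs-lisp
(defun my/gptel-send-line ()
  "Send only the current line to the LLM and insert the reply below it."
  (interactive)
  (gptel-request
   ;; Use just the current logical line as the prompt.
   (buffer-substring-no-properties (line-beginning-position)
                                   (line-end-position))
   :callback
   (lambda (response info)
     (if (stringp response)
         ;; Insert the response on a fresh line below the prompt line.
         (with-current-buffer (plist-get info :buffer)
           (save-excursion
             (goto-char (plist-get info :position))
             (end-of-line)
             (insert "\n\n" response)))
       (message "gptel request failed: %s" (plist-get info :status))))))
```

Since line-beginning-position and line-end-position operate on logical lines, under visual-line-mode this would send the whole wrapped paragraph, which matches the use case described later in the thread.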
from gptel.
I'm assuming you mean this in non-dedicated buffers, for example in a buffer with some code?
If nothing is selected, it already uses the contents of the buffer up to (point) as the prompt. This includes identifying past responses as responses. The readme suggests selecting a region first to limit the text that's used as the prompt, since you may not want to send everything from the beginning of the buffer to the cursor position.
However I don't see the point of sending just the current line by default. Could you explain the advantage of this behavior?
from gptel.
Thanks for the response. From the readme it wasn't clear to me that the behavior for no selection is already defined.
I'm assuming you mean this in non-dedicated buffers, for example in a buffer with some code?
Yes, sorry, I should have added more detail.
However I don't see the point of sending just the current line by default. Could you explain the advantage of this behavior?
The previous part of the buffer up to point is often too much context, and irrelevant. For my personal use cases the current line is often enough as a prompt. But I should say that I'm not primarily using it for code but for text. When writing text, I use visual-line-mode, and a buffer line can be a whole paragraph.
By the way, I noticed that sometimes existing text is deleted when GPT responses are inserted in the buffer. But it seemed a bit random and I wanted to get a better understanding of what's going on before creating an issue for that.
from gptel.
By the way, I noticed that sometimes existing text is deleted when GPT responses are inserted in the buffer. But it seemed a bit random and I wanted to get a better understanding of what's going on before creating an issue for that.
If this is the case, it's a bug. The desired behavior is to always insert the response below the current line.
Does it happen in dedicated gptel buffers, or in other buffers when you select a region, or both?
from gptel.
Does it happen in dedicated gptel buffers, or in other buffers when you select a region, or both?
I actually haven't used the dedicated buffer yet, so I don't know whether it happens there. But it does happen in other buffers. The buffer in which it happened was an org mode buffer and the GPT response replaced a subsequent org section header. If I find time I will try to come up with a reproducible example and create another issue.
from gptel.