Comments (9)
> Which one do you think should be the default?

I don't care much, but personally I would for sure disable the header line.

> The reason I picked the header-line is because there are too many mode-line packages out there that transform/hide mode-line elements and minor-mode lighter info.

I agree. In my setup I only use minions and have made a few simple adjustments to the mode line myself. Besides that, my mode line is mostly vanilla.

> Since the header-line is only intended for dedicated gptel buffers, it seemed like a safe bet.

Indeed, it is. The `mode-line-process` variable should be a reasonably safe alternative.

from gptel.
@minad Streaming responses is now added, and is turned on by default.
Done, finally! Set `gptel-use-header-line` to `nil` for less obtrusive status display/messaging when using `gptel-mode`.
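In a minimal sketch, this is the only setting involved (assuming gptel is already installed):

```emacs-lisp
;; In your init file: disable the header-line status display in gptel
;; buffers in favor of mode-line/echo-area messaging.
(setq gptel-use-header-line nil)
```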
Sure, the header-line is just a stopgap solution. I'm sure there are better alternatives.

To get more progress updates, there are a couple of possibilities:

- The API has a stream option that sends chunks of data. It is possible to write a process filter that updates the buffer live as these come in, like in the browser. This is the correct solution, but I've been working on other aspects. (PRs are welcome!)
- There is a `gptel-playback` option in the package that plays back data in chunks. This doesn't save you any time: it waits until it receives the full response, then plays it back continuously, so it's actually slower! It's silly, but it does make the process feel somewhat interactive.
- Another cheap trick I've tried is to `pulse-momentary-highlight-region` when the response comes in. It's still slow, but like the previous option it feels a little more interactive.
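The process-filter route in the first option could be sketched roughly like this (`my/stream-filter` and the wiring are made up for illustration; the real integration into gptel's request code would differ):

```emacs-lisp
;; Rough sketch of live streaming: a process filter that appends each
;; chunk to the process buffer as soon as it arrives.
(defun my/stream-filter (proc chunk)
  "Insert CHUNK into PROC's buffer at the process mark."
  (when (buffer-live-p (process-buffer proc))
    (with-current-buffer (process-buffer proc)
      (save-excursion
        (goto-char (process-mark proc))
        (insert chunk)
        (set-marker (process-mark proc) (point))))))

;; Attach it to the network process handling the API request:
;; (set-process-filter my-proc #'my/stream-filter)
```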
> The API has a stream option that sends chunks of data. It is possible to write a process filter that updates the buffer live as these come in, like in the browser. This is the correct solution, but I've been working on other aspects. (PRs are welcome!)

This is the approach I favor. I am happy to help out if I find time, but I just started experimenting with this. The overall approach of your package seems good, with the list of prompts in a plain text document, but I haven't checked out the other solutions.
Would you accept a quick PR which replaces the header-line with a mode-line indicator? I would add a `gptel-status-function`, which could be set either to a function for the header-line or to one for the mode-line.
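The shape I have in mind is something like this (a sketch only; none of these names exist in gptel yet):

```emacs-lisp
;; Sketch of the proposed option: a single function that receives the
;; status string and decides where (and whether) to display it.
(defcustom gptel-status-function #'gptel-status-in-mode-line
  "Function called with gptel's status string, or nil to clear it."
  :type 'function
  :group 'gptel)

(defun gptel-status-in-mode-line (msg)
  "Show MSG after the buffer name via `mode-line-process'."
  (setq mode-line-process (and msg (concat " " msg))))
```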
Sure, that would be a welcome addition! Which one do you think should be the default?
The reason I picked the header-line is because there are too many mode-line packages out there that transform/hide mode-line elements and minor-mode lighter info. `doom-modeline` is a popular example of this -- no Doom users would see any status messages until someone adds explicit support for `doom-modeline`. This makes my job of reporting errors/readiness etc. harder.
Since the header-line is only intended for dedicated gptel buffers, it seemed like a safe bet.
> Indeed, it is. The `mode-line-process` variable should be a reasonably safe alternative.
Sounds good to me.
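For illustration, driving `mode-line-process` directly could look like this (`my/show-gptel-status` is a hypothetical helper, not part of gptel):

```emacs-lisp
;; `mode-line-process' is buffer-local when set and is displayed right
;; after the buffer name, so most mode-line packages keep it visible.
(defun my/show-gptel-status (msg)
  "Show MSG (a string, or nil to clear) in this buffer's mode line."
  (setq mode-line-process (and msg (concat " " msg)))
  (force-mode-line-update))

;; (my/show-gptel-status "Waiting...")  ; while a request is in flight
;; (my/show-gptel-status nil)           ; once the response arrives
```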
> This is the approach I favor. I am happy to help out if I find time, but I just started experimenting with this. The overall approach of your package seems good with the list of prompts in a plain text document, but I haven't checked out the other solutions.
@minad I added continuous response streaming -- it's now much faster (and continues to be async):
stream-demo.mp4
I haven't merged it yet (it's in the `streaming` branch) since the markdown -> Org conversion from an incomplete parse state is proving tricky.