sigoden / aichat

3.6K 28.0 239.0 3.38 MB

All-in-one AI CLI tool featuring Chat-REPL, Shell Assistant, RAG, AI tools & agents, with access to OpenAI, Claude, Gemini, Ollama, Groq, and more.

License: Apache License 2.0

Rust 81.79% Shell 3.96% PowerShell 1.07% Nushell 0.51% HTML 12.68%
azure-openai claude gemini ollama openai vertexai chatbot all-in-one bedrock function-calling

aichat's People

Contributors

anthonymichaeltdm, arcuru, barakplasma, chiefmarlin, dtrckd, gauravkumar37, hkabig, jacksongoode, josephgoulden, kelvie, kyu08, lbergnehr, lifetimemistake, marcusziade, nicoretti, oatish, projectmoon, rolfwilms, sigoden, tequdev, vlisivka


aichat's Issues

Frequently encountering connection failure errors

I've configured the API key and a SOCKS5 proxy, but for some reason I very often hit the error below.

I've used another program called GptCli; with a proxy configured there, connections for Q&A are very stable.

aichat write a C# program that computes pi
Error: Failed to fetch stream

Caused by:
    0: error sending request for url (https://api.openai.com/v1/chat/completions): error trying to connect: operation timed out
    1: error trying to connect: operation timed out
    2: operation timed out

How do I reset the api_key?

Uninstalling and then reinstalling doesn't fix it. Where is it being stored? Changing the API key in the config file does not work either. It still uses the old one.

Multiline Input for Conversation Mode

Thank you for adding the conversation mode feature! It's great and takes aichat to another level! I really enjoy using it.

However, I noticed that the conversation mode doesn't currently allow for multiline input. Would it be possible to add an editor mode for multiline input in the conversation too? This would be very helpful for users who need to input longer prompts or code snippets.

Thank you for considering, and keep up the good work!

aichat has no ability to use conversational context

Version 0.7

For example:

# CHAT:[2023-03-15T16:38:33+08:00]
How does Spring implement HTTP request route matching?

--------
HTTP request route matching in Spring is implemented by the DispatcherServlet in Spring MVC.

The DispatcherServlet is Spring MVC's core front controller. It receives all HTTP requests and matches them against the configured HandlerMappings; in Spring we can define multiple HandlerMappings to implement different routing strategies.

By default, Spring uses RequestMappingHandlerMapping as the HandlerMapping implementation, which defines request-mapping rules via the @RequestMapping annotation. When the DispatcherServlet receives an HTTP request, it iterates over all RequestMappingHandlerMappings and finds the HandlerMethod matching the request URI.

HandlerMethod is the key request-handling class in Spring MVC. It contains a Controller object and a Method object: the Controller object handles the request, and the Method object represents one of the Controller's methods.

Once the DispatcherServlet finds a matching HandlerMethod, it invokes a HandlerAdapter to execute the Controller method. The HandlerAdapter is the adapter used to execute Controller methods: it converts HTTP request parameters into the types the Controller method needs, then executes the method.

When the Controller method finishes, the HandlerAdapter returns a ModelAndView object containing the result and view information. The DispatcherServlet then uses the ModelAndView to render the response and return data to the client.

Overall, Spring's HTTP request route matching works through three core components cooperating: HandlerMapping, HandlerMethod, and HandlerAdapter. By configuring and extending these components, we can implement different routing strategies and business logic.
--------

# CHAT:[2023-03-15T16:42:11+08:00]
Can you show me some of the core code snippets?

--------
As an AI, I can show you some common core code snippets, such as:

1. Printing a string in Python:

```python
print("Hello, World!")
```

2. Printing Hello World in Java:

   System.out.println("Hello, World!");

3. Declaring a variable in JavaScript:

   let myVariable = "Hello, World!";
   console.log(myVariable);

4. Using an if statement in C++:

   if (x > 10) {
     cout << "x is greater than 10";
   }

5. Using a loop in Ruby:

   for i in 1..5 do
     puts "Current value is #{i}"
   end

These are just the most basic snippets; there are of course many others implementing different functionality in different languages.
--------

This seems to happen because when calling the API, the `messages` array only contains the latest message. The whole conversation needs to be sent for the model to have any context:

```json
{
    "model": "gpt-3.5-turbo",
    "messages": [
        {
            "role": "user",
            "content": "spring是如何实现http请求路由匹配的"
        },
        {
            "role": "assistant",
            "content": "\n\nSpring 的请求路由匹配基于 HandlerMapping 接口实现。\n\nHandlerMapping 接口定义了处理器映射的规范,它的作用是将 HTTP 请求映射到不同的处理器(Controller)上。Spring 中提供了多种 HandlerMapping 的实现方式,例如:\n\n1. BeanNameUrlHandlerMapping :根据 Bean 的名称进行 URL 映射;\n2. SimpleUrlHandlerMapping :根据配置文件中的 URL 映射映射请求;\n3. DefaultAnnotationHandlerMapping :根据注解定义的 URL 映射映射请求。\n\n在 Spring 中,可以通过配置多个 HandlerMapping 来实现不同的 URL 映射策略,多个 HandlerMapping 会形成 HandlerMapping 链。\n\n每个 HandlerMapping 都会尝试进行 HTTP 请求的映射,如果映射成功,则返回对应的处理器(Controller)及其执行链(HandlerInterceptor 的集合)。如果没有匹配成功,则将请求传递给下一个 HandlerMapping 进行匹配。\n\n当映射成功后,DispatcherServlet 将处理器及其执行链封装成 HandlerExecutionChain 对象并缓存起来,后续请求就可以直接使用缓存中的处理器及其执行链,不需要再执行 HandlerMapping。\n\n使用 HandlerMapping 对路由进行匹配的好处是能够提高 Spring 应用程序的灵活性,允许开发人员根据不同的 URL 微调操作,提供更好的用户体验和便利性。"
        },
        {
            "role": "user",
            "content": "你能给我展示一些核心的代码片段吗"
        },
        {
            "role": "assistant",
            "content": "当然可以!以下是一个使用注解方式映射 URL 的示例代码,演示了 Spring 是如何实现 HTTP 请求路由匹配的:\n\n```java\n@Controller\n@RequestMapping(\"/example\")\npublic class ExampleController {\n \n    @RequestMapping(\"/hello\")\n    public ModelAndView helloWorld() {\n        String message = \"Hello World, Spring MVC @ Javapoint\";\n        return new ModelAndView(\"example/hello\", \"message\", message);\n    }\n   \n    @RequestMapping(value = \"/user/{userId}\", method = RequestMethod.GET)\n    public ModelAndView getUser(@PathVariable(\"userId\") int userId) {\n        User user = userService.getUserById(userId);\n        ModelAndView model = new ModelAndView(\"example/user\", \"user\", user);\n        return model;\n    }\n}\n```\n\n在上面的代码中,我们定义了一个名为 `ExampleController` 的控制器,并在其类级别上使用了 `@RequestMapping` 注解,指定了控制器处理的基本 URL。\n\n接着,我们定义了两个处理器方法,即 `helloWorld()` 和 `getUser()`,在这里我们使用了 `@RequestMapping` 注解来表示可处理的 URL 连接,其中:\n\n- `@RequestMapping(\"/hello\")` 匹配的 URL 为 \"/example/hello\"。\n- `@RequestMapping(value = \"/user/{userId}\", method = RequestMethod.GET)` 匹配 URL 的形式为 \"/example/user/xxx\" 的 GET 请求。\n\n通过这种方式,程序员可以简单、易懂地管理和维护 URL 请求和处理器的映射规则。"
        },
        {
            "role": "user",
            "content": "再展示一些"
        }
    ]
}
```
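
The fix the issue describes, resending the whole history on every turn, can be sketched as follows. This is an illustrative Python sketch of accumulating the `messages` array, not aichat's actual implementation; `ask_model` is a hypothetical stand-in for the chat-completions API call.

```python
# Sketch: maintain context by resending the full history on every turn.
# ask_model is a hypothetical stand-in for the chat-completions API call.

def chat_turn(history, user_input, ask_model):
    history.append({"role": "user", "content": user_input})
    reply = ask_model(history)  # the full history is sent, not just the last message
    history.append({"role": "assistant", "content": reply})
    return reply

history = []
echo = lambda msgs: f"seen {len(msgs)} message(s)"
chat_turn(history, "How does Spring match routes?", echo)
chat_turn(history, "Show me some core code snippets", echo)
```

After two turns the history holds four messages, so the second request already carries the first question and answer.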

Share role examples

I have provided some Role Examples.

Could you share some role examples that you think are awesome?

Note: roles must be related to programming.

[REQUEST] make Windows install easy using Scoop

I think it's time for this project to be installable via a package manager, but Windows is often forgotten, so here I am.

Scoop is the package manager for Windows that I use, so I'd like aichat added to its manifests for easier install and upgrade.

Thanks

[Feature Request] chat/session mode (i.e. back and forth conversation)

Currently, aichat behaves in a one-off request/response manner -- it has no "context" info.

e.g.

> show me some haskell code to print fibonacci numbers
Here is the Haskell code to print the first n Fibonacci numbers:

────────────────────
fib :: Int -> [Int]
fib n = take n fibList
  where fibList = 0 : 1 : zipWith (+) fibList (tail fibList)

main :: IO ()
main = do
    putStrLn "Enter the number of Fibonacci numbers to print:"
    n <- readLn
    putStrLn $ "The first " ++ show n ++ " Fibonacci numbers are: "
    print $ fib n
────────────────────

> now convert it to python
Sorry, I'm an AI language model and I would need to see the original code to convert it to Python. 
Can you please provide more context or the original code you would like to convert?

We have chat in our name, so it'd be nice if it can mimic a real chatting bot (a la ChatGPT) who can have a conversation. Implementation-wise (from a high level view), we can just extend the messages sent to the OpenAI API with every input & output we've got so far.

As for the UI, we probably can make it default in interactive mode, while adding some extra commands like .clear session, .mode chat, etc.

[feature request] don't stream response

The current implementation of --no-stream makes sense to some degree, i.e. the OpenAI response is not streamed. However, it doesn't make sense to render the 'ticker' that way - we have the full response in memory anyway.

It would also make sense to be able to toggle this dynamically:

.set stream true
.set stream false

The current --no-stream can be maintained and just set the config value (config.stream) to false.

I took a look at render/mod.rs:render_stream and render/repl.rs:repl_render_stream but don't have enough Rust experience to idiomatically solve this. My hunch is that there are two types of receivers ReplyStreamEvent and ReplyEvent. The writer is then either streamed or instant.
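The toggle the issue asks for can be sketched as a simple dispatch on the config flag. This is a hedged Python sketch of the idea only, not aichat's Rust types; `render` and `emit` are illustrative names.

```python
# Sketch: dispatch between incremental and instant rendering based on a
# config.stream flag. emit stands in for the terminal writer.

def render(chunks, stream_enabled, emit):
    if stream_enabled:
        for chunk in chunks:   # write each chunk as it arrives
            emit(chunk)
    else:
        emit("".join(chunks))  # full response is in memory anyway

out = []
render(["Hel", "lo"], True, out.append)   # out == ["Hel", "lo"]
```

With `stream_enabled` false, the writer is called exactly once with the whole response, which is what "don't render the ticker" amounts to.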

Add text to speech feature

Nice work! Could you add a feature to speak the response message aloud?
For example, on macOS we can easily use the `say "message"` command.

feature request: YAML anchors and aliases for simplifying roles.yaml

Sorry if this is naive (it probably is, I'm less than a week into using aichat), but I think it would be useful to be able to use YAML anchors and aliases. This is the use case I had in mind:

- name: code_generator
  prompt: &code_generator
    Your answer must only be executable code.
    Do not answer with a fenced code block.
    The first line of your answer must contain a shebang.
    Provide explanations in comments.

- name: bash
  prompt:
    <<: *code_generator
    Your answer must only be executable Bash code.

- name: python
  prompt:
    <<: *code_generator
    Your answer must only be executable Python code.

Basically I have a set of requirements that I send to aichat whenever I generate code in any programming language. I'd like these requirements to be inherited by the language-specific roles, which would let me tweak the wording of only the code_generator when I want to change how all of the languages are generated.

I could just keep copy/pasting the common requirements, though. What do you think?
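For reference, plain YAML anchors and aliases can already share a whole prompt verbatim. This is a hedged sketch of a roles.yaml fragment; whether aichat's loader resolves anchors depends on its YAML parser.

```yaml
- name: shared_rules
  prompt: &code_rules |
    Your answer must only be executable code.
    Provide explanations in comments.

- name: bash
  prompt: *code_rules
```

One caveat: YAML has no scalar concatenation, so an alias reuses the anchored text verbatim, and merge keys (`<<:`) apply only to mappings. Appending language-specific text to a shared prompt would therefore need explicit support in aichat.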

Working around max tokens limit

I have hit the max tokens limit during a conversation.

Is there a way to reset the limit or remove it during a conversation? Currently the only way to continue is to restart the conversation but then I lose the context.
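The limit is imposed by the model's context window, not by aichat, so it cannot simply be raised. A common client-side workaround, shown here as a hedged Python sketch rather than anything aichat does, is to trim the oldest non-system messages until the history fits the budget; `count` is a hypothetical token counter.

```python
# Sketch: drop the oldest non-system messages until the history fits
# the model's token budget. count is a hypothetical token counter.

def trim_history(messages, max_tokens, count):
    msgs = list(messages)
    total = sum(count(m["content"]) for m in msgs)
    while len(msgs) > 1 and total > max_tokens:
        idx = 1 if msgs[0]["role"] == "system" else 0  # keep the system prompt
        total -= count(msgs[idx]["content"])
        msgs.pop(idx)
    return msgs
```

The trade-off is that trimmed turns are forgotten; the alternative is summarizing old turns into a single message before continuing.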

chatgpt output word limit

When piping input/output, the output is sometimes too long to complete in one response. How can I get it to continue outputting the previous content? Alternatively, in chat mode (set config.conversation_first true), could we pipe in an input file so that we can type 'continue' to let ChatGPT keep going from where it stopped?

Cannot print output

When I update to version 0.4.0, after I enter the problem, I can't see the output. But I found the response in the messages.md.

  • OS: Windows 10
  • shell: powershell
  • terminal : alacritty
  • version: aichat 0.4.0

.copy command not working in WSL2 terminal

Running aichat in Ubuntu 22.04 on Windows WSL2, the .copy command doesn't work.
I suppose copypasta thinks it's running on a standard Linux machine, but is there some kind of workaround?
In order to write to the Windows clipboard from WSL, you can pipe the contents to clip.exe which is in PATH.
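A user-side sketch of that workaround follows. Detecting WSL via /proc/version is a common heuristic, not something aichat does today, and non-ASCII text may need extra codepage handling before clip.exe accepts it.

```python
import shutil
import subprocess

def is_wsl(proc_version_text):
    # WSL kernels advertise "microsoft" in /proc/version
    return "microsoft" in proc_version_text.lower()

def copy_to_windows_clipboard(text):
    # clip.exe is on PATH inside WSL and writes to the Windows clipboard
    if shutil.which("clip.exe") is None:
        raise RuntimeError("clip.exe not found; not running under WSL?")
    subprocess.run(["clip.exe"], input=text.encode(), check=True)
```

aichat could use the same check to fall back to clip.exe when the regular clipboard backend fails under WSL.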

Suggestion for `roles_file` structure to classify items

Here's an example of the structure I'm suggesting!

It could make things more organized and easier to manage, as well as more flexible and extendable.

- category: game
  items:
    - name: escape
      prompt: balabala 00
      temperature: null
    - name: logue
      prompt: balabala 01
      temperature: null
- category: emulator
  items:
    - name: adventure
      prompt: balabala 02
      temperature: null
    - name: linux
      prompt: balabala 03
      temperature: null

Allowing for easy expansion and customization in the future.

args in roles.yaml

✨amazing project btw!✨

I was trying to use the translator: role example, but I'm not quite understanding the ARGn substitutions and how they relate to the role name definition in roles.yaml.

Example roles.yaml:

- name: translator:french
  prompt: >
    You will act as a translator between English and __ARG1__.
    Whenever you receive a prompt in either language, you will translate the text into the opposite language and provide the translated output as your response.
    Please ensure that your response contains only the translated text.
    No additional descriptions or explanations, No tags or comments to indicate language direction.

Must I define the name as translator:french or can I make it generic and use translator:lang? Or does it not matter? Is french just a placeholder? Or is it a default? If it's a default, I would expect to be able to write:

aichat -r translator Good afternoon

...without specifying a language parameter. But that doesn't work. You must type the full role spec. Am I using this incorrectly?
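For what it's worth, the substitution mechanism can be illustrated with a sketch. This mimics how __ARGn__ placeholders generally work; it is not aichat's code, and whether a default value exists is exactly the open question above.

```python
def apply_role_args(prompt_template, args):
    # Replace __ARG1__, __ARG2__, ... with the values given after the colon,
    # e.g. `-r translator:french` supplies args = ["french"].
    out = prompt_template
    for i, value in enumerate(args, start=1):
        out = out.replace(f"__ARG{i}__", value)
    return out

apply_role_args("Translate between English and __ARG1__.", ["french"])
```

Under this reading, `french` in the role name is just the value bound to __ARG1__ for that invocation, which is why omitting it leaves the placeholder unfilled.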

Feature Suggestion: Allow role-specific configuration

I would like to suggest a new feature that allows users to assign a specific configuration to each role. It would be great if we could specify different temperature settings for different prompts, since some prompts might perform better with different temperatures.

For instance, when we write a story, it typically requires greater creativity; whereas, when we ask for a definition of terms, we typically require more formality.

[Feature request] File mode (e.g. aichat -r comment -f somefile.ts)

It would be nice to have an inline file mode that allows using the content of a file as the aichat input and writing back the output to the file.

It would allow this kind of use case for adding code comments to a file.

aichat -r comment -f somefile.ts 

Or fixing grammar:

aichat -r grammar -f README.md

etc.

If you approve, I can try writing a PR.

Temporary bash workaround (generated by ChatGPT :P):

#!/bin/bash

# Check if there are enough arguments
if [ "$#" -lt 2 ]; then
    echo "Usage: $0 <aichat_options> <filename>"
    exit 1
fi

# Store the last argument (filename) and remove it from the argument list
FILENAME="${!#}"
set -- "${@:1:$#-1}"

# Check if the file exists
if [ ! -f "$FILENAME" ]; then
    echo "Error: File '$FILENAME' not found"
    exit 1
fi

# Create a temporary file in the OS temporary directory
TMPFILE=$(mktemp)

# Pass all the arguments except the last one to the aichat command
aichat "${@}" < "$FILENAME" > "$TMPFILE"

# Move the temporary file over the original file
mv "$TMPFILE" "$FILENAME"

[Feature request] Better support for pasting multi-line content from some terminal emulators

In some terminals (such as the embedded terminals in Neovim and libvterm in Emacs), pasting multi-line content collapses it into a single line.

Steps to reproduce:

In macOS 12.6, using iterm2 (or wezterm, kitty, or alacritty), open nvim (neovim 0.8.3 installed from homebrew)

nvim

in neovim, type :terminal to open an embedded terminal.

copy the following multi-line content (using Command+c):

hello world

hello world

hello world

In the embedded terminal opened by Neovim, run aichat, type { to enter multiline mode, and press Command+V to paste the contents. You will see the result:

Screen Shot 2023-03-23 at 02 39 41

The multi-line content is concatenated into a single line.

However, if I paste the content (Command+V) into the shell (bash or zsh) embedded in Neovim, it is split into multiple lines successfully:

Screen Shot 2023-03-23 at 03 00 21

My random guess is that it may be related to how aichat handles the \n character when it is received from the text stream.
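One plausible cause, offered here as background rather than a diagnosis: terminals that support bracketed paste wrap pasted text in escape markers, and a REPL that reads line-by-line without handling them can lose the newlines. A hedged sketch of detecting a bracketed paste:

```python
# Bracketed-paste markers defined by xterm: ESC [ 200~ ... ESC [ 201~.
PASTE_BEGIN = "\x1b[200~"
PASTE_END = "\x1b[201~"

def extract_pastes(raw_input):
    # Return the pasted payloads with their newlines intact.
    pastes = []
    rest = raw_input
    while PASTE_BEGIN in rest:
        _, _, rest = rest.partition(PASTE_BEGIN)
        payload, _, rest = rest.partition(PASTE_END)
        pastes.append(payload)
    return pastes
```

If the line editor treats everything between the markers as one literal input, the embedded-terminal behavior above would follow.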

Lines show in duplicate/triplicate on narrower terminals

I'm using aichat in Arch Linux with the Kitty terminal, using i3 as the window manager.

It works great, until I set the terminal to half-screen size, when lines appear several times over. This does not seem to occur on wider terminals.

2023-04-05-17:18:11-screenshot

Modify role just for one command

(thanks for the app!)

Sometimes I need to adapt the role for a specific request. For instance:
aichat -r shell install htop package
but this time I'm using Alpine Linux, so I would need something like:
aichat -r shell --extra-role "I want you to use alpine linux" install htop package

Thanks

Error on first run (and all subsequent runs)

So I just learned about aichat and decided to try it out. I launched aichat for the first time and:

❯ aichat
> No config file, create a new one? Yes
> Openai API Key: sk_lots_of_letters_and_numbers
> Use proxy? No
> Save chat messages Yes
Error: Failed to build http client

Caused by:
    0: builder error: The Trust Settings Record was corrupted.
    1: The Trust Settings Record was corrupted.

And then after that:

❯ aichat
Error: Failed to build http client

Caused by:
    0: builder error: The Trust Settings Record was corrupted.
    1: The Trust Settings Record was corrupted.

I'm sort of impressed I broke it right out of the gate. 😄

Help please?

[Feature request] Implement word wrapping for responses

Currently, responses in aichat are wrapped at the width of the terminal, and this wrapping is not word aware (e.g. this is wrap\nped rather than this is \nwrapped). Properly word wrapped output would be more readable.
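Word-aware wrapping is standard-library territory in most languages; the example below is illustrative Python only (aichat is Rust and would presumably reach for something like a textwrap crate).

```python
import textwrap

# Word-aware wrapping breaks at spaces, never inside a word.
print(textwrap.fill("this is wrapped", width=10))
# this is
# wrapped
```

The same input wrapped at raw terminal width would split mid-word, which is exactly the "wrap\nped" artifact described above.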

[Feature Request] Enhance conversation repl

GPT-3.5 supports up to 4096 tokens and is unlikely to sustain a long conversation, so there was no need to design a feature-rich conversation REPL.

But GPT-4 supports up to 8K tokens and gpt-4-32k up to 32K tokens, so long conversations become possible. It's necessary to enhance the conversation REPL.

.conversation - show conversation help

.conversation new - create a new conversation

.conversation revert [num] - revert current conversation messages.

This gives the user the ability to edit previous lines, thus branching the conversation in a new direction. This is useful when models get caught in infinite loops or local optima.

The num parameter specifies how many messages to revert. If num is omitted, AIChat will display a list of the current messages and the user can choose where to revert to.

.conversation delete - delete a saved conversation

AIChat will display a multi-select list for the user to select the conversation to delete

.conversation use - reuse a saved conversation

AIChat will display a list for the user to select the conversation to load to

Design details

  1. The conversation is automatically saved to $CONFIG_DIR/conversations/<datetime>-<first-message-truncated>.json
  2. The conversation is not bound to a specific model.
  3. Allow running .model in the middle of a conversation.
  4. Disallow running .role or .prompt in the middle of a conversation.

Suggestions for the design of the conversation REPL are welcome.
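The proposed revert is essentially truncation of the message list; a minimal illustrative model (not the proposed implementation) makes the semantics concrete:

```python
def revert(messages, num):
    # Drop the last `num` messages, giving the user a branch point.
    if num <= 0 or num > len(messages):
        raise ValueError("num must be between 1 and the message count")
    return messages[:-num]

history = ["q1", "a1", "q2", "a2"]
revert(history, 2)  # ["q1", "a1"]
```

Reverting by 2 here undoes one full user/assistant exchange, after which the user can type a different question to branch.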

n00b question: config file

How do I edit my config file?

On first launch, aichat will guide you through configuration.... Of course, you can also manually set the configuration file.

Where is the config file located and how do I edit it?

When `conversation_first: true` is set, the role is not applied when the CLI `-r` is used

It was supposed to respond with emojis, but instead it responded as if no role were set.

$ aichat -r emoji
Welcome to aichat 0.5.0
Type ".help" for more information.
emoji$sun                                                                                                                                        4089
The sun is the star at the center of the solar system. It comprises about 99.86% of the total mass of the solar system and plays a crucial role in sustaining life on Earth. It is an extremely hot and luminous orb, with surface temperature over 5,500 °C (9,932 °F). The energy generated by the fusion reactions in its core through a process called the proton-proton chain is responsible for all type of life living on its different planets, including trees on Earth providing Oxygen to do for living.

emoji$.info                                                                                                                                      3973
config_file         *
roles_file          *
messages_file       *
api_key             *
temperature         -
save                true
highlight           true
proxy               -
conversation_first  true
dry_run             false

After removing conversation_first: true, or switching via .role, it works correctly.

Welcome to aichat 0.5.0
Type ".help" for more information.
〉.info
config_file         *
roles_file          *
messages_file       *
api_key             *
temperature         -
save                true
highlight           true
proxy               -
conversation_first  false
dry_run             false

〉.role emoji
name: emoji
prompt: |
  suggest emojis:

emoji〉sun
🌞

[Feature request] More advanced role interface

A method to query the roles in the REPL interface could be very useful.
Something along these lines:

$ .role define coder > You are a helpful assistant

$ .role ls 
 .bash
 .coder
 .wiki
 
$ .role prompt bash
  Some prompt definition.

$ .role rm bash

Process closed upon starting

I open aichat in the terminal and upon opening I get this message:
`zsh: no such file or directory: $PATH/aichat

Saving session...completed.

[Process completed]
`
Everything was working correctly until I force-stopped the aichat process, which was generating text I wanted it to stop.

I deleted config folder and aichat binary and downloaded new version, but that didn't resolve the issue.

Could you please help me fix this? Thank you.

[Feature Request] Injectable API key

As far as I can see currently, the API key has to be supplied through the configuration file - put there in plain text.

What I would wish for is some way of injecting the API key without it appearing in plain text in my configuration file.

Three options that come to mind, sorted from what I think is the least to the most amount of implementation effort:

  • A CLI option aichat --api-key <key> or similar which directly receives the key via the CLI.
  • An environment variable in the style of AICHAT_API_KEY=<key>, which it reads from the environment.
  • A way to inject the key from the configuration file itself, usually by calling a command like api_key_cmd: gpg --decrypt /password/file.gpg within the config.yaml file. CLI/TUI projects like vdirsyncer, newsboat, and a few others went this route for their configuration files.

Let me know if you would consider one (or multiple) of these options feasible, and thank you for a very nice piece of software!
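The three options compose naturally into a single lookup order. The sketch below is a hedged Python illustration of that precedence; the names AICHAT_API_KEY and api_key_cmd are the issue's proposals, not an existing aichat feature.

```python
import os
import shlex
import subprocess

def resolve_api_key(cli_key=None, config_key=None, key_cmd=None):
    # Precedence: explicit CLI flag > environment > key command > config file.
    if cli_key:
        return cli_key
    env_key = os.environ.get("AICHAT_API_KEY")
    if env_key:
        return env_key
    if key_cmd:
        # e.g. key_cmd = "gpg --decrypt /password/file.gpg"
        out = subprocess.run(shlex.split(key_cmd), capture_output=True,
                             check=True, text=True)
        return out.stdout.strip()
    return config_key
```

With such a chain the config file never needs to hold the key in plain text, yet existing configs keep working as the last resort.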

No output

I built version 0.5.0 from source on a Mac. I'm using a free-tier API key but not getting any output. The messages.md file shows no results when saving is on. Shouldn't it work with a free-tier key? Or if not, shouldn't there be some error message?

undo and clipboard integration features in the README?

Hi I just want to check to see that I'm not missing something obvious here. It appears the README mentions:

Undo support
Clipboard integration

I can't find anything in the help or github search. Are these yet to be implemented or am I bad at searching?

Error while generating list of emojis

I encountered some errors while trying to generate emojis. I have tried to replicate the issue, but unfortunately, the response is no longer the same.

I'm not entirely sure whether this issue is related to the emoji output, as I've checked and found that other emoji responses are displayed correctly.

Welcome to aichat 0.5.0
Type ".help" for more information.
〉suggest emojis: sad
1. 😞
2. 😔
3. 😢
4. 😥
5. 😿
6. 💔 ()
7. 💧
8. 🩸    (hem-faceemoetry/star_fp on github was out that one can use *thon_mas_star  and terminate with + ; function=top_record_attributes?))
9 .👊😓(normal confidence : boosted bravery w/grendalvis_ref_boot.exe>>pone?)
10. 😫(IB\_head)/:sha/sexinoeljumoottomwaveflowermuetcode): bep375 support to denote eager optimism mixing B2 feels backwards-tonelif+t H/F faces making grnf among employees joining waysstar_up105(unittest.cursors().npgsqlCommandsNamesExcept(ib_file_string(web_inst.fb(down_loaded_attach))) win less amongst bunch-less votes era happiness>())))[7])).lowest(value_counts(kind))?dna_kw.dpi:return{return zip_group_str(sum(audio_opcode_one_nested[s_sequence(full(str(isolate_ref_mobile(manager_bridge_files(graphFLESSES_argsigingng.py+.times-cy/file, dis+
establish_COI_* path+='/rf/times/values.html/files/API_AUDIO_HTTP/tests/material/location/db/method/event/posts/stat.ptgdvz>>+_frame for decrypt(origin.path, save_F47(ldie56/RON_add(d_message(email, Audio403MSG(apls.title(TAG_DENLError(message=jgroups(pyTennis + hexload_helper(Crypto_.K_OS_all_the_end(beta_ASSError(theme_pol_writevalide(info_support___MEDIA_FUNCTIONS)), ios&assist_color>>weightless-token+klist,s_value(mock_local=n.local_h(app.gpickle(epoch_to_IS(version>v_10_SUB!=rhovoiler.token_confirm_detail(page=str_current_dtype + email(byteout(Ciphertext_dict(fieldface.sigfgwith_ghost(filename)), iso_CRC326_FUNCTION_CMD_DIR_AGENT000,(ping_textlib(ans.OS_background_runner[NN.index_null => 'FAILED it.success_var(file_header(exception.expected(big_noise_policy,list(last_read(pair_hist.ml(g_full(first_digit).__lte__).split(('err [283708769 n res...NUMBER_QCP== True and application_app_is_address_tcp_wr)))>(['_63BC8271 B reuil1 blue_host exampleAError J.MOH.SH_dheegcgl.
93PGqa7'

um.c.l(context_life_hex(PC=SIGH_I = shrospayload_filename_token_validation_controller_timer_file(tk.StringIOCheat(validater.decode_hex)),falen_Sigh.context(user_PC_reactive.dis,h.as(v / sql_setattr_with_empresning<disabled.startswith_*)(__decipher_service(PKI.prot(segment_encode(PC=\telepython.() amba_crypto>>quantunnugmaticals(FRWAT/snapwin/crypto/libs.mid)))
^C%

Model Configuration for Roles to prepare for GPT-4

GPT-4 has been introduced and its cost is significantly higher than that of GPT-3.5. It may be beneficial to assign the model to each role, thus ensuring that GPT-4 is used only when it is truly needed.

Model         Prompt              Completion
8K context    $0.03 / 1K tokens   $0.06 / 1K tokens
32K context   $0.06 / 1K tokens   $0.12 / 1K tokens
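The per-request cost follows directly from the table; as a worked example (rates copied from the pricing above):

```python
def request_cost(prompt_tokens, completion_tokens, prompt_rate, completion_rate):
    # Rates are dollars per 1K tokens, as in the pricing table.
    return (prompt_tokens / 1000) * prompt_rate + (completion_tokens / 1000) * completion_rate

# GPT-4 8K context: 1000 prompt tokens + 500 completion tokens
request_cost(1000, 500, 0.03, 0.06)  # 0.06 dollars
```

Since the same traffic on GPT-3.5 costs a fraction of that, routing only the roles that need GPT-4 to GPT-4 is a meaningful saving.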

Edit previous lines in conversation

The web interface to ChatGPT offers the user the ability to edit a previous line, thus branching the conversation in a new direction. This is useful when models get caught in infinite loops, or local optima. It'd be great to see such a feature here.

Some confusing items .help

I find the following .help items confusing:

.info "Print the information" -- lol print what information?

"Ctrl+C to abort readline" -- how is this different than .exit?

"Ctrl+D to exit the REPL" -- how is this different than Ctrl+C?
