builderio / ai-shell
A CLI that converts natural language to shell commands.
License: MIT
I was able to adjust the text using kolorist. How can we change the coloring of everything else, such as the cyan line and diamond shape? A config file for editing these few settings would be useful to noobs like me who are using this to help master the shell. Thank you for putting this out early. We've been waiting for that Copilot CLI too, and this definitely helps scratch the itch.
Instead of making the user wait for the entire shell command or description to finish before seeing it, it would be great to stream the response word by word to the terminal, as sites like ChatGPT do.
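A minimal sketch of what that could look like, with `fetchCompletionChunks` as a hypothetical stand-in for the OpenAI streaming response (not ai-shell's actual API layer):

```typescript
// Sketch of word-by-word streaming: write each chunk to the terminal as it
// arrives instead of buffering the whole completion. `fetchCompletionChunks`
// is a hypothetical stand-in for the OpenAI streaming response.
async function* fetchCompletionChunks(): AsyncGenerator<string> {
  for (const word of ['ls', ' -la', ' /var/log']) yield word;
}

async function streamToTerminal(): Promise<string> {
  let full = '';
  for await (const chunk of fetchCompletionChunks()) {
    process.stdout.write(chunk); // render incrementally, ChatGPT-style
    full += chunk;
  }
  process.stdout.write('\n');
  return full;
}
```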
ai check ip address
┌ AI Shell
│
◇ Your script:
(node:72142) UnhandledPromiseRejectionWarning: TypeError: payload.replaceAll is not a function
at parseContent (file:///usr/local/lib/node_modules/@builder.io/ai-shell/dist/cli.mjs:296:27)
at file:///usr/local/lib/node_modules/@builder.io/ai-shell/dist/cli.mjs:281:19
at processTicksAndRejections (internal/process/task_queues.js:93:5)
(Use `node --trace-warnings ...` to show where the warning was created)
(node:72142) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
(node:72142) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
Is there any information I should provide?
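For reference, `String.prototype.replaceAll` only exists in Node 15+ (ES2021), which would be consistent with the TypeError above appearing on an older Node version. A hedged sketch of a backwards-compatible fallback (`replaceAllCompat` is illustrative, not ai-shell's actual code):

```typescript
// String.prototype.replaceAll is ES2021 / Node 15+; on older runtimes it is
// undefined, producing exactly "payload.replaceAll is not a function".
// A split/join fallback behaves the same for plain (non-regex) search strings.
function replaceAllCompat(payload: string, search: string, replacement: string): string {
  const anyPayload = payload as any; // avoid lib-target issues on older TS configs
  return typeof anyPayload.replaceAll === 'function'
    ? anyPayload.replaceAll(search, replacement)
    : payload.split(search).join(replacement);
}
```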
One major inconvenience of using copilot-cli/ai-shell is that I often need to make tweaks to the generated command before executing it, which currently requires copying the command and then manually modifying it with the mouse. It would be beneficial if there was a way for users to modify the command before executing it.
Support for Azure OpenAI could be added. With that service the data is not used for AI training, which is ideal for us to use at work, avoiding information leakage.
I tried following the readme to install this tool. But after installation, I get command not found. I restarted iTerm after installation. Any tips? Thanks!
macOS Monterey 12.6.3 (Intel chip)
iTerm2 Build 3.4.19
zsh --version
zsh 5.8.1 (x86_64-apple-darwin21.0)
node --version
v19.8.1
npm --version
8.1.0
ls /usr/local/Cellar/node/19.8.1/lib/node_modules/@builder.io/ai-shell
LICENSE README.md dist node_modules package.json
brew --version
Homebrew 4.0.13
Homebrew/homebrew-core (git revision 39002442006; last commit 2023-03-11)
npm install -g @builder.io/ai-shell
changed 40 packages, and audited 42 packages in 2s
13 packages are looking for funding
run `npm fund` for details
found 0 vulnerabilities
ai
zsh: command not found: ai
ai-shell
zsh: command not found: ai-shell
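A common cause of `command not found` after a global npm install is that npm's global bin directory isn't on PATH. A small helper to check, making no assumptions about the specific setup (`isOnPath` is illustrative):

```typescript
import { delimiter } from 'node:path';

// Check whether a directory appears on a PATH-style string. The usual fix for
// "zsh: command not found: ai" is making sure the output of
// `npm config get prefix`, plus "/bin", is included in PATH (e.g. in ~/.zshrc).
function isOnPath(dir: string, pathEnv: string = process.env.PATH ?? ''): boolean {
  return pathEnv.split(delimiter).includes(dir);
}
```

For a Homebrew-installed node like the one above, `npm config get prefix` typically reports something under /usr/local, but nvm and other setups differ, so checking the actual prefix is safer than guessing.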
macOS
After setting OPENAI_KEY, running 'ai list all log file' in the terminal gives:
write EPROTO 4543126976:error:1408F10B:SSL routines:ssl3_get_record:wrong version number:../deps/openssl/openssl/ssl/record/ssl3_record.c:332:
at WriteWrap.onWriteComplete [as oncomplete] (internal/stream_base_commons.js:94:16)
ai-shell v0.1.12
Please open a Bug report with the information above:
https://github.com/BuilderIO/ai-shell/issues/new
λ npm exec ai
C:\Users\eltab\AppData\Local\npm-cache\_npx\afbc0d24890c4360\node_modules\ai\bin\ai.js:5
if (!args || args[0].split("/").length !== 2)
I think it should print a friendly message if the user doesn't pass any arguments.
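A sketch of the kind of guard being suggested, with an illustrative function name. (The path in the trace above suggests npm resolved a different package named `ai`, not ai-shell, but the guard applies either way.)

```typescript
// Guard the arguments before indexing args[0], so running the CLI with no
// arguments prints usage instead of throwing a TypeError.
function parseTarget(args: string[] | undefined): string | null {
  if (!args || args.length === 0) {
    console.error('Usage: ai <your prompt in natural language>');
    return null;
  }
  return args[0];
}
```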
Would be nice to offer an option to use GPT-4 specifically for the shell script output. We should still use GPT-3.5 for the description, since we want that to be fast and cheap, but shell scripts are only a few characters, so GPT-4 could be a nice option or even the default.
✖ read ECONNRESET
at TLSWrap.onStreamRead (node:internal/stream_base_commons:217:20)
ai-shell v0.1.13
I got this error. I have to use a SOCKS5 or HTTP proxy; could the proxy be causing it?
Please help, thanks!
I'm running:
npm install -g @builder.io/ai-shell
ai-shell config set OPENAI_KEY=<API_KEY>
ai list all log files
I've tried switching to Node v16; it still rejects, but with a different error message.
It seems that I have exceeded my plan limit, but it would be great to catch the error so it doesn't freeze the shell session.
Currently I'm unable to add a new line, when hitting Enter or Shift+Enter on macOS.
✖ Unexpected token < in JSON at position 0
at JSON.parse (<anonymous>)
at generateCompletion (file:///Users/pan/.nvm/versions/node/v16.15.0/lib/node_modules/@builder.io/ai-shell/dist/cli.mjs:211:38)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async getScriptAndInfo (file:///Users/pan/.nvm/versions/node/v16.15.0/lib/node_modules/@builder.io/ai-shell/dist/cli.mjs:168:18)
at async prompt (file:///Users/pan/.nvm/versions/node/v16.15.0/lib/node_modules/@builder.io/ai-shell/dist/cli.mjs:427:36)
ai-shell v0.1.6
Please open a Bug report with the information above:
https://github.com/BuilderIO/ai-shell/issues/new
Hi @steve8708
I found myself wanting to learn some commands without executing them. For general questions, I would like the program to just insert the code into the terminal rather than execute it.
I'm thinking of something like this
Your script:
du -sm <directory>
Run this script?
Yes | Insert | Revise | Cancel
Is this something useful to add?
Thanks for this cool tool!
Often I need a combination of multiple commands, e.g. with grep / jq etc.
For these, ChatGPT outputs correct commands with pipes, but they don't work with ai-shell,
since it just executes one process without shell support for pipes. Since execa
is used, this can simply be fixed by using the shell flag.
See https://github.com/sindresorhus/execa#shell-syntax and https://github.com/sindresorhus/execa#shell for more information.
Here is an example for a perfectly fine command which works when pasted directly into bash or zsh but doesn't work with ai-shell:
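To illustrate why the shell flag matters, here is a sketch using Node's built-in child_process rather than execa itself; `execSync` hands the command string to /bin/sh, which is the behavior execa's `{ shell: true }` option opts into:

```typescript
import { execSync } from 'node:child_process';

// execSync runs the whole string through /bin/sh, so the pipe works.
// Spawning a single binary directly (execFile-style, execa's default) would
// instead pass "|" as a literal argument and fail.
const lines = execSync('printf "a\\nb\\n" | wc -l').toString().trim();
console.log(lines); // line count produced by the piped wc
```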
Would be nice to offer an option for a silent mode so we don't output the explanation:
$ ai --silent list all log files | find . -name "*.log" | [run, revise, cancel]
I'm using iTerm2, oh-my-zsh, and the Starship prompt.
When initially using this package, I was getting the 429 error. It seems as if the cursor disappears and won't come back after the failed prompt. Here is an example video:
Hey, I was thinking we could refactor this piece of code in prompt.ts. The value property on each option could be an anonymous function that does the action specified in the if sequence. That way we can get rid of the "const x = answer === y" part and the various if branches; you would then only need to call answer() to execute the action.
Unless I'm missing something, it seems like a good idea to me, and I could implement it if you want. Please let me know if I'm wrong or if you want me to do it. 😄
const answer = await p.select({
message: nonEmptyScript ? 'Run this script?' : 'Revise this script?',
options: [
...(nonEmptyScript
? [
{ label: '✅ Yes', value: 'yes', hint: 'Lets go!' },
{
label: '📝 Edit',
value: 'edit',
hint: 'Make some adjustments before running',
},
]
: []),
{
label: '🔁 Revise',
value: 'revise',
hint: 'Give feedback via prompt and get a new result',
},
{
label: '📋 Copy',
value: 'copy',
hint: 'Copy the generated script to your clipboard',
},
{ label: '❌ Cancel', value: 'cancel', hint: 'Exit the program' },
],
});
const confirmed = answer === 'yes';
const cancel = answer === 'cancel';
const revisePrompt = answer === 'revise';
const copy = answer === 'copy';
const edit = answer === 'edit';
if (revisePrompt) {
await revisionFlow(script, key, apiEndpoint, silentMode);
} else if (confirmed) {
await runScript(script);
} else if (cancel) {
p.cancel('Goodbye!');
process.exit(0);
} else if (copy) {
await clipboardy.write(script);
p.outro('Copied to clipboard!');
} else if (edit) {
const newScript = await p.text({
message: 'you can edit script here:',
initialValue: script,
});
if (!p.isCancel(newScript)) {
await runScript(newScript);
}
}
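A self-contained sketch of the proposed shape, with illustrative handler names. In ai-shell the selected value would come from p.select, and real handlers would call runScript, revisionFlow, and so on; here they return strings so the dispatch is easy to follow:

```typescript
// Map each select value to a handler so the "const x = answer === y" flags
// and the if/else chain collapse into a single dispatch call.
type Handler = () => string;

const handlers: Record<string, Handler> = {
  yes: () => 'runScript',          // stand-in for: await runScript(script)
  edit: () => 'editFlow',          // stand-in for: prompt for edits, then run
  revise: () => 'revisionFlow',    // stand-in for: await revisionFlow(...)
  copy: () => 'copyToClipboard',   // stand-in for: await clipboardy.write(script)
  cancel: () => 'exit',            // stand-in for: p.cancel('Goodbye!')
};

function dispatch(answer: string): string {
  // Unknown or cancelled answers fall back to the cancel handler.
  return (handlers[answer] ?? handlers.cancel)();
}
```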
I would like to completely refactor this library and contribute the changes to this repo, but it would take some time to complete and the code would be incompatible with any new changes that are implemented to the original repo in the mean time. Therefore, would it be reasonable to contribute the refactored source code to this repo at all, or should I just fork it into a completely separate project?
The reason behind the refactor is that I would like to change how some things work and implement some additional features that would be too complicated to implement considering the current structure of the source code.
Hi! I keep getting the error shown below when running basic "ai" commands. Any idea what it might be? I'm running it on Windows 10 with Git Bash. For some reason it doesn't work in PowerShell, but it does in cmd.exe.
$ ai How do you declare a new route in a Flask server?
┌ AI Shell
│
◇ Your script:
powershell
write-host "To declare a new route in a Flask server, you can add the route decorator to a function definition within the Flask app instance, like this: @app.route('/new_route')" -ForegroundColor Green
│
◇ Explanation:
1. Initializes a PowerShell script.
2. Prints a message using the "write-host" command.
3. The message describes how to declare a new route in a Flask server by using the "@app.route('/new_route')" decorator.
4. The message is displayed in green text using the "-ForegroundColor" parameter.
│
✖ TTY initialization failed: uv_tty_init returned EBADF (bad file descriptor)
at new SystemError (node:internal/errors:250:5)
at new NodeError (node:internal/errors:361:7)
at new WriteStream (node:tty:93:11)
at ED.prompt (file:///C:/Users/sacha/AppData/Roaming/npm/node_modules/@builder.io/ai-shell/node_modules/@clack/core/dist/index.mjs:9:693)
at Module.ee (file:///C:/Users/sacha/AppData/Roaming/npm/node_modules/@builder.io/ai-shell/node_modules/@clack/prompts/dist/index.mjs:28:7)
at runOrReviseFlow (file:///C:/Users/sacha/AppData/Roaming/npm/node_modules/@builder.io/ai-shell/dist/cli.mjs:1736:26)
at prompt (file:///C:/Users/sacha/AppData/Roaming/npm/node_modules/@builder.io/ai-shell/dist/cli.mjs:1732:9)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
ai-shell v1.0.1
Please open a Bug report with the information above:
https://github.com/BuilderIO/ai-shell/issues/new
I use an HTTP proxy to speed up access to api.openai.com.
The shell config commands look like:
export http_proxy=<proxy url>
export https_proxy=<proxy url>
I got the following errors while running ai-shell:
◒ Loading
✖ Hostname/IP does not match certificate's altnames: Host: api.openai.com. is not in the cert's altnames: DNS:*.<proxy-domain>, DNS:<proxy-domain>
at new NodeError (node:internal/errors:399:5)
at Object.checkServerIdentity (node:tls:337:12)
at TLSSocket.onConnectSecure (node:_tls_wrap:1568:27)
at TLSSocket.emit (node:events:512:28)
at TLSSocket._finishInit (node:_tls_wrap:977:8)
at ssl.onhandshakedone (node:_tls_wrap:771:12)
ai-shell v0.1.6
I checked the proxy setting; it is working fine with other shell commands, like curl.
$ curl https://api.openai.com/
{
"error": {
"message": "Invalid URL (GET /)",
"type": "invalid_request_error",
"param": null,
"code": null
}
}
I am getting this error running on my Mac Studio M1 (arm64). I have set my OPENAI_KEY
properly with the command: ai-shell config set OPENAI_KEY=<your token>
Versions:
$ node -v
v14.21.3
$ npm -v
9.6.4
$ ai-shell --version
0.0.17
$ ai 'what is my ip address'
┌ ai-shell
│
◐ Loading
(node:6025) UnhandledPromiseRejectionWarning: Error: Request failed with status code 429
at createError (/Users/fhperuchi/.nvm/versions/node/v14.21.3/lib/node_modules/@builder.io/ai-shell/node_modules/axios/lib/core/createError.js:16:15)
at settle (/Users/fhperuchi/.nvm/versions/node/v14.21.3/lib/node_modules/@builder.io/ai-shell/node_modules/axios/lib/core/settle.js:17:12)
at RedirectableRequest.handleResponse (/Users/fhperuchi/.nvm/versions/node/v14.21.3/lib/node_modules/@builder.io/ai-shell/node_modules/axios/lib/adapters/http.js:278:9)
at RedirectableRequest.emit (events.js:400:28)
at RedirectableRequest._processResponse (/Users/fhperuchi/.nvm/versions/node/v14.21.3/lib/node_modules/@builder.io/ai-shell/node_modules/follow-redirects/index.js:356:10)
at ClientRequest.RedirectableRequest._onNativeResponse (/Users/fhperuchi/.nvm/versions/node/v14.21.3/lib/node_modules/@builder.io/ai-shell/node_modules/follow-redirects/index.js:62:10)
at Object.onceWrapper (events.js:520:26)
at ClientRequest.emit (events.js:400:28)
at HTTPParser.parserOnIncomingClient (_http_client.js:647:27)
at HTTPParser.parserOnHeadersComplete (_http_common.js:127:17)
(Use `node --trace-warnings ...` to show where the warning was created)
(node:6025) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 2)
(node:6025) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit co ◑ Loading...
It got stuck at this last line.
Hello,
Thank you very much for your program.
I would like to create a script with, for example, this command:
#!/bin/bash
ai -s open nemo in the home folder
Unfortunately it doesn't work because of the prompt:
◆ Run this script?
│ ● ✅ Yes (Let's go!)
│ ○ 📝 Edit
│ ○ 🔁 Revise
│ ○ 📋 Copy
│ ○ ❌ Cancel
└
Would it be possible to add a -y argument to skip this prompt?
#!/bin/bash
ai -s -y open nemo in the home folder
Thanks ;-)
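A minimal sketch of how a -y flag could be parsed alongside the existing -s. The flag handling here is deliberately naive and is not ai-shell's actual argument parser:

```typescript
// Parse -y/--yes (skip confirmation) and -s/--silent (skip explanation),
// treating everything that isn't a flag as the natural-language prompt.
function parseFlags(argv: string[]): { yes: boolean; silent: boolean; prompt: string } {
  const yes = argv.includes('-y') || argv.includes('--yes');
  const silent = argv.includes('-s') || argv.includes('--silent');
  const prompt = argv.filter((a) => !a.startsWith('-')).join(' ');
  return { yes, silent, prompt };
}
```

With `yes` set, the script runner could bypass the select prompt entirely, which is what makes non-interactive use like `ai -s -y open nemo` possible.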
Would be nice to offer an option to use Azure OpenAI service
There is a lot of functionality there that can be an inspiration for what ai-shell can easily become, and more!
https://github.com/npiv/chatblade
I <3 ai-shell! But I have both installed right now.
I'm on the Linux Mint distribution and the package works great on version 0.1.13, but any higher version crashes on every command.
Output of any command I try starting with 'ai':
cat: /etc/upstream-release: Is a directory
node:internal/errors:867
const err = new Error(message);
^
Error: Command failed: cat /etc/*release
cat: /etc/upstream-release: Is a directory
at checkExecSyncError (node:child_process:885:11)
at execSync (node:child_process:957:15)
at osReleaseSync (/home/irian-adappty/.nvm/versions/node/v19.9.0/lib/node_modules/@builder.io/ai-shell/node_modules/@nexssp/os/lib/distro.js:21:12)
at module.exports.get (/home/irian-adappty/.nvm/versions/node/v19.9.0/lib/node_modules/@builder.io/ai-shell/node_modules/@nexssp/os/lib/distro.js:47:25)
at Object.name (/home/irian-adappty/.nvm/versions/node/v19.9.0/lib/node_modules/@builder.io/ai-shell/node_modules/@nexssp/os/legacy.js:4:20)
at getOperationSystemDetails (file:///home/irian-adappty/.nvm/versions/node/v19.9.0/lib/node_modules/@builder.io/ai-shell/dist/cli.mjs:488:13)
at file:///home/irian-adappty/.nvm/versions/node/v19.9.0/lib/node_modules/@builder.io/ai-shell/dist/cli.mjs:493:37
at ModuleJob.run (node:internal/modules/esm/module_job:193:25) {
status: 1,
signal: null,
output: [ null, stdout, stderr ],
pid: 55561,
stdout: Buffer(475) [Uint8Array], decodes to 'DISTRIB_ID=LinuxMint\nDISTRIB_RELEASE=20.3\nDISTRIB_CODENAME=una\nDISTRIB_DESCRIPTION="Linux Mint 20.3 ...' (truncated),
stderr: Buffer(43) [Uint8Array], decodes to 'cat: /etc/upstream-release: Is a directory\n'
}
Things I've tried:
My OS details:
Distributor ID: Linuxmint
Description: Linux Mint 20.3
Release: 20.3
Codename: una
Linux linux 5.4.0-147-generic #164-Ubuntu SMP Tue Mar 21 14:23:17 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Video of the issue: https://contecnika-my.sharepoint.com/:v:/g/personal/irian_caigo_es/EU1juhHdlB1BugdA3iTy0hAB5TRO_mNWaf9pgVwP8ONa2Q?e=IDzBxM
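The crash comes from `cat /etc/*release` matching /etc/upstream-release, which is a directory on Linux Mint. A hedged sketch of a glob-free alternative that skips directories; `readOsRelease` is illustrative, not the actual code path, which goes through @nexssp/os:

```typescript
import { readFileSync, readdirSync, statSync } from 'node:fs';
import { join } from 'node:path';

// Read every regular file in /etc whose name ends in "release", skipping
// directories like /etc/upstream-release that make `cat /etc/*release`
// exit non-zero and crash startup.
function readOsRelease(etcDir = '/etc'): string {
  return readdirSync(etcDir)
    .filter((name) => name.endsWith('release'))
    .map((name) => join(etcDir, name))
    .filter((p) => statSync(p).isFile()) // the key difference from the glob
    .map((p) => readFileSync(p, 'utf8'))
    .join('\n');
}
```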
I am not a bash wizard, and am dying for access to the copilot CLI, and got impatient.
Hey! In the prompt.ts
file, I see an option to copy the script in the runOrReviseFlow
method. However, I don't see that as an option when I'm using the tool. Is there a setting I'm missing to have it show the copy option?
Have you considered using langchain plus some truly open-source model?
There's also a JS port of langchain + alpaca: https://github.com/linonetwo/langchain-alpaca
I can't recommend any use-case-specific open model, but perhaps this is something that could be easily fine-tuned by feeding it man pages plus examples?
How do I customize the endpoint that the stream comes from? (For example, I'm a developer with my own server running a ChatGPT proxy; I want users to call my server, which fetches the data from the proxy server and returns it to the client.)
connect ETIMEDOUT 31.13.94.49:443
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1195:16)
ai-shell v0.1.13
Please open a Bug report with the information above:
https://github.com/BuilderIO/ai-shell/issues/new
Right now this assumes you use bash on macOS or Linux. It would be great to support Windows too, e.g. by noting the detected shell in the prompt we generate.
Support Chinese?
I have configured the OpenAI API endpoint to use a Cloudflare proxy API (e.g. https://github.com/x-dr/chatgptProxyAPI), and I have encountered an issue where the generated script is incorrect.
Upon investigation, I found that this may be because a single "data:" line is split into multiple chunks, a situation the code in stream-to-iterable.ts doesn't account for. This causes errors in subsequent processing, leading to the incorrect script.
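A minimal sketch of the kind of line buffering that handles a "data:" line split across chunks; `makeLineBuffer` is illustrative, not the actual stream-to-iterable.ts code:

```typescript
// Accumulate partial chunks until a newline arrives, so a "data:" line that
// a proxy split across network chunks is reassembled before parsing.
function makeLineBuffer() {
  let buf = '';
  return function push(chunk: string): string[] {
    buf += chunk;
    const parts = buf.split('\n');
    buf = parts.pop() ?? ''; // keep the trailing partial line for later
    return parts.filter((line) => line.startsWith('data:'));
  };
}
```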
Thank you for your attention and support, and I look forward to your response.
I have noticed that the Explanation currently only responds in English. Would it be possible to allow users to choose the language of the response, to make it more user-friendly for more people?
Hello,
The "update" command steals the user input:
ai update all packages
Running: npm update -g @builder.io/ai-shell
up to date in 426ms
18 packages are looking for funding
run `npm fund` for details
Another example:
ai update current time
Running: npm update -g @builder.io/ai-shell
It should only run when no other arguments are passed.
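A sketch of the check being requested, with an illustrative function name:

```typescript
// Only treat "update" as the self-update command when it is the sole
// argument; otherwise pass the whole phrase through as a natural-language
// prompt, so `ai update all packages` is not hijacked.
function isSelfUpdate(args: string[]): boolean {
  return args.length === 1 && args[0] === 'update';
}
```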
It would be nice to allow users to save commands for later use, or load them from a file.