skaplanofficial / raycast-promptlab

A Raycast extension for creating powerful, contextually-aware AI commands using placeholders, action scripts, selected files, and more.

Home Page: https://www.raycast.com/HelloImSteven/promptlab

Languages: TypeScript 93.08%, AppleScript 6.92%
Topics: ai, automation, extension, mac, macos, prompts, raycast, raycast-extension, chatgpt-api, prompt-engineering

raycast-promptlab's Introduction

PromptLab

PromptLab Logo

PromptLab is a Raycast extension for creating and sharing powerful, contextually-aware AI commands using placeholders, action scripts, and more.

PromptLab allows you to create custom AI commands with prompts that utilize contextual placeholders such as {{selectedText}}, {{todayEvents}}, or {{currentApplication}} to vastly expand the capabilities of Raycast AI. PromptLab can also extract information from selected files, if you choose, so that it can tell you about the subjects in an image, summarize a PDF, and more.

PromptLab also supports "action scripts" -- AppleScripts which run with the AI's response as input, as well as experimental autonomous agent features that allow the AI to run commands on your behalf. These capabilities, paired with PromptLab's extensive customization options, open a whole new world of possibilities for enhancing your workflows with AI.

Install PromptLab | My Other Extensions | Donate

PromptLab - Raycast extension for creating context-aware AI commands | Product Hunt

Table Of Contents

Feature Overview

  • Create, Edit, Run, and Share Custom Commands
  • Detail, List, Chat, and No-View Command Types
  • Utilize Numerous Contextual Placeholders in Prompts
  • Use AppleScript, JXA, Shell Scripts, and JavaScript Placeholders
  • Obtain Data from External APIs, Websites, and Applications
  • Analyze Content of Selected Files
  • Extract Text, Subjects, QR Codes, etc. from Images and Videos
  • Quick Access to Commands via Menu Bar Item
  • Import/Export Commands
  • Save & Run Commands as Quicklinks with Optional Input Parameter
  • Run AppleScript or Bash Scripts Upon Model Response
  • Execute Siri Shortcuts and Use Their Output in Prompts
  • PromptLab Chat with Autonomous Command Execution Capability
  • Multiple Chats, Chat History, and Chat Statistics
  • Chat-Specific Context Data Files
  • Upload & Download Commands To/From PromptLab Command Store
  • Use Custom Model Endpoints with Synchronous or Asynchronous Responses
  • Favorite Commands, Chats, and Models
  • Optionally Speak Responses and Provide Spoken Input
  • Create Custom Placeholders with JSON

Top-Level Commands

  • New PromptLab Command
    • Create a custom PromptLab command accessible via 'My PromptLab Commands'.
  • My PromptLab Commands
    • Search and run custom PromptLab commands that you've installed or created.
  • Manage Models
    • View, edit, add, and delete custom models.
  • PromptLab Command Store
    • Explore and search commands uploaded to the store by other PromptLab users.
  • PromptLab Chat
    • Start a back-and-forth conversation with AI with selected files provided as context.
  • PromptLab Menu Item
    • Displays a menu of PromptLab commands in your menu bar.
  • Import PromptLab Commands
    • Add custom commands from a JSON string.

Images

PromptLab 1.0.0 Launch Features

PromptLab 1.1.0 Update Features

Demo videos: DogSVG.webm, EditCommand.webm, InstallAll.webm, CPUPerformance.webm

View more images in the gallery.

Create Your Own Commands

You can create custom PromptLab commands, accessed via the "My PromptLab Commands" command, to execute your own prompts acting on the contents of selected files. A variety of useful defaults are provided, and you can find more in the PromptLab Command Store.

Placeholders

When creating custom commands, you can use placeholders in your prompts that will be substituted with relevant information whenever you run the command. These placeholders range from simple information, like the current date, to complex data retrieval operations such as getting the content of the most recent email or running a sequence of prompts in rapid succession and amalgamating the results. Placeholders are a powerful way to add context to your PromptLab prompts.

A few examples of placeholders are:

| Placeholder | Replaced With |
| --- | --- |
| {{clipboardText}} | The text content of your clipboard |
| {{selectedFiles}} | The paths of the files you have selected |
| {{imageText}} | Text extracted from the image(s) you have selected |
| {{lastNote}} | The HTML of the most recently modified note in the Notes app |
| {{date format="d MMMM, yyyy"}} | The current date, optionally specifying a format |
| {{todayEvents}} | The events scheduled for today, including their start and end times |
| {{youtube:[search term]}} | The transcription of the first YouTube video result for the specified search term |
| {{prompt:...}} | The result of running the specified prompt |
| {{url:[url]}} | The visible text at the specified URL |
| {{as:...}} | The result of the specified AppleScript code |
| {{js:...}} | The result of the specified JavaScript code |

These are just a few of the many placeholders available. View the full list here. You can even create your own placeholders using JSON, if you want!
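
A single prompt can also combine several placeholders at once. The prompt below is only an illustrative sketch built from placeholders listed above:

    Here is my schedule for today: {{todayEvents}}. Considering the text currently on my clipboard ({{clipboardText}}), suggest which items I should prioritize and why.

When the command runs, each placeholder is substituted with the corresponding data before the prompt is sent to the model.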

Action Scripts

When configuring a PromptLab command, you can provide AppleScript code to execute once the AI finishes its response. You can access the response text via the response variable in AppleScript. Several convenient handlers for working with the response text are also provided, as listed below. Action Scripts can be used to build complex workflows using AI as a content provider, navigator, or decision-maker.

Provided Variables

| Variable | Value | Type |
| --- | --- | --- |
| input | The selected files or text input provided to the command. | String |
| prompt | The prompt component of the command that was run. | String |
| response | The full response received from the AI. | String |

Provided Handlers

| Handler | Purpose | Returns |
| --- | --- | --- |
| split(theText, theDelimiter) | Splits text around the specified delimiter. | List of String |
| trim(theText) | Removes leading and trailing spaces from text. | String |
| replaceAll(theText, textToReplace, theReplacement) | Replaces all occurrences of a string within the given text. | String |
| rselect(theArray, numItems) | Randomly selects the specified number of items from a list. | List |
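
As an illustration, the minimal action script below uses the provided response variable and handlers to show the first line of the AI's output as a notification. It is only a sketch; the notification step is an arbitrary choice, and any AppleScript logic could take its place.

    -- Split the response into lines, trim the first one, and display it.
    -- response, split, and trim are provided by PromptLab; the rest is ordinary AppleScript.
    set theLines to split(response, linefeed)
    set firstLine to trim(item 1 of theLines)
    display notification firstLine with title "PromptLab"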

Custom Configuration Fields

When creating a command, you can use the Unlock Setup Fields action to enable custom configuration fields that must be set before the command can be run. You'll then be able to use actions to add text fields, boolean (true/false) fields, and/or number fields, providing instructions as you see fit. In your prompt, use the {{config:fieldName}} placeholder, camel-cased, to insert the field's current value. When you share the command to the store and others install it, they'll be prompted to fill out the custom fields before they can run the command. This is a great way to make your commands more flexible and reusable.
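
For instance, a command with a custom text field named "Target Language" (a hypothetical field used purely for illustration) could reference it in its prompt like this:

    Translate the following text into {{config:targetLanguage}}, preserving the original formatting: {{selectedText}}

Anyone who installs the command from the store is then asked to fill in the "Target Language" field before running it.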

Chats, Context Data, Statistics, and More

Chats

Using the "PromptLab Chat" command, you can chat with AI while making use of features like placeholders and selected file contents. Chats are preserved for later reference or continuation, and you can customize each chat's name, icon, color, and other settings. Chats can have "Context Data" associated with them, ensuring that the LLM stays aware of the files, websites, and other information relevant to your conversation. Within a chat's settings, you can view various statistics highlighting how you've interacted with the AI, and you can export the chat's contents (including the statistics) to JSON for portability.

Autonomous Agent Features

When using PromptLab Chat, or any command that uses a chat view, you can choose to enable autonomous agent features by checking the "Allow AI To Run Commands" checkbox. This will allow the AI to run PromptLab commands on your behalf, supplying input as needed, in order to answer your queries. For example, if you ask the AI "What's the latest news?", it might run the "Recent Headlines From 68k News" command to fulfil your request, then return the results to you. This feature is disabled by default, and can be enabled or disabled at any time.

Installation

PromptLab is now available on the Raycast extensions store! Download it now.

Alternatively, you can install the extension manually from this repository by following the instructions below.

Manual Installation

git clone https://github.com/SKaplanOfficial/Raycast-PromptLab.git && cd Raycast-PromptLab

npm install && npm run dev

Custom Model Endpoints

When you first run PromptLab, you'll have the option to configure a custom model API endpoint. If you have access to Raycast AI, you can just leave everything as-is, unless you have a particular need for a different model. You can, of course, adjust the configuration via the Raycast preferences at any time.

To use any arbitrary endpoint, put the endpoint URL in the Model Endpoint preference field and provide your API Key alongside the corresponding Authorization Type. Then, specify the Input Schema in JSON notation, using {prompt} to indicate where PromptLab should input its prompt. Alternatively, you can specify {basePrompt} and {input} separately, for example if you want to provide content for the user and system roles separately when using the OpenAI API. Next, specify the Output Key of the output text within the returned JSON object. If the model endpoint returns a string, rather than a JSON object, leave this field empty. Finally, specify the Output Timing of the model endpoint. If the model endpoint returns the output immediately, select Synchronous. If the model endpoint returns the output asynchronously, select Asynchronous.
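
As a concrete sketch of the {basePrompt}/{input} option, an OpenAI-style Input Schema that sends the command's base prompt as the system message and the user's input as the user message might look roughly like this (an illustrative example, not a definitive configuration):

    { "model": "gpt-4", "messages": [{"role": "system", "content": "{basePrompt}"}, {"role": "user", "content": "{input}"}], "stream": true }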

Anthropic API Example

To use Anthropic's Claude API as the model endpoint, configure the extension as follows:

| Preference Name | Value |
| --- | --- |
| Model Endpoint | https://api.anthropic.com/v1/complete |
| API Authorization Type | X-API-Key |
| API Key | Your API key |
| Input Schema | { "prompt": "\n\nHuman: {prompt}\n\nAssistant: ", "model": "claude-instant-v1-100k", "max_tokens_to_sample": 300, "stop_sequences": ["\n\nHuman:"], "stream": true } |
| Output Key Path | completion |
| Output Timing | Asynchronous |

OpenAI API Example

To use the OpenAI API as the model endpoint, configure the extension as follows:

| Preference Name | Value |
| --- | --- |
| Model Endpoint | https://api.openai.com/v1/chat/completions |
| API Authorization Type | Bearer Token |
| API Key | Your API key |
| Input Schema | { "model": "gpt-4", "messages": [{"role": "user", "content": "{prompt}"}], "stream": true } |
| Output Key Path | choices[0].delta.content |
| Output Timing | Asynchronous |

Troubleshooting

If you encounter any issues with the extension, you can try the following steps to resolve them:

  1. Make sure you're running the latest version of Raycast and PromptLab. I'm always working to improve the extension, so it's possible that your issue has already been fixed.
  2. If you're having trouble with a command not outputting the desired response, try adjusting the command's configuration. You might just need to make small adjustments to the wording of the prompt. See the Useful Resources section below for help with prompt engineering. You can also try adjusting the included information settings to add or remove context from the prompt and guide the AI towards the desired response.
  3. If you're having trouble with PromptLab Chat responding in unexpected ways, make sure the chat settings are configured correctly. If you are trying to reference selected files, you need to enable "Use Selected Files As Context". Likewise, to run other PromptLab commands automatically, you need to enable "Allow AI To Run Commands". To have the AI remember information about your conversation, you'll need to enable "Use Conversation As Context". Having multiple of these settings enabled can sometimes cause unexpected behavior, so try disabling them one at a time to see if that resolves the issue.
  4. Check the PromptLab Wiki to see if a solution to your problem is provided there.
  5. If you're still having trouble, create a new issue on GitHub with a detailed description of the issue and any relevant screenshots or information. I'll do my best to help you out!

Contributing

Contributions are welcome! Please see the contributing guidelines for more information.

Roadmap

Current Release: v1.2.0

  • Create, Edit, and Run Custom Commands
  • Detail, List, Chat, and No-View Command Types
  • Placeholders in Prompts
  • Get Content of Selected Files
  • Extract Text, Subjects, QR Codes, etc. from Images
  • Import/Export Commands
  • Run AppleScript or Bash Scripts On Model Response
  • PromptLab Chat with Autonomous Command Execution Capability
  • Upload & Download Commands To/From PromptLab Command Store
  • Custom Model Endpoints with Synchronous or Asynchronous Responses
  • Save & Run Commands as Quicklinks
  • Video Feature Extraction example
  • Switch Between Chats & Export Chat History example
  • Auto-Compress Chat History
  • Chat Settings
  • Command Setup On Install
  • Spoken Responses
  • Voice Input
  • New Placeholders
    • Persistent Variables
    • Flow Control Directives
    • Configuration Placeholders
    • JS Sandbox
  • Manage Models example
  • Menu Bar Extra example
  • Placeholders Guide
  • Record Previous Runs of a Command and Use Them as Input

Next Release: v1.3.0

Planned

  • Saved Responses
  • Command Templates
  • Improved Chat UI

Possible

  • TF-IDF
  • Autonomous Web Search
  • LangChain Integration

Future Releases

  • Dashboard
  • Chat Merging
  • GPT Function Calling
  • New Placeholders

Useful Resources

| Link | Category | Description |
| --- | --- | --- |
| Best practices for prompt engineering with OpenAI API | Prompt Engineering | Strategies for creating effective ChatGPT prompts, from OpenAI itself |
| Brex's Prompt Engineering Guide | Prompt Engineering | A guide to prompt engineering, with examples and in-depth explanations |
| Techniques to improve reliability | Prompt Engineering | Strategies for improving the reliability of GPT responses |


raycast-promptlab's Issues

Azure OpenAI

Any plans to support Azure OpenAI endpoints? I've tried it with what seems to be the correct settings, but no luck yet.

I'm having difficulty debugging what's actually happening. All I've been able to see is that the chat gets named "Unauthorized".

Azure OpenAI shouldn't be much different from regular OpenAI from what I can tell.

“Bad Request” on OpenAI

Steps to reproduce:

  • Install latest version from store (1.2.1).
  • Configure with the OpenAI example here and a valid OpenAI API key.
  • Select PromptLab > New Chat
  • Enter any text.
  • Run the command.

Results

"Bad Request"

Error when opening "My PromptLab Commands"

Hi,

I installed your extension earlier today and it's fantastic. It was working fine but now when I go to 'My PromptLab Commands' I get the following error:


TypeError: command.acceptedFileExtensions?.split is
not a function
SearchCommand:search-commands.tsx:27:2

24: import path from "path";
25: 
26: export default function SearchCommand(props: { arguments: { commandName: string; queryInput: string } }) {
27:   const { commandName, queryInput } = props.arguments;
     ^
28:   const [commands, setCommands] = useState<Command[]>();
29:   const [targetCategory, setTargetCategory] = useState<string>("All");
30:   const [searchText, setSearchText] = useState<string | undefined>(

SearchCommand:search-commands.tsx:27:2
Nr:index.js:6:2490
    at ray-navigation-stack
_o:index.js:6:2088

Finder - prompt-sets - 13-05-2023 - 20 11 01@2x

I installed a few batches of commands from the store. Most worked well. One issue I had, almost certainly unrelated, was when summarising PDF with filename Walter Murch - In the Blink of an Eye Revised 2nd Edition (2001, Silman-James Pr).pdf, it didn't like it and gave me the error:

Error: ENOENT: no such file or directory, lstat '/Users/Alex/Downloads/Walter Murch - In the Blink of an Eye Revised 2nd Edition (2001'
Object.lstatSync (node:fs:1569:3)

Finder - Downloads - 13-05-2023 - 20 14 22@2x

Renaming it to test.pdf worked and then everything seemed fine after.

The last command I tried to run was 'summarise spoken audio' which gave me an error that unfortunately I didn't capture.

Thanks,

Alex

Switch browser or mail app

Hi, how can I edit the Summarize Current Tab or Summarize Last Email commands so that they use different default apps (Edge instead of Safari, Outlook instead of Mail)?

OpenAI endpoint issues with prompts

Hello! I am using the OpenAI endpoint.
When I run TechCrunch News or Summarize GitHub trending, I get responses like

As an AI language model, I do not have the ability to analyze today’s trending GitHub repositories as they change frequently, but I can provide a summary of the current trending repositories based on the given link.

It seems that the content from the URL is not being parsed into the prompt.

Models are not saved in the commands

Here are the steps to reproduce:

  1. Open "My PromptLab Commands"
  2. With any command, open "Edit command" menu
  3. Scroll down and change the model from the currently selected one to any other
  4. Save the command

Expected result: the command is saved with the new model, and each command run produces different results because it uses a different model.
Actual result: the command is not saved. If you go back to the "Edit Command" menu, the model is the same as before, and the results of running the command are similar to the previous ones.

Model is not supported - Failed to fetch data

Error:

Error: Model is not supported
    at Rr (/Applications/Raycast.app/Contents/Resources/RaycastCommands_RaycastCommands.bundle/Contents/Resources/api/node_modules/@raycast/server/index.js:25:230)
    at Ye (/Applications/Raycast.app/Contents/Resources/RaycastCommands_RaycastCommands.bundle/Contents/Resources/api/node_modules/@raycast/server/index.js:24:6249)
    at ht (/Applications/Raycast.app/Contents/Resources/RaycastCommands_RaycastCommands.bundle/Contents/Resources/api/node_modules/@raycast/server/index.js:24:6374)
    at Immediate.<anonymous> (/Applications/Raycast.app/Contents/Resources/RaycastCommands_RaycastCommands.bundle/Contents/Resources/api/node_modules/@raycast/server/index.js:24:6139)
    at process.processImmediate (node:internal/timers:478:21)

When I launch a command, the error appears at the bottom left.

The error does not prevent the command from working, but it is a little annoying.
I am using a local model hosted with Ollama. I have tried configuring the model using both the extension settings and the model manager.
At this moment, I am not sure what the extension is trying to do, but my command works fine.

JSON parsing errors

When I run the summarise github trending command, it pulls out the first entry but outputs this to the terminal. I am using the OpenAI endpoint.

11:28:03.889 Failed to get JSON from model output
11:28:03.889 Warning: Cannot update a component (`CommandResponse`) while rendering a different component (`CommandResponse`). To locate the bad setState() call inside `CommandResponse`, follow the stack trace as described in https://reactjs.org/link/setstate-in-render
    at CommandResponse (/Users/hsai002/.config/raycast/extensions/promptlab/search-commands.js:16447:11)
    at Nr (/Applications/Raycast.app/Contents/Resources/RaycastCommands_RaycastCommands.bundle/Contents/Resources/api/node_modules/@raycast/api/index.js:6:2490)
    at ray-navigation-stack
    at _o (/Applications/Raycast.app/Contents/Resources/RaycastCommands_RaycastCommands.bundle/Contents/Resources/api/node_modules/@raycast/api/index.js:6:2088)
    at Suspense
    at wr (/Applications/Raycast.app/Contents/Resources/RaycastCommands_RaycastCommands.bundle/Contents/Resources/api/node_modules/@raycast/api/index.js:5:2543)
    at ray-root
    at ti (/Applications/Raycast.app/Contents/Resources/RaycastCommands_RaycastCommands.bundle/Contents/Resources/api/node_modules/@raycast/api/index.js:6:3000)

Change language

Hey, I just tried this extension and it is so good! Thank you very much! Is there any way that I can configure the AI (I'm using OpenAI API with my own key), to respond in German? Currently, even when I feed the AI with German content, it responds in English.

Plugin broken with Raycast 1.51.0 (Pro)

I've been enjoying Raycast-PromptLab since discovering it this morning, until this afternoon's Raycast update (1.51.0) which officially released Raycast AI... 😅

No matter which prompt I use, I get the same message, even after subscribing to the Raycast Pro membership.

Thank you for your work on Raycast-PromptLab!

Screenshot 2023-05-10 at 13 41 50

Issue retrieving calendar events

Hi, PromptLab is having trouble accessing my calendar events. When I run the prompt 'Today's Agenda' that comes with the extension, I get the following error:

Error: Command failed with exit code 1: osascript -e use framework "EventKit"
      property ca : current application
      
      set eventStore to ca's EKEventStore's alloc()'s init()
      eventStore's reset()
      eventStore's requestAccessToEntityType:((get ca's EKEntityMaskEvent) + (get ca's EKEntityMaskReminder)) completion:(missing value)
      delay 0.1
      
      set startDate to ca's NSDate's |date|()
      
      set calendar to ca's NSCalendar's currentCalendar()
      set dateComponents to ca's NSDateComponents's alloc()'s init()
      dateComponents's setDay:1
      set endDate to calendar's dateByAddingComponents:dateComponents toDate:startDate options:(ca's NSCalendarMatchStrictly)
      
      set remindersPredicate to eventStore's predicateForIncompleteRemindersWithDueDateStarting:startDate ending:endDate calendars:(missing value)
      set upcomingReminders to eventStore's remindersMatchingPredicate:remindersPredicate
      set theRemindersData to {title, dueDate} of upcomingReminders
      
      set theReminders to {}
      repeat with index from 1 to (count of upcomingReminders)
        set eventTitle to (item index of item 1 of theRemindersData) as text
        set eventDueDate to item index of item 2 of theRemindersData
        
        set dueDateFormatter to ca's NSDateFormatter's alloc()'s init()
        (dueDateFormatter's setDateFormat:"MMMM dd, YYYY 'at' HH:mm a")
        set eventDueString to (dueDateFormatter's stringFromDate:eventDueDate)
        
        set reminderInfo to eventTitle & " on " & eventDueString
        copy reminderInfo to end of theReminders
      end repeat
      
      return theReminders 
932:948: execution error: Can’t get title of missing value. (-1728)

      property ca : current application
      
      set eventStore to ca's EKEventStore's alloc()'s init()
      eventStore's reset()

I tried creating my own prompt using {{weekEvents}} but it returns nothing, even though there are plenty of events on my calendar.

Any help would be appreciated! Thank you.

Get no response with OpenAI

I tried to run Commands with this extension

My configuration is as follows:

{ "model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "{prompt}"}]}

The problem is that the request is sent and I am billed by OpenAI, but the extension says it got no response.

Using Mistral through Jan.ai with this extension

I recently came across this library called Jan.ai, which allows you to run offline models such as Mistral Instruct, which is really powerful. Apparently, this tool has the same API as OpenAI.

Is it possible to use it with this extension for Raycast? If so, are there any resources I could read about this? And if you need my help, please do tell.

This is an example of how I can access the model using their API which is the same as OpenAI's:

curl http://localhost:1337/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer EMPTY" \
  -d '{
     "model": "mistral-ins-7b-q4",
     "messages": [{"role": "user", "content": "Say this is a test!"}],
     "temperature": 0.7
   }'

Reply:

{"choices":[{"finish_reason":null,"index":0,"message":{"content":" I understand that you have asked me to say that \"this is a test.\" Here is that statement for you: \"This is a test.\" Is there anything specific you would like me to do with this test, or is it simply for my understanding that we are conducting a test? Let me know if there's anything else I can assist you with.","role":"assistant"}}],"created":1705970517,"id":"tWF3q25T3yaeF96K2bVF","model":"_","object":"chat.completion","system_fingerprint":"_","usage":{"completion_tokens":72,"prompt_tokens":14,"total_tokens":86}}% 

Inclusion of placeholders causes prompts to load indefinitely

Hi, PromptLab has been working fairly well for me when used without placeholders (anything with {{ }}). However, as soon as I include terms like {{selectedText}}, the prompts show "loading response" indefinitely. I'm using my personal (not Raycast) OpenAI endpoint. Is there any way to diagnose what exactly the issue is?
