
alondmnt / joplin-plugin-jarvis


Joplin (note-taking) assistant running a very intelligent system (OpenAI/GPT, Hugging Face, Gemini, Llama, Universal Sentence Encoder, etc.)

License: GNU Affero General Public License v3.0

TypeScript 93.05% JavaScript 5.61% CSS 1.18% Python 0.17%
assistant chatgpt gpt-3 gpt-4 gpt4all huggingface llm note-taking palm semantic-search

joplin-plugin-jarvis's People

Contributors

alondmnt, dwinkler1, hegghammer, jakubjezek001, ryanfreckleton, wladefant


joplin-plugin-jarvis's Issues

Clearly document exactly which data is sent for each type of query

Thank you for building and sharing this software!

As a potential user of this plugin I feel nervous about having my entire second brain uploaded to a third party service. I like the idea of doing simple chats though. What I would like to see in the documentation is a separate, clear, standardised indication of which data is uploaded for each of the types of queries listed.

For example:

  • Chat:
  • Start a new note, or continue an existing conversation in a saved note. Place the cursor after your prompt and run the command Chat with Jarvis (from the toolbar or Tools/Jarvis menu). Each time you run the command Jarvis will append its response to the note at the current cursor position (given the previous content that both of you created). If you don't like the response, run the command again to replace it with a new one.
  • Data shared: Only the current note you are in.

Not sure if the above 'Data shared' annotation is actually correct, which is why I think it should be clearer to the user. I'm happy to submit a PR implementing this if you can let me know the details of the data shared. Thank you!

User Feedback

Can we have some feedback on what's happening after a request is sent? An indicator that we're awaiting a response, and an error message if there is one?

For instance, I might press Ctrl+Shift+C and the border of the textarea starts glowing for the duration of whatever timeout period we're waiting on. If an error comes back, the textarea stops glowing and the message is displayed to the user. If there is no response at all, the glow stops once the timeout is reached and the error "Request timed out. Check your connection." is displayed.
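The awaiting-response behavior described above could be built on a simple timeout wrapper. A minimal sketch in TypeScript (the plugin's language); `setGlow` and `showError` are hypothetical UI hooks, not part of the current plugin:

```typescript
// Race a request against a timer; whichever settles first wins.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error('Request timed out. Check your connection.')),
      ms,
    );
    promise.then(
      (value) => { clearTimeout(timer); resolve(value); },
      (err) => { clearTimeout(timer); reject(err); },
    );
  });
}

// Hypothetical usage: glow while waiting, show any error that surfaces.
async function sendWithFeedback<T>(request: Promise<T>, timeoutMs: number,
    setGlow: (on: boolean) => void, showError: (msg: string) => void): Promise<T | undefined> {
  setGlow(true);
  try {
    return await withTimeout(request, timeoutMs);
  } catch (err) {
    showError((err as Error).message);
    return undefined;
  } finally {
    setGlow(false);  // stop glowing on success, error, or timeout alike
  }
}
```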

Question: Jarvis throwing errors in development tools

When Jarvis is enabled, it keeps throwing this error in the Joplin dev tools:

models/Setting: Could not set setting on the keychain. Will be saved to database instead: plugin-joplin.plugin.alondmnt.jarvis.springer_api_key: Error: Password is required.

Jarvis is working fine with the API key for GPT, so I don't know why it keeps throwing this error over and over.

Joplin 2.14.22 (prod, win32)
Client ID: 19f20674be77445bbd5535190049d93c
Sync Version: 3
Profile Version: 46
Keychain Supported: Yes
Revision: e579eb9

Feature Request: Separate Chatbox for Enhanced AI Interaction in Joplin

https://github.com/logancyang/obsidian-copilot

Description:

This feature request proposes the addition of a dedicated chatbox interface within Joplin, enhancing the user experience for those utilizing AI-driven conversation features. Inspired by similar functionalities in other note-taking applications, this feature aims to provide a focused and interactive environment for users engaging with AI in Joplin.

Key Features of the Proposed Chatbox:

  1. Dedicated Chat Interface: Implementing a separate chatbox window in the Joplin interface, dedicated to AI interactions. This design will allow for a more immersive and focused conversation experience.

  2. Integration with Notes: Enabling the chatbox to use the currently open note as context, thereby improving the relevance and accuracy of AI responses within the chat.

  3. One-Click Functionality:

    • Save Conversation: Allowing users to save the entire chat conversation into a new or existing note with a single click.
    • Copy to Note: Facilitating the copying of selected chat portions directly into a note with ease.

Rationale for the Request:

The addition of a dedicated chatbox is intended to streamline the AI interaction experience within Joplin, making it more accessible, efficient, and user-friendly. Such a feature would provide a distinct space for AI conversations, separate from the main note-editing area, thus minimizing distractions and improving overall focus. The capability to seamlessly save and incorporate chat contents into notes would greatly enhance the practicality of AI features for a variety of note-taking and research activities.

Anticipated Impact:

The implementation of this feature is expected to substantially increase the productivity and efficiency of users who rely on AI assistance for various purposes, including note-taking and brainstorming. By introducing this advanced AI integration feature, Joplin could attract new users and align more closely with the evolving needs and expectations of its user base.

Your consideration of this feature request is greatly appreciated. A dedicated chatbox would mark a significant enhancement in the functionality and user experience of AI integration within Joplin.

OpenAI proxy

Hello, could you please add an option to customize the OpenAI proxy in the settings?

Local Open Source Options

First thanks for the energy and effort that has been put into this plugin.

I found two additional local opensource options

I discovered https://github.com/janhq/jan when I was researching LM Studio. I came across it in an LM Studio post on Reddit that expressed both praise for LM Studio and concerns about its future as it goes increasingly commercial.

Someone from the jan.ai team also posted this https://twitter.com/janframework/status/1745472833579540722?t=osxIAvq8ztXuDbNAm11thA praising Ava

I just wanted to put these on people's radar.

I am a little out of my element here, but I do hope to figure out a way to make this plugin work with my Joplin using the guide.

Edited: corrected links

"Updating note database" appears to be stuck

I have a lot of notes, most of which were imported from Evernote as HTML. When Jarvis starts to update the note database, it gets to a certain number of notes, usually 250, before not progressing any further. I haven't noticed any logs in DevTools from Jarvis, and the "Universal Sentence Encoder" file exists and has some data in it. And occasionally the related notes search comes up and I can actually use it, but not often.

I wonder if one of my notes is causing the issue, but since I didn't find logs, I wouldn't know which one.


Could LM Studio and its implementation of locally run open-source models work in Jarvis?

https://lmstudio.ai

It can provide an OpenAI-style API.

I am no expert; I just found both your project and this tonight.

Here's the code they suggest for Python:

# Example: reuse your existing OpenAI setup
import os
import openai

openai.api_base = "http://localhost:1234/v1" # point to the local server
openai.api_key = "" # no need for an API key

completion = openai.ChatCompletion.create(
  model="local-model", # this field is currently unused
  messages=[
    {"role": "system", "content": "Always answer in rhymes."},
    {"role": "user", "content": "Introduce yourself."}
  ]
)

print(completion.choices[0].message)
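Since the plugin itself is written in TypeScript, the same request against LM Studio's OpenAI-compatible endpoint might look roughly like the sketch below. This assumes an LM Studio server listening on http://localhost:1234 as in the Python snippet; `chatWithLocalModel` is an illustrative name, not an existing plugin function:

```typescript
// Request body mirroring the Python example; LM Studio currently ignores
// the model field and requires no API key.
const body = {
  model: 'local-model',
  messages: [
    { role: 'system', content: 'Always answer in rhymes.' },
    { role: 'user', content: 'Introduce yourself.' },
  ],
};

// POST to the local OpenAI-compatible chat completions route and return
// the assistant's reply text.
async function chatWithLocalModel(): Promise<string> {
  const res = await fetch('http://localhost:1234/v1/chat/completions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```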

GPT-4?

How about support for GPT-4?

Azure OpenAI support?

Ref #9, where it seems like Azure OpenAI should work but may be a bit untested.

I am very intrigued by your project, so I am trying to make it work with Azure. However, I am getting some error messages and I am not quite sure how to set it up. I am trying Jarvis: Chat for now. Are you able to give me some hints?

I've tried these two alternatives:

Attempt 1:

  Setting                                       Value
  Model: OpenAI API Key                         my-key-xyz-123
  Chat: Model (online/offline)                  OpenAI or compatible: custom model
  Chat: Custom model API endpoint               https://xxxxx.openai.azure.com
  Chat: OpenAI (or compatible) custom model ID  gpt-4o (also tried gpt-4)

  Error message from Jarvis: Resource not found

Attempt 2:

  Setting                                       Value
  Model: OpenAI API Key                         my-key-xyz-123
  Chat: Model (online/offline)                  OpenAI or compatible: custom model
  Chat: Custom model API endpoint               xxxxx.openai.azure.com
  Chat: OpenAI (or compatible) custom model ID  gpt-4o (also tried gpt-4)

  Error message from Jarvis: Request Timeout (60 sec)
  (Note that this error appears after 1-2 seconds.)

I have a small Python script that I've used to verify my Azure OpenAI endpoint, API key, and model names.

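One likely cause of the "Resource not found" error: Azure OpenAI does not serve the plain OpenAI `/v1/chat/completions` route. Requests go to a deployment-specific path with an `api-version` query parameter, and the key is sent in an `api-key` header rather than `Authorization: Bearer`. A sketch of the expected URL shape (the resource name, deployment name, and api-version below are placeholders):

```typescript
// Build the Azure OpenAI chat completions URL for a given resource,
// deployment (not model!) name, and API version.
function azureChatUrl(resource: string, deployment: string, apiVersion: string): string {
  return `https://${resource}.openai.azure.com/openai/deployments/${deployment}` +
    `/chat/completions?api-version=${apiVersion}`;
}

// e.g. azureChatUrl('xxxxx', 'gpt-4o', '2024-02-01')
```

So pointing a generic OpenAI-compatible client at the bare `https://xxxxx.openai.azure.com` resource URL would miss both the deployments path and the query parameter, which may explain the errors above.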
Allow excluding or including by notebook?

Thanks for the plugin, which looks very interesting! It would be useful if I could exclude an entire notebook at once - or conversely, designate which notebooks I want included - rather than only individual notes. I have a huge notebook that I wouldn't want to include, and if I modified the thousands of notes in it to add the exclude tag, they would all show that new modification date.

option to split by paragraphs

split by min(max_tokens, tokens_of_X_paragraphs)

pro:

  • splitting by paragraphs should result in more semantically self-contained embeddings than splitting by words
  • thus better semantic search

cons:

  • likely more storage needed
  • (only once) longer init
// pseudo code anyway
embeddings = []
cur_split, cur_token = [], 0
for (p in paragraphs) {
  tokens_p = calc_tokens(p);
  if ((cur_token + tokens_p) >= max_tokens) {
    embeddings.push(calc_embeddings(cur_split));
    cur_split = [p];
    cur_token = tokens_p;
  } else {
    cur_split.push(p);
    cur_token += tokens_p;
  }
}
// and the embeddings of the last split
if (cur_split.length > 0) {
  embeddings.push(calc_embeddings(cur_split));
}
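The greedy packing scheme sketched above can be made concrete in TypeScript. This is a minimal runnable sketch, not the plugin's actual splitter: whitespace word count stands in for a real tokenizer, and it returns the paragraph groups rather than embeddings; it also guards against an oversized first paragraph producing an empty split:

```typescript
// Greedily pack consecutive paragraphs into splits of at most maxTokens.
function splitByParagraphs(paragraphs: string[], maxTokens: number): string[][] {
  const splits: string[][] = [];
  let curSplit: string[] = [];
  let curTokens = 0;
  // Placeholder tokenizer: count whitespace-separated words.
  const calcTokens = (p: string) => p.split(/\s+/).filter(Boolean).length;

  for (const p of paragraphs) {
    const tokensP = calcTokens(p);
    if (curSplit.length > 0 && curTokens + tokensP >= maxTokens) {
      splits.push(curSplit);       // close the current split
      curSplit = [p];              // start a new one with this paragraph
      curTokens = tokensP;
    } else {
      curSplit.push(p);
      curTokens += tokensP;
    }
  }
  if (curSplit.length > 0) splits.push(curSplit);  // don't drop the last split
  return splits;
}
```

Each returned group would then be embedded as one unit, keeping paragraph boundaries intact in the index.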
