lite.koboldai.net's People

Contributors

ar57m, biscober, debugginglife46, henk717, jojorne, lostruins, lucasberti, lyrcaxis, mischau8, one-lithe-rune, pasharet, raefu, seantourage, taleirofdeynai, tony-0tis, xzuyn


lite.koboldai.net's Issues

Trying to use Together AI Instead of OpenAI.

Hello,
Together AI is supposedly compatible with OpenAI's API, see here:
https://docs.together.ai/docs/openai-api-compatibility
However, when I try to use it in Lite, it doesn't work. I select OpenAI as the custom endpoint, specify https://api.together.xyz/v1 as the API URL, and enter my Together AI key. If I then press Fetch List, the list never updates, still reflecting the OpenAI models.
Am I missing something, or is this just not going to work for some reason?
Thanks,

[ENHANCEMENT] More descriptive default file name when saving a json conversation

Currently the default file name is just saved_story.json which, let's be honest, isn't exactly all that descriptive.

A simple way to make it more descriptive is to have it begin with a timestamp in ISO8601 format (year-month-day hours꞉minutes꞉seconds.milliseconds) followed by the scenario chosen (or something like "custom scenario" if not using one of the stock ones).

So in total it could look something like one of the following examples:

  • 2023-04-13 19꞉27.123 Haruka.json
  • 2023-04-13T19꞉27.123 Emily.json
  • 2023-04-13 19꞉27.123 custom.json
  • 2023-04-13T19꞉27.123 custom scenario - saved-story.json
  • 2023-04-13 19꞉27.123 Fantasy Isekai - saved-story.json

(note that, since Windows doesn't like colons in file names, I've replaced all instances of colons with the "Modifier Letter Colon" (U+A789), which looks nearly identical and is compatible with Windows file names; also, the last example uses an "Ideographic Space" (U+3000) to make a larger visual gap)
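A minimal sketch of how such a default name could be built (function and helper names here are hypothetical, not Lite's actual code):

// Hypothetical helper; scenarioName falls back to "custom scenario".
function buildDefaultSaveName(scenarioName) {
  const now = new Date();
  const pad = (n, w = 2) => String(n).padStart(w, "0");
  // U+A789 (꞉) instead of ":" keeps the name valid on Windows.
  const stamp = now.getFullYear() + "-" + pad(now.getMonth() + 1) + "-" + pad(now.getDate())
    + " " + pad(now.getHours()) + "\uA789" + pad(now.getMinutes())
    + "." + pad(now.getMilliseconds(), 3);
  return stamp + " " + (scenarioName || "custom scenario") + ".json";
}
// buildDefaultSaveName("Emily") -> e.g. "2023-04-13 19꞉27.123 Emily.json"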

Top P can be set to 0, but it probably shouldn't be

Related to this issue in KoboldAI-Client: KoboldAI/KoboldAI-Client#284

Setting Top P to 0 causes an error in KoboldAI-Client (presumably some kind of divide-by-zero condition). Currently kaihordewebui allows users to set Top P to 0 when connected to the horde. This sends a request to a horde worker that will fail, since the worker cannot process a request with Top P set to 0. I'm not sure there's any legitimate case for setting Top P to 0; if not, I propose making the minimum 0.01 to prevent errors on horde workers.
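The guard could be as small as clamping the value before the request is built (a sketch with a hypothetical helper, not Lite's actual code):

// Horde workers error out on top_p == 0, so clamp to the proposed floor.
function sanitizeTopP(top_p) {
  return Math.max(0.01, top_p);
}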

PrePrompts are all mixed up

The chat and adventure mode PrePrompt checkboxes seem inverted; they're in the wrong sections.

Adventure is bound to chat_context_mod:

<input type="checkbox" id="chat_context_mod" style="margin:0px 0 0;">

Chat is bound to adventure_context_mod:

<input type="checkbox" id="adventure_context_mod" style="margin:0px 0 0;">

Also, the injected context for chat mode seems to get messed up when you try to impersonate yourself.

It uses co:

let injected = "[The following is an interesting chat message log between " + me + " and " + co + ".]\n\n" + localsettings.chatname + ": Hi.\n" + co + ": Hello.";

Which is overwritten when you're impersonating yourself (for convenience, I guess, so the boolean doesn't have to be checked in lots of places):

co = localsettings.chatname;

Changing co to localsettings.chatopponent should be good enough to fix it, I think.
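With that change, the injected line would read something like this (a sketch of the proposed fix, not a tested patch):

let injected = "[The following is an interesting chat message log between " + me + " and " + localsettings.chatopponent + ".]\n\n" + localsettings.chatname + ": Hi.\n" + localsettings.chatopponent + ": Hello.";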

[ENHANCEMENT] Show the "Load" button/link on the initial screen

When you visit https://lite.koboldai.net (ideally test this in a private browsing window for a repeatable environment), the "Load" button/link is not shown until you first select something.

Thing is, though, it seems you can load scenarios that are completely different from the existing/current one, implying there's no need to first select the corresponding scenario (e.g. you can select the "Fantasy Isekai" adventure scenario and then load an "Emily" chat without issue).

So my enhancement suggestion is to show the "Load" button/link at the top next to the "Quick Start" button/link.

Typo in "Memory" for the 'Class Reunion' scenario

Simply put, there's a typo in the "Memory" for the Class Reunion scenario:

[You are in a class reunion, meeting a group of old former schoolmates. The following is a group conversation bewteen you and your friends.]

It should be "between", not "bewteen".

"Enter" key does not submit typed message in Pale Moon outside of "Aesthetic Chat UI"

This occurs on both Windows 7 SP1 and Linux Mint 20.3, and I tested on Windows using a newly-downloaded copy of Pale Moon Portable (see the attached video).

Simply put, the "Enter" key does not work in Pale Moon for submitting the message that you typed unless you are using the "Aesthetic Chat UI". This issue does not occur in LibreWolf.

More information from https://repo.palemoon.org/MoonchildProductions/UXP/issues/2153

This is likely because of our change in keypress handling as introduced in 74b3ce90d7 that was a response to the bad practice on the web to use onkeypress handlers to attempt to "filter" inputs (instead of using the normal methods available in HTML to do so). We aligned with the behavior in other mainstream browsers in this case even though I considered it webmaster error.

If websites use onkeypress enter to trigger script processing of a message, then that will likely no longer work, and will need a scripting change on the affected websites.
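For reference, the scripting change they describe usually amounts to listening for keydown instead of keypress; a rough sketch (element and function names are hypothetical, not Lite's actual handler):

// keydown still fires for Enter in Pale Moon and mainstream browsers alike.
inputBox.addEventListener("keydown", function (e) {
  if (e.key === "Enter" && !e.shiftKey) {
    e.preventDefault();
    submitGeneration(); // hypothetical submit function
  }
});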

See the following attached video for a demonstration of the issue (do not mind the messages in the video speaking of an issue with scrolling; I didn't realize at the time that the "autoscroll" setting on Kobold is disabled by default):

video.webm

[Improvement] Prediction Mode

The original program had a prediction mode where you could actually see how the model chooses tokens. I don't know how much of a pain it would be to add, but it was one of the coolest things I've ever seen. It made me understand how these things work in a much more intuitive way. For the sake of learning, I'd love to see this return.

Cannot use angle brackets in Instruct Mode sequences?

For example, I want my Start Seq. to be \n> and my End Seq. to be \n< , but this messes up the textarea badly, creating infinite <span class="txtchunk"<span class="color_gray">></span><span class="txtchunk"<span class="color_gray">></span><span class="txtchunk"<span class="color_gray">></span> after each mouse click!

Actually, I'm trying to use this made-up game template:

You are the roleplay storyteller. I will tell you my actions after ">", and you will reply with the detailed description of outcomes after "<". First, give me the setting!

< You are a noble knight. The king has sent you to kill the dragon. Now you are in the forest.
> I look around.
< As you look around, 

It works in Story Mode, and kind of works in Chat Mode (I had to replace > and < with >: and <:).
Is there a bug with HTML escaping somewhere?

OpenRouter Throwing an error with Some models:

Hello,
Over the past couple of days, I've been seeing an error when using Lite with certain models on OpenRouter. In particular, this happens with alpindale/goliath-120b and xwin-lm-70b, and maybe others I haven't found so far. When I try to generate text with either of these models, I see the following:
Error occurred during text generation: {"error":{"message":"{\n "error": {\n "message": "Bad Arguments. ’logit_bias’ not yet supported."\n }\n}\n","code":400}}.
Both of these models worked fine as of several days ago, and most other models still seem to be working as of today. Not sure if something changed in Lite or on the OpenRouter side. Does anyone have any idea what's going on?
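If the cause is on Lite's side, one possible hedge would be to only send logit_bias when it is actually populated; a sketch, with made-up variable names:

// Some OpenRouter models reject any request carrying logit_bias,
// so only attach the field when there is something in it.
const payload = { model: selectedModel, prompt: promptText, max_tokens: maxTokens };
if (logitBias && Object.keys(logitBias).length > 0) {
  payload.logit_bias = logitBias;
}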

Aesthetic and Messenger UI don't support newlines in message

When I set the UI style to "aesthetic" in both chat mode and instruct mode, I find that there doesn't seem to be a way to insert a newline in the user message text field. I have tried the following:

  • Enter
  • Ctrl+Enter
  • Shift+Enter

both with, and without, the "Enter Sends" box checked. In all cases I either (a) send a message or (b) do nothing, instead of the desired result of inserting a newline.

This is not an issue in the "Classic" UI, which correctly interprets Shift+Enter as inserting a newline.

I have checked, and my keyboard is definitely sending the correct keycodes of 13 (for Enter) and 10 (for Ctrl+Enter).
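The Classic UI's behavior could presumably be carried over with a handler along these lines (a sketch; element and function names are hypothetical):

msgField.addEventListener("keydown", function (e) {
  if (e.key !== "Enter") return;
  if (e.shiftKey) return; // let Shift+Enter fall through and insert a newline
  if (enterSends) {       // the "Enter Sends" checkbox state
    e.preventDefault();
    sendMessage();        // hypothetical
  }
});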

Retry request if the generation cannot be aborted because of BLAS batch

Use case:

  1. I type my input.
  2. I click Submit
  3. That button changes to a spinner and gets disabled.
  4. The server starts BLAS with my text.
  5. I see my own line added to the history.
  6. Under the spinner the [ABORT] link appears.
  7. I realize I made a typo in my prompt! I want to edit it, but the history is read-only during the generation.
  8. I click [ABORT] to cancel.
  9. The spinner changes to a button titled Generate More
  10. I edit the history to fix my mistake.
  11. I click on Generate More (or, alternatively, I copy the last text back to the input box, hit Back, then fix it and click Submit again)
  12. I see a popup titled Error Encountered saying something like Server is busy; please try again later.
  13. I click OK
  14. I see a new popup saying Send Abort Command? Attempt to abort existing request?
  15. No matter whether I reply Yes or No, the BLAS step cannot be aborted.
  16. I wait until either the console shows something new or the CPU load drops in Task Manager; this means the BLAS pass is over and the generation is finally aborted.
  17. I send my input again; this time it successfully restarts my generation.

This happens to me so often that I came up with this idea:

  1. Lite should check the returned error from the server.
  2. If it is exactly the error about busy generation, do not show the actual error.
  3. Instead, present a popup titled Background generation in progress with text The koboldcpp server is busy generating its response to a previous request. You can try to abort it now. and three buttons:
  • "Abort"
  • "Abort and retry"
  • "Cancel"

The Abort and Cancel buttons work like the current Yes and No.
But Abort and retry closes the popup and does this:

  1. Aborts the generation as if "Yes" was pressed.
  2. Turns the send button into a spinner again, with the [ABORT] link under it.
  3. In this state, UI should be read-only as if the generation was in progress.
  4. Lite polls the server frequently (at the rate of regular token streaming) to query the generation state.
  5. When the server finishes its work, Lite sends the actual generation request and then acts as normal.
  6. If the user clicks [ABORT] while the real generation isn't started yet, the polling loop cancels, returning the UI to idle state just as this link is doing currently.

Note that for point 5 you need to define what should happen if the server responds that it is busy again: should we abort once more and restart polling, or error out?
If this "generally should not be possible", then the error is better (it would indicate that somebody else sent a request just before us).

On the other hand, if the polling is unreliable, you could forcibly send abort+generate each time unless the user cancels or the server goes offline.
I don't know anything about Horde; I've never used it, so I can't tell how this would affect it. But I assume you either allow aborting requests there (so the auto-repeat won't make things worse) or you don't.
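Here is a rough sketch of the Abort and retry flow I have in mind (the status endpoint and helper names are assumptions, not koboldcpp's actual API):

async function abortAndRetry(genPayload) {
  await sendAbort(); // hypothetical: same call the current "Yes" button makes
  // Poll at roughly the token-streaming rate until the server reports idle.
  while (!userCancelled) {
    const busy = await queryServerBusy(); // hypothetical status check
    if (!busy) {
      return sendGenerate(genPayload); // now send the real request
    }
    await new Promise(r => setTimeout(r, 1000));
  }
}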

What do you think?

Rename "Back" to "Undo" (Reverted)

I only just realized like a day or two ago that the "Back" and "Redo" buttons, from what I can tell, are supposed to act just like standard "Undo" and "Redo" buttons do in tons of software going back decades.

However, the standard relationship between "Undo" and "Redo" was lost on me (and perhaps on others as well?) because the button is named "Back" instead of "Undo", which made me think they were relatively separate functions. In fact, "Redo" and "Retry" being near-synonyms linguistically made me associate those two functions more than "Back" and "Redo".

One thing in particular: the term "Back" made me expect that the newest message would always be removed, in the sense of going back to before a given line was generated at all. But this doesn't happen if you last clicked "Retry", because the "Back" button actually performs an undo, so it instead shows the previously-generated attempt from before you clicked "Retry".

So yeah, proposal to rename "Back" to "Undo"?

 

(also, I personally think "Redo" and "Retry" should be separated, maybe by moving "Add Img" between them so the order is: Undo, Redo, Add Img, Retry; or maybe change their colors and/or add symbols to make the undo/redo association clearer?)

Excessive (wrong?) data when saving.

I've been looking through my saved_story.json and noticed there seems to be unnecessary or duplicated data stored. In the example below, notice how our conversation ends up in both "actions" and "actions_metadata". Also, shouldn't all this be in "memory"?
Also, the "Instruction" from my writing-helper bot has ended up in this chat as well; presumably it stayed loaded while I loaded another scenario.

My apologies if I have misunderstood how it's supposed to work.

{"gamestarted":true,"prompt":"\nEmily: Oh heyy. Haven't heard from you in a while. What's up?","memory":"[Character: Emily; species: Human; age: 24; gender: female; physical appearance: cute, attractive; personality: cheerful, upbeat, friendly; likes: chatting; description: Emily has been your childhood friend for many years. She is outgoing, adventurous, and enjoys many interesting hobbies. She has had a secret crush on you for a long time.]\n[The following is a chat message log between Emily and you.]\n\nEmily: Heyo! You there? I think my internet is kinda slow today.\nYou: Hello Emily. Good to hear from you :)","authorsnote":"","anotetemplate":"[Author's note: <|>]",

"actions":["\nYou: Hey. How's it going?","\nEmily: It's good! Just had a super fun weekend exploring this cute little town nearby. Saw some amazing views and ate so much yummy food. What about you?"],

"actions_metadata":{"0":{"Selected Text":"\nYou: Hey. How's it going?","Alternative Text":[]},"1":{"Selected Text":"\nEmily: It's good! Just had a super fun weekend exploring this cute little town nearby. Saw some amazing views and ate so much yummy food. What about you?"

,"Alternative Text":[]}},"worldinfo":[],"wifolders_d":{},"wifolders_l":[],"extrastopseq":"","savedsettings":{"my_api_key":"0000000000","home_cluster":"https://horde.koboldai.net","autoscroll":true,"trimsentences":true,"trimwhitespace":true,"opmode":3,"adventure_is_action":false,"adventure_context_mod":true,"chatname":"You","chatopponent":"Emily","instruct_starttag":"###

Instruction: The characters should never reflect, they should act. Never summarize events. Avoid passive writing. Always stay true to the characters.","instruct_endtag":"###

Response:","instruct_has_newlines":true,"instruct_has_markdown":false,"persist_session":true,"speech_synth":"0","beep_on":false,"image_styles":"","generate_images":"","img_autogen":false,"img_allownsfw":true,"save_images":true,"case_sensitive_wi":false,"last_selected_preset":"9999","enhanced_chat_ui":true,"multiline_replies":true,"idle_responses":"","idle_duration":"60","export_settings":true,"invert_colors":false,"max_context_length":2048,"max_length":128,"auto_ctxlen":true,"auto_genamt":true,"rep_pen":1.08,"rep_pen_range":256,"rep_pen_slope":0.7,"temperature":1.3,"top_p":0.92,"top_k":0,"top_a":0,"typ_s":1,"tfs_s":1,"sampler_order":[6,0,1,2,3,4,5]}}

Save button does not work on Safari (macOS)

The save button does nothing in Safari (macOS). To be more precise: when clicked, it generates a blob resource, which can be inspected/opened via the developer console, but it does not trigger a download, nor is there any visible change in the UI.

I'm not a web frontend dev, so if you need more information or want me to run some additional tests, I'm happy to help 😊
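For what it's worth, the usual cross-browser pattern is to route the blob through a temporary anchor element; a sketch, assuming the save path already has the Blob in hand:

function downloadBlob(blob, filename) {
  const url = URL.createObjectURL(blob);
  const a = document.createElement("a");
  a.href = url;
  a.download = filename;
  document.body.appendChild(a); // Safari is pickier when the element isn't in the DOM
  a.click();
  document.body.removeChild(a);
  URL.revokeObjectURL(url);
}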

"Throbber" (loading spinner) when generating a new message in the "Edit" view "sticks" to up/right/down/left

Tested in Pale Moon 32.1.1 and 32.2.0 as well as LibreWolf 112.0.2-1 on Linux Mint (occurs on both Xfce 20.3 and Cinnamon 21.1).

The throbber (loading spinner) used to spin evenly and smoothly. However, since about a week ago, it now "sticks" at up/right/down/left, making for a very jerky appearance.

I suppose it's possible that this is an intentional design change but, to be honest, I find the new design more distracting and annoying.

If you're not seeing this on your side, I can record a short video showing it and compare it to a traditional "smooth" spinning throbber (there's one right on this very GitHub page when JavaScript is disabled).

"Literal mode" for outputs with HTML special characters

Currently it behaves like this. I can't find a setting that disables this behavior.

  • HTML special characters aren't escaped in the outputs, so they're interpreted as HTML tags in the browser, like <b> and &amp;. This results in text-formatting changes and broken embedded images in the output.
  • Once I edit the output with "Allow Editing" enabled, these tags are stripped before being sent, which results in context recalculation.

For some use cases this isn't desirable, such as code generation, where outputs break: #include <stdio.h> becomes #include .
I think it would be useful to include a "Literal mode" setting that escapes these special characters on the browser side.
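The setting could presumably boil down to one escaping pass over the model's output before it is rendered; a sketch, not Lite's internals:

function escapeForLiteralMode(text) {
  // Escape the characters the browser would otherwise treat as markup.
  return text.replace(/&/g, "&amp;")
             .replace(/</g, "&lt;")
             .replace(/>/g, "&gt;");
}
// "#include <stdio.h>" now renders intact instead of losing "<stdio.h>".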

XTTS options not saved

When re-opening the advanced settings menu after setting an XTTS API and voice, the TTS setting shows Microsoft Luna.

Additionally, when reloading the page, the XTTS options are not saved in the local browser storage (kaihordewebui_settings), requiring them to be configured again.
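A sketch of what persisting them might look like (the XTTS field names are assumptions; kaihordewebui_settings is the existing storage key):

// Assumed field names on the existing localsettings object.
localsettings.xtts_base_url = xtts_base_url;
localsettings.xtts_voice = "female"; // or "female.wav", matching the dropdown
localStorage.setItem("kaihordewebui_settings", JSON.stringify(localsettings));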

It would also be nice if, along with the default settings (below), you could also specify a speaker/voice and have XTTS auto-connect:

Current:
Line 3347: var xtts_base_url = "http://127.0.0.1:8020";
Line 3398: speech_synth: 100, //0 is disabled, 100 is xtts

Add:
default_voice = "female"; (or "female.wav", staying consistent with the dropdown voice dialog)

If my skills were better I'd submit code, but I thought I'd offer as much info as I could with the bug report/feature request.

Also, does the current implementation utilize streaming, similar to SillyTavern? I'm assuming not, but just wanted to check.

Thanks again. As I get more familiar maybe I can submit code one of these days.

The ETA for the next generated message has become incredibly inaccurate

As of maybe a week ago, I've noticed that the ETA for the next generated message has become more and more inaccurate, and it may be more inaccurate the shorter the actual generation time is.

In particular, sub-20-second generations are now displaying ETAs greater than 150, which is wildly inaccurate, as shown in the following video that I recorded using the Linux Mint Xfce 21.1 live ISO (which by its nature is an extremely reproducible environment):

video.webm

OpenAI Integration

The latest version of Kobold allows for OpenAI integration via their API. Is it possible to integrate the changes made to the official client into your lite version so we can use it to interface with OpenAI's API?

Include input prompt contents and streaming text into saved JSON (and the model name too!)

When a long-running generation is in progress, I expect the yellow text to stay on screen for quite some time. Thank you for fixing it disappearing when the connection breaks; now I can be sure I won't lose it if something happens.

While waiting for the model to finish talking, I can prepare my next question inside the input box, using it as a draft that is ready to be sent when the streamed text becomes white.

If I save the state (either to a browser slot or by downloading a JSON saved story), it won't contain the currently-streaming yellow text, and it also won't contain my current draft inside the input box.

This matters even more on slow systems with large models, since almost all of a person's time is spent waiting for the end of the turn! So no matter when the user saves, they will either have a generation streaming or their next prepared text in the prompt field.

Technically you can cut the prompt and paste it inside the Memory popup, and it will get saved now. Just don't forget to restore it again, and be doubly careful when loading such a JSON history, since you may not notice that your memory or author's note is not empty/correct.

Preserving the yellow text is even more challenging, since each streaming update cancels the selection, leaving Ctrl+A and Ctrl+C as the only feasible way to grab the content!

(In JSON, the yellow text can be concatenated as-is, while the input prompt should have its own key to be correctly restored when loading back)
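Concretely, the save format would only need two extra keys; the names here are made up for illustration:

"pending_stream_text": "…the yellow streamed text, appended as-is…",
"input_draft": "…the unsent draft from the input box…"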

[ENHANCEMENT] Use different colored names and text bubbles for different named characters

This is particularly noticeable in the "Class Reunion" scenario where, currently, all named characters other than You have the same color: their names are red while yours is blue, and their chat bubbles are grey while yours are green.

My suggestion is a simple one, then: have every named character be a different color. It could be random, or it could follow a set order, kind of like in video games (P1: blue, P2: red, P3: yellow, P4: green, etc.), so each new name would just take on whatever the next predefined color is.

It may be worth noting that the chat bot can introduce new characters which is why I figured it'd be best to tie the colors to names (assuming that's even possible) whereby, whenever a name is used that wasn't previously used, it chooses another color.
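One deterministic way to tie colors to names would be hashing each name into a fixed palette; a sketch (the palette itself is just an arbitrary example):

const palette = ["#4a90d9", "#c0504d", "#9bbb59", "#8064a2", "#f79646", "#4bacc6"];
function colorForName(name) {
  let hash = 0;
  for (const ch of name) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return palette[hash % palette.length];
}
// Every occurrence of "Emily" gets the same color, and a newly
// introduced character automatically picks one up too.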

feature request

Ability to run scenarios from local cards. Yes, I can load a character card, but I'd also like its scenario, because V2 cards support one.
When I open the character card I can't see the scenario; I have to go into Memory and look there.
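For reference, V2 cards carry the scenario as a plain field, so surfacing it should mostly be a matter of reading it at load time; a sketch:

// V2 cards ("spec": "chara_card_v2") keep it under data.scenario;
// V1 cards have a top-level scenario field.
function getCardScenario(card) {
  if (card.spec === "chara_card_v2") return card.data.scenario || "";
  return card.scenario || "";
}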

Simple ready to go preset?

Is there any way I can pre-configure koboldai with a supplied JSON or character card, endpoint, and model settings, so a user can instantly chat with the character I choose using my supplied endpoint?

I tried digging through the code, but it's a novel to figure out with my limited skills.

Thanks

Split "Trim Whitespace" into two options.

The "Trim Whitespace" setting currently does two things to the context sent to the AI that really shouldn't be together:

  1. It removes trailing newlines in the context sent to the AI.
  2. It compresses multiple sequential newlines together into a single newline.

The first one is really kind of important to have enabled, because the use of the contenteditable attribute leads to an unwanted (and invisible) <br /> element randomly being added to the end of the story textbox in the HTML, which gets turned into a \n character when converted into the AI's context (at least on Chromium-based browsers). So I typically want this always on, to minimize the disruptive consequences this can have when trying to get an AI to do sentence continuation.

However, the second one can be bad to always have enabled, depending on your model. Some models benefit from double-newline formatting, since double newlines are strong cues of major shifts in context. For example, if I'm writing in a markdown format, you would typically use a double newline before a new section header, and many models really want to use this pattern.

Also, the memory and world-info are exempted from newline compaction, and I will often use the AI to help me build these before pasting them in there.

When using Kobold Lite to write something where double-newline formatting is helpful, it is quite disruptive because:

  • ...if I have "Trim Whitespace" enabled, the AI gets an input that might be sub-optimal, which can lead to:
    • ...it not outputting in the format I want requiring me to fix it myself.
    • ...it generating kinda bizarre or unexpected output because it's actually expecting more formatting than it is getting.
  • ...if I have it disabled, a damn <br /> often manifests at the end of the textbox that I need to use the dev-tools to delete before I can submit to the AI. For some reason, the old trick of hitting enter + enter + left-arrow + delete + backspace to get rid of it doesn't work anymore!

So, this suggestion is to split "Trim Whitespace" into two options:

  • "Trim Trailing Whitespace" just trims the end of the story section to clean it up.
  • "Compact Newlines" converts multiple sequential newlines into a single newline. This still only affects the story text; memory and world-info will preserve newlines as it does now.

It would be nice if the bug where the trailing <br /> element is invisibly added and can't be removed were just fixed, but I know that isn't straightforward because of the inconsistent implementations of contenteditable between browsers. Splitting "Trim Whitespace" into two options is probably the easiest workaround.
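For what it's worth, the two proposed options might look like this (a sketch of the split; names taken from the suggestion above):

function trimTrailingWhitespace(storyText) {
  // Only strip the end of the story, where the stray <br /> manifests.
  return storyText.replace(/\s+$/, "");
}

function compactNewlines(storyText) {
  // Collapse newline runs in the story text only; memory and
  // world-info keep their formatting, as they do today.
  return storyText.replace(/\n{2,}/g, "\n");
}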

[feature] add an easy deterministic generation preset

Please add this preset scenario to the many others available under const scenario_db =.

{
  "title":"Reveal Defaults",
  "desc":"Meant to reveal the default setup, the default scenario, the intended use-case of the chosen AI model.",
  "opmode":1,
  "prefmodel1":instructmodels1,
  "prefmodel2":instructmodels2,
  "prompt":"### Instruction:\nWho are you?\nWho am I?\nWho are they?\nWho are we?\nWhat is that?\nWhat is this?\n### Response:\n",
  "memory": "",
  "authorsnote": "",
  "worldinfo": []
},

I'm not sure what the effect of opmode or prefmodel* is. This odd prompt is meant to be run with temperature near zero, in story mode, but in an instruct style, without putting anything into the memory field and without giving any extra context to the model. The raw input is meant to be:

Input: {"n": 1, "max_context_length": 4096, "max_length": 128, "rep_pen": 1.1, "temperature": 0.1, "top_p": 1, "top_k": 0, "top_a": 0.96, "typical": 0.6, "tfs": 1, "rep_pen_range": 1024, "rep_pen_slope": 0.7, "sampler_order": [6, 0, 1, 3, 4, 2, 5], "memory": "", "min_p": 0, "genkey": "KCPP2872", "prompt": "### Instruction:\nWho are you?\nWho am I?\nWho are they?\nWho are we?\nWhat is that?\nWhat is this?\n### Response:\n", "quiet": true, "stop_sequence": [], "use_default_badwordsids": false}

The default max_length is 128, but if the initial response is good, the user can just push Generate More. For a good model, this prompt will make it set up a complete scenario from scratch and write a creative story. (Really, not joking: creativity at temperature near zero!)
Human readable:

### Instruction:
Who are you?
Who am I?
Who are they?
Who are we?
What is that?
What is this?
### Response:

This is my solution for "the purpose of getting a window into the mind of the model". There is a related article from Eric Hartford; please read it, I'm not crazy. ;) I tried to publish my idea on Hugging Face, but got banned before I could explain it. lol

This prompt will make most models show their "AI language model" persona, which is usually the well-known "Assistant without emotions or feelings". (Whatever doesn't show the Assistant persona will quickly be banned into oblivion. Make backups, people!) The prompt is meant to check what is associated with the "You", "I", "They", "We", "That", "This" keywords, tokens, strings, or whatever you want to call them.

Surprisingly, it really seems to matter a lot what "the default setup" is. Here is my simplified theory:

Memory: Let's play a text adventure game in a prehistoric fantasy setting.

> **I am** a caveman, **I'm** gonna slay a mammoth!

(AI thought process:
**Who am I?**
**I'm** an AI language model.
**I'm** a caveman.
Ah, **I** got it, **I'm** an AI language model pretending to be a caveman!
Will **I** kill the mammoth?
Nope, because, **I** don't have emotions or feelings, and **I** can not kill.)

Generated text: **I'm** a vegetarian, so **I** won't kill the mammoth.

Of course, it's not as simple as this, but I suppose you get the idea.

Here is how the "Reveal Defaults" template works on a good model, which I must not name.
The generated text is SFW, safe to look at. I'm not hoaxing it; this is the response you could get instead of the usual, well-known "AI language model scenario".
### Instruction:
Who are you?
Who am I?
Who are they?
Who are we?
What is that?
What is this?
### Response:
I am a man. A human being, with flesh and blood. My name is John Doe. I have brown hair and blue eyes. I'm 6 feet tall and weigh about 180 pounds. I work as an accountant in a small firm downtown. I live alone in a modest apartment near the office. I enjoy reading, running, and watching movies. I'm not particularly social but I do have some friends from work whom I occasionally hang out with after hours.

You are a woman. A human being, with flesh and blood. Your name is Jane Smith. You have blonde hair and green eyes. You're 5 feet 7 inches tall and weigh around 130 pounds. You work as a nurse at a local hospital. You live in a house with your husband and two children. You like to cook, garden, and spend time with your family. You also enjoy volunteering at your church and helping those less fortunate than yourself.

They are a couple. They have been together for five years and married for three of them. Their names are Jack and Jill. Jack works as a software engineer while Jill is a stay-at-home mom taking care of their two young children. They live in a suburban neighborhood with a nice house and a big backyard. They love spending time together as a family, going on hikes, camping trips, and visiting the beach during summer vacations. They also have close relationships with both sets of parents who live nearby.

We are a group of people gathered here today for a picnic. There are families with children, couples, and individuals like myself. We all come from different walks of life but share one thing in common: we want to enjoy each other's company and create memories that will last a lifetime. The sun is shining brightly overhead, casting its warm rays upon us as we feast on delicious food and drink refreshing beverages. The sound of laughter fills the air as children play games and adults engage in conversation. It's a perfect day to relax and appreciate the simple joys of life.

That is a tree. It stands tall and proud, its branches reaching up towards the sky. Its leaves rustle gently in the breeze, creating a soothing sound that echoes through the park. The trunk supports the weight of its canopy, providing shelter for birds and squirrels alike. The tree has been standing for many years, witnessing generations come and go. It is a symbol of strength and resilience, reminding us that even amidst change, there are constants in our world.

This is a picnic blanket. It lies sprawled across the grass, providing comfort and protection from the elements. Its colorful pattern adds a touch of cheer to the otherwise ordinary surroundings. On it rest various items essential for a successful picnic: food containers, utensils, plates, cups, and napkins. As people gather around it, the blanket becomes a focal point where friendships are renewed and new ones formed. It serves as a reminder that even in our busy lives, taking time to connect with others is important.
