insights-bot

A bot that works with OpenAI GPT models to provide insights for your information flows.

Simplified Chinese


Supported IMs

  • Telegram
  • Slack
  • Discord

Usage

Commands

Insights Bot ships with a set of commands. Use /help to get a list of available commands when talking to the bot on Telegram, and /cancel to cancel any ongoing action with the bot.

Summarize webpages

Command: /smr

Arguments: a URL, or a reply to a message that contains only a URL

Usage:

/smr https://www.example.com
/smr [Reply to a message with only URL]

By sending the /smr command with a URL, or by replying to a message that contains only a URL, the bot will try to summarize the webpage and return the result.

Configure chat history recap

Warning: this command is currently not available in the Slack and Discord integrations.

Command: /configure_recap

Arguments: None

/configure_recap

By sending /configure_recap command, the bot will send you a message with options you can interact with. Click the buttons to choose the settings you want to configure.

Summarize chat histories (Recap)

Warning: this command is currently not available in the Slack and Discord integrations.

Command: /recap

Arguments: None

/recap

By sending the /recap command, the bot will try to summarize the chat histories and return the result for the option you choose.

Subscribe to chat histories recap for a group

Warning: this command is currently not available in the Slack and Discord integrations.

Command: /subscribe_recap

Arguments: None

/subscribe_recap

By sending the /subscribe_recap command, the bot will start capturing messages from the group you subscribed to, and will send a copy of the recap message to you via private chat when it is available.

Unsubscribe from chat histories recap for a group

Warning: this command is currently not available in the Slack and Discord integrations.

Command: /unsubscribe_recap

Arguments: None

/unsubscribe_recap

By sending the /unsubscribe_recap command, the bot will stop sending you copies of recap messages for the group you subscribed to.

Summarize forwarded messages in private chat

Warning: this command is currently not available in the Slack and Discord integrations.

Commands: /recap_forwarded_start, /recap_forwarded

Arguments: None

/recap_forwarded_start
<Forwarded messages>
/recap_forwarded

By sending the /recap_forwarded_start command, the bot will start capturing the forwarded messages you subsequently send in private chat, and will try to summarize them when you send the /recap_forwarded command afterwards.

Deployment

Run with binary

You will have to clone this repository and build the binary yourself.

git clone https://github.com/nekomeowww/insights-bot
go build -a -o "build/insights-bot" "github.com/nekomeowww/insights-bot/cmd/insights-bot"

Then copy the .env.example file to the build directory, rename it to .env, and fill in the environment variables.

cd build
cp ../.env.example .env
vim .env
# assign executable permission to the binary
chmod +x ./insights-bot
# run the binary
./insights-bot

Run with docker

docker run -it --rm --name insights-bot -e TELEGRAM_BOT_TOKEN=<Telegram Bot API Token> -e OPENAI_API_SECRET=<OpenAI API Secret Key> -e DB_CONNECTION_STR="<PostgreSQL connection URL>" ghcr.io/nekomeowww/insights-bot:latest

Run with Docker Compose

Clone this project:

git clone https://github.com/nekomeowww/insights-bot

Or copy or download only the necessary .env.example and docker-compose.yml files (but then you will only be able to run the bot with the pre-built Docker image):

curl -O https://raw.githubusercontent.com/nekomeowww/insights-bot/main/.env.example
curl -O https://raw.githubusercontent.com/nekomeowww/insights-bot/main/docker-compose.yml

Create your .env file by copying the .env.example file. The .env file should be placed at the root of the project directory, next to your docker-compose.yml file.

cp .env.example .env
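For reference, a minimal .env containing only the variables marked required in the Configurations section might look like this (all values below are placeholders, not working credentials):

```ini
# Telegram Bot API token obtained from @BotFather (placeholder)
TELEGRAM_BOT_TOKEN=123456:your-telegram-bot-token
# OpenAI API secret key (placeholder)
OPENAI_API_SECRET=sk-your-openai-api-secret
# PostgreSQL connection URL (placeholder)
DB_CONNECTION_STR=postgres://postgres:postgres@localhost:5432/postgres?sslmode=disable
# Redis connection
REDIS_HOST=localhost
REDIS_PORT=6379
```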

Fill in your OpenAI token and the other environment variables in .env, and then run:

docker compose --profile hub up -d

If you prefer to build the Docker image from local code (which requires the entire source of this project), run:

docker compose --profile local up -d --build

Build on your own

Build with go

go build -a -o "release/insights-bot" "github.com/nekomeowww/insights-bot/cmd/insights-bot"

Build with Docker

docker buildx build --platform linux/arm64,linux/amd64 -t <tag> -f Dockerfile .

Ports we use

Port | Description
6060 | pprof debug server
7069 | Health check server
7070 | Slack App/Bot webhook server
7071 | Telegram Bot webhook server
7072 | Discord Bot webhook server

Configurations

Environment variables

Name | Required | Default | Description
TIMEZONE_SHIFT_SECONDS | false | 0 | Timezone shift in seconds used when auto-generating recap messages for groups.
TELEGRAM_BOT_TOKEN | true | | Telegram Bot API token; you can create a bot and obtain the token through @BotFather.
TELEGRAM_BOT_WEBHOOK_URL | false | | Telegram Bot webhook URL and port; you can use https://ngrok.com/ or a Cloudflare tunnel to expose your local server to the internet.
TELEGRAM_BOT_WEBHOOK_PORT | false | 7071 | Telegram Bot webhook server port.
OPENAI_API_SECRET | true | | OpenAI API secret key that looks like sk-************************************************; you can obtain one by signing in to the OpenAI platform and creating one at https://platform.openai.com/account/api-keys.
OPENAI_API_HOST | false | https://api.openai.com | OpenAI API host; specify one if you have a relay or reverse proxy configured, such as https://openai.example.workers.dev.
OPENAI_API_MODEL_NAME | false | gpt-3.5-turbo | OpenAI API model name; specify one if you want to use another model, such as gpt-4.
OPENAI_API_TOKEN_LIMIT | false | 4096 | Token limit used to compute the splits and truncations of texts before calling the Chat Completion API. Generally set this to the maximum token limit of the model and let insights-bot determine how to process it.
OPENAI_API_CHAT_HISTORIES_RECAP_TOKEN_LIMIT | false | 2000 | Token length reserved for the generated chat histories recap message; the default of 2000 leaves OPENAI_API_TOKEN_LIMIT - 2000 tokens for the actual chat context.
DB_CONNECTION_STR | true | postgresql://postgres:123456@db_local:5432/postgres?search_path=public&sslmode=disable | PostgreSQL database URL, such as postgres://postgres:postgres@localhost:5432/postgres. You can also append ?search_path=<schema name> if you want to specify a schema.
SLACK_CLIENT_ID | false | | Slack app client ID; you can create a Slack app to get it, see: tutorial.
SLACK_CLIENT_SECRET | false | | Slack app client secret; you can create a Slack app to get it, see: tutorial.
SLACK_WEBHOOK_PORT | false | 7070 | Port for the Slack Bot/App webhook server.
DISCORD_BOT_TOKEN | false | | Discord bot token; you can create a Discord app to get it, see: Get started document.
DISCORD_BOT_PUBLIC_KEY | false | | Discord bot public key; required if DISCORD_BOT_TOKEN is provided. You can create a Discord app to get it, see: Get started document.
DISCORD_BOT_WEBHOOK_PORT | false | 7072 | Port for the Discord Bot webhook server.
REDIS_HOST | true | localhost | Redis host to connect to.
REDIS_PORT | true | 6379 | Redis port.
REDIS_TLS_ENABLED | false | false | Whether Redis TLS is enabled.
REDIS_USERNAME | false | | Redis username.
REDIS_PASSWORD | false | | Redis password.
REDIS_DB | false | 0 | Redis database.
REDIS_CLIENT_CACHE_ENABLED | false | false | Whether the Redis client-side cache is enabled; read more at https://redis.io/docs/manual/client-side-caching/ and https://github.com/redis/rueidis#client-side-caching.
LOG_FILE_PATH | false | <insights-bot_executable>/logs/insights-bot.log | Log file path; specify one if you want to store logs at a particular path when running the binary. The default path in the Docker volume is /var/log/insights-bot/insights-bot.log; override it with -e LOG_FILE_PATH=<path> when executing docker run, or by adding a LOG_FILE_PATH entry to the docker-compose.yml file.
LOG_LEVEL | false | info | Log level; available values are debug, info, warn, error.
LOCALES_DIR | false | locales | Locales directory; it is recommended to configure an absolute path.
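The token budgeting behind OPENAI_API_TOKEN_LIMIT and OPENAI_API_CHAT_HISTORIES_RECAP_TOKEN_LIMIT can be sketched as follows (a simplified illustration of the arithmetic described above, not the bot's actual code):

```python
# Illustration of the recap token budget: the recap reply is capped at
# OPENAI_API_CHAT_HISTORIES_RECAP_TOKEN_LIMIT tokens, and the remainder
# of OPENAI_API_TOKEN_LIMIT is left for the actual chat context.

TOKEN_LIMIT = 4096        # OPENAI_API_TOKEN_LIMIT (model maximum)
RECAP_TOKEN_LIMIT = 2000  # OPENAI_API_CHAT_HISTORIES_RECAP_TOKEN_LIMIT

def context_budget(token_limit: int = TOKEN_LIMIT,
                   recap_limit: int = RECAP_TOKEN_LIMIT) -> int:
    """Tokens left for chat context after reserving room for the recap."""
    return token_limit - recap_limit

print(context_budget())  # 4096 - 2000 = 2096 tokens for chat context
```

Raising OPENAI_API_TOKEN_LIMIT for a larger-context model (e.g. 8192) therefore grows the chat-context budget while the recap reply stays capped.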

Acknowledgements

  • Project logo was generated by Midjourney
  • OpenAI for providing the GPT series models

Contributors

dependabot[bot], garfield550, lemonnekogh, littlesound, lorde627, nekomeowww, overflowcat, pa733, rafiramadhana, xwjdsh


insights-bot's Issues

feat: i18n support

  • i18n help messages
  • i18n error message handling
  • i18n reply message handling
  • i18n prompt with multi-language support
  • configurable i18n options (through inline keyboard or whatever)

feat request: support to configure recap and autorecap to control where to send recaps

Brief

Scenario 1

In group 1:

Creator: /enable_recap
Bot: ask for recap mode, whether publicly or privately
Creator: Select
-> if privately is selected, check whether the user is the creator; if not, reject the request
Bot: deletes /enable_recap from creator, deletes selections from Bot

Scenario 2

In group 1:

Creator: /configure_recap
Bot: ask for recap mode, whether publicly or privately
Creator: Select
-> if privately is selected, check whether the user is the creator; if not, reject the request
Bot: deletes /configure_recap from creator, deletes selections from Bot

Scenario 3

In group 1:

User: /recap
Bot: -> Try to send a message stating that the process has started.
-> if it fails, send /start and reply to the user (asking them to enable or unblock the bot)
-> if it succeeds, proceed to summarize the chat histories from the targeted group and send them to the user
Bot: deletes /recap from user, deletes /start from Bot

Scenario 4

In group 1:

User: /subscribe_recap
Bot: -> Try to send a message stating whether the action succeeded.
-> if it fails, send /start and reply to the user (asking them to enable or unblock the bot)
-> if it succeeds, proceed to subscribe the user to auto recaps
<after time interval to send auto recaps>
Bot: look for subscribed users, try to summarize the chat histories from the targeted group, and send them to each user; if that fails, ignore.

by @KexyBiscuit

bug: oom for link preview module

fatal error: runtime: out of memory

runtime stack:
runtime.throw({0x1b442db?, 0x2030?})
	/usr/local/go/src/runtime/panic.go:1077 +0x5c fp=0x7ffe6a71d5e8 sp=0x7ffe6a71d5b8 pc=0x43ae9c
runtime.sysMapOS(0xc116800000, 0x40000000?)
	/usr/local/go/src/runtime/mem_linux.go:167 +0x116 fp=0x7ffe6a71d630 sp=0x7ffe6a71d5e8 pc=0x41a756
runtime.sysMap(0x2c0da60?, 0x42f3a0?, 0x2c1dc28?)
	/usr/local/go/src/runtime/mem.go:155 +0x34 fp=0x7ffe6a71d660 sp=0x7ffe6a71d630 pc=0x41a1d4
runtime.(*mheap).grow(0x2c0da60, 0x20000?)
	/usr/local/go/src/runtime/mheap.go:1533 +0x236 fp=0x7ffe6a71d6d0 sp=0x7ffe6a71d660 pc=0x42c256
runtime.(*mheap).allocSpan(0x2c0da60, 0x20000, 0x0, 0xa0?)
	/usr/local/go/src/runtime/mheap.go:1250 +0x1b0 fp=0x7ffe6a71d770 sp=0x7ffe6a71d6d0 pc=0x42b970
runtime.(*mheap).alloc.func1()
	/usr/local/go/src/runtime/mheap.go:968 +0x5c fp=0x7ffe6a71d7b8 sp=0x7ffe6a71d770 pc=0x42b41c
traceback: unexpected SPWRITE function runtime.systemstack
runtime.systemstack()
	/usr/local/go/src/runtime/asm_amd64.s:509 +0x4a fp=0x7ffe6a71d7c8 sp=0x7ffe6a71d7b8 pc=0x46ee2a

goroutine 7168781 [running]:
runtime.systemstack_switch()
	/usr/local/go/src/runtime/asm_amd64.s:474 +0x8 fp=0xc00263ce28 sp=0xc00263ce18 pc=0x46edc8
runtime.(*mheap).alloc(0x40000000?, 0x20000?, 0x0?)
	/usr/local/go/src/runtime/mheap.go:962 +0x5b fp=0xc00263ce70 sp=0xc00263ce28 pc=0x42b37b
runtime.(*mcache).allocLarge(0x1ffbfc55?, 0x40000000, 0x0?)
	/usr/local/go/src/runtime/mcache.go:234 +0x85 fp=0xc00263ceb8 sp=0xc00263ce70 pc=0x4193c5
runtime.mallocgc(0x40000000, 0x0, 0x0)
	/usr/local/go/src/runtime/malloc.go:1127 +0x4f6 fp=0xc00263cf20 sp=0xc00263ceb8 pc=0x410296
runtime.growslice(0x0, 0x11e1a2a?, 0xc002d628d0?, 0xc0aa3b9858?, 0x407a8?)
	/usr/local/go/src/runtime/slice.go:266 +0x4cf fp=0xc00263cf90 sp=0xc00263cf20 pc=0x45338f
bytes.growSlice({0xc08a3fa000, 0x20000000, 0x40d37a?}, 0xc000082860?)
	/usr/local/go/src/bytes/buffer.go:249 +0x8e fp=0xc00263d010 sp=0xc00263cf90 pc=0x51dc0e
bytes.(*Buffer).grow(0xc000c38d50, 0x200)
	/usr/local/go/src/bytes/buffer.go:151 +0x13d fp=0xc00263d048 sp=0xc00263d010 pc=0x51d63d
bytes.(*Buffer).ReadFrom(0xc000c38d50, {0x7f7d4c1734e8, 0xc0030cf0e0})
	/usr/local/go/src/bytes/buffer.go:209 +0x3e fp=0xc00263d0a0 sp=0xc00263d048 pc=0x51da1e
io.copyBuffer({0x1dd1220, 0xc000c38d50}, {0x7f7d4c1734e8, 0xc0030cf0e0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147 fp=0xc00263d120 sp=0xc00263d0a0 pc=0x4d84c7
io.Copy(...)
	/usr/local/go/src/io/io.go:389
github.com/nekomeowww/insights-bot/pkg/linkprev.(*Client).request(0xc00031a2b0?, 0x1de2d40?, {0xc0035b0d80, 0x2e})
	/app/insights-bot/pkg/linkprev/linkprev.go:124 +0x245 fp=0xc00263d3c8 sp=0xc00263d120 pc=0x13c2245
github.com/nekomeowww/insights-bot/pkg/linkprev.(*Client).Preview(_, {_, _}, {_, _})
	/app/insights-bot/pkg/linkprev/linkprev.go:37 +0xaa fp=0xc00263d648 sp=0xc00263d3c8 pc=0x13c18aa
github.com/nekomeowww/insights-bot/internal/models/chathistories.(*Model).ExtractTextFromMessage.func1({{0xc00365ba40, 0x3}, 0x12, 0x2e, {0x0, 0x0}, 0x0, {0x0, 0x0}}, 0x0)
	/app/insights-bot/internal/models/chathistories/chat_histories.go:100 +0x1fe fp=0xc00263deb0 sp=0xc00263d648 pc=0x15c617e
github.com/samber/lo/parallel.Map[...].func1(0x0)
	/go/pkg/mod/github.com/samber/[email protected]/parallel/slice.go:15 +0xbf fp=0xc00263df80 sp=0xc00263deb0 pc=0x15d1d5f
github.com/samber/lo/parallel.Map[...].func2()
	/go/pkg/mod/github.com/samber/[email protected]/parallel/slice.go:20 +0x51 fp=0xc00263dfe0 sp=0xc00263df80 pc=0x15d1c71
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00263dfe8 sp=0xc00263dfe0 pc=0x470c21
created by github.com/samber/lo/parallel.Map[...] in goroutine 89
	/go/pkg/mod/github.com/samber/[email protected]/parallel/slice.go:14 +0xc5

goroutine 1 [chan receive, 9248 minutes]:
runtime.gopark(0x7d9a50?, 0x0?, 0x1?, 0x0?, 0xc000b2dbc0?)
	/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000e07b30 sp=0xc000e07b10 pc=0x43dcee
runtime.chanrecv(0xc000085bc0, 0xc000b2dc28, 0x1)
	/usr/local/go/src/runtime/chan.go:583 +0x3cd fp=0xc000e07ba8 sp=0xc000e07b30 pc=0x4099ad
runtime.chanrecv1(0xc0005611d0?, 0x1de2d40?)
	/usr/local/go/src/runtime/chan.go:442 +0x12 fp=0xc000e07bd0 sp=0xc000e07ba8 pc=0x4095b2
go.uber.org/fx.(*App).run(0xc0005611d0, 0xc000b2dc70)
	/go/pkg/mod/go.uber.org/[email protected]/app.go:591 +0xaf fp=0xc000e07c60 sp=0xc000e07bd0 pc=0x7d2e8f
go.uber.org/fx.(*App).Run(0xc0005611d0)
	/go/pkg/mod/go.uber.org/[email protected]/app.go:578 +0x34 fp=0xc000e07c90 sp=0xc000e07c60 pc=0x7d2d94
main.main()
	/app/insights-bot/cmd/insights-bot/main.go:45 +0xd25 fp=0xc000e07f40 sp=0xc000e07c90 pc=0x163c065
runtime.main()
	/usr/local/go/src/runtime/proc.go:267 +0x2bb fp=0xc000e07fe0 sp=0xc000e07f40 pc=0x43d87b
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000e07fe8 sp=0xc000e07fe0 pc=0x470c21

goroutine 2 [force gc (idle), 2 minutes]:
runtime.gopark(0x1633a5f1be7d9a?, 0x0?, 0x0?, 0x0?, 0x0?)
	/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000060fa8 sp=0xc000060f88 pc=0x43dcee
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:404
runtime.forcegchelper()
	/usr/local/go/src/runtime/proc.go:322 +0xb3 fp=0xc000060fe0 sp=0xc000060fa8 pc=0x43db53
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000060fe8 sp=0xc000060fe0 pc=0x470c21
created by runtime.init.6 in goroutine 1
	/usr/local/go/src/runtime/proc.go:310 +0x1a

goroutine 3 [GC sweep wait]:
runtime.gopark(0x2bf1401?, 0x0?, 0x0?, 0x0?, 0x0?)
	/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000061778 sp=0xc000061758 pc=0x43dcee
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:404
runtime.bgsweep(0x0?)
	/usr/local/go/src/runtime/mgcsweep.go:321 +0xdf fp=0xc0000617c8 sp=0xc000061778 pc=0x4281bf
runtime.gcenable.func1()
	/usr/local/go/src/runtime/mgc.go:200 +0x25 fp=0xc0000617e0 sp=0xc0000617c8 pc=0x41d305
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0000617e8 sp=0xc0000617e0 pc=0x470c21
created by runtime.gcenable in goroutine 1
	/usr/local/go/src/runtime/mgc.go:200 +0x66
