Commit graph

59 commits

Author SHA1 Message Date
0c0cbceb46 Fix uncaught await in execution catch block
mitigates #11
2023-08-04 03:34:28 +02:00
db8628d425 Repeatedly send typing indicator while executing/generating response
fixes #10
2023-08-01 11:03:03 +02:00
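Discord's typing indicator expires after roughly ten seconds, so it has to be re-sent on an interval while a response is still being generated. A minimal sketch of that pattern (`sendTyping` here is an injected callback standing in for something like discord.js's `channel.sendTyping()`; the actual helper in the repo may look different):

```typescript
// Keep the typing indicator alive until the returned stop function is
// called. The indicator is sent immediately, then re-sent periodically.
function keepTyping(
  sendTyping: () => void,
  intervalMs = 9000, // re-send just under Discord's ~10 s expiry
): () => void {
  sendTyping(); // show the indicator right away
  const timer = setInterval(sendTyping, intervalMs);
  return () => clearInterval(timer); // call once the reply is ready
}
```

A caller would typically invoke the returned stop function in a `finally` block, so the indicator also stops when generation fails.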
d9a97cce8d Handle 5xx errors by repeating requests
the retry count is stored in an extension of the Array class;
the shift method is extended to reset the retry count
when the queue is shifted.

I also accidentally refactored the types in execution.ts;
there were duplicate types declared

fixes #9
2023-07-31 21:44:03 +02:00
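The retry bookkeeping described in this commit could look roughly like the sketch below: an Array subclass carrying a retry counter that resets whenever the queue advances. The class and field names are hypothetical, not taken from the repo (this assumes an ES2015+ compile target, where subclassing Array works):

```typescript
// Hypothetical sketch: a queue that tracks how many times the request
// at its head has been attempted. Shifting to the next request resets
// the counter.
class RetryQueue<T> extends Array<T> {
  // Attempts made for the element currently at the front.
  tries = 0;

  shift(): T | undefined {
    this.tries = 0; // new head element starts with a fresh retry budget
    return super.shift();
  }
}

// On a 5xx response, the caller would increment `tries` and leave the
// request queued; on success (or when the budget is exhausted) it
// would shift the request off.
```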
853bf183ee Refactor out the common error handling in moderation 2023-07-31 20:36:49 +02:00
5a116b0531 Handle almost all of the promise rejections
fixes #7
2023-07-31 12:17:14 +02:00
cf3102cbc5 Inform the end user on failed interaction 2023-07-31 12:13:29 +02:00
7225739527 Update eslintrc.json to also make it consider typings
note that I've marked the rule on awaiting Promises as a warning,
because I don't want to be bothered with it for now.

I also edited all files to accommodate the new rules.

I should also find a way to type-safely import the Commands directory,
another time
2023-07-30 22:28:13 +02:00
c4676175ff Update dependencies 2023-07-30 21:37:37 +02:00
01231151b3 Add cache clearing of moderation requests
This fixes the memory leak from never removing entries from the moderation API cache
2023-07-30 21:28:39 +02:00
33a16bd629 Use the first character when formatting name sequences like -_- 2023-07-30 03:00:51 +02:00
0e3962e110 Handle unregistered function calls 2023-07-30 01:51:40 +02:00
000641bcfc Do not catch function execution errors when catching JSON parsing errors.
Errors from model function calls should propagate up instead of being caught there.
2023-07-30 01:43:41 +02:00
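The pattern behind this fix is keeping the `try` block narrow, so the `catch` only ever sees JSON parsing failures and execution errors propagate untouched. An illustrative sketch (the function and its signature are made up for this example, not the repo's actual code):

```typescript
// Parse the model's function-call arguments inside a narrow try block,
// then execute OUTSIDE it, so execution errors are not misreported as
// JSON errors.
function handleFunctionCall(
  rawArgs: string,
  execute: (args: unknown) => string,
): string {
  let args: unknown;
  try {
    args = JSON.parse(rawArgs);
  } catch {
    throw new Error(`Model returned malformed function arguments: ${rawArgs}`);
  }
  // Outside the try: anything thrown here propagates up unchanged.
  return execute(args);
}
```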
124ac5cbf0 Simplify the ChatCompletion calling loop
removes duplicate code in while loop
2023-07-30 01:32:09 +02:00
56869a2dc2 Make the error for function JSON parsing more descriptive 2023-07-30 01:21:19 +02:00
67d4361c26 Log message data when an error occurs 2023-07-30 01:18:25 +02:00
9c3f25312b Fix "0" reply bug
replaces "in" with "of" in for loops
2023-07-28 09:26:53 +02:00
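The "0" bug is the classic `for...in` pitfall: over an array, `for...in` iterates the index keys (the strings `"0"`, `"1"`, ...), while `for...of` iterates the values. A minimal illustration:

```typescript
const parts = ["Hello", "world"];

// Buggy: for...in yields the index strings, so the first "part" is "0".
const inResults: string[] = [];
for (const part in parts) {
  inResults.push(part); // pushes "0", then "1"
}

// Fixed: for...of yields the actual array elements.
const ofResults: string[] = [];
for (const part of parts) {
  ofResults.push(part); // pushes "Hello", then "world"
}
```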
c7b36885a3 Fix replies overflowing the 2000-character limit
now the bot sends more than one message when the reply overflows
fixes #6
2023-07-28 09:22:47 +02:00
a0cad7a348 fix and flip the model's empty-reply check 2023-07-28 09:12:59 +02:00
72f4648ff9 Do not add bot's nickname if it's the bot user 2023-07-28 07:45:06 +02:00
f9097ae68d Make execution errors more verbose to the user 2023-07-25 04:16:59 +02:00
c03d329c3d Fix unnecessary not, breaking entire bot 2023-07-24 03:52:37 +02:00
6673d3c294 Fix crash when replying to request where bot cannot reply 2023-07-24 03:07:24 +02:00
13d8f73356 Fix crash on reaction blocked
should fix #7
2023-07-23 06:28:56 +02:00
31097e03ce Add newline for limit reached message 2023-07-23 06:28:24 +02:00
0df05e2f06 Add function handling for OpenAI model
for now it queries only the time, but in the future there will be more commands
2023-07-23 05:50:16 +02:00
bebef021fb Update dependencies 2023-07-22 20:12:13 +02:00
3cf2af7aed Add handling of autocompletion interactions 2023-05-10 04:19:49 +02:00
ec7df40edb fix description of check-limit command to reflect what it returns 2023-05-10 03:15:31 +02:00
46e2c00ab1 add check-limit command 2023-05-10 03:04:45 +02:00
48b9ec02a0 log guildId in pushCommands script 2023-05-10 03:03:45 +02:00
312f22827e add getNthUseInLimitTimestamp
will be used in a command that checks the user limit
2023-05-10 03:03:10 +02:00
c1b165024d export getUserLimit
will be used in a command that checks the user limit
2023-05-10 03:02:49 +02:00
ae3a5133b3 Create helper script for pushing commands 2023-05-08 09:15:34 +02:00
8b4b35454b Add commandManager and the first slash command
the command allows summoning the bot without sending an actual mention message
that might linger in the chat log sent to OpenAI, consuming tokens
2023-05-08 08:53:06 +02:00
56a0e686b0 fully prepare execution for interactions 2023-05-08 08:51:30 +02:00
28dce0b29f Add support for interactions in moderation 2023-05-08 08:50:59 +02:00
f6ac5281e7 Prepare more execution.ts for interactions 2023-05-08 08:50:23 +02:00
cb2ae4d4f2 Fix always false if statement 2023-05-08 07:12:08 +02:00
965e0a2602 Remove unneeded type assertion of an empty array. 2023-05-08 02:43:36 +02:00
d2925a3aa9 Create a DM channel when sending a message in DMs
if there is no DM channel yet
2023-05-08 02:42:58 +02:00
47e7c107c1 Add handling for interactions in execution.ts
this will be used in the future to handle interaction requests.
2023-05-08 02:40:24 +02:00
cb304f522b Refactor the main bot execution out of index.js 2023-05-08 01:30:32 +02:00
1c49e8b730 Add simple limit enforcing
For now it is 25 messages in the last 24 hours.
2023-05-02 20:41:59 +02:00
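The limit described here is a sliding window: 25 requests per rolling 24 hours, checked against the request timestamps tracked in the database. One way to enforce such a limit (names and numbers below match the commit; the function itself is illustrative, not the repo's actual API):

```typescript
const LIMIT = 25;
const WINDOW_MS = 24 * 60 * 60 * 1000; // 24 hours in milliseconds

// Returns true if a user with the given past request timestamps
// (ms since epoch) may still make a request at time `now`.
function isWithinLimit(timestamps: number[], now: number): boolean {
  const cutoff = now - WINDOW_MS;
  const recent = timestamps.filter((t) => t > cutoff).length;
  return recent < LIMIT;
}
```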
a66115c3b8 Track all requests to OpenAI in a database
this will be in future used to limit access to the bot
2023-05-02 17:55:48 +02:00
05c50d25e4 Fix eslint semi rule for typescript 2023-03-25 11:41:37 +01:00
6141dffa68 Defer responding to a message request
Previously, if two message requests appeared in a short timespan
in the same channel, the bot would reply twice (in two messages)
to the first message.

This commit fixes that by queuing message requests and responding
to them in chronological order per channel
(requests are queued to a queue identified by channelId)

fixes #4
2023-03-25 11:24:43 +01:00
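The per-channel ordering described in this commit can be sketched as a map from channelId to a promise chain: each new request is appended to its channel's chain, so replies within one channel stay chronological while different channels proceed independently. A sketch under those assumptions, not the repo's actual implementation:

```typescript
// One pending promise chain per channel.
const channelQueues = new Map<string, Promise<void>>();

// Run `task` after all earlier tasks queued for the same channel.
function enqueue(channelId: string, task: () => Promise<void>): Promise<void> {
  const previous = channelQueues.get(channelId) ?? Promise.resolve();
  // Swallow the previous task's failure so one bad request does not
  // block every later request in the same channel.
  const next = previous.catch(() => undefined).then(task);
  channelQueues.set(channelId, next);
  return next;
}
```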
02730ff488 add limits related to current messages to config 2023-03-24 16:47:26 +01:00
4f4b708ba5 move config to a typescript file, add option for chatCompletionConfig
Now we can write code inside the config,
which allows us to send the current time to the OpenAI API
inside the system message!

Example config updated accordingly
2023-03-24 15:44:22 +01:00
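Moving the config into TypeScript means values can be computed when they are used rather than fixed at load time; that is what lets the system message carry the current time. An illustrative shape (the key names and model here are assumptions, not the repo's actual config):

```typescript
// Because the config is code, a field can be a function that is
// evaluated per request instead of a static string.
const config = {
  chatCompletionConfig: {
    model: "gpt-3.5-turbo", // illustrative model name
    // Re-evaluated for each request, so the system message always
    // embeds the time at which the request is made.
    systemMessage: () =>
      `You are a helpful Discord bot. Current time: ${new Date().toISOString()}`,
  },
};
```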
960c340760 Log more information on error (origin, type)
also fix the emoji in the embed sent in reply
2023-03-22 06:40:16 +01:00
dffb13361c Add try-catch in moderation when checking with moderation api
Now it won't crash the bot when the moderation API is unavailable (somehow)
2023-03-20 06:08:42 +01:00