000641bcfc
Do not catch function execution errors when catching JSON parsing errors.
...
Errors from model function calling should propagate up instead of being caught there.
2023-07-30 01:43:41 +02:00
124ac5cbf0
Simplify the ChatCompletion calling loop
...
removes duplicate code in the while loop
2023-07-30 01:32:09 +02:00
56869a2dc2
Make the function JSON parsing error more descriptive
2023-07-30 01:21:19 +02:00
67d4361c26
Log message data when an error occurs
2023-07-30 01:18:25 +02:00
9c3f25312b
fix "0" reply bug
...
replaces "in" with "of" in for loops
2023-07-28 09:26:53 +02:00
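A plausible mechanism behind the "0" reply (an assumption inferred from the fix, not from the diff itself): `for...in` over an array yields its indices as strings, so the code could end up treating the index "0" as a message instead of the actual content.

```typescript
// for...in iterates enumerable keys (array indices as strings),
// while for...of iterates the actual element values.
const chunks = ["hello world"];

const fromForIn: string[] = [];
for (const c in chunks) fromForIn.push(c); // pushes the index "0"

const fromForOf: string[] = [];
for (const c of chunks) fromForOf.push(c); // pushes "hello world"
```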
c7b36885a3
Fix overflow of replies over 2000 characters
...
now it will send more than one message if the reply overflows the limit
fixes #6
2023-07-28 09:22:47 +02:00
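Discord rejects messages over 2000 characters, so an overlong reply has to be sent as several messages. A minimal sketch of the idea; `splitReply` is a hypothetical helper, not the repo's actual implementation:

```typescript
// Split a reply into chunks that each fit Discord's 2000-character limit.
function splitReply(text: string, limit = 2000): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += limit) {
    chunks.push(text.slice(i, i + limit));
  }
  return chunks;
}
```

A real implementation would likely also try to split on word or line boundaries rather than mid-word.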
a0cad7a348
fix and flip the model's empty reply check
2023-07-28 09:12:59 +02:00
72f4648ff9
Do not add bot's nickname if it's the bot user
2023-07-28 07:45:06 +02:00
f9097ae68d
Make execution errors more verbose to the user
2023-07-25 04:16:59 +02:00
c03d329c3d
Fix an unnecessary negation that was breaking the entire bot
2023-07-24 03:52:37 +02:00
6673d3c294
Fix crash when replying to a request where the bot cannot reply
2023-07-24 03:07:24 +02:00
13d8f73356
Fix crash on reaction blocked
...
should fix #7
2023-07-23 06:28:56 +02:00
31097e03ce
Add newline for limit reached message
2023-07-23 06:28:24 +02:00
0df05e2f06
Add function handling for OpenAI model
...
for now it only queries the time, but more commands will be added in the future
2023-07-23 05:50:16 +02:00
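Function handling here means dispatching the model's function call to a local handler. A sketch of that shape, consistent with the later commits about descriptive JSON parsing errors and letting handler errors propagate; the names `get_time`, `handlers`, and `handleFunctionCall` are illustrative, not the repo's:

```typescript
type FunctionHandler = (args: Record<string, unknown>) => string;

// Registry of callable functions; for now only a time query.
const handlers: Record<string, FunctionHandler> = {
  get_time: () => new Date().toISOString(),
};

function handleFunctionCall(name: string, argsJson: string): string {
  const handler = handlers[name];
  if (!handler) throw new Error(`Unknown function: ${name}`);
  let args: Record<string, unknown>;
  try {
    args = JSON.parse(argsJson);
  } catch {
    // a descriptive JSON parsing error, kept separate from handler errors
    throw new Error(`Failed to parse arguments for ${name}: ${argsJson}`);
  }
  // handler errors propagate up; they are not caught here
  return handler(args);
}
```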
bebef021fb
Update dependencies
2023-07-22 20:12:13 +02:00
3cf2af7aed
Add handling of autocompletion interactions
2023-05-10 04:19:49 +02:00
ec7df40edb
fix description of check-limit command to reflect what it returns
2023-05-10 03:15:31 +02:00
46e2c00ab1
add check-limit command
2023-05-10 03:04:45 +02:00
48b9ec02a0
log guildId in pushCommands script
2023-05-10 03:03:45 +02:00
312f22827e
add getNthUseInLimitTimestamp
...
will be used in a command that checks the user limit
2023-05-10 03:03:10 +02:00
c1b165024d
export getUserLimit
...
will be used in a command that checks the user limit
2023-05-10 03:02:49 +02:00
ae3a5133b3
Create helper script for pushing commands
2023-05-08 09:15:34 +02:00
8b4b35454b
Add commandManager and the first slash command
...
the command allows for summoning the bot without sending an actual mention message
that might hang in the chat log sent to OpenAI, consuming tokens
2023-05-08 08:53:06 +02:00
56a0e686b0
fully prepare execution for interactions
2023-05-08 08:51:30 +02:00
28dce0b29f
Add support for interactions in moderation
2023-05-08 08:50:59 +02:00
f6ac5281e7
Prepare more of execution.ts for interactions
2023-05-08 08:50:23 +02:00
cb2ae4d4f2
Fix always false if statement
2023-05-08 07:12:08 +02:00
965e0a2602
Remove unneeded type assertion of an empty array.
2023-05-08 02:43:36 +02:00
d2925a3aa9
Create a DM channel when sending a message in a DM channel
...
if there is no DM channel yet
2023-05-08 02:42:58 +02:00
47e7c107c1
Add handling for interactions in execution.ts
...
in the future this will be used to handle interaction requests.
2023-05-08 02:40:24 +02:00
cb304f522b
Refactor the main bot execution out of index.js
2023-05-08 01:30:32 +02:00
1c49e8b730
Add simple limit enforcing
...
For now it is 25 messages in the last 24 hours.
2023-05-02 20:41:59 +02:00
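Enforcing "25 messages in the last 24 hours" amounts to counting a user's recent request timestamps against a fixed cap. An illustrative sketch; the bot actually reads these timestamps from its database, and `isOverLimit` is a hypothetical name:

```typescript
const DAY_MS = 24 * 60 * 60 * 1000;

// True if the user already has `limit` requests within the last 24 hours.
function isOverLimit(requestTimestamps: number[], now: number, limit = 25): boolean {
  const recent = requestTimestamps.filter((t) => now - t < DAY_MS);
  return recent.length >= limit;
}
```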
a66115c3b8
Track all requests to OpenAI in a database
...
this will be used in the future to limit access to the bot
2023-05-02 17:55:48 +02:00
05c50d25e4
Fix eslint semi rule for typescript
2023-03-25 11:41:37 +01:00
6141dffa68
Defer responding to a message request
...
Previously, if two message requests appeared in a short timespan
in the same channel, the bot would reply twice (in two messages)
to the first message.
This commit fixes that by queuing message requests and responding
to them in chronological order per channel
(requests are queued to a queue identified by channelId)
fixes #4
2023-03-25 11:24:43 +01:00
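Per-channel queuing like this is often done by keeping one pending promise chain per channelId, so each new request runs only after the previous one settles. A sketch under that assumption; `enqueue` and `queues` are illustrative names, not the repo's code:

```typescript
// One pending promise chain per channel.
const queues = new Map<string, Promise<void>>();

// Run `task` after every previously enqueued task for this channel.
function enqueue(channelId: string, task: () => Promise<void>): Promise<void> {
  const prev = queues.get(channelId) ?? Promise.resolve();
  const next = prev.then(task);
  // Swallow errors in the stored chain so one failure
  // does not block all later requests on the channel.
  queues.set(channelId, next.catch(() => {}));
  return next;
}
```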
02730ff488
add limits related to current messages to config
2023-03-24 16:47:26 +01:00
4f4b708ba5
move config to a TypeScript file, add option for chatCompletionConfig
...
Now we can write code inside the config,
which allows us to send the current time to the OpenAI API
inside the system message!
Example config updated accordingly
2023-03-24 15:44:22 +01:00
960c340760
Log more information on errors (origin, type)
...
also fix the emoji in the embed sent in the reply
2023-03-22 06:40:16 +01:00
dffb13361c
Add try-catch in moderation when checking with moderation api
...
Now it won't crash the bot when the moderation API is not available (somehow)
2023-03-20 06:08:42 +01:00
2a38ae4a95
Limit sent chat to 2048 tokens.
...
This also solves the issue where we would request more tokens
than the model is capable of (over 4096)
2023-03-19 09:11:37 +01:00
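Capping the sent chat at a token budget typically means keeping the newest messages and dropping older ones until the estimate fits. A sketch of that idea; `limitTokens` is hypothetical, and `countTokens` stands in for whatever tokenizer the bot actually uses:

```typescript
// Keep the newest messages whose combined token estimate fits the budget.
function limitTokens(
  messages: string[],
  budget: number,
  countTokens: (s: string) => number,
): string[] {
  const kept: string[] = [];
  let total = 0;
  // Walk from newest to oldest; stop once the budget would be exceeded.
  for (let i = messages.length - 1; i >= 0; i--) {
    const t = countTokens(messages[i]);
    if (total + t > budget) break;
    kept.unshift(messages[i]); // preserve chronological order
    total += t;
  }
  return kept;
}
```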
aafefc3ad0
Fix DiscordAPIError on receiving empty content.
...
Previously, when OpenAI returned a message with an empty string,
it would try to send that empty message, which threw an error.
Now it will react with an emoji to the message that triggered the request.
2023-03-19 02:26:28 +01:00
fa1caf3ad8
Add support for stickers in messages
...
The sticker name and description will be now forwarded
with the rest of the message content.
2023-03-18 04:55:37 +01:00
582dff5243
Add support for fields in embeds
2023-03-18 03:38:29 +01:00
d5cb03502f
Fix the username formatter returning invalid formatName
...
'?' is not an accepted character for a valid name in the OpenAI API.
I also switched the order of the replace steps,
so there will be fewer _- in the final username
2023-03-18 03:27:44 +01:00
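The OpenAI chat API restricts the message `name` field to letters, digits, underscores, and hyphens, so Discord usernames need sanitizing. A sketch of the idea, not the repo's actual formatter; the step order here (spaces first, then stripping) reflects the commit's note about reducing stray `_-` runs:

```typescript
// Sanitize a Discord username into OpenAI's accepted name characters.
function formatName(username: string): string {
  return username
    .replace(/\s+/g, "_")            // turn whitespace into underscores first...
    .replace(/[^a-zA-Z0-9_-]/g, ""); // ...then strip every other invalid character
}
```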
276d70efee
Parse usernames for use in OpenAI api
...
also document the toOpenAIMssages.ts file
2023-03-18 02:06:49 +01:00
d877097517
Fix embed formatting for author name and hyphen condition.
2023-03-17 19:59:46 +01:00
7411648d02
Check messages against OpenAI moderation API
2023-03-14 23:52:37 +01:00
c18b8d83ef
Initial commit
2023-03-14 21:16:54 +01:00