6792c05959
Quota: add tokenCount QuotaEngine
2023-09-21 20:07:35 +02:00
e194c1d81a
messageCount: Add docs
2023-09-21 09:31:23 +02:00
339ef06ff9
Quota: Refactor how Quotas are being handled
...
also renamed limits to quota
I believe this new approach will allow me and bot hosters
to add, implement, or change the quota behavior more easily.
Reimplemented the existing "Message count" limit
to use the new IQuota, refactoring the code *a little*.
2023-09-21 07:11:15 +02:00
46bb5c867d
/summon: fix command being canceled wrongly in execution
...
the problem was that an undefined value was compared to a number,
making the expression return true
2023-09-19 20:30:55 +02:00
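The comparison bug described in the commit above can be sketched as follows; the function names and parameters are hypothetical, not the bot's actual code:

```typescript
// Hypothetical sketch of the bug class: a strict inequality between an
// undefined value and a number always evaluates to true, so the cancel
// guard fired even when no pending execution had ever been recorded.
function shouldCancelBuggy(pending: number | undefined, current: number): boolean {
  // Buggy: when `pending` is undefined, `pending !== current` is true,
  // so the /summon command was cancelled wrongly during execution.
  return pending !== current;
}

// Fixed: rule out the undefined case explicitly before comparing numbers.
function shouldCancelFixed(pending: number | undefined, current: number): boolean {
  return pending !== undefined && pending !== current;
}
```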
74fe8e3e8b
Init: Remove debug logger
2023-09-18 13:33:41 +02:00
29318592b0
Execution: Don't execute the summon command if bot is writing already
...
fixes #14
2023-09-18 12:44:55 +02:00
b6eb476162
Fix typo again
2023-09-18 12:26:29 +02:00
1ae09945c0
Add Dockerfile
2023-09-18 11:26:37 +02:00
58a054d137
Prisma: fix typo [breaking change]
...
also: bump version because of breaking change
2023-09-18 11:24:35 +02:00
7ff4abc3c0
Configuration: refactor how it is written and handled
2023-09-18 11:22:10 +02:00
13e993b964
meta: change eslint styling and tsconfig project config
2023-09-18 10:41:54 +02:00
e6d6764e34
add user request limit to config, rename previous limit to read limit
2023-09-13 04:32:14 +02:00
0931fe103d
Wait some time before retrying an OpenAI request
...
OpenAI's recommended practice is to wait some time before retrying
a request that ended with an HTTP 5xx code.
The bot was not doing that before; now it does.
This might fix some issues caused by retrying requests too quickly.
2023-08-22 21:22:18 +02:00
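The wait-before-retry idea from the commit above can be sketched like this; the helper and its parameters are illustrative, not the bot's actual implementation:

```typescript
// Minimal sketch, assuming a request wrapper that reports an HTTP status:
// on a 5xx response, sleep before the next attempt instead of retrying
// immediately, backing off exponentially between tries.
const sleep = (ms: number) => new Promise<void>(resolve => setTimeout(resolve, ms));

async function withRetry<T>(
  request: () => Promise<{ status: number; value?: T }>,
  maxTries = 3,
  baseDelayMs = 1000,
): Promise<T> {
  for (let attempt = 0; attempt < maxTries; attempt++) {
    const res = await request();
    if (res.status < 500) return res.value as T;
    if (attempt < maxTries - 1) {
      // back off: 1s, 2s, 4s, ... before the next try
      await sleep(baseDelayMs * 2 ** attempt);
    }
  }
  throw new Error("request kept failing with 5xx");
}
```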
1205eea7af
Merge branch 'fix/5'
2023-08-20 17:21:26 +02:00
c9f7e3a62e
Always use the request message as the newest entry in the model history
...
This may fix #5 but I am not sure
2023-08-20 15:41:39 +02:00
32dd705498
Use Iterable when converting to OpenAI messages
2023-08-20 15:12:26 +02:00
0c0cbceb46
Fix uncaught await in execution catch block
...
mitigates #11
2023-08-04 03:34:28 +02:00
db8628d425
Repeatedly send typing indicator while executing/generating response
...
fixes #10
2023-08-01 11:03:03 +02:00
d9a97cce8d
Handle 5xx errors by repeating requests
...
the number of tries is stored in an extension of the array class;
the shift method is extended to reset the number of tries
on queue shift.
Also, I accidentally refactored types in execution.ts;
there were duplicate types declared.
fixes #9
2023-07-31 21:44:03 +02:00
853bf183ee
Refactor out the common error handling in moderation
2023-07-31 20:36:49 +02:00
5a116b0531
Handle almost all of the promise rejections
...
fixes #7
2023-07-31 12:17:14 +02:00
cf3102cbc5
Inform enduser on failed interaction
2023-07-31 12:13:29 +02:00
7225739527
Update eslintrc.json to also make it consider typings
...
note that I've marked awaiting of Promises as a warning,
because I don't want to be bothered with it for now.
I also edited all files to accommodate the new rules.
I should also find a way to type-safely import the Commands directory,
another time.
2023-07-30 22:28:13 +02:00
c4676175ff
Update dependencies
2023-07-30 21:37:37 +02:00
01231151b3
Add cache clearing of moderation requests
...
This fixes the memory leak of never clearing the moderation API cache
2023-07-30 21:28:39 +02:00
33a16bd629
Use the first character when formatting the -_- like name sequences
2023-07-30 03:00:51 +02:00
0e3962e110
Handle unregistered function calls
2023-07-30 01:51:40 +02:00
000641bcfc
Do not catch function execution errors when catching JSON parsing errors.
...
Errors from model function calling should propagate up instead of being caught there.
2023-07-30 01:43:41 +02:00
124ac5cbf0
Simplify the ChatCompletion calling loop
...
removes duplicate code in while loop
2023-07-30 01:32:09 +02:00
56869a2dc2
Make error of function json parsing more descriptive
2023-07-30 01:21:19 +02:00
67d4361c26
Log message data when an error occurs
2023-07-30 01:18:25 +02:00
9c3f25312b
fix "0" reply bug
...
replaces "in" with "of" in for loops
2023-07-28 09:26:53 +02:00
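The for...in bug fixed above comes from a JavaScript footgun: `for...in` over an array iterates its index keys as strings ("0", "1", ...), not the element values. A minimal sketch, with hypothetical helper names:

```typescript
// Buggy pattern: `for...in` yields the array's index keys, so code that
// expected reply chunks got the string "0" instead of the first chunk.
function collectInBuggy(chunks: string[]): string[] {
  const out: string[] = [];
  for (const chunk in chunks) out.push(chunk); // pushes "0", "1", ...
  return out;
}

// Fixed pattern: `for...of` yields the actual element values.
function collectOf(chunks: string[]): string[] {
  const out: string[] = [];
  for (const chunk of chunks) out.push(chunk); // pushes the real chunks
  return out;
}
```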
c7b36885a3
Fix overflow of replies over 2000 characters
...
now it will send more than one message if the reply overflows
fixes #6
2023-07-28 09:22:47 +02:00
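Discord caps a single message at 2000 characters, which is why an overlong reply has to be split into several messages. A minimal sketch of such chunking, under the assumption of a plain fixed-size split (the bot's exact logic may differ, e.g. splitting on word boundaries):

```typescript
// Discord's per-message character limit.
const DISCORD_LIMIT = 2000;

// Split an overlong reply into limit-sized chunks; an empty reply still
// yields one (empty) message so something is always sent back.
function splitReply(reply: string, limit = DISCORD_LIMIT): string[] {
  const parts: string[] = [];
  for (let i = 0; i < reply.length; i += limit) {
    parts.push(reply.slice(i, i + limit));
  }
  return parts.length > 0 ? parts : [""];
}
```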
a0cad7a348
Fix and flip the model's empty-reply check
2023-07-28 09:12:59 +02:00
72f4648ff9
Do not add bot's nickname if it's the bot user
2023-07-28 07:45:06 +02:00
f9097ae68d
Make error of the execution more verbose to the user
2023-07-25 04:16:59 +02:00
c03d329c3d
Fix unnecessary not, breaking entire bot
2023-07-24 03:52:37 +02:00
6673d3c294
Fix crash when replying to request where bot cannot reply
2023-07-24 03:07:24 +02:00
13d8f73356
Fix crash on reaction blocked
...
should fix #7
2023-07-23 06:28:56 +02:00
31097e03ce
Add newline for limit reached message
2023-07-23 06:28:24 +02:00
0df05e2f06
Add function handling for OpenAI model
...
for now it only queries the time, but in the future there will be more commands
2023-07-23 05:50:16 +02:00
bebef021fb
Update dependencies
2023-07-22 20:12:13 +02:00
3cf2af7aed
Add handling of autocompletion interactions
2023-05-10 04:19:49 +02:00
ec7df40edb
fix description of check-limit command to reflect what it returns
2023-05-10 03:15:31 +02:00
46e2c00ab1
add check-limit command
2023-05-10 03:04:45 +02:00
48b9ec02a0
log guildId in pushCommands script
2023-05-10 03:03:45 +02:00
312f22827e
add getNthUseInLimitTimestamp
...
will be used in a command that checks the user limit
2023-05-10 03:03:10 +02:00
c1b165024d
export getUserLimit
...
will be used in a command that checks the user limit
2023-05-10 03:02:49 +02:00
ae3a5133b3
Create helper script for pushing commands
2023-05-08 09:15:34 +02:00
8b4b35454b
Add commandManager and the first slash command
...
the command allows for summoning the bot without sending an actual mention message
that might hang in the chat log sent to OpenAI, consuming tokens
2023-05-08 08:53:06 +02:00