23 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| perf3ct | 2311c3c049 | centralize LLM constants more | 2025-03-28 23:25:06 +00:00 |
| perf3ct | 224cb22fe9 | centralize prompts | 2025-03-28 23:07:02 +00:00 |
| perf3ct | 72c380b6f4 | do a wayyy better job at building the messages with context | 2025-03-28 22:50:15 +00:00 |
| perf3ct | ea4d3ac800 | Do a better job with Ollama context, again | 2025-03-28 22:29:33 +00:00 |
| perf3ct | 2899707e64 | Better use of interfaces, reducing useage of "any" | 2025-03-28 21:47:28 +00:00 |
| perf3ct | 15630fb432 | add swaggerUI docstrings for LLM/AI API routes | 2025-03-26 19:19:19 +00:00 |
| perf3ct | 713805394c | move providers.ts into providers folder | 2025-03-26 19:10:16 +00:00 |
| perf3ct | c49883fdfa | move constants to their own files and folder | 2025-03-26 17:56:37 +00:00 |
| perf3ct | 1be70f1163 | do a better job of building the context | 2025-03-20 19:35:20 +00:00 |
| perf3ct | eb1ef36ab3 | move the llm_prompt_constants to its own folder | 2025-03-20 18:49:30 +00:00 |
| perf3ct | e566692361 | centralize all prompts | 2025-03-20 00:06:56 +00:00 |
| perf3ct | db4dd6d2ef | refactor "context" services | 2025-03-19 19:28:02 +00:00 |
| perf3ct | 352204bf78 | add agentic thinking to chat | 2025-03-19 18:49:14 +00:00 |
| perf3ct | c37201183b | add Voyage AI as Embedding provider | 2025-03-17 22:32:00 +00:00 |
| perf3ct | 697d348286 | set up more reasonable context window and dimension sizes | 2025-03-16 18:08:50 +00:00 |
| perf3ct | 72b1426d94 | break up large vector_store into smaller files | 2025-03-12 00:02:02 +00:00 |
| perf3ct | 730d123802 | create llm index service | 2025-03-11 23:26:47 +00:00 |
| perf3ct | 0d2858c7e9 | upgrade chunking | 2025-03-11 23:04:51 +00:00 |
| perf3ct | 6ce3f1c355 | better note names to LLM? | 2025-03-11 22:47:36 +00:00 |
| perf3ct | c1585c73da | actually shows useful responses now | 2025-03-10 05:06:33 +00:00 |
| perf3ct | ef6ecdc42d | it errors, but works | 2025-03-10 04:28:56 +00:00 |
| perf3ct | cf0e9242a0 | try a context approach | 2025-03-10 03:34:48 +00:00 |
| perf3ct | adaac46fbf | I'm 100% going to have to destroy this commit later | 2025-03-09 02:19:26 +00:00 |