a7cafceac9  perf3ct  2025-03-26 23:05:16 +00:00  more heavily weigh notes with title matches when giving context to LLM
713805394c  perf3ct  2025-03-26 19:10:16 +00:00  move providers.ts into providers folder
c49883fdfa  perf3ct  2025-03-26 17:56:37 +00:00  move constants to their own files and folder
eb1ef36ab3  perf3ct  2025-03-20 18:49:30 +00:00  move the llm_prompt_constants to its own folder
e566692361  perf3ct  2025-03-20 00:06:56 +00:00  centralize all prompts
f05fe3f72b  perf3ct  2025-03-18 21:09:19 +00:00  set up embedding normalization
84a8473beb  perf3ct  2025-03-17 21:47:11 +00:00  adapt or regenerate embeddings - allows users to decide
5ad730c153  perf3ct  2025-03-17 21:36:14 +00:00  openai finally works, respect embedding precedence
cc85b9a8f6  perf3ct  2025-03-16 20:55:55 +00:00  fix autoupdate name inconsistency
c315b32c99  perf3ct  2025-03-16 18:21:43 +00:00  wait for DB init event to emit before starting LLM services
b6df3a721c  perf3ct  2025-03-12 18:02:51 +00:00  allow user to select *where* they want to generate embeddings
fcba151287  perf3ct  2025-03-12 00:17:30 +00:00  allow for manual index rebuild, and ONLY rebuild the index
eaa947ef7c  perf3ct  2025-03-12 00:08:39 +00:00  "rebuild index" functionality for users
72b1426d94  perf3ct  2025-03-12 00:02:02 +00:00  break up large vector_store into smaller files
730d123802  perf3ct  2025-03-11 23:26:47 +00:00  create llm index service