Author | Hash | Message | Date
--- | --- | --- | ---
perf3ct | def28b1dcd | migrate to a pipeline approach for LLM chats | 2025-03-29 21:31:33 +00:00
perf3ct | 224cb22fe9 | centralize prompts | 2025-03-28 23:07:02 +00:00
perf3ct | 72c380b6f4 | do a wayyy better job at building the messages with context | 2025-03-28 22:50:15 +00:00
perf3ct | ea4d3ac800 | Do a better job with Ollama context, again | 2025-03-28 22:29:33 +00:00
perf3ct | a50575c12c | move more prompts to the constants file | 2025-03-26 18:01:20 +00:00
perf3ct | 4ff3c5abcf | agentic thinking really works now 🗿 | 2025-03-19 20:35:17 +00:00
perf3ct | d5efcfe0a9 | fix chat_service imports | 2025-03-19 19:33:03 +00:00
perf3ct | db4dd6d2ef | refactor "context" services | 2025-03-19 19:28:02 +00:00
perf3ct | 352204bf78 | add agentic thinking to chat | 2025-03-19 18:49:14 +00:00
perf3ct | 71b3b04c53 | break up the huge context_extractor into smaller files | 2025-03-11 18:39:59 +00:00
perf3ct | 4160db9728 | fancier (but longer waiting time) messages | 2025-03-11 18:07:28 +00:00
perf3ct | f2a6f92732 | hey look, it doesn't crash again | 2025-03-02 19:39:10 -08:00