chore(docs): sync LLM documentation with develop

Elian Doran 2025-04-17 22:29:12 +03:00
parent 0133e83d23
commit ee0a1e5cbf
6 changed files with 11608 additions and 54 deletions

View File

@ -1,6 +1,6 @@
{
"formatVersion": 2,
"appVersion": "0.92.7",
"appVersion": "0.93.0",
"files": [
{
"isClone": false,
@ -10761,32 +10761,32 @@
"mime": "text/html",
"attributes": [
{
"type": "label",
"name": "viewType",
"value": "list",
"type": "relation",
"name": "internalLink",
"value": "7EdTxPADv95W",
"isInheritable": false,
"position": 10
},
{
"type": "relation",
"name": "internalLink",
"value": "7EdTxPADv95W",
"value": "ZavFigBX9AwP",
"isInheritable": false,
"position": 20
},
{
"type": "relation",
"name": "internalLink",
"value": "ZavFigBX9AwP",
"value": "e0lkirXEiSNc",
"isInheritable": false,
"position": 30
},
{
"type": "relation",
"name": "internalLink",
"value": "e0lkirXEiSNc",
"type": "label",
"name": "viewType",
"value": "list",
"isInheritable": false,
"position": 40
"position": 10
}
],
"format": "markdown",

View File

@ -13,7 +13,7 @@ Also, you should have access to the `ollama` CLI via Powershell or CMD:
After Ollama is installed, you can go ahead and `pull` the models you want to use and run. Here's a command to pull my favorite tool-compatible model and embedding model as of April 2025:
-```sh
+```
ollama pull llama3.1:8b
ollama pull mxbai-embed-large
```
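
Once both pulls complete, you can confirm the models are available locally (a quick check using the standard `ollama` CLI):

```sh
# Both models should show up in the local model list.
ollama list
```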

File diff suppressed because one or more lines are too long

View File

@ -1,30 +1,22 @@
-<p>&nbsp;</p>
<p>Currently, we support the following providers:</p>
<ul>
<li><a class="reference-link" href="#root/pOsGYCXsbNQG/LMAv4Uy3Wk6J/WkM7gsEUyCXs/_help_7EdTxPADv95W">Ollama</a>
<li><a class="reference-link" href="#root/_help_7EdTxPADv95W">Ollama</a>
</li>
<li><a class="reference-link" href="#root/pOsGYCXsbNQG/LMAv4Uy3Wk6J/WkM7gsEUyCXs/_help_ZavFigBX9AwP">OpenAI</a>
<li><a class="reference-link" href="#root/_help_ZavFigBX9AwP">OpenAI</a>
</li>
<li><a class="reference-link" href="#root/pOsGYCXsbNQG/LMAv4Uy3Wk6J/WkM7gsEUyCXs/_help_e0lkirXEiSNc">Anthropic</a>
<li><a class="reference-link" href="#root/_help_e0lkirXEiSNc">Anthropic</a>
</li>
<li>Voyage AI</li>
</ul>
-<p>&nbsp;</p>
<p>To set your preferred chat model, you'll want to enter the provider's
name here:</p>
<figure class="image image_resized" style="width:88.38%;">
<img style="aspect-ratio:1884/1267;" src="AI Provider Information_im.png"
width="1884" height="1267">
</figure>
-<p>&nbsp;</p>
-<p>&nbsp;</p>
-<p>&nbsp;</p>
<p>And to set your preferred embedding provider:</p>
<figure class="image image_resized"
style="width:93.47%;">
<img style="aspect-ratio:1907/1002;" src="1_AI Provider Information_im.png"
width="1907" height="1002">
</figure>
-<p>&nbsp;</p>
-<p>&nbsp;</p>
-<p>&nbsp;</p>
</figure>

View File

@ -1,5 +1,3 @@
-<p>&nbsp;</p>
-<p>&nbsp;</p>
<p><a href="https://ollama.com/">Ollama</a> can be installed in a variety
of ways, and even runs <a href="https://hub.docker.com/r/ollama/ollama">within a Docker container</a>.
Ollama will be noticeably quicker when running on a GPU (Nvidia, AMD, Intel),
@ -18,7 +16,6 @@ class="image image_resized" style="width:50.49%;">
<img style="aspect-ratio:1296/1011;" src="1_Installing Ollama_image.png"
width="1296" height="1011">
</figure>
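
If you'd rather take the Docker route mentioned above, here is a minimal sketch (flags per the ollama/ollama image documentation; GPU passthrough flags vary by vendor):

```sh
# Run Ollama in the background, persist models in a named volume,
# and expose the default API port 11434.
docker run -d --name ollama \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  ollama/ollama

# Nvidia GPUs additionally need e.g. --gpus=all (with the
# NVIDIA Container Toolkit installed).
```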
-<p>&nbsp;</p>
<p>After their installer completes, if you're on Windows, you should see
an entry in the start menu to run it:</p>
<figure class="image image_resized"
@ -26,29 +23,23 @@ class="image image_resized" style="width:50.49%;">
<img style="aspect-ratio:1161/480;" src="2_Installing Ollama_image.png"
width="1161" height="480">
</figure>
<p>&nbsp;</p>
<p>Also, you should have access to the <code>ollama</code> CLI via PowerShell
or CMD:</p>
<figure class="image image_resized" style="width:86.09%;">
<img style="aspect-ratio:1730/924;" src="5_Installing Ollama_image.png"
width="1730" height="924">
</figure>
-<p>&nbsp;</p>
<p>After Ollama is installed, you can go ahead and <code>pull</code> the models
you want to use and run. Here's a command to pull my favorite tool-compatible
model and embedding model as of April 2025:</p><pre><code class="language-text-x-sh">ollama pull llama3.1:8b
model and embedding model as of April 2025:</p><pre><code class="language-text-x-trilium-auto">ollama pull llama3.1:8b
ollama pull mxbai-embed-large</code></pre>
-<p>&nbsp;</p>
<p>Also, you can make sure it's running by going to <a href="http://localhost:11434">http://localhost:11434</a>,
where you should get the following response (port 11434 being the default
Ollama port):</p>
-<p>&nbsp;</p>
<figure class="image">
<img style="aspect-ratio:585/202;" src="4_Installing Ollama_image.png"
width="585" height="202">
</figure>
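
The same check can be done from a terminal (a sketch; `/api/version` is part of Ollama's HTTP API):

```sh
# The root endpoint replies "Ollama is running" when the server is up.
curl http://localhost:11434/

# The version endpoint returns JSON such as {"version":"0.6.5"}.
curl http://localhost:11434/api/version
```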
-<p>&nbsp;</p>
<p>Now that you have Ollama up and running and have a few models pulled, you're
ready to go ahead and start using Ollama as both a chat provider
-and embedding provider!</p>
-<p>&nbsp;</p>
+and embedding provider!</p>
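
As a final sanity check before pointing Trilium at it, you can chat with the pulled model straight from the CLI:

```sh
# One-shot prompt; a streamed reply means chat is working end to end.
ollama run llama3.1:8b "Say hello in five words."
```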

View File

@ -1,31 +1,26 @@
-<p>&nbsp;</p>
<figure class="image image_resized" style="width:63.68%;">
<img style="aspect-ratio:1363/1364;" src="Introduction_image.png" width="1363"
height="1364">
<figcaption>An example chat with an LLM</figcaption>
</figure>
-<p>&nbsp;</p>
<p>The AI / LLM features within Trilium Notes are designed to allow you to
interact with your Notes in a variety of ways, using as many of the major
providers as we can support.&nbsp;</p>
-<p>&nbsp;</p>
<p>In addition to being able to send chats to LLM providers such as OpenAI,
Anthropic, and Ollama, we also support agentic tool calling and embeddings.</p>
-<p>&nbsp;</p>
<p>The quickest way to get started is to navigate to the “AI/LLM” settings:</p>
<figure
class="image image_resized" style="width:74.04%;">
<img style="aspect-ratio:1916/1906;" src="5_Introduction_image.png" width="1916"
height="1906">
</figure>
-<p>&nbsp;</p>
<p>Enable the feature:</p>
<figure class="image image_resized" style="width:82.82%;">
<img style="aspect-ratio:1911/997;" src="1_Introduction_image.png" width="1911"
height="997">
</figure>
-<p>&nbsp;</p>
-<h2>Embeddings</h2>
+<h2>Embeddings</h2>
<p><strong>Embeddings</strong> are important because they allow us to have a compact
AI “summary” (it's not human-readable text) of each of your Notes, which
we can then perform mathematical functions on (such as cosine similarity)
@ -37,7 +32,7 @@ class="image image_resized" style="width:74.04%;">
<p>In the following example, we're going to use our self-hosted Ollama instance
to create the embeddings for our Notes. You can see additional documentation
about installing your own Ollama locally in&nbsp;<a class="reference-link"
href="#root/jdjRLhLV3TtI/LMAv4Uy3Wk6J/7EdTxPADv95W/_help_vvUCN7FDkq7G">Installing Ollama</a>.</p>
href="#root/_help_vvUCN7FDkq7G">Installing Ollama</a>.</p>
<p>To see what embedding models Ollama has available, you can check out
<a
href="https://ollama.com/search?c=embedding">this search</a>on their website, and then <code>pull</code> whichever one
@ -51,7 +46,6 @@ class="image image_resized" style="width:74.04%;">
<img style="aspect-ratio:1912/1075;" src="4_Introduction_image.png" width="1912"
height="1075">
</figure>
-<p>&nbsp;</p>
<p>When selecting the dropdown for the “Embedding Model”, embedding models
should be at the top of the list, separated from regular chat models by
a horizontal line, as seen below:</p>
@ -60,7 +54,6 @@ class="image image_resized" style="width:74.04%;">
<img style="aspect-ratio:1232/959;" src="8_Introduction_image.png" width="1232"
height="959">
</figure>
-<p>&nbsp;</p>
<p>After selecting an embedding model, embeddings should automatically begin
generating; you can verify this by checking the embedding statistics at the
top of the “AI/LLM” settings panel:</p>
@ -68,7 +61,6 @@ class="image image_resized" style="width:74.04%;">
<img style="aspect-ratio:1333/499;" src="7_Introduction_image.png" width="1333"
height="499">
</figure>
-<p>&nbsp;</p>
<p>If you don't see any embeddings being created, you will want to scroll
to the bottom of the settings, and hit “Recreate All Embeddings”:</p>
<figure
@ -76,19 +68,15 @@ class="image image_resized" style="width:74.04%;">
<img style="aspect-ratio:1337/1490;" src="3_Introduction_image.png" width="1337"
height="1490">
</figure>
-<p>&nbsp;</p>
<p>Creating the embeddings will take some time, and they will be regenerated
when a Note is created, updated, or deleted.</p>
<p>If for some reason you choose to change your embedding provider, or the
model used, you'll need to recreate all embeddings.</p>
-<p>&nbsp;</p>
-<p>&nbsp;</p>
<h2>Tools</h2>
<p>Tools are essentially functions that we provide to the various LLM providers;
the LLM can then respond in a specific format that tells us which tool
function it would like to invoke, and with which parameters. We then execute
these tools and provide the results as additional context in the Chat conversation.</p>
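
As a sketch of those mechanics against Ollama's `/api/chat` endpoint (the `find_notes` tool here is a made-up example, not one of Trilium's actual tool names): the request declares a JSON schema for each tool, and a tool-capable model answers with a `tool_calls` entry naming the function and arguments rather than plain text.

```sh
# A tool-capable model (e.g. llama3.1) may reply with
#   "tool_calls": [{"function": {"name": "find_notes", "arguments": {...}}}]
# which the client then executes before continuing the conversation.
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.1:8b",
  "stream": false,
  "messages": [{"role": "user", "content": "Find my notes about Ollama"}],
  "tools": [{
    "type": "function",
    "function": {
      "name": "find_notes",
      "description": "Search the user notes for a keyword (hypothetical example)",
      "parameters": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"]
      }
    }
  }]
}'
```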
-<p>&nbsp;</p>
<p>These are the tools that currently exist; they will certainly be made
more effective (and more will be added!):</p>
<ul>
@ -148,7 +136,6 @@ class="image image_resized" style="width:74.04%;">
</ul>
</li>
</ul>
-<p>&nbsp;</p>
<p>When Tools are executed within your Chat, you'll see output like the following:</p>
<figure
class="image image_resized" style="width:66.88%;">
@ -157,9 +144,7 @@ class="image image_resized" style="width:74.04%;">
</figure>
<p>You don't need to tell the LLM to execute a certain tool; it should “smartly”
call tools and automatically execute them as needed.</p>
-<p>&nbsp;</p>
<h2>Overview</h2>
-<p>&nbsp;</p>
<p>Now that you know about embeddings and tools, you can go ahead and
use the “Chat with Notes” button and start chatting:</p>
<figure
@ -167,7 +152,6 @@ class="image image_resized" style="width:74.04%;">
<img style="aspect-ratio:1378/539;" src="2_Introduction_image.png" width="1378"
height="539">
</figure>
-<p>&nbsp;</p>
<p>If you don't see the “Chat with Notes” button on your side launchbar,
you might need to move it from the “Available Launchers” section to the
“Visible Launchers” section:</p>