docs: document LLM extractor adapter usage
README.md: 23 additions
@@ -113,6 +113,29 @@ await db.ingestStatement('Bun makes TypeScript tooling fast.', {
});
```
## LLM-backed extraction
You can bridge any text-generating model into IdentityDB by wrapping it with `LlmFactExtractor`.
```ts
import { LlmFactExtractor } from 'identitydb';

const extractor = new LlmFactExtractor({
  model: {
    async generateText(prompt) {
      return callYourFavoriteLlm(prompt);
    },
  },
  instructions: 'Prefer technology, product, and time topics over generic nouns.',
});

await db.ingestStatement('I have worked with Bun and TypeScript since 2025.', {
  extractor,
});
```
The adapter expects the model to return JSON and validates the structured response before IdentityDB writes a fact.
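The README does not spell out the JSON shape the adapter validates. As an illustration only, here is a stub model that returns well-formed JSON plus a hand-rolled check of the kind the adapter might perform before any fact is written. The field names `topic` and `value`, the array shape, and the `parseFacts` helper are all assumptions for this sketch, not the library's actual schema or API:

```typescript
// Hypothetical fact shape; `topic`/`value` are illustrative names,
// not taken from the identitydb source.
interface ExtractedFact {
  topic: string;
  value: string;
}

// A stub model that always returns well-formed JSON -- handy for
// exercising extractor wiring without a real LLM call.
const stubModel = {
  async generateText(_prompt: string): Promise<string> {
    return JSON.stringify([
      { topic: 'technology', value: 'Bun' },
      { topic: 'time', value: 'since 2025' },
    ]);
  },
};

// Minimal validation of the kind the adapter could perform:
// parse the raw response, then check the structural invariants.
function parseFacts(raw: string): ExtractedFact[] {
  const data = JSON.parse(raw);
  if (!Array.isArray(data)) {
    throw new Error('expected a JSON array of facts');
  }
  for (const item of data) {
    if (typeof item.topic !== 'string' || typeof item.value !== 'string') {
      throw new Error('each fact needs string topic and value fields');
    }
  }
  return data;
}

stubModel
  .generateText('I have worked with Bun and TypeScript since 2025.')
  .then((raw) => {
    const facts = parseFacts(raw);
    console.log(facts.length); // 2
  });
```

A stub like this also makes it easy to unit-test the failure path: have `generateText` return malformed JSON and assert that ingestion rejects it instead of writing a fact.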
## Development
```bash