
🎓 Knowledge Base

The Knowledge Base of Xpert AI is one of the core components of the system, designed to provide enterprise users with an intelligent and dynamically updated knowledge management solution. By integrating BI metrics management, AI technologies, and both internal and external data resources, the knowledge base offers precise information retrieval, content recommendations, and knowledge sharing capabilities.

Knowledge Base Function Introduction

In AI conversational knowledge retrieval, the core functionalities of the knowledge base typically include document text embedding, Chunk retrieval, query optimization, and real-time updates. Below are common functionalities, with a focus on document text embedding and Chunk retrieval:

1. Document Text Embedding

  • Text Vectorization: Using deep learning models (such as BERT, OpenAI Embeddings) to embed text into high-dimensional vectors, enabling similarity matching through vector retrieval. This forms the foundation for fast retrieval in the knowledge base and is suitable for various text types, such as documents, web pages, FAQs, etc.
  • Multilingual Support: For texts in different languages, the corresponding embedding models are used to ensure the accuracy of multilingual retrieval.
  • Data Preprocessing: Cleaning, tokenizing, and denoising the text in documents to generate high-quality vectors for embedding.
  • Context-Aware Embedding: Ensuring that the embedding model can recognize the context within the text to enhance the accuracy of responses in conversations.
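As a minimal illustration of text vectorization and similarity matching, the sketch below uses a toy bag-of-words embedding over a tiny fixed vocabulary in place of a real model such as BERT or OpenAI Embeddings; the `VOCAB`, `embed`, and `cosine` names are illustrative, not part of Xpert AI's API:

```python
from collections import Counter
import math

# Toy vocabulary; a real embedding model maps text to dense
# high-dimensional vectors rather than word counts.
VOCAB = ["knowledge", "base", "retrieval", "embedding", "document"]

def embed(text):
    """Map text to a vector of per-word counts over VOCAB."""
    counts = Counter(text.lower().split())
    return [float(counts[w]) for w in VOCAB]

def cosine(a, b):
    """Cosine similarity, the standard metric for comparing embeddings."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

v1 = embed("document retrieval in the knowledge base")
v2 = embed("retrieval of a document")
v3 = embed("unrelated text")
print(cosine(v1, v2) > cosine(v1, v3))  # related texts score higher
```

The same cosine comparison works unchanged on vectors produced by any real embedding model, which is why it underpins vector retrieval.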

2. Chunk Retrieval

  • Document Chunking: Breaking large documents into smaller chunks to improve retrieval accuracy and efficiency. Each Chunk typically consists of a few sentences or a paragraph, ensuring the system finds the most relevant specific information during retrieval rather than the entire document.
  • Embedding and Indexing Chunks: Generating independent vectors for each Chunk and storing them in a vector database for indexing. This allows the system to quickly find the Chunks most similar to the user's query during retrieval.
    [Figure: RAG Indexing]
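The chunking described above can be sketched as a simple word-window splitter. Sizes here are counted in words for simplicity; a production system would typically count tokens and then embed each chunk into the vector database:

```python
def chunk_text(text, chunk_size=50, overlap=10):
    """Split text into overlapping word-window chunks."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    words = text.split()
    step = chunk_size - overlap  # how far each new chunk advances
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break
    return chunks

doc = " ".join(f"w{i}" for i in range(120))
chunks = chunk_text(doc, chunk_size=50, overlap=10)
print(len(chunks))  # 3 chunks: words 0-49, 40-89, 80-119
```

The overlap means the last ten words of one chunk reappear at the start of the next, preserving context across chunk boundaries.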

3. Query Optimization

  • Semantic Retrieval: Using semantic search technology to convert user queries into vectors that are then matched for similarity with the embedded knowledge base content. Compared to traditional keyword retrieval, semantic retrieval better understands user intent.
  • Fuzzy Matching: Even if a query does not match the wording in a document exactly, the system can still return relevant results based on vector similarity.
  • Search Result Reordering: (todo) Reordering returned results to ensure the most relevant answers appear first. This is typically based on vector similarity scores or other user interaction data (such as click-through rates).
    [Figure: RAG Retrieval Enhancement]
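A minimal sketch of the reordering step, assuming similarity scores have already been produced by vector retrieval; the linear click-through boost and its `0.01` weight are illustrative placeholders for a real reranking model:

```python
# Hypothetical reranking step: similarity scores come from vector
# retrieval; click counts (if available) give a small boost.
def rerank(results, clicks=None):
    """Order (chunk_id, similarity) pairs by a blended relevance score."""
    clicks = clicks or {}
    return sorted(results,
                  key=lambda r: r[1] + 0.01 * clicks.get(r[0], 0),
                  reverse=True)

hits = [("chunk-a", 0.72), ("chunk-b", 0.75), ("chunk-c", 0.40)]
print(rerank(hits, clicks={"chunk-a": 5}))  # chunk-a overtakes chunk-b
```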

4. Document Update and Management

  • Real-Time Updates: Supports dynamic updates to the knowledge base, ensuring the latest information can be retrieved promptly. An automated document upload and embedding update process ensures the knowledge base content is always up to date.
  • Document Type Support: The knowledge base typically supports various document formats, including PDF, Word, Markdown, plain text, etc., ensuring knowledge can be imported from different document sources.
  • Metadata Management: (todo) Adding metadata (such as creation time, document type, author, etc.) to documents and Chunks, allowing the system to perform more precise retrieval based on metadata.
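Metadata-based filtering can be sketched as below; the record fields (`doc_type`, `author`, `created`) and the `filter_chunks` helper are hypothetical examples, not Xpert AI's actual schema:

```python
from datetime import date

# Hypothetical chunk records carrying metadata alongside text.
records = [
    {"text": "Q3 report", "doc_type": "pdf", "author": "alice",
     "created": date(2024, 9, 1)},
    {"text": "API guide", "doc_type": "markdown", "author": "bob",
     "created": date(2024, 5, 1)},
]

def filter_chunks(chunks, **criteria):
    """Keep only chunks whose metadata matches every given criterion."""
    return [c for c in chunks
            if all(c.get(k) == v for k, v in criteria.items())]

print(filter_chunks(records, doc_type="pdf"))  # only the PDF record
```

Applying such filters before (or after) vector similarity scoring narrows retrieval to the documents that match the metadata constraints.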

5. Knowledge Base Q&A

  • Document-Based QA System: Using retrieved text Chunks, combined with models like GPT, to generate direct answers to user questions. This method usually provides fact-based, accurate responses.
  • Step-by-Step Q&A and Context Tracking: The system can retain context in continuous conversations, providing more coherent responses and adjusting retrieval results based on previous dialogues.
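Context tracking can be sketched as folding recent turns into the retrieval query; the `Conversation` class below is an illustrative simplification, not the system's actual implementation:

```python
class Conversation:
    """Minimal sketch: prior turns are prepended to the new question
    so follow-up queries stay grounded in the conversation."""

    def __init__(self, max_turns=3):
        self.history = []
        self.max_turns = max_turns

    def contextual_query(self, question):
        """Combine the last few turns with the new question for retrieval."""
        context = " ".join(self.history[-self.max_turns:])
        self.history.append(question)
        return f"{context} {question}".strip()

conv = Conversation()
conv.contextual_query("What formats can I upload?")
q = conv.contextual_query("What about their size limits?")
print(q)  # the follow-up now carries the previous question as context
```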

6. Extension and Integration

  • External Data Source Integration: The knowledge base can be integrated with external data sources (such as databases, APIs) to provide dynamic data query support.
  • Vector Database: (todo) Utilizing vector databases like FAISS, Milvus, Pinecone, which can effectively manage and retrieve large-scale embedded data.
🚧 In Development

These functionalities will help make the knowledge retrieval module in Xpert AI more efficient and intelligent, especially in scenarios involving large amounts of documents and text. When designing the knowledge base retrieval system, you can optimize the speed and accuracy of Chunk retrieval by choosing the appropriate embedding models and vector databases.

Detailed Settings for the Knowledge Base

Below are the detailed configurations for the knowledge base settings:

| Configuration Item | Description |
| --- | --- |
| Avatar | The identifying icon for the knowledge base, usually a logo or representative image. |
| Name | The name of the knowledge base, used to identify and differentiate between knowledge bases. |
| Description | A brief description of the knowledge base, summarizing its content and purpose. |
| Language | The primary language used in the knowledge base, such as Chinese or English. |
| Permissions | Access permission settings for the knowledge base. Options: Private, Within Organization, Public. |
| Intelligent Assistant | The intelligent assistant providing text embedding capabilities; specifies the AI provider used. |
| Embedding Model | The AI model used for text embedding, depending on the chosen intelligent assistant (i.e., AI provider). |
| Embedding Batch Size | The number of texts embedded per batch, affecting processing efficiency and performance. |
| Chunk Size | The size of each Chunk when dividing documents, typically a sentence or paragraph. |
| Chunk Overlap | The overlapping portion between adjacent Chunks, kept to maintain context coherence. |
| Similarity Threshold | The minimum similarity score a Chunk must reach to be returned during retrieval. |

Documents

Here is a detailed introduction to document and parsing functionalities, including document upload, document parsing, and related operation configurations:

1. Uploading Documents

Users can upload various formats of documents to the knowledge base. Supported document types include:

  • Markdown: Used for lightweight markup language documents, easy to read and edit.
  • PDF: Suitable for preserving formatted documents, ideal for long articles and reports.
  • EPUB: An e-book format that supports reading on various devices.
  • DOCX: Microsoft Word documents, widely used for text editing and formatting.

2. Document Parsing

Uploaded documents need to be parsed to extract their content and store it in the knowledge base. The parsing process can be configured as follows:

| Configuration Item | Description |
| --- | --- |
| Chunk Size | The size of each text chunk (in words or sentences) used to split document content into smaller pieces for retrieval. |
| Chunk Overlap | The number of words shared between adjacent text chunks, ensuring contextual coherence and avoiding information loss at chunk boundaries. |
| Delimiter | How chunks are separated in the document, such as by paragraph, newline characters, or specific characters. |

3. Start Parsing

Once the configuration is complete, users can start the parsing process. The parsing task will run on the server-side, and the system will execute the following steps:

  • Read Document: Load the uploaded document and perform format parsing.
  • Extract Content: Based on the configured chunk size, chunk overlap, and delimiter, split the document content into multiple text chunks.
  • Generate Embeddings: Call the AI model to generate vector embeddings for each text chunk for subsequent retrieval and query use.

4. Re-parse or Delete Documents

  • Re-parse: Users can choose to re-parse a document at any time to update its configuration or content. This will re-execute the parsing process and replace old document chunks.
  • Delete Document: Users can delete documents no longer needed, and the system will automatically delete all associated document chunks. This ensures the knowledge base content remains clean and accurate.

Retrieval Testing

The retrieval testing feature is used to validate the effectiveness and accuracy of the knowledge base retrieval, ensuring the system can effectively retrieve relevant information from the knowledge base. Below are detailed parameter introductions for this feature:

| Parameter | Description |
| --- | --- |
| Similarity Threshold | The minimum similarity value; only retrieval results whose similarity exceeds this value are returned. |
| Top N | The number of results to return; the system sorts results by similarity from high to low and returns the N most relevant. |
| Input | The user's query text, which triggers retrieval. The system analyzes this input and computes its similarity with knowledge base content to find relevant text chunks. |

Function Flow

  1. Input: The user enters a query in the input box.
  2. Similarity Calculation: The system vectorizes the user's input text and calculates similarity with all text blocks in the knowledge base.
  3. Filtering: Based on the set similarity threshold, filter out text blocks with similarity below the threshold.
  4. Sorting: Sort the remaining text blocks by similarity and return the most relevant results.
  5. Return Results: The system returns the specified number of retrieval results based on the "Top N" setting.
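The five-step flow above can be sketched end to end; the in-memory `index` and 2-dimensional vectors are toy stand-ins for a real vector database and embedding model:

```python
import math

def retrieve(query_vec, index, threshold=0.5, top_n=3):
    """Score every chunk, drop those below the threshold, return top N."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    scored = [(text, cos(query_vec, vec)) for text, vec in index]   # step 2
    kept = [(t, s) for t, s in scored if s >= threshold]            # step 3
    return sorted(kept, key=lambda x: x[1], reverse=True)[:top_n]   # steps 4-5

index = [
    ("about uploads", [1.0, 0.0]),
    ("about parsing", [0.7, 0.7]),
    ("off topic", [0.0, 1.0]),
]
results = retrieve([1.0, 0.1], index, threshold=0.5, top_n=2)
print(results)  # "off topic" is filtered out by the threshold
```

Raising the threshold trades recall for precision, while Top N caps how many results the tester sees, matching the two tunable parameters in the table above.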