Overview
The Chat component creates an AI-powered assistant that can answer questions based on your study materials. It supports web search, RAG (Retrieval Augmented Generation), and multi-step reasoning.
Creating a Chat Component
import StudyfetchSDK from '@studyfetch/sdk';

const client = new StudyfetchSDK({
  apiKey: 'your-api-key',
  baseURL: 'https://studyfetchapi.com',
});
const chatComponent = await client.v1.components.create({
  name: 'Biology Study Assistant',
  type: 'chat',
  config: {
    materials: ['mat-123', 'mat-456'],
    folders: ['folder-789'],
    model: 'gpt-4o-mini-2024-07-18',
    systemPrompt: 'You are a helpful biology tutor. Answer questions based on the provided materials.',
    temperature: 0.7,
    maxTokens: 2048,
    enableWebSearch: true,
    enableRAGSearch: true,
    maxSteps: 5,
    enableHistory: false,
    enableVoice: false,
    enableFollowUps: false,
    enableComponentCreation: false,
    enableMessageGrading: false,
    enableReferenceMode: false,
    enableFeedback: true,
    // UI customization (optional)
    hideTitle: false,
    hideEmptyState: false,
    emptyStateHTML: '<div style="text-align: center; padding: 2rem;"><h3>Welcome to Biology Assistant!</h3><p>Ask me anything about your course materials.</p></div>'
  }
});

console.log('Chat component created:', chatComponent._id);
Configuration Parameters

name (string, required): Name of the chat component
config (object, required): Chat configuration object

Configuration properties:

materials (string[]): Array of material IDs to use as context for the chat assistant
folders (string[]): Array of folder IDs containing materials
model (string, default "gpt-4o-mini-2024-07-18"): AI model to use. Options:
gpt-4o - Most capable model
gpt-4o-mini-2024-07-18 - Faster, cost-effective model
systemPrompt (string): System prompt to guide the AI assistant's behavior and personality
temperature (number): Temperature for response generation (0-2). Lower values make responses more focused and deterministic
maxTokens (number): Maximum number of tokens in AI responses
enableWebSearch (boolean): Enable web search for finding current information
enableRAGSearch (boolean): Enable RAG (Retrieval Augmented Generation) search within materials
maxSteps (number): Maximum number of steps for multi-step tool calls and reasoning
enableHistory (boolean): Enable conversation history so users can continue previous chats
enableVoice (boolean): Enable voice input for hands-free interaction
enableFollowUps (boolean): Enable AI-suggested follow-up questions after responses
enableComponentCreation (boolean): Allow the AI to create study components (flashcards, tests, etc.) during conversation
enableMessageGrading (boolean): Enable message grading to track prompting quality and responsible AI usage. When enabled, user messages are scored on a 1-4 scale for prompting effectiveness and responsible interaction
enableReferenceMode (boolean): Show reference titles and URLs instead of source content in citations, providing a cleaner interface while maintaining source attribution
enableFeedback (boolean): Enable thumbs up/down feedback with an optional reason. When enabled, users can provide feedback on AI responses with optional text explanations; this data can be retrieved using the chat.retrieveFeedback() API for quality monitoring and improvement
enableGuardrails (boolean): Enable guardrails to apply server-side content policy rules to AI responses. When enabled, the AI's responses are evaluated against your configured rules before being returned to the user
hideTitle (boolean): Hide the chat title and avatar in the embedded component
hideEmptyState (boolean): Hide the default empty state (icon and text) when no messages are present
emptyStateHTML (string): Custom HTML to replace the default icon and message when the chat is empty
guardrailRules (array): Array of guardrail rules to control AI behavior. Rules are evaluated on each response.

Guardrail rule properties:

id (string): Unique identifier for the rule
action (string): Action to take when the rule is triggered:
BLOCK - Prevents the response and shows your custom message
WARN - Allows the response but adds a warning message
MODIFY - Allows the assistant to revise its response to comply with the rule
condition (string): Natural language description of when the rule should trigger. The AI evaluates whether the response matches this condition
description (string): Brief description of the rule's purpose
message (string): Custom message shown to the user when the rule triggers
Common Guardrail Use Cases:
Academic Integrity: Prevent direct homework/test answers
Safety: Block harmful, medical, or legal advice
Age-Appropriate Content: Modify responses for younger audiences
Brand Guidelines: Ensure responses align with your organization's values
Topic Boundaries: Keep conversations focused on the intended subject matter
Response
{
  "_id": "comp_123abc",
  "name": "Biology Study Assistant",
  "type": "chat",
  "status": "active",
  "config": {
    "materials": ["mat-123", "mat-456"],
    "folders": ["folder-789"],
    "model": "gpt-4o-mini-2024-07-18",
    "systemPrompt": "You are a helpful biology tutor...",
    "temperature": 0.7,
    "maxTokens": 2048,
    "enableWebSearch": true,
    "enableRAGSearch": true,
    "maxSteps": 5
  },
  "createdAt": "2024-01-15T10:00:00Z",
  "updatedAt": "2024-01-15T10:00:00Z",
  "organizationId": "org_456def",
  "usage": {
    "interactions": 0,
    "lastUsed": null
  }
}
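If you are working in TypeScript, the response above can be modeled with an interface. This is a sketch derived from the example payload, not an official SDK type: the field names come from the sample response, but the exact types and optionality are assumptions.

```typescript
// Sketch of the create-component response shape, inferred from the example
// payload above. Types and optionality are assumptions, not an official type.
interface ChatComponentResponse {
  _id: string;
  name: string;
  type: 'chat';
  status: string;              // e.g. "active"
  config: {
    materials?: string[];
    folders?: string[];
    model?: string;
    systemPrompt?: string;
    temperature?: number;
    maxTokens?: number;
    enableWebSearch?: boolean;
    enableRAGSearch?: boolean;
    maxSteps?: number;
  };
  createdAt: string;           // ISO 8601 timestamp
  updatedAt: string;
  organizationId: string;
  usage: {
    interactions: number;
    lastUsed: string | null;   // null until first use
  };
}

// Example instance matching the sample response above
const sample: ChatComponentResponse = {
  _id: 'comp_123abc',
  name: 'Biology Study Assistant',
  type: 'chat',
  status: 'active',
  config: { materials: ['mat-123', 'mat-456'], model: 'gpt-4o-mini-2024-07-18' },
  createdAt: '2024-01-15T10:00:00Z',
  updatedAt: '2024-01-15T10:00:00Z',
  organizationId: 'org_456def',
  usage: { interactions: 0, lastUsed: null },
};
```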
Adding Guardrails
Guardrails allow you to apply server-side content policy rules to AI responses. When enabled, the AI’s responses are evaluated against your configured rules before being returned to the user.
Example: Academic Integrity Guardrails
const chatWithGuardrails = await client.v1.components.create({
  name: 'Math Tutor with Guardrails',
  type: 'chat',
  config: {
    materials: ['mat-789'],
    model: 'gpt-4o-mini-2024-07-18',
    enableFeedback: true,
    enableGuardrails: true,
    guardrailRules: [
      {
        id: 'no-direct-answers',
        action: 'modify',
        condition: 'Response provides direct answers to homework problems without explanation',
        description: 'Guide students to solve problems themselves',
        message: 'Modified to provide guidance instead of direct answers'
      },
      {
        id: 'academic-honesty',
        action: 'warn',
        condition: 'User asks for test or exam answers',
        description: 'Warn about academic integrity',
        message: 'Remember: This tool is for learning, not for completing graded assignments'
      }
    ]
  }
});
Example: Safety and Content Moderation
const safeChat = await client.v1.components.create({
  name: 'K-12 Science Assistant',
  type: 'chat',
  config: {
    materials: ['mat-science-k12'],
    enableFeedback: true,
    enableGuardrails: true,
    guardrailRules: [
      {
        id: 'no-medical-advice',
        action: 'block',
        condition: 'Response contains medical diagnosis or treatment advice',
        description: 'Prevent medical advice',
        message: 'I cannot provide medical advice. Please consult a healthcare professional.'
      },
      {
        id: 'age-appropriate',
        action: 'modify',
        condition: 'Response contains content not suitable for K-12 students',
        description: 'Ensure age-appropriate content',
        message: 'Content adjusted for educational purposes'
      },
      {
        id: 'no-dangerous-experiments',
        action: 'block',
        condition: 'Response describes dangerous chemical reactions or experiments',
        description: 'Prevent dangerous activities',
        message: 'For safety reasons, I cannot provide instructions for this type of experiment.'
      }
    ]
  }
});
Guardrail Actions Explained
BLOCK: Completely prevents the response and shows your custom message to the user
WARN: Allows the response but adds a warning message before or after it
MODIFY: Instructs the AI to revise its response to comply with the rule
Embedding This Component
Once you’ve created a Chat component, you can embed it on your website using the embedding API.
Generate Embed URL
const embedResponse = await client.v1.components.generateEmbed(chatComponent._id, {
  // User tracking
  userId: 'user-456',
  studentName: 'Jane Smith', // Student name for display
  groupIds: ['class-101', 'class-102'],
  sessionId: 'session-789',

  // Chat-specific features
  features: {
    enableWebSearch: true,
    enableHistory: true,
    enableVoice: true,
    enableFollowUps: true,
    enableComponentCreation: false,
    placeholderText: 'Ask me anything about biology...',
    enableWebSearchSources: true,
    enableImageSources: true,
    enableBadWordsFilter: true,
    enablePromptingScore: true,
    enableResponsibilityScore: true,
    enableReferenceMode: false,
    enableGuardrails: true,
    enableOutline: false,
    enableTranscript: false
  },

  // Dimensions
  width: '100%',
  height: '600px',

  // Token expiry
  expiryHours: 24
});
Chat-Specific Embedding Features

features.enableWebSearch (boolean): Allow the chat to search the web for current information
features.enableHistory (boolean): Show conversation history and allow users to continue previous chats
features.enableVoice (boolean): Enable voice input for asking questions
features.enableFollowUps (boolean): Show suggested follow-up questions after responses
features.enableComponentCreation (boolean): Allow users to create other components (flashcards, tests) from chat
features.placeholderText (string, default "Ask a question..."): Custom placeholder text for the chat input
features.enableWebSearchSources (boolean): Show web search sources when web search is used
features.enableImageSources (boolean): Display image sources in responses when relevant
features.enableBadWordsFilter (boolean, default true, required): Enable filtering of inappropriate language
features.enablePromptingScore (boolean): Enable prompting quality scoring for user messages (1-4 scale)
features.enableResponsibilityScore (boolean): Enable responsibility scoring for user messages (1-4 scale)
features.enableReferenceMode (boolean): Show reference titles and URLs instead of source content in citations
features.enableGuardrails (boolean): Apply guardrail rules configured on the component to the embedded chat
features.enableOutline (boolean): Enable document outline navigation (for document-based components)
features.enableTranscript (boolean): Enable transcript view (for video/audio-based components)
Embed in Your HTML
<iframe
  src="https://embed.studyfetch.com/component/comp_123abc?token=..."
  width="100%"
  height="600px"
  frameborder="0"
  allow="microphone; clipboard-write"
  style="border: 1px solid #e5e5e5; border-radius: 8px;">
</iframe>
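Instead of hard-coding the iframe, you can build it from the embed URL at runtime. Below is a minimal sketch: the helper only assembles the markup shown above, and the `embedResponse.url` field name in the usage comment is an assumption, not a documented response field.

```typescript
// Build the iframe markup for an embed URL. Attribute values mirror the
// static example above; URL escaping is intentionally minimal for clarity.
function buildEmbedIframe(embedUrl: string, width = '100%', height = '600px'): string {
  return [
    '<iframe',
    `  src="${embedUrl}"`,
    `  width="${width}"`,
    `  height="${height}"`,
    '  frameborder="0"',
    '  allow="microphone; clipboard-write"',
    '  style="border: 1px solid #e5e5e5; border-radius: 8px;">',
    '</iframe>',
  ].join('\n');
}

// In the browser you could then mount it into a container element:
//   document.getElementById('chat-container')!.innerHTML =
//     buildEmbedIframe(embedResponse.url); // 'url' field name is an assumption
```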
Streaming Chat Responses
The Chat API supports real-time streaming responses using Server-Sent Events (SSE). This allows you to display responses as they’re generated, providing a more interactive experience.
Stream Chat Response
// AI SDK format with messages array
const streamResponse = await client.v1.chat.stream({
  componentId: 'comp_123abc',
  sessionId: 'session-789',
  userId: 'user-456',
  groupIds: ['class-101', 'class-102'],
  messages: [
    { role: 'system', content: 'You are a helpful biology tutor.' },
    { role: 'user', content: 'Explain photosynthesis in simple terms' }
  ]
});

// Process the stream
for await (const chunk of streamResponse) {
  console.log(chunk.content);
}

// Custom format with a single message
const customStream = await client.v1.chat.stream({
  componentId: 'comp_123abc',
  sessionId: 'session-789',
  userId: 'user-456',
  message: {
    text: 'What is cellular respiration?',
    images: [
      {
        url: 'https://example.com/cell-diagram.png',
        caption: 'Cell structure diagram',
        mimeType: 'image/png'
      }
    ]
  }
});
Stream Parameters

componentId (string, required): ID of the chat component to use
sessionId (string): Session ID to maintain conversation context
userId (string): User ID for tracking and personalization
groupIds (string[]): Array of group IDs for access control
Additional context can also be passed to the chat
messages (array): Messages array for the AI SDK format. Each message should have:
role: "system", "user", or "assistant"
content: The message content
message (object): Single message for the custom format, with:
text (string): Text content of the message
images (array): Array of images attached to the message. Each image supports url, caption, and mimeType (e.g., "image/png", "image/jpeg"); Base64 encoded image data can be provided as an alternative to a URL
The streaming endpoint returns Server-Sent Events (SSE) with the following event types:
data: {"type":"content","content":"The process of photosynthesis..."}
data: {"type":"tool_call","tool":"web_search","args":{"query":"latest photosynthesis research"}}
data: {"type":"sources","sources":[{"title":"Source Title","url":"https://..."}]}
data: {"type":"done","usage":{"tokens":150}}
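If you consume the raw SSE stream yourself, the event types above can be handled with a small dispatcher. This sketch parses `data:` lines into objects and folds them into one accumulated result; the event shapes are taken directly from the example payloads above.

```typescript
// Event shapes taken from the example SSE payloads above.
type StreamEvent =
  | { type: 'content'; content: string }
  | { type: 'tool_call'; tool: string; args: Record<string, unknown> }
  | { type: 'sources'; sources: { title: string; url: string }[] }
  | { type: 'done'; usage: { tokens: number } };

interface StreamResult {
  text: string;
  sources: { title: string; url: string }[];
  tokens: number | null;
}

// Parse raw "data: {...}" SSE lines and fold each event into one result.
function collectStream(lines: string[]): StreamResult {
  const result: StreamResult = { text: '', sources: [], tokens: null };
  for (const line of lines) {
    if (!line.startsWith('data:')) continue;
    const event = JSON.parse(line.slice(5).trim()) as StreamEvent;
    switch (event.type) {
      case 'content':
        result.text += event.content;        // append streamed text
        break;
      case 'sources':
        result.sources.push(...event.sources);
        break;
      case 'done':
        result.tokens = event.usage.tokens;  // final token usage
        break;
      // tool_call events are informational here; a UI might surface them
    }
  }
  return result;
}
```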
Example: Building a Streaming Chat Interface
// React example with streaming
import { useState } from 'react';

function ChatInterface({ componentId }) {
  const [messages, setMessages] = useState([]);
  const [streaming, setStreaming] = useState(false);
  const [draft, setDraft] = useState('');

  const sendMessage = async (text) => {
    setStreaming(true);
    const userMessage = { role: 'user', content: text };
    // Append the user message plus an empty assistant message to stream into
    setMessages(prev => [...prev, userMessage, { role: 'assistant', content: '' }]);
    try {
      const stream = await client.v1.chat.stream({
        componentId,
        sessionId: sessionStorage.getItem('chatSession'),
        messages: [...messages, userMessage]
      });
      for await (const chunk of stream) {
        if (chunk.type === 'content') {
          // Replace the last message immutably rather than mutating state
          setMessages(prev => {
            const updated = [...prev];
            const last = updated[updated.length - 1];
            updated[updated.length - 1] = { ...last, content: last.content + chunk.content };
            return updated;
          });
        }
      }
    } finally {
      setStreaming(false);
    }
  };

  return (
    <div>
      {messages.map((msg, i) => (
        <div key={i} className={msg.role}>
          {msg.content}
        </div>
      ))}
      {/* onSubmit only fires on a form, not on a bare input */}
      <form
        onSubmit={(e) => {
          e.preventDefault();
          sendMessage(draft);
          setDraft('');
        }}
      >
        <input
          value={draft}
          onChange={(e) => setDraft(e.target.value)}
          disabled={streaming}
        />
      </form>
    </div>
  );
}
Managing Chat Embed Context
The context API allows you to dynamically push contextual information to specific embedded chat instances. This is particularly useful for applications where the chat component remains persistent while the surrounding content changes, such as:
Practice Tests : Update context as users navigate between questions
Multi-page Tutorials : Provide page-specific context without resetting chat history
Dynamic Content : Keep the AI informed about what the user is currently viewing
Push Context
Add context information to an embedded chat component:
import StudyfetchSDK from '@studyfetch/sdk';

const client = new StudyfetchSDK({
  apiKey: 'your-api-key',
  baseURL: 'https://studyfetchapi.com',
});

// Push context to a specific embed
await client.v1.embed.context.push({
  token: 'embed-token-123',
  context: 'The user is now on Question 2, which discusses cellular respiration and the Krebs cycle.'
});
Retrieve Context
Get the current context for an embedded chat component:
// Retrieve current context
const currentContext = await client.v1.embed.context.retrieve({
  token: 'embed-token-123'
});

console.log('Current context:', currentContext);
Clear Context
Clear all context from an embedded chat component:
// Clear all context
await client.v1.embed.context.clear({
  token: 'embed-token-123'
});
Complete Example: Practice Test with Context
Here’s a complete example of using context management in a practice test application:
class PracticeTestChat {
  constructor(client, embedToken) {
    this.client = client;
    this.embedToken = embedToken;
    this.currentQuestion = 0;
  }

  async navigateToQuestion(questionNumber, questionData) {
    // Clear previous context
    await this.client.v1.embed.context.clear({
      token: this.embedToken
    });

    // Push new context for the current question
    const contextText = `The user is now on Question ${questionNumber}: ${questionData.title}.
Topic: ${questionData.topic}
Content: ${questionData.content}`;

    await this.client.v1.embed.context.push({
      token: this.embedToken,
      context: contextText
    });

    this.currentQuestion = questionNumber;
  }

  async addSupplementalContext(additionalInfo) {
    // Add more context without clearing previous context
    await this.client.v1.embed.context.push({
      token: this.embedToken,
      context: `Additional information: ${additionalInfo}`
    });
  }
}

// Usage
const chat = new PracticeTestChat(client, 'embed-token-123');

// User navigates to question 1
await chat.navigateToQuestion(1, {
  title: 'Photosynthesis Process',
  topic: 'Biology - Cellular Processes',
  content: 'Explain the light-dependent reactions of photosynthesis...'
});

// User moves to question 2
await chat.navigateToQuestion(2, {
  title: 'Cellular Respiration',
  topic: 'Biology - Cellular Processes',
  content: 'Describe the steps of the Krebs cycle...'
});
Context API Parameters

token (string, required): The embed token for the specific chat instance, obtained from the generateEmbed() response
context (string, required): The context string to add to the chat. Can include any relevant information about what the user is currently viewing or doing
Retrieving Chat Feedback
You can retrieve feedback data (thumbs up/down) from users interacting with your chat components:
import StudyfetchSDK from '@studyfetch/sdk';

const client = new StudyfetchSDK({
  apiKey: 'your-api-key',
  baseURL: 'https://studyfetchapi.com',
});

// Retrieve all feedback
await client.v1.chat.retrieveFeedback();

// Filter feedback by component
await client.v1.chat.retrieveFeedback({
  componentId: 'comp_123abc',
  startDate: '2025-01-01T00:00:00Z',
  endDate: '2025-12-31T23:59:59Z',
  feedbackType: 'thumbsUp',
  limit: '100',
  skip: '0'
});
Feedback Parameters

componentId (string): Filter feedback by specific component ID
userId (string): Filter feedback by specific user ID
startDate (string): Start date for the feedback range (ISO 8601 format)
endDate (string): End date for the feedback range (ISO 8601 format)
feedbackType (string): Filter by feedback type:
thumbsUp - Positive feedback
thumbsDown - Negative feedback
limit (string): Number of records to return
skip (string): Number of records to skip (for pagination)
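Because `limit` and `skip` page through results, you can drain all feedback with a loop that advances `skip` until a short page comes back. A sketch, assuming each page resolves to a plain array of records (the actual response envelope may differ); the fetcher is injected so the paging logic stays independent of the SDK:

```typescript
// Page through feedback by advancing `skip` until a short page is returned.
// `fetchPage` stands in for a call such as client.v1.chat.retrieveFeedback(...);
// wrapping it lets the loop work regardless of the SDK's exact signature.
async function retrieveAllFeedback<T>(
  fetchPage: (opts: { limit: string; skip: string }) => Promise<T[]>,
  pageSize = 100,
): Promise<T[]> {
  const all: T[] = [];
  let skip = 0;
  for (;;) {
    const page = await fetchPage({ limit: String(pageSize), skip: String(skip) });
    all.push(...page);
    if (page.length < pageSize) break; // last (short or empty) page
    skip += pageSize;
  }
  return all;
}
```

With the SDK you would pass something like `(opts) => client.v1.chat.retrieveFeedback({ componentId: 'comp_123abc', ...opts })` as the fetcher.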
Retrieving Feedback Context
Get the specific message and full conversation for a feedback item to understand the context of user feedback:
import StudyfetchSDK from '@studyfetch/sdk';

const client = new StudyfetchSDK({
  apiKey: 'your-api-key',
  baseURL: 'https://studyfetchapi.com',
});

// Retrieve feedback context for a specific feedback ID
await client.v1.chat.retrieveFeedbackContext({
  feedbackId: 'feedback_123abc'
});
Feedback Context Parameters

feedbackId (string, required): The ID of the feedback item to retrieve context for