Overview
The Chat component creates an AI-powered assistant that can answer questions based on your study materials. It supports web search, RAG (Retrieval Augmented Generation), and multi-step reasoning.
Creating a Chat Component
```typescript
import StudyfetchSDK from '@studyfetch/sdk';

const client = new StudyfetchSDK({
  apiKey: 'your-api-key',
  baseURL: 'https://studyfetchapi.com',
});

const chatComponent = await client.v1.components.create({
  name: 'Biology Study Assistant',
  type: 'chat',
  config: {
    materials: ['mat-123', 'mat-456'],
    folders: ['folder-789'],
    model: 'gpt-4o-mini-2024-07-18',
    systemPrompt: 'You are a helpful biology tutor. Answer questions based on the provided materials.',
    temperature: 0.7,
    maxTokens: 2048,
    enableWebSearch: true,
    enableRAGSearch: true,
    maxSteps: 5
  }
});

console.log('Chat component created:', chatComponent._id);
```
Configuration Parameters
name
string
Name of the chat component

config
object
Chat configuration object

materials
string[]
Array of material IDs to use as context for the chat assistant

folders
string[]
Array of folder IDs containing materials
model
string
default: "gpt-4o-mini-2024-07-18"
AI model to use. Options:
- gpt-4o - Most capable model
- gpt-4o-mini-2024-07-18 - Faster, cost-effective model
systemPrompt
string
System prompt to guide the AI assistant’s behavior and personality

temperature
number
Temperature for response generation (0-2). Lower values make responses more focused and deterministic.

maxTokens
number
Maximum tokens for AI responses

enableWebSearch
boolean
Enable web search capabilities for finding current information

enableRAGSearch
boolean
Enable RAG (Retrieval Augmented Generation) search within materials

maxSteps
number
Maximum steps for multi-step tool calls and reasoning
enableGuardrails
boolean
Enable guardrails to apply server-side content policy rules to AI responses. When enabled, the AI’s responses will be evaluated against your configured rules before being returned to the user.

guardrailRules
array
Array of guardrail rules to control AI behavior. Rules are evaluated on each response. Each rule has the following properties:

id
string
Unique identifier for the rule

action
string
Action to take when the rule is triggered:
- BLOCK - Prevents the response and shows your custom message
- WARN - Allows the response but adds a warning message
- MODIFY - Allows the assistant to revise its response to comply with the rule

condition
string
Natural language description of when the rule should trigger. The AI evaluates whether the response matches this condition.

description
string
Brief description of the rule’s purpose

message
string
Custom message shown to the user when the rule triggers
Common Guardrail Use Cases:
- Academic Integrity: Prevent direct homework/test answers
- Safety: Block harmful, medical, or legal advice
- Age-Appropriate Content: Modify responses for younger audiences
- Brand Guidelines: Ensure responses align with your organization’s values
- Topic Boundaries: Keep conversations focused on intended subject matter
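As an illustration, a topic-boundary rule might look like the sketch below; the `id`, `condition`, and `message` values are placeholders for your own values, not part of the SDK.

```typescript
// Hypothetical topic-boundary rule — id, condition, and message are
// placeholder values; adapt them to your own subject matter.
const topicBoundaryRule = {
  id: 'stay-on-topic',
  action: 'modify',
  condition: 'Response discusses topics unrelated to the biology study materials',
  description: 'Keep the conversation focused on the intended subject matter',
  message: 'Response adjusted to stay on topic'
};
```

A rule like this is passed in the `guardrailRules` array alongside `enableGuardrails: true`.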
Response
```json
{
  "_id": "comp_123abc",
  "name": "Biology Study Assistant",
  "type": "chat",
  "status": "active",
  "config": {
    "materials": ["mat-123", "mat-456"],
    "folders": ["folder-789"],
    "model": "gpt-4o-mini-2024-07-18",
    "systemPrompt": "You are a helpful biology tutor...",
    "temperature": 0.7,
    "maxTokens": 2048,
    "enableWebSearch": true,
    "enableRAGSearch": true,
    "maxSteps": 5
  },
  "createdAt": "2024-01-15T10:00:00Z",
  "updatedAt": "2024-01-15T10:00:00Z",
  "organizationId": "org_456def",
  "usage": {
    "interactions": 0,
    "lastUsed": null
  }
}
```
Adding Guardrails
Guardrails allow you to apply server-side content policy rules to AI responses. When enabled, the AI’s responses are evaluated against your configured rules before being returned to the user.
Example: Academic Integrity Guardrails
```typescript
const chatWithGuardrails = await client.v1.components.create({
  name: 'Math Tutor with Guardrails',
  type: 'chat',
  config: {
    materials: ['mat-789'],
    model: 'gpt-4o-mini-2024-07-18',
    enableGuardrails: true,
    guardrailRules: [
      {
        id: 'no-direct-answers',
        action: 'modify',
        condition: 'Response provides direct answers to homework problems without explanation',
        description: 'Guide students to solve problems themselves',
        message: 'Modified to provide guidance instead of direct answers'
      },
      {
        id: 'academic-honesty',
        action: 'warn',
        condition: 'User asks for test or exam answers',
        description: 'Warn about academic integrity',
        message: 'Remember: This tool is for learning, not for completing graded assignments'
      }
    ]
  }
});
```
Example: Safety and Content Moderation
```typescript
const safeChat = await client.v1.components.create({
  name: 'K-12 Science Assistant',
  type: 'chat',
  config: {
    materials: ['mat-science-k12'],
    enableGuardrails: true,
    guardrailRules: [
      {
        id: 'no-medical-advice',
        action: 'block',
        condition: 'Response contains medical diagnosis or treatment advice',
        description: 'Prevent medical advice',
        message: 'I cannot provide medical advice. Please consult a healthcare professional.'
      },
      {
        id: 'age-appropriate',
        action: 'modify',
        condition: 'Response contains content not suitable for K-12 students',
        description: 'Ensure age-appropriate content',
        message: 'Content adjusted for educational purposes'
      },
      {
        id: 'no-dangerous-experiments',
        action: 'block',
        condition: 'Response describes dangerous chemical reactions or experiments',
        description: 'Prevent dangerous activities',
        message: 'For safety reasons, I cannot provide instructions for this type of experiment.'
      }
    ]
  }
});
```
Guardrail Actions Explained
- BLOCK: Completely prevents the response and shows your custom message to the user
- WARN: Allows the response but adds a warning message before or after it
- MODIFY: Instructs the AI to revise its response to comply with the rule
Embedding This Component
Once you’ve created a Chat component, you can embed it on your website using the embedding API.
Generate Embed URL
```typescript
const embedResponse = await client.v1.components.generateEmbed(chatComponent._id, {
  // User tracking
  userId: 'user-456',
  groupIds: ['class-101', 'class-102'],
  sessionId: 'session-789',

  // Chat-specific features
  features: {
    enableWebSearch: true,
    enableHistory: true,
    enableVoice: true,
    enableFollowUps: true,
    enableComponentCreation: false,
    placeholderText: 'Ask me anything about biology...',
    enableWebSearchSources: true,
    enableImageSources: true,
    enableBadWordsFilter: true
  },

  // Dimensions
  width: '100%',
  height: '600px',

  // Token expiry
  expiryHours: 24
});
```
Chat-Specific Embedding Features
features.enableWebSearch
boolean
Allow the chat to search the web for current information

features.enableHistory
boolean
Show conversation history and allow users to continue previous chats

features.enableVoice
boolean
Enable voice input for asking questions

features.enableFollowUps
boolean
Show suggested follow-up questions after responses

features.enableComponentCreation
boolean
Allow users to create other components (flashcards, tests) from chat

features.placeholderText
string
default: "Ask a question..."
Custom placeholder text for the chat input

features.enableWebSearchSources
boolean
Show web search sources when web search is used

features.enableImageSources
boolean
Display image sources in responses when relevant
Embed in Your HTML
```html
<iframe
  src="https://embed.studyfetch.com/component/comp_123abc?token=..."
  width="100%"
  height="600px"
  frameborder="0"
  allow="microphone; clipboard-write"
  style="border: 1px solid #e5e5e5; border-radius: 8px;">
</iframe>
```
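If you prefer to inject the iframe programmatically, a small helper can build the same markup from the embed response. This is a sketch, not part of the SDK: the `url` field on the response and the `buildEmbedIframe` helper are assumptions — check the actual shape of the object returned by `generateEmbed`.

```javascript
// Sketch: build iframe markup from an embed response. The `url` property
// is an assumed field name on the generateEmbed response.
function buildEmbedIframe({ url, width = '100%', height = '600px' }) {
  return (
    `<iframe src="${url}" width="${width}" height="${height}" ` +
    `frameborder="0" allow="microphone; clipboard-write" ` +
    `style="border: 1px solid #e5e5e5; border-radius: 8px;"></iframe>`
  );
}

// Usage in the browser:
// document.getElementById('chat-container').innerHTML = buildEmbedIframe(embedResponse);
```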
Streaming Chat Responses
The Chat API supports real-time streaming responses using Server-Sent Events (SSE). This allows you to display responses as they’re generated, providing a more interactive experience.
Stream Chat Response
```typescript
// AI SDK format with messages array
const streamResponse = await client.v1.chat.stream({
  componentId: 'comp_123abc',
  sessionId: 'session-789',
  userId: 'user-456',
  groupIds: ['class-101', 'class-102'],
  messages: [
    { role: 'system', content: 'You are a helpful biology tutor.' },
    { role: 'user', content: 'Explain photosynthesis in simple terms' }
  ]
});

// Process the stream
for await (const chunk of streamResponse) {
  console.log(chunk.content);
}

// Custom format with single message
const customStream = await client.v1.chat.stream({
  componentId: 'comp_123abc',
  sessionId: 'session-789',
  userId: 'user-456',
  message: {
    text: 'What is cellular respiration?',
    images: [
      {
        url: 'https://example.com/cell-diagram.png',
        caption: 'Cell structure diagram',
        mimeType: 'image/png'
      }
    ]
  }
});
```
Stream Parameters
componentId
string
ID of the chat component to use

sessionId
string
Session ID to maintain conversation context

userId
string
User ID for tracking and personalization

groupIds
string[]
Array of group IDs for access control

Additional context to pass to the chat

messages
array
Messages array for AI SDK format. Each message should have:
- role: "system", "user", or "assistant"
- content: The message content

message
object
Single message for custom format

message.text
string
Text content of the message

message.images
array
Array of images attached to the message. Each image can include an image URL, base64-encoded image data (an alternative to the URL), a caption, and a MIME type (e.g., "image/png", "image/jpeg").
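For inline image data, a custom-format message might look like the sketch below. Note that `base64` is an assumed field name for the base64-encoded image data (the docs describe it only as an alternative to the URL); verify it against the API before relying on it.

```javascript
// Hypothetical custom-format message with inline image data.
// `base64` is an assumed field name (an alternative to `url`).
const message = {
  text: 'What organelle is highlighted in this diagram?',
  images: [
    {
      base64: 'iVBORw0KGgoAAA...', // truncated example data
      caption: 'Unlabeled cell diagram',
      mimeType: 'image/png'
    }
  ]
};
```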
The streaming endpoint returns Server-Sent Events (SSE) with the following event types:
```
data: {"type":"content","content":"The process of photosynthesis..."}
data: {"type":"tool_call","tool":"web_search","args":{"query":"latest photosynthesis research"}}
data: {"type":"sources","sources":[{"title":"Source Title","url":"https://..."}]}
data: {"type":"done","usage":{"tokens":150}}
```
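If you consume the raw SSE stream yourself rather than through the SDK's async iterator, each `data:` line can be parsed with a small helper. This is a sketch, not part of the SDK:

```javascript
// Parse one SSE line of the form `data: {...}` into an event object;
// returns null for non-data lines (comments, blank keep-alives).
function parseSSELine(line) {
  if (!line.startsWith('data: ')) return null;
  return JSON.parse(line.slice('data: '.length));
}

const events = [
  'data: {"type":"content","content":"The process of photosynthesis..."}',
  'data: {"type":"done","usage":{"tokens":150}}'
].map(parseSSELine);

console.log(events[0].type); // "content"
console.log(events[1].usage.tokens); // 150
```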
Example: Building a Streaming Chat Interface
```jsx
// React example with streaming
import { useState } from 'react';

function ChatInterface({ componentId }) {
  const [messages, setMessages] = useState([]);
  const [input, setInput] = useState('');
  const [streaming, setStreaming] = useState(false);

  const sendMessage = async (text) => {
    setStreaming(true);
    const userMessage = { role: 'user', content: text };
    // Append the user message plus an empty assistant placeholder to fill in
    setMessages(prev => [...prev, userMessage, { role: 'assistant', content: '' }]);

    try {
      const stream = await client.v1.chat.stream({
        componentId,
        sessionId: sessionStorage.getItem('chatSession'),
        messages: [...messages, userMessage]
      });

      for await (const chunk of stream) {
        if (chunk.type === 'content') {
          // Append the chunk to the last (assistant) message without
          // mutating the previous state object
          setMessages(prev => {
            const updated = [...prev];
            const last = updated[updated.length - 1];
            updated[updated.length - 1] = { ...last, content: last.content + chunk.content };
            return updated;
          });
        }
      }
    } finally {
      setStreaming(false);
    }
  };

  return (
    <div>
      {messages.map((msg, i) => (
        <div key={i} className={msg.role}>
          {msg.content}
        </div>
      ))}
      <form
        onSubmit={(e) => {
          e.preventDefault();
          sendMessage(input);
          setInput('');
        }}
      >
        <input
          value={input}
          onChange={(e) => setInput(e.target.value)}
          disabled={streaming}
        />
      </form>
    </div>
  );
}
```