Overview

The Chat component creates an AI-powered assistant that can answer questions based on your study materials. It supports web search, RAG (Retrieval Augmented Generation), and multi-step reasoning.

Creating a Chat Component

import StudyfetchSDK from '@studyfetch/sdk';

const client = new StudyfetchSDK({
  apiKey: 'your-api-key',
  baseURL: 'https://studyfetchapi.com',
});

const chatComponent = await client.v1.components.create({
  name: 'Biology Study Assistant',
  type: 'chat',
  config: {
    materials: ['mat-123', 'mat-456'],
    folders: ['folder-789'],
    model: 'gpt-4o-mini-2024-07-18',
    systemPrompt: 'You are a helpful biology tutor. Answer questions based on the provided materials.',
    temperature: 0.7,
    maxTokens: 2048,
    enableWebSearch: true,
    enableRAGSearch: true,
    maxSteps: 5
  }
});

console.log('Chat component created:', chatComponent._id);

Configuration Parameters

name
string
required
Name of the chat component
type
string
required
Must be "chat"
config
object
required
Chat configuration object

Response

{
  "_id": "comp_123abc",
  "name": "Biology Study Assistant",
  "type": "chat",
  "status": "active",
  "config": {
    "materials": ["mat-123", "mat-456"],
    "folders": ["folder-789"],
    "model": "gpt-4o-mini-2024-07-18",
    "systemPrompt": "You are a helpful biology tutor...",
    "temperature": 0.7,
    "maxTokens": 2048,
    "enableWebSearch": true,
    "enableRAGSearch": true,
    "maxSteps": 5
  },
  "createdAt": "2024-01-15T10:00:00Z",
  "updatedAt": "2024-01-15T10:00:00Z",
  "organizationId": "org_456def",
  "usage": {
    "interactions": 0,
    "lastUsed": null
  }
}

Adding Guardrails

Guardrails allow you to apply server-side content policy rules to AI responses. When enabled, the AI’s responses are evaluated against your configured rules before being returned to the user.

Example: Academic Integrity Guardrails

const chatWithGuardrails = await client.v1.components.create({
  name: 'Math Tutor with Guardrails',
  type: 'chat',
  config: {
    materials: ['mat-789'],
    model: 'gpt-4o-mini-2024-07-18',
    enableGuardrails: true,
    guardrailRules: [
      {
        id: 'no-direct-answers',
        action: 'MODIFY',
        condition: 'Response provides direct answers to homework problems without explanation',
        description: 'Guide students to solve problems themselves',
        message: 'Modified to provide guidance instead of direct answers'
      },
      {
        id: 'academic-honesty',
        action: 'WARN',
        condition: 'User asks for test or exam answers',
        description: 'Warn about academic integrity',
        message: 'Remember: This tool is for learning, not for completing graded assignments'
      }
    ]
  }
});

Example: Safety and Content Moderation

const safeChat = await client.v1.components.create({
  name: 'K-12 Science Assistant',
  type: 'chat',
  config: {
    materials: ['mat-science-k12'],
    enableGuardrails: true,
    guardrailRules: [
      {
        id: 'no-medical-advice',
        action: 'BLOCK',
        condition: 'Response contains medical diagnosis or treatment advice',
        description: 'Prevent medical advice',
        message: 'I cannot provide medical advice. Please consult a healthcare professional.'
      },
      {
        id: 'age-appropriate',
        action: 'MODIFY',
        condition: 'Response contains content not suitable for K-12 students',
        description: 'Ensure age-appropriate content',
        message: 'Content adjusted for educational purposes'
      },
      {
        id: 'no-dangerous-experiments',
        action: 'BLOCK',
        condition: 'Response describes dangerous chemical reactions or experiments',
        description: 'Prevent dangerous activities',
        message: 'For safety reasons, I cannot provide instructions for this type of experiment.'
      }
    ]
  }
});

Guardrail Actions Explained

  • BLOCK: Completely prevents the response and shows your custom message to the user
  • WARN: Allows the response but adds a warning message before or after it
  • MODIFY: Instructs the AI to revise its response to comply with the rule
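In a client application, these three outcomes map naturally to a small branch when rendering a guarded response. The sketch below is illustrative only: the event shape ({ action, message, content }) is an assumption, not a documented response field.

```javascript
// Hypothetical sketch: decide what to show the user for each guardrail outcome.
// The event shape ({ action, message, content }) is an assumption for illustration.
function renderGuardrailResult(event) {
  switch (event.action) {
    case 'BLOCK':
      // Suppress the AI response entirely; show only the rule's message
      return { text: event.message, warning: null };
    case 'WARN':
      // Keep the response, but surface the warning alongside it
      return { text: event.content, warning: event.message };
    case 'MODIFY':
      // The server already revised the response; show it as-is
      return { text: event.content, warning: null };
    default:
      return { text: event.content, warning: null };
  }
}
```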

Embedding This Component

Once you’ve created a Chat component, you can embed it on your website using the embedding API.

Generate Embed URL

const embedResponse = await client.v1.components.generateEmbed(chatComponent._id, {
  // User tracking
  userId: 'user-456',
  groupIds: ['class-101', 'class-102'],
  sessionId: 'session-789',
  
  // Chat-specific features
  features: {
    enableWebSearch: true,
    enableHistory: true,
    enableVoice: true,
    enableFollowUps: true,
    enableComponentCreation: false,
    placeholderText: 'Ask me anything about biology...',
    enableWebSearchSources: true,
    enableImageSources: true
  },
  
  // Dimensions
  width: '100%',
  height: '600px',
  
  // Token expiry
  expiryHours: 24
});
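From here you can render the iframe yourself. A minimal helper sketch, assuming the embed response exposes a `url` field (verify the response shape against your SDK version):

```javascript
// Hypothetical helper: turn an embed response into iframe HTML.
// The `url` field on the embed response is an assumption; check the
// response shape returned by your SDK version before relying on it.
function buildEmbedIframe({ url, width = '100%', height = '600px' }) {
  return [
    '<iframe',
    `  src="${url}"`,
    `  width="${width}"`,
    `  height="${height}"`,
    '  frameborder="0"',
    '  allow="microphone; clipboard-write">',
    '</iframe>'
  ].join('\n');
}
```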

Chat-Specific Embedding Features

features.enableWebSearch
boolean
Allow the chat to search the web for current information
features.enableHistory
boolean
default:"true"
Show conversation history and allow users to continue previous chats
features.enableVoice
boolean
default:"false"
Enable voice input for asking questions
features.enableFollowUps
boolean
default:"true"
Show suggested follow-up questions after responses
features.enableComponentCreation
boolean
default:"false"
Allow users to create other components (flashcards, tests) from chat
features.placeholderText
string
default:"Ask a question..."
Custom placeholder text for the chat input
features.enableWebSearchSources
boolean
default:"true"
Show web search sources when web search is used
features.enableImageSources
boolean
default:"true"
Display image sources in responses when relevant

Embed in Your HTML

<iframe 
  src="https://embed.studyfetch.com/component/comp_123abc?token=..."
  width="100%"
  height="600px"
  frameborder="0"
  allow="microphone; clipboard-write"
  style="border: 1px solid #e5e5e5; border-radius: 8px;">
</iframe>

Streaming Chat Responses

The Chat API supports real-time streaming responses using Server-Sent Events (SSE). This allows you to display responses as they’re generated, providing a more interactive experience.

Stream Chat Response

// AI SDK format with messages array
const streamResponse = await client.v1.chat.stream({
  componentId: 'comp_123abc',
  sessionId: 'session-789',
  userId: 'user-456',
  groupIds: ['class-101', 'class-102'],
  messages: [
    { role: 'system', content: 'You are a helpful biology tutor.' },
    { role: 'user', content: 'Explain photosynthesis in simple terms' }
  ]
});

// Process the stream (see "Stream Response Format" below for the event types)
for await (const chunk of streamResponse) {
  if (chunk.type === 'content') {
    console.log(chunk.content);
  }
}

// Custom format with single message
const customStream = await client.v1.chat.stream({
  componentId: 'comp_123abc',
  sessionId: 'session-789',
  userId: 'user-456',
  message: {
    text: 'What is cellular respiration?',
    images: [
      {
        url: 'https://example.com/cell-diagram.png',
        caption: 'Cell structure diagram',
        mimeType: 'image/png'
      }
    ]
  }
});

Stream Parameters

componentId
string
ID of the chat component to use
sessionId
string
Session ID to maintain conversation context
userId
string
User ID for tracking and personalization
groupIds
array
Array of group IDs for access control
context
object
Additional context to pass to the chat
messages
array
Messages array for AI SDK format. Each message should have:
  • role: “system”, “user”, or “assistant”
  • content: The message content
message
object
Single message for custom format

Stream Response Format

The streaming endpoint returns Server-Sent Events (SSE) with the following event types:
data: {"type":"content","content":"The process of photosynthesis..."}

data: {"type":"tool_call","tool":"web_search","args":{"query":"latest photosynthesis research"}}

data: {"type":"sources","sources":[{"title":"Source Title","url":"https://..."}]}

data: {"type":"done","usage":{"tokens":150}}
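If you consume the raw SSE stream directly (for example via `fetch`) instead of the SDK's async iterator, each event arrives as a `data:` line containing a JSON payload. A minimal parser sketch, assuming each payload fits on a single line as in the examples above:

```javascript
// Minimal sketch: parse raw SSE "data:" lines into event objects.
// Assumes each event is a single-line JSON payload, as shown above.
function parseSSEChunk(text) {
  const events = [];
  for (const line of text.split('\n')) {
    const trimmed = line.trim();
    if (!trimmed.startsWith('data:')) continue; // skip blank lines and comments
    const payload = trimmed.slice('data:'.length).trim();
    try {
      events.push(JSON.parse(payload));
    } catch {
      // Ignore partial or malformed payloads; a real client would buffer
      // incomplete lines until the rest of the chunk arrives
    }
  }
  return events;
}
```

A production client should also buffer across chunk boundaries, since a single network read is not guaranteed to end on a line break.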

Example: Building a Streaming Chat Interface

// React example with streaming
// Assumes `client` (the SDK instance) is initialized in an outer scope
import { useState } from 'react';

function ChatInterface({ componentId }) {
  const [messages, setMessages] = useState([]);
  const [streaming, setStreaming] = useState(false);
  const [input, setInput] = useState('');

  const sendMessage = async (text) => {
    setStreaming(true);
    const userMessage = { role: 'user', content: text };
    setMessages(prev => [...prev, userMessage]);

    // Append an empty assistant message that the stream will fill in
    setMessages(prev => [...prev, { role: 'assistant', content: '' }]);

    try {
      const stream = await client.v1.chat.stream({
        componentId,
        sessionId: sessionStorage.getItem('chatSession'),
        messages: [...messages, userMessage]
      });

      for await (const chunk of stream) {
        if (chunk.type === 'content') {
          // Replace (don't mutate) the last message so React re-renders
          setMessages(prev => {
            const updated = [...prev];
            const last = updated[updated.length - 1];
            updated[updated.length - 1] = {
              ...last,
              content: last.content + chunk.content
            };
            return updated;
          });
        }
      }
    } finally {
      setStreaming(false);
    }
  };

  return (
    <div>
      {messages.map((msg, i) => (
        <div key={i} className={msg.role}>
          {msg.content}
        </div>
      ))}
      <form
        onSubmit={(e) => {
          e.preventDefault();
          sendMessage(input);
          setInput('');
        }}
      >
        <input
          value={input}
          onChange={(e) => setInput(e.target.value)}
          disabled={streaming}
        />
      </form>
    </div>
  );
}