Overview

The Chat component creates an AI-powered assistant that can answer questions based on your study materials. It supports web search, RAG (Retrieval Augmented Generation), and multi-step reasoning.

Creating a Chat Component

import StudyfetchSDK from '@studyfetch/sdk';

const client = new StudyfetchSDK({
  apiKey: 'your-api-key',
  baseURL: 'https://studyfetchapi.com',
});

const chatComponent = await client.v1.components.create({
  name: 'Biology Study Assistant',
  type: 'chat',
  config: {
    materials: ['mat-123', 'mat-456'],
    folders: ['folder-789'],
    model: 'gpt-4o-mini-2024-07-18',
    systemPrompt: 'You are a helpful biology tutor. Answer questions based on the provided materials.',
    temperature: 0.7,
    maxTokens: 2048,
    enableWebSearch: true,
    enableRAGSearch: true,
    maxSteps: 5,
    enableHistory: false,
    enableVoice: false,
    enableFollowUps: false,
    enableComponentCreation: false,
    enableMessageGrading: false,
    enableReferenceMode: false,
    enableFeedback: true,
    // UI Customization (optional)
    hideTitle: false,
    hideEmptyState: false,
    emptyStateHTML: '<div style="text-align: center; padding: 2rem;"><h3>Welcome to Biology Assistant!</h3><p>Ask me anything about your course materials.</p></div>'
  }
});

console.log('Chat component created:', chatComponent._id);

Configuration Parameters

name
string
required
Name of the chat component
type
string
required
Must be "chat"
config
object
required
Chat configuration object

Response

{
  "_id": "comp_123abc",
  "name": "Biology Study Assistant",
  "type": "chat",
  "status": "active",
  "config": {
    "materials": ["mat-123", "mat-456"],
    "folders": ["folder-789"],
    "model": "gpt-4o-mini-2024-07-18",
    "systemPrompt": "You are a helpful biology tutor...",
    "temperature": 0.7,
    "maxTokens": 2048,
    "enableWebSearch": true,
    "enableRAGSearch": true,
    "maxSteps": 5
  },
  "createdAt": "2024-01-15T10:00:00Z",
  "updatedAt": "2024-01-15T10:00:00Z",
  "organizationId": "org_456def",
  "usage": {
    "interactions": 0,
    "lastUsed": null
  }
}

Adding Guardrails

Guardrails allow you to apply server-side content policy rules to AI responses. When enabled, the AI’s responses are evaluated against your configured rules before being returned to the user.

Example: Academic Integrity Guardrails

const chatWithGuardrails = await client.v1.components.create({
  name: 'Math Tutor with Guardrails',
  type: 'chat',
  config: {
    materials: ['mat-789'],
    model: 'gpt-4o-mini-2024-07-18',
    enableFeedback: true,
    enableGuardrails: true,
    guardrailRules: [
      {
        id: 'no-direct-answers',
        action: 'modify',
        condition: 'Response provides direct answers to homework problems without explanation',
        description: 'Guide students to solve problems themselves',
        message: 'Modified to provide guidance instead of direct answers'
      },
      {
        id: 'academic-honesty',
        action: 'warn',
        condition: 'User asks for test or exam answers',
        description: 'Warn about academic integrity',
        message: 'Remember: This tool is for learning, not for completing graded assignments'
      }
    ]
  }
});

Example: Safety and Content Moderation

const safeChat = await client.v1.components.create({
  name: 'K-12 Science Assistant',
  type: 'chat',
  config: {
    materials: ['mat-science-k12'],
    enableFeedback: true,
    enableGuardrails: true,
    guardrailRules: [
      {
        id: 'no-medical-advice',
        action: 'block',
        condition: 'Response contains medical diagnosis or treatment advice',
        description: 'Prevent medical advice',
        message: 'I cannot provide medical advice. Please consult a healthcare professional.'
      },
      {
        id: 'age-appropriate',
        action: 'modify',
        condition: 'Response contains content not suitable for K-12 students',
        description: 'Ensure age-appropriate content',
        message: 'Content adjusted for educational purposes'
      },
      {
        id: 'no-dangerous-experiments',
        action: 'block',
        condition: 'Response describes dangerous chemical reactions or experiments',
        description: 'Prevent dangerous activities',
        message: 'For safety reasons, I cannot provide instructions for this type of experiment.'
      }
    ]
  }
});

Guardrail Actions Explained

  • BLOCK: Completely prevents the response and shows your custom message to the user
  • WARN: Allows the response but adds a warning message before or after it
  • MODIFY: Instructs the AI to revise its response to comply with the rule
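The three actions can be summarized in client-side terms. Below is a minimal sketch of the semantics, assuming a hypothetical evaluation result of the shape `{ action, message, content }` — this shape is an illustration only, not the actual API payload (the real evaluation happens server-side):

```javascript
// Illustration of the guardrail action semantics described above.
// The { action, message, content } shape is an assumption for this sketch.
function renderGuardrailedResponse(result) {
  switch (result.action) {
    case 'block':
      // Response is suppressed entirely; only the rule's custom message is shown.
      return result.message;
    case 'warn':
      // Response is kept, with the warning message attached.
      return `${result.message}\n\n${result.content}`;
    case 'modify':
      // The AI has already revised its response to comply with the rule.
      return result.content;
    default:
      return result.content;
  }
}
```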

Embedding This Component

Once you’ve created a Chat component, you can embed it on your website using the embedding API.

Generate Embed URL

const embedResponse = await client.v1.components.generateEmbed(chatComponent._id, {
  // User tracking
  userId: 'user-456',
  studentName: 'Jane Smith',  // Student name for display
  groupIds: ['class-101', 'class-102'],
  sessionId: 'session-789',
  
  // Chat-specific features
  features: {
    enableWebSearch: true,
    enableHistory: true,
    enableVoice: true,
    enableFollowUps: true,
    enableComponentCreation: false,
    placeholderText: 'Ask me anything about biology...',
    enableWebSearchSources: true,
    enableImageSources: true,
    enableBadWordsFilter: true,
    enablePromptingScore: true,
    enableResponsibilityScore: true,
    enableReferenceMode: false,
    enableGuardrails: true,
    enableOutline: false,
    enableTranscript: false
  },
  
  // Dimensions
  width: '100%',
  height: '600px',
  
  // Token expiry
  expiryHours: 24
});

Chat-Specific Embedding Features

features.enableWebSearch
boolean
Allow the chat to search the web for current information
features.enableHistory
boolean
default:"true"
Show conversation history and allow users to continue previous chats
features.enableVoice
boolean
default:"false"
Enable voice input for asking questions
features.enableFollowUps
boolean
default:"true"
Show suggested follow-up questions after responses
features.enableComponentCreation
boolean
default:"false"
Allow users to create other components (flashcards, tests) from chat
features.placeholderText
string
default:"Ask a question..."
Custom placeholder text for the chat input
features.enableWebSearchSources
boolean
default:"true"
Show web search sources when web search is used
features.enableImageSources
boolean
default:"true"
Display image sources in responses when relevant
features.enableBadWordsFilter
boolean
default:"true"
required
Enable filtering of inappropriate language (required)
features.enablePromptingScore
boolean
default:"false"
Enable prompting quality scoring for user messages (1-4 scale)
features.enableResponsibilityScore
boolean
default:"false"
Enable responsibility scoring for user messages (1-4 scale)
features.enableReferenceMode
boolean
default:"false"
Show reference titles and URLs instead of source content in citations
features.enableGuardrails
boolean
default:"false"
Apply guardrail rules configured on the component to embedded chat
features.enableOutline
boolean
default:"false"
Enable document outline navigation (for document-based components)
features.enableTranscript
boolean
default:"false"
Enable transcript view (for video/audio-based components)

Embed in Your HTML

<iframe 
  src="https://embed.studyfetch.com/component/comp_123abc?token=..."
  width="100%"
  height="600px"
  frameborder="0"
  allow="microphone; clipboard-write"
  style="border: 1px solid #e5e5e5; border-radius: 8px;">
</iframe>
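Rather than hard-coding the iframe, you can build it from the `generateEmbed()` response. A sketch, assuming the response exposes the signed embed URL as `embedResponse.url` and echoes back `width`/`height` — verify these field names against your actual response shape before relying on them:

```javascript
// Sketch: build the embed iframe markup from a generateEmbed() response.
// The field names `url`, `width`, and `height` are assumptions for this example.
function buildEmbedIframe(embedResponse) {
  return `<iframe
  src="${embedResponse.url}"
  width="${embedResponse.width || '100%'}"
  height="${embedResponse.height || '600px'}"
  frameborder="0"
  allow="microphone; clipboard-write"
  style="border: 1px solid #e5e5e5; border-radius: 8px;">
</iframe>`;
}

// In the browser, insert it into a container element:
// document.getElementById('chat-container').innerHTML = buildEmbedIframe(embedResponse);
```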

Streaming Chat Responses

The Chat API supports real-time streaming responses using Server-Sent Events (SSE). This allows you to display responses as they’re generated, providing a more interactive experience.

Stream Chat Response

// AI SDK format with messages array
const streamResponse = await client.v1.chat.stream({
  componentId: 'comp_123abc',
  sessionId: 'session-789',
  userId: 'user-456',
  groupIds: ['class-101', 'class-102'],
  messages: [
    { role: 'system', content: 'You are a helpful biology tutor.' },
    { role: 'user', content: 'Explain photosynthesis in simple terms' }
  ]
});

// Process the stream
for await (const chunk of streamResponse) {
  console.log(chunk.content);
}

// Custom format with single message
const customStream = await client.v1.chat.stream({
  componentId: 'comp_123abc',
  sessionId: 'session-789',
  userId: 'user-456',
  message: {
    text: 'What is cellular respiration?',
    images: [
      {
        url: 'https://example.com/cell-diagram.png',
        caption: 'Cell structure diagram',
        mimeType: 'image/png'
      }
    ]
  }
});

Stream Parameters

componentId
string
ID of the chat component to use
sessionId
string
Session ID to maintain conversation context
userId
string
User ID for tracking and personalization
groupIds
array
Array of group IDs for access control
context
object
Additional context to pass to the chat
messages
array
Messages array for AI SDK format. Each message should have:
  • role: "system", "user", or "assistant"
  • content: The message content
message
object
Single message for custom format

Stream Response Format

The streaming endpoint returns Server-Sent Events (SSE) with the following event types:
data: {"type":"content","content":"The process of photosynthesis..."}

data: {"type":"tool_call","tool":"web_search","args":{"query":"latest photosynthesis research"}}

data: {"type":"sources","sources":[{"title":"Source Title","url":"https://..."}]}

data: {"type":"done","usage":{"tokens":150}}
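If you consume the stream through the SDK's async iterator you never see this wire format directly, but when reading the raw HTTP stream you need to decode it yourself. A minimal sketch: each event arrives as a `data: {...}` line, with blank lines separating events.

```javascript
// Sketch: parse raw SSE text into the event objects shown above.
// Assumes each event is a single-line `data: <json>` record.
function parseSSEEvents(rawText) {
  return rawText
    .split('\n')
    .filter((line) => line.startsWith('data: '))
    .map((line) => JSON.parse(line.slice('data: '.length)));
}
```

From here you can branch on `event.type` ('content', 'tool_call', 'sources', 'done') exactly as in the React example below.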

Example: Building a Streaming Chat Interface

// React example with streaming
import { useState } from 'react';

function ChatInterface({ componentId }) {
  const [messages, setMessages] = useState([]);
  const [streaming, setStreaming] = useState(false);

  const sendMessage = async (text) => {
    setStreaming(true);
    const userMessage = { role: 'user', content: text };
    setMessages(prev => [...prev, userMessage]);

    const assistantMessage = { role: 'assistant', content: '' };
    setMessages(prev => [...prev, assistantMessage]);

    try {
      const stream = await client.v1.chat.stream({
        componentId,
        sessionId: sessionStorage.getItem('chatSession'),
        messages: [...messages, userMessage]
      });

      for await (const chunk of stream) {
        if (chunk.type === 'content') {
          setMessages(prev => {
            // Replace the last message immutably so React re-renders reliably
            const updated = [...prev];
            const last = updated[updated.length - 1];
            updated[updated.length - 1] = { ...last, content: last.content + chunk.content };
            return updated;
          });
        }
      }
    } finally {
      setStreaming(false);
    }
  };

  return (
    <div>
      {messages.map((msg, i) => (
        <div key={i} className={msg.role}>
          {msg.content}
        </div>
      ))}
      <input
        onKeyDown={(e) => {
          // <input> elements do not fire onSubmit; send on Enter instead
          if (e.key === 'Enter') {
            sendMessage(e.currentTarget.value);
            e.currentTarget.value = '';
          }
        }}
        disabled={streaming}
      />
    </div>
  );
}

Managing Chat Embed Context

The context API allows you to dynamically push contextual information to specific embedded chat instances. This is particularly useful for applications where the chat component remains persistent while the surrounding content changes, such as:
  • Practice Tests: Update context as users navigate between questions
  • Multi-page Tutorials: Provide page-specific context without resetting chat history
  • Dynamic Content: Keep the AI informed about what the user is currently viewing

Push Context

Add context information to an embedded chat component:
import StudyfetchSDK from '@studyfetch/sdk';

const client = new StudyfetchSDK({
  apiKey: 'your-api-key',
  baseURL: 'https://studyfetchapi.com',
});

// Push context to a specific embed
await client.v1.embed.context.push({
  token: 'embed-token-123',
  context: 'The user is now on Question 2 which discusses cellular respiration and the Krebs cycle.'
});

Retrieve Context

Get the current context for an embedded chat component:
// Retrieve current context
const currentContext = await client.v1.embed.context.retrieve({
  token: 'embed-token-123'
});

console.log('Current context:', currentContext);

Clear Context

Clear all context from an embedded chat component:
// Clear all context
await client.v1.embed.context.clear({
  token: 'embed-token-123'
});

Complete Example: Practice Test with Context

Here’s a complete example of using context management in a practice test application:
class PracticeTestChat {
  constructor(client, embedToken) {
    this.client = client;
    this.embedToken = embedToken;
    this.currentQuestion = 0;
  }

  async navigateToQuestion(questionNumber, questionData) {
    // Clear previous context
    await this.client.v1.embed.context.clear({
      token: this.embedToken
    });

    // Push new context for current question
    const contextText = `The user is now on Question ${questionNumber}: ${questionData.title}. 
Topic: ${questionData.topic}
Content: ${questionData.content}`;

    await this.client.v1.embed.context.push({
      token: this.embedToken,
      context: contextText
    });

    this.currentQuestion = questionNumber;
  }

  async addSupplementalContext(additionalInfo) {
    // Add more context without clearing previous
    await this.client.v1.embed.context.push({
      token: this.embedToken,
      context: `Additional information: ${additionalInfo}`
    });
  }
}

// Usage
const chat = new PracticeTestChat(client, 'embed-token-123');

// User navigates to question 1
await chat.navigateToQuestion(1, {
  title: 'Photosynthesis Process',
  topic: 'Biology - Cellular Processes',
  content: 'Explain the light-dependent reactions of photosynthesis...'
});

// User moves to question 2
await chat.navigateToQuestion(2, {
  title: 'Cellular Respiration',
  topic: 'Biology - Cellular Processes',
  content: 'Describe the steps of the Krebs cycle...'
});

Context API Parameters

token
string
required
The embed token for the specific chat instance. Obtained from generateEmbed() response.
context
string
required
The context string to add to the chat. Can include any relevant information about what the user is currently viewing or doing.

Retrieving Chat Feedback

You can retrieve feedback data (thumbs up/down) from users interacting with your chat components:
import StudyfetchSDK from '@studyfetch/sdk';

const client = new StudyfetchSDK({
  apiKey: 'your-api-key',
  baseURL: 'https://studyfetchapi.com',
});

// Retrieve all feedback
await client.v1.chat.retrieveFeedback();

// Filter feedback by component
await client.v1.chat.retrieveFeedback({
  componentId: 'comp_123abc',
  startDate: '2025-01-01T00:00:00Z',
  endDate: '2025-12-31T23:59:59Z',
  feedbackType: 'thumbsUp',
  limit: '100',
  skip: '0'
});

Feedback Parameters

componentId
string
Filter feedback by specific component ID
userId
string
Filter feedback by specific user ID
startDate
string
Start date for feedback range (ISO 8601 format)
endDate
string
End date for feedback range (ISO 8601 format)
feedbackType
enum
Filter by feedback type:
  • thumbsUp - Positive feedback
  • thumbsDown - Negative feedback
limit
string
default:"100"
Number of records to return
skip
string
default:"0"
Number of records to skip (for pagination)
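Since `limit` and `skip` are plain offset-pagination parameters, collecting every matching record is a simple loop. A sketch — the page size is an arbitrary choice, and the response is assumed here to be a plain array of records; adjust if your actual response wraps the list in an envelope (e.g. `{ items, total }`):

```javascript
// Sketch: page through all feedback for a component using limit/skip.
// Assumes retrieveFeedback() resolves to an array of feedback records.
async function fetchAllFeedback(client, componentId, pageSize = 100) {
  const all = [];
  for (let skip = 0; ; skip += pageSize) {
    const page = await client.v1.chat.retrieveFeedback({
      componentId,
      limit: String(pageSize), // both parameters are strings per the docs above
      skip: String(skip),
    });
    all.push(...page);
    if (page.length < pageSize) break; // last (possibly empty) page
  }
  return all;
}
```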

Retrieving Feedback Context

Retrieve the specific message and the full conversation associated with a feedback item, so you can see the context in which the feedback was given:
import StudyfetchSDK from '@studyfetch/sdk';

const client = new StudyfetchSDK({
  apiKey: 'your-api-key',
  baseURL: 'https://studyfetchapi.com',
});

// Retrieve feedback context for a specific feedback ID
await client.v1.chat.retrieveFeedbackContext({
  feedbackId: 'feedback_123abc'
});

Feedback Context Parameters

feedbackId
string
required
The ID of the feedback item to retrieve context for