# Prompt Engineering
Cranberrry's SSE response parsing algorithm can convert streaming text responses into structured data that renders as rich React components. This guide teaches you how to structure your LLM prompts to maximize the effectiveness of this parsing system.
## Understanding the Parsing Algorithm
Cranberrry uses a tag-based parsing system that looks for specific XML-like tags in your LLM's streaming response. When these tags are detected, they're automatically converted into React components with structured data.
### How It Works
- Tag Detection: The parser scans for opening and closing tags like `<conversation>`, `<content-block>`, etc.
- Content Processing: Text within tags is processed as either plain text or JSON
- Component Rendering: Each tag type maps to a specific React component
- Real-time Updates: Components update as new chunks arrive in the stream
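The detection step can be sketched as follows. This is an illustrative model of how a streaming tag parser might work, not Cranberrry's actual implementation; the `extractSegments` helper and `Segment` type are hypothetical names:

```typescript
// Hypothetical sketch of tag detection over a stream: accumulate chunks in a
// buffer and emit a segment whenever a complete <tag>...</tag> pair appears.
type Segment = { tag: string; body: string };

function extractSegments(buffer: string): { segments: Segment[]; rest: string } {
  const segments: Segment[] = [];
  // Backreference \1 ensures the closing tag matches the opening tag.
  const re = /<([a-z-]+)>([\s\S]*?)<\/\1>/g;
  let lastEnd = 0;
  let m: RegExpExecArray | null;
  while ((m = re.exec(buffer)) !== null) {
    segments.push({ tag: m[1], body: m[2] });
    lastEnd = re.lastIndex;
  }
  // Anything after the last complete tag may still be streaming in,
  // so it is carried over to the next chunk.
  return { segments, rest: buffer.slice(lastEnd) };
}

// Simulate two SSE chunks arriving, the first ending mid-tag.
const chunk1 = '<conversation>Hello!</conversation><content-';
const chunk2 = 'block>{"type": "summary"}</content-block>';
const first = extractSegments(chunk1);
const second = extractSegments(first.rest + chunk2);
```

The key idea is that incomplete tags are held back until more of the stream arrives, which is why components can render progressively without ever seeing a half-parsed block.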
## Core Tag Structure
Your LLM should wrap different types of content in specific tags. Here's the basic structure:
```
<conversation>Your conversational response here</conversation>
<content-block>{"type": "summary", "content": "Structured data", "recommendations": ["item1", "item2"]}</content-block>
<action-block>{"action": "next_step", "message": "What to do next"}</action-block>
```
## Tag Types and Best Practices
### 1. Conversation Tags
Use `<conversation>` for natural language responses that should appear as chat messages.
Good Example:
```
<conversation>Hello! I'm your AI assistant. I understand you want to build a React component. Let me help you with that step by step.</conversation>
```
Best Practices:
- Keep responses conversational and engaging
- Use clear, concise language
- Include context about what you're helping with
- Make responses feel natural and human-like
### 2. Content Block Tags
Use `<content-block>` for structured data that should be displayed in cards or formatted sections.
Good Example:
```
<content-block>{"type": "analysis", "content": "Based on your requirements, I've identified 3 key areas to focus on:", "recommendations": ["Implement proper state management", "Add error handling", "Optimize for performance"]}</content-block>
```
JSON Structure:
```json
{
  "type": "summary|analysis|recommendation|status",
  "content": "Main content text",
  "recommendations": ["item1", "item2", "item3"],
  "metadata": {
    "priority": "high|medium|low",
    "category": "frontend|backend|design"
  }
}
```
Best Practices:
- Always use valid JSON within content blocks
- Include a `type` field to categorize the content
- Use a `recommendations` array for actionable items
- Keep content concise but informative
### 3. Action Block Tags
Use `<action-block>` for instructions or next steps that require user interaction.
Good Example:
```
<action-block>{"action": "await_confirmation", "message": "Please review the code above and let me know if you'd like me to implement any changes or proceed with the next step."}</action-block>
```
JSON Structure:
```json
{
  "action": "await_confirmation|proceed|stop|retry",
  "message": "Clear instruction for the user",
  "options": ["option1", "option2"],
  "timeout": 30000
}
```
Best Practices:
- Use clear, actionable language
- Specify what you're waiting for
- Provide options when possible
- Set appropriate timeouts for time-sensitive actions
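On the client side, the action-block JSON above can be handled with a typed dispatch. The `ActionBlockData` interface mirrors this guide's JSON structure, but the interface and the `describeAction` helper are illustrative, not part of Cranberrry's API:

```typescript
// Illustrative types mirroring the action-block JSON structure shown above.
interface ActionBlockData {
  action: "await_confirmation" | "proceed" | "stop" | "retry";
  message: string;
  options?: string[];
  timeout?: number;
}

// Hypothetical dispatcher: a real app would trigger UI state changes here
// (show a confirm dialog, continue the stream, etc.) instead of returning text.
function describeAction(block: ActionBlockData): string {
  switch (block.action) {
    case "await_confirmation":
      return `Waiting for user: ${block.message}`;
    case "proceed":
      return `Continuing: ${block.message}`;
    case "stop":
      return `Stopped: ${block.message}`;
    case "retry":
      return `Retrying: ${block.message}`;
  }
}

const action: ActionBlockData = JSON.parse(
  '{"action": "await_confirmation", "message": "Proceed?", "timeout": 30000}'
);
```

Exhaustive `switch` dispatch over the `action` union means the compiler flags any action value you forget to handle.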
## Prompt Engineering Strategies
### 1. Structured Response Pattern
Teach your LLM to always structure responses with appropriate tags:
```
You are an AI assistant that helps with coding tasks. Always structure your responses using these tags:

- Use <conversation> for your main response and explanations
- Use <content-block> for structured data, summaries, or recommendations (must be valid JSON)
- Use <action-block> when you need user input or confirmation

Example response format:

<conversation>I'll help you build that React component. Let me analyze your requirements first.</conversation>
<content-block>{"type": "analysis", "content": "Requirements analysis complete", "recommendations": ["Use functional components", "Implement proper state management"]}</content-block>
<action-block>{"action": "await_confirmation", "message": "Should I proceed with this approach?"}</action-block>
```
### 2. Progressive Disclosure Pattern
Break complex responses into multiple blocks for better user experience:
```
<conversation>I'll help you create a user authentication system. Let me start by understanding your current setup.</conversation>
<content-block>{"type": "assessment", "content": "Current project analysis", "recommendations": ["Check existing auth setup", "Identify security requirements"]}</content-block>
<conversation>Based on your tech stack, I recommend using JWT tokens with refresh token rotation.</conversation>
<content-block>{"type": "recommendation", "content": "JWT implementation plan", "recommendations": ["Install required packages", "Create auth middleware", "Set up token refresh"]}</content-block>
<action-block>{"action": "await_confirmation", "message": "Shall I proceed with the JWT implementation?"}</action-block>
```
### 3. Error Handling Pattern
Structure error responses to be helpful and actionable:
```
<conversation>I encountered an issue while processing your request. Let me explain what happened and how we can fix it.</conversation>
<content-block>{"type": "error", "content": "Error: Invalid API key format", "recommendations": ["Check API key format", "Verify environment variables", "Test connection"]}</content-block>
<action-block>{"action": "await_input", "message": "Please provide a valid API key or let me help you troubleshoot further."}</action-block>
```
## Advanced Techniques
### 1. Dynamic Content Generation
Use conditional logic in your prompts to generate different content blocks:
If the user asks for code review:

```
<conversation>I'll review your code and provide feedback.</conversation>
<content-block>{"type": "review", "content": "Code review complete", "recommendations": ["Fix potential memory leak", "Add input validation", "Improve error handling"]}</content-block>
```

If the user asks for implementation:

```
<conversation>I'll help you implement this feature.</conversation>
<content-block>{"type": "implementation", "content": "Implementation plan ready", "recommendations": ["Create component structure", "Add state management", "Implement event handlers"]}</content-block>
```
### 2. Multi-Step Workflows
Structure complex tasks as a series of steps:
Step 1: Analysis

```
<conversation>Let me analyze your requirements first.</conversation>
<content-block>{"type": "analysis", "content": "Requirements analysis", "recommendations": ["Identify core features", "Plan architecture", "Estimate complexity"]}</content-block>
```

Step 2: Planning

```
<conversation>Based on the analysis, here's my recommended approach.</conversation>
<content-block>{"type": "plan", "content": "Implementation plan", "recommendations": ["Phase 1: Core functionality", "Phase 2: UI/UX", "Phase 3: Testing"]}</content-block>
```

Step 3: Execution

```
<conversation>Let's start implementing. I'll begin with the core functionality.</conversation>
<action-block>{"action": "await_confirmation", "message": "Ready to proceed with implementation?"}</action-block>
```
### 3. Context-Aware Responses
Use the conversation context to provide more relevant responses:
Based on the user's previous messages and current context:

```
<conversation>I see you're working on a React project. Let me help you with the authentication system you mentioned earlier.</conversation>
<content-block>{"type": "context", "content": "Continuing from previous discussion", "recommendations": ["Review previous implementation", "Add missing features", "Test current setup"]}</content-block>
```
## Common Pitfalls to Avoid
### 1. Invalid JSON in Content Blocks
❌ Bad:
```
<content-block>{"type": "summary", "content": "This is a summary", "recommendations": [item1, item2]}</content-block>
```
✅ Good:
```
<content-block>{"type": "summary", "content": "This is a summary", "recommendations": ["item1", "item2"]}</content-block>
```
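Because an LLM can still emit malformed JSON despite good prompting, the rendering side should parse defensively. A minimal sketch, where `parseBlock` is a hypothetical helper rather than part of Cranberrry's API:

```typescript
// Defensive parsing: return null for invalid JSON instead of throwing,
// so the UI can render a fallback rather than crash mid-stream.
function parseBlock(raw: string): unknown | null {
  try {
    return JSON.parse(raw);
  } catch {
    return null;
  }
}

// The bad example above: unquoted array items are not valid JSON.
const bad = '{"type": "summary", "recommendations": [item1, item2]}';
const good = '{"type": "summary", "recommendations": ["item1", "item2"]}';
```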
### 2. Unclosed Tags
❌ Bad:
```
<conversation>This is a response
<content-block>{"type": "data"}</content-block>
```
✅ Good:
```
<conversation>This is a response</conversation>
<content-block>{"type": "data"}</content-block>
```
### 3. Overly Complex JSON
❌ Bad:
```
<content-block>{"type": "complex", "data": {"nested": {"very": {"deep": {"structure": "hard to parse"}}}}}</content-block>
```
✅ Good:
```
<content-block>{"type": "simple", "content": "Clear, flat structure", "recommendations": ["Keep it simple", "Use flat objects"]}</content-block>
```
## Testing Your Prompts
### 1. Validate JSON Structure
Before sending to your LLM, test that your expected JSON structure is valid:
```typescript
// Test your expected JSON structure
const testBlock = {
  "type": "summary",
  "content": "Test content",
  "recommendations": ["item1", "item2"]
};

console.log(JSON.stringify(testBlock)); // Should be valid JSON
```
### 2. Test Tag Parsing
Use the Cranberrry playground to test how your prompts parse:
```typescript
// Example test response
const testResponse = `<conversation>Hello! I'm testing the parsing.</conversation>
<content-block>{"type": "test", "content": "This should parse correctly", "recommendations": ["Test item 1", "Test item 2"]}</content-block>
<action-block>{"action": "await_confirmation", "message": "Did this parse correctly?"}</action-block>`;
```
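Before pasting a response into the playground, you can sanity-check it offline. This sketch (the `jsonBodiesParse` helper is ours, not a Cranberrry utility) verifies that every JSON-bearing tag in a sample response actually contains valid JSON:

```typescript
// Check that each occurrence of the given tags wraps parseable JSON.
function jsonBodiesParse(response: string, tags: string[]): boolean {
  return tags.every((tag) => {
    const re = new RegExp(`<${tag}>([\\s\\S]*?)</${tag}>`, "g");
    let m: RegExpExecArray | null;
    while ((m = re.exec(response)) !== null) {
      try {
        JSON.parse(m[1]);
      } catch {
        return false; // one malformed body fails the whole response
      }
    }
    return true;
  });
}

const sampleResponse = `<conversation>Hello! I'm testing the parsing.</conversation>
<content-block>{"type": "test", "content": "This should parse correctly"}</content-block>
<action-block>{"action": "await_confirmation", "message": "Did this parse correctly?"}</action-block>`;

const ok = jsonBodiesParse(sampleResponse, ["content-block", "action-block"]);
```

Note that `<conversation>` is deliberately excluded: it carries plain text, not JSON.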
## Integration with Cranberrry
### 1. Configure Tag Processors
In your React application, configure how each tag type should be processed:
```typescript
const tagConfigs: CBTagConfig[] = [
  {
    tag: "conversation",
    processor: "TEXT",
    component: ConversationBlock
  },
  {
    tag: "content-block",
    processor: "JSON",
    component: ContentBlock
  },
  {
    tag: "action-block",
    processor: "JSON",
    component: ActionBlock
  },
];
```
### 2. Create Custom Components
Build React components that render your structured data:
```tsx
const ContentBlock = ({ ai }: { ai: any }) => (
  <div className="content-card">
    <h3>{ai.type}</h3>
    <p>{ai.content}</p>
    {ai.recommendations && (
      <ul>
        {ai.recommendations.map((rec: string, index: number) => (
          <li key={index}>{rec}</li>
        ))}
      </ul>
    )}
  </div>
);
```
## Text-to-UI Conversion
Cranberrry's powerful text-to-UI conversion system transforms your structured text responses into rich, interactive React components. This system allows you to create dynamic user interfaces that update in real-time as responses stream in.
### Overview
The text-to-UI conversion process works by:
- Parsing custom tags from your LLM's structured responses
- Mapping tags to React components that render the data
- Updating UI in real-time as new content streams in
- Creating rich, interactive experiences from simple text
### Basic Example
Your LLM generates structured responses:
```
<conversation>I'll help you build a React component.</conversation>
<code-analysis>{"language": "javascript", "issues": ["missing semicolon"], "suggestions": ["Add semicolon"]}</code-analysis>
<progress-update>{"step": 1, "total": 3, "message": "Analyzing code...", "percentage": 33}</progress-update>
```
These get converted into interactive UI components:
- Conversation bubbles for natural language
- Code analysis cards with issues and suggestions
- Progress bars that update in real-time
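Registering the custom tags above follows the same `CBTagConfig` pattern shown in the integration section. In this sketch the components are placeholder strings purely for illustration; in a real config they would be React component references, and the component names themselves are hypothetical:

```typescript
// Illustrative registration of the custom tags from the example above.
// Component values are placeholder names, not real components.
const customTagConfigs = [
  { tag: "conversation", processor: "TEXT", component: "ConversationBubble" },
  { tag: "code-analysis", processor: "JSON", component: "CodeAnalysisCard" },
  { tag: "progress-update", processor: "JSON", component: "ProgressBar" },
];

// Lookup works the same way regardless of how many tags you register.
const progressConfig = customTagConfigs.find((c) => c.tag === "progress-update");
```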
### Key Benefits
- Unlimited custom tags - Create as many UI components as you need
- Real-time updates - UI components update as responses stream in
- Rich interactions - Build complex, interactive user experiences
- Domain-specific - Tailor the system to your application's needs
- Type-safe - Full TypeScript support for better development
### Learn More
For detailed implementation guides, custom component examples, and advanced usage patterns, see the Text-to-UI documentation page.
## Summary
Effective prompt engineering with Cranberrry involves:
- Structure your responses with appropriate tags
- Use valid JSON in content and action blocks
- Keep responses conversational in conversation blocks
- Provide clear actions when user input is needed
- Test your prompts to ensure proper parsing
- Create custom components to render your structured data
- Leverage text-to-UI conversion for rich user experiences
By following these guidelines, you'll create prompts that work seamlessly with Cranberrry's parsing algorithm, resulting in rich, interactive user experiences that convert text responses into dynamic UI components.