Agentic Chat
Multi-turn AI conversations about your images and documents with streaming support
What is Agentic Chat?
Agentic chat uses specialized AI agents to analyze your images and documents during conversations. The AI can search your files, answer questions, organize them into folders, and generate reports — all through natural language.
Tip: Save Tokens with Standalone Search
If you only need to search images or documents without multi-turn conversation, use standalone search agents directly. They consume significantly fewer tokens by skipping session overhead and conversational context.
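As an illustrative sketch only (the search_images method name and its parameters are assumptions, not part of the API shown on this page; see the standalone search documentation for the real interface), a one-off lookup could skip the chat session entirely:

# Hypothetical sketch: a one-shot search with no chat session or history.
# `client.search_images` and its arguments are assumed names, not the
# documented API; check the standalone search agent docs before using.
from aion import AionVision

async with AionVision(api_key="aion_...") as client:
    results = await client.search_images("damaged equipment")  # hypothetical
    for image in results:
        print(image)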
Agent Capabilities
- Image Search, Document Search, Link Search
- Visual Analysis, Document Analysis, Link Analysis
- Analytics, Cross-Reference
- Folder Organization, Report Synthesis
Quick Start
Start a Chat
from aion import AionVision
async with AionVision(api_key="aion_...") as client:
    async with client.chat_session() as session:
        response = await session.send("What objects appear in my images?")
        print(response.content)

        # Continue conversation
        response2 = await session.send("Which ones are damaged?")
        print(response2.content)
Context Modes
All Images Mode
AI can search and access all your images.
async with client.chat_session(use_all_images=True) as session:
    response = await session.send("Find damaged equipment")
Selected Images Mode
AI only sees specific images you choose.
async with client.chat_session(
    image_ids=["img_001", "img_002"],
    use_all_images=False
) as session:
    response = await session.send("Compare these two images")
Automatic Document Access
Documents are automatically accessible in chat via RAG — the AI searches relevant documents based on your query without needing to explicitly attach them.
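As a minimal sketch reusing the chat_session API from the Quick Start (the question text is only illustrative), a document-grounded question needs no attachment step:

# Relevant documents are retrieved automatically via RAG based on the query.
async with client.chat_session() as session:
    response = await session.send(
        "Does the inspection report mention any corrosion issues?"
    )
    print(response.content)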
Streaming Responses
Stream Tokens in Real-Time
from aion import ChatTokenType
async with client.chat_session() as session:
    async for event in session.send_stream("Analyze my images"):
        if event.type == ChatTokenType.TOKEN:
            print(event.content, end="", flush=True)
        elif event.type == ChatTokenType.STATUS:
            print(f"\nStatus: {event.data.get('message', '')}")
        elif event.type == ChatTokenType.IMAGE_RESULTS:
            print(f"\nFound {len(event.data.get('images', []))} images")
        elif event.type == ChatTokenType.COMPLETE:
            print(f"\n\nDone ({event.data.get('processing_time_ms')}ms)")
Streaming Event Types
- TOKEN: Individual response tokens with accumulated content
- STATUS: Processing status updates (e.g., "Searching documents...")
- IMAGE_RESULTS: Images found during response generation
- COMPLETE: Final response with metadata (processing time, token count)
- ERROR: Error occurred during processing
- PLAN_PENDING_APPROVAL: Execution plan requires user approval
- THINKING / THINKING_STEP: AI reasoning content
- TOOL_INVOCATION / TOOL_RESULT: Agent tool activity
- CONNECTION / PING / CLOSE: Connection lifecycle events
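The example above handles only TOKEN, STATUS, IMAGE_RESULTS, and COMPLETE. As a minimal sketch covering a few of the remaining types (the payload shapes in event.data are not documented here, so the sketch prints them raw; inspect what your session actually emits):

# Sketch only: event.data shapes for these event types are assumptions.
async for event in session.send_stream("Analyze my images"):
    if event.type == ChatTokenType.TOKEN:
        print(event.content, end="", flush=True)
    elif event.type == ChatTokenType.THINKING:
        print(f"\n[thinking] {event.data}")
    elif event.type == ChatTokenType.TOOL_INVOCATION:
        print(f"\n[tool call] {event.data}")
    elif event.type == ChatTokenType.ERROR:
        print(f"\n[error] {event.data}")
        break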
Plan Approval
For complex requests (e.g., "organize my photos into folders"), the AI may ask for approval before executing.
from aion import ChatTokenType
async for event in session.send_stream("Organize photos into folders"):
    if event.type == ChatTokenType.PLAN_PENDING_APPROVAL:
        print(f"Plan: {event.data.get('description', '')}")
        await session.approve_plan(event.data["plan_id"])  # or cancel_plan()
Session Management
SDK operations
# List sessions
sessions = await client.chats.list_sessions()

# Get session details
details = await client.chats.get_session(session_id=session_id)

# Close session
await client.chats.close_session(session_id=session_id)

# Export conversation
content = await client.chats.export_session(session_id, format="markdown")
with open("conversation.md", "wb") as f:
    f.write(content)
Export Conversations
# Export as markdown
content = await client.chats.export_session(session_id, format="markdown")
with open("conversation.md", "wb") as f:
    f.write(content)

# Export as JSON with metadata
content = await client.chats.export_session(
    session_id,
    format="json",
    include_metadata=True
)
with open("conversation.json", "wb") as f:
    f.write(content)
ChatResponse Type
@dataclass(frozen=True)
class ChatResponse:
    message_id: str                # Unique message identifier
    session_id: str                # Session this message belongs to
    content: str                   # AI response text
    token_count: int = 0           # Tokens used in response
    provider: str = ""             # AI provider used
    model: str = ""                # Model used
    processing_time_ms: int = 0    # Time to generate response
    images: Optional[list[ImageReference]] = None
    metadata: Optional[dict[str, Any]] = None
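As a small usage sketch (reusing session.send from the Quick Start; the query string is illustrative), the metadata fields can be read directly off the response:

response = await session.send("Summarize the damaged equipment photos")
print(response.content)
print(f"{response.model} via {response.provider}: "
      f"{response.token_count} tokens in {response.processing_time_ms} ms")
if response.images:
    print(f"{len(response.images)} image reference(s) returned")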
Sync Client
SDK only — same API without async/await
from aion import SyncAionVision
with SyncAionVision(api_key="aion_...") as client:
    with client.chat_session() as session:
        response = session.send("What objects appear in my images?")
        print(response.content)

        response2 = session.send("Which ones are damaged?")
        print(response2.content)
Streaming Not Available
The sync client does not support send_stream(). Use send() for non-streaming responses, or switch to the async client for streaming.
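If the rest of your code is synchronous but you still want streamed output, one option is to run the async client in a short-lived event loop. A minimal sketch, assuming only the async API shown earlier:

import asyncio

from aion import AionVision, ChatTokenType

def stream_from_sync(prompt: str) -> None:
    # Bridge: run the async streaming client inside a one-off event loop.
    async def _run() -> None:
        async with AionVision(api_key="aion_...") as client:
            async with client.chat_session() as session:
                async for event in session.send_stream(prompt):
                    if event.type == ChatTokenType.TOKEN:
                        print(event.content, end="", flush=True)

    asyncio.run(_run())

stream_from_sync("Analyze my images")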
Best Practices
Be specific in your queries
Instead of "Tell me about my photos", ask "What safety equipment is visible in the construction site photos?"
Use streaming for real-time UIs
The streaming endpoint provides a better user experience by showing tokens as they're generated.
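For example, a UI handler can forward each token to a rendering callback as it arrives. A minimal sketch reusing send_stream from above; on_token stands in for whatever your UI layer uses to append text:

from typing import Callable

from aion import ChatTokenType

async def stream_to_ui(session, prompt: str, on_token: Callable[[str], None]) -> None:
    # Push each token to the UI as soon as it arrives instead of waiting
    # for the complete response.
    async for event in session.send_stream(prompt):
        if event.type == ChatTokenType.TOKEN:
            on_token(event.content)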
Add relevant documents for context
When asking about compliance or standards, documents provide grounded responses based on your specific requirements.