Chapter 1: ChatModel and Message (Console)
Introduction to the Eino Framework
What is Eino?
Eino is an AI application development framework (Agent Development Kit) implemented in Go, designed to help developers quickly build scalable, maintainable AI applications.
What problems does Eino solve?
- Model abstraction: Unifies interfaces across different LLM providers (OpenAI, Ark, Claude, etc.) so switching models doesn't require modifying business code
- Capability composition: Implements replaceable, composable capability units (conversation, tools, retrieval, etc.) through Component interfaces
- Orchestration framework: Provides orchestration abstractions like Agent, Graph, and Chain, supporting complex multi-step AI workflows
- Runtime support: Built-in capabilities for streaming output, interrupt and resume, state management, Callback observability, and more
Eino's main repositories:
- eino (this repository): Core library, defines interfaces, orchestration abstractions, and ADK
- eino-ext: Extension library, provides concrete implementations of various Components (OpenAI, Ark, Milvus, etc.)
- eino-examples: Example code repository, includes this quickstart series
ChatWithEino: An Intelligent Assistant for Chatting with Eino Documentation
What is ChatWithEino?
ChatWithEino is an intelligent assistant built on the Eino framework that helps developers learn the Eino framework and write Eino code. By accessing the Eino repository's source code, comments, and examples, it provides users with the most accurate and up-to-date technical support.
Core capabilities:
- Conversational interaction: Understands user questions about Eino and provides clear answers
- Code access: Directly reads Eino source code, comments, and examples to answer questions based on real implementations
- Persistent sessions: Supports multi-turn conversations, remembers context, and can resume sessions across processes
- Tool calls: Can perform operations like file reading and code searching
Technical architecture:
- ChatModel: Communicates with large language models (OpenAI, Ark, Claude, etc.)
- Tool: Capability extensions like filesystem access and code search
- Memory: Persistent storage for conversation history
- Agent: Unified execution framework that coordinates components working together
Quickstart Documentation Series: Building ChatWithEino from Scratch
This documentation series takes you step by step, starting from the most basic ChatModel call, gradually building a fully functional ChatWithEino Agent.
Learning path:
| Chapter | Topic | Core Content | Capability Gained |
|---|---|---|---|
| Chapter 1 | ChatModel and Message | Understand Component abstraction, implement single-turn conversation | Basic conversation |
| Chapter 2 | Agent and Runner | Introduce execution abstraction, implement multi-turn conversation | Session management |
| Chapter 3 | Memory and Session | Persist conversation history, support session recovery | Persistence |
| Chapter 4 | Tool and Filesystem | Add file access capability, read source code | Tool calling |
| Chapter 5 | Middleware | Middleware mechanism for unified cross-cutting concerns | Enhanced extensibility |
| Chapter 6 | Callback | Callback mechanism for monitoring Agent execution | Observability |
| Chapter 7 | Interrupt and Resume | Interrupt and resume, support for long-running tasks | Enhanced reliability |
| Chapter 8 | Graph and Tool | Use Graph to orchestrate complex workflows | Complex orchestration |
| Chapter 9 | Skill | Use Skill middleware to load and reuse skill documents | Knowledge reuse |
| Final Chapter | A2UI | Agent-to-UI integration solution | Production-grade application |
Why is it designed this way?
Each chapter adds one core capability on top of the previous one, allowing you to:
- Understand each component's role: Instead of showing all features at once, they are introduced gradually
- See the architecture evolution: From simple to complex, understand why each abstraction is needed
- Master practical development skills: Each chapter has runnable code for hands-on practice
The goal of this chapter is to understand Eino's Component abstraction, call a ChatModel with minimal code (with streaming output support), and master the basic usage of schema.Message.
Code Location
- Entry code: cmd/ch01/main.go
Why We Need the Component Interface
Eino defines a set of Component interfaces (ChatModel, Tool, Retriever, Loader, etc.), each describing a replaceable capability:
```go
type BaseChatModel interface {
	Generate(ctx context.Context, input []*schema.Message, opts ...Option) (*schema.Message, error)
	Stream(ctx context.Context, input []*schema.Message, opts ...Option) (
		*schema.StreamReader[*schema.Message], error)
}
```
Benefits of using interfaces:
- Swappable implementations: `eino-ext` provides multiple implementations including OpenAI, Ark, Claude, Ollama, etc. Business code only depends on the interface, so switching models only requires changing the construction logic.
- Composable orchestration: Orchestration layers like Agent, Graph, and Chain only depend on Component interfaces, not specific implementations. You can swap OpenAI for Ark without changing orchestration code.
- Mockable for testing: Interfaces naturally support mocking, so unit tests don't need real model calls.
This chapter only covers ChatModel. Subsequent chapters will gradually introduce Tool, Retriever, and other Components.
schema.Message: The Basic Unit of Conversation
Message is the fundamental data structure for conversations in Eino:
```go
type Message struct {
	Role      RoleType   // system / user / assistant / tool
	Content   string     // Text content
	ToolCalls []ToolCall // Only assistant messages may have this
	// ...
}
```
Common constructors:
```go
schema.SystemMessage("You are a helpful assistant.")
schema.UserMessage("What is the weather today?")
schema.AssistantMessage("I don't know.", nil) // Second parameter is ToolCalls
schema.ToolMessage("tool result", "call_id")
```
Role semantics:
- system: System instructions, usually placed at the beginning of the messages
- user: User input
- assistant: Model reply
- tool: Tool call result (covered in later chapters)
Prerequisites
Get the Code
```shell
git clone https://github.com/cloudwego/eino-examples.git
cd eino-examples/quickstart/chatwitheino
```
- Go version: Go 1.21+ (see `go.mod`)
- A callable ChatModel (defaults to OpenAI; Ark is also supported)
Option A: OpenAI (Default)
```shell
export OPENAI_API_KEY="..."
export OPENAI_MODEL="gpt-4.1-mini"  # released by OpenAI in 2025; gpt-4o, gpt-4o-mini, etc. also work
# Optional:
# OPENAI_BASE_URL (proxy or compatible service)
# OPENAI_BY_AZURE=true (use Azure OpenAI)
```
Option B: Ark
```shell
export MODEL_TYPE="ark"
export ARK_API_KEY="..."
export ARK_MODEL="..."
# Optional: ARK_BASE_URL
```
Running
In the eino-examples/quickstart/chatwitheino directory, run:
```shell
go run ./cmd/ch01 -- "Explain in one sentence what problem Eino's Component design solves"
```
Output example (streamed progressively):
[assistant] Eino's Component design solves the problem of...
What the Entry Code Does
In execution order:
- Create ChatModel: Selects the OpenAI or Ark implementation based on the `MODEL_TYPE` environment variable
- Construct input messages: `SystemMessage(instruction)` + `UserMessage(query)`
- Call Stream: All ChatModel implementations must support `Stream()`, which returns a `StreamReader[*Message]`
- Print results: Iterates through the `StreamReader` to print the assistant reply frame by frame
Key code snippet (Note: this is a simplified code snippet that cannot be run directly. For the complete code, please refer to cmd/ch01/main.go):
```go
// Construct input
messages := []*schema.Message{
	schema.SystemMessage(instruction),
	schema.UserMessage(query),
}

// Call Stream (all ChatModels must implement this)
stream, err := cm.Stream(ctx, messages)
if err != nil {
	log.Fatal(err)
}
defer stream.Close()

for {
	chunk, err := stream.Recv()
	if errors.Is(err, io.EOF) {
		break
	}
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(chunk.Content)
}
```
Chapter Summary
- Component interface: Defines replaceable, composable, and testable capability boundaries
- Message: The basic unit of conversation data, with semantics distinguished by role
- ChatModel: The most fundamental Component, providing two core methods: `Generate` and `Stream`
- Implementation selection: Switch between OpenAI, Ark, and other implementations via environment variables or configuration, with no changes needed in business code