A lightweight AI agent framework featuring tool calling, streaming responses, hooks and skills. Designed for simplicity and flexibility.
The framework operates through an iterative (ReAct-style) problem-solving loop: the model repeatedly calls the available tools, and when no further tool calls are needed, it returns the final response.
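Conceptually, that loop can be sketched as follows. This is an illustrative stand-in, not taf's actual implementation; the `model` callable and the message shapes are simplified assumptions:

```python
def react_loop(model, tools, user_prompt, max_iterations=10):
    """Minimal ReAct-style tool-calling loop (illustrative sketch).

    `model` inspects the conversation and returns either
    ("tool", name, args) to request a tool call, or ("final", text).
    """
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_iterations):
        step = model(messages)
        if step[0] == "final":
            return step[1]  # no more tools needed: final response
        _, name, args = step
        result = tools[name](**args)  # execute the requested tool
        # Feed the tool result back so the model can decide the next step
        messages.append({"role": "tool", "name": name, "content": result})
    raise RuntimeError("max iterations reached without a final answer")
```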
- Dynamic system prompt composition
- Automatic tool schema generation via decorators
- Minimal dependencies (OpenAI library only)
- Dependency injection through tools and hooks
- Conversation history management
- Streaming response support
- Skills
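Decorator-based schema generation typically works by inspecting a function's signature and docstring. The sketch below illustrates the general technique (it is not taf's actual code; the `PY_TO_JSON` mapping and `tool_schema` helper are assumptions):

```python
import inspect

# Map Python annotations to JSON Schema types (simplified assumption)
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def tool_schema(func):
    """Build an OpenAI-style tool schema from a function's signature."""
    sig = inspect.signature(func)
    properties, required = {}, []
    for name, param in sig.parameters.items():
        properties[name] = {"type": PY_TO_JSON.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # parameters without defaults are required
    return {
        "type": "function",
        "function": {
            "name": func.__name__,
            "description": (func.__doc__ or "").strip(),
            "parameters": {
                "type": "object",
                "properties": properties,
                "required": required,
            },
        },
    }
```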
```python
import asyncio
from datetime import datetime

from taf.agent import Agent

SYSTEM_PROMPT = """
You are a helpful AI assistant.
"""

async def read_file(file_path: str) -> str:
    """
    Reads and returns the contents of a file at the given file path.
    """
    try:
        with open(file_path, "r") as file:
            return file.read()
    except Exception as exc:
        return f"Error reading file: {exc}"

agent = Agent(
    model="gemini-2.5-flash",
    system_prompt=SYSTEM_PROMPT,
    api_key="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
    tools=[read_file],  # Pre-register read_file as a tool
)

@agent.tool()
async def write_file(file_path: str, content: str) -> str:
    """
    Writes content to a file at the specified file path.
    """
    try:
        with open(file_path, "w") as file:
            file.write(content)
        return f"Successfully wrote {len(content)} characters to {file_path}"
    except Exception as exc:
        return f"Error writing to file: {exc}"

@agent.system_prompt()
async def get_current_datetime():
    """
    Injects the current datetime into the system prompt.
    """
    now = datetime.now()
    return f"Current datetime: {now.isoformat()}"

async def main():
    prompt = "Write a Python function that outputs 'hello world' inside a file named script.py"
    print("Agent response:")
    async for chunk in agent.run_stream(prompt):
        if chunk["type"] in ("reasoning", "response"):
            print(chunk["content"], end="", flush=True)
    print()

if __name__ == "__main__":
    asyncio.run(main())
```

```python
import asyncio
from dataclasses import dataclass

from taf.agent import Agent

@dataclass
class UserData:
    id: int
    name: str
    language: str
    session_id: str

agent = Agent(
    model="gpt-4",
    system_prompt="You are a helpful AI assistant that personalizes responses based on user context.",
)

@agent.system_prompt()
async def get_current_user_information(ctx: UserData) -> str:
    """
    Injects current user information into the system prompt.
    """
    return f"""
Current User Context:
- User ID: {ctx.id}
- Name: {ctx.name}
- Preferred Language: {ctx.language}
- Session ID: {ctx.session_id}

Please tailor your responses to this user's context and speak in their preferred language when appropriate.
""".strip()

@agent.tool()
async def get_user_orders(ctx: UserData, status_filter: str = "all") -> str:
    """
    Retrieves user's orders filtered by status.
    """
    # In a real application, this would query a database
    return (
        f"Retrieved {status_filter} orders for user {ctx.name} "
        f"(ID: {ctx.id}): [Order1, Order2, Order3]"
    )

@agent.tool()
async def get_user_preferences(ctx: UserData) -> str:
    """
    Gets the current user's preferences and settings.
    """
    return f"""
User Preferences for {ctx.name}:
- Language: {ctx.language}
- Theme: Dark mode
- Notifications: Enabled
- Timezone: America/Sao_Paulo
""".strip()

async def main():
    user_context = UserData(
        id=1,
        name="Lucas",
        language="Brazilian Portuguese",
        session_id="sess_abc123xyz",
    )
    prompt = "What do you know about me and can you show my recent orders?"
    print("Agent: ", end="")
    async for chunk in agent.run_stream(prompt, dependency=user_context):
        if chunk["type"] in ("reasoning", "response"):
            print(chunk["content"], end="", flush=True)
    print("\n")

if __name__ == "__main__":
    asyncio.run(main())
```

Example using Skills (new)
Skills are installable units of knowledge that package task-specific instructions (`SKILL.md`) with optional supporting resources. They are loaded on demand to guide agent behavior while remaining token-efficient.
Directory structure:

```
.skills/
├─ code-review/
│  └─ SKILL.md
└─ document-processing/
   ├─ references/
   │  ├─ CSV.md
   │  ├─ DOCX.md
   │  ├─ PDF.md
   │  └─ XLSX.md
   ├─ scripts/
   │  ├─ convert_document_to_image.sh
   │  └─ remove_images.py
   └─ SKILL.md
```
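The `list_skills` hook in the code example reads each skill's `name` and `description`, which suggests `SKILL.md` carries at least those two fields. A minimal hypothetical `SKILL.md` might look like this (the frontmatter field names and body layout are assumptions, not taf's documented format):

```markdown
---
name: code-review
description: Guidelines for reviewing Python code changes.
---

# Code Review

When asked to review code:
1. Read the changed files with the available file tools.
2. Check for bugs, unclear naming, and missing error handling.
3. Summarize findings as a prioritized list.
```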
Code example:

```python
from taf import Agent, Skill
import asyncio

skills = Skill.from_folder(".skills")
print(f"{len(skills)} Skills loaded!")

agent = Agent(
    name="Skilled Agent",
    system_prompt=SYSTEM_PROMPT,
    model="",
    skills=skills,
)

@agent.system_prompt()
def list_skills():
    content = "\n\n".join(
        f"Name: {skill.name}\nDescription: {skill.description}" for skill in skills
    )
    return f"""
<available_skills>
{content}
</available_skills>
""".strip()

async def main():
    while True:
        prompt = input(">> ")
        async for chunk in agent.run_stream(prompt):
            if chunk["type"] in ("reasoning", "response"):
                print(chunk["content"], end="", flush=True)
        print()

if __name__ == "__main__":
    asyncio.run(main())
```

- When skills are provided, a tool named `skill` is automatically added to the agent's toolset.
- Skills must be declared in the system prompt so the model can discover them. If a skill is not listed, the agent will not be aware of its existence.
- Skills are optional and replaceable. You can implement equivalent behaviour using tools, prompts, or dynamic system prompt hooks without using the `Skill` class.
- For best results, include clear instructions in the system prompt that explain when and how the model should load and use skills.
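As a sketch of that tool-based alternative, a plain file-reading tool can replicate on-demand skill loading. This is a hypothetical helper (the `load_skill` name and hardcoded `.skills/<name>/SKILL.md` layout are assumptions based on the directory structure above):

```python
from pathlib import Path

SKILLS_DIR = Path(".skills")  # same layout as the directory structure above

# In an Agent setup this function would be registered with @agent.tool()
async def load_skill(skill_name: str) -> str:
    """
    Loads a skill's SKILL.md so its instructions can be injected
    into the conversation on demand.
    """
    skill_file = SKILLS_DIR / skill_name / "SKILL.md"
    if not skill_file.is_file():
        return f"Unknown skill: {skill_name}"
    return skill_file.read_text()
```

Paired with a system prompt that lists the available skill names, this gives the model the same discover-then-load behaviour without the `Skill` class.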
This project is primarily a proof-of-concept for building AI agents from scratch using Python and the OpenAI-compatible API. While functional, it may not be suitable for production environments without modifications.
Key limitations include:
- Basic error handling
- Limited memory management capabilities
- Minimal logging infrastructure
- Static tool registration
You are encouraged to adapt and extend the code according to your needs. The framework has been tested primarily with Gemini Flash and OpenRouter models; other OpenAI-compatible providers may require code adjustments.