---
name: harnessed-agent-module-implementation
version: 2.0.0
description: Complete production-ready implementation of the Hermes Agent core module with full tool integration, multi-user isolation, SSH remote skills deployment, intelligent memory management, and true workflow orchestration.
trigger_conditions:
  - User requests to implement or extend Hermes Agent functionality
  - Task involves AI agent development with tool calling capabilities
  - Need for a multi-user isolated AI agent system with remote execution
  - Requirement for intelligent memory management with token optimization
---

# Harnessed Agent Module Implementation Guide

## Overview

This skill provides the complete implementation of the **Harnessed Agent** module, which is the core AI agent component of the Hermes ecosystem. It implements a production-ready, multi-user capable AI agent system with:

- **Full tool integration**: All 28+ system tools properly registered with metadata, permissions, and error handling
- **Multi-user isolation**: Complete user separation with RBAC-style permissions
- **SSH remote skills**: Deploy and execute skills on remote servers via SSH
- **Intelligent memory management**: Priority-based memory with token optimization and auto-cleanup
- **True workflow orchestration**: Complex task decomposition and parallel execution
- **Production security**: Input validation, path traversal protection, and secure execution

## Module Structure

Following the [module-development-spec](module-development-spec), the module structure is:

```
harnessed_agent/
├── harnessed_agent/            # Python package
│   ├── __init__.py             # Module initialization with load_harnessed_agent()
│   ├── core.py                 # Core agent implementation (HermesAgent class)
│   ├── tools/                  # Tool integration subsystem
│   │   ├── __init__.py         # Tool imports
│   │   ├── registry.py         # ToolRegistry implementation
│   │   ├── base_tools.py       # Wrapped tool functions
│   │   ├── config_tools.py     # Configuration reading tools
│   │   └── registration.py     # Tool registration logic
│   └── orchestrator.py         # Workflow orchestration engine
├── wwwroot/                    # Frontend resources (.ui, .dspy files)
├── models/                     # Database table definitions
├── json/                       # CRUD operation definitions
├── init/                       # Initialization data
├── skill/                      # This skill documentation
│   ├── SKILL.md                # This document
│   ├── references/             # Reference documents
│   ├── assets/                 # Static assets
│   └── scripts/                # Supporting scripts
├── pyproject.toml              # Python packaging
└── README.md                   # Module documentation
```

## Key Features Implemented

### 1. Full Tool Integration System

The module implements a complete tool integration system with:

- **Tool Registry**: Central registry (`tools.registry.ToolRegistry`) that manages all available tools
- **Metadata Management**: Each tool has comprehensive metadata, including:
  - Description and parameter specifications
  - Required permissions (RBAC-style)
  - Usage examples and security notes
  - Timeout and retry configurations
- **Permission System**: Tools are protected by permission requirements that are checked at runtime
- **Error Handling**: Comprehensive error handling with retries, timeouts, and proper error reporting
- **User Context Isolation**: Tools automatically respect user work directories and permissions

**Available Tool Categories:**

- **File Operations**: `read_file`, `write_file`, `search_files`, `patch`
- **System Operations**: `terminal`, `process`, `execute_code`
- **Browser Automation**: 10 browser tools (`browser_navigate`, `browser_click`, etc.)
- **AI Capabilities**: `vision_analyze`, `text_to_speech`
- **Memory Management**: `memory`, `session_search`
- **Skill Management**: `skill_view`, `skills_list`, `skill_manage`
- **Task Management**: `todo`, `delegate_task`, `clarify`, `cronjob`
- **Configuration**: `get_app_config` (reads the app config to obtain `skills_path`)

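The registry-plus-runtime-permission-check pattern described above can be sketched as follows. This is an illustrative sketch only: the class and method names (`ToolMeta`, `register`, `execute`) are hypothetical stand-ins, not the module's actual API in `tools/registry.py`.

```python
# Sketch of a tool registry with per-tool metadata and runtime
# permission checks. Names here are illustrative, not the real API.
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class ToolMeta:
    description: str
    required_permissions: set = field(default_factory=set)
    timeout_seconds: float = 30.0
    max_retries: int = 1

class ToolRegistry:
    def __init__(self):
        self._tools = {}  # name -> (callable, ToolMeta)

    def register(self, name: str, func: Callable[..., Any], meta: ToolMeta) -> None:
        self._tools[name] = (func, meta)

    def execute(self, name: str, user_permissions: set, **kwargs) -> Any:
        func, meta = self._tools[name]
        # Permissions are checked at call time, not registration time.
        missing = meta.required_permissions - user_permissions
        if missing:
            raise PermissionError(f"missing permissions: {sorted(missing)}")
        return func(**kwargs)

registry = ToolRegistry()
registry.register("read_file",
                  lambda path: f"<contents of {path}>",
                  ToolMeta("Read a file", {"fs.read"}))
print(registry.execute("read_file", {"fs.read"}, path="config.txt"))
```

A caller lacking `fs.read` would get a `PermissionError` instead of file contents, which matches the runtime-check behavior described above.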
### 2. Multi-User Architecture

- **User Isolation**: Each user has separate memory, skills, and workspaces
- **Context-Aware Execution**: All operations automatically use the current user context from ahserver
- **Permission-Based Access**: Granular permissions control what each user can do
- **Secure Authentication**: Integrates with ahserver's authentication system

### 3. Intelligent Memory Management

- **Priority Classification**: Automatic priority assignment (0-100) based on content analysis
- **Token Optimization**: Intelligent context selection within token limits
- **Auto-Cleanup**: Configurable automatic memory cleanup with retention policies
- **User Preferences**: Special handling for user profile information

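Priority-based context selection under a token budget can be sketched as follows. Both the 4-characters-per-token estimate and the greedy highest-priority-first selection are placeholder heuristics for illustration, not the module's actual algorithm.

```python
# Sketch: pick the highest-priority memories that fit a token budget.
# The token estimate and greedy selection are illustrative heuristics.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # rough approximation

def select_context(memories: list, max_tokens: int) -> list:
    budget = max_tokens
    selected = []
    # Highest priority first (0-100 scale, as in HermesConfig).
    for mem in sorted(memories, key=lambda m: m["priority"], reverse=True):
        cost = estimate_tokens(mem["content"])
        if cost <= budget:
            selected.append(mem["content"])
            budget -= cost
    return selected

memories = [
    {"content": "User prefers dark mode", "priority": 90},
    {"content": "Long debugging transcript ..." * 50, "priority": 40},
    {"content": "Database host is db.internal", "priority": 70},
]
print(select_context(memories, max_tokens=30))
# → ['User prefers dark mode', 'Database host is db.internal']
```

The long, low-priority transcript is dropped because it alone would blow the budget, while the two short high-priority facts fit.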
### 4. SSH Remote Skills

- **Remote Deployment**: Deploy skills to remote servers via SSH with key or password auth
- **Remote Execution**: Execute skills on remote servers with proper error handling
- **Configuration Management**: Store and manage multiple remote skill configurations
- **Security**: Secure SSH key handling and connection management

### 5. True Workflow Orchestration

- **Complex Workflows**: Support for sequential, parallel, and conditional workflows
- **Task Dependencies**: Tasks can depend on other tasks with proper ordering
- **Parallel Execution**: Multiple tasks can run concurrently within limits
- **Error Handling**: Comprehensive error handling and retry mechanisms
- **State Persistence**: Workflow state is persisted and can be resumed

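The dependency-ordering part of orchestration can be sketched with a standard topological sort (Kahn's algorithm). This shows ordering only; the orchestrator's real engine also handles parallelism, retries, and state persistence, and its internals are not shown here.

```python
# Sketch of dependency-aware task ordering via Kahn's topological
# sort. A cycle in the dependency graph is reported as an error.
from collections import deque

def execution_order(tasks: dict) -> list:
    """tasks maps task name -> list of task names it depends on."""
    indegree = {name: len(deps) for name, deps in tasks.items()}
    dependents = {name: [] for name in tasks}
    for name, deps in tasks.items():
        for dep in deps:
            dependents[dep].append(name)
    ready = deque(sorted(n for n, d in indegree.items() if d == 0))
    order = []
    while ready:
        task = ready.popleft()
        order.append(task)
        for nxt in dependents[task]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(tasks):
        raise ValueError("cycle detected in task dependencies")
    return order

print(execution_order({
    "download-data": [],
    "analyze-data": ["download-data"],
    "report": ["analyze-data"],
}))
# → ['download-data', 'analyze-data', 'report']
```

Tasks whose dependencies are all satisfied at the same step could be dispatched concurrently, which is where the `max_concurrent_tasks` limit would apply.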
## Configuration

The module uses the `HermesConfig` class with the following configurable parameters:

```python
class HermesConfig:
    work_dir: str = "./hermes_work"          # Working directory for user files
    skills_path: str = "~/.hermes/skills"    # Path to skills directory (from app config)
    max_memory_tokens: int = 2000            # Max tokens for memory context
    default_priority: int = 50               # Default memory priority (0-100)
    high_priority_threshold: int = 70        # Threshold for high priority
    low_priority_threshold: int = 30         # Threshold for low priority
    auto_cleanup_enabled: bool = True        # Enable automatic memory cleanup
    min_retention_days: int = 30             # Minimum days to retain memories
```

The `skills_path` is automatically read from the application configuration file using the `get_app_config()` tool, which searches for `conf/config.json` in standard locations.

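Config discovery of the kind `get_app_config()` performs might look like the sketch below. The candidate locations shown are illustrative assumptions; the tool's actual search paths are defined by the module, not documented here.

```python
# Sketch of config discovery: search candidate locations for
# conf/config.json and fall back to defaults. The locations listed
# are illustrative, not get_app_config()'s documented search order.
import json
from pathlib import Path

def find_app_config(candidates=None) -> dict:
    candidates = candidates or [
        Path.cwd() / "conf" / "config.json",
        Path.home() / ".hermes" / "conf" / "config.json",
    ]
    for path in candidates:
        if path.is_file():
            return json.loads(path.read_text())
    return {}  # no config found; caller uses built-in defaults

config = find_app_config()
skills_path = config.get("skills_path", "~/.hermes/skills")
```

Falling back to the `HermesConfig` default when no config file is found keeps the module usable out of the box.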
## Usage Examples

### Basic Tool Execution

```python
# From a frontend .dspy script
result = await harnessed_execute_tool('read_file', {
    'path': 'config.txt',
    'offset': 1,
    'limit': 100
})
```

### Memory Management

```python
# Save a user preference
await harnessed_manage_memory('add', 'user',
    content='User prefers dark mode')

# Get intelligent context for the current task
context = await harnessed_get_intelligent_memory_context(
    current_task='debug database connection',
    max_tokens=1000
)
```

### Remote Skill Management

```python
# Create a remote skill configuration
await harnessed_manage_remote_skills('create', **{
    'name': 'data-analysis-skill',
    'host': 'worker-server.example.com',
    'username': 'ai-worker',
    'auth_method': 'key',
    'ssh_key_path': '~/.ssh/ai-worker-key',
    'remote_path': '~/.skills'
})

# Execute a remote skill
result = await harnessed_manage_remote_skills('execute',
    skill_id='data-analysis-skill',
    parameters={'dataset': 'sales_q4.csv'}
)
```

### Workflow Orchestration

```python
# Create a workflow
workflow_id = await harnessed_create_workflow(
    'data-processing-pipeline',
    description='Process and analyze sales data',
    workflow_type='parallel',
    max_concurrent_tasks=3
)

# Add tasks
await harnessed_add_task_to_workflow(workflow_id, 'download-data', 'tool',
    tool_name='terminal', parameters={'command': 'wget https://example.com/data.csv'})

await harnessed_add_task_to_workflow(workflow_id, 'analyze-data', 'skill',
    skill_name='data-analysis-skill', depends_on='download-data')

# Execute the workflow
result = await harnessed_execute_workflow(workflow_id)
```

## Security Considerations

- **Input Validation**: All inputs are validated to prevent injection attacks
- **Path Traversal Protection**: File operations are restricted to safe directories
- **Permission Checks**: All operations require appropriate permissions
- **Secure SSH**: SSH keys are handled securely with proper file permissions
- **Sandboxed Execution**: Code execution is limited with timeouts and resource constraints

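Path traversal protection typically works by resolving the requested path and verifying it stays inside the user's work directory. The sketch below shows this common pattern; the function name and exact behavior are illustrative, not the module's actual code.

```python
# Sketch of path traversal protection: resolve the requested path
# and reject anything that escapes the user's work directory.
from pathlib import Path

def resolve_safe_path(work_dir: str, user_path: str) -> Path:
    base = Path(work_dir).resolve()
    target = (base / user_path).resolve()
    # Path.relative_to raises ValueError if target is outside base.
    try:
        target.relative_to(base)
    except ValueError:
        raise PermissionError(f"path escapes work directory: {user_path}")
    return target

# Allowed: stays inside the work directory.
print(resolve_safe_path("/tmp/hermes_work", "notes/config.txt"))
# Rejected: "../../etc/passwd" would raise PermissionError.
```

Resolving before checking is the important step: it defeats `..` segments and symlink tricks that a naive string-prefix check would miss.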
## Integration Requirements

To use this module in an ahserver application:

1. **Install Dependencies**: Ensure all required Python packages are installed
2. **Database Setup**: Run database migrations to create the required tables
3. **Configuration**: Add the module to the application configuration
4. **Frontend Integration**: Use bricks-framework .ui files to create interfaces
5. **Authentication**: Ensure proper user authentication is configured

## Verification Steps

- [x] Module loads correctly via the `load_harnessed_agent()` function
- [x] All 28+ tools are properly registered with metadata
- [x] Tool execution works with proper error handling and retries
- [x] User permissions are properly enforced
- [x] Memory management functions work with priority classification
- [x] Remote skills deployment and execution work via SSH
- [x] Workflow orchestration handles complex task dependencies
- [x] Configuration is properly loaded from the application config
- [x] Security validations prevent common attack vectors
- [x] Frontend integration works with bricks-framework
- [x] Database operations follow sqlor specifications

## Related Skills

- [module-development-spec](module-development-spec): Module development workflow
- [bricks-framework](bricks-framework): Frontend development framework
- [sqlor-database-module](sqlor-database-module): Database integration patterns
- [hermes-agent-enhanced-architecture](hermes-agent-enhanced-architecture): Enhanced architecture documentation