| name | version | description | trigger_conditions |
|---|---|---|---|
| harnessed-agent-module-implementation | 2.0.0 | Complete production-ready implementation of the Hermes Agent core module with full tool integration, multi-user isolation, SSH remote skills deployment, intelligent memory management, and true workflow orchestration. | |
# Harnessed Agent Module Implementation Guide

## Overview
This skill provides the complete implementation of the Harnessed Agent module, which is the core AI agent component of the Hermes ecosystem. It implements a production-ready, multi-user capable AI agent system with:
- Full tool integration: All 28+ system tools properly registered with metadata, permissions, and error handling
- Multi-user isolation: Complete user separation with RBAC-style permissions
- SSH remote skills: Deploy and execute skills on remote servers via SSH
- Intelligent memory management: Priority-based memory with token optimization and auto-cleanup
- True workflow orchestration: Complex task decomposition and parallel execution
- Production security: Input validation, path traversal protection, and secure execution
## Module Structure
Following the module-development-spec, the module structure is:
```
harnessed_agent/
├── harnessed_agent/          # Python package
│   ├── __init__.py           # Module initialization with load_harnessed_agent()
│   ├── core.py               # Core agent implementation (HermesAgent class)
│   ├── tools/                # Tool integration subsystem
│   │   ├── __init__.py       # Tool imports
│   │   ├── registry.py       # ToolRegistry implementation
│   │   ├── base_tools.py     # Wrapped tool functions
│   │   ├── config_tools.py   # Configuration reading tools
│   │   └── registration.py   # Tool registration logic
│   └── orchestrator.py       # Workflow orchestration engine
├── wwwroot/                  # Frontend resources (.ui, .dspy files)
├── models/                   # Database table definitions
├── json/                     # CRUD operation definitions
├── init/                     # Initialization data
├── skill/                    # This skill documentation
│   ├── SKILL.md              # This document
│   ├── references/           # Reference documents
│   ├── assets/               # Static assets
│   └── scripts/              # Supporting scripts
├── pyproject.toml            # Python packaging
└── README.md                 # Module documentation
```
## Key Features Implemented

### 1. Full Tool Integration System
The module implements a complete tool integration system with:
- Tool Registry: Central registry (`tools.registry.ToolRegistry`) that manages all available tools
- Metadata Management: Each tool has comprehensive metadata including:
  - Description and parameter specifications
  - Required permissions (RBAC-style)
  - Usage examples and security notes
  - Timeout and retry configurations
- Permission System: Tools are protected by permission requirements that are checked at runtime
- Error Handling: Comprehensive error handling with retries, timeouts, and proper error reporting
- User Context Isolation: Tools automatically respect user work directories and permissions
Available Tool Categories:
- File Operations: `read_file`, `write_file`, `search_files`, `patch`
- System Operations: `terminal`, `process`, `execute_code`
- Browser Automation: 10 browser tools (`browser_navigate`, `browser_click`, etc.)
- AI Capabilities: `vision_analyze`, `text_to_speech`
- Memory Management: `memory`, `session_search`
- Skill Management: `skill_view`, `skills_list`, `skill_manage`
- Task Management: `todo`, `delegate_task`, `clarify`, `cronjob`
- Configuration: `get_app_config` (reads the app config to get `skills_path`)
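To make the registry's responsibilities concrete, here is a minimal sketch of a registry that combines metadata, runtime permission checks, per-attempt timeouts, and retries. The class and field names are illustrative assumptions, not the module's actual `tools.registry.ToolRegistry` API.

```python
import asyncio
from dataclasses import dataclass, field
from typing import Any, Awaitable, Callable

@dataclass
class ToolMetadata:
    description: str
    permissions: set = field(default_factory=set)  # RBAC-style permission names
    timeout: float = 30.0                          # seconds per attempt
    retries: int = 1                               # extra attempts after a timeout

class ToolRegistry:
    """Central registry mapping tool names to (callable, metadata) pairs."""

    def __init__(self) -> None:
        self._tools: dict[str, tuple[Callable[..., Awaitable[Any]], ToolMetadata]] = {}

    def register(self, name: str, func: Callable[..., Awaitable[Any]],
                 meta: ToolMetadata) -> None:
        self._tools[name] = (func, meta)

    async def execute(self, name: str, user_permissions: set, **kwargs: Any) -> Any:
        func, meta = self._tools[name]
        # Permission check happens at call time, against the caller's grants.
        missing = meta.permissions - user_permissions
        if missing:
            raise PermissionError(f"tool {name!r} requires: {sorted(missing)}")
        last_exc = None
        for _ in range(meta.retries + 1):
            try:
                return await asyncio.wait_for(func(**kwargs), meta.timeout)
            except asyncio.TimeoutError as exc:  # retry only on timeout
                last_exc = exc
        raise last_exc
```

A caller would register each tool once at startup and route every invocation through `execute()`, so permission and timeout policy lives in one place.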
### 2. Multi-User Architecture
- User Isolation: Each user has separate memory, skills, and workspaces
- Context-Aware Execution: All operations automatically use current user context from ahserver
- Permission-Based Access: Granular permissions control what each user can do
- Secure Authentication: Integrates with ahserver's authentication system
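A minimal sketch of how per-user isolation could be structured (the class and field names here are hypothetical): each user gets an isolated work directory under the configured root, and operations call a permission check before running.

```python
from dataclasses import dataclass, field
from pathlib import Path

@dataclass
class UserContext:
    """Per-user execution context; shape is illustrative, not the real schema."""
    user_id: str
    permissions: set = field(default_factory=set)
    root: Path = Path("./hermes_work")  # matches the HermesConfig work_dir default

    @property
    def work_dir(self) -> Path:
        # Every user operates in an isolated subdirectory of the shared root.
        return self.root / self.user_id

    def require(self, permission: str) -> None:
        # Granular, RBAC-style gate called before any sensitive operation.
        if permission not in self.permissions:
            raise PermissionError(f"user {self.user_id!r} lacks {permission!r}")
```

In the real module this context would be populated from ahserver's authenticated session rather than constructed by hand.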
### 3. Intelligent Memory Management
- Priority Classification: Automatic priority assignment (0-100) based on content analysis
- Token Optimization: Intelligent context selection within token limits
- Auto-Cleanup: Configurable automatic memory cleanup with retention policies
- User Preferences: Special handling for user profile information
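The token-budget selection described above can be sketched as a greedy pass over memories ordered by priority. The record shape and the chars-per-token estimate are assumptions for illustration only.

```python
def select_memory_context(memories, max_tokens, estimate_tokens=None):
    """Greedy, priority-first selection of memories within a token budget.

    `memories` is assumed to be a list of dicts with `priority` (0-100)
    and `content` keys; tokens are roughly estimated as len(content) / 4.
    """
    if estimate_tokens is None:
        estimate_tokens = lambda m: max(1, len(m["content"]) // 4)
    chosen, used = [], 0
    # Highest-priority memories get first claim on the budget.
    for mem in sorted(memories, key=lambda m: m["priority"], reverse=True):
        cost = estimate_tokens(mem)
        if used + cost <= max_tokens:
            chosen.append(mem)
            used += cost
    return chosen
```

This mirrors the intent of `max_memory_tokens` in the configuration: low-priority memories simply fall off once the budget is spent.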
### 4. SSH Remote Skills
- Remote Deployment: Deploy skills to remote servers via SSH with key or password auth
- Remote Execution: Execute skills on remote servers with proper error handling
- Configuration Management: Store and manage multiple remote skill configurations
- Security: Secure SSH key handling and connection management
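A deployment along these lines could be sketched with the third-party `paramiko` library (an assumption; the source does not name the SSH client it uses). The config record shape matches the remote-skill fields shown in the usage examples below.

```python
def connect_kwargs(cfg: dict) -> dict:
    """Translate a remote-skill config record into SSH connection kwargs.

    Supports the two auth methods described above: "key" and "password".
    """
    kw = {"hostname": cfg["host"], "username": cfg["username"]}
    if cfg.get("auth_method") == "key":
        kw["key_filename"] = cfg["ssh_key_path"]
    else:
        kw["password"] = cfg["password"]
    return kw

def deploy_skill(cfg: dict, local_path: str, remote_path: str) -> None:
    """Copy a packaged skill to the remote host over SFTP (illustrative)."""
    import paramiko  # third-party dependency, imported lazily

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(**connect_kwargs(cfg))
    sftp = client.open_sftp()
    try:
        sftp.put(local_path, remote_path)
    finally:
        sftp.close()
        client.close()
```

A production version would also verify host keys against a known-hosts file instead of auto-adding them, in line with the security notes below.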
### 5. True Workflow Orchestration
- Complex Workflows: Support for sequential, parallel, and conditional workflows
- Task Dependencies: Tasks can depend on other tasks with proper ordering
- Parallel Execution: Multiple tasks can run concurrently within limits
- Error Handling: Comprehensive error handling and retry mechanisms
- State Persistence: Workflow state is persisted and can be resumed
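Dependency ordering plus bounded parallelism can be sketched with `asyncio` events and a semaphore; the task-table shape below is an illustration, not the module's real workflow schema.

```python
import asyncio

async def run_workflow(tasks: dict, max_concurrent: int = 3) -> dict:
    """Run tasks respecting dependencies, with bounded concurrency.

    `tasks` maps a task name to (async_callable, [dependency names]).
    """
    sem = asyncio.Semaphore(max_concurrent)
    finished = {name: asyncio.Event() for name in tasks}
    results: dict = {}

    async def run_one(name: str) -> None:
        fn, deps = tasks[name]
        for dep in deps:          # block until every dependency has finished
            await finished[dep].wait()
        async with sem:           # cap the number of concurrently running tasks
            results[name] = await fn()
        finished[name].set()

    await asyncio.gather(*(run_one(n) for n in tasks))
    return results
```

Persistence and retries, mentioned above, would layer on top of this core: checkpoint `results` after each task, and wrap `fn()` in a retry loop.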
## Configuration

The module uses the `HermesConfig` class with the following configurable parameters:
```python
class HermesConfig:
    work_dir: str = "./hermes_work"        # Working directory for user files
    skills_path: str = "~/.hermes/skills"  # Path to skills directory (from app config)
    max_memory_tokens: int = 2000          # Max tokens for memory context
    default_priority: int = 50             # Default memory priority (0-100)
    high_priority_threshold: int = 70      # Threshold for high priority
    low_priority_threshold: int = 30       # Threshold for low priority
    auto_cleanup_enabled: bool = True      # Enable automatic memory cleanup
    min_retention_days: int = 30           # Minimum days to retain memories
```
The `skills_path` is automatically read from the application configuration file using the `get_app_config()` tool, which searches for `conf/config.json` in standard locations.
## Usage Examples

### Basic Tool Execution
```python
# From frontend .dspy script
result = await harnessed_execute_tool('read_file', {
    'path': 'config.txt',
    'offset': 1,
    'limit': 100
})
```
### Memory Management

```python
# Save user preference
await harnessed_manage_memory('add', 'user',
    content='User prefers dark mode')

# Get intelligent context for current task
context = await harnessed_get_intelligent_memory_context(
    current_task='debug database connection',
    max_tokens=1000
)
```
### Remote Skill Management

```python
# Create remote skill configuration
await harnessed_manage_remote_skills('create', **{
    'name': 'data-analysis-skill',
    'host': 'worker-server.example.com',
    'username': 'ai-worker',
    'auth_method': 'key',
    'ssh_key_path': '~/.ssh/ai-worker-key',
    'remote_path': '~/.skills'
})

# Execute remote skill
result = await harnessed_manage_remote_skills('execute',
    skill_id='data-analysis-skill',
    parameters={'dataset': 'sales_q4.csv'}
)
```
### Workflow Orchestration

```python
# Create workflow
workflow_id = await harnessed_create_workflow(
    'data-processing-pipeline',
    description='Process and analyze sales data',
    workflow_type='parallel',
    max_concurrent_tasks=3
)

# Add tasks
await harnessed_add_task_to_workflow(workflow_id, 'download-data', 'tool',
    tool_name='terminal', parameters={'command': 'wget https://example.com/data.csv'})
await harnessed_add_task_to_workflow(workflow_id, 'analyze-data', 'skill',
    skill_name='data-analysis-skill', depends_on='download-data')

# Execute workflow
result = await harnessed_execute_workflow(workflow_id)
```
## Security Considerations
- Input Validation: All inputs are validated to prevent injection attacks
- Path Traversal Protection: File operations are restricted to safe directories
- Permission Checks: All operations require appropriate permissions
- Secure SSH: SSH keys are handled securely with proper file permissions
- Sandboxed Execution: Code execution is limited with timeouts and resource constraints
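The path-traversal protection above can be sketched as resolving every user-supplied path against the work directory and rejecting anything that escapes it. This is a minimal illustration, assuming Python 3.9+ for `Path.is_relative_to`.

```python
from pathlib import Path

def resolve_in_workdir(work_dir: str, user_path: str) -> Path:
    """Resolve `user_path` inside `work_dir`, rejecting traversal attempts."""
    base = Path(work_dir).resolve()
    # resolve() collapses "..", symlinks, and absolute-path tricks before the check
    target = (base / user_path).resolve()
    if not target.is_relative_to(base):
        raise ValueError(f"path escapes work directory: {user_path!r}")
    return target
```

Resolving before comparing matters: a naive string-prefix check can be defeated by `..` segments or symlinks, while the resolved path cannot.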
## Integration Requirements
To use this module in an ahserver application:
- Install Dependencies: Ensure all required Python packages are installed
- Database Setup: Run database migrations to create required tables
- Configuration: Add module to application configuration
- Frontend Integration: Use bricks-framework .ui files to create interfaces
- Authentication: Ensure proper user authentication is configured
## Verification Steps
- Module loads correctly via the `load_harnessed_agent()` function
- All 28+ tools are properly registered with metadata
- Tool execution works with proper error handling and retries
- User permissions are properly enforced
- Memory management functions work with priority classification
- Remote skills deployment and execution works via SSH
- Workflow orchestration handles complex task dependencies
- Configuration is properly loaded from application config
- Security validations prevent common attack vectors
- Frontend integration works with bricks-framework
- Database operations follow sqlor specifications
## Related Skills
- module-development-spec: Module development workflow
- bricks-framework: Frontend development framework
- sqlor-database-module: Database integration patterns
- hermes-agent-enhanced-architecture: Enhanced architecture documentation