
Summary
Sage is a virtual Mythic agent that acts as an interface to AI and Large Language Model (LLM) services. Unlike traditional Mythic agents that run on compromised hosts, Sage operates entirely within the Mythic server itself, providing operators with AI-powered capabilities during operations. This agent does not generate payloads for deployment but instead creates a “callback” within Mythic that allows you to interact with various AI model providers.

You must obtain access permissions and API credentials for the AI model provider you plan to use. Sage supports multiple providers including Anthropic, AWS Bedrock, OpenAI, ollama, and OpenWebUI.
Highlighted Agent Features
- Multiple AI Provider Support: Connect to Anthropic, AWS Bedrock, OpenAI, ollama, and OpenWebUI
- Interactive Chat Sessions: Maintain conversational context across multiple messages
- Single Query Mode: Send one-off queries for quick responses
- Tool Enhancement: Enable tools to extend model capabilities
- Model Context Protocol (MCP): Connect to Stdio MCP servers for additional functionality
- Flexible Authentication: Configure credentials at multiple levels (task, user, build, environment)
- Model Discovery: List available models from each provider
Authors
Supported Providers
Anthropic
Direct access to Claude models via Anthropic’s API. Requires an API key. Example models: claude-3-7-sonnet-latest
AWS Bedrock
Access Claude models through Amazon Bedrock. Requires AWS credentials with Bedrock permissions. Example models: us.anthropic.claude-3-5-sonnet-20241022-v2:0
OpenAI
Access to OpenAI models or any OpenAI-compatible API endpoint. Example models: gpt-4o-mini
ollama
Connect to local or remote ollama instances for running open-source models.
OpenWebUI
Interface with OpenWebUI instances for additional model access.
Authentication
Sage uses CASE SENSITIVE settings/keys for authentication. Credentials can be provided in four places (checked in this order):
- Task command parameters - Highest priority, override all other settings
- User Secrets - Per-operator credentials in Mythic UI
- Payload build parameters - Shared credentials set during agent creation
- Container environment variables - Lowest priority fallback
Required Settings
- provider - The model provider (Anthropic, Bedrock, OpenAI, ollama, OpenWebUI)
- model - The model identifier string for the selected provider
- API_ENDPOINT - HTTP endpoint for the provider (not used for Bedrock)
- API_KEY - Authentication key for the provider
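If you rely on the container environment fallback, the same keys can be exported before the Sage container starts. The sketch below is illustrative only: the values are placeholders, and the exact way your deployment supplies environment variables to the container may differ.

```bash
# Illustrative only - keys are case sensitive and values are placeholders.
# Example: pointing Sage at an OpenAI-compatible endpoint via environment variables.
export provider=OpenAI
export model=gpt-4o-mini
export API_ENDPOINT=https://api.openai.com/v1
export API_KEY=sk-your-key-here
```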
AWS Bedrock Specific
- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
- AWS_SESSION_TOKEN
- AWS_DEFAULT_REGION
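For Bedrock, the standard AWS credential variables are used in place of API_KEY and API_ENDPOINT. A minimal sketch, again with placeholder values and assuming the container environment fallback:

```bash
# Illustrative Bedrock configuration - placeholder values only.
export provider=Bedrock
export model=us.anthropic.claude-3-5-sonnet-20241022-v2:0
export AWS_ACCESS_KEY_ID=AKIA...       # credentials for an identity with Bedrock permissions
export AWS_SECRET_ACCESS_KEY=...
export AWS_SESSION_TOKEN=...           # only needed when using temporary credentials
export AWS_DEFAULT_REGION=us-east-1
```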
Model Context Protocol (MCP)
Sage supports connecting to Stdio MCP servers using the mcp-connect command. This allows you to extend model capabilities with custom tools and functions.
MCP servers must run in the same location as the Sage container. Ensure all required dependencies are installed in the container or run Sage locally on the Mythic host.
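As one way of satisfying that requirement, an MCP server and its dependencies could be staged inside the running Sage container before issuing mcp-connect. The container name (sage), the server path, and the use of pip3 below are assumptions about a typical Docker-based install; adjust them to your environment.

```bash
# Hypothetical example: staging a Python-based stdio MCP server inside the Sage container.
# "sage" is the assumed container name; paths and dependency tooling are illustrative.
sudo docker cp ./my-mcp-server sage:/opt/my-mcp-server
sudo docker exec -it sage pip3 install -r /opt/my-mcp-server/requirements.txt
```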
Getting Started
- Clone the Mythic repository
- Install the Sage agent:
sudo ./mythic-cli install github https://github.com/MythicAgents/sage
- Start Mythic:
sudo ./mythic-cli start
- Navigate to the Payloads tab in Mythic
- Generate a new Sage payload with your desired build parameters
- A callback will be automatically created
- Interact with AI models through the Active Callbacks tab