Summary
Start an interactive chat session with a selected AI model provider. This command maintains conversational context across multiple messages, allowing for natural back-and-forth interactions.
- Needs Admin: False
- Version: 0
- Author: @Ne0nd0g
Arguments
provider
- Description: The model provider to chat with (Anthropic, Bedrock, OpenAI, ollama, OpenWebUI)
- Required: False (uses build parameters, user secrets, or environment variables if not specified)
- Type: Choose One
- Choices: Anthropic, Bedrock, OpenAI, ollama, OpenWebUI
model
- Description: The model to use for inference from the selected provider
- Required: False (uses build parameters, user secrets, or environment variables if not specified)
- Type: String
prompt
- Description: The initial prompt to send to the model
- Required: True
- Type: String
tools
- Description: Use tools to enhance the model’s capabilities
- Required: False
- Default: true
- Type: Boolean
verbose
- Description: Show verbose output of all user and AI messages
- Required: False
- Default: false
- Type: Boolean
API_ENDPOINT
- Description: The API endpoint to use for the selected provider
- Required: False
- Type: String
API_KEY
- Description: The API key to authenticate with the provider
- Required: False
- Type: String
AWS_ACCESS_KEY_ID
- Description: AWS Access Key ID for Bedrock
- Required: False
- Type: String
AWS_SECRET_ACCESS_KEY
- Description: AWS Secret Access Key for Bedrock
- Required: False
- Type: String
AWS_SESSION_TOKEN
- Description: AWS Session Token for Bedrock
- Required: False
- Type: String
AWS_DEFAULT_REGION
- Description: AWS Region for Bedrock
- Required: False
- Type: String
Usage
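A hypothetical invocation from the Mythic tasking line is shown below; the exact parameter syntax depends on your Mythic version, and the model name is a placeholder, not a real identifier:

```
chat -provider Anthropic -model <model-name> -prompt "Summarize the open tasks" -tools true -verbose false
```

After this first message, subsequent messages in the session only need the prompt text.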
Detailed Summary
The chat command creates an interactive session with an AI model, maintaining conversation history throughout the session. This is ideal for complex queries that require multiple exchanges or when you need to provide follow-up information.
Features
Session Management
- Maintains message history throughout the conversation
- Each response includes context from previous messages
- Sessions are tracked per task ID in Mythic
Interactive Messaging
- After the initial prompt, continue sending messages
- Type responses directly in the Mythic interface
- Exit the session by closing the interactive task
Tool Integration
- Access Mythic file storage
- Query task information
- Perform enhanced reasoning with external context
Verbose Output
- All user messages with 👤 icon
- All AI responses with 🤖 icon
- Tool usage and intermediate reasoning steps
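The session behavior above can be sketched as a simple message-history loop. This is an illustrative model only; `ChatSession` and its method names are ours, not the agent's actual implementation, and the real agent tracks sessions per Mythic task ID:

```python
# Minimal sketch of per-session conversation history (illustrative).

class ChatSession:
    def __init__(self, provider: str, model: str):
        # Provider and model are remembered for the life of the session.
        self.provider = provider
        self.model = model
        self.history: list[dict] = []  # full conversation context

    def send(self, text: str) -> str:
        # Every request includes all prior messages, so each response
        # carries context from earlier exchanges.
        self.history.append({"role": "user", "content": text})
        reply = self._call_model(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply

    def _call_model(self, messages: list[dict]) -> str:
        # Placeholder for the provider API call.
        return f"(reply to {len(messages)} messages via {self.model})"


session = ChatSession("Anthropic", "example-model")
session.send("Initial prompt")
session.send("Follow-up question")
# history now holds four messages: two user, two assistant
```

The key point is that `send` never discards `history`, which is why follow-up messages need only the new text.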
Provider-Specific Notes
Anthropic
- Supports the latest Claude models
- Tool use is highly effective with Claude
- Recommended for complex analytical tasks
Bedrock
- Only Anthropic Claude models are supported
- Model strings must contain .anthropic.
- Requires valid AWS credentials
OpenAI
- Works with OpenAI API and compatible endpoints
- Supports GPT models and compatible alternatives
- Can connect to ollama or other OpenAI-compatible services
Authentication Priority
Credentials are checked in this order:
- Command parameters (highest priority)
- User secrets in Mythic UI
- Payload build parameters
- Container environment variables (lowest priority)
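The precedence rules above amount to a first-non-empty lookup across the four tiers. The sketch below is illustrative; the parameter names stand in for the tiers and are not the agent's actual internals:

```python
import os


def resolve_credential(name: str,
                       command_params: dict,
                       user_secrets: dict,
                       build_params: dict):
    # Check each tier in priority order and return the first hit:
    # command parameters > user secrets > build parameters > environment.
    for tier in (command_params, user_secrets, build_params):
        value = tier.get(name)
        if value:
            return value
    return os.environ.get(name)  # lowest priority; None if unset


# A value supplied on the command line wins over a stored user secret:
resolve_credential("API_KEY", {"API_KEY": "from-task"}, {"API_KEY": "from-secrets"}, {})
```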
The first chat message includes your prompt. Subsequent messages in the session only need the message text - provider and model settings are remembered.
Best Practices
- Use descriptive initial prompts to set context
- Enable tools for enhanced capabilities
- Use verbose mode when debugging or learning
- Keep sessions focused on related topics
- Start a new session for unrelated queries