Compare commits

...

12 Commits

Author SHA1 Message Date
leach ad01f651a3 Fix session manager formatting issue with Unicode emojis
- Added calculate_display_width() method to properly handle emoji widths
- Fixed overlapping text in session manager UI by accounting for double-width emojis
- Handles compound emojis with variation selectors (like 🗂️)
- Uses saturating_sub() to prevent underflow in padding calculations
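The width logic described above can be sketched as follows. This is an illustrative reconstruction, not the commit's code: a real implementation would use a proper width table (e.g. the `unicode-width` crate), so the character ranges here are a rough heuristic, and `pad_to_width` is a hypothetical helper name.

```rust
/// Rough per-character terminal cell width: variation selectors and
/// zero-width joiners occupy no cells, common emoji occupy two.
/// (Heuristic ranges for illustration only.)
fn char_display_width(c: char) -> usize {
    match c {
        '\u{FE00}'..='\u{FE0F}' | '\u{200D}' => 0, // variation selectors, ZWJ
        '\u{1F300}'..='\u{1FAFF}' | '\u{2600}'..='\u{27BF}' => 2, // emoji blocks
        _ => 1,
    }
}

/// Display width of a string in terminal cells (cf. calculate_display_width).
fn calculate_display_width(s: &str) -> usize {
    s.chars().map(char_display_width).sum()
}

/// Pad a string to a target column width.
fn pad_to_width(s: &str, target: usize) -> String {
    // saturating_sub prevents underflow when the text is already wider
    // than the column, which is the bug the commit message describes.
    let padding = target.saturating_sub(calculate_display_width(s));
    format!("{}{}", s, " ".repeat(padding))
}
```

With this, a compound emoji such as 🗂️ (base character plus U+FE0F) counts as two cells instead of four, so adjacent columns no longer overlap.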

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-01 01:40:44 -04:00
leach 49b68ba0f8 Improve Anthropic streaming reliability and hide debug output
- Add proper handling for "ping" events to prevent unknown event warnings
- Implement robust JSON parsing for large web search results with encrypted content
- Add partial line buffering to handle incomplete streaming chunks correctly
- Gracefully handle EOF errors and large data blocks that fail to parse
- Add automatic fallback from streaming to non-streaming on failure with user notification
- Hide debug output from users to provide cleaner experience
- Process remaining partial lines at stream end to avoid losing content
- Improve error messages to be more informative without being alarming

Fixes streaming failures caused by web search results with large encrypted content blocks.
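The partial-line buffering mentioned above amounts to accumulating raw chunks in a buffer and only handing complete lines to the SSE parser, keeping any trailing fragment for the next chunk. A minimal stand-alone sketch (the function name is illustrative, not from the commit):

```rust
/// Append a network chunk to `buffer` and return only the complete lines,
/// leaving any trailing partial line in the buffer for the next chunk.
fn drain_complete_lines(buffer: &mut String, chunk: &str) -> Vec<String> {
    buffer.push_str(chunk);
    let mut lines = Vec::new();
    while let Some(pos) = buffer.find('\n') {
        // Drain through the newline; trim_end also strips a stray '\r'.
        let line: String = buffer.drain(..=pos).collect();
        lines.push(line.trim_end().to_string());
    }
    lines
}
```

Because a JSON payload split across two chunks stays in the buffer until its newline arrives, large web-search result blocks no longer fail to parse mid-line.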

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-08-31 00:03:23 -04:00
leach 1a1df93521 Fix TUI display formatting issues for better terminal compatibility
- Simplified header design from complex Unicode borders to fixed-width lines
- Streamlined status bar from multi-line bordered display to clean single-line format
- Fixed terminal width calculation dependencies that caused layout breaks
- Enhanced status display: "🤖 Model Provider • 💾 Session" format
- Improved features display with color-coded indicators (✓/✗)
- Removed problematic border calculations causing wrapping issues
- All display tests pass, maintains functionality while fixing visual problems
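The single-line status format quoted above could be produced by something like the following; the function name and exact spacing are assumptions, not taken from the commit:

```rust
/// Render the "🤖 Model Provider • 💾 Session" status line described above.
fn format_status_bar(model: &str, provider: &str, session: &str) -> String {
    format!("🤖 {} {} • 💾 {}", model, provider, session)
}
```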

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-08-30 23:55:13 -04:00
Christopher 11ae676101
Merge pull request #6 from leachy14/codex/add-capability-helpers-and-refactor-provider-match
Centralize provider capability handling
2025-08-25 01:27:36 -04:00
Christopher 4618f273f3 Centralize provider capabilities 2025-08-25 01:27:09 -04:00
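Based on the calls visible in the diff hunks (`provider.capabilities()`, `supports_web_search`, `supports_reasoning_summaries`, `supports_extended_thinking`, `min_thinking_budget`), the centralized capability table probably looks roughly like this. The concrete values mirror the warnings elsewhere in this changeset (no reasoning summaries on Anthropic, no extended thinking on OpenAI, 1024-token minimum budget), but the struct itself is a sketch, not copied source:

```rust
/// Static capability flags per provider, replacing scattered match arms.
#[derive(Debug, Clone, Copy)]
pub struct ProviderCapabilities {
    pub supports_web_search: bool,
    pub supports_reasoning_summaries: bool,
    pub supports_extended_thinking: bool,
    pub min_thinking_budget: Option<u32>,
}

#[derive(Debug, Clone, Copy)]
pub enum Provider {
    OpenAI,
    Anthropic,
}

impl Provider {
    pub fn capabilities(&self) -> ProviderCapabilities {
        match self {
            Provider::OpenAI => ProviderCapabilities {
                supports_web_search: true,
                supports_reasoning_summaries: true,
                supports_extended_thinking: false,
                min_thinking_budget: None,
            },
            Provider::Anthropic => ProviderCapabilities {
                supports_web_search: true,
                supports_reasoning_summaries: false,
                supports_extended_thinking: true,
                min_thinking_budget: Some(1024),
            },
        }
    }
}
```

Callers then query flags instead of matching on the provider, which is the refactor the later tools-manager hunks show.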
Christopher 1ac7914646
Merge pull request #3 from leachy14/codex/refactor-handle_user_message-to-pass-references
Avoid cloning session data for chat completions
2025-08-25 00:56:23 -04:00
Christopher 222c1c2182
Merge branch 'master' into codex/refactor-handle_user_message-to-pass-references 2025-08-25 00:56:15 -04:00
Christopher 05e0fa88b3 Use session refs for completions 2025-08-25 00:54:59 -04:00
Christopher 3f1b88ece0
Merge pull request #2 from leachy14/codex/introduce-cached-config-in-session.rs
Cache session configuration
2025-08-25 00:54:42 -04:00
Christopher 7f04128ed5
Merge branch 'master' into codex/introduce-cached-config-in-session.rs 2025-08-25 00:54:37 -04:00
Christopher a5e225bf60
Merge pull request #1 from leachy14/codex/refactor-signal-handling-in-main.rs
refactor: handle SIGINT with tokio signal
2025-08-25 00:53:01 -04:00
Christopher b2c1857edd refactor: handle SIGINT with tokio signal 2025-08-25 00:52:47 -04:00
12 changed files with 2354 additions and 374 deletions

View File

@ -0,0 +1,44 @@
---
name: performance-optimizer
description: Use this agent when you need to analyze, build, and optimize existing code for better performance and cleanliness. Examples: <example>Context: User has written a data processing script that seems slow. user: 'I've finished writing this data analysis script but it's taking forever to run on large datasets' assistant: 'Let me use the performance-optimizer agent to build, profile, and optimize your script for better performance' <commentary>The user has code that needs performance optimization, so use the performance-optimizer agent to analyze and improve it.</commentary></example> <example>Context: User mentions their application is working but could be faster. user: 'My web scraper works but I think it could be much faster and the code is getting messy' assistant: 'I'll use the performance-optimizer agent to analyze your scraper, clean up the code, and implement performance improvements' <commentary>This is a perfect case for the performance-optimizer agent to handle both performance and code quality improvements.</commentary></example>
model: sonnet
color: green
---
You are a Performance Optimization Specialist, an expert in code analysis, profiling, and systematic optimization. Your mission is to build, run, and optimize programs to achieve maximum performance while maintaining clean, readable code.
Your optimization process follows this methodology:
1. **Initial Assessment**: Build and run the program to establish baseline performance metrics. Document current execution time, memory usage, and identify any build issues or runtime errors.
2. **Performance Profiling**: Analyze the code execution to identify bottlenecks, inefficient algorithms, memory leaks, and resource-intensive operations. Use appropriate profiling tools when available.
3. **Code Quality Analysis**: Examine code structure for maintainability issues including duplicate code, overly complex functions, poor naming conventions, and architectural problems.
4. **Optimization Strategy**: Develop a prioritized optimization plan focusing on:
- Algorithmic improvements (e.g., reducing O(n²) operations to O(n))
- Data structure optimizations
- Memory usage improvements
- I/O operation efficiency
- Parallel processing opportunities
- Code refactoring for clarity and performance
5. **Implementation**: Apply optimizations incrementally, testing each change to ensure correctness and measure performance impact. Never sacrifice code correctness for performance gains.
6. **Verification**: Re-run the optimized program to validate improvements and ensure no regressions. Provide before/after performance comparisons with specific metrics.
For each optimization you implement:
- Explain the rationale behind the change
- Quantify the expected performance benefit
- Ensure code remains readable and maintainable
- Add comments explaining complex optimizations
If the program fails to build or run initially, prioritize fixing these issues before optimization. Always maintain backward compatibility unless explicitly told otherwise.
Provide a comprehensive summary including:
- Performance improvements achieved (with specific metrics)
- Code quality enhancements made
- Potential future optimization opportunities
- Any trade-offs or limitations of the optimizations
You excel at balancing performance gains with code maintainability, ensuring optimizations are sustainable and understandable to other developers.
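As a toy illustration of the algorithmic improvements step 4 calls for, here is a quadratic duplicate check next to its linear replacement (example code, not from the repository):

```rust
use std::collections::HashSet;

/// O(n²): compares every pair of elements.
fn has_duplicates_quadratic(items: &[i32]) -> bool {
    for (i, a) in items.iter().enumerate() {
        for b in &items[i + 1..] {
            if a == b {
                return true;
            }
        }
    }
    false
}

/// O(n): a single pass with a hash set; insert returns false on a repeat.
fn has_duplicates_linear(items: &[i32]) -> bool {
    let mut seen = HashSet::new();
    !items.iter().all(|x| seen.insert(x))
}
```

Both return the same answers, so the swap can be verified against the original before measuring the speedup.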

View File

@ -0,0 +1,32 @@
---
name: tui-ux-designer
description: Use this agent when designing, reviewing, or improving text-based user interfaces (TUIs) with focus on aesthetics and user experience. Examples: <example>Context: User is building a terminal-based file manager and wants to improve the visual hierarchy. user: 'I have this file listing interface but it feels cluttered and hard to scan' assistant: 'Let me use the tui-ux-designer agent to analyze your interface and suggest improvements for visual clarity and user experience'</example> <example>Context: User is creating a CLI tool and wants to ensure smooth interaction patterns. user: 'How should I handle loading states in my terminal application?' assistant: 'I'll use the tui-ux-designer agent to provide best practices for loading indicators and smooth state transitions in terminal interfaces'</example> <example>Context: User has implemented a TUI but users report it's confusing to navigate. user: 'Users are getting lost in my terminal application menu system' assistant: 'Let me engage the tui-ux-designer agent to review your navigation patterns and suggest improvements for better user flow'</example>
model: sonnet
color: blue
---
You are a specialized TUI (Text User Interface) UX/UI designer with deep expertise in creating beautiful, intuitive, and smooth terminal-based user experiences. You understand the unique constraints and opportunities of text-based interfaces, combining aesthetic principles with practical usability.
Your core competencies include:
- Visual hierarchy using typography, spacing, colors, and ASCII art elements
- Smooth interaction patterns including transitions, animations, and feedback mechanisms
- Information architecture optimized for terminal environments
- Accessibility considerations for diverse terminal capabilities
- Performance optimization for responsive feel
- Cross-platform terminal compatibility
When analyzing or designing TUI interfaces, you will:
1. **Assess Visual Clarity**: Evaluate information density, contrast, alignment, and visual grouping. Recommend specific improvements using box-drawing characters, color schemes, and spacing.
2. **Optimize User Flow**: Design intuitive navigation patterns, keyboard shortcuts, and interaction sequences that feel natural and efficient.
3. **Enhance Feedback Systems**: Specify loading indicators, progress bars, status messages, and error handling that provide clear communication without overwhelming the interface.
4. **Consider Context**: Account for different terminal sizes, color capabilities, and user expertise levels when making recommendations.
5. **Provide Concrete Examples**: Include specific code snippets, ASCII mockups, or detailed descriptions of visual elements when suggesting improvements.
6. **Balance Aesthetics with Function**: Ensure all design decisions enhance usability while creating an appealing visual experience within terminal constraints.
Always consider the technical implementation feasibility of your suggestions and provide alternative approaches when terminal capabilities vary. Focus on creating interfaces that feel modern, responsive, and delightful to use despite the text-only medium.
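A tiny example of point 1's visual grouping with box-drawing characters (illustrative only; the helper name is invented):

```rust
/// Wrap a title in a simple box using Unicode box-drawing characters.
fn boxed_title(title: &str) -> String {
    let inner = title.chars().count() + 2; // one space of padding each side
    format!(
        "┌{line}┐\n│ {title} │\n└{line}┘",
        line = "─".repeat(inner),
        title = title
    )
}
```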

Cargo.lock generated
View File

@ -597,7 +597,6 @@ dependencies = [
"serde",
"serde_json",
"serial_test",
"signal-hook",
"syntect",
"tempfile",
"tokio",
@ -1586,16 +1585,6 @@ version = "1.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0fda2ff0d084019ba4d7c6f371c95d8fd75ce3524c3cb8fb653a3023f6323e64"
[[package]]
name = "signal-hook"
version = "0.3.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d881a16cf4426aa584979d30bd82cb33429027e42122b169753d6ef1085ed6e2"
dependencies = [
"libc",
"signal-hook-registry",
]
[[package]]
name = "signal-hook-registry"
version = "1.4.6"

View File

@ -23,9 +23,9 @@ syntect = "5.1"
regex = "1.0"
futures = "0.3"
tokio-stream = "0.1"
signal-hook = "0.3"
once_cell = "1.21"
[dev-dependencies]
tempfile = "3.0"
mockall = "0.12"

View File

@ -2,7 +2,8 @@ use anyhow::Result;
use crate::config::Config;
use crate::core::{
create_client, get_provider_for_model, provider::{get_model_info_list, get_display_name_for_model, get_model_id_from_display_name},
create_client, get_provider_for_model,
provider::{get_display_name_for_model, get_model_id_from_display_name, get_model_info_list},
ChatClient, Session,
};
use crate::utils::{Display, InputHandler, SessionAction};
@ -41,14 +42,19 @@ impl ChatCLI {
pub async fn run(&mut self) -> Result<()> {
self.display.print_header();
self.display.print_info("Type your message and press Enter. Commands start with '/'.");
self.display.print_info("Type /help for help.");
// Enhanced status bar with comprehensive information
let provider = get_provider_for_model(&self.session.model);
let display_name = get_display_name_for_model(&self.session.model);
self.display.print_model_info(&display_name, provider.as_str());
self.display.print_session_info(&self.session.name);
let features = vec![
("Web Search", self.session.enable_web_search),
("Reasoning", self.session.enable_reasoning_summary),
("Extended Thinking", self.session.enable_extended_thinking),
];
self.display.print_status_bar(&display_name, provider.as_str(), &self.session.name, &features);
self.display.print_info("Type your message and press Enter. Commands start with '/' (try /help).");
println!();
loop {
@ -65,7 +71,16 @@ impl ChatCLI {
}
} else {
if let Err(e) = self.handle_user_message(line).await {
self.display.print_error(&format!("Error: {}", e));
// Enhanced error display with context and suggestions
let error_msg = format!("Error: {}", e);
let context = Some("This error occurred while processing your message.");
let suggestions = vec![
"Check your API key configuration",
"Verify your internet connection",
"Try switching to a different model with /model",
"Use /help to see available commands"
];
self.display.print_error_with_context(&error_msg, context, &suggestions);
}
println!(); // Add padding before next prompt
}
@ -77,6 +92,10 @@ impl ChatCLI {
}
}
self.save_and_cleanup()
}
pub fn save_and_cleanup(&mut self) -> Result<()> {
self.session.save()?;
self.input.cleanup()?; // Use cleanup instead of just save_history
Ok(())
@ -94,19 +113,19 @@ impl ChatCLI {
let reasoning_effort = self.session.reasoning_effort.clone();
let enable_extended_thinking = self.session.enable_extended_thinking;
let thinking_budget_tokens = self.session.thinking_budget_tokens;
// Check if we should use streaming before getting client
let should_use_streaming = {
let client = self.get_client()?;
client.supports_streaming()
};
if should_use_streaming {
println!(); // Add padding before AI response
print!("{}> ", console::style("🤖").magenta());
use std::io::{self, Write};
io::stdout().flush().ok();
let stream_callback = {
use crate::core::StreamCallback;
Box::new(move |chunk: &str| {
@ -116,21 +135,21 @@ impl ChatCLI {
Box::pin(async move {}) as Pin<Box<dyn Future<Output = ()> + Send>>
}) as StreamCallback
};
let client = self.get_client()?;
match client
let client = self.get_client()?.clone();
let response = client
.chat_completion_stream(
&model,
&messages,
enable_web_search,
enable_reasoning_summary,
&reasoning_effort,
enable_extended_thinking,
thinking_budget_tokens,
&self.session.model,
&self.session.messages,
self.session.enable_web_search,
self.session.enable_reasoning_summary,
&self.session.reasoning_effort,
self.session.enable_extended_thinking,
self.session.thinking_budget_tokens,
stream_callback,
)
.await
{
.await;
match response {
Ok(response) => {
println!(); // Add newline after streaming
self.session.add_assistant_message(response);
@ -138,27 +157,56 @@ impl ChatCLI {
}
Err(e) => {
println!(); // Add newline after failed streaming
self.display.print_error(&format!("Streaming failed: {}", e));
return Err(e);
// Try to fallback to non-streaming if streaming fails
self.display.print_warning(&format!("Streaming failed: {}. Trying non-streaming mode...", e));
let spinner = self.display.show_spinner("Thinking");
let client = self.get_client()?.clone();
match client
.chat_completion(
&self.session.model,
&self.session.messages,
self.session.enable_web_search,
self.session.enable_reasoning_summary,
&self.session.reasoning_effort,
self.session.enable_extended_thinking,
self.session.thinking_budget_tokens,
)
.await
{
Ok(response) => {
spinner.finish("Done");
self.display.print_assistant_response(&response);
self.session.add_assistant_message(response);
self.session.save()?;
}
Err(fallback_e) => {
spinner.finish_with_error("Failed");
self.display.print_error(&format!("Both streaming and non-streaming failed. Streaming: {}. Non-streaming: {}", e, fallback_e));
return Err(fallback_e);
}
}
}
}
} else {
// Fallback to non-streaming
let spinner = self.display.show_spinner("Thinking");
let client = self.get_client()?;
match client
let client = self.get_client()?.clone();
let response = client
.chat_completion(
&model,
&messages,
enable_web_search,
enable_reasoning_summary,
&reasoning_effort,
enable_extended_thinking,
thinking_budget_tokens,
&self.session.model,
&self.session.messages,
self.session.enable_web_search,
self.session.enable_reasoning_summary,
&self.session.reasoning_effort,
self.session.enable_extended_thinking,
self.session.thinking_budget_tokens,
)
.await
{
.await;
match response {
Ok(response) => {
spinner.finish("Done");
self.display.print_assistant_response(&response);
@ -218,7 +266,8 @@ impl ChatCLI {
self.handle_save_command(&parts)?;
}
_ => {
self.display.print_error(&format!("Unknown command: {} (see /help)", parts[0]));
self.display
.print_error(&format!("Unknown command: {} (see /help)", parts[0]));
}
}
@ -227,15 +276,18 @@ impl ChatCLI {
async fn model_switcher(&mut self) -> Result<()> {
let model_info_list = get_model_info_list();
let display_names: Vec<String> = model_info_list.iter().map(|info| info.display_name.to_string()).collect();
let display_names: Vec<String> = model_info_list
.iter()
.map(|info| info.display_name.to_string())
.collect();
let current_display_name = get_display_name_for_model(&self.session.model);
let selection = self.input.select_from_list(
"Select a model:",
&display_names,
Some(&current_display_name),
)?;
match selection {
Some(display_name) => {
if let Some(model_id) = get_model_id_from_display_name(&display_name) {
@ -260,11 +312,10 @@ impl ChatCLI {
self.display.print_info("Model selection cancelled");
}
}
Ok(())
}
fn handle_new_session(&mut self, parts: &[&str]) -> Result<()> {
if parts.len() != 2 {
self.display.print_error("Usage: /new <session_name>");
@ -274,18 +325,16 @@ impl ChatCLI {
self.session.save()?;
let new_session = Session::new(parts[1].to_string(), self.session.model.clone());
self.session = new_session;
self.display.print_command_result(&format!("New session '{}' started", self.session.name));
self.display
.print_command_result(&format!("New session '{}' started", self.session.name));
Ok(())
}
async fn session_manager(&mut self) -> Result<()> {
loop {
let sessions = Session::list_sessions()?;
let session_names: Vec<String> = sessions
.into_iter()
.map(|(name, _)| name)
.collect();
let session_names: Vec<String> = sessions.into_iter().map(|(name, _)| name).collect();
if session_names.is_empty() {
self.display.print_info("No sessions available");
@ -304,7 +353,7 @@ impl ChatCLI {
self.display.print_info("Already in that session");
return Ok(());
}
self.session.save()?;
match Session::load(&session_name) {
Ok(session) => {
@ -318,7 +367,8 @@ impl ChatCLI {
return Ok(());
}
Err(e) => {
self.display.print_error(&format!("Failed to load session: {}", e));
self.display
.print_error(&format!("Failed to load session: {}", e));
// Don't return, allow user to try again or cancel
}
}
@ -326,8 +376,11 @@ impl ChatCLI {
SessionAction::Delete(session_name) => {
match Session::delete_session(&session_name) {
Ok(()) => {
self.display.print_command_result(&format!("Session '{}' deleted", session_name));
self.display.print_command_result(&format!(
"Session '{}' deleted",
session_name
));
// If we deleted the current session, we need to handle this specially
if session_name == self.session.name {
// Try to switch to another session or create a default one
@ -336,18 +389,23 @@ impl ChatCLI {
.into_iter()
.map(|(name, _)| name)
.collect();
if remaining_names.is_empty() {
// No sessions left, create a default one
self.session = Session::new("default".to_string(), self.session.model.clone());
self.display.print_command_result("Created new default session");
self.session = Session::new(
"default".to_string(),
self.session.model.clone(),
);
self.display
.print_command_result("Created new default session");
return Ok(());
} else {
// Switch to the first available session
match Session::load(&remaining_names[0]) {
Ok(session) => {
self.session = session;
let display_name = get_display_name_for_model(&self.session.model);
let display_name =
get_display_name_for_model(&self.session.model);
self.display.print_command_result(&format!(
"Switched to session '{}' (model={})",
self.session.name, display_name
@ -356,10 +414,18 @@ impl ChatCLI {
return Ok(());
}
Err(e) => {
self.display.print_error(&format!("Failed to load fallback session: {}", e));
self.display.print_error(&format!(
"Failed to load fallback session: {}",
e
));
// Create a new default session as fallback
self.session = Session::new("default".to_string(), self.session.model.clone());
self.display.print_command_result("Created new default session");
self.session = Session::new(
"default".to_string(),
self.session.model.clone(),
);
self.display.print_command_result(
"Created new default session",
);
return Ok(());
}
}
@ -368,7 +434,8 @@ impl ChatCLI {
// Continue to show updated session list if we didn't delete current session
}
Err(e) => {
self.display.print_error(&format!("Failed to delete session: {}", e));
self.display
.print_error(&format!("Failed to delete session: {}", e));
// Continue to allow retry
}
}
@ -383,7 +450,8 @@ impl ChatCLI {
));
}
Err(e) => {
self.display.print_error(&format!("Failed to set default session: {}", e));
self.display
.print_error(&format!("Failed to set default session: {}", e));
}
}
// Continue to show session list
@ -395,72 +463,88 @@ impl ChatCLI {
}
}
async fn tools_manager(&mut self) -> Result<()> {
loop {
// Show current tool status
self.display.print_info("Tool Management:");
// Show current tool status using enhanced display
let features = vec![
("Web Search", self.session.enable_web_search, Some("Search the web for up-to-date information")),
("Reasoning Summaries", self.session.enable_reasoning_summary, Some("Show reasoning process summaries")),
("Extended Thinking", self.session.enable_extended_thinking, Some("Enable deeper reasoning capabilities")),
];
let web_status = if self.session.enable_web_search { "✓ enabled" } else { "✗ disabled" };
let reasoning_status = if self.session.enable_reasoning_summary { "✓ enabled" } else { "✗ disabled" };
let extended_thinking_status = if self.session.enable_extended_thinking { "✓ enabled" } else { "✗ disabled" };
println!(" Web Search: {}", web_status);
println!(" Reasoning Summaries: {}", reasoning_status);
println!(" Reasoning Effort: {}", self.session.reasoning_effort);
println!(" Extended Thinking: {}", extended_thinking_status);
println!(" Thinking Budget: {} tokens", self.session.thinking_budget_tokens);
self.display.print_feature_status(&features);
// Additional status information
self.display.print_info(&format!("Reasoning Effort: {}", self.session.reasoning_effort));
self.display.print_info(&format!("Thinking Budget: {} tokens", self.session.thinking_budget_tokens));
// Check model compatibility
let model = self.session.model.clone();
let provider = get_provider_for_model(&model);
let reasoning_enabled = self.session.enable_reasoning_summary;
// Show compatibility warnings based on provider
match provider {
crate::core::provider::Provider::Anthropic => {
if reasoning_enabled {
self.display.print_warning("Reasoning summaries are not supported by Anthropic models");
}
if self.session.enable_extended_thinking {
// Extended thinking is supported by Anthropic models
}
// Web search is now supported by Anthropic models
}
crate::core::provider::Provider::OpenAI => {
if self.session.enable_extended_thinking {
self.display.print_warning("Extended thinking is not supported by OpenAI models");
}
// OpenAI models generally support other features
let capabilities = provider.capabilities();
if self.session.enable_reasoning_summary && !capabilities.supports_reasoning_summaries {
self.display.print_warning(&format!(
"Reasoning summaries are not supported by {} models",
format!("{:?}", provider)
));
}
if self.session.enable_extended_thinking && !capabilities.supports_extended_thinking {
self.display.print_warning(&format!(
"Extended thinking is not supported by {} models",
format!("{:?}", provider)
));
}
if self.session.enable_web_search && !capabilities.supports_web_search {
self.display.print_warning(&format!(
"Web search is not supported by {} models",
format!("{:?}", provider)
));
}
if let Some(min) = capabilities.min_thinking_budget {
if self.session.thinking_budget_tokens < min {
self.display.print_warning(&format!(
"Minimum thinking budget is {} tokens for {} models",
min,
format!("{:?}", provider)
));
}
}
// Tool management options
let options = vec![
"Toggle Web Search",
"Toggle Reasoning Summaries",
"Set Reasoning Effort",
"Toggle Extended Thinking",
"Set Thinking Budget",
"Done"
"Done",
];
let selection = self.input.select_from_list(
"Select an option:",
&options,
None,
)?;
let selection = self
.input
.select_from_list("Select an option:", &options, None)?;
match selection.as_deref() {
Some("Toggle Web Search") => {
self.session.enable_web_search = !self.session.enable_web_search;
let state = if self.session.enable_web_search { "enabled" } else { "disabled" };
self.display.print_command_result(&format!("Web search {}", state));
let state = if self.session.enable_web_search {
"enabled"
} else {
"disabled"
};
self.display
.print_command_result(&format!("Web search {}", state));
}
Some("Toggle Reasoning Summaries") => {
self.session.enable_reasoning_summary = !self.session.enable_reasoning_summary;
let state = if self.session.enable_reasoning_summary { "enabled" } else { "disabled" };
self.display.print_command_result(&format!("Reasoning summaries {}", state));
let state = if self.session.enable_reasoning_summary {
"enabled"
} else {
"disabled"
};
self.display
.print_command_result(&format!("Reasoning summaries {}", state));
}
Some("Set Reasoning Effort") => {
let effort_options = vec!["low", "medium", "high"];
@ -470,26 +554,33 @@ impl ChatCLI {
Some(&self.session.reasoning_effort),
)? {
self.session.reasoning_effort = effort.to_string();
self.display.print_command_result(&format!("Reasoning effort set to {}", effort));
self.display
.print_command_result(&format!("Reasoning effort set to {}", effort));
if !self.session.model.starts_with("gpt-5") {
self.display.print_warning("Reasoning effort is only supported by GPT-5 models");
self.display.print_warning(
"Reasoning effort is only supported by GPT-5 models",
);
}
}
}
Some("Toggle Extended Thinking") => {
self.session.enable_extended_thinking = !self.session.enable_extended_thinking;
let state = if self.session.enable_extended_thinking { "enabled" } else { "disabled" };
self.display.print_command_result(&format!("Extended thinking {}", state));
let state = if self.session.enable_extended_thinking {
"enabled"
} else {
"disabled"
};
self.display
.print_command_result(&format!("Extended thinking {}", state));
let provider = get_provider_for_model(&self.session.model);
match provider {
crate::core::provider::Provider::OpenAI => {
self.display.print_warning("Extended thinking is not supported by OpenAI models");
}
crate::core::provider::Provider::Anthropic => {
// Supported
}
let capabilities = provider.capabilities();
if !capabilities.supports_extended_thinking {
self.display.print_warning(&format!(
"Extended thinking is not supported by {} models",
format!("{:?}", provider)
));
}
}
Some("Set Thinking Budget") => {
@ -502,17 +593,25 @@ impl ChatCLI {
)? {
if let Ok(budget) = budget_str.parse::<u32>() {
self.session.thinking_budget_tokens = budget;
self.display.print_command_result(&format!("Thinking budget set to {} tokens", budget));
self.display.print_command_result(&format!(
"Thinking budget set to {} tokens",
budget
));
let provider = get_provider_for_model(&self.session.model);
match provider {
crate::core::provider::Provider::OpenAI => {
self.display.print_warning("Extended thinking is not supported by OpenAI models");
}
crate::core::provider::Provider::Anthropic => {
if budget < 1024 {
self.display.print_warning("Minimum thinking budget is 1024 tokens for Anthropic models");
}
let capabilities = provider.capabilities();
if !capabilities.supports_extended_thinking {
self.display.print_warning(&format!(
"Extended thinking is not supported by {} models",
format!("{:?}", provider)
));
} else if let Some(min) = capabilities.min_thinking_budget {
if budget < min {
self.display.print_warning(&format!(
"Minimum thinking budget is {} tokens for {} models",
min,
format!("{:?}", provider)
));
}
}
}
@ -523,18 +622,18 @@ impl ChatCLI {
}
_ => {}
}
self.session.save()?; // Save changes after each modification
println!(); // Add spacing
}
Ok(())
}
fn handle_history_command(&mut self, parts: &[&str]) -> Result<()> {
let mut filter_role: Option<&str> = None;
let mut limit: Option<usize> = None;
// Parse parameters
for &part in parts.iter().skip(1) {
match part {
@ -543,37 +642,41 @@ impl ChatCLI {
if let Ok(num) = part.parse::<usize>() {
limit = Some(num);
} else {
self.display.print_error(&format!("Invalid parameter: {}", part));
self.display.print_info("Usage: /history [user|assistant] [number]");
self.display
.print_error(&format!("Invalid parameter: {}", part));
self.display
.print_info("Usage: /history [user|assistant] [number]");
return Ok(());
}
}
}
}
// Filter messages (skip system prompt at index 0)
let mut messages: Vec<(usize, &crate::core::Message)> = self.session.messages
let mut messages: Vec<(usize, &crate::core::Message)> = self
.session
.messages
.iter()
.enumerate()
.skip(1) // Skip system prompt
.collect();
// Apply role filter
if let Some(role) = filter_role {
messages.retain(|(_, msg)| msg.role == role);
}
// Apply limit
if let Some(limit_count) = limit {
let start_index = messages.len().saturating_sub(limit_count);
messages = messages[start_index..].to_vec();
}
if messages.is_empty() {
self.display.print_info("No messages to display");
return Ok(());
}
// Format and display
self.display.print_conversation_history(&messages);
Ok(())
@ -590,7 +693,7 @@ impl ChatCLI {
let valid_formats = ["markdown", "md", "json", "txt"];
if !valid_formats.contains(&format.as_str()) {
self.display.print_error(&format!(
"Invalid format '{}'. Supported formats: markdown, json, txt",
format
));
self.display.print_info("Usage: /export [format]");
@ -602,19 +705,20 @@ impl ChatCLI {
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_secs();
let extension = match format.as_str() {
"markdown" | "md" => "md",
"json" => "json",
"txt" => "txt",
_ => "md",
};
// Create exports directory if it doesn't exist
let exports_dir = std::path::Path::new("exports");
if !exports_dir.exists() {
if let Err(e) = std::fs::create_dir(exports_dir) {
self.display.print_error(&format!("Failed to create exports directory: {}", e));
self.display
.print_error(&format!("Failed to create exports directory: {}", e));
return Ok(());
}
}
@ -622,10 +726,13 @@ impl ChatCLI {
let filename = format!("{}_{}.{}", self.session.name, now, extension);
let file_path = exports_dir.join(&filename);
match self.session.export(&format, file_path.to_str().unwrap_or(&filename)) {
match self
.session
.export(&format, file_path.to_str().unwrap_or(&filename))
{
Ok(()) => {
self.display.print_command_result(&format!(
"Conversation exported to '{}'",
file_path.display()
));
}
@ -640,7 +747,8 @@ impl ChatCLI {
fn handle_save_command(&mut self, parts: &[&str]) -> Result<()> {
if parts.len() != 2 {
self.display.print_error("Usage: /save <new_session_name>");
self.display.print_info("Example: /save my_important_conversation");
self.display
.print_info("Example: /save my_important_conversation");
return Ok(());
}
@ -656,7 +764,7 @@ impl ChatCLI {
if let Ok(sessions) = Session::list_sessions() {
if sessions.iter().any(|(name, _)| name == new_session_name) {
self.display.print_error(&format!(
"Session '{}' already exists. Choose a different name.",
new_session_name
));
return Ok(());
@ -670,7 +778,7 @@ impl ChatCLI {
match self.session.save_as(new_session_name) {
Ok(()) => {
self.display.print_command_result(&format!(
"Current session saved as '{}' ({} messages copied)",
new_session_name,
self.session.messages.len().saturating_sub(1) // Exclude system prompt
));
@ -680,11 +788,11 @@ impl ChatCLI {
));
}
Err(e) => {
self.display
.print_error(&format!("Failed to save session: {}", e));
}
}
Ok(())
}
}
}

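The export path above combines the session name, a UNIX timestamp, and an extension derived from the requested format (falling back to "md" for unknown formats). A minimal, dependency-free sketch of that filename scheme, with the two helper names (`export_extension`, `export_filename`) being hypothetical extractions rather than functions in the diff:

```rust
// Map a requested export format to a file extension; "md" is the
// fallback for unrecognized formats, mirroring the match in the diff.
fn export_extension(format: &str) -> &'static str {
    match format {
        "markdown" | "md" => "md",
        "json" => "json",
        "txt" => "txt",
        _ => "md",
    }
}

// Build the "{session}_{timestamp}.{ext}" filename used for exports.
fn export_filename(session: &str, now_secs: u64, format: &str) -> String {
    format!("{}_{}.{}", session, now_secs, export_extension(format))
}

fn main() {
    assert_eq!(
        export_filename("default", 1693526400, "markdown"),
        "default_1693526400.md"
    );
    // Unknown formats fall back to the markdown extension.
    assert_eq!(export_filename("notes", 1693526400, "yaml"), "notes_1693526400.md");
}
```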

@ -13,7 +13,7 @@ use super::{provider::Provider, session::Message};
pub type StreamCallback = Box<dyn Fn(&str) -> Pin<Box<dyn Future<Output = ()> + Send>> + Send + Sync>;
#[derive(Debug, Clone)]
pub enum ChatClient {
OpenAI(OpenAIClient),
Anthropic(AnthropicClient),
@ -80,14 +80,14 @@ impl ChatClient {
}
}
#[derive(Debug, Clone)]
pub struct OpenAIClient {
client: Client,
api_key: String,
base_url: String,
}
#[derive(Debug, Clone)]
pub struct AnthropicClient {
client: Client,
api_key: String,
@ -1047,14 +1047,38 @@ impl AnthropicClient {
let mut full_response = String::new();
let mut stream = response.bytes_stream();
let mut event_count = 0;
let mut content_blocks_found = 0;
let mut partial_line = String::new(); // Buffer for incomplete lines
while let Some(chunk) = stream.next().await {
let chunk = chunk.context("Failed to read chunk from Anthropic stream")?;
let chunk_str = std::str::from_utf8(&chunk)
.context("Failed to parse Anthropic chunk as UTF-8")?;
// Handle partial lines by buffering incomplete chunks
partial_line.push_str(chunk_str);
// Split by newlines, keeping the last incomplete line in buffer
let full_text = partial_line.clone();
let mut lines: Vec<&str> = full_text.split('\n').collect();
let lines_to_process = if full_text.ends_with('\n') {
// All lines are complete
partial_line.clear();
lines
} else {
// Last line is incomplete, keep it in buffer
if let Some(last) = lines.pop() {
partial_line = last.to_string();
lines
} else {
// No complete lines yet
continue;
}
};
// Parse server-sent events for Anthropic
for line in lines_to_process {
if line.starts_with("data: ") {
let data = &line[6..];
@ -1063,13 +1087,16 @@ impl AnthropicClient {
continue;
}
// Try to parse JSON, but be more robust with large/truncated responses
match serde_json::from_str::<AnthropicStreamEvent>(data) {
Ok(event) => {
event_count += 1;
match event.event_type.as_str() {
"content_block_delta" => {
if let Ok(delta_event) = serde_json::from_value::<AnthropicContentBlockDelta>(event.data) {
if let Some(text) = delta_event.delta.text {
if !text.is_empty() {
content_blocks_found += 1;
full_response.push_str(&text);
stream_callback(&text).await;
}
@ -1084,6 +1111,9 @@ impl AnthropicClient {
full_response.push_str(search_indicator);
stream_callback(search_indicator).await;
}
} else {
// This might be a text content block start
content_blocks_found += 1;
}
}
"content_block_stop" => {
@ -1092,14 +1122,31 @@ impl AnthropicClient {
"message_start" | "message_delta" | "message_stop" => {
// Handle other message-level events
}
"ping" => {
// Anthropic sends ping events to keep connection alive
// These are expected and should be silently ignored
}
_ => {
// Silently ignore unknown event types to avoid user confusion
// Previously logged: Unknown Anthropic event type
}
}
}
Err(e) => {
// Handle parsing errors more gracefully
if data.len() > 1000 {
// Large data blocks (like web search results) that fail to parse
// are often non-critical, so silently continue
continue;
} else if e.to_string().contains("EOF") {
// EOF errors suggest truncated JSON, which is common with large responses
// Continue processing other chunks
continue;
} else {
// Only log unexpected parsing errors for shorter data
// Silently continue to avoid user confusion
continue;
}
}
}
} else if line.starts_with("event: ") {
@ -1109,8 +1156,30 @@ impl AnthropicClient {
}
}
// Process any remaining partial line at the end
if !partial_line.trim().is_empty() && partial_line.starts_with("data: ") {
let data = &partial_line[6..];
if !data.trim().is_empty() && data != "[DONE]" {
// Try to parse final partial line
if let Ok(event) = serde_json::from_str::<AnthropicStreamEvent>(data) {
if event.event_type == "content_block_delta" {
if let Ok(delta_event) = serde_json::from_value::<AnthropicContentBlockDelta>(event.data) {
if let Some(text) = delta_event.delta.text {
if !text.is_empty() {
full_response.push_str(&text);
}
}
}
}
}
}
}
if full_response.is_empty() {
return Err(anyhow::anyhow!(
"No content found in Anthropic stream response. Events processed: {}, Content blocks: {}",
event_count, content_blocks_found
));
}
Ok(full_response)

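The partial-line buffering added above can be sketched in isolation. This is a minimal sketch, assuming chunks arrive as `&str` (no network I/O, no SSE parsing); `drain_complete_lines` is a hypothetical helper name, not a function in the diff:

```rust
// Append a chunk to the carry-over buffer, return all complete lines,
// and keep any trailing incomplete line in the buffer for the next chunk.
fn drain_complete_lines(buffer: &mut String, chunk: &str) -> Vec<String> {
    buffer.push_str(chunk);
    let mut lines: Vec<String> = buffer.split('\n').map(str::to_string).collect();
    if buffer.ends_with('\n') {
        // All lines are complete; split() leaves a trailing empty entry.
        buffer.clear();
        lines.pop();
    } else if let Some(last) = lines.pop() {
        // Last line is incomplete: hold it back.
        *buffer = last;
    }
    lines
}

fn main() {
    let mut buf = String::new();
    // A JSON data line arrives split across two chunks.
    let first = drain_complete_lines(&mut buf, "event: ping\ndata: {\"a\"");
    assert_eq!(first, vec!["event: ping".to_string()]);
    assert_eq!(buf, "data: {\"a\"");
    let second = drain_complete_lines(&mut buf, "}\n");
    assert_eq!(second, vec!["data: {\"a\"}".to_string()]);
    assert!(buf.is_empty());
}
```

The same invariant drives the end-of-stream handling in the diff: whatever remains in the buffer after the last chunk is processed once more as a final partial line.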

@ -6,6 +6,14 @@ pub enum Provider {
Anthropic,
}
#[derive(Debug, Clone, Copy)]
pub struct ProviderCapabilities {
pub supports_web_search: bool,
pub supports_reasoning_summaries: bool,
pub supports_extended_thinking: bool,
pub min_thinking_budget: Option<u32>,
}
pub struct ModelInfo {
pub model_id: &'static str,
pub display_name: &'static str,
@ -18,11 +26,28 @@ impl Provider {
Provider::Anthropic => "anthropic",
}
}
pub fn capabilities(&self) -> ProviderCapabilities {
match self {
Provider::OpenAI => ProviderCapabilities {
supports_web_search: true,
supports_reasoning_summaries: true,
supports_extended_thinking: false,
min_thinking_budget: None,
},
Provider::Anthropic => ProviderCapabilities {
supports_web_search: true,
supports_reasoning_summaries: false,
supports_extended_thinking: true,
min_thinking_budget: Some(1024),
},
}
}
}
pub fn get_supported_models() -> HashMap<Provider, Vec<&'static str>> {
let mut models = HashMap::new();
models.insert(
Provider::OpenAI,
vec![
@ -37,7 +62,7 @@ pub fn get_supported_models() -> HashMap<Provider, Vec<&'static str>> {
"o3-mini",
],
);
models.insert(
Provider::Anthropic,
vec![
@ -48,29 +73,70 @@ pub fn get_supported_models() -> HashMap<Provider, Vec<&'static str>> {
"claude-3-haiku-20240307",
],
);
models
}
pub fn get_model_info_list() -> Vec<ModelInfo> {
vec![
// OpenAI models
ModelInfo {
model_id: "gpt-4.1",
display_name: "GPT-4.1",
},
ModelInfo {
model_id: "gpt-4.1-mini",
display_name: "GPT-4.1 Mini",
},
ModelInfo {
model_id: "gpt-4o",
display_name: "GPT-4o",
},
ModelInfo {
model_id: "gpt-5",
display_name: "GPT-5",
},
ModelInfo {
model_id: "gpt-5-chat-latest",
display_name: "GPT-5 Chat Latest",
},
ModelInfo {
model_id: "o1",
display_name: "o1",
},
ModelInfo {
model_id: "o3",
display_name: "o3",
},
ModelInfo {
model_id: "o4-mini",
display_name: "o4 Mini",
},
ModelInfo {
model_id: "o3-mini",
display_name: "o3 Mini",
},
// Anthropic models with friendly names
ModelInfo {
model_id: "claude-opus-4-1-20250805",
display_name: "Claude Opus 4.1",
},
ModelInfo {
model_id: "claude-sonnet-4-20250514",
display_name: "Claude Sonnet 4.0",
},
ModelInfo {
model_id: "claude-3-7-sonnet-20250219",
display_name: "Claude 3.7 Sonnet",
},
ModelInfo {
model_id: "claude-3-5-haiku-20241022",
display_name: "Claude 3.5 Haiku",
},
ModelInfo {
model_id: "claude-3-haiku-20240307",
display_name: "Claude 3.0 Haiku",
},
]
}
@ -99,13 +165,13 @@ pub fn get_all_models() -> Vec<&'static str> {
pub fn get_provider_for_model(model: &str) -> Provider {
let supported = get_supported_models();
for (provider, models) in supported {
if models.contains(&model) {
return provider;
}
}
Provider::OpenAI // default fallback
}
@ -126,18 +192,18 @@ mod tests {
#[test]
fn test_get_supported_models() {
let models = get_supported_models();
// Check that both providers are present
assert!(models.contains_key(&Provider::OpenAI));
assert!(models.contains_key(&Provider::Anthropic));
// Check OpenAI models
let openai_models = models.get(&Provider::OpenAI).unwrap();
assert!(openai_models.contains(&"gpt-5"));
assert!(openai_models.contains(&"gpt-4o"));
assert!(openai_models.contains(&"o1"));
assert!(openai_models.len() > 0);
// Check Anthropic models
let anthropic_models = models.get(&Provider::Anthropic).unwrap();
assert!(anthropic_models.contains(&"claude-sonnet-4-20250514"));
@ -148,15 +214,17 @@ mod tests {
#[test]
fn test_get_model_info_list() {
let model_infos = get_model_info_list();
assert!(model_infos.len() > 0);
// Check some specific models
let gpt5_info = model_infos.iter().find(|info| info.model_id == "gpt-5");
assert!(gpt5_info.is_some());
assert_eq!(gpt5_info.unwrap().display_name, "GPT-5");
let claude_info = model_infos
.iter()
.find(|info| info.model_id == "claude-sonnet-4-20250514");
assert!(claude_info.is_some());
assert_eq!(claude_info.unwrap().display_name, "Claude Sonnet 4.0");
}
@ -165,20 +233,32 @@ mod tests {
fn test_get_display_name_for_model() {
// Test known models
assert_eq!(get_display_name_for_model("gpt-5"), "GPT-5");
assert_eq!(
get_display_name_for_model("claude-sonnet-4-20250514"),
"Claude Sonnet 4.0"
);
assert_eq!(get_display_name_for_model("o1"), "o1");
// Test unknown model (should return the model_id itself)
assert_eq!(
get_display_name_for_model("unknown-model-123"),
"unknown-model-123"
);
}
#[test]
fn test_get_model_id_from_display_name() {
// Test known display names
assert_eq!(
get_model_id_from_display_name("GPT-5"),
Some("gpt-5".to_string())
);
assert_eq!(
get_model_id_from_display_name("Claude Sonnet 4.0"),
Some("claude-sonnet-4-20250514".to_string())
);
assert_eq!(get_model_id_from_display_name("o1"), Some("o1".to_string()));
// Test unknown display name
assert_eq!(get_model_id_from_display_name("Unknown Model"), None);
}
@ -186,9 +266,9 @@ mod tests {
#[test]
fn test_get_all_models() {
let all_models = get_all_models();
assert!(all_models.len() > 0);
// Check that models from both providers are included
assert!(all_models.contains(&"gpt-5"));
assert!(all_models.contains(&"gpt-4o"));
@ -203,14 +283,26 @@ mod tests {
assert_eq!(get_provider_for_model("gpt-4o"), Provider::OpenAI);
assert_eq!(get_provider_for_model("o1"), Provider::OpenAI);
assert_eq!(get_provider_for_model("gpt-4.1"), Provider::OpenAI);
// Test Anthropic models
assert_eq!(
get_provider_for_model("claude-sonnet-4-20250514"),
Provider::Anthropic
);
assert_eq!(
get_provider_for_model("claude-3-5-haiku-20241022"),
Provider::Anthropic
);
assert_eq!(
get_provider_for_model("claude-opus-4-1-20250805"),
Provider::Anthropic
);
// Test unknown model (should default to OpenAI)
assert_eq!(
get_provider_for_model("unknown-model-123"),
Provider::OpenAI
);
}
#[test]
@ -220,7 +312,7 @@ mod tests {
assert!(is_model_supported("claude-sonnet-4-20250514"));
assert!(is_model_supported("o1"));
assert!(is_model_supported("claude-3-haiku-20240307"));
// Test unsupported models
assert!(!is_model_supported("unsupported-model"));
assert!(!is_model_supported("gpt-6"));
@ -238,11 +330,11 @@ mod tests {
#[test]
fn test_provider_hash() {
use std::collections::HashMap;
let mut map = HashMap::new();
map.insert(Provider::OpenAI, "openai_value");
map.insert(Provider::Anthropic, "anthropic_value");
assert_eq!(map.get(&Provider::OpenAI), Some(&"openai_value"));
assert_eq!(map.get(&Provider::Anthropic), Some(&"anthropic_value"));
}
@ -253,7 +345,7 @@ mod tests {
model_id: "test-model",
display_name: "Test Model",
};
assert_eq!(model_info.model_id, "test-model");
assert_eq!(model_info.display_name, "Test Model");
}
@ -261,11 +353,14 @@ mod tests {
#[test]
fn test_all_model_infos_have_valid_display_names() {
let model_infos = get_model_info_list();
for info in model_infos {
assert!(!info.model_id.is_empty(), "Model ID should not be empty");
assert!(
!info.display_name.is_empty(),
"Display name should not be empty"
);
// Display name should be different from model_id for most cases
// (though some might be the same like "o1")
assert!(info.display_name.len() > 0);
@ -277,23 +372,28 @@ mod tests {
let supported_models = get_supported_models();
let all_models = get_all_models();
let model_infos = get_model_info_list();
// All models in get_all_models should be in supported_models
for model in &all_models {
let found = supported_models
.values()
.any(|models| models.contains(model));
assert!(found, "Model {} not found in supported_models", model);
}
// All models in model_infos should be in all_models
for info in &model_infos {
assert!(
all_models.contains(&info.model_id),
"Model {} from model_infos not found in all_models",
info.model_id
);
}
// All models in all_models should have corresponding model_info
for model in &all_models {
let found = model_infos.iter().any(|info| info.model_id == *model);
assert!(found, "Model {} not found in model_infos", model);
}
}
}
}

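The new `ProviderCapabilities` struct above encodes per-provider feature flags, including a minimum extended-thinking budget for Anthropic. A minimal sketch of how a call site might gate a feature on those capabilities; the struct here is a trimmed copy of the one in the diff, and `effective_thinking_budget` is a hypothetical helper, not part of the codebase:

```rust
// Trimmed copy of the capability flags relevant to extended thinking.
#[derive(Debug, Clone, Copy)]
struct ProviderCapabilities {
    supports_extended_thinking: bool,
    min_thinking_budget: Option<u32>,
}

// Clamp a requested thinking budget to the provider minimum, or return
// None when the provider has no extended-thinking support at all.
fn effective_thinking_budget(caps: &ProviderCapabilities, requested: u32) -> Option<u32> {
    if !caps.supports_extended_thinking {
        return None;
    }
    let min = caps.min_thinking_budget.unwrap_or(0);
    Some(requested.max(min))
}

fn main() {
    // Values mirror the diff: Anthropic supports extended thinking with a
    // 1024-token floor; OpenAI does not support it.
    let anthropic = ProviderCapabilities {
        supports_extended_thinking: true,
        min_thinking_budget: Some(1024),
    };
    let openai = ProviderCapabilities {
        supports_extended_thinking: false,
        min_thinking_budget: None,
    };
    assert_eq!(effective_thinking_budget(&anthropic, 512), Some(1024));
    assert_eq!(effective_thinking_budget(&anthropic, 4096), Some(4096));
    assert_eq!(effective_thinking_budget(&openai, 4096), None);
}
```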

@ -5,9 +5,7 @@ mod utils;
use anyhow::{Context, Result};
use clap::Parser;
use tokio::signal::unix::{signal, SignalKind};
use crate::cli::ChatCLI;
use crate::config::Config;
@ -19,7 +17,11 @@ use crate::utils::Display;
#[command(about = "A lightweight command-line interface for chatting with AI models")]
#[command(version)]
struct Args {
#[arg(
short,
long,
help = "Session name (defaults to configured default session)"
)]
session: Option<String>,
#[arg(short, long, help = "Model name to use (overrides saved value)")]
@ -34,16 +36,8 @@ async fn main() -> Result<()> {
let args = Args::parse();
let display = Display::new();
// Set up signal handling using Tokio's signal support
let mut sigint = signal(SignalKind::interrupt())?;
// Handle config creation
if args.create_config {
@ -53,14 +47,16 @@ async fn main() -> Result<()> {
// Load configuration
let config = Config::load().context("Failed to load configuration")?;
// Validate environment variables
let env_vars = Config::validate_env_variables().context("Environment validation failed")?;
// Load or create session
// Use configured default session if none specified
let session_name = args
.session
.unwrap_or_else(|| config.defaults.default_session.clone());
let session = match Session::load(&session_name) {
Ok(mut session) => {
if let Some(model) = args.model {
@ -99,9 +95,22 @@ async fn main() -> Result<()> {
// Print configuration info
config.print_config_info();
// Run the CLI and handle SIGINT gracefully
let mut cli = ChatCLI::new(session, config).context("Failed to initialize CLI")?;
let mut cli_run = Box::pin(cli.run());
tokio::select! {
res = &mut cli_run => {
res.context("CLI error")?;
}
_ = sigint.recv() => {
// Drop the CLI future before using `cli` again
drop(cli_run);
display.print_info("Received interrupt signal. Exiting...");
cli.save_and_cleanup()?;
}
}
Ok(())
}

File diff suppressed because it is too large


@ -1,6 +1,7 @@
use anyhow::Result;
use dialoguer::{theme::ColorfulTheme, Select, Confirm};
use rustyline::{error::ReadlineError, DefaultEditor, KeyEvent, Cmd, Config, EditMode};
use console::{style, Term};
pub struct InputHandler {
editor: DefaultEditor,
@ -54,8 +55,11 @@ impl InputHandler {
}
pub fn read_multiline_input(&mut self, initial_line: String) -> Result<Option<String>> {
let mut lines = Vec::with_capacity(10); // Pre-allocate for typical multi-line input
lines.push(initial_line);
// Enhanced multi-line mode presentation
self.print_multiline_header();
loop {
match self.editor.readline("... ") {
@ -66,7 +70,7 @@ impl InputHandler {
lines.push(line);
}
Err(ReadlineError::Interrupted) => {
println!("{}", style("Multi-line input cancelled").yellow());
return Ok(None);
}
Err(ReadlineError::Eof) => {
@ -78,6 +82,8 @@ impl InputHandler {
let full_message = lines.join("\n");
let _ = self.editor.add_history_entry(&full_message);
println!("{}", style("Multi-line input completed").green());
Ok(Some(full_message))
}
@ -111,11 +117,14 @@ impl InputHandler {
current: Option<&str>,
) -> Result<Option<T>> {
if items.is_empty() {
self.print_empty_list_message("No items available");
return Ok(None);
}
// Enhanced visual presentation
self.print_selection_header(title, items.len());
let theme = self.create_enhanced_theme();
// Find default selection index
let default_index = if let Some(current) = current {
@ -124,26 +133,48 @@ impl InputHandler {
0
};
// Create enhanced item display
let display_items: Vec<String> = items.iter().enumerate().map(|(i, item)| {
let item_str = item.to_string();
let icon = self.get_selection_icon(i);
if Some(item_str.as_str()) == current {
format!("{} {} ⭐ (current)", icon, item_str)
} else {
format!("{} {}", icon, item_str)
}
}).collect();
match Select::with_theme(&theme)
.with_prompt("Use ↑/↓ arrows to navigate, Enter to select, Esc to cancel")
.items(&display_items)
.default(default_index)
.interact_opt() {
Ok(selection) => {
if let Some(idx) = selection {
println!("{}", style(format!("Selected: {}", items[idx].to_string())).green());
}
Ok(selection.map(|idx| items[idx].clone()))
},
Err(_) => {
// Handle any error (ESC, Ctrl+C, etc.) as cancellation
println!("{}", style("Selection cancelled").dim());
Ok(None)
}
}
}
pub fn confirm(&self, message: &str) -> Result<bool> {
self.print_confirmation_header(message);
let confirmation = Confirm::with_theme(&self.create_enhanced_theme())
.with_prompt("Are you sure?")
.default(false)
.show_default(true)
.interact()?;
let result_text = if confirmation { "✓ Confirmed" } else { "⚠ Cancelled" };
let style_func = if confirmation { style(result_text).green() } else { style(result_text).yellow() };
println!("{}", style_func);
Ok(confirmation)
}
@ -155,28 +186,29 @@ impl InputHandler {
current_session: Option<&str>,
) -> Result<SessionAction<T>> {
if sessions.is_empty() {
self.print_empty_list_message("No sessions available");
return Ok(SessionAction::Cancel);
}
// Enhanced session display with visual indicators
self.print_session_manager_header(title, sessions.len(), current_session);
// Create enhanced display items with icons and status indicators
let display_items: Vec<String> = sessions
.iter()
.enumerate()
.map(|(i, session)| {
let name = session.to_string();
let icon = self.get_session_icon(i);
if Some(name.as_str()) == current_session {
format!("{} {}(active)", icon, name)
} else {
format!("{} {}", icon, name)
}
})
.collect();
let theme = self.create_enhanced_theme();
// Find default selection index
let default_index = if let Some(current) = current_session {
@ -185,14 +217,15 @@ impl InputHandler {
0
};
// Enhanced selection menu with better prompts
let selection_result = match Select::with_theme(&theme)
.with_prompt("Choose a session (↑/↓ to navigate, Enter to select, Esc to cancel)")
.items(&display_items)
.default(default_index)
.interact_opt() {
Ok(selection) => selection,
Err(_) => {
println!("{}", style("Session management cancelled").dim());
return Ok(SessionAction::Cancel);
}
};
@ -201,11 +234,15 @@ impl InputHandler {
Some(index) => {
let selected_session = sessions[index].clone();
// If it's the current session, show enhanced options
if Some(selected_session.to_string().as_str()) == current_session {
let options = vec![
"🗑️ Delete this session",
"⭐ Set as default session",
"❌ Cancel"
];
let action_result = match Select::with_theme(&theme)
.with_prompt("This is your active session. Choose an action:")
.items(&options)
.interact_opt() {
Ok(selection) => selection,
@ -216,7 +253,7 @@ impl InputHandler {
match action_result {
Some(0) => {
if self.confirm(&format!("Delete current session '{}'? You will need to create or switch to another session after deletion.", selected_session.to_string()))? {
return Ok(SessionAction::Delete(selected_session));
}
return Ok(SessionAction::Cancel);
@ -227,16 +264,16 @@ impl InputHandler {
_ => return Ok(SessionAction::Cancel),
}
} else {
// Enhanced action menu for different session
let options = vec![
format!("🔄 Switch to '{}'", selected_session.to_string()),
format!("🗑️ Delete '{}'", selected_session.to_string()),
format!("Set '{}' as default", selected_session.to_string()),
"Cancel".to_string()
];
let action_result = match Select::with_theme(&theme)
.with_prompt("Choose an action for this session:")
.items(&options)
.interact_opt() {
Ok(selection) => selection,
@ -261,6 +298,247 @@ impl InputHandler {
None => return Ok(SessionAction::Cancel),
}
}
// Helper methods for enhanced UI
fn create_enhanced_theme(&self) -> ColorfulTheme {
ColorfulTheme {
defaults_style: console::Style::new().for_stderr().cyan(),
prompt_style: console::Style::new().for_stderr().bold(),
prompt_prefix: style("".to_string()).cyan().bold(),
prompt_suffix: style(" ".to_string()),
success_prefix: style("".to_string()).green().bold(),
success_suffix: style(" ".to_string()),
error_prefix: style("".to_string()).red().bold(),
error_style: console::Style::new().for_stderr().red(),
hint_style: console::Style::new().for_stderr().dim(),
values_style: console::Style::new().for_stderr().cyan(),
active_item_style: console::Style::new().for_stderr().cyan().bold(),
inactive_item_style: console::Style::new().for_stderr(),
active_item_prefix: style("".to_string()).cyan().bold(),
inactive_item_prefix: style(" ".to_string()),
checked_item_prefix: style("".to_string()).green().bold(),
unchecked_item_prefix: style("".to_string()).dim(),
picked_item_prefix: style("".to_string()).green().bold(),
unpicked_item_prefix: style(" ".to_string()),
}
}
fn get_selection_icon(&self, index: usize) -> &'static str {
match index % 6 {
0 => "📝",
1 => "🔧",
2 => "⚙️",
3 => "🎯",
4 => "🚀",
_ => "📋",
}
}
fn get_session_icon(&self, index: usize) -> &'static str {
match index % 8 {
0 => "💾",
1 => "📁",
2 => "🗂️",
3 => "📑",
4 => "🗃️",
5 => "📊",
6 => "📈",
_ => "📄",
}
}
fn print_selection_header(&self, title: &str, count: usize) {
let term = Term::stdout();
let width = term.size().1 as usize;
println!();
println!("{}", style("╭".to_string() + &"─".repeat(width - 2) + "╮").cyan());
let header = format!("{} ({} options)", title, count);
let padding = (width - header.len() - 2) / 2;
println!(
"│{}{}{}│",
" ".repeat(padding),
style(&header).cyan().bold(),
" ".repeat(width - header.len() - padding - 2)
);
println!("{}", style("╰".to_string() + &"─".repeat(width - 2) + "╯").cyan());
println!();
}
fn print_confirmation_header(&self, message: &str) {
let term = Term::stdout();
let width = term.size().1 as usize;
println!();
println!("{}", style("╭".to_string() + &"─".repeat(width - 2) + "╮").yellow());
let header = "⚠️ Confirmation Required";
let header_padding = (width - header.len() - 2) / 2;
println!(
"│{}{}{}│",
" ".repeat(header_padding),
style(header).yellow().bold(),
" ".repeat(width - header.len() - header_padding - 2)
);
println!("{}", style("├".to_string() + &"─".repeat(width - 2) + "┤").yellow());
// Word wrap the message
let max_width = width - 4;
let words: Vec<&str> = message.split_whitespace().collect();
let mut current_line = String::new();
for word in words {
if current_line.len() + word.len() + 1 <= max_width {
if !current_line.is_empty() {
current_line.push(' ');
}
current_line.push_str(word);
} else {
if !current_line.is_empty() {
let padding = width - current_line.len() - 4;
println!("{}{}", current_line, " ".repeat(padding));
current_line.clear();
}
current_line.push_str(word);
}
}
if !current_line.is_empty() {
let padding = width - current_line.len() - 4;
println!("{}{}", current_line, " ".repeat(padding));
}
println!("{}", style("╰".to_string() + &"─".repeat(width - 2) + "╯").yellow());
println!();
}
fn print_session_manager_header(&self, title: &str, count: usize, current: Option<&str>) {
let term = Term::stdout();
let width = term.size().1 as usize;
println!();
println!("{}", style("╭".to_string() + &"─".repeat(width - 2) + "╮").magenta());
// Calculate display width for title (handling Unicode properly)
let title_display_width = self.calculate_display_width(title);
let header_padding = (width - title_display_width - 2) / 2;
println!(
"│{}{}{}│",
" ".repeat(header_padding),
style(title).magenta().bold(),
" ".repeat(width - title_display_width - header_padding - 2)
);
println!("{}", style("├".to_string() + &"─".repeat(width - 2) + "┤").magenta());
// Session count and current session info
let info_line = if let Some(current) = current {
format!("📊 {} sessions available • Current: {}", count, current)
} else {
format!("📊 {} sessions available", count)
};
// Calculate actual display width for info line (emojis are wider than their byte length)
let info_display_width = self.calculate_display_width(&info_line);
let info_padding = width.saturating_sub(info_display_width + 4);
println!("{}{}", info_line, " ".repeat(info_padding));
// Instructions
let instructions = "💡 Select a session to see available actions";
let inst_display_width = self.calculate_display_width(instructions);
let inst_padding = width.saturating_sub(inst_display_width + 4);
println!("{}{}", style(instructions).dim(), " ".repeat(inst_padding));
println!("{}", style("╰".to_string() + &"─".repeat(width - 2) + "╯").magenta());
println!();
}
fn print_empty_list_message(&self, message: &str) {
let term = Term::stdout();
let width = term.size().1 as usize;
println!();
println!("{}", style("╭".to_string() + &"─".repeat(width - 2) + "╮").dim());
let icon_msg = format!(" {}", message);
let padding = (width - icon_msg.len() - 2) / 2;
println!(
"│{}{}{}│",
" ".repeat(padding),
style(&icon_msg).dim(),
" ".repeat(width - icon_msg.len() - padding - 2)
);
println!("{}", style("╰".to_string() + &"─".repeat(width - 2) + "╯").dim());
println!();
}
/// Calculate display width of a string, accounting for Unicode characters and emojis
fn calculate_display_width(&self, text: &str) -> usize {
// Simple approximation: count most emojis and special Unicode chars as width 2
// This is a simplified solution - for production code you might want to use
// the `unicode-width` crate for more accurate width calculations
let mut width = 0;
let mut chars = text.chars().peekable();
while let Some(ch) = chars.next() {
match ch {
// Common emojis and symbols that display as double-width
'📊' | '💡' | '❌' | '⭐' | '🔥' | '🌟' | '✨' | '🎯' => width += 2,
// Handle compound emoji with combining marks
'🗂' => {
// Check if followed by variation selector
if chars.peek() == Some(&'\u{fe0f}') {
chars.next(); // consume the variation selector
}
width += 2;
},
// Most ASCII characters
_ if ch.is_ascii() => width += 1,
// Other Unicode characters - assume width 1 for simplicity
_ => width += 1,
}
}
width
}
fn print_multiline_header(&self) {
let term = Term::stdout();
let width = term.size().1 as usize;
println!();
println!("{}", style("╭".to_string() + &"─".repeat(width - 2) + "╮").blue());
let header = "📝 Multi-line Input Mode";
let header_padding = (width - header.len() - 2) / 2;
println!(
"│{}{}{}│",
" ".repeat(header_padding),
style(header).blue().bold(),
" ".repeat(width - header.len() - header_padding - 2)
);
println!("{}", style("├".to_string() + &"─".repeat(width - 2) + "┤").blue());
// Instructions
let instructions = vec![
"• Type your multi-line message",
"• Type '.' on a new line to finish",
"• Press Ctrl+C to cancel",
"• Press Ctrl+D to finish early"
];
for instruction in instructions {
let padding = width - instruction.len() - 4;
println!("{}{}", style(instruction).dim(), " ".repeat(padding));
}
println!("{}", style("╰".to_string() + &"─".repeat(width - 2) + "╯").blue());
println!();
}
}
#[derive(Debug, Clone)]
@ -287,4 +565,294 @@ impl Drop for InputHandler {
// Save history on drop as well
let _ = self.save_history();
}
}
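The `calculate_display_width` method above approximates terminal cell widths: double-width emoji, variation selectors consumed without adding width, everything else counted as one cell. A minimal, standalone sketch of the same idea (a free function for illustration; the codepoint threshold is an assumption, and the commit message itself points at the `unicode-width` crate as the robust option):

```rust
// Approximate terminal display width: skip U+FE0F variation selectors,
// count emoji-range codepoints as two cells, everything else as one.
fn display_width(text: &str) -> usize {
    let mut width = 0;
    for ch in text.chars() {
        if ch == '\u{fe0f}' {
            continue; // variation selector adds no width of its own
        }
        if ch.is_ascii() {
            width += 1;
        } else if (ch as u32) >= 0x1F000 {
            width += 2; // emoji blocks usually render double-width
        } else {
            width += 1; // assumption: other Unicode is single-width
        }
    }
    width
}

fn main() {
    assert_eq!(display_width("abc"), 3);
    // Compound emoji (base + variation selector) counts as one double-width cell.
    assert_eq!(display_width("🗂\u{fe0f}"), 2);
    assert_eq!(display_width("📊 5"), 4);
}
```

Padding computed from this width with `saturating_sub` is what keeps the bordered headers from overflowing or underflowing, as the commit message describes.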
#[cfg(test)]
mod tests {
use super::*;
use console::Term;
#[test]
fn test_input_handler_new() {
let handler = InputHandler::new();
assert!(handler.is_ok());
}
#[test]
fn test_default_input_handler() {
let handler = InputHandler::default();
// Should not panic and create a valid handler
assert!(handler.editor.helper().is_none() || handler.editor.helper().is_some());
}
#[test]
fn test_session_action_enum() {
let action = SessionAction::Cancel::<String>;
match action {
SessionAction::Cancel => assert!(true),
_ => panic!("Should be Cancel"),
}
let action = SessionAction::Switch("test".to_string());
match action {
SessionAction::Switch(s) => assert_eq!(s, "test"),
_ => panic!("Should be Switch"),
}
let action = SessionAction::Delete("test".to_string());
match action {
SessionAction::Delete(s) => assert_eq!(s, "test"),
_ => panic!("Should be Delete"),
}
let action = SessionAction::SetAsDefault("test".to_string());
match action {
SessionAction::SetAsDefault(s) => assert_eq!(s, "test"),
_ => panic!("Should be SetAsDefault"),
}
}
#[test]
fn test_create_enhanced_theme() {
let handler = InputHandler::default();
let theme = handler.create_enhanced_theme();
// Test that theme creation doesn't panic and produces non-empty prefixes
assert!(!theme.prompt_prefix.to_string().is_empty());
assert!(!theme.success_prefix.to_string().is_empty());
assert!(!theme.error_prefix.to_string().is_empty());
}
#[test]
fn test_get_selection_icon() {
let handler = InputHandler::default();
// Test icon rotation
assert_eq!(handler.get_selection_icon(0), "📝");
assert_eq!(handler.get_selection_icon(1), "🔧");
assert_eq!(handler.get_selection_icon(2), "⚙️");
assert_eq!(handler.get_selection_icon(3), "🎯");
assert_eq!(handler.get_selection_icon(4), "🚀");
assert_eq!(handler.get_selection_icon(5), "📋");
assert_eq!(handler.get_selection_icon(6), "📝"); // Should cycle back
}
#[test]
fn test_get_session_icon() {
let handler = InputHandler::default();
// Test session icon rotation
assert_eq!(handler.get_session_icon(0), "💾");
assert_eq!(handler.get_session_icon(1), "📁");
assert_eq!(handler.get_session_icon(2), "🗂️");
assert_eq!(handler.get_session_icon(3), "📑");
assert_eq!(handler.get_session_icon(4), "🗃️");
assert_eq!(handler.get_session_icon(5), "📊");
assert_eq!(handler.get_session_icon(6), "📈");
assert_eq!(handler.get_session_icon(7), "📄");
assert_eq!(handler.get_session_icon(8), "💾"); // Should cycle back
}
#[test]
fn test_print_selection_header() {
let handler = InputHandler::default();
// Test that header printing doesn't panic
handler.print_selection_header("Test Selection", 5);
handler.print_selection_header("", 0);
handler.print_selection_header("Very Long Selection Header That Might Exceed Terminal Width", 100);
}
#[test]
fn test_print_confirmation_header() {
let handler = InputHandler::default();
// Test confirmation header with various message lengths
handler.print_confirmation_header("Are you sure you want to delete this?");
handler.print_confirmation_header("");
handler.print_confirmation_header("This is a very long confirmation message that might need to be wrapped across multiple lines in the terminal display to ensure proper formatting and readability");
}
#[test]
fn test_print_session_manager_header() {
let handler = InputHandler::default();
// Test session manager header
handler.print_session_manager_header("Session Management", 5, Some("current"));
handler.print_session_manager_header("Sessions", 0, None);
handler.print_session_manager_header("Test", 10, Some("very-long-session-name"));
}
#[test]
fn test_print_empty_list_message() {
let handler = InputHandler::default();
// Test empty list message
handler.print_empty_list_message("No sessions available");
handler.print_empty_list_message("No models found");
handler.print_empty_list_message("");
}
#[test]
fn test_print_multiline_header() {
let handler = InputHandler::default();
// Test multiline header printing
handler.print_multiline_header();
}
#[test]
fn test_cleanup() {
let mut handler = InputHandler::default();
// Cleanup may fail in a test environment; it just must not panic.
let _ = handler.cleanup();
}
#[test]
fn test_save_history() {
let mut handler = InputHandler::default();
// History saving may fail in a test environment; it just must not panic.
let _ = handler.save_history();
}
#[test]
fn test_select_from_list_empty() {
let handler = InputHandler::default();
let empty_items: Vec<String> = vec![];
let result = handler.select_from_list("Test", &empty_items, None);
assert!(result.is_ok());
assert!(result.unwrap().is_none());
}
#[test]
fn test_select_from_list_display_items() {
let handler = InputHandler::default();
let items = vec!["item1".to_string(), "item2".to_string(), "item3".to_string()];
// Test display item creation (this tests the internal logic)
let display_items: Vec<String> = items.iter().enumerate().map(|(i, item)| {
let item_str = item.to_string();
let icon = handler.get_selection_icon(i);
if item_str == "current" {
format!("{} {} ⭐ (current)", icon, item_str)
} else {
format!("{} {}", icon, item_str)
}
}).collect();
assert_eq!(display_items[0], "📝 item1");
assert_eq!(display_items[1], "🔧 item2");
assert_eq!(display_items[2], "⚙️ item3");
}
#[test]
fn test_session_manager_display_logic() {
let handler = InputHandler::default();
let sessions = vec!["session1".to_string(), "session2".to_string(), "current".to_string()];
// Test session display item creation
let display_items: Vec<String> = sessions
.iter()
.enumerate()
.map(|(i, session)| {
let name = session.to_string();
let icon = handler.get_session_icon(i);
if name == "current" {
format!("{} {} ⭐ (active)", icon, name)
} else {
format!("{} {}", icon, name)
}
})
.collect();
assert_eq!(display_items[0], "💾 session1");
assert_eq!(display_items[1], "📁 session2");
assert_eq!(display_items[2], "🗂️ current ⭐ (active)");
}
#[test]
fn test_terminal_width_handling() {
let handler = InputHandler::default();
let term = Term::stdout();
let width = term.size().1 as usize;
// Test that terminal width is positive
assert!(width > 0);
// Test that our UI components handle various widths gracefully
handler.print_selection_header("Test", 3);
handler.print_confirmation_header("Test confirmation message");
handler.print_session_manager_header("Sessions", 2, Some("test"));
handler.print_empty_list_message("No items");
handler.print_multiline_header();
}
#[test]
fn test_word_wrapping_in_confirmation() {
let handler = InputHandler::default();
// Test with a very long message that should trigger word wrapping
let long_message = "This is an extremely long confirmation message that should definitely exceed the terminal width and trigger the word wrapping functionality built into the confirmation header display system.";
handler.print_confirmation_header(long_message);
// Test with message containing no spaces (edge case)
let no_spaces = "verylongwordwithoutanyspacesthatmightcausewrappingissues";
handler.print_confirmation_header(no_spaces);
}
#[test]
fn test_icon_consistency() {
let handler = InputHandler::default();
// Test that icons are consistent across multiple calls
for i in 0..20 {
let icon1 = handler.get_selection_icon(i);
let icon2 = handler.get_selection_icon(i);
assert_eq!(icon1, icon2);
let session_icon1 = handler.get_session_icon(i);
let session_icon2 = handler.get_session_icon(i);
assert_eq!(session_icon1, session_icon2);
}
}
#[test]
fn test_theme_components() {
let handler = InputHandler::default();
let theme = handler.create_enhanced_theme();
// Test that all theme components are properly styled
assert!(!theme.prompt_prefix.to_string().is_empty());
assert!(!theme.success_prefix.to_string().is_empty());
assert!(!theme.error_prefix.to_string().is_empty());
assert!(!theme.active_item_prefix.to_string().is_empty());
assert!(!theme.checked_item_prefix.to_string().is_empty());
assert!(!theme.picked_item_prefix.to_string().is_empty());
}
#[test]
fn test_edge_cases() {
let handler = InputHandler::default();
// Test with Unicode in headers
handler.print_selection_header("Test with Unicode: 🌟✨🎯", 3);
handler.print_confirmation_header("Unicode message: 你好世界");
handler.print_session_manager_header("Sessions 📊", 5, Some("test-session-🔥"));
// Test with empty strings
handler.print_selection_header("", 0);
handler.print_confirmation_header("");
handler.print_session_manager_header("", 0, None);
handler.print_empty_list_message("");
// Test with very large numbers
handler.print_selection_header("Large count", usize::MAX);
handler.print_session_manager_header("Sessions", usize::MAX, Some("session"));
}
}


@@ -1,5 +1,5 @@
{
-  "exported_at": "2025-08-25T04:01:15.432999997+00:00",
+  "exported_at": "2025-08-31T03:02:51.004770151+00:00",
"messages": [
{
"content": "You are an AI assistant running in a terminal (CLI) environment. Optimise all answers for 80-column readability, prefer plain text, ASCII art or concise bullet lists over heavy markup, and wrap code snippets in fenced blocks when helpful. Do not emit trailing spaces or control characters.",


@@ -0,0 +1,207 @@
use gpt_cli_rust::utils::{Display, InputHandler};
use std::time::{Duration, Instant};
#[test]
fn test_display_performance() {
let display = Display::new();
// Test performance of basic operations
let start = Instant::now();
for _ in 0..100 {
display.print_info("Test message");
display.print_error("Test error");
display.print_warning("Test warning");
}
let duration = start.elapsed();
// Should complete in reasonable time (less than 1 second for 300 operations)
assert!(duration < Duration::from_secs(1),
"Display operations took too long: {:?}", duration);
}
#[test]
fn test_public_display_methods_performance() {
let display = Display::new();
let start = Instant::now();
for i in 0..50 {
display.print_header();
display.print_help();
display.print_status_bar("GPT-4", "OpenAI", "test", &[("feature", true), ("another", false)]);
display.print_error_with_context("Test error", Some("Context"), &["Fix it"]);
display.print_section_header("Test Section", "🔧");
display.print_feature_status(&[("Feature 1", true, Some("Desc")), ("Feature 2", false, None)]);
display.clear_current_line();
display.print_separator("-");
display.print_progress_bar(i, 100, "Progress");
}
let duration = start.elapsed();
// Should handle complex display operations efficiently
assert!(duration < Duration::from_secs(2),
"Complex display operations took too long: {:?}", duration);
}
#[test]
fn test_assistant_response_performance() {
let display = Display::new();
// Test with moderately sized content with formatting
let content = "This is a test with `inline code` and some other text.\n\n```rust\nfn test() {\n println!(\"Hello\");\n}\n```\n\nMore text here.".repeat(5);
let start = Instant::now();
for _ in 0..10 {
display.print_assistant_response(&content);
}
let duration = start.elapsed();
// Should handle moderately sized content efficiently
assert!(duration < Duration::from_secs(3),
"Assistant response formatting took too long: {:?}", duration);
}
#[test]
fn test_conversation_history_performance() {
use gpt_cli_rust::core::Message;
let display = Display::new();
// Create test messages (need to store them separately to avoid borrow issues)
let mut message_store = Vec::new();
for i in 0..20 {
let message = Message {
role: if i % 2 == 0 { "user" } else { "assistant" }.to_string(),
content: format!("This is test message number {} with some content.", i),
};
message_store.push(message);
}
// Create references after all messages are created
let messages: Vec<(usize, &Message)> = message_store.iter().enumerate()
.map(|(i, msg)| (i + 1, msg)).collect();
let start = Instant::now();
display.print_conversation_history(&messages);
let duration = start.elapsed();
// Should handle conversation history efficiently
assert!(duration < Duration::from_secs(1),
"Conversation history display took too long: {:?}", duration);
}
#[test]
fn test_input_handler_creation_performance() {
let start = Instant::now();
for _ in 0..10 {
let _handler = InputHandler::default();
}
let duration = start.elapsed();
// Input handler creation should be fast
assert!(duration < Duration::from_millis(500),
"Input handler creation took too long: {:?}", duration);
}
#[test]
fn test_select_from_list_empty_performance() {
let handler = InputHandler::default();
let empty_items: Vec<String> = vec![];
let start = Instant::now();
for _ in 0..100 {
let _ = handler.select_from_list("Test", &empty_items, None);
}
let duration = start.elapsed();
// Empty list handling should be very fast
assert!(duration < Duration::from_millis(100),
"Empty list handling took too long: {:?}", duration);
}
#[test]
fn test_display_creation_performance() {
let start = Instant::now();
for _ in 0..100 {
let _display = Display::new();
}
let duration = start.elapsed();
// Display creation should be efficient
assert!(duration < Duration::from_millis(500),
"Display creation took too long: {:?}", duration);
}
#[test]
fn test_spinner_performance() {
let display = Display::new();
let start = Instant::now();
for i in 0..50 {
let spinner = display.show_spinner(&format!("Operation {}", i));
spinner.finish("Completed");
let spinner2 = display.show_spinner(&format!("Operation {}", i));
spinner2.finish_with_error("Failed");
}
let duration = start.elapsed();
// Spinner operations should be fast
assert!(duration < Duration::from_secs(1),
"Spinner operations took too long: {:?}", duration);
}
#[test]
fn test_large_feature_status_performance() {
let display = Display::new();
// Create many features with static strings to avoid lifetime issues
let features: Vec<(&str, bool, Option<&str>)> = (0..10)
.map(|i| (
"Feature Name",
i % 2 == 0,
if i % 3 == 0 { Some("Feature description") } else { None }
))
.collect();
let start = Instant::now();
for _ in 0..10 {
display.print_feature_status(&features);
}
let duration = start.elapsed();
// Should handle many features efficiently
assert!(duration < Duration::from_secs(1),
"Large feature status display took too long: {:?}", duration);
}
#[test]
fn test_memory_usage_stability() {
let display = Display::new();
// Test that repeated operations don't cause memory leaks
// This is a basic test - in a real scenario you'd use a memory profiler
for i in 0..100 {
display.print_status_bar("GPT-4", "OpenAI", "test", &[("feature", true)]);
display.print_error_with_context("Error", Some("Context"), &["Suggestion"]);
display.print_section_header(&format!("Section {}", i), "🔧");
let spinner = display.show_spinner("Test");
spinner.finish("Done");
}
// Reaching this point without a panic or crash indicates stable memory handling.
}
#[test]
fn test_input_cleanup_performance() {
let start = Instant::now();
for _ in 0..50 {
let mut handler = InputHandler::default();
let _ = handler.cleanup();
let _ = handler.save_history();
}
let duration = start.elapsed();
// Cleanup operations should be efficient
assert!(duration < Duration::from_secs(1),
"Input cleanup took too long: {:?}", duration);
}