A comprehensive validation framework for testing search strategies against established benchmark datasets across multiple domains.
Details
The BenchmarkValidator class provides tools for:
- Cross-domain validation across medical, environmental, and social science domains
- Sensitivity analysis for search parameters
- Statistical comparison of strategy performance
- Reproducible benchmark testing
Methods
- new(): Initialize a new BenchmarkValidator instance
- validate_strategy(search_strategy, benchmark_name): Validate a strategy against a specific benchmark
- cross_domain_validation(search_strategy): Test a strategy across multiple domains
- sensitivity_analysis(base_strategy, parameter_ranges): Test sensitivity to changes in search parameters
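The example at the bottom of this page only exercises validate_strategy(); the two multi-run methods can be called as sketched below. This is a hedged sketch: the exact structure expected for parameter_ranges is not documented on this page, so the named-list-of-candidate-values shape shown here is an assumption.

```r
# Build a validator and a base strategy, as in the Examples section
validator <- BenchmarkValidator$new()
strategy <- list(
  terms = c("systematic review", "meta-analysis"),
  databases = c("PubMed", "Embase")
)

# Run the same strategy against every available domain benchmark
cross_results <- validator$cross_domain_validation(strategy)

# Vary parameters around the base strategy. NOTE: the shape of
# `ranges` (named list of candidate term sets) is an assumption,
# not confirmed by this reference page.
ranges <- list(
  terms = list(
    c("systematic review"),
    c("systematic review", "meta-analysis")
  )
)
sens_results <- validator$sensitivity_analysis(strategy, ranges)
```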
Method new()
Creates a new BenchmarkValidator instance and loads benchmark datasets.
This method is called automatically when creating a new validator with
BenchmarkValidator$new().
Usage
BenchmarkValidator$new()
Method add_benchmark()
Registers a custom benchmark dataset with the validator. As shown in the example below, the call takes a benchmark name, a data frame of records, and the IDs of the relevant records.
Method validate_strategy()
Validates a search strategy against a named benchmark and returns the performance results.
Usage
BenchmarkValidator$validate_strategy(search_strategy, benchmark_name)
Examples
# Create validator
validator <- BenchmarkValidator$new()
# Check available benchmarks
print(names(validator$benchmarks))
#> [1] "medical_benchmark"
# Define search strategy
strategy <- list(
  terms = c("systematic review", "meta-analysis"),
  databases = c("PubMed", "Embase")
)
# Create sample data for validation
sample_data <- data.frame(
  id = paste0("art", 1:20),
  title = paste("Article", 1:20),
  abstract = paste("Abstract", 1:20),
  source = "Journal",
  date = Sys.Date()
)
# Add custom benchmark
validator$add_benchmark("custom", sample_data, paste0("art", 1:5))
# Validate against custom benchmark
results <- validator$validate_strategy(strategy, "custom")
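Since this page does not document the shape of the returned results object, inspecting its structure is a safe first step after validation:

```r
# Inspect the returned results object; its exact fields are not
# documented here, so str() shows what the validation produced
str(results)
```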