A lightweight Python library for reproducible computational experiments with an ultra-simple, smart API. From idea to insight in under 5 minutes, with zero configuration.

Just add the `@experiment` decorator - that's it!
```shell
pip install rexf
```
```python
from rexf import experiment, run

@experiment
def my_experiment(learning_rate, batch_size=32):
    # Your experiment code here
    accuracy = train_model(learning_rate, batch_size)
    return {"accuracy": accuracy, "loss": 1 - accuracy}

# Run a single experiment
run.single(my_experiment, learning_rate=0.01, batch_size=64)

# Get insights
print(run.insights())

# Find the best experiments
best = run.best(metric="accuracy", top=5)

# Auto-explore the parameter space
run.auto_explore(my_experiment, strategy="random", budget=20)

# Launch the web dashboard
run.dashboard()
```
RexF prioritizes user experience over architectural purity. Instead of making you learn complex APIs, it automatically detects what you're doing and provides smart features to accelerate your research.
```python
import math
import random

from rexf import experiment, run

@experiment
def estimate_pi(num_samples=10000, method="uniform"):
    """Estimate π using Monte Carlo methods."""
    inside_circle = 0
    for _ in range(num_samples):
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            inside_circle += 1
    pi_estimate = 4 * inside_circle / num_samples
    error = abs(pi_estimate - math.pi)
    return {
        "pi_estimate": pi_estimate,
        "error": error,
        "accuracy": 1 - (error / math.pi),
    }

# Run experiments
run.single(estimate_pi, num_samples=50000, method="uniform")
run.single(estimate_pi, num_samples=100000, method="stratified")

# Auto-explore to find the best parameters
run_ids = run.auto_explore(
    estimate_pi,
    strategy="grid",
    budget=10,
    optimization_target="accuracy",
)

# Get smart insights
insights = run.insights()
print(f"Success rate: {insights['summary']['success_rate']:.1%}")

# Find high-accuracy runs
accurate_runs = run.find("accuracy > 0.99")

# Compare experiments
run.compare(run.best(top=3))

# Launch the web dashboard
run.dashboard()  # Opens http://localhost:8080
```
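For intuition, here is the same estimator stripped of RexF entirely. The Monte Carlo error typically shrinks like 1/√n, which is why sweeping `num_samples` is such a natural parameter exploration (a plain-Python sketch, independent of the library):

```python
import math
import random

def estimate_pi(num_samples):
    """Plain Monte Carlo π estimate: 4 × (fraction of random points inside the unit circle)."""
    random.seed(0)  # fixed seed for a reproducible demo
    inside = sum(
        1
        for _ in range(num_samples)
        if random.uniform(-1, 1) ** 2 + random.uniform(-1, 1) ** 2 <= 1
    )
    return 4 * inside / num_samples

# The standard error of the estimate shrinks like 1/sqrt(n),
# so larger sample budgets give tighter estimates on average.
for n in (1_000, 100_000):
    est = estimate_pi(n)
    print(n, est, abs(est - math.pi))
```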
```python
# Random exploration
run.auto_explore(my_experiment, strategy="random", budget=20)

# Grid search
run.auto_explore(my_experiment, strategy="grid", budget=15)

# Adaptive exploration (learns from results)
run.auto_explore(
    my_experiment,
    strategy="adaptive",
    budget=25,
    optimization_target="accuracy",
)
```
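Conceptually, the `"grid"` and `"random"` strategies differ in how candidate parameter sets are generated. A minimal sketch of the two ideas (not RexF's implementation; the search space below is a made-up example):

```python
import itertools
import random

# Hypothetical search space for a parameter sweep
space = {"num_samples": [1_000, 10_000, 100_000], "method": ["uniform", "stratified"]}

def grid_search(space, budget):
    """Enumerate the Cartesian product of all values, capped at `budget` trials."""
    keys = list(space)
    combos = itertools.product(*(space[k] for k in keys))
    return [dict(zip(keys, c)) for c in itertools.islice(combos, budget)]

def random_search(space, budget, seed=0):
    """Sample each parameter independently at random, `budget` times."""
    rng = random.Random(seed)
    return [{k: rng.choice(v) for k, v in space.items()} for _ in range(budget)]

print(grid_search(space, budget=4))
print(random_search(space, budget=4))
```

Grid search is exhaustive but grows combinatorially with the number of parameters; random search covers high-dimensional spaces more evenly for the same budget.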
```python
# Find experiments using expressions
high_acc = run.find("accuracy > 0.9")
fast_runs = run.find("duration < 30")
recent_good = run.find("accuracy > 0.8 and start_time > '2024-01-01'")

# Query help
run.query_help()
```
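Conceptually, an expression filter like this evaluates the query against each run's recorded metrics. A minimal sketch of the idea (not RexF's actual query engine; `eval` is used only for illustration and is unsafe on untrusted input):

```python
runs = [
    {"accuracy": 0.95, "duration": 12.0},
    {"accuracy": 0.72, "duration": 45.0},
]

def find(runs, expr):
    """Keep runs whose metrics satisfy the Python expression `expr`.
    A real engine would parse the expression instead of calling eval()."""
    return [r for r in runs if eval(expr, {"__builtins__": {}}, dict(r))]

print(find(runs, "accuracy > 0.9 and duration < 30"))
```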
```python
# Get suggestions for the next experiments
suggestions = run.suggest(
    my_experiment,
    count=5,
    strategy="balanced",  # "exploit", "explore", or "balanced"
    optimization_target="accuracy",
)

for suggestion in suggestions["suggestions"]:
    print(f"Try: {suggestion['parameters']}")
    print(f"Reason: {suggestion['reasoning']}")
```
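The explore/exploit trade-off behind these strategy names can be illustrated with a simple epsilon-greedy rule (a generic sketch, not RexF's suggestion logic): "exploit" reuses the best known parameters, "explore" samples fresh ones, and "balanced" mixes the two.

```python
import random

def suggest_next(history, space, strategy="balanced", seed=0):
    """history: list of (params, score) pairs; space: candidate values per parameter."""
    rng = random.Random(seed)
    explore_prob = {"exploit": 0.0, "balanced": 0.5, "explore": 1.0}[strategy]
    if not history or rng.random() < explore_prob:
        # Explore: sample a random point from the search space
        return {k: rng.choice(v) for k, v in space.items()}
    # Exploit: reuse the best-scoring parameters seen so far
    best_params, _ = max(history, key=lambda h: h[1])
    return dict(best_params)

space = {"num_samples": [1_000, 10_000, 100_000]}
history = [({"num_samples": 1_000}, 0.97), ({"num_samples": 10_000}, 0.999)]
print(suggest_next(history, space, strategy="exploit"))
```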
Analyze experiments from the command line:
```shell
# Show a summary
rexf-analytics --summary

# Query experiments
rexf-analytics --query "accuracy > 0.9"

# Generate insights
rexf-analytics --insights

# Compare the best experiments
rexf-analytics --compare --best 5

# Export to CSV
rexf-analytics --list --format csv --output results.csv
```
Launch a beautiful web interface:
```python
run.dashboard()  # Opens http://localhost:8080
```
Features:
With traditional tools:

```python
import mlflow
import sacred
from sacred import Experiment

# Complex setup required
ex = Experiment('my_exp')
mlflow.set_tracking_uri("...")

@ex.config
def config():
    learning_rate = 0.01
    batch_size = 32

@ex.automain
def main(learning_rate, batch_size):
    with mlflow.start_run():
        # Your code here
        mlflow.log_param("lr", learning_rate)
        mlflow.log_metric("accuracy", accuracy)
```

With RexF:

```python
from rexf import experiment, run

@experiment
def my_experiment(learning_rate=0.01, batch_size=32):
    # Your code here - that's it!
    return {"accuracy": accuracy}

run.single(my_experiment, learning_rate=0.05)
```
| Feature | Traditional Tools | RexF |
|---|---|---|
| Setup | Complex configuration | Single decorator |
| Parameter Detection | Manual logging | Automatic |
| Metric Tracking | Manual logging | Automatic |
| Insights | Manual analysis | Auto-generated |
| Exploration | Write custom loops | `run.auto_explore()` |
| Comparison | Custom dashboards | `run.compare()` |
| Querying | SQL/Complex APIs | `run.find("accuracy > 0.9")` |
RexF uses a clean, modular architecture:
```
rexf/
├── core/          # Core experiment logic (@experiment decorator)
├── backends/      # Storage implementation (IntelligentStorage)
├── intelligence/  # Smart features (insights, exploration, queries)
├── dashboard/     # Web interface
├── cli/           # Command-line tools
└── run.py         # Main user interface
```
The public surface is a single `@experiment` decorator for zero-configuration usage.

RexF automatically captures:
All data is stored locally in SQLite with no external dependencies.
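Because storage is a single local SQLite file, you can inspect it with the standard library alone. A sketch (the filename `experiments.db` is a guess, not documented here; check your working directory for the file RexF actually creates):

```python
import sqlite3

# "experiments.db" is an assumed filename for illustration
conn = sqlite3.connect("experiments.db")
tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'"
).fetchall()
print(tables)
conn.close()
```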
RexF ensures reproducibility by automatically tracking:
We welcome contributions! Please see our Contributing Guide for details.
```shell
git clone https://github.com/dhruv1110/rexf.git
cd rexf
pip install -e ".[dev]"
pre-commit install
pytest tests/ -v --cov=rexf
```
MIT License - see LICENSE for details.
Made with ❤️ for researchers who want to focus on science, not infrastructure.