πŸ§ͺ RexF - Smart Experiments Framework

A lightweight Python library for reproducible computational experiments with an ultra-simple, smart API. From idea to insight in under 5 minutes, with zero configuration.

✨ Key Features

- Single-decorator setup: wrap any function with @experiment and run it
- Automatic parameter and metric tracking, with no manual logging
- Smart insights and best-run analysis: run.insights(), run.best()
- Automated parameter exploration: random, grid, and adaptive strategies
- Expression-based querying, e.g. run.find("accuracy > 0.9")
- Web dashboard (run.dashboard()) and CLI analytics (rexf-analytics)
- Local SQLite storage with no external services

πŸš€ Quick Start

Installation

pip install rexf

Ultra-Simple Usage

from rexf import experiment, run

@experiment
def my_experiment(learning_rate, batch_size=32):
    # Your experiment code here
    accuracy = train_model(learning_rate, batch_size)
    return {"accuracy": accuracy, "loss": 1 - accuracy}

# Run single experiment
run.single(my_experiment, learning_rate=0.01, batch_size=64)

# Get insights
print(run.insights())

# Find best experiments
best = run.best(metric="accuracy", top=5)

# Auto-explore parameter space
run.auto_explore(my_experiment, strategy="random", budget=20)

# Launch web dashboard
run.dashboard()

🎯 Core Philosophy

From idea to insight in under 5 minutes, with zero configuration.

RexF prioritizes user experience over architectural purity. Instead of making you learn complex APIs, it automatically detects your experiment's parameters and metrics, then layers smart features (insights, exploration, querying) on top to accelerate your research.

πŸ“– Comprehensive Example

import math
import random
from rexf import experiment, run

@experiment
def estimate_pi(num_samples=10000, method="uniform"):
    """Estimate Ο€ using Monte Carlo methods."""
    inside_circle = 0
    
    for _ in range(num_samples):
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x*x + y*y <= 1:
            inside_circle += 1
    
    pi_estimate = 4 * inside_circle / num_samples
    error = abs(pi_estimate - math.pi)
    
    return {
        "pi_estimate": pi_estimate,
        "error": error,
        "accuracy": 1 - (error / math.pi)
    }

# Run experiments
run.single(estimate_pi, num_samples=50000, method="uniform")
run.single(estimate_pi, num_samples=100000, method="stratified")

# Auto-explore to find best parameters
run_ids = run.auto_explore(
    estimate_pi,
    strategy="grid", 
    budget=10,
    optimization_target="accuracy"
)

# Get smart insights
insights = run.insights()
print(f"Success rate: {insights['summary']['success_rate']:.1%}")

# Find high-accuracy runs
accurate_runs = run.find("accuracy > 0.99")

# Compare experiments
run.compare(run.best(top=3))

# Launch web dashboard
run.dashboard()  # Opens http://localhost:8080

πŸ”§ Advanced Features

Smart Parameter Exploration

# Random exploration
run.auto_explore(my_experiment, strategy="random", budget=20)

# Grid search
run.auto_explore(my_experiment, strategy="grid", budget=15)

# Adaptive exploration (learns from results)
run.auto_explore(my_experiment, strategy="adaptive", budget=25, 
                optimization_target="accuracy")

Query Interface

# Find experiments using expressions
high_acc = run.find("accuracy > 0.9")
fast_runs = run.find("duration < 30")
recent_good = run.find("accuracy > 0.8 and start_time > '2024-01-01'")

# Query help
run.query_help()
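
Query results can be combined with the other verbs. For example, filter first and then compare (this assumes run.find returns the same run handles that run.compare accepts, as run.best does):

good = run.find("accuracy > 0.95 and duration < 60")
run.compare(good)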

Experiment Suggestions

# Get next experiment suggestions
suggestions = run.suggest(
    my_experiment, 
    count=5, 
    strategy="balanced",  # "exploit", "explore", or "balanced"
    optimization_target="accuracy"
)

for suggestion in suggestions["suggestions"]:
    print(f"Try: {suggestion['parameters']}")
    print(f"Reason: {suggestion['reasoning']}")

CLI Analytics

Analyze experiments from the command line:

# Show summary
rexf-analytics --summary

# Query experiments
rexf-analytics --query "accuracy > 0.9"

# Generate insights
rexf-analytics --insights

# Compare best experiments
rexf-analytics --compare --best 5

# Export to CSV
rexf-analytics --list --format csv --output results.csv

Web Dashboard

Launch a beautiful web interface:

run.dashboard()  # Opens http://localhost:8080

🎨 Why RexF?

Before (Traditional Approach)

import mlflow
from sacred import Experiment

# Complex setup required
ex = Experiment('my_exp')
mlflow.set_tracking_uri("...")

@ex.config
def config():
    learning_rate = 0.01
    batch_size = 32

@ex.automain
def main(learning_rate, batch_size):
    with mlflow.start_run():
        # Your code here
        accuracy = train_model(learning_rate, batch_size)
        mlflow.log_param("lr", learning_rate)
        mlflow.log_metric("accuracy", accuracy)

After (RexF)

from rexf import experiment, run

@experiment
def my_experiment(learning_rate=0.01, batch_size=32):
    # Your code here - that's it!
    accuracy = train_model(learning_rate, batch_size)
    return {"accuracy": accuracy}

run.single(my_experiment, learning_rate=0.05)

Key Differences

| Feature             | Traditional Tools     | RexF                       |
|---------------------|-----------------------|----------------------------|
| Setup               | Complex configuration | Single decorator           |
| Parameter Detection | Manual logging        | Automatic                  |
| Metric Tracking     | Manual logging        | Automatic                  |
| Insights            | Manual analysis       | Auto-generated             |
| Exploration         | Write custom loops    | run.auto_explore()         |
| Comparison          | Custom dashboards     | run.compare()              |
| Querying            | SQL/Complex APIs      | run.find("accuracy > 0.9") |
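
The "Automatic" column is possible because Python exposes function signatures at runtime. As a rough illustration (not RexF's actual implementation), a decorator can bind call arguments to parameter names and treat a returned dict as the run's metrics:

import functools
import inspect

def capture(func):
    """Toy decorator: automatic parameter and metric capture."""
    sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        bound.apply_defaults()          # fill in declared defaults
        params = dict(bound.arguments)  # parameter name -> value for this run

        metrics = func(*args, **kwargs)

        print("params:", params)    # a real tracker would persist these
        print("metrics:", metrics)
        return metrics

    return wrapper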

πŸ› οΈ Architecture

RexF uses a clean, modular architecture:

rexf/
β”œβ”€β”€ core/           # Core experiment logic (@experiment decorator)
β”œβ”€β”€ backends/       # Storage implementation (IntelligentStorage)
β”œβ”€β”€ intelligence/   # Smart features (insights, exploration, queries)
β”œβ”€β”€ dashboard/      # Web interface
β”œβ”€β”€ cli/           # Command-line tools
└── run.py         # Main user interface

Core Components

- @experiment decorator (core/): wraps a plain Python function, capturing its parameters and returned metrics
- IntelligentStorage (backends/): persists every run to a local SQLite database
- run (run.py): the user-facing interface behind single, auto_explore, insights, find, compare, and dashboard

Intelligence Modules

The intelligence/ package powers the smart features: auto-generated insights, the parameter-space exploration strategies (random, grid, adaptive), and the expression-based query engine behind run.find().

πŸ“Š Data Storage

RexF automatically captures:

- Input parameters, including default values
- Returned metrics and results
- Run duration and start time
- Run status, which feeds success-rate insights

All data is stored locally in SQLite with no external dependencies.
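
Because the store is an ordinary SQLite file, you can inspect it with Python's standard library. The file name below is a placeholder; check your working directory for the database RexF actually creates:

import sqlite3

# "experiments.db" is a placeholder; use the file RexF created.
with sqlite3.connect("experiments.db") as conn:
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"
    )
    for (name,) in tables:
        print(name)  # the tables backing your runs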

πŸ”„ Reproducibility

RexF ensures reproducibility by automatically recording each run's exact parameters, returned results, and timing metadata, so any experiment can be re-run with the same inputs and checked against its original results.
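
One practical habit this enables: make randomness an explicit parameter, so the seed is captured alongside the run and a rerun reproduces the result exactly. A minimal seeded variant of the earlier estimate_pi:

import random

from rexf import experiment

@experiment
def estimate_pi_seeded(num_samples=10000, seed=0):
    """Seeded Monte Carlo Ο€ estimate: same seed, same result."""
    rng = random.Random(seed)  # the seed is tracked like any other parameter
    inside = sum(
        1
        for _ in range(num_samples)
        if rng.uniform(-1, 1) ** 2 + rng.uniform(-1, 1) ** 2 <= 1
    )
    return {"pi_estimate": 4 * inside / num_samples}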

🚧 Roadmap

🀝 Contributing

We welcome contributions! Please see our Contributing Guide for details.

Development Setup

git clone https://github.com/dhruv1110/rexf.git
cd rexf
pip install -e ".[dev]"
pre-commit install

Running Tests

pytest tests/ -v --cov=rexf

πŸ“„ License

MIT License - see LICENSE for details.


Made with ❀️ for researchers who want to focus on science, not infrastructure.