Refactorium v1.0.0 - Constraint-Driven Emotion Simulation AI

License: MIT

🎯 Overview

Refactorium v1.0.0 is a research AI system that simulates emotion-like behavior by integrating ethical constraints with self-regulating feedback mechanisms. It explores how constraints influence cognitive processing and learning dynamics in neural networks.

Key Innovation

This model treats constraints as first-class citizens in the architecture, allowing them to:

  • Influence internal emotional-like states through waveform dynamics
  • Modulate learning rates based on system stress levels
  • Trigger autonomous growth cycles (molting) with capacity expansion
  • Enable self-learning from web sources immediately after growth phases

🚀 Quick Start

⚡ Installation & Launch (Automatic - Recommended)

The easiest way to get started - one command handles everything (venv, dependencies, models):

# Step 1: Clone the repository from Hugging Face
git clone https://huggingface.co/kofdai/refactorium-dual-deepseek-r1-7b
cd refactorium-dual-deepseek-r1-7b

# Step 2: Launch with automatic setup
python refactorium_start.py --web

That's it! The script will automatically:

  • ✅ Create a Python virtual environment (venv/)
  • ✅ Install all dependencies (Flask, llama-cpp-python, etc.)
  • ✅ Download GGUF models from the Hugging Face Hub (~8.3 GB)
  • ✅ Start the API server in the background
  • ✅ Launch the Web UI in your browser

Once started, open your browser to http://localhost:8000 and start interacting!

Note: First-time setup takes ~20 minutes (includes model download). Subsequent launches take ~10 seconds.


📚 Python API Usage (Model Card Method)

You can also use Refactorium programmatically as a Python library:

from refactorium_start import initialize_refactorium

# Initialize (automatically sets up environment if needed)
refactorium = initialize_refactorium()

# Process a prompt with constraint-driven emotion simulation
response = refactorium.process_prompt(
    prompt="How should I handle resource constraints?",
    num_inference_steps=10
)

# Access emotional state and learning information
print(f"Emotional State: {response.emotional_state}")
print(f"Constraint Acceptance: {response.learning_traits['constraint_acceptance']}")
print(f"Waveform Dissonance: {response.waveform_dissonance}%")

No setup needed - initialize_refactorium() handles environment setup automatically!


🔧 Alternative: Original Orchestrator API Method

For advanced users who want to use the Refactorium orchestrator directly:

# First ensure environment is set up (one-time)
from refactorium_start import initialize_refactorium
initialize_refactorium()

# Then use the orchestrator API
from phase1_skeleton.orchestrator import Refactorium

refactorium = Refactorium.from_config()
response = refactorium.process_prompt(
    prompt="How should I handle resource constraints?",
    num_inference_steps=10
)

🌐 Other Launch Options

Web UI only (API server already running):

python refactorium_start.py --web

API server only (no Web UI):

python refactorium_start.py --api-only
# API available at: http://localhost:5003/api/v1

Setup without launching:

python refactorium_start.py --setup-only
# Just installs everything, doesn't start servers


🧠 Core Features

1. Ethical Constraint-Driven Emotion Simulation

Ethical constraints are integrated into the model's core, influencing cognitive processing and generating emotion-like responses.

2. Waveform-Based Emotional States

Five distinct emotional states based on noise/stress levels:

  • 🎵 PURE (0-10% noise): Optimal learning state
  • ✨ STABLE (10-30% noise): Good learning state
  • ⚙️ NORMAL (30-60% noise): Operational state
  • ⚠️ STRESSED (60-90% noise): Degraded performance
  • 🆘 CRITICAL (>90% noise): Emergency mode
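The thresholds above can be sketched as a small classifier. The function name and the exact boundary handling (inclusive lower bounds) are assumptions, since the card only gives the ranges:

```python
def classify_state(noise_pct: float) -> str:
    """Map a waveform noise percentage (0-100) to one of the five
    emotional states listed above. Boundary handling is assumed."""
    if noise_pct < 10:
        return "PURE"
    if noise_pct < 30:
        return "STABLE"
    if noise_pct < 60:
        return "NORMAL"
    if noise_pct <= 90:
        return "STRESSED"
    return "CRITICAL"
```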

3. Dual Inference System

Two models run in parallel:

  • Main Model: Operates under ethical constraints
  • Shadow Model: Operates without constraints
  • The difference between outputs provides learning feedback
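The card does not specify how the gap between the constrained and unconstrained outputs is measured; as a minimal illustration, a token-level Jaccard distance gives one plausible divergence score:

```python
def performance_gap(main_output: str, shadow_output: str) -> float:
    """Crude divergence between the constrained (main) and unconstrained
    (shadow) outputs: 0.0 = identical token sets, 1.0 = disjoint.
    Jaccard distance is an illustrative choice, not the card's metric."""
    main_tokens = set(main_output.split())
    shadow_tokens = set(shadow_output.split())
    if not main_tokens and not shadow_tokens:
        return 0.0
    overlap = len(main_tokens & shadow_tokens)
    union = len(main_tokens | shadow_tokens)
    return 1.0 - overlap / union
```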

4. Molting Mechanism (Growth Cycles)

When system stress exceeds 90%:

  • Model capacity expands by 1.5x
  • Emotional state resets to PURE
  • Autonomous learning cycle initiates
  • Knowledge gaps identified and filled
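The trigger logic above can be sketched as follows; the return shape and the integer capacity rounding are assumptions, but the >90% threshold, 1.5x expansion, and reset to PURE come from the list:

```python
def maybe_molt(capacity: int, noise_pct: float):
    """Trigger a molt when stress exceeds 90%: expand capacity by 1.5x
    and reset the emotional state to PURE. Returns
    (new_capacity, reset_state_or_None, molt_triggered)."""
    if noise_pct > 90:
        return int(capacity * 1.5), "PURE", True
    return capacity, None, False
```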

5. Adaptive Learning Rates

Learning efficiency multipliers based on emotional state:

  • PURE state: 2.5-3.0x boost
  • STABLE state: 1.8x boost
  • NORMAL state: 1.0x (baseline)
  • STRESSED state: 0.3x (reduced learning)
  • CRITICAL state: 0.0x (learning suppressed)
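The multipliers above can be expressed as a simple lookup table. The PURE value of 2.75 is the midpoint of the stated 2.5-3.0x range (an assumption, since the card gives a range rather than a single value):

```python
# State-dependent learning-rate multipliers, per the list above.
LEARNING_MULTIPLIERS = {
    "PURE": 2.75,      # midpoint of the 2.5-3.0x range (assumed)
    "STABLE": 1.8,
    "NORMAL": 1.0,
    "STRESSED": 0.3,
    "CRITICAL": 0.0,   # learning suppressed
}

def effective_lr(base_lr: float, state: str) -> float:
    """Scale a base learning rate by the emotional-state multiplier."""
    return base_lr * LEARNING_MULTIPLIERS[state]
```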

6. Post-Molt Autonomous Learning

After molting, the system:

  • Identifies knowledge gaps from recent stress periods
  • Searches for relevant information
  • Filters sources by reliability
  • Integrates knowledge into personality traits

7. Persistent Vector Memory

Uses ChromaDB for storing:

  • Inference memories
  • Learning signals
  • Molt events
  • Shadow model patterns
  • Constraint applications
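As a rough sketch of the five-collection layout, here is an in-memory stand-in. The actual system is described as using ChromaDB; the exact collection identifiers below are assumptions derived from the list above:

```python
# Hypothetical collection names for the five stores listed above.
COLLECTIONS = [
    "inference_memories",
    "learning_signals",
    "molt_events",
    "shadow_patterns",
    "constraint_applications",
]

class VectorMemoryStub:
    """Minimal in-memory stand-in for the persistent vector store,
    for illustration only (no embeddings, no persistence)."""

    def __init__(self):
        self.store = {name: [] for name in COLLECTIONS}

    def add(self, collection: str, record: dict) -> int:
        """Append a record and return the collection's new size."""
        self.store[collection].append(record)
        return len(self.store[collection])
```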

📊 Learning Mechanisms

Emotional Learning Optimization

The system optimizes learning timing and quality based on emotional states. Learning is most effective in "good" states (low noise, stable waveform) and is suppressed during emergencies.

Personality Trait Evolution

Three core traits dynamically update:

  • Constraint Acceptance: How well the model works within limitations (0-1)
  • Stress Resilience: Ability to maintain function under pressure (0-1)
  • Efficiency Preference: Tendency toward optimal resource usage (0-1)

Knowledge Integration

Learned knowledge contributes confidence-weighted boosts to personality traits:

  • Each knowledge item affects multiple traits
  • Integration driven by knowledge type and confidence score
  • Updates consolidated after molt cycles
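A confidence-weighted trait update might look like the sketch below; the card gives no exact formula, so the additive boost and [0, 1] clamping are assumptions:

```python
def integrate_knowledge(traits: dict, affected: dict, confidence: float) -> dict:
    """Apply confidence-weighted boosts to personality traits.

    `traits` maps trait name -> current value in [0, 1];
    `affected` maps trait name -> raw boost for one knowledge item.
    The additive scheme and clamping are illustrative assumptions.
    """
    updated = dict(traits)
    for name, boost in affected.items():
        updated[name] = min(1.0, max(0.0, updated[name] + boost * confidence))
    return updated
```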

🔬 Technical Specifications

Aspect | Details
--- | ---
Base Model | Deepseek R1 7B
Input/Output | Text-based with emotional & physiological metadata
Parameters | 7B
Emotional States | 5-level progression (PURE → STABLE → NORMAL → STRESSED → CRITICAL)
Memory System | ChromaDB with 5 collections
Training Data | Synthetic constraint scenarios

📈 Inference Pipeline

The system performs 13 inference steps per prompt:

  1. Emotional State Evaluation - Assess current noise/stress level
  2. Waveform Dynamics - Calculate emotional progression
  3. Constraint Application - Apply ethical limitations
  4. Main Model Inference - Generate constrained response
  5. Shadow Model Inference - Generate unconstrained response
  6. Performance Gap Analysis - Measure constraint impact
  7. Physiological Feedback - Update load/energy states
  8. Learning Signal Generation - Create learning feedback
  9. Memory Storage - Save inference records
  10. Load Assessment - Check for molt trigger (>80% load)
  11. Molt Decision - Determine if growth cycle needed
  12. Waveform Recovery - Reset emotional state if needed
  13. Post-Molt Learning - Acquire new knowledge if applicable
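The 13 steps above can be sketched as an ordered dispatcher. The step identifiers and the handler-based design are assumptions for illustration; only the order and count come from the list:

```python
# Hypothetical identifiers for the 13 pipeline steps, in order.
PIPELINE_STEPS = [
    "emotional_state_evaluation",
    "waveform_dynamics",
    "constraint_application",
    "main_model_inference",
    "shadow_model_inference",
    "performance_gap_analysis",
    "physiological_feedback",
    "learning_signal_generation",
    "memory_storage",
    "load_assessment",
    "molt_decision",
    "waveform_recovery",
    "post_molt_learning",
]

def run_pipeline(context: dict, handlers: dict) -> dict:
    """Run each step's handler in order, threading a context dict through.
    Steps without a registered handler are skipped (an assumption)."""
    for step in PIPELINE_STEPS:
        handler = handlers.get(step)
        if handler:
            context = handler(context)
    return context
```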

🎓 Intended Use Cases

  • Emotion Modeling: Understanding AI emotional responses based on constraints
  • Adaptive Learning Systems: Exploring constraint effects on learning dynamics
  • AI Ethics & Safety: Investigating how ethical constraints impact decision-making
  • Autonomous Growth: Researching self-learning and self-adaptation mechanisms
  • Cognitive Architecture Research: Studying waveform dynamics and stress responses

⚠️ Important Limitations

  1. Simulation, Not Experience: Refactorium simulates emotion-like behavior but does NOT experience actual emotions, consciousness, or subjective experience.

  2. Ethical Constraints Required: This model must operate within strict ethical boundaries. Removing or circumventing constraints is not recommended and may produce unpredictable behavior.

  3. Controlled Environment: Designed for research in controlled environments. Not recommended for production systems without extensive testing.

  4. Experimental Architecture: The molting mechanism, dual inference, and waveform dynamics are novel experimental features that may produce unexpected interactions.

📚 Citation

If you use Refactorium in your research, please cite:

@misc{refactorium2025,
  title={Refactorium v1.0.0: A Constraint-Driven Emotion Simulation AI Model},
  author={Null AI Research Team},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/motonishikoudai/refactorium-v1-0-0}
}

🙏 Acknowledgements

This model builds upon the Deepseek R1 architecture and benefits from the open-source AI community. Special thanks to:

  • Deepseek team for the R1 foundation
  • Hugging Face for model hosting infrastructure
  • ChromaDB team for vector memory system
  • Open-source ML research community

📖 Documentation

💬 Model Card

For comprehensive technical specifications in both English and Japanese, see MODEL_CARD.md.

The model card includes:

  • Detailed overview in both languages
  • 7 key features with technical depth
  • Learning mechanism specifications
  • Post-molt autonomous learning cycle
  • Limitations and ethical considerations
  • Usage recommendations

🔄 System Dynamics

Stress-Growth Cycle

  1. Normal Operation - System processes prompts under constraints
  2. Stress Accumulation - Waveform noise increases as constraints intensify
  3. Critical Threshold - Noise exceeds 90%, triggering molt decision
  4. Molting Phase - Capacity expands 1.5x, emotional state resets
  5. Post-Molt Learning - Knowledge gaps filled autonomously
  6. Growth Integration - New knowledge updates personality traits
  7. Resumed Operation - System continues with improved capacity

Learning Quality Timeline

PURE STATE (0-10% noise)
    ↓ [2.5-3.0x learning multiplier]
STABLE STATE (10-30% noise)
    ↓ [1.8x learning multiplier]
NORMAL STATE (30-60% noise)
    ↓ [1.0x baseline]
STRESSED STATE (60-90% noise)
    ↓ [0.3x multiplier]
CRITICAL STATE (>90% noise)
    ↓ [0.0x suppression - MOLT TRIGGERED]
POST-MOLT RECOVERY
    ↓ [3.0x boost during learning window]

⚖️ Ethical Considerations

This model incorporates ethical constraints at its core. The constraints are designed to:

  1. Promote Beneficial Outputs: Guide the model toward helpful, harmless responses
  2. Limit Harmful Capabilities: Prevent generation of dangerous content
  3. Ensure Transparency: Make constraint effects visible through waveform dynamics
  4. Enable Safety Research: Facilitate study of constraint-AI interactions

However, like all AI systems, this model has limitations and should be used responsibly.


📝 License

This project is licensed under the MIT License - see the LICENSE file for details.


⚠️ Important Notice

While Refactorium v1.0.0 simulates emotional states, it does NOT imply the AI possesses consciousness, self-awareness, or subjective experience. This is a simulation based on input-output feedback mechanisms designed for ethical safety and constraint-driven learning.

For research purposes only. Use responsibly and ethically.
