# Gemma-3-12B Firecrawl Expert

This is a fine-tuned version of Gemma-3-12B specialized in answering questions about Firecrawl web scraping.
## Model Details
- Base Model: unsloth/gemma-3-12b-it
- Fine-tuning Method: LoRA (Low-Rank Adaptation)
- Training Library: Unsloth
- Fine-tuned: April 24, 2025
## Training Data
The model was fine-tuned on the `bexgboost/openai-agents-python-qa-firecrawl` dataset, which contains question-answer pairs about Firecrawl and web scraping techniques.
## Use Cases
This model is specialized in:
- Answering questions about Firecrawl web scraping
- Providing guidance on web scraping techniques
- Helping with Firecrawl implementation
## Training Parameters
- LoRA Rank: 8
- LoRA Alpha: 8
- Learning Rate: 2e-4
- Epochs: 1
- Quantization: 4-bit
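For a rough sense of what rank 8 means in practice: a LoRA adapter on a weight matrix of shape `d_out × d_in` adds two low-rank factors, `A` of shape `(r, d_in)` and `B` of shape `(d_out, r)`, for `r · (d_in + d_out)` trainable parameters per adapted matrix. A minimal sketch, using a hypothetical 4096×4096 projection for illustration (not the actual Gemma-3-12B layer dimensions):

```python
def lora_param_count(d_in: int, d_out: int, rank: int) -> int:
    """Trainable parameters added by one LoRA adapter:
    factor A has shape (rank, d_in), factor B has shape (d_out, rank)."""
    return rank * (d_in + d_out)

# Hypothetical square projection with the rank used here (r = 8)
print(lora_param_count(4096, 4096, 8))  # 65536 parameters for this matrix
```

Because only these small factors are trained while the 4-bit base weights stay frozen, fine-tuning a 12B model fits in far less memory than full fine-tuning.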
## Usage with Unsloth
```python
from unsloth import FastModel

# Load the fine-tuned model in 4-bit, matching the quantization used during training
model, tokenizer = FastModel.from_pretrained(
    model_name = "Laksh99/Gemma_finetuned_april_24_2025",
    max_seq_length = 2048,
    load_in_4bit = True,
)
```
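Since the base checkpoint is the instruction-tuned Gemma-3, prompts should follow the Gemma chat-turn format; in practice `tokenizer.apply_chat_template` builds this for you. As an illustration of the raw layout, here is a minimal sketch (the helper name is ours, not part of the model or library):

```python
def format_gemma_prompt(question: str) -> str:
    # Gemma-style chat markers: a user turn, then the opening of the
    # model turn, which the model completes during generation.
    return (
        "<start_of_turn>user\n"
        f"{question}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = format_gemma_prompt("How do I scrape a single URL with Firecrawl?")
print(prompt)
```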