Deploying a Cybersecurity LLM On-Premise with Ollama on Proxmox
This dataset contains a technical article available in both French and English.
title: "Deploying a Cybersecurity LLM On-Premise with Ollama on Proxmox"
author: "AYI-NEDJIMI Consultants"
date: "2026-02-21"
language: "fr"
tags:
  - ollama
  - proxmox
  - on-premise
  - gpu-passthrough
  - gguf
  - llm
  - cybersecurite
license: "cc-by-sa-4.0"
Deploying a Cybersecurity LLM On-Premise with Ollama on Proxmox
Author: AYI-NEDJIMI Consultants | Date: February 21, 2026 | Reading time: 11 min
Introduction
For organizations subject to strict regulatory constraints (defense, healthcare, finance), deploying AI models as SaaS is often ruled out. Security data -- logs, forensic artifacts, vulnerability reports -- is too sensitive to transit through cloud APIs. The solution: deploy a cybersecurity LLM directly on on-premise infrastructure.
In this article, we detail the complete deployment of CyberSec-Assistant-3B via Ollama on a Proxmox VE infrastructure, with GPU passthrough for inference. This approach builds on our comprehensive Proxmox VE guide.
Target Architecture
Infrastructure overview
+---------------------------+
| Proxmox VE Host |
| (Bare-metal, GPU NVIDIA) |
+---------------------------+
| |
+---------+--------+ +--------+---------+
| VM Ollama | | VM SOC Tools |
| (Ubuntu 22.04) | | (SIEM, SOAR) |
| GPU Passthrough| | |
| Ollama Server | REST | Integration |
| Port 11434 |<------>| API Client |
+---------+--------+ +--------+---------+
| |
+---------------------------+
| Security VLAN |
+---------------------------+
Hardware Prerequisites
Recommended configuration
| Component | Minimum | Recommended |
|---|---|---|
| CPU | AMD EPYC 7313 (16c) | AMD EPYC 9354 (32c) |
| RAM | 64 GB ECC | 128 GB ECC |
| GPU | NVIDIA RTX 4060 Ti 16GB | NVIDIA RTX 4090 24GB |
| OS Storage | NVMe SSD 500 GB | NVMe SSD 1 TB |
| Model Storage | NVMe SSD 1 TB | NVMe SSD 2 TB |
| Network | 10 GbE | 25 GbE |
For detailed sizing, consult our Proxmox sizing guide.
GPU Passthrough Configuration
Step 1: Enable IOMMU on the Proxmox Host
# Edit the GRUB bootloader
nano /etc/default/grub
# Add the IOMMU parameters
# For AMD:
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"
# For Intel:
# GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
update-grub
# Load the VFIO modules
cat >> /etc/modules << EOF
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
EOF
# Blacklist the native GPU drivers
echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf
echo "blacklist nvidia" >> /etc/modprobe.d/blacklist.conf
# Identify the GPU
lspci -nn | grep -i nvidia
# Example output: 41:00.0 VGA compatible controller [0300]: NVIDIA Corporation AD102 [GeForce RTX 4090] [10de:2684] (rev a1)
# Configure VFIO for this GPU
echo "options vfio-pci ids=10de:2684,10de:22ba" >> /etc/modprobe.d/vfio.conf
update-initramfs -u
reboot
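After the reboot, it is worth confirming that the GPU really sits in its own IOMMU group before binding it to vfio-pci. A minimal sketch that reads the standard sysfs layout (hardware-dependent, so treat empty output as "IOMMU not active yet"):

```python
import os

def list_iommu_groups(sysfs_root="/sys/kernel/iommu_groups"):
    """Return {group_number: [pci_addresses]} from the sysfs IOMMU tree.

    An empty dict means IOMMU is not enabled (or the host has not been
    rebooted since update-grub).
    """
    groups = {}
    if not os.path.isdir(sysfs_root):
        return groups
    for group in sorted(os.listdir(sysfs_root), key=int):
        devices_dir = os.path.join(sysfs_root, group, "devices")
        groups[int(group)] = sorted(os.listdir(devices_dir))
    return groups

if __name__ == "__main__":
    for group, devices in list_iommu_groups().items():
        print(f"group {group}: {', '.join(devices)}")
```

The GPU (41:00.0) and its audio function (41:00.1) should share one group with no unrelated devices; if other devices land in the same group, passthrough will drag them along.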
Step 2: Create the Ollama VM
# Create the VM via the Proxmox CLI
qm create 200 \
--name ollama-cybersec \
--memory 32768 \
--cores 8 \
--sockets 1 \
--cpu host \
--net0 virtio,bridge=vmbr1,tag=100 \
--scsihw virtio-scsi-single \
--scsi0 local-lvm:100,iothread=1,ssd=1 \
--ide2 local:iso/ubuntu-22.04-live-server-amd64.iso,media=cdrom \
--boot order=scsi0 \
--ostype l26 \
--machine q35 \
--bios ovmf \
--efidisk0 local-lvm:1
# Attach the GPU via passthrough
qm set 200 --hostpci0 41:00,pcie=1,x-vga=0
# Start the VM
qm start 200
Step 3: Install the NVIDIA Drivers in the VM
# Inside the Ubuntu VM
sudo apt update && sudo apt upgrade -y
# Install the NVIDIA drivers (the driver alone is enough for Ollama;
# the CUDA toolkit is optional but handy for diagnostics)
sudo apt install -y nvidia-driver-550 nvidia-cuda-toolkit
# Verify the installation
nvidia-smi
Ollama Installation and Configuration
Installation
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
# Verify that the GPU is detected
ollama --version
nvidia-smi
# Configure Ollama to listen on the network
sudo systemctl edit ollama.service
# Add:
# [Service]
# Environment="OLLAMA_HOST=0.0.0.0:11434"
# Environment="OLLAMA_MODELS=/data/models"
sudo systemctl daemon-reload
sudo systemctl restart ollama
Import the CyberSec-Assistant Model in GGUF Format
# Download the GGUF model from Hugging Face
wget https://huggingface.co/AYI-NEDJIMI/CyberSec-Assistant-3B-GGUF/resolve/main/cybersec-assistant-3b-q4_k_m.gguf \
-O /data/models/cybersec-assistant-3b-q4_k_m.gguf
Create the Ollama Modelfile
# File: Modelfile.cybersec
FROM /data/models/cybersec-assistant-3b-q4_k_m.gguf
PARAMETER temperature 0.3
PARAMETER top_p 0.9
PARAMETER top_k 40
PARAMETER num_ctx 4096
PARAMETER repeat_penalty 1.1
PARAMETER stop "<|im_end|>"
SYSTEM """You are CyberSec-Assistant, a cybersecurity expert specialized in:
- Windows forensic analysis (ETW, Prefetch, MFT)
- MITRE ATT&CK correlation
- ISO 27001:2022 auditing
- GDPR compliance
- Incident response (DFIR)
- Active Directory hardening
You respond precisely, technically, and in a structured manner.
You systematically cite the applicable normative references."""
TEMPLATE """{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""
# Create the model in Ollama
ollama create cybersec-assistant -f Modelfile.cybersec
# Test the model
ollama run cybersec-assistant "What are the most common lateral movement techniques in Active Directory?"
SOC Integration
REST API for SOC Tools
import requests
import json
class CyberSecOllamaClient:
"""Client for the Ollama API hosting CyberSec-Assistant."""
def __init__(self, host="http://ollama-cybersec.internal:11434"):
self.host = host
self.model = "cybersec-assistant"
def analyze(self, prompt: str, context: str = "") -> str:
"""Send an analysis request to the model."""
payload = {
"model": self.model,
"prompt": prompt,
"system": context,
"stream": False,
"options": {
"temperature": 0.3,
"num_ctx": 4096,
}
}
response = requests.post(
f"{self.host}/api/generate",
json=payload,
timeout=120
)
return response.json()["response"]
def analyze_siem_alert(self, alert_data: dict) -> dict:
"""Analyze a SIEM alert and produce an enriched verdict."""
prompt = f"""Analyze this SIEM alert and provide:
1. Verdict (True positive / False positive / Undetermined)
2. Associated MITRE ATT&CK techniques
3. Severity score (1-10)
4. Recommended actions
Alert:
{json.dumps(alert_data, indent=2)}"""
analysis = self.analyze(prompt)
return {"alert": alert_data, "ai_analysis": analysis}
# Example usage
client = CyberSecOllamaClient()
result = client.analyze_siem_alert({
"rule": "Suspicious PowerShell Execution",
"source_ip": "10.0.1.50",
"destination": "DC01.corp.local",
"command": "Invoke-Mimikatz -DumpCreds",
"severity": "high"
})
print(result["ai_analysis"])
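The prompt asks for a numbered, structured answer, but /api/generate returns free text, so a SOAR playbook consuming this output needs a tolerant parser. A best-effort sketch (the field names mirror the prompt above; the regexes are assumptions about the model's phrasing, and any field may come back None):

```python
import re

def parse_verdict(analysis: str) -> dict:
    """Best-effort extraction of the structured fields the prompt requests.

    Missing fields are returned as None / empty so downstream logic can
    route them to human review instead of crashing.
    """
    verdict = re.search(
        r"[Vv]erdict\s*[:\-]?\s*(True positive|False positive|Undetermined)",
        analysis)
    score = re.search(r"[Ss]everity\s+score\s*[:\-]?\s*(\d{1,2})", analysis)
    # MITRE ATT&CK technique IDs, e.g. T1059 or T1059.001
    techniques = re.findall(r"\bT\d{4}(?:\.\d{3})?\b", analysis)
    return {
        "verdict": verdict.group(1) if verdict else None,
        "severity": int(score.group(1)) if score else None,
        "mitre_techniques": sorted(set(techniques)),
    }
```

Feeding it the `ai_analysis` string from the example above would yield a dict ready for ticket enrichment.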
Performance Benchmarks
Inference on an RTX 4090
| Model | Quantization | VRAM | Tokens/sec | P95 Latency |
|---|---|---|---|---|
| CyberSec-3B | Q4_K_M | 2.8 GB | 85 t/s | 1.2s |
| CyberSec-3B | Q5_K_M | 3.2 GB | 72 t/s | 1.5s |
| CyberSec-3B | Q8_0 | 4.1 GB | 55 t/s | 2.0s |
| CyberSec-3B | FP16 | 6.0 GB | 42 t/s | 2.8s |
| ISO27001-1.5B | Q4_K_M | 1.4 GB | 120 t/s | 0.8s |
Q4_K_M quantization offers the best tradeoff between quality and performance, with only a 2-3% degradation on our cybersecurity benchmarks.
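The VRAM column can be approximated from first principles: quantized weight size is roughly parameter count times average bits per weight, plus runtime overhead for the KV cache and CUDA buffers. A back-of-the-envelope sketch (the bits-per-weight averages and the flat 0.8 GB overhead are assumptions for a 4k context, not measured constants):

```python
def estimate_gguf_vram_gb(n_params_b: float, bits_per_weight: float,
                          overhead_gb: float = 0.8) -> float:
    """Rough VRAM estimate: quantized weights + flat runtime allowance.

    n_params_b is the parameter count in billions; the overhead term is
    an assumed allowance for KV cache and CUDA buffers at num_ctx 4096.
    """
    weights_gb = n_params_b * bits_per_weight / 8
    return round(weights_gb + overhead_gb, 1)

# Q4_K_M averages roughly 4.8 bits/weight, Q8_0 about 8.5, FP16 exactly 16
for name, bits in [("Q4_K_M", 4.8), ("Q8_0", 8.5), ("FP16", 16.0)]:
    print(name, estimate_gguf_vram_gb(3.0, bits), "GB")
```

The estimates land close to the measured column above, which is a useful sanity check when sizing a GPU for a different model or quantization.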
Securing the Deployment
Security measures
# Firewall: restrict access to the Ollama API
iptables -A INPUT -p tcp --dport 11434 -s 10.0.100.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 11434 -j DROP
# TLS with an nginx reverse proxy
# /etc/nginx/sites-available/ollama
server {
listen 443 ssl;
server_name ollama-cybersec.internal;
ssl_certificate /etc/ssl/certs/ollama.crt;
ssl_certificate_key /etc/ssl/private/ollama.key;
location /api/ {
proxy_pass http://127.0.0.1:11434/api/;
proxy_set_header Host $host;
proxy_read_timeout 300;
# Basic authentication
auth_basic "CyberSec API";
auth_basic_user_file /etc/nginx/.htpasswd;
}
}
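The auth_basic directive expects a standard Basic Authorization header; a Python client built on requests gets this for free via auth=(user, password), but the sketch below makes the mechanism explicit (credentials shown are placeholders):

```python
import base64

def basic_auth_header(user: str, password: str) -> dict:
    """Build the Authorization header nginx's auth_basic validates.

    Equivalent to what requests sends when given auth=(user, password).
    """
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# A SOC client would then call the TLS endpoint, e.g.:
#   requests.post("https://ollama-cybersec.internal/api/generate",
#                 json=payload, headers=basic_auth_header("soc", "secret"),
#                 verify="/etc/ssl/certs/ollama.crt", timeout=120)
```

Note that Basic credentials are only as safe as the TLS channel carrying them, which is why the proxy terminates HTTPS before forwarding to 127.0.0.1:11434.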
Conclusion
On-premise deployment of a cybersecurity LLM via Ollama on Proxmox gives complete control over sensitive data while providing advanced AI analysis capabilities. With a reasonable hardware investment and our Proxmox sizing guide, this architecture is within reach of any organization concerned about its digital sovereignty.
This article is part of a series on AI applied to cybersecurity by AYI-NEDJIMI Consultants.
title: "Deploying a Cybersecurity LLM On-Premise with Ollama on Proxmox"
author: "AYI-NEDJIMI Consultants"
date: "2026-02-21"
language: "en"
tags:
  - ollama
  - proxmox
  - on-premise
  - gpu-passthrough
  - gguf
  - llm
  - cybersecurity
license: "cc-by-sa-4.0"
Deploying a Cybersecurity LLM On-Premise with Ollama on Proxmox
Author: AYI-NEDJIMI Consultants | Date: February 21, 2026 | Reading time: 11 min
Introduction
For organizations subject to strict regulatory constraints (defense, healthcare, finance), SaaS AI model deployment is often ruled out. Security data -- logs, forensic artifacts, vulnerability reports -- are too sensitive to transit through cloud APIs. The solution: deploy a cybersecurity LLM directly on on-premise infrastructure.
In this article, we detail the complete deployment of CyberSec-Assistant-3B via Ollama on a Proxmox VE infrastructure with GPU passthrough for inference. This approach builds on our comprehensive Proxmox VE guide.
Target Architecture
+---------------------------+
| Proxmox VE Host |
| (Bare-metal, NVIDIA GPU) |
+---------------------------+
| |
+---------+--------+ +--------+---------+
| VM Ollama | | VM SOC Tools |
| (Ubuntu 22.04) | | (SIEM, SOAR) |
| GPU Passthrough| | |
| Ollama Server | REST | Integration |
| Port 11434 |<------>| API Client |
+---------+--------+ +--------+---------+
| |
+---------------------------+
| Security VLAN |
+---------------------------+
Hardware Prerequisites
| Component | Minimum | Recommended |
|---|---|---|
| CPU | AMD EPYC 7313 (16c) | AMD EPYC 9354 (32c) |
| RAM | 64 GB ECC | 128 GB ECC |
| GPU | NVIDIA RTX 4060 Ti 16GB | NVIDIA RTX 4090 24GB |
| OS Storage | NVMe SSD 500 GB | NVMe SSD 1 TB |
| Model Storage | NVMe SSD 1 TB | NVMe SSD 2 TB |
| Network | 10 GbE | 25 GbE |
For detailed sizing, consult our Proxmox sizing guide.
GPU Passthrough Configuration
Step 1: Enable IOMMU on the Proxmox Host
# Edit GRUB bootloader
nano /etc/default/grub
# Add IOMMU parameters
# For AMD:
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"
# For Intel:
# GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
update-grub
# Load VFIO modules
cat >> /etc/modules << EOF
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
EOF
# Blacklist native GPU drivers
echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf
echo "blacklist nvidia" >> /etc/modprobe.d/blacklist.conf
# Identify the GPU
lspci -nn | grep -i nvidia
# Example output: 41:00.0 VGA compatible controller [0300]: NVIDIA Corporation AD102 [GeForce RTX 4090] [10de:2684]
# Configure VFIO for this GPU
echo "options vfio-pci ids=10de:2684,10de:22ba" >> /etc/modprobe.d/vfio.conf
update-initramfs -u
reboot
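After the reboot, verify that the GPU ended up in a clean IOMMU group before handing it to vfio-pci. A minimal sketch reading the standard sysfs layout (hardware-dependent; an empty result means IOMMU is not active):

```python
import os

def list_iommu_groups(sysfs_root="/sys/kernel/iommu_groups"):
    """Return {group_number: [pci_addresses]} from the sysfs IOMMU tree.

    An empty dict means IOMMU is not enabled (or the host has not been
    rebooted since update-grub).
    """
    groups = {}
    if not os.path.isdir(sysfs_root):
        return groups
    for group in sorted(os.listdir(sysfs_root), key=int):
        devices_dir = os.path.join(sysfs_root, group, "devices")
        groups[int(group)] = sorted(os.listdir(devices_dir))
    return groups

if __name__ == "__main__":
    for group, devices in list_iommu_groups().items():
        print(f"group {group}: {', '.join(devices)}")
```

The GPU (41:00.0) and its audio function (41:00.1) should share a group containing nothing else; unrelated devices in the same group would be dragged into the passthrough.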
Step 2: Create the Ollama VM
qm create 200 \
--name ollama-cybersec \
--memory 32768 \
--cores 8 \
--sockets 1 \
--cpu host \
--net0 virtio,bridge=vmbr1,tag=100 \
--scsihw virtio-scsi-single \
--scsi0 local-lvm:100,iothread=1,ssd=1 \
--ide2 local:iso/ubuntu-22.04-live-server-amd64.iso,media=cdrom \
--boot order=scsi0 \
--ostype l26 \
--machine q35 \
--bios ovmf \
--efidisk0 local-lvm:1
# Add GPU passthrough
qm set 200 --hostpci0 41:00,pcie=1,x-vga=0
qm start 200
Step 3: Install NVIDIA Drivers in the VM
sudo apt update && sudo apt upgrade -y
# The driver alone is enough for Ollama; the CUDA toolkit is optional
sudo apt install -y nvidia-driver-550 nvidia-cuda-toolkit
nvidia-smi
Ollama Installation and Configuration
Installation
curl -fsSL https://ollama.com/install.sh | sh
# Configure Ollama to listen on network
sudo systemctl edit ollama.service
# Add:
# [Service]
# Environment="OLLAMA_HOST=0.0.0.0:11434"
# Environment="OLLAMA_MODELS=/data/models"
sudo systemctl daemon-reload
sudo systemctl restart ollama
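Once the service restarts with the new environment, confirm that the daemon answers and see which models it serves. The sketch below assumes the `ollama-cybersec.internal` hostname used later in this article; `/api/tags` is Ollama's standard model-listing endpoint:

```python
import json
from urllib.request import urlopen

def model_names(tags_payload: dict) -> list:
    """Extract model names from an Ollama /api/tags response body."""
    return [m["name"] for m in tags_payload.get("models", [])]

def installed_models(host: str = "http://ollama-cybersec.internal:11434") -> list:
    """Query the running daemon; raises URLError if Ollama is not listening."""
    with urlopen(f"{host}/api/tags", timeout=10) as resp:
        return model_names(json.load(resp))
```

After the model import in the next section, `installed_models()` should list `cybersec-assistant:latest`.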
Import the CyberSec-Assistant Model in GGUF Format
wget https://huggingface.co/AYI-NEDJIMI/CyberSec-Assistant-3B-GGUF/resolve/main/cybersec-assistant-3b-q4_k_m.gguf \
-O /data/models/cybersec-assistant-3b-q4_k_m.gguf
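Before importing a multi-gigabyte GGUF file, it is worth verifying its integrity against the checksum published in the model repository. A generic streaming sketch (the expected value shown in the usage comment is a placeholder, not the real checksum):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so multi-GB GGUF files stay out of RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Usage (take the real checksum from the repository's file listing
# before trusting the download):
#   assert sha256_of("/data/models/cybersec-assistant-3b-q4_k_m.gguf") == "<published sha256>"
```

A mismatch usually means a truncated download; re-run wget with -c to resume.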
Create the Ollama Modelfile
FROM /data/models/cybersec-assistant-3b-q4_k_m.gguf
PARAMETER temperature 0.3
PARAMETER top_p 0.9
PARAMETER top_k 40
PARAMETER num_ctx 4096
PARAMETER repeat_penalty 1.1
PARAMETER stop "<|im_end|>"
SYSTEM """You are CyberSec-Assistant, a cybersecurity expert specialized in:
- Windows forensic analysis (ETW, Prefetch, MFT)
- MITRE ATT&CK correlation
- ISO 27001:2022 auditing
- GDPR compliance
- Digital Forensics and Incident Response (DFIR)
- Active Directory hardening
You respond precisely, technically, and in a structured manner.
You systematically cite applicable normative references."""
TEMPLATE """{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""
ollama create cybersec-assistant -f Modelfile.cybersec
ollama run cybersec-assistant "What are the most common lateral movement techniques in Active Directory?"
SOC Integration
REST API for SOC Tools
import requests
import json
class CyberSecOllamaClient:
"""Client for the Ollama API hosting CyberSec-Assistant."""
def __init__(self, host="http://ollama-cybersec.internal:11434"):
self.host = host
self.model = "cybersec-assistant"
def analyze(self, prompt: str, context: str = "") -> str:
"""Send an analysis request to the model."""
payload = {
"model": self.model,
"prompt": prompt,
"system": context,
"stream": False,
"options": {"temperature": 0.3, "num_ctx": 4096}
}
response = requests.post(
f"{self.host}/api/generate", json=payload, timeout=120
)
return response.json()["response"]
def analyze_siem_alert(self, alert_data: dict) -> dict:
"""Analyze a SIEM alert and produce an enriched verdict."""
prompt = f"""Analyze this SIEM alert and provide:
1. Verdict (True positive / False positive / Undetermined)
2. Associated MITRE ATT&CK techniques
3. Severity score (1-10)
4. Recommended actions
Alert:
{json.dumps(alert_data, indent=2)}"""
analysis = self.analyze(prompt)
return {"alert": alert_data, "ai_analysis": analysis}
client = CyberSecOllamaClient()
result = client.analyze_siem_alert({
"rule": "Suspicious PowerShell Execution",
"source_ip": "10.0.1.50",
"destination": "DC01.corp.local",
"command": "Invoke-Mimikatz -DumpCreds",
"severity": "high"
})
print(result["ai_analysis"])
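Because /api/generate returns the structured answer as free text, a SOAR integration needs a tolerant parser before it can act on the verdict. A best-effort sketch (field names mirror the prompt above; the regexes are assumptions about how the model phrases its answer):

```python
import re

def parse_verdict(analysis: str) -> dict:
    """Best-effort extraction of the structured fields the prompt requests.

    Missing fields come back as None / empty so downstream logic can
    route the alert to human review instead of failing.
    """
    verdict = re.search(
        r"[Vv]erdict\s*[:\-]?\s*(True positive|False positive|Undetermined)",
        analysis)
    score = re.search(r"[Ss]everity\s+score\s*[:\-]?\s*(\d{1,2})", analysis)
    # MITRE ATT&CK technique IDs, e.g. T1059 or T1059.001
    techniques = re.findall(r"\bT\d{4}(?:\.\d{3})?\b", analysis)
    return {
        "verdict": verdict.group(1) if verdict else None,
        "severity": int(score.group(1)) if score else None,
        "mitre_techniques": sorted(set(techniques)),
    }
```

Applied to `result["ai_analysis"]`, this yields a dict suitable for ticket enrichment or automated triage thresholds.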
Performance Benchmarks
Inference on RTX 4090
| Model | Quantization | VRAM | Tokens/sec | P95 Latency |
|---|---|---|---|---|
| CyberSec-3B | Q4_K_M | 2.8 GB | 85 t/s | 1.2s |
| CyberSec-3B | Q5_K_M | 3.2 GB | 72 t/s | 1.5s |
| CyberSec-3B | Q8_0 | 4.1 GB | 55 t/s | 2.0s |
| CyberSec-3B | FP16 | 6.0 GB | 42 t/s | 2.8s |
| ISO27001-1.5B | Q4_K_M | 1.4 GB | 120 t/s | 0.8s |
Q4_K_M quantization offers the best quality-performance tradeoff, with only 2-3% degradation on our cybersecurity benchmarks.
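The VRAM figures above can be sanity-checked from first principles: quantized weight size is roughly parameter count times average bits per weight, plus runtime overhead for the KV cache and CUDA buffers. A back-of-the-envelope sketch (the bits-per-weight averages and the flat 0.8 GB overhead are assumptions for a 4k context, not measured constants):

```python
def estimate_gguf_vram_gb(n_params_b: float, bits_per_weight: float,
                          overhead_gb: float = 0.8) -> float:
    """Rough VRAM estimate: quantized weights + flat runtime allowance.

    n_params_b is the parameter count in billions; the overhead term is
    an assumed allowance for KV cache and CUDA buffers at num_ctx 4096.
    """
    weights_gb = n_params_b * bits_per_weight / 8
    return round(weights_gb + overhead_gb, 1)

# Q4_K_M averages roughly 4.8 bits/weight, Q8_0 about 8.5, FP16 exactly 16
for name, bits in [("Q4_K_M", 4.8), ("Q8_0", 8.5), ("FP16", 16.0)]:
    print(name, estimate_gguf_vram_gb(3.0, bits), "GB")
```

The same arithmetic scales to larger models, which makes it a quick way to check whether a candidate GPU fits a given model and quantization before buying hardware.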
Deployment Security
# Firewall: restrict Ollama API access
iptables -A INPUT -p tcp --dport 11434 -s 10.0.100.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 11434 -j DROP
# TLS with nginx reverse proxy
# /etc/nginx/sites-available/ollama
server {
listen 443 ssl;
server_name ollama-cybersec.internal;
ssl_certificate /etc/ssl/certs/ollama.crt;
ssl_certificate_key /etc/ssl/private/ollama.key;
location /api/ {
proxy_pass http://127.0.0.1:11434/api/;
proxy_set_header Host $host;
proxy_read_timeout 300;
auth_basic "CyberSec API";
auth_basic_user_file /etc/nginx/.htpasswd;
}
}
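The auth_basic directive expects a standard Basic Authorization header; requests builds it automatically from auth=(user, password), but the sketch below shows the mechanism explicitly (credentials are placeholders):

```python
import base64

def basic_auth_header(user: str, password: str) -> dict:
    """Build the Authorization header nginx's auth_basic validates.

    Equivalent to what requests sends when given auth=(user, password).
    """
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# A SOC client would then call the TLS endpoint, e.g.:
#   requests.post("https://ollama-cybersec.internal/api/generate",
#                 json=payload, headers=basic_auth_header("soc", "secret"),
#                 verify="/etc/ssl/certs/ollama.crt", timeout=120)
```

Basic credentials are only as safe as the TLS channel carrying them, hence terminating HTTPS at the proxy before forwarding to 127.0.0.1:11434.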
Conclusion
On-premise deployment of a cybersecurity LLM via Ollama on Proxmox offers complete control over sensitive data while providing advanced AI analysis capabilities. With reasonable hardware investment and our Proxmox sizing guide, this architecture is accessible to any organization concerned about digital sovereignty.
This article is part of a series on AI applied to cybersecurity by AYI-NEDJIMI Consultants.