Total Core Count

1,792
Allocated: 1,234 • Idle: 558

Server Nodes

256
Healthy: 254 • Maintenance: 2

Active Jobs

187
Pending: 34 • Completed: 2,847

Avg Usage

68.9%
CPU: 72% • Memory: 79%

NVIDIA B200 GPU

1,024
180GB HBM3e per core

NVIDIA RTX 5090

512
32GB GDDR7 per core

TPA Inference Card

256
6 chips / 384GB per card

Monthly Cost

$847K
Budget Usage: 67%

🔥 Hot Active Jobs

Job ID    | Job Name                     | User                  | Compute Type | Resources     | Status  | Duration
#JOB-2847 | DeepSeek-671B-Inference      | ai_research@citrux.ai | TPA          | 32× TPA Cards | Running | 8h 23m
#JOB-2846 | GPT-4-Level-Training         | ml_team@citrux.ai     | NVIDIA       | 128× B200     | Running | 23h 47m
#JOB-2845 | Stable-Diffusion-XL-Finetune | creative@citrux.ai    | NVIDIA       | 16× RTX 5090  | Running | 4h 12m

📁 Active Projects

Project Name          | Department | Quota         | Used          | Usage Rate | Monthly Cost
AI Research Lab       | R&D        | 512 Cores     | 478 Cores     | 93%        | $324,500
LLM Inference Service | Product    | 384 TPA Cards | 312 TPA Cards | 81%        | $287,200
Computer Vision Team  | AI Lab     | 256 Cores     | 189 Cores     | 74%        | $145,800
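
The Usage Rate column above follows from the quota and used figures. A minimal sketch (not the dashboard's own code) assuming the rate is simply used / quota, with the project figures copied from the table:

```python
# Re-derive the Usage Rate column from quota/used pairs shown in the table.
projects = [
    ("AI Research Lab",       512, 478),  # quota, used (cores)
    ("LLM Inference Service", 384, 312),  # quota, used (TPA cards)
    ("Computer Vision Team",  256, 189),  # quota, used (cores)
]

for name, quota, used in projects:
    print(f"{name}: {used / quota:.0%}")
# Expected output: 93%, 81%, 74% — matching the Usage Rate column.
```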

B200 Server Nodes

128
Online: 127 • 8×GPU per node

RTX 5090 Servers

64
Online: 64 • 8×GPU per node

TPA Inference Servers

64
Online: 63 • 4×TPA per node

Total Memory

299 TB
Used: 236 TB • Idle: 63 TB

⚙️ Job List

Job ID    | Job Name                | Project         | Compute Type | Resources     | Status
#JOB-2847 | DeepSeek-671B-Inference | LLM Inference   | TPA          | 32× TPA Cards | Running
#JOB-2846 | GPT-4-Level-Training    | AI Research Lab | B200         | 128× B200     | Running

💿 Available Images

Image Name            | Version | Platform | Size   | Usage Count
pytorch-cuda-12.4     | v2.5.1  | CUDA     | 8.7 GB | 342
tpa-inference-runtime | v3.2.1  | TPA SDK  | 4.2 GB | 189

Total Monthly Cost

$847K
Last Month: $782K • Growth: +8.3%

B200 GPU Cost

$457K
Per Core-Hour: $0.95

TPA Card Cost

$287K
Per Card-Hour: $0.42

RTX 5090 Cost

$103K
Per Core-Hour: $0.28
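
A minimal sketch (not the billing system itself) that re-derives the cost roll-up from the rounded card values above, so the results are approximate:

```python
# Sum the per-hardware monthly costs and compute month-over-month growth.
component_costs_k = {"B200 GPU": 457, "TPA Card": 287, "RTX 5090": 103}  # $K/month
total_k = sum(component_costs_k.values())  # 457 + 287 + 103 = 847 -> $847K

last_month_k = 782
growth_pct = (total_k - last_month_k) / last_month_k * 100
print(f"Total Monthly Cost: ${total_k}K ({growth_pct:+.1f}% vs last month)")
# Expected: $847K and +8.3%, matching the cards above.
```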

ℹ️ System Information

Compute Resource Configuration:

  • NVIDIA B200: 128 Servers × 8 GPU = 1,024 Cores (180GB HBM3e per core)
  • NVIDIA RTX 5090: 64 Servers × 8 GPU = 512 Cores (32GB GDDR7 per core)
  • TPA Inference Card: 64 Servers × 4 Cards = 256 TPA Cards (6 Chips + 384GB LPDDR4 per card)
  • Total Memory Capacity: B200 184.3TB + RTX 5090 16.4TB + TPA 98.3TB ≈ 299TB (cross-checked in the sketch after this list)
  • TPA Card Features: Optimized for large-scale LLM inference; runs 500B+ parameter models such as DeepSeek-671B and LLaMA-3-405B.
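
A minimal sketch (illustrative only, not the cluster's inventory service) that re-derives the fleet totals from the per-node configuration listed above:

```python
# name: (servers, devices per server, memory in GB per device)
FLEET = {
    "NVIDIA B200":        (128, 8, 180),  # HBM3e
    "NVIDIA RTX 5090":    (64,  8, 32),   # GDDR7
    "TPA Inference Card": (64,  4, 384),  # LPDDR4
}

total_devices = 0
total_memory_tb = 0.0
for name, (servers, per_server, mem_gb) in FLEET.items():
    devices = servers * per_server
    memory_tb = devices * mem_gb / 1000  # decimal GB -> TB, as the cards round
    total_devices += devices
    total_memory_tb += memory_tb
    print(f"{name}: {devices} devices, {memory_tb:.1f} TB")

print(f"Total: {total_devices} devices, {total_memory_tb:.0f} TB")
# Expected: 1,792 devices and ~299 TB, matching the Total Core Count and
# Total Memory cards above.
```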