# Cedric Clyburn
4 articles with this tag

Artificial Intelligence
AI Model Compression: Key to Efficient LLM Deployment
Cedric Clyburn of Red Hat explains how AI model compression, especially quantization, is crucial for efficient LLM deployment, reducing costs and improving performance.
1 day ago

Artificial Intelligence
Cedric Clyburn on Models as a Service
Red Hat's Cedric Clyburn discusses the evolution of AI from code assistants to Models as a Service (MaaS), highlighting on-premise and hybrid deployments with Kubernetes and OpenShift.
8 days ago

Artificial Intelligence
Run LLMs Locally with Llama.cpp
Cedric Clyburn explains how Llama.cpp makes it possible to run large language models locally on consumer hardware, highlighting the GGUF format and optimized kernels for efficiency and accessibility.
16 days ago

AI Video
Llama Stack: Kubernetes for Generative AI Applications
7 months ago