All Tags

#vllm

4 posts tagged with "vllm"

Understanding What Makes vLLM Fast

vLLM serves 10x more requests than a naive PyTorch serving loop. PagedAttention, continuous batching, and careful memory management make the difference.
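For context, this is roughly what serving with vLLM's offline API looks like; the model name and sampling parameters below are placeholders, not recommendations.

```python
from vllm import LLM, SamplingParams

# Load a model behind vLLM's engine (model name is just an example).
llm = LLM(model="facebook/opt-125m")

prompts = ["Explain PagedAttention in one sentence."]
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# generate() batches and schedules the requests internally.
for output in llm.generate(prompts, params):
    print(output.prompt, "->", output.outputs[0].text)
```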

How vLLM Serves 10x More Requests

vLLM doesn't use a faster model; it uses memory more intelligently. PagedAttention treats the KV cache like virtual memory, and the results are dramatic.
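To make the virtual-memory analogy concrete, here is a minimal sketch (not vLLM's actual code) of a paged KV cache: each sequence's logical blocks map to physical blocks that are allocated only when a block fills up and are returned to the free pool the moment the sequence finishes. The block size, class name, and methods are illustrative assumptions.

```python
BLOCK_SIZE = 16  # tokens per KV block (hypothetical value)


class PagedKVCache:
    """Toy block-table allocator, analogous to virtual-memory pages."""

    def __init__(self, num_physical_blocks: int):
        self.free_blocks = list(range(num_physical_blocks))
        self.block_tables: dict[int, list[int]] = {}  # seq_id -> physical block ids

    def append_token(self, seq_id: int, num_tokens_so_far: int) -> int:
        """Return the physical block that will hold this token's KV entries."""
        table = self.block_tables.setdefault(seq_id, [])
        # Allocate a new physical block only when the current one is full,
        # so no memory is reserved for tokens that are never generated.
        if num_tokens_so_far % BLOCK_SIZE == 0:
            table.append(self.free_blocks.pop())
        return table[-1]

    def free(self, seq_id: int) -> None:
        """Return a finished sequence's blocks to the free pool immediately."""
        self.free_blocks.extend(self.block_tables.pop(seq_id, []))


cache = PagedKVCache(num_physical_blocks=1024)
block = cache.append_token(seq_id=0, num_tokens_so_far=0)
```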

Moving Beyond Simple Request Batching

Static batching wastes GPU cycles waiting for the slowest request. Continuous batching fills those gaps, and the difference can be 3-5x higher throughput.
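A toy scheduler loop shows the idea: instead of letting the whole batch drain to its slowest member, finished requests leave and waiting requests join at every decode step. Everything here (the request structure, MAX_BATCH, the simulated decode step) is a simplified assumption, not vLLM's scheduler.

```python
import random
from collections import deque

MAX_BATCH = 4

# Each fake request just needs some number of decode steps to finish.
waiting = deque({"id": i, "steps_left": random.randint(1, 10)} for i in range(12))
running = []

step = 0
while waiting or running:
    # Admit new requests whenever slots free up -- the key difference from
    # static batching, which only refills after the entire batch finishes.
    while waiting and len(running) < MAX_BATCH:
        running.append(waiting.popleft())

    # One decode step for every running request (stands in for a forward pass).
    for req in running:
        req["steps_left"] -= 1

    # Evict finished requests immediately so their slots can be reused.
    running = [r for r in running if r["steps_left"] > 0]
    step += 1

print(f"finished all requests in {step} steps")
```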