LLM Compression Explained: Build Faster, More Efficient AI Models