
Documentation




What it is: llmSHAP is a Python library that uses Shapley values to attribute the output of a large language model (LLM) to the different parts of its input, quantifying how much each part contributed to the response.

Who it’s for: Researchers and developers working with LLMs who need insight into why a model produced a particular response.

Why it exists: LLMs are powerful but often opaque. llmSHAP helps make their outputs interpretable by quantifying the impact of each input element on the result.

Contents