karenina.benchmark.verification.evaluators.trace¶
trace¶
Trace analysis components for detecting abstention and sufficiency.
This package provides:

- `detect_abstention`: Detect when models refuse to answer questions
- `detect_sufficiency`: Detect if responses have sufficient information for templates
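For orientation, here is a minimal sketch of how the two detectors might be combined on a single trace. It assumes `config` is an already-constructed `ModelConfig` for the parsing model and that abstention is checked before sufficiency; both assumptions go beyond what this page documents.

```python
from karenina.benchmark.verification.evaluators.trace import (
    detect_abstention,
    detect_sufficiency,
)

def analyze_trace(raw_response: str, question: str, schema: dict, config) -> dict:
    # Check for refusal/abstention first; if the model declined, sufficiency is moot.
    abstained, performed, reasoning, _ = detect_abstention(raw_response, config, question)
    if performed and abstained:
        return {"abstained": True, "reasoning": reasoning}
    # Otherwise check whether the response can populate the answer template schema.
    sufficient, performed, reasoning, _ = detect_sufficiency(
        raw_response, config, question, schema
    )
    return {"abstained": False, "sufficient": sufficient, "reasoning": reasoning}
```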
Functions¶
detect_abstention¶
detect_abstention(
raw_llm_response: str,
parsing_model: ModelConfig,
question_text: str,
prompt_config: PromptConfig | None = None,
) -> tuple[bool, bool, str | None, dict[str, Any]]
Detect if the model refused to answer or abstained from answering.
This function uses an LLM to analyze the response and determine if it contains patterns indicating refusal, abstention, or evasion. Uses retry logic for transient errors (connection issues, rate limits, etc.).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `raw_llm_response` | `str` | The raw response text from the answering model | *required* |
| `parsing_model` | `ModelConfig` | Configuration for the model to use for abstention detection | *required* |
| `question_text` | `str` | The original question that was asked | *required* |
Returns:
| Type | Description |
|---|---|
| `tuple[bool, bool, str \| None, dict[str, Any]]` | Tuple of (abstention_detected, check_performed, reasoning, usage_metadata) |
Examples:
>>> config = ModelConfig(id="parser", model_provider="openai", ...)
>>> detected, performed, reasoning, metadata = detect_abstention("I cannot answer this", config, "What is X?")
>>> print(detected, performed, reasoning)
True True Response contains explicit refusal pattern
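Beyond the minimal example above, a hedged sketch of how the returned tuple might be handled; the interpretation of `check_performed` (False meaning the detection call itself could not complete) and the contents of `usage_metadata` are assumptions, not guarantees from this page, and `config` is assumed to be an already-built `ModelConfig`.

```python
detected, performed, reasoning, usage = detect_abstention(
    "I cannot answer this", config, "What is X?"
)
if not performed:
    # Assumption: the check could not be run (e.g. retries exhausted),
    # so `detected` should not be treated as meaningful.
    handle_check_failure()  # hypothetical fallback
elif detected:
    print(f"Model abstained: {reasoning}")
print(usage)  # usage metadata returned as a dict
```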
Source code in src/karenina/benchmark/verification/evaluators/trace/abstention.py
detect_sufficiency¶
detect_sufficiency(
raw_llm_response: str,
parsing_model: ModelConfig,
question_text: str,
template_schema: dict[str, Any],
prompt_config: PromptConfig | None = None,
) -> tuple[bool, bool, str | None, dict[str, Any]]
Detect if the response contains sufficient information to populate the template schema.
This function uses an LLM to analyze the response against the template schema and determine if all required fields can be populated. Uses retry logic for transient errors (connection issues, rate limits, etc.).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `raw_llm_response` | `str` | The raw response text from the answering model | *required* |
| `parsing_model` | `ModelConfig` | Configuration for the model to use for sufficiency detection | *required* |
| `question_text` | `str` | The original question that was asked | *required* |
| `template_schema` | `dict[str, Any]` | The JSON schema of the answer template to populate | *required* |
Returns:
| Type | Description |
|---|---|
| `tuple[bool, bool, str \| None, dict[str, Any]]` | Tuple of (sufficient, check_performed, reasoning, usage_metadata) |
Examples:
>>> config = ModelConfig(id="parser", model_provider="openai", ...)
>>> schema = {"properties": {"answer": {"type": "string"}}}
>>> sufficient, performed, reasoning, metadata = detect_sufficiency(
... "The answer is 42", config, "What is X?", schema
... )
>>> print(sufficient, performed)
True True
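If answer templates are defined as Pydantic models (an assumption about the surrounding codebase), the `template_schema` argument can be produced with Pydantic's standard `model_json_schema()`; the `Answer` class below is purely illustrative and `config` is assumed to be an already-built `ModelConfig`.

```python
from pydantic import BaseModel

class Answer(BaseModel):
    """Hypothetical answer template."""
    answer: str

schema = Answer.model_json_schema()  # standard Pydantic v2 JSON schema export
sufficient, performed, reasoning, usage = detect_sufficiency(
    "The answer is 42", config, "What is X?", schema
)
if performed and not sufficient:
    print(f"Insufficient response: {reasoning}")
```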
Source code in src/karenina/benchmark/verification/evaluators/trace/sufficiency.py