Show 160 more MEDIUM findings
MEDIUM
D5
print() used instead of structured logging
...den\gallery\repos\haystack\.github\utils\check_imports.py:73
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...den\gallery\repos\haystack\.github\utils\check_imports.py:75
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...den\gallery\repos\haystack\.github\utils\check_imports.py:77
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...den\gallery\repos\haystack\.github\utils\check_imports.py:79
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...den\gallery\repos\haystack\.github\utils\check_imports.py:80
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...haystack\.github\utils\create_unstable_docs_docusaurus.py:50
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...haystack\.github\utils\create_unstable_docs_docusaurus.py:55
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...llery\repos\haystack\.github\utils\docstrings_checksum.py:46
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...\gallery\repos\haystack\.github\utils\docs_search_sync.py:73
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...\gallery\repos\haystack\.github\utils\docs_search_sync.py:75
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...\gallery\repos\haystack\.github\utils\docs_search_sync.py:77
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...\gallery\repos\haystack\.github\utils\docs_search_sync.py:80
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...\gallery\repos\haystack\.github\utils\docs_search_sync.py:83
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...\gallery\repos\haystack\.github\utils\docs_search_sync.py:94
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...\gallery\repos\haystack\.github\utils\docs_search_sync.py:96
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...\gallery\repos\haystack\.github\utils\docs_search_sync.py:99
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...\gallery\repos\haystack\.github\utils\docs_search_sync.py:109
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...\gallery\repos\haystack\.github\utils\docs_search_sync.py:111
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...\gallery\repos\haystack\.github\utils\docs_search_sync.py:118
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...\gallery\repos\haystack\.github\utils\docs_search_sync.py:120
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...aystack\.github\utils\promote_unstable_docs_docusaurus.py:42
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...aystack\.github\utils\promote_unstable_docs_docusaurus.py:117
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...haystack\.github\utils\update_haystack_dc_custom_nodes.py:54
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...rojects\warden\gallery\repos\haystack\haystack\logging.py:231
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D12
Hardcoded model name: '
Generates text using OpenAI's large language models (LLMs).
It works with the gpt-4 - type models and supports streaming responses
from OpenAI API.
You can customize how the text is generated by passing parameters to the
OpenAI API. Use the `**generation_kwargs` argument when you initialize
the component or when you run it. Any parameter that works with
`openai.ChatCompletion.create` will work here too.
For details on OpenAI API parameters, see
[OpenAI documentation](https://platform.openai.com/docs/api-reference/chat).
### Usage example
<!-- test-ignore -->
```python
from haystack.components.generators import AzureOpenAIGenerator
from haystack.utils import Secret
client = AzureOpenAIGenerator(
azure_endpoint="<Your Azure endpoint e.g. `https://your-company.azure.openai.com/>",
api_key=Secret.from_token("<your-api-key>"),
azure_deployment="<this a model name, e.g. gpt-4.1-mini>")
response = client.run("What's Natural Language Processing? Be brief.")
print(response)
```
```
# >> {'replies': ['Natural Language Processing (NLP) is a branch of artificial intelligence that focuses on
# >> the interaction between computers and human language. It involves enabling computers to understand, interpret,
# >> and respond to natural human language in a way that is both meaningful and useful.'], 'meta': [{'model':
# >> 'gpt-4.1-mini', 'index': 0, 'finish_reason': 'stop', 'usage': {'prompt_tokens': 16,
# >> 'completion_tokens': 49, 'total_tokens': 65}}]}
```
' — no routing/fallback
...ry\repos\haystack\haystack\components\generators\azure.py:19
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: 'gpt-4.1-mini' — no routing/fallback
...ry\repos\haystack\haystack\components\generators\azure.py:61
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: 'gpt-4.1-mini' — no routing/fallback
...ry\repos\haystack\haystack\components\generators\azure.py:146
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: '
Generates text using OpenAI's large language models (LLMs).
It works with the gpt-4 and gpt-5 series models and supports streaming responses
from OpenAI API. It uses strings as input and output.
You can customize how the text is generated by passing parameters to the
OpenAI API. Use the `**generation_kwargs` argument when you initialize
the component or when you run it. Any parameter that works with
`openai.ChatCompletion.create` will work here too.
For details on OpenAI API parameters, see
[OpenAI documentation](https://platform.openai.com/docs/api-reference/chat).
### Usage example
```python
from haystack.components.generators import OpenAIGenerator
client = OpenAIGenerator()
response = client.run("What's Natural Language Processing? Be brief.")
print(response)
# >> {'replies': ['Natural Language Processing (NLP) is a branch of artificial intelligence that focuses on
# >> the interaction between computers and human language. It involves enabling computers to understand, interpret,
# >> and respond to natural human language in a way that is both meaningful and useful.'], 'meta': [{'model':
# >> 'gpt-5-mini', 'index': 0, 'finish_reason': 'stop', 'usage': {'prompt_tokens': 16,
# >> 'completion_tokens': 49, 'total_tokens': 65}}]}
```
' — no routing/fallback
...y\repos\haystack\haystack\components\generators\openai.py:33
Use model routing or configuration instead of hardcoded names
MEDIUM
D5
print() used instead of structured logging
...ry\repos\haystack\haystack\components\generators\utils.py:31
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...ry\repos\haystack\haystack\components\generators\utils.py:45
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...ry\repos\haystack\haystack\components\generators\utils.py:47
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...ry\repos\haystack\haystack\components\generators\utils.py:51
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...ry\repos\haystack\haystack\components\generators\utils.py:57
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...ry\repos\haystack\haystack\components\generators\utils.py:63
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...ry\repos\haystack\haystack\components\generators\utils.py:64
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...ry\repos\haystack\haystack\components\generators\utils.py:70
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...ry\repos\haystack\haystack\components\generators\utils.py:71
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...ry\repos\haystack\haystack\components\generators\utils.py:76
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D12
Hardcoded model name: '
Generates text using OpenAI's models on Azure.
It works with the gpt-4 - type models and supports streaming responses
from OpenAI API. It uses [ChatMessage](https://docs.haystack.deepset.ai/docs/chatmessage)
format in input and output.
You can customize how the text is generated by passing parameters to the
OpenAI API. Use the `**generation_kwargs` argument when you initialize
the component or when you run it. Any parameter that works with
`openai.ChatCompletion.create` will work here too.
For details on OpenAI API parameters, see
[OpenAI documentation](https://platform.openai.com/docs/api-reference/chat).
### Usage example
<!-- test-ignore -->
```python
from haystack.components.generators.chat import AzureOpenAIChatGenerator
from haystack.dataclasses import ChatMessage
from haystack.utils import Secret
messages = [ChatMessage.from_user("What's Natural Language Processing?")]
client = AzureOpenAIChatGenerator(
azure_endpoint="<Your Azure endpoint e.g. `https://your-company.azure.openai.com/>",
api_key=Secret.from_token("<your-api-key>"),
azure_deployment="<this a model name, e.g. gpt-4.1-mini>")
response = client.run(messages)
print(response)
```
```
{'replies':
[ChatMessage(_role=<ChatRole.ASSISTANT: 'assistant'>, _content=[TextContent(text=
"Natural Language Processing (NLP) is a branch of artificial intelligence that focuses on
enabling computers to understand, interpret, and generate human language in a way that is useful.")],
_name=None,
_meta={'model': 'gpt-4.1-mini', 'index': 0, 'finish_reason': 'stop',
'usage': {'prompt_tokens': 15, 'completion_tokens': 36, 'total_tokens': 51}})]
}
```
' — no routing/fallback
...pos\haystack\haystack\components\generators\chat\azure.py:29
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: 'gpt-4.1' — no routing/fallback
...pos\haystack\haystack\components\generators\chat\azure.py:88
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: 'gpt-4.1-mini' — no routing/fallback
...pos\haystack\haystack\components\generators\chat\azure.py:89
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: 'gpt-4.1-nano' — no routing/fallback
...pos\haystack\haystack\components\generators\chat\azure.py:90
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: 'gpt-4o' — no routing/fallback
...pos\haystack\haystack\components\generators\chat\azure.py:91
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: 'gpt-4o-mini' — no routing/fallback
...pos\haystack\haystack\components\generators\chat\azure.py:92
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: 'gpt-4o-audio-preview' — no routing/fallback
...pos\haystack\haystack\components\generators\chat\azure.py:93
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: 'gpt-4' — no routing/fallback
...pos\haystack\haystack\components\generators\chat\azure.py:102
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: 'gpt-4.1-mini' — no routing/fallback
...pos\haystack\haystack\components\generators\chat\azure.py:116
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: '
Initialize the Azure OpenAI Chat Generator component.
:param azure_endpoint: The endpoint of the deployed model, for example `"https://example-resource.azure.openai.com/"`.
:param api_version: The version of the API to use. Defaults to 2024-12-01-preview.
:param azure_deployment: The deployment of the model, usually the model name.
:param api_key: The API key to use for authentication.
:param azure_ad_token: [Azure Active Directory token](https://www.microsoft.com/en-us/security/business/identity-access/microsoft-entra-id).
:param organization: Your organization ID, defaults to `None`. For help, see
[Setting up your organization](https://platform.openai.com/docs/guides/production-best-practices/setting-up-your-organization).
:param streaming_callback: A callback function called when a new token is received from the stream.
It accepts [StreamingChunk](https://docs.haystack.deepset.ai/docs/data-classes#streamingchunk)
as an argument.
:param timeout: Timeout for OpenAI client calls. If not set, it defaults to either the
`OPENAI_TIMEOUT` environment variable, or 30 seconds.
:param max_retries: Maximum number of retries to contact OpenAI after an internal error.
If not set, it defaults to either the `OPENAI_MAX_RETRIES` environment variable, or set to 5.
:param generation_kwargs: Other parameters to use for the model. These parameters are sent directly to
the OpenAI endpoint. For details, see [OpenAI documentation](https://platform.openai.com/docs/api-reference/chat).
Some of the supported parameters:
- `max_completion_tokens`: An upper bound for the number of tokens that can be generated for a completion,
including visible output tokens and reasoning tokens.
- `temperature`: The sampling temperature to use. Higher values mean the model takes more risks.
Try 0.9 for more creative applications and 0 (argmax sampling) for ones with a well-defined answer.
- `top_p`: Nucleus sampling is an alternative to sampling with temperature, where the model considers
tokens with a top_p probability mass. For example, 0.1 means only the tokens comprising
the top 10% probability mass are considered.
- `n`: The number of completions to generate for each prompt. For example, with 3 prompts and n=2,
the LLM will generate two completions per prompt, resulting in 6 completions total.
- `stop`: One or more sequences after which the LLM should stop generating tokens.
- `presence_penalty`: The penalty applied if a token is already present.
Higher values make the model less likely to repeat the token.
- `frequency_penalty`: Penalty applied if a token has already been generated.
Higher values make the model less likely to repeat the token.
- `logit_bias`: Adds a logit bias to specific tokens. The keys of the dictionary are tokens, and the
values are the bias to add to that token.
- `response_format`: A JSON schema or a Pydantic model that enforces the structure of the model's response.
If provided, the output will always be validated against this
format (unless the model returns a tool call).
For details, see the [OpenAI Structured Outputs documentation](https://platform.openai.com/docs/guides/structured-outputs).
Notes:
- This parameter accepts Pydantic models and JSON schemas for latest models starting from GPT-4o.
Older models only support basic version of structured outputs through `{"type": "json_object"}`.
For detailed information on JSON mode, see the [OpenAI Structured Outputs documentation](https://platform.openai.com/docs/guides/structured-outputs#json-mode).
- For structured outputs with streaming,
the `response_format` must be a JSON schema and not a Pydantic model.
:param default_headers: Default headers to use for the AzureOpenAI client.
:param tools:
A list of Tool and/or Toolset objects, or a single Toolset for which the model can prepare calls.
:param tools_strict:
Whether to enable strict schema adherence for tool calls. If set to `True`, the model will follow exactly
the schema provided in the `parameters` field of the tool definition, but this may increase latency.
:param azure_ad_token_provider: A function that returns an Azure Active Directory token, will be invoked on
every request.
:param http_client_kwargs:
A dictionary of keyword arguments to configure a custom `httpx.Client`or `httpx.AsyncClient`.
For more information, see the [HTTPX documentation](https://www.python-httpx.org/api/#client).
' — no routing/fallback
...pos\haystack\haystack\components\generators\chat\azure.py:131
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: 'gpt-4.1-mini' — no routing/fallback
...pos\haystack\haystack\components\generators\chat\azure.py:213
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: 'gpt-4o' — no routing/fallback
...ck\haystack\components\generators\chat\azure_responses.py:72
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: 'gpt-4o-mini' — no routing/fallback
...ck\haystack\components\generators\chat\azure_responses.py:73
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: 'gpt-4.1' — no routing/fallback
...ck\haystack\components\generators\chat\azure_responses.py:75
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: 'gpt-4.1-nano' — no routing/fallback
...ck\haystack\components\generators\chat\azure_responses.py:76
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: 'gpt-4.1-mini' — no routing/fallback
...ck\haystack\components\generators\chat\azure_responses.py:77
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: '
Initialize the AzureOpenAIResponsesChatGenerator component.
:param api_key: The API key to use for authentication. Can be:
- A `Secret` object containing the API key.
- A `Secret` object containing the [Azure Active Directory token](https://www.microsoft.com/en-us/security/business/identity-access/microsoft-entra-id).
- A function that returns an Azure Active Directory token.
:param azure_endpoint: The endpoint of the deployed model, for example `"https://example-resource.azure.openai.com/"`.
:param azure_deployment: The deployment of the model, usually the model name.
:param organization: Your organization ID, defaults to `None`. For help, see
[Setting up your organization](https://platform.openai.com/docs/guides/production-best-practices/setting-up-your-organization).
:param streaming_callback: A callback function called when a new token is received from the stream.
It accepts [StreamingChunk](https://docs.haystack.deepset.ai/docs/data-classes#streamingchunk)
as an argument.
:param timeout: Timeout for OpenAI client calls. If not set, it defaults to either the
`OPENAI_TIMEOUT` environment variable, or 30 seconds.
:param max_retries: Maximum number of retries to contact OpenAI after an internal error.
If not set, it defaults to either the `OPENAI_MAX_RETRIES` environment variable, or set to 5.
:param generation_kwargs: Other parameters to use for the model. These parameters are sent
directly to the OpenAI endpoint.
See OpenAI [documentation](https://platform.openai.com/docs/api-reference/responses) for
more details.
Some of the supported parameters:
- `temperature`: What sampling temperature to use. Higher values like 0.8 will make the output more random,
while lower values like 0.2 will make it more focused and deterministic.
- `top_p`: An alternative to sampling with temperature, called nucleus sampling, where the model
considers the results of the tokens with top_p probability mass. For example, 0.1 means only the tokens
comprising the top 10% probability mass are considered.
- `previous_response_id`: The ID of the previous response.
Use this to create multi-turn conversations.
- `text_format`: A Pydantic model that enforces the structure of the model's response.
If provided, the output will always be validated against this
format (unless the model returns a tool call).
For details, see the [OpenAI Structured Outputs documentation](https://platform.openai.com/docs/guides/structured-outputs).
- `text`: A JSON schema that enforces the structure of the model's response.
If provided, the output will always be validated against this
format (unless the model returns a tool call).
Notes:
- Both JSON Schema and Pydantic models are supported for latest models starting from GPT-4o.
- If both are provided, `text_format` takes precedence and json schema passed to `text` is ignored.
- Currently, this component doesn't support streaming for structured outputs.
- Older models only support basic version of structured outputs through `{"type": "json_object"}`.
For detailed information on JSON mode, see the [OpenAI Structured Outputs documentation](https://platform.openai.com/docs/guides/structured-outputs#json-mode).
- `reasoning`: A dictionary of parameters for reasoning. For example:
- `summary`: The summary of the reasoning.
- `effort`: The level of effort to put into the reasoning. Can be `low`, `medium` or `high`.
- `generate_summary`: Whether to generate a summary of the reasoning.
Note: OpenAI does not return the reasoning tokens, but we can view summary if its enabled.
For details, see the [OpenAI Reasoning documentation](https://platform.openai.com/docs/guides/reasoning).
:param tools:
A list of Tool and/or Toolset objects, or a single Toolset for which the model can prepare calls.
:param tools_strict:
Whether to enable strict schema adherence for tool calls. If set to `True`, the model will follow exactly
the schema provided in the `parameters` field of the tool definition, but this may increase latency.
:param http_client_kwargs:
A dictionary of keyword arguments to configure a custom `httpx.Client`or `httpx.AsyncClient`.
For more information, see the [HTTPX documentation](https://www.python-httpx.org/api/#client).
' — no routing/fallback
...ck\haystack\components\generators\chat\azure_responses.py:107
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: '
Completes chats using OpenAI's large language models (LLMs).
It works with the gpt-4 and gpt-5 series models and supports streaming responses
from OpenAI API. It uses [ChatMessage](https://docs.haystack.deepset.ai/docs/chatmessage)
format in input and output.
You can customize how the text is generated by passing parameters to the
OpenAI API. Use the `**generation_kwargs` argument when you initialize
the component or when you run it. Any parameter that works with
`openai.ChatCompletion.create` will work here too.
For details on OpenAI API parameters, see
[OpenAI documentation](https://platform.openai.com/docs/api-reference/chat).
### Usage example
```python
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.dataclasses import ChatMessage
messages = [ChatMessage.from_user("What's Natural Language Processing?")]
client = OpenAIChatGenerator()
response = client.run(messages)
print(response)
```
Output:
```
{'replies':
[ChatMessage(_role=<ChatRole.ASSISTANT: 'assistant'>, _content=
[TextContent(text="Natural Language Processing (NLP) is a branch of artificial intelligence
that focuses on enabling computers to understand, interpret, and generate human language in
a way that is meaningful and useful.")],
_name=None,
_meta={'model': 'gpt-5-mini', 'index': 0, 'finish_reason': 'stop',
'usage': {'prompt_tokens': 15, 'completion_tokens': 36, 'total_tokens': 51}})
]
}
```
' — no routing/fallback
...os\haystack\haystack\components\generators\chat\openai.py:55
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: 'gpt-4.1' — no routing/fallback
...os\haystack\haystack\components\generators\chat\openai.py:105
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: 'gpt-4.1-mini' — no routing/fallback
...os\haystack\haystack\components\generators\chat\openai.py:106
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: 'gpt-4.1-nano' — no routing/fallback
...os\haystack\haystack\components\generators\chat\openai.py:107
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: 'gpt-4o' — no routing/fallback
...os\haystack\haystack\components\generators\chat\openai.py:108
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: 'gpt-4o-mini' — no routing/fallback
...os\haystack\haystack\components\generators\chat\openai.py:109
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: 'gpt-4-turbo' — no routing/fallback
...os\haystack\haystack\components\generators\chat\openai.py:110
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: 'gpt-4' — no routing/fallback
...os\haystack\haystack\components\generators\chat\openai.py:111
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: 'gpt-3.5-turbo' — no routing/fallback
...os\haystack\haystack\components\generators\chat\openai.py:112
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: '
Creates an instance of OpenAIChatGenerator. Unless specified otherwise in `model`, uses OpenAI's gpt-5-mini
Before initializing the component, you can set the 'OPENAI_TIMEOUT' and 'OPENAI_MAX_RETRIES'
environment variables to override the `timeout` and `max_retries` parameters respectively
in the OpenAI client.
:param api_key: The OpenAI API key.
You can set it with an environment variable `OPENAI_API_KEY`, or pass with this parameter
during initialization.
:param model: The name of the model to use.
:param streaming_callback: A callback function that is called when a new token is received from the stream.
The callback function accepts [StreamingChunk](https://docs.haystack.deepset.ai/docs/data-classes#streamingchunk)
as an argument.
:param api_base_url: An optional base URL.
:param organization: Your organization ID, defaults to `None`. See
[production best practices](https://platform.openai.com/docs/guides/production-best-practices/setting-up-your-organization).
:param generation_kwargs: Other parameters to use for the model. These parameters are sent directly to
the OpenAI endpoint. See OpenAI [documentation](https://platform.openai.com/docs/api-reference/chat) for
more details.
Some of the supported parameters:
- `max_completion_tokens`: An upper bound for the number of tokens that can be generated for a completion,
including visible output tokens and reasoning tokens.
- `temperature`: What sampling temperature to use. Higher values mean the model will take more risks.
Try 0.9 for more creative applications and 0 (argmax sampling) for ones with a well-defined answer.
- `top_p`: An alternative to sampling with temperature, called nucleus sampling, where the model
considers the results of the tokens with top_p probability mass. For example, 0.1 means only the tokens
comprising the top 10% probability mass are considered.
- `n`: How many completions to generate for each prompt. For example, if the LLM gets 3 prompts and n is 2,
it will generate two completions for each of the three prompts, ending up with 6 completions in total.
- `stop`: One or more sequences after which the LLM should stop generating tokens.
- `presence_penalty`: What penalty to apply if a token is already present at all. Bigger values mean
the model will be less likely to repeat the same token in the text.
- `frequency_penalty`: What penalty to apply if a token has already been generated in the text.
Bigger values mean the model will be less likely to repeat the same token in the text.
- `logit_bias`: Add a logit bias to specific tokens. The keys of the dictionary are tokens, and the
values are the bias to add to that token.
- `response_format`: A JSON schema or a Pydantic model that enforces the structure of the model's response.
If provided, the output will always be validated against this
format (unless the model returns a tool call).
For details, see the [OpenAI Structured Outputs documentation](https://platform.openai.com/docs/guides/structured-outputs).
Notes:
- This parameter accepts Pydantic models and JSON schemas for latest models starting from GPT-4o.
Older models only support basic version of structured outputs through `{"type": "json_object"}`.
For detailed information on JSON mode, see the [OpenAI Structured Outputs documentation](https://platform.openai.com/docs/guides/structured-outputs#json-mode).
- For structured outputs with streaming,
the `response_format` must be a JSON schema and not a Pydantic model.
:param timeout:
Timeout for OpenAI client calls. If not set, it defaults to either the
`OPENAI_TIMEOUT` environment variable, or 30 seconds.
:param max_retries:
Maximum number of retries to contact OpenAI after an internal error.
If not set, it defaults to either the `OPENAI_MAX_RETRIES` environment variable, or set to 5.
:param tools:
A list of Tool and/or Toolset objects, or a single Toolset for which the model can prepare calls.
:param tools_strict:
Whether to enable strict schema adherence for tool calls. If set to `True`, the model will follow exactly
the schema provided in the `parameters` field of the tool definition, but this may increase latency.
:param http_client_kwargs:
A dictionary of keyword arguments to configure a custom `httpx.Client`or `httpx.AsyncClient`.
For more information, see the [HTTPX documentation](https://www.python-httpx.org/api/#client).
' — no routing/fallback
...os\haystack\haystack\components\generators\chat\openai.py:131
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: '
Completes chats using OpenAI's Responses API.
It works with the gpt-4 and o-series models and supports streaming responses
from OpenAI API. It uses [ChatMessage](https://docs.haystack.deepset.ai/docs/chatmessage)
format in input and output.
You can customize how the text is generated by passing parameters to the
OpenAI API. Use the `**generation_kwargs` argument when you initialize
the component or when you run it. Any parameter that works with
`openai.Responses.create` will work here too.
For details on OpenAI API parameters, see
[OpenAI documentation](https://platform.openai.com/docs/api-reference/responses).
### Usage example
```python
from haystack.components.generators.chat import OpenAIResponsesChatGenerator
from haystack.dataclasses import ChatMessage
messages = [ChatMessage.from_user("What's Natural Language Processing?")]
client = OpenAIResponsesChatGenerator(generation_kwargs={"reasoning": {"effort": "low", "summary": "auto"}})
response = client.run(messages)
print(response)
```
' — no routing/fallback
...k\haystack\components\generators\chat\openai_responses.py:48
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: 'gpt-4.1' — no routing/fallback
...k\haystack\components\generators\chat\openai_responses.py:86
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: 'gpt-4.1-mini' — no routing/fallback
...k\haystack\components\generators\chat\openai_responses.py:87
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: 'gpt-4.1-nano' — no routing/fallback
...k\haystack\components\generators\chat\openai_responses.py:88
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: 'gpt-4o' — no routing/fallback
...k\haystack\components\generators\chat\openai_responses.py:89
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: 'gpt-4o-mini' — no routing/fallback
...k\haystack\components\generators\chat\openai_responses.py:90
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: '
Creates an instance of OpenAIResponsesChatGenerator. Uses OpenAI's gpt-5-mini by default.
Before initializing the component, you can set the 'OPENAI_TIMEOUT' and 'OPENAI_MAX_RETRIES'
environment variables to override the `timeout` and `max_retries` parameters respectively
in the OpenAI client.
:param api_key: The OpenAI API key.
You can set it with an environment variable `OPENAI_API_KEY`, or pass with this parameter
during initialization.
:param model: The name of the model to use.
:param streaming_callback: A callback function that is called when a new token is received from the stream.
The callback function accepts [StreamingChunk](https://docs.haystack.deepset.ai/docs/data-classes#streamingchunk)
as an argument.
:param api_base_url: An optional base URL.
:param organization: Your organization ID, defaults to `None`. See
[production best practices](https://platform.openai.com/docs/guides/production-best-practices/setting-up-your-organization).
:param generation_kwargs: Other parameters to use for the model. These parameters are sent
directly to the OpenAI endpoint.
See OpenAI [documentation](https://platform.openai.com/docs/api-reference/responses) for
more details.
Some of the supported parameters:
- `temperature`: What sampling temperature to use. Higher values like 0.8 will make the output more random,
while lower values like 0.2 will make it more focused and deterministic.
- `top_p`: An alternative to sampling with temperature, called nucleus sampling, where the model
considers the results of the tokens with top_p probability mass. For example, 0.1 means only the tokens
comprising the top 10% probability mass are considered.
- `previous_response_id`: The ID of the previous response.
Use this to create multi-turn conversations.
- `text_format`: A Pydantic model that enforces the structure of the model's response.
If provided, the output will always be validated against this
format (unless the model returns a tool call).
For details, see the [OpenAI Structured Outputs documentation](https://platform.openai.com/docs/guides/structured-outputs).
- `text`: A JSON schema that enforces the structure of the model's response.
If provided, the output will always be validated against this
format (unless the model returns a tool call).
Notes:
- Both JSON Schema and Pydantic models are supported for latest models starting from GPT-4o.
- If both are provided, `text_format` takes precedence and json schema passed to `text` is ignored.
- Currently, this component doesn't support streaming for structured outputs.
- Older models only support basic version of structured outputs through `{"type": "json_object"}`.
For detailed information on JSON mode, see the [OpenAI Structured Outputs documentation](https://platform.openai.com/docs/guides/structured-outputs#json-mode).
- `reasoning`: A dictionary of parameters for reasoning. For example:
- `summary`: The summary of the reasoning.
- `effort`: The level of effort to put into the reasoning. Can be `low`, `medium` or `high`.
- `generate_summary`: Whether to generate a summary of the reasoning.
Note: OpenAI does not return the reasoning tokens, but we can view summary if its enabled.
For details, see the [OpenAI Reasoning documentation](https://platform.openai.com/docs/guides/reasoning).
:param timeout:
Timeout for OpenAI client calls. If not set, it defaults to either the
`OPENAI_TIMEOUT` environment variable, or 30 seconds.
:param max_retries:
Maximum number of retries to contact OpenAI after an internal error.
If not set, it defaults to either the `OPENAI_MAX_RETRIES` environment variable, or set to 5.
:param tools:
The tools that the model can use to prepare calls. This parameter can accept either a
mixed list of Haystack `Tool` objects and Haystack `Toolset`. Or you can pass a dictionary of
OpenAI/MCP tool definitions.
Note: You cannot pass OpenAI/MCP tools and Haystack tools together.
For details on tool support, see [OpenAI documentation](https://platform.openai.com/docs/api-reference/responses/create#responses-create-tools).
:param tools_strict:
Whether to enable strict schema adherence for tool calls. If set to `False`, the model may not exactly
follow the schema provided in the `parameters` field of the tool definition. In Response API, tool calls
are strict by default.
:param http_client_kwargs:
A dictionary of keyword arguments to configure a custom `httpx.Client`or `httpx.AsyncClient`.
For more information, see the [HTTPX documentation](https://www.python-httpx.org/api/#client).
' — no routing/fallback
...k\haystack\components\generators\chat\openai_responses.py:117
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: '
A component that merges multiple input branches of a pipeline into a single output stream.
`BranchJoiner` receives multiple inputs of the same data type and forwards the first received value
to its output. This is useful for scenarios where multiple branches need to converge before proceeding.
### Common Use Cases:
- **Loop Handling:** `BranchJoiner` helps close loops in pipelines. For example, if a pipeline component validates
or modifies incoming data and produces an error-handling branch, `BranchJoiner` can merge both branches and send
(or resend in the case of a loop) the data to the component that evaluates errors. See "Usage example" below.
- **Decision-Based Merging:** `BranchJoiner` reconciles branches coming from Router components (such as
`ConditionalRouter`, `TextLanguageRouter`). Suppose a `TextLanguageRouter` directs user queries to different
Retrievers based on the detected language. Each Retriever processes its assigned query and passes the results
to `BranchJoiner`, which consolidates them into a single output before passing them to the next component, such
as a `PromptBuilder`.
### Example Usage:
```python
import json
from haystack import Pipeline
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.components.joiners import BranchJoiner
from haystack.components.validators import JsonSchemaValidator
from haystack.dataclasses import ChatMessage
# Define a schema for validation
person_schema = {
"type": "object",
"properties": {
"first_name": {"type": "string", "pattern": "^[A-Z][a-z]+$"},
"last_name": {"type": "string", "pattern": "^[A-Z][a-z]+$"},
"nationality": {"type": "string", "enum": ["Italian", "Portuguese", "American"]},
},
"required": ["first_name", "last_name", "nationality"]
}
# Initialize a pipeline
pipe = Pipeline()
# Add components to the pipeline
pipe.add_component("joiner", BranchJoiner(list[ChatMessage]))
pipe.add_component("generator", OpenAIChatGenerator(model="gpt-4.1-mini"))
pipe.add_component("validator", JsonSchemaValidator(json_schema=person_schema))
# And connect them
pipe.connect("joiner", "generator")
pipe.connect("generator.replies", "validator.messages")
pipe.connect("validator.validation_error", "joiner")
result = pipe.run(
data={
"generator": {"generation_kwargs": {"response_format": {"type": "json_object"}}},
"joiner": {"value": [ChatMessage.from_user("Create json from Peter Parker")]}}
)
print(json.loads(result["validator"]["validated"][0].text))
# >> {'first_name': 'Peter', 'last_name': 'Parker', 'nationality': 'American', 'name': 'Spider-Man', 'occupation':
# >> 'Superhero', 'age': 23, 'location': 'New York City'}
```
Note that `BranchJoiner` can manage only one data type at a time. In this case, `BranchJoiner` is created for
passing `list[ChatMessage]`. This determines the type of data that `BranchJoiner` will receive from the upstream
connected components and also the type of data that `BranchJoiner` will send through its output.
In the code example, `BranchJoiner` receives a looped back `list[ChatMessage]` from the `JsonSchemaValidator` and
sends it down to the `OpenAIChatGenerator` for re-generation. We can have multiple loopback connections in the
pipeline. In this instance, the downstream component is only one (the `OpenAIChatGenerator`), but the pipeline could
have more than one downstream component.
' — no routing/fallback
...lery\repos\haystack\haystack\components\joiners\branch.py:14
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: '
A component that returns a list of semantically similar queries to improve retrieval recall in RAG systems.
The component uses a chat generator to expand queries. The chat generator is expected to return a JSON response
with the following structure:
```json
{"queries": ["expanded query 1", "expanded query 2", "expanded query 3"]}
```
### Usage example
```python
from haystack.components.generators.chat.openai import OpenAIChatGenerator
from haystack.components.query import QueryExpander
expander = QueryExpander(
chat_generator=OpenAIChatGenerator(model="gpt-4.1-mini"),
n_expansions=3
)
result = expander.run(query="green energy sources")
print(result["queries"])
# Output: ['alternative query 1', 'alternative query 2', 'alternative query 3', 'green energy sources']
# Note: Up to 3 additional queries + 1 original query (if include_original_query=True)
# To control total number of queries:
expander = QueryExpander(n_expansions=2, include_original_query=True) # Up to 3 total
# or
expander = QueryExpander(n_expansions=3, include_original_query=False) # Exactly 3 total
```
' — no routing/fallback
...epos\haystack\haystack\components\query\query_expander.py:55
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: '
Initialize the QueryExpander component.
:param chat_generator: The chat generator component to use for query expansion.
If None, a default OpenAIChatGenerator with gpt-4.1-mini model is used.
:param prompt_template: Custom [PromptBuilder](https://docs.haystack.deepset.ai/docs/promptbuilder)
template for query expansion. The template should instruct the LLM to return a JSON response with the
structure: `{"queries": ["query1", "query2", "query3"]}`. The template should include 'query' and
'n_expansions' variables.
:param n_expansions: Number of alternative queries to generate (default: 4).
:param include_original_query: Whether to include the original query in the output.
' — no routing/fallback
...epos\haystack\haystack\components\query\query_expander.py:95
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: 'gpt-4.1-mini' — no routing/fallback
...epos\haystack\haystack\components\query\query_expander.py:115
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: 'gpt-4.1-mini' — no routing/fallback
...\repos\haystack\haystack\components\rankers\llm_ranker.py:21
Use model routing or configuration instead of hardcoded names
MEDIUM
D12
Hardcoded model name: '
Ranks documents for a query using a Large Language Model.
The LLM is expected to return a JSON object containing ranked document indices.
Usage example:
```python
from haystack import Document
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.components.rankers import LLMRanker
chat_generator = OpenAIChatGenerator(
model="gpt-4.1-mini",
generation_kwargs={
"temperature": 0.0,
"response_format": {
"type": "json_schema",
"json_schema": {
"name": "document_ranking",
"schema": {
"type": "object",
"properties": {
"documents": {
"type": "array",
"items": {
"type": "object",
"properties": {"index": {"type": "integer"}},
"required": ["index"],
"additionalProperties": False,
},
}
},
"required": ["documents"],
"additionalProperties": False,
},
},
},
},
)
ranker = LLMRanker(chat_generator=chat_generator)
documents = [
Document(id="paris", content="Paris is the capital of France."),
Document(id="berlin", content="Berlin is the capital of Germany."),
]
result = ranker.run(query="capital of Germany", documents=documents)
print(result["documents"][0].id)
```
' — no routing/fallback
...\repos\haystack\haystack\components\rankers\llm_ranker.py:83
Use model routing or configuration instead of hardcoded names
MEDIUM
D5
print() used instead of structured logging
...os\haystack\haystack\human_in_the_loop\user_interfaces.py:144
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...os\haystack\haystack\human_in_the_loop\user_interfaces.py:155
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...os\haystack\haystack\human_in_the_loop\user_interfaces.py:156
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...os\haystack\haystack\human_in_the_loop\user_interfaces.py:157
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...os\haystack\haystack\human_in_the_loop\user_interfaces.py:158
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...os\haystack\haystack\human_in_the_loop\user_interfaces.py:161
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...os\haystack\haystack\human_in_the_loop\user_interfaces.py:163
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...os\haystack\haystack\human_in_the_loop\user_interfaces.py:164
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...os\haystack\haystack\human_in_the_loop\user_interfaces.py:194
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D5
print() used instead of structured logging
...os\haystack\haystack\human_in_the_loop\user_interfaces.py:211
Use logging.* or structlog.* for structured, searchable logs
MEDIUM
D12
Hardcoded model name: '
A Tool that wraps Haystack Pipelines, allowing them to be used as tools by LLMs.
PipelineTool automatically generates LLM-compatible tool schemas from pipeline input sockets,
which are derived from the underlying components in the pipeline.
Key features:
- Automatic LLM tool calling schema generation from pipeline inputs
- Description extraction of pipeline inputs based on the underlying component docstrings
To use PipelineTool, you first need a Haystack pipeline.
Below is an example of creating a PipelineTool
## Usage Example:
```python
from haystack import Document, Pipeline
from haystack.dataclasses import ChatMessage
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack.components.embedders.sentence_transformers_text_embedder import SentenceTransformersTextEmbedder
from haystack.components.embedders.sentence_transformers_document_embedder import (
SentenceTransformersDocumentEmbedder
)
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.components.retrievers import InMemoryEmbeddingRetriever
from haystack.components.agents import Agent
from haystack.tools import PipelineTool
# Initialize a document store and add some documents
document_store = InMemoryDocumentStore()
document_embedder = SentenceTransformersDocumentEmbedder(model="sentence-transformers/all-MiniLM-L6-v2")
documents = [
Document(content="Nikola Tesla was a Serbian-American inventor and electrical engineer."),
Document(
content="He is best known for his contributions to the design of the modern alternating current (AC) "
"electricity supply system."
),
]
docs_with_embeddings = document_embedder.run(documents=documents)["documents"]
document_store.write_documents(docs_with_embeddings)
# Build a simple retrieval pipeline
retrieval_pipeline = Pipeline()
retrieval_pipeline.add_component(
"embedder", SentenceTransformersTextEmbedder(model="sentence-transformers/all-MiniLM-L6-v2")
)
retrieval_pipeline.add_component("retriever", InMemoryEmbeddingRetriever(document_store=document_store))
retrieval_pipeline.connect("embedder.embedding", "retriever.query_embedding")
# Wrap the pipeline as a tool
retriever_tool = PipelineTool(
pipeline=retrieval_pipeline,
input_mapping={"query": ["embedder.text"]},
output_mapping={"retriever.documents": "documents"},
name="document_retriever",
description="For any questions about Nikola Tesla, always use this tool",
)
# Create an Agent with the tool
agent = Agent(
chat_generator=OpenAIChatGenerator(model="gpt-4.1-mini"),
tools=[retriever_tool]
)
# Let the Agent handle a query
result = agent.run([ChatMessage.from_user("Who was Nikola Tesla?")])
# Print result of the tool call
print("Tool Call Result:")
print(result["messages"][2].tool_call_result.result)
print("")
# Print answer
print("Answer:")
print(result["messages"][-1].text)
```
' — no routing/fallback
...en\gallery\repos\haystack\haystack\tools\pipeline_tool.py:22
Use model routing or configuration instead of hardcoded names
MEDIUM
D4
Exposed Generic Secret: tok...N }}
...rden\gallery\repos\haystack\.github\workflows\labeler.yml:15
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM
D4
Exposed Generic Secret: api...-key
...aystack\docs-website\reference\integrations-api\cohere.md:803
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM
D4
Exposed Generic Secret: api...key>
...aystack\docs-website\reference\integrations-api\qdrant.md:581
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM
D4
Exposed Generic Secret: api...-key
...ce_versioned_docs\version-2.18\integrations-api\cohere.md:803
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM
D4
Exposed Generic Secret: api...key>
...ce_versioned_docs\version-2.18\integrations-api\qdrant.md:581
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM
D4
Exposed Generic Secret: api...-key
...ce_versioned_docs\version-2.19\integrations-api\cohere.md:803
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM
D4
Exposed Generic Secret: api...key>
...ce_versioned_docs\version-2.19\integrations-api\qdrant.md:581
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM
D4
Exposed Generic Secret: api...-key
...ce_versioned_docs\version-2.20\integrations-api\cohere.md:803
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM
D4
Exposed Generic Secret: api...key>
...ce_versioned_docs\version-2.20\integrations-api\qdrant.md:581
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM
D4
Exposed Generic Secret: api...-key
...ce_versioned_docs\version-2.21\integrations-api\cohere.md:803
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM
D4
Exposed Generic Secret: api...key>
...ce_versioned_docs\version-2.21\integrations-api\qdrant.md:581
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM
D4
Exposed Generic Secret: api...-key
...ce_versioned_docs\version-2.22\integrations-api\cohere.md:803
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM
D4
Exposed Generic Secret: api...key>
...ce_versioned_docs\version-2.22\integrations-api\qdrant.md:581
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM
D4
Exposed Generic Secret: api...-key
...ce_versioned_docs\version-2.23\integrations-api\cohere.md:803
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM
D4
Exposed Generic Secret: api...key>
...ce_versioned_docs\version-2.23\integrations-api\qdrant.md:581
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM
D4
Exposed Generic Secret: api...-key
...ce_versioned_docs\version-2.24\integrations-api\cohere.md:803
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM
D4
Exposed Generic Secret: api...key>
...ce_versioned_docs\version-2.24\integrations-api\qdrant.md:581
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM
D4
Exposed Generic Secret: api...-key
...ce_versioned_docs\version-2.25\integrations-api\cohere.md:803
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM
D4
Exposed Generic Secret: api...key>
...ce_versioned_docs\version-2.25\integrations-api\qdrant.md:581
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM
D4
Exposed Generic Secret: api...-key
...ce_versioned_docs\version-2.26\integrations-api\cohere.md:803
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM
D4
Exposed Generic Secret: api...key>
...ce_versioned_docs\version-2.26\integrations-api\qdrant.md:581
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM
D4
Exposed Generic Secret: tok...oken
...ersioned_docs\version-2.27\haystack-api\connectors_api.md:173
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM
D4
Exposed Generic Secret: api...-key
..._versioned_docs\version-2.27\haystack-api\pipeline_api.md:423
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM
D4
Exposed Generic Secret: api...-key
...ce_versioned_docs\version-2.27\integrations-api\cohere.md:803
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM
D4
Exposed Generic Secret: api...key>
...ce_versioned_docs\version-2.27\integrations-api\qdrant.md:581
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM
D4
Exposed Generic Secret: api...rJgP
...n\gallery\repos\haystack\haystack\telemetry\_telemetry.py:55
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM
D4
Exposed Generic Secret: tok...ken>
...eleasenotes\notes\add-TEI-embedders-8c76593bc25a7219.yaml:11
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM
D4
Exposed Generic Secret: tok...ken>
...eleasenotes\notes\add-TEI-embedders-8c76593bc25a7219.yaml:23
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM
D8
Agent class 'LLM' has no defined lifecycle states
...repos\haystack\haystack\components\generators\chat\llm.py:19
Add state machine (ACTIVE/SUSPENDED/RETIRED) for agent lifecycle
MEDIUM
D12
Agent class 'TestAgent' has no cost tracking
...llery\repos\haystack\test\components\agents\test_agent.py:186
Track token usage and costs per agent execution
MEDIUM
D12
Agent class 'TestAgent' has no cost tracking
...\repos\haystack\test\components\agents\test_agent_hitl.py:55
Track token usage and costs per agent execution
MEDIUM
D12
Agent class 'FakeAgent' has no cost tracking
...ry\repos\haystack\test\core\pipeline\features\test_run.py:5057
Track token usage and costs per agent execution
MEDIUM
D8
Agent class 'FakeAgent' has no defined lifecycle states
...ry\repos\haystack\test\core\pipeline\features\test_run.py:5057
Add state machine (ACTIVE/SUSPENDED/RETIRED) for agent lifecycle
MEDIUM
D3
No concurrency block — parallel deployments possible
...\haystack\.github\workflows\auto_approve_api_ref_sync.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM
D3
No concurrency block — parallel deployments possible
...n\gallery\repos\haystack\.github\workflows\branch_off.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM
D3
No concurrency block — parallel deployments possible
...allery\repos\haystack\.github\workflows\check_api_ref.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM
D3
No concurrency block — parallel deployments possible
...n\gallery\repos\haystack\.github\workflows\ci_metrics.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM
D3
No concurrency block — parallel deployments possible
...llery\repos\haystack\.github\workflows\docker_release.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM
D14
Push trigger without branch protection guard
...llery\repos\haystack\.github\workflows\docker_release.yml:1
Add if: github.ref == 'refs/heads/main' or restrict push trigger branches
MEDIUM
D3
No concurrency block — parallel deployments possible
...ack\.github\workflows\docs-website-test-docs-snippets.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM
D3
No concurrency block — parallel deployments possible
...ry\repos\haystack\.github\workflows\docstring_labeler.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM
D3
No concurrency block — parallel deployments possible
...lery\repos\haystack\.github\workflows\docusaurus_sync.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM
D14
Push trigger without branch protection guard
...lery\repos\haystack\.github\workflows\docusaurus_sync.yml:1
Add if: github.ref == 'refs/heads/main' or restrict push trigger branches
MEDIUM
D3
No concurrency block — parallel deployments possible
...s\warden\gallery\repos\haystack\.github\workflows\e2e.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM
D3
No concurrency block — parallel deployments possible
...llery\repos\haystack\.github\workflows\github_release.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM
D14
Push trigger without branch protection guard
...llery\repos\haystack\.github\workflows\github_release.yml:1
Add if: github.ref == 'refs/heads/main' or restrict push trigger branches
MEDIUM
D3
No concurrency block — parallel deployments possible
...rden\gallery\repos\haystack\.github\workflows\labeler.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM
D3
continue-on-error: true — pipeline failures silently suppressed
...y\repos\haystack\.github\workflows\license_compliance.yml:51
Remove continue-on-error or scope it to non-critical steps only
MEDIUM
D3
No concurrency block — parallel deployments possible
...y\repos\haystack\.github\workflows\license_compliance.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM
D3
No concurrency block — parallel deployments possible
...s\haystack\.github\workflows\nightly_testpypi_release.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM
D3
No concurrency block — parallel deployments possible
...rden\gallery\repos\haystack\.github\workflows\project.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM
D3
No concurrency block — parallel deployments possible
...epos\haystack\.github\workflows\promote_unstable_docs.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM
D14
Push trigger without branch protection guard
...epos\haystack\.github\workflows\promote_unstable_docs.yml:1
Add if: github.ref == 'refs/heads/main' or restrict push trigger branches
MEDIUM
D3
No concurrency block — parallel deployments possible
...stack\.github\workflows\push_release_notes_to_website.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM
D3
No concurrency block — parallel deployments possible
...gallery\repos\haystack\.github\workflows\pypi_release.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM
D14
Push trigger without branch protection guard
...gallery\repos\haystack\.github\workflows\pypi_release.yml:1
Add if: github.ref == 'refs/heads/main' or restrict push trigger branches
MEDIUM
D3
No concurrency block — parallel deployments possible
...allery\repos\haystack\.github\workflows\release_notes.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM
D3
No concurrency block — parallel deployments possible
...epos\haystack\.github\workflows\release_notes_skipper.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM
D3
No concurrency block — parallel deployments possible
...\warden\gallery\repos\haystack\.github\workflows\slow.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM
D14
Push trigger without branch protection guard
...\warden\gallery\repos\haystack\.github\workflows\slow.yml:1
Add if: github.ref == 'refs/heads/main' or restrict push trigger branches
MEDIUM
D3
No concurrency block — parallel deployments possible
...warden\gallery\repos\haystack\.github\workflows\stale.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM
D3
continue-on-error: true — pipeline failures silently suppressed
...warden\gallery\repos\haystack\.github\workflows\tests.yml:146
Remove continue-on-error or scope it to non-critical steps only
MEDIUM
D3
No concurrency block — parallel deployments possible
...warden\gallery\repos\haystack\.github\workflows\tests.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM
D14
Push trigger without branch protection guard
...warden\gallery\repos\haystack\.github\workflows\tests.yml:1
Add if: github.ref == 'refs/heads/main' or restrict push trigger branches
MEDIUM
D3
No concurrency block — parallel deployments possible
...ry\repos\haystack\.github\workflows\workflows_linting.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM
D1
Cloud AI endpoint URL hardcoded in source — hinders environment portability
...pos\haystack\test\components\audio\test_whisper_remote.py:112
Move AI service endpoints to environment variables or configuration files
OWASP LLM06
MEDIUM
D1
Cloud AI endpoint URL hardcoded in source — hinders environment portability
...test\components\embedders\test_azure_document_embedder.py:206
Move AI service endpoints to environment variables or configuration files
OWASP LLM06
MEDIUM
D1
Cloud AI endpoint URL hardcoded in source — hinders environment portability
...\haystack\test\components\generators\test_openai_dalle.py:44
Move AI service endpoints to environment variables or configuration files
OWASP LLM06
MEDIUM
D17
No adversarial testing evidence — no red team, no prompt injection tests
Implement adversarial testing for agent systems
MEDIUM
D17
No tool-call attack simulation — agent tool calls not tested against adversarial inputs
Implement adversarial testing for agent systems
MEDIUM
D17
No multi-agent chaos engineering — agent swarms not stress tested
Implement adversarial testing for agent systems