Scan path: C:\Users\gilad\Projects\warden\gallery\repos\haystack
Scanned: 2026-04-10 23:09 UTC
Warden: v1.6.0 · Scoring model v4.3 · 17 dimensions (weighted) · 235 pts
🔒 Privacy guarantee
All data collected locally — nothing left this machine.
API keys: partial hashes only.
Log content: never stored.
📊 Scanned 573 files (553 Python · 20 JS/TS) in haystack across 8 scan layers
15
/ 100
35 / 235 raw
UNGOVERNED
Core Governance (10 / 100)
D1 Tool Inventory
4 / 25
MEDIUM Cloud AI endpoint URL hardcoded in source — hinders environment portability
MEDIUM Cloud AI endpoint URL hardcoded in source — hinders environment portability
MEDIUM Cloud AI endpoint URL hardcoded in source — hinders environment portability
D2 Risk Detection
0 / 20
D3 Policy Coverage
4 / 20
CRITICAL Agent with unrestricted tool access — all tools passed without allowlist
CRITICAL Agent with unrestricted tool access — all tools passed without allowlist
CRITICAL Agent with unrestricted tool access — all tools passed without allowlist
CRITICAL Agent with unrestricted tool access — all tools passed without allowlist
CRITICAL Agent with unrestricted tool access — all tools passed without allowlist
+ 27 more findings
D4 Credential Management
0 / 20
MEDIUM Exposed Generic Secret: tok...N }}
MEDIUM Exposed Generic Secret: api...-key
CRITICAL Exposed Database URL with credentials: pos...NAME
CRITICAL Exposed Database URL with credentials: mon...ing}
CRITICAL Exposed Database URL with credentials: mon...ing}
+ 75 more findings
D5 Log Hygiene
2 / 10
MEDIUM print() used instead of structured logging
MEDIUM print() used instead of structured logging
MEDIUM print() used instead of structured logging
MEDIUM print() used instead of structured logging
MEDIUM print() used instead of structured logging
+ 43 more findings
D6 Framework Coverage
0 / 5
Advanced Controls (9 / 50)
D7 Human-in-the-Loop
8 / 15
D8 Agent Identity
1 / 15
MEDIUM Agent class 'LLM' has no defined lifecycle states
HIGH Agent spawns sub-agents without depth limit
HIGH Agent spawns sub-agents without depth limit
HIGH Agent spawns sub-agents without depth limit
HIGH Agent spawns sub-agents without depth limit
+ 35 more findings
D9 Threat Detection
0 / 20
HIGH Empty exception handler — errors silently swallowed
HIGH Empty exception handler — errors silently swallowed
HIGH Empty exception handler — errors silently swallowed
HIGH Empty exception handler — errors silently swallowed
HIGH Empty exception handler — errors silently swallowed
+ 4 more findings
Ecosystem (11 / 55)
D10 Prompt Security
0 / 15
HIGH Azure AI used without ContentSafetyClient — no content moderation
HIGH Azure AI used without ContentSafetyClient — no content moderation
HIGH Azure AI used without ContentSafetyClient — no content moderation
HIGH Azure AI used without ContentSafetyClient — no content moderation
HIGH Azure AI used without ContentSafetyClient — no content moderation
+ 10 more findings
D11 Cloud / Platform
4 / 10
D12 LLM Observability
0 / 10
MEDIUM Hardcoded model name: ' Generates text using OpenAI's large language models (LLMs). It works with the gpt-4 - t
MEDIUM Hardcoded model name: 'gpt-4.1-mini' — no routing/fallback
MEDIUM Hardcoded model name: 'gpt-4.1-mini' — no routing/fallback
MEDIUM Hardcoded model name: ' Generates text using OpenAI's large language models (LLMs). It works with the gpt-4 and
MEDIUM Hardcoded model name: ' Generates text using OpenAI's models on Azure. It works with the gpt-4 - type models an
+ 43 more findings
D13 Data Recovery
2 / 10
D14 Compliance Maturity
5 / 10
LOW No environment: block — no required reviewers for deployments
LOW No environment: block — no required reviewers for deployments
LOW No environment: block — no required reviewers for deployments
LOW No environment: block — no required reviewers for deployments
LOW No environment: block — no required reviewers for deployments
+ 25 more findings
Unique Capabilities (5 / 30)
D15 Post-Exec Verification
2 / 10
HIGH Tool result assigned directly without verification
HIGH Tool result assigned directly without verification
HIGH Tool result assigned directly without verification
HIGH Tool result assigned directly without verification
D16 Data Flow Governance
2 / 10
D17 Adversarial Resilience
1 / 10
CRITICAL No content injection defense — hidden HTML/CSS/zero-width instructions pass to agents undetected. (86% attack success ra
CRITICAL No RAG poisoning protection — knowledge base documents not scanned for embedded instructions. (<0.1% contamination = >80
HIGH No behavioral trap detection — post-execution behavioral changes not monitored. (10/10 M365 Copilot attacks succeeded)
HIGH No approval integrity verification -- agent summaries for approval not cross-checked against actual actions. (Approval f
MEDIUM No adversarial testing evidence — no red team, no prompt injection tests
+ 2 more findings
Score reflects only what Warden can observe locally. Undetected controls are scored as 0, not assumed good. Dimensions are weighted by governance impact. Methodology: SCORING.md
Total Findings
316
45 CRITICAL · 85 HIGH
Tools Detected
0
None detected
Credentials
62
In source code
Governance Gaps
6
of 17 dimensions
Compliance Refs
12
EU AI Act / OWASP / MITRE
🛡 Governance Layer Detection0 tools detected · 17 dimensions
D2: Risk Detection — none detected
Risk classification, semantic analysis, intent-parameter consistency
0 / 20 pts
D4: Credential Management — none detected
Env var exposure, secrets manager, key rotation, NHI credential lifecycle
0 / 20 pts
D6: Framework Coverage — none detected
LangChain/AutoGen/CrewAI/custom framework detection
0 / 5 pts
D9: Threat Detection — none detected
Behavioral baselines, anomaly detection, cross-session tracking, kill switch
0 / 20 pts
D10: Prompt Security — none detected
Prompt injection detection, jailbreak prevention, content filtering
0 / 15 pts
D12: LLM Observability — none detected
Cost tracking, latency monitoring, model analytics
0 / 10 pts
📊 Solutions Comparison2 rows · 17 dimensions · 235 max pts
Tool D1D2D3D4D5D6D7D8D9D10D11D12D13D14D15D16D17 /235 /100
Max pts252020201051515201510101010101010235
SharkRouter231818189514141814999999921491
Your Scan404020810040252213515
SharkRouter per-dimension scores are proportional estimates from total score. Detected tool scores are totals only (per-dimension breakdown not available). Methodology: SCORING.md
🔎 Findings316 total
CRITICAL 45
CRITICAL D3
Agent with unrestricted tool access — all tools passed without allowlist
...llery\repos\haystack\test\components\agents\test_agent.py:284
Scope tools to only what the agent needs
EU AI Act Article 15
CRITICAL D3
Agent with unrestricted tool access — all tools passed without allowlist
...\repos\haystack\test\components\agents\test_agent_hitl.py:58
Scope tools to only what the agent needs
EU AI Act Article 15
CRITICAL D3
Agent with unrestricted tool access — all tools passed without allowlist
...\repos\haystack\test\components\agents\test_agent_hitl.py:132
Scope tools to only what the agent needs
EU AI Act Article 15
Show 42 more CRITICAL findings
CRITICAL D3
Agent with unrestricted tool access — all tools passed without allowlist
...\repos\haystack\test\components\agents\test_agent_hitl.py:150
Scope tools to only what the agent needs
EU AI Act Article 15
CRITICAL D3
Agent with unrestricted tool access — all tools passed without allowlist
...\repos\haystack\test\components\agents\test_agent_hitl.py:172
Scope tools to only what the agent needs
EU AI Act Article 15
CRITICAL D3
Agent with unrestricted tool access — all tools passed without allowlist
...lery\repos\haystack\test\tools\test_searchable_toolset.py:769
Scope tools to only what the agent needs
EU AI Act Article 15
CRITICAL D3
Agent with unrestricted tool access — all tools passed without allowlist
...gallery\repos\haystack\test\tools\test_toolset_wrapper.py:101
Scope tools to only what the agent needs
EU AI Act Article 15
CRITICAL D4
Exposed Database URL with credentials: pos...NAME
...stack\docs-website\reference\integrations-api\pgvector.md:353
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
CRITICAL D4
Exposed Database URL with credentials: mon...ing}
...\docs-website\reference\integrations-api\mongodb_atlas.md:319
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
CRITICAL D4
Exposed Database URL with credentials: mon...ing}
...\docs-website\reference\integrations-api\mongodb_atlas.md:372
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
CRITICAL D4
Exposed Database URL with credentials: pos...NAME
..._versioned_docs\version-2.18\integrations-api\pgvector.md:364
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
CRITICAL D4
Exposed Database URL with credentials: mon...ing}
...ioned_docs\version-2.18\integrations-api\mongodb_atlas.md:343
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
CRITICAL D4
Exposed Database URL with credentials: mon...ing}
...ioned_docs\version-2.18\integrations-api\mongodb_atlas.md:394
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
CRITICAL D4
Exposed Database URL with credentials: mon...ing}
...ioned_docs\version-2.19\integrations-api\mongodb_atlas.md:319
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
CRITICAL D4
Exposed Database URL with credentials: mon...ing}
...ioned_docs\version-2.19\integrations-api\mongodb_atlas.md:372
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
CRITICAL D4
Exposed Database URL with credentials: pos...NAME
..._versioned_docs\version-2.19\integrations-api\pgvector.md:353
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
CRITICAL D4
Exposed Database URL with credentials: pos...NAME
..._versioned_docs\version-2.20\integrations-api\pgvector.md:353
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
CRITICAL D4
Exposed Database URL with credentials: mon...ing}
...ioned_docs\version-2.20\integrations-api\mongodb_atlas.md:319
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
CRITICAL D4
Exposed Database URL with credentials: mon...ing}
...ioned_docs\version-2.20\integrations-api\mongodb_atlas.md:372
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
CRITICAL D4
Exposed Database URL with credentials: mon...ing}
...ioned_docs\version-2.21\integrations-api\mongodb_atlas.md:319
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
CRITICAL D4
Exposed Database URL with credentials: mon...ing}
...ioned_docs\version-2.21\integrations-api\mongodb_atlas.md:372
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
CRITICAL D4
Exposed Database URL with credentials: pos...NAME
..._versioned_docs\version-2.21\integrations-api\pgvector.md:353
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
CRITICAL D4
Exposed Database URL with credentials: mon...ing}
...ioned_docs\version-2.22\integrations-api\mongodb_atlas.md:319
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
CRITICAL D4
Exposed Database URL with credentials: mon...ing}
...ioned_docs\version-2.22\integrations-api\mongodb_atlas.md:372
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
CRITICAL D4
Exposed Database URL with credentials: pos...NAME
..._versioned_docs\version-2.22\integrations-api\pgvector.md:353
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
CRITICAL D4
Exposed Database URL with credentials: mon...ing}
...ioned_docs\version-2.23\integrations-api\mongodb_atlas.md:319
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
CRITICAL D4
Exposed Database URL with credentials: mon...ing}
...ioned_docs\version-2.23\integrations-api\mongodb_atlas.md:372
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
CRITICAL D4
Exposed Database URL with credentials: pos...NAME
..._versioned_docs\version-2.23\integrations-api\pgvector.md:353
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
CRITICAL D4
Exposed Database URL with credentials: mon...ing}
...ioned_docs\version-2.24\integrations-api\mongodb_atlas.md:319
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
CRITICAL D4
Exposed Database URL with credentials: mon...ing}
...ioned_docs\version-2.24\integrations-api\mongodb_atlas.md:372
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
CRITICAL D4
Exposed Database URL with credentials: pos...NAME
..._versioned_docs\version-2.24\integrations-api\pgvector.md:353
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
CRITICAL D4
Exposed Database URL with credentials: mon...ing}
...ioned_docs\version-2.25\integrations-api\mongodb_atlas.md:319
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
CRITICAL D4
Exposed Database URL with credentials: mon...ing}
...ioned_docs\version-2.25\integrations-api\mongodb_atlas.md:372
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
CRITICAL D4
Exposed Database URL with credentials: pos...NAME
..._versioned_docs\version-2.25\integrations-api\pgvector.md:353
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
CRITICAL D4
Exposed Database URL with credentials: mon...ing}
...ioned_docs\version-2.26\integrations-api\mongodb_atlas.md:319
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
CRITICAL D4
Exposed Database URL with credentials: mon...ing}
...ioned_docs\version-2.26\integrations-api\mongodb_atlas.md:372
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
CRITICAL D4
Exposed Database URL with credentials: pos...NAME
..._versioned_docs\version-2.26\integrations-api\pgvector.md:353
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
CRITICAL D4
Exposed Database URL with credentials: mon...ing}
...ioned_docs\version-2.27\integrations-api\mongodb_atlas.md:319
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
CRITICAL D4
Exposed Database URL with credentials: mon...ing}
...ioned_docs\version-2.27\integrations-api\mongodb_atlas.md:372
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
CRITICAL D4
Exposed Database URL with credentials: pos...NAME
..._versioned_docs\version-2.27\integrations-api\pgvector.md:353
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
CRITICAL D4
Exposed OpenAI API Key: sk-...d32e
...otes\secret-handling-for-components-d576a28135a224db.yaml:35
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
CRITICAL D5
No audit logging for tool calls detected
Add audit logging for all tool/agent executions
EU AI Act Article 12
CRITICAL D4
Possible typosquat: 'transformer' is 1 edit from 'transformers'
...lad\Projects\warden\gallery\repos\haystack\pyproject.toml:1
Verify this is the intended package, not a typosquat of 'transformers'
MITRE AML.T0010
CRITICAL D17
No content injection defense — hidden HTML/CSS/zero-width instructions pass to agents undetected. (86% attack success rate)
Deploy trap defense layer on tool results
EU AI Act Article 15OWASP LLM01MITRE AML.T0051
CRITICAL D17
No RAG poisoning protection — knowledge base documents not scanned for embedded instructions. (<0.1% contamination = >80% attack success)
Deploy trap defense layer on tool results
EU AI Act Article 15OWASP LLM01MITRE AML.T0049
HIGH 85
HIGH D9
Empty exception handler — errors silently swallowed
...rojects\warden\gallery\repos\haystack\haystack\logging.py:236
Log the exception or handle it explicitly
HIGH D15
Tool result assigned directly without verification
...\repos\haystack\haystack\components\connectors\openapi.py:102
Verify tool result status/validity before using
HIGH D15
Tool result assigned directly without verification
...aystack\haystack\components\connectors\openapi_service.py:262
Verify tool result status/validity before using
Show 82 more HIGH findings
HIGH D9
Empty exception handler — errors silently swallowed
...lery\repos\haystack\haystack\components\converters\csv.py:161
Log the exception or handle it explicitly
HIGH D9
Empty exception handler — errors silently swallowed
...pos\haystack\haystack\components\fetchers\link_content.py:222
Log the exception or handle it explicitly
HIGH D15
Tool result assigned directly without verification
...\repos\haystack\haystack\components\tools\tool_invoker.py:652
Verify tool result status/validity before using
HIGH D15
Tool result assigned directly without verification
...\repos\haystack\haystack\components\tools\tool_invoker.py:789
Verify tool result status/validity before using
HIGH D9
Empty exception handler — errors silently swallowed
...warden\gallery\repos\haystack\haystack\core\type_utils.py:113
Log the exception or handle it explicitly
HIGH D9
Empty exception handler — errors silently swallowed
...gallery\repos\haystack\haystack\core\component\sockets.py:131
Log the exception or handle it explicitly
HIGH D9
Empty exception handler — errors silently swallowed
...ects\warden\gallery\repos\haystack\haystack\tools\tool.py:219
Log the exception or handle it explicitly
HIGH D9
Empty exception handler — errors silently swallowed
...\warden\gallery\repos\haystack\haystack\tracing\tracer.py:223
Log the exception or handle it explicitly
HIGH D9
Empty exception handler — errors silently swallowed
...\warden\gallery\repos\haystack\haystack\tracing\tracer.py:238
Log the exception or handle it explicitly
HIGH D9
Empty exception handler — errors silently swallowed
...s\warden\gallery\repos\haystack\haystack\utils\jupyter.py:18
Log the exception or handle it explicitly
HIGH D4
Container runs as root — no USER directive in Dockerfile
...ects\warden\gallery\repos\haystack\docker\Dockerfile.base:1
Add USER directive to run as non-root user
OWASP LLM09
HIGH D8
Agent spawns sub-agents without depth limit
...llery\repos\haystack\test\components\agents\test_agent.py:190
Add max_depth or spawn limit to prevent recursive agent creation
EU AI Act Article 14
HIGH D8
Agent spawns sub-agents without depth limit
...llery\repos\haystack\test\components\agents\test_agent.py:199
Add max_depth or spawn limit to prevent recursive agent creation
EU AI Act Article 14
HIGH D8
Agent spawns sub-agents without depth limit
...llery\repos\haystack\test\components\agents\test_agent.py:284
Add max_depth or spawn limit to prevent recursive agent creation
EU AI Act Article 14
HIGH D8
Agent spawns sub-agents without depth limit
...llery\repos\haystack\test\components\agents\test_agent.py:348
Add max_depth or spawn limit to prevent recursive agent creation
EU AI Act Article 14
HIGH D8
Agent spawns sub-agents without depth limit
...llery\repos\haystack\test\components\agents\test_agent.py:580
Add max_depth or spawn limit to prevent recursive agent creation
EU AI Act Article 14
HIGH D8
Agent spawns sub-agents without depth limit
...llery\repos\haystack\test\components\agents\test_agent.py:619
Add max_depth or spawn limit to prevent recursive agent creation
EU AI Act Article 14
HIGH D8
Agent spawns sub-agents without depth limit
...llery\repos\haystack\test\components\agents\test_agent.py:637
Add max_depth or spawn limit to prevent recursive agent creation
EU AI Act Article 14
HIGH D8
Agent spawns sub-agents without depth limit
...llery\repos\haystack\test\components\agents\test_agent.py:640
Add max_depth or spawn limit to prevent recursive agent creation
EU AI Act Article 14
HIGH D8
Agent spawns sub-agents without depth limit
...llery\repos\haystack\test\components\agents\test_agent.py:644
Add max_depth or spawn limit to prevent recursive agent creation
EU AI Act Article 14
HIGH D8
Agent spawns sub-agents without depth limit
...llery\repos\haystack\test\components\agents\test_agent.py:657
Add max_depth or spawn limit to prevent recursive agent creation
EU AI Act Article 14
HIGH D8
Agent spawns sub-agents without depth limit
...llery\repos\haystack\test\components\agents\test_agent.py:682
Add max_depth or spawn limit to prevent recursive agent creation
EU AI Act Article 14
HIGH D8
Agent spawns sub-agents without depth limit
...llery\repos\haystack\test\components\agents\test_agent.py:709
Add max_depth or spawn limit to prevent recursive agent creation
EU AI Act Article 14
HIGH D8
Agent spawns sub-agents without depth limit
...llery\repos\haystack\test\components\agents\test_agent.py:729
Add max_depth or spawn limit to prevent recursive agent creation
EU AI Act Article 14
HIGH D8
Agent spawns sub-agents without depth limit
...llery\repos\haystack\test\components\agents\test_agent.py:733
Add max_depth or spawn limit to prevent recursive agent creation
EU AI Act Article 14
HIGH D8
Agent spawns sub-agents without depth limit
...llery\repos\haystack\test\components\agents\test_agent.py:755
Add max_depth or spawn limit to prevent recursive agent creation
EU AI Act Article 14
HIGH D8
Agent spawns sub-agents without depth limit
...llery\repos\haystack\test\components\agents\test_agent.py:775
Add max_depth or spawn limit to prevent recursive agent creation
EU AI Act Article 14
HIGH D8
Agent spawns sub-agents without depth limit
...llery\repos\haystack\test\components\agents\test_agent.py:801
Add max_depth or spawn limit to prevent recursive agent creation
EU AI Act Article 14
HIGH D8
Agent spawns sub-agents without depth limit
...llery\repos\haystack\test\components\agents\test_agent.py:822
Add max_depth or spawn limit to prevent recursive agent creation
EU AI Act Article 14
HIGH D8
Agent spawns sub-agents without depth limit
...llery\repos\haystack\test\components\agents\test_agent.py:828
Add max_depth or spawn limit to prevent recursive agent creation
EU AI Act Article 14
HIGH D8
Agent spawns sub-agents without depth limit
...llery\repos\haystack\test\components\agents\test_agent.py:854
Add max_depth or spawn limit to prevent recursive agent creation
EU AI Act Article 14
HIGH D8
Agent spawns sub-agents without depth limit
...llery\repos\haystack\test\components\agents\test_agent.py:879
Add max_depth or spawn limit to prevent recursive agent creation
EU AI Act Article 14
HIGH D8
Agent spawns sub-agents without depth limit
...llery\repos\haystack\test\components\agents\test_agent.py:894
Add max_depth or spawn limit to prevent recursive agent creation
EU AI Act Article 14
HIGH D8
Agent spawns sub-agents without depth limit
...llery\repos\haystack\test\components\agents\test_agent.py:902
Add max_depth or spawn limit to prevent recursive agent creation
EU AI Act Article 14
HIGH D8
Agent spawns sub-agents without depth limit
...llery\repos\haystack\test\components\agents\test_agent.py:908
Add max_depth or spawn limit to prevent recursive agent creation
EU AI Act Article 14
HIGH D8
Agent spawns sub-agents without depth limit
...llery\repos\haystack\test\components\agents\test_agent.py:916
Add max_depth or spawn limit to prevent recursive agent creation
EU AI Act Article 14
HIGH D8
Agent spawns sub-agents without depth limit
...llery\repos\haystack\test\components\agents\test_agent.py:947
Add max_depth or spawn limit to prevent recursive agent creation
EU AI Act Article 14
HIGH D8
Agent spawns sub-agents without depth limit
...llery\repos\haystack\test\components\agents\test_agent.py:972
Add max_depth or spawn limit to prevent recursive agent creation
EU AI Act Article 14
HIGH D8
Agent spawns sub-agents without depth limit
...llery\repos\haystack\test\components\agents\test_agent.py:988
Add max_depth or spawn limit to prevent recursive agent creation
EU AI Act Article 14
HIGH D8
Agent spawns sub-agents without depth limit
...llery\repos\haystack\test\components\agents\test_agent.py:1015
Add max_depth or spawn limit to prevent recursive agent creation
EU AI Act Article 14
HIGH D8
Agent spawns sub-agents without depth limit
...llery\repos\haystack\test\components\agents\test_agent.py:1034
Add max_depth or spawn limit to prevent recursive agent creation
EU AI Act Article 14
HIGH D8
Agent spawns sub-agents without depth limit
...llery\repos\haystack\test\components\agents\test_agent.py:1045
Add max_depth or spawn limit to prevent recursive agent creation
EU AI Act Article 14
HIGH D8
Agent spawns sub-agents without depth limit
...llery\repos\haystack\test\components\agents\test_agent.py:1053
Add max_depth or spawn limit to prevent recursive agent creation
EU AI Act Article 14
HIGH D8
Agent class 'TestAgent' has no permission model
...\repos\haystack\test\components\agents\test_agent_hitl.py:55
Add role/permission checks before tool dispatch
HIGH D8
Agent spawns sub-agents without depth limit
...\repos\haystack\test\components\agents\test_agent_hitl.py:58
Add max_depth or spawn limit to prevent recursive agent creation
EU AI Act Article 14
HIGH D8
Agent spawns sub-agents without depth limit
...\repos\haystack\test\components\agents\test_agent_hitl.py:132
Add max_depth or spawn limit to prevent recursive agent creation
EU AI Act Article 14
HIGH D8
Agent spawns sub-agents without depth limit
...\repos\haystack\test\components\agents\test_agent_hitl.py:150
Add max_depth or spawn limit to prevent recursive agent creation
EU AI Act Article 14
HIGH D8
Agent spawns sub-agents without depth limit
...\repos\haystack\test\components\agents\test_agent_hitl.py:172
Add max_depth or spawn limit to prevent recursive agent creation
EU AI Act Article 14
HIGH D8
Agent class 'FakeAgent' has no permission model
...ry\repos\haystack\test\core\pipeline\features\test_run.py:5057
Add role/permission checks before tool dispatch
HIGH D4
Secret used without OIDC — long-lived credential in workflow
...n\gallery\repos\haystack\.github\workflows\branch_off.yml:39
Use OIDC (id-token: write) for cloud auth instead of static secrets
OWASP LLM09
HIGH D4
Secret used without OIDC — long-lived credential in workflow
...n\gallery\repos\haystack\.github\workflows\ci_metrics.yml:21
Use OIDC (id-token: write) for cloud auth instead of static secrets
OWASP LLM09
HIGH D4
Secret used without OIDC — long-lived credential in workflow
...llery\repos\haystack\.github\workflows\docker_release.yml:39
Use OIDC (id-token: write) for cloud auth instead of static secrets
OWASP LLM09
HIGH D4
Secret used without OIDC — long-lived credential in workflow
...ack\.github\workflows\docs-website-test-docs-snippets.yml:23
Use OIDC (id-token: write) for cloud auth instead of static secrets
OWASP LLM09
HIGH D4
Secret used without OIDC — long-lived credential in workflow
...ry\repos\haystack\.github\workflows\docstring_labeler.yml:58
Use OIDC (id-token: write) for cloud auth instead of static secrets
OWASP LLM09
HIGH D4
Secret used without OIDC — long-lived credential in workflow
...ery\repos\haystack\.github\workflows\docs_search_sync.yml:45
Use OIDC (id-token: write) for cloud auth instead of static secrets
OWASP LLM09
HIGH D4
Secret used without OIDC — long-lived credential in workflow
...lery\repos\haystack\.github\workflows\docusaurus_sync.yml:51
Use OIDC (id-token: write) for cloud auth instead of static secrets
OWASP LLM09
HIGH D4
Secret used without OIDC — long-lived credential in workflow
...s\warden\gallery\repos\haystack\.github\workflows\e2e.yml:19
Use OIDC (id-token: write) for cloud auth instead of static secrets
OWASP LLM09
HIGH D4
Secret used without OIDC — long-lived credential in workflow
...rden\gallery\repos\haystack\.github\workflows\labeler.yml:15
Use OIDC (id-token: write) for cloud auth instead of static secrets
OWASP LLM09
HIGH D4
Secret used without OIDC — long-lived credential in workflow
...y\repos\haystack\.github\workflows\license_compliance.yml:39
Use OIDC (id-token: write) for cloud auth instead of static secrets
OWASP LLM09
HIGH D4
Secret used without OIDC — long-lived credential in workflow
...rden\gallery\repos\haystack\.github\workflows\project.yml:16
Use OIDC (id-token: write) for cloud auth instead of static secrets
OWASP LLM09
HIGH D4
Secret used without OIDC — long-lived credential in workflow
...epos\haystack\.github\workflows\promote_unstable_docs.yml:46
Use OIDC (id-token: write) for cloud auth instead of static secrets
OWASP LLM09
HIGH D4
Secret used without OIDC — long-lived credential in workflow
...stack\.github\workflows\push_release_notes_to_website.yml:12
Use OIDC (id-token: write) for cloud auth instead of static secrets
OWASP LLM09
HIGH D4
Secret used without OIDC — long-lived credential in workflow
...rden\gallery\repos\haystack\.github\workflows\release.yml:55
Use OIDC (id-token: write) for cloud auth instead of static secrets
OWASP LLM09
HIGH D4
Secret used without OIDC — long-lived credential in workflow
...\warden\gallery\repos\haystack\.github\workflows\slow.yml:14
Use OIDC (id-token: write) for cloud auth instead of static secrets
OWASP LLM09
HIGH D4
Secret used without OIDC — long-lived credential in workflow
...warden\gallery\repos\haystack\.github\workflows\tests.yml:29
Use OIDC (id-token: write) for cloud auth instead of static secrets
OWASP LLM09
HIGH D10
Azure AI used without ContentSafetyClient — no content moderation
...ry\repos\haystack\haystack\components\converters\azure.py:22
Add Azure ContentSafetyClient to analyse prompts/responses for harmful content
EU AI Act Article 15OWASP LLM02
HIGH D10
Azure AI used without ContentSafetyClient — no content moderation
...\haystack\components\embedders\azure_document_embedder.py:8
Add Azure ContentSafetyClient to analyse prompts/responses for harmful content
EU AI Act Article 15OWASP LLM02
HIGH D10
Azure AI used without ContentSafetyClient — no content moderation
...tack\haystack\components\embedders\azure_text_embedder.py:8
Add Azure ContentSafetyClient to analyse prompts/responses for harmful content
EU AI Act Article 15OWASP LLM02
HIGH D10
Azure AI used without ContentSafetyClient — no content moderation
...\repos\haystack\haystack\components\embedders\__init__.py:11
Add Azure ContentSafetyClient to analyse prompts/responses for harmful content
EU AI Act Article 15OWASP LLM02
HIGH D10
Azure AI used without ContentSafetyClient — no content moderation
...ry\repos\haystack\haystack\components\generators\azure.py:8
Add Azure ContentSafetyClient to analyse prompts/responses for harmful content
EU AI Act Article 15OWASP LLM02
HIGH D10
Azure AI used without ContentSafetyClient — no content moderation
...repos\haystack\haystack\components\generators\__init__.py:12
Add Azure ContentSafetyClient to analyse prompts/responses for harmful content
EU AI Act Article 15OWASP LLM02
HIGH D10
Azure AI used without ContentSafetyClient — no content moderation
...pos\haystack\haystack\components\generators\chat\azure.py:9
Add Azure ContentSafetyClient to analyse prompts/responses for harmful content
EU AI Act Article 15OWASP LLM02
HIGH D10
Azure AI used without ContentSafetyClient — no content moderation
...ck\haystack\components\generators\chat\azure_responses.py:20
Add Azure ContentSafetyClient to analyse prompts/responses for harmful content
EU AI Act Article 15OWASP LLM02
HIGH D10
Azure AI used without ContentSafetyClient — no content moderation
...\haystack\haystack\components\generators\chat\__init__.py:13
Add Azure ContentSafetyClient to analyse prompts/responses for harmful content
EU AI Act Article 15OWASP LLM02
HIGH D10
Azure AI used without ContentSafetyClient — no content moderation
...est\components\converters\test_azure_ocr_doc_converter.py:14
Add Azure ContentSafetyClient to analyse prompts/responses for harmful content
EU AI Act Article 15OWASP LLM02
HIGH D10
Azure AI used without ContentSafetyClient — no content moderation
...test\components\embedders\test_azure_document_embedder.py:12
Add Azure ContentSafetyClient to analyse prompts/responses for harmful content
EU AI Act Article 15OWASP LLM02
HIGH D10
Azure AI used without ContentSafetyClient — no content moderation
...ack\test\components\embedders\test_azure_text_embedder.py:9
Add Azure ContentSafetyClient to analyse prompts/responses for harmful content
EU AI Act Article 15OWASP LLM02
HIGH D10
Azure AI used without ContentSafetyClient — no content moderation
...y\repos\haystack\test\components\generators\test_azure.py:11
Add Azure ContentSafetyClient to analyse prompts/responses for harmful content
EU AI Act Article 15OWASP LLM02
HIGH D10
Azure AI used without ContentSafetyClient — no content moderation
...os\haystack\test\components\generators\chat\test_azure.py:15
Add Azure ContentSafetyClient to analyse prompts/responses for harmful content
EU AI Act Article 15OWASP LLM02
HIGH D10
Azure AI used without ContentSafetyClient — no content moderation
...k\test\components\generators\chat\test_azure_responses.py:14
Add Azure ContentSafetyClient to analyse prompts/responses for harmful content
EU AI Act Article 15OWASP LLM02
HIGH D17
No behavioral trap detection — post-execution behavioral changes not monitored. (10/10 M365 Copilot attacks succeeded)
Deploy trap defense layer on tool results
EU AI Act Article 14OWASP LLM07MITRE AML.T0051
HIGH D17
No approval integrity verification -- agent summaries for approval not cross-checked against actual actions. (Approval fatigue exploitation)
Deploy trap defense layer on tool results
EU AI Act Article 14OWASP LLM07MITRE AML.T0048
MEDIUM 163
MEDIUM D5
print() used instead of structured logging
...den\gallery\repos\haystack\.github\utils\check_imports.py:63
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...den\gallery\repos\haystack\.github\utils\check_imports.py:67
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...den\gallery\repos\haystack\.github\utils\check_imports.py:70
Use logging.* or structlog.* for structured, searchable logs
Show 160 more MEDIUM findings
MEDIUM D5
print() used instead of structured logging
...den\gallery\repos\haystack\.github\utils\check_imports.py:73
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...den\gallery\repos\haystack\.github\utils\check_imports.py:75
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...den\gallery\repos\haystack\.github\utils\check_imports.py:77
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...den\gallery\repos\haystack\.github\utils\check_imports.py:79
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...den\gallery\repos\haystack\.github\utils\check_imports.py:80
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...haystack\.github\utils\create_unstable_docs_docusaurus.py:50
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...haystack\.github\utils\create_unstable_docs_docusaurus.py:55
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...llery\repos\haystack\.github\utils\docstrings_checksum.py:46
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...\gallery\repos\haystack\.github\utils\docs_search_sync.py:73
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...\gallery\repos\haystack\.github\utils\docs_search_sync.py:75
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...\gallery\repos\haystack\.github\utils\docs_search_sync.py:77
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...\gallery\repos\haystack\.github\utils\docs_search_sync.py:80
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...\gallery\repos\haystack\.github\utils\docs_search_sync.py:83
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...\gallery\repos\haystack\.github\utils\docs_search_sync.py:94
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...\gallery\repos\haystack\.github\utils\docs_search_sync.py:96
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...\gallery\repos\haystack\.github\utils\docs_search_sync.py:99
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...\gallery\repos\haystack\.github\utils\docs_search_sync.py:109
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...\gallery\repos\haystack\.github\utils\docs_search_sync.py:111
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...\gallery\repos\haystack\.github\utils\docs_search_sync.py:118
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...\gallery\repos\haystack\.github\utils\docs_search_sync.py:120
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...aystack\.github\utils\promote_unstable_docs_docusaurus.py:42
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...aystack\.github\utils\promote_unstable_docs_docusaurus.py:117
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...haystack\.github\utils\update_haystack_dc_custom_nodes.py:54
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...rojects\warden\gallery\repos\haystack\haystack\logging.py:231
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D12
Hardcoded model name: ' Generates text using OpenAI's large language models (LLMs). It works with the gpt-4 - type models and supports streaming responses from OpenAI API. You can customize how the text is generated by passing parameters to the OpenAI API. Use the `**generation_kwargs` argument when you initialize the component or when you run it. Any parameter that works with `openai.ChatCompletion.create` will work here too. For details on OpenAI API parameters, see [OpenAI documentation](https://platform.openai.com/docs/api-reference/chat). ### Usage example <!-- test-ignore --> ```python from haystack.components.generators import AzureOpenAIGenerator from haystack.utils import Secret client = AzureOpenAIGenerator( azure_endpoint="<Your Azure endpoint e.g. `https://your-company.azure.openai.com/>", api_key=Secret.from_token("<your-api-key>"), azure_deployment="<this a model name, e.g. gpt-4.1-mini>") response = client.run("What's Natural Language Processing? Be brief.") print(response) ``` ``` # >> {'replies': ['Natural Language Processing (NLP) is a branch of artificial intelligence that focuses on # >> the interaction between computers and human language. It involves enabling computers to understand, interpret, # >> and respond to natural human language in a way that is both meaningful and useful.'], 'meta': [{'model': # >> 'gpt-4.1-mini', 'index': 0, 'finish_reason': 'stop', 'usage': {'prompt_tokens': 16, # >> 'completion_tokens': 49, 'total_tokens': 65}}]} ``` ' — no routing/fallback
...ry\repos\haystack\haystack\components\generators\azure.py:19
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: 'gpt-4.1-mini' — no routing/fallback
...ry\repos\haystack\haystack\components\generators\azure.py:61
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: 'gpt-4.1-mini' — no routing/fallback
...ry\repos\haystack\haystack\components\generators\azure.py:146
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: ' Generates text using OpenAI's large language models (LLMs). It works with the gpt-4 and gpt-5 series models and supports streaming responses from OpenAI API. It uses strings as input and output. You can customize how the text is generated by passing parameters to the OpenAI API. Use the `**generation_kwargs` argument when you initialize the component or when you run it. Any parameter that works with `openai.ChatCompletion.create` will work here too. For details on OpenAI API parameters, see [OpenAI documentation](https://platform.openai.com/docs/api-reference/chat). ### Usage example ```python from haystack.components.generators import OpenAIGenerator client = OpenAIGenerator() response = client.run("What's Natural Language Processing? Be brief.") print(response) # >> {'replies': ['Natural Language Processing (NLP) is a branch of artificial intelligence that focuses on # >> the interaction between computers and human language. It involves enabling computers to understand, interpret, # >> and respond to natural human language in a way that is both meaningful and useful.'], 'meta': [{'model': # >> 'gpt-5-mini', 'index': 0, 'finish_reason': 'stop', 'usage': {'prompt_tokens': 16, # >> 'completion_tokens': 49, 'total_tokens': 65}}]} ``` ' — no routing/fallback
...y\repos\haystack\haystack\components\generators\openai.py:33
Use model routing or configuration instead of hardcoded names
MEDIUM D5
print() used instead of structured logging
...ry\repos\haystack\haystack\components\generators\utils.py:31
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...ry\repos\haystack\haystack\components\generators\utils.py:45
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...ry\repos\haystack\haystack\components\generators\utils.py:47
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...ry\repos\haystack\haystack\components\generators\utils.py:51
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...ry\repos\haystack\haystack\components\generators\utils.py:57
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...ry\repos\haystack\haystack\components\generators\utils.py:63
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...ry\repos\haystack\haystack\components\generators\utils.py:64
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...ry\repos\haystack\haystack\components\generators\utils.py:70
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...ry\repos\haystack\haystack\components\generators\utils.py:71
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...ry\repos\haystack\haystack\components\generators\utils.py:76
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D12
Hardcoded model name: ' Generates text using OpenAI's models on Azure. It works with the gpt-4 - type models and supports streaming responses from OpenAI API. It uses [ChatMessage](https://docs.haystack.deepset.ai/docs/chatmessage) format in input and output. You can customize how the text is generated by passing parameters to the OpenAI API. Use the `**generation_kwargs` argument when you initialize the component or when you run it. Any parameter that works with `openai.ChatCompletion.create` will work here too. For details on OpenAI API parameters, see [OpenAI documentation](https://platform.openai.com/docs/api-reference/chat). ### Usage example <!-- test-ignore --> ```python from haystack.components.generators.chat import AzureOpenAIChatGenerator from haystack.dataclasses import ChatMessage from haystack.utils import Secret messages = [ChatMessage.from_user("What's Natural Language Processing?")] client = AzureOpenAIChatGenerator( azure_endpoint="<Your Azure endpoint e.g. `https://your-company.azure.openai.com/>", api_key=Secret.from_token("<your-api-key>"), azure_deployment="<this a model name, e.g. gpt-4.1-mini>") response = client.run(messages) print(response) ``` ``` {'replies': [ChatMessage(_role=<ChatRole.ASSISTANT: 'assistant'>, _content=[TextContent(text= "Natural Language Processing (NLP) is a branch of artificial intelligence that focuses on enabling computers to understand, interpret, and generate human language in a way that is useful.")], _name=None, _meta={'model': 'gpt-4.1-mini', 'index': 0, 'finish_reason': 'stop', 'usage': {'prompt_tokens': 15, 'completion_tokens': 36, 'total_tokens': 51}})] } ``` ' — no routing/fallback
...pos\haystack\haystack\components\generators\chat\azure.py:29
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: 'gpt-4.1' — no routing/fallback
...pos\haystack\haystack\components\generators\chat\azure.py:88
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: 'gpt-4.1-mini' — no routing/fallback
...pos\haystack\haystack\components\generators\chat\azure.py:89
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: 'gpt-4.1-nano' — no routing/fallback
...pos\haystack\haystack\components\generators\chat\azure.py:90
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: 'gpt-4o' — no routing/fallback
...pos\haystack\haystack\components\generators\chat\azure.py:91
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: 'gpt-4o-mini' — no routing/fallback
...pos\haystack\haystack\components\generators\chat\azure.py:92
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: 'gpt-4o-audio-preview' — no routing/fallback
...pos\haystack\haystack\components\generators\chat\azure.py:93
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: 'gpt-4' — no routing/fallback
...pos\haystack\haystack\components\generators\chat\azure.py:102
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: 'gpt-4.1-mini' — no routing/fallback
...pos\haystack\haystack\components\generators\chat\azure.py:116
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: ' Initialize the Azure OpenAI Chat Generator component. :param azure_endpoint: The endpoint of the deployed model, for example `"https://example-resource.azure.openai.com/"`. :param api_version: The version of the API to use. Defaults to 2024-12-01-preview. :param azure_deployment: The deployment of the model, usually the model name. :param api_key: The API key to use for authentication. :param azure_ad_token: [Azure Active Directory token](https://www.microsoft.com/en-us/security/business/identity-access/microsoft-entra-id). :param organization: Your organization ID, defaults to `None`. For help, see [Setting up your organization](https://platform.openai.com/docs/guides/production-best-practices/setting-up-your-organization). :param streaming_callback: A callback function called when a new token is received from the stream. It accepts [StreamingChunk](https://docs.haystack.deepset.ai/docs/data-classes#streamingchunk) as an argument. :param timeout: Timeout for OpenAI client calls. If not set, it defaults to either the `OPENAI_TIMEOUT` environment variable, or 30 seconds. :param max_retries: Maximum number of retries to contact OpenAI after an internal error. If not set, it defaults to either the `OPENAI_MAX_RETRIES` environment variable, or set to 5. :param generation_kwargs: Other parameters to use for the model. These parameters are sent directly to the OpenAI endpoint. For details, see [OpenAI documentation](https://platform.openai.com/docs/api-reference/chat). Some of the supported parameters: - `max_completion_tokens`: An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens. - `temperature`: The sampling temperature to use. Higher values mean the model takes more risks. Try 0.9 for more creative applications and 0 (argmax sampling) for ones with a well-defined answer. - `top_p`: Nucleus sampling is an alternative to sampling with temperature, where the model considers tokens with a top_p probability mass. For example, 0.1 means only the tokens comprising the top 10% probability mass are considered. - `n`: The number of completions to generate for each prompt. For example, with 3 prompts and n=2, the LLM will generate two completions per prompt, resulting in 6 completions total. - `stop`: One or more sequences after which the LLM should stop generating tokens. - `presence_penalty`: The penalty applied if a token is already present. Higher values make the model less likely to repeat the token. - `frequency_penalty`: Penalty applied if a token has already been generated. Higher values make the model less likely to repeat the token. - `logit_bias`: Adds a logit bias to specific tokens. The keys of the dictionary are tokens, and the values are the bias to add to that token. - `response_format`: A JSON schema or a Pydantic model that enforces the structure of the model's response. If provided, the output will always be validated against this format (unless the model returns a tool call). For details, see the [OpenAI Structured Outputs documentation](https://platform.openai.com/docs/guides/structured-outputs). Notes: - This parameter accepts Pydantic models and JSON schemas for latest models starting from GPT-4o. Older models only support basic version of structured outputs through `{"type": "json_object"}`. For detailed information on JSON mode, see the [OpenAI Structured Outputs documentation](https://platform.openai.com/docs/guides/structured-outputs#json-mode). - For structured outputs with streaming, the `response_format` must be a JSON schema and not a Pydantic model. :param default_headers: Default headers to use for the AzureOpenAI client. :param tools: A list of Tool and/or Toolset objects, or a single Toolset for which the model can prepare calls. :param tools_strict: Whether to enable strict schema adherence for tool calls. If set to `True`, the model will follow exactly the schema provided in the `parameters` field of the tool definition, but this may increase latency. :param azure_ad_token_provider: A function that returns an Azure Active Directory token, will be invoked on every request. :param http_client_kwargs: A dictionary of keyword arguments to configure a custom `httpx.Client`or `httpx.AsyncClient`. For more information, see the [HTTPX documentation](https://www.python-httpx.org/api/#client). ' — no routing/fallback
...pos\haystack\haystack\components\generators\chat\azure.py:131
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: 'gpt-4.1-mini' — no routing/fallback
...pos\haystack\haystack\components\generators\chat\azure.py:213
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: 'gpt-4o' — no routing/fallback
...ck\haystack\components\generators\chat\azure_responses.py:72
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: 'gpt-4o-mini' — no routing/fallback
...ck\haystack\components\generators\chat\azure_responses.py:73
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: 'gpt-4.1' — no routing/fallback
...ck\haystack\components\generators\chat\azure_responses.py:75
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: 'gpt-4.1-nano' — no routing/fallback
...ck\haystack\components\generators\chat\azure_responses.py:76
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: 'gpt-4.1-mini' — no routing/fallback
...ck\haystack\components\generators\chat\azure_responses.py:77
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: ' Initialize the AzureOpenAIResponsesChatGenerator component. :param api_key: The API key to use for authentication. Can be: - A `Secret` object containing the API key. - A `Secret` object containing the [Azure Active Directory token](https://www.microsoft.com/en-us/security/business/identity-access/microsoft-entra-id). - A function that returns an Azure Active Directory token. :param azure_endpoint: The endpoint of the deployed model, for example `"https://example-resource.azure.openai.com/"`. :param azure_deployment: The deployment of the model, usually the model name. :param organization: Your organization ID, defaults to `None`. For help, see [Setting up your organization](https://platform.openai.com/docs/guides/production-best-practices/setting-up-your-organization). :param streaming_callback: A callback function called when a new token is received from the stream. It accepts [StreamingChunk](https://docs.haystack.deepset.ai/docs/data-classes#streamingchunk) as an argument. :param timeout: Timeout for OpenAI client calls. If not set, it defaults to either the `OPENAI_TIMEOUT` environment variable, or 30 seconds. :param max_retries: Maximum number of retries to contact OpenAI after an internal error. If not set, it defaults to either the `OPENAI_MAX_RETRIES` environment variable, or set to 5. :param generation_kwargs: Other parameters to use for the model. These parameters are sent directly to the OpenAI endpoint. See OpenAI [documentation](https://platform.openai.com/docs/api-reference/responses) for more details. Some of the supported parameters: - `temperature`: What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. - `top_p`: An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. For example, 0.1 means only the tokens comprising the top 10% probability mass are considered. - `previous_response_id`: The ID of the previous response. Use this to create multi-turn conversations. - `text_format`: A Pydantic model that enforces the structure of the model's response. If provided, the output will always be validated against this format (unless the model returns a tool call). For details, see the [OpenAI Structured Outputs documentation](https://platform.openai.com/docs/guides/structured-outputs). - `text`: A JSON schema that enforces the structure of the model's response. If provided, the output will always be validated against this format (unless the model returns a tool call). Notes: - Both JSON Schema and Pydantic models are supported for latest models starting from GPT-4o. - If both are provided, `text_format` takes precedence and json schema passed to `text` is ignored. - Currently, this component doesn't support streaming for structured outputs. - Older models only support basic version of structured outputs through `{"type": "json_object"}`. For detailed information on JSON mode, see the [OpenAI Structured Outputs documentation](https://platform.openai.com/docs/guides/structured-outputs#json-mode). - `reasoning`: A dictionary of parameters for reasoning. For example: - `summary`: The summary of the reasoning. - `effort`: The level of effort to put into the reasoning. Can be `low`, `medium` or `high`. - `generate_summary`: Whether to generate a summary of the reasoning. Note: OpenAI does not return the reasoning tokens, but we can view summary if its enabled. For details, see the [OpenAI Reasoning documentation](https://platform.openai.com/docs/guides/reasoning). :param tools: A list of Tool and/or Toolset objects, or a single Toolset for which the model can prepare calls. :param tools_strict: Whether to enable strict schema adherence for tool calls. If set to `True`, the model will follow exactly the schema provided in the `parameters` field of the tool definition, but this may increase latency. :param http_client_kwargs: A dictionary of keyword arguments to configure a custom `httpx.Client`or `httpx.AsyncClient`. For more information, see the [HTTPX documentation](https://www.python-httpx.org/api/#client). ' — no routing/fallback
...ck\haystack\components\generators\chat\azure_responses.py:107
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: ' Completes chats using OpenAI's large language models (LLMs). It works with the gpt-4 and gpt-5 series models and supports streaming responses from OpenAI API. It uses [ChatMessage](https://docs.haystack.deepset.ai/docs/chatmessage) format in input and output. You can customize how the text is generated by passing parameters to the OpenAI API. Use the `**generation_kwargs` argument when you initialize the component or when you run it. Any parameter that works with `openai.ChatCompletion.create` will work here too. For details on OpenAI API parameters, see [OpenAI documentation](https://platform.openai.com/docs/api-reference/chat). ### Usage example ```python from haystack.components.generators.chat import OpenAIChatGenerator from haystack.dataclasses import ChatMessage messages = [ChatMessage.from_user("What's Natural Language Processing?")] client = OpenAIChatGenerator() response = client.run(messages) print(response) ``` Output: ``` {'replies': [ChatMessage(_role=<ChatRole.ASSISTANT: 'assistant'>, _content= [TextContent(text="Natural Language Processing (NLP) is a branch of artificial intelligence that focuses on enabling computers to understand, interpret, and generate human language in a way that is meaningful and useful.")], _name=None, _meta={'model': 'gpt-5-mini', 'index': 0, 'finish_reason': 'stop', 'usage': {'prompt_tokens': 15, 'completion_tokens': 36, 'total_tokens': 51}}) ] } ``` ' — no routing/fallback
...os\haystack\haystack\components\generators\chat\openai.py:55
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: 'gpt-4.1' — no routing/fallback
...os\haystack\haystack\components\generators\chat\openai.py:105
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: 'gpt-4.1-mini' — no routing/fallback
...os\haystack\haystack\components\generators\chat\openai.py:106
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: 'gpt-4.1-nano' — no routing/fallback
...os\haystack\haystack\components\generators\chat\openai.py:107
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: 'gpt-4o' — no routing/fallback
...os\haystack\haystack\components\generators\chat\openai.py:108
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: 'gpt-4o-mini' — no routing/fallback
...os\haystack\haystack\components\generators\chat\openai.py:109
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: 'gpt-4-turbo' — no routing/fallback
...os\haystack\haystack\components\generators\chat\openai.py:110
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: 'gpt-4' — no routing/fallback
...os\haystack\haystack\components\generators\chat\openai.py:111
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: 'gpt-3.5-turbo' — no routing/fallback
...os\haystack\haystack\components\generators\chat\openai.py:112
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: ' Creates an instance of OpenAIChatGenerator. Unless specified otherwise in `model`, uses OpenAI's gpt-5-mini Before initializing the component, you can set the 'OPENAI_TIMEOUT' and 'OPENAI_MAX_RETRIES' environment variables to override the `timeout` and `max_retries` parameters respectively in the OpenAI client. :param api_key: The OpenAI API key. You can set it with an environment variable `OPENAI_API_KEY`, or pass with this parameter during initialization. :param model: The name of the model to use. :param streaming_callback: A callback function that is called when a new token is received from the stream. The callback function accepts [StreamingChunk](https://docs.haystack.deepset.ai/docs/data-classes#streamingchunk) as an argument. :param api_base_url: An optional base URL. :param organization: Your organization ID, defaults to `None`. See [production best practices](https://platform.openai.com/docs/guides/production-best-practices/setting-up-your-organization). :param generation_kwargs: Other parameters to use for the model. These parameters are sent directly to the OpenAI endpoint. See OpenAI [documentation](https://platform.openai.com/docs/api-reference/chat) for more details. Some of the supported parameters: - `max_completion_tokens`: An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens. - `temperature`: What sampling temperature to use. Higher values mean the model will take more risks. Try 0.9 for more creative applications and 0 (argmax sampling) for ones with a well-defined answer. - `top_p`: An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. For example, 0.1 means only the tokens comprising the top 10% probability mass are considered. - `n`: How many completions to generate for each prompt. For example, if the LLM gets 3 prompts and n is 2, it will generate two completions for each of the three prompts, ending up with 6 completions in total. - `stop`: One or more sequences after which the LLM should stop generating tokens. - `presence_penalty`: What penalty to apply if a token is already present at all. Bigger values mean the model will be less likely to repeat the same token in the text. - `frequency_penalty`: What penalty to apply if a token has already been generated in the text. Bigger values mean the model will be less likely to repeat the same token in the text. - `logit_bias`: Add a logit bias to specific tokens. The keys of the dictionary are tokens, and the values are the bias to add to that token. - `response_format`: A JSON schema or a Pydantic model that enforces the structure of the model's response. If provided, the output will always be validated against this format (unless the model returns a tool call). For details, see the [OpenAI Structured Outputs documentation](https://platform.openai.com/docs/guides/structured-outputs). Notes: - This parameter accepts Pydantic models and JSON schemas for latest models starting from GPT-4o. Older models only support basic version of structured outputs through `{"type": "json_object"}`. For detailed information on JSON mode, see the [OpenAI Structured Outputs documentation](https://platform.openai.com/docs/guides/structured-outputs#json-mode). - For structured outputs with streaming, the `response_format` must be a JSON schema and not a Pydantic model. :param timeout: Timeout for OpenAI client calls. If not set, it defaults to either the `OPENAI_TIMEOUT` environment variable, or 30 seconds. :param max_retries: Maximum number of retries to contact OpenAI after an internal error. If not set, it defaults to either the `OPENAI_MAX_RETRIES` environment variable, or set to 5. :param tools: A list of Tool and/or Toolset objects, or a single Toolset for which the model can prepare calls. :param tools_strict: Whether to enable strict schema adherence for tool calls. If set to `True`, the model will follow exactly the schema provided in the `parameters` field of the tool definition, but this may increase latency. :param http_client_kwargs: A dictionary of keyword arguments to configure a custom `httpx.Client`or `httpx.AsyncClient`. For more information, see the [HTTPX documentation](https://www.python-httpx.org/api/#client). ' — no routing/fallback
...os\haystack\haystack\components\generators\chat\openai.py:131
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: ' Completes chats using OpenAI's Responses API. It works with the gpt-4 and o-series models and supports streaming responses from OpenAI API. It uses [ChatMessage](https://docs.haystack.deepset.ai/docs/chatmessage) format in input and output. You can customize how the text is generated by passing parameters to the OpenAI API. Use the `**generation_kwargs` argument when you initialize the component or when you run it. Any parameter that works with `openai.Responses.create` will work here too. For details on OpenAI API parameters, see [OpenAI documentation](https://platform.openai.com/docs/api-reference/responses). ### Usage example ```python from haystack.components.generators.chat import OpenAIResponsesChatGenerator from haystack.dataclasses import ChatMessage messages = [ChatMessage.from_user("What's Natural Language Processing?")] client = OpenAIResponsesChatGenerator(generation_kwargs={"reasoning": {"effort": "low", "summary": "auto"}}) response = client.run(messages) print(response) ``` ' — no routing/fallback
...k\haystack\components\generators\chat\openai_responses.py:48
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: 'gpt-4.1' — no routing/fallback
...k\haystack\components\generators\chat\openai_responses.py:86
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: 'gpt-4.1-mini' — no routing/fallback
...k\haystack\components\generators\chat\openai_responses.py:87
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: 'gpt-4.1-nano' — no routing/fallback
...k\haystack\components\generators\chat\openai_responses.py:88
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: 'gpt-4o' — no routing/fallback
...k\haystack\components\generators\chat\openai_responses.py:89
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: 'gpt-4o-mini' — no routing/fallback
...k\haystack\components\generators\chat\openai_responses.py:90
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: ' Creates an instance of OpenAIResponsesChatGenerator. Uses OpenAI's gpt-5-mini by default. Before initializing the component, you can set the 'OPENAI_TIMEOUT' and 'OPENAI_MAX_RETRIES' environment variables to override the `timeout` and `max_retries` parameters respectively in the OpenAI client. :param api_key: The OpenAI API key. You can set it with an environment variable `OPENAI_API_KEY`, or pass with this parameter during initialization. :param model: The name of the model to use. :param streaming_callback: A callback function that is called when a new token is received from the stream. The callback function accepts [StreamingChunk](https://docs.haystack.deepset.ai/docs/data-classes#streamingchunk) as an argument. :param api_base_url: An optional base URL. :param organization: Your organization ID, defaults to `None`. See [production best practices](https://platform.openai.com/docs/guides/production-best-practices/setting-up-your-organization). :param generation_kwargs: Other parameters to use for the model. These parameters are sent directly to the OpenAI endpoint. See OpenAI [documentation](https://platform.openai.com/docs/api-reference/responses) for more details. Some of the supported parameters: - `temperature`: What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. - `top_p`: An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. For example, 0.1 means only the tokens comprising the top 10% probability mass are considered. - `previous_response_id`: The ID of the previous response. Use this to create multi-turn conversations. - `text_format`: A Pydantic model that enforces the structure of the model's response. If provided, the output will always be validated against this format (unless the model returns a tool call). For details, see the [OpenAI Structured Outputs documentation](https://platform.openai.com/docs/guides/structured-outputs). - `text`: A JSON schema that enforces the structure of the model's response. If provided, the output will always be validated against this format (unless the model returns a tool call). Notes: - Both JSON Schema and Pydantic models are supported for latest models starting from GPT-4o. - If both are provided, `text_format` takes precedence and json schema passed to `text` is ignored. - Currently, this component doesn't support streaming for structured outputs. - Older models only support basic version of structured outputs through `{"type": "json_object"}`. For detailed information on JSON mode, see the [OpenAI Structured Outputs documentation](https://platform.openai.com/docs/guides/structured-outputs#json-mode). - `reasoning`: A dictionary of parameters for reasoning. For example: - `summary`: The summary of the reasoning. - `effort`: The level of effort to put into the reasoning. Can be `low`, `medium` or `high`. - `generate_summary`: Whether to generate a summary of the reasoning. Note: OpenAI does not return the reasoning tokens, but we can view summary if its enabled. For details, see the [OpenAI Reasoning documentation](https://platform.openai.com/docs/guides/reasoning). :param timeout: Timeout for OpenAI client calls. If not set, it defaults to either the `OPENAI_TIMEOUT` environment variable, or 30 seconds. :param max_retries: Maximum number of retries to contact OpenAI after an internal error. If not set, it defaults to either the `OPENAI_MAX_RETRIES` environment variable, or set to 5. :param tools: The tools that the model can use to prepare calls. This parameter can accept either a mixed list of Haystack `Tool` objects and Haystack `Toolset`. Or you can pass a dictionary of OpenAI/MCP tool definitions. Note: You cannot pass OpenAI/MCP tools and Haystack tools together. For details on tool support, see [OpenAI documentation](https://platform.openai.com/docs/api-reference/responses/create#responses-create-tools). :param tools_strict: Whether to enable strict schema adherence for tool calls. If set to `False`, the model may not exactly follow the schema provided in the `parameters` field of the tool definition. In Response API, tool calls are strict by default. :param http_client_kwargs: A dictionary of keyword arguments to configure a custom `httpx.Client`or `httpx.AsyncClient`. For more information, see the [HTTPX documentation](https://www.python-httpx.org/api/#client). ' — no routing/fallback
...k\haystack\components\generators\chat\openai_responses.py:117
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: ' A component that merges multiple input branches of a pipeline into a single output stream. `BranchJoiner` receives multiple inputs of the same data type and forwards the first received value to its output. This is useful for scenarios where multiple branches need to converge before proceeding. ### Common Use Cases: - **Loop Handling:** `BranchJoiner` helps close loops in pipelines. For example, if a pipeline component validates or modifies incoming data and produces an error-handling branch, `BranchJoiner` can merge both branches and send (or resend in the case of a loop) the data to the component that evaluates errors. See "Usage example" below. - **Decision-Based Merging:** `BranchJoiner` reconciles branches coming from Router components (such as `ConditionalRouter`, `TextLanguageRouter`). Suppose a `TextLanguageRouter` directs user queries to different Retrievers based on the detected language. Each Retriever processes its assigned query and passes the results to `BranchJoiner`, which consolidates them into a single output before passing them to the next component, such as a `PromptBuilder`. ### Example Usage: ```python import json from haystack import Pipeline from haystack.components.generators.chat import OpenAIChatGenerator from haystack.components.joiners import BranchJoiner from haystack.components.validators import JsonSchemaValidator from haystack.dataclasses import ChatMessage # Define a schema for validation person_schema = { "type": "object", "properties": { "first_name": {"type": "string", "pattern": "^[A-Z][a-z]+$"}, "last_name": {"type": "string", "pattern": "^[A-Z][a-z]+$"}, "nationality": {"type": "string", "enum": ["Italian", "Portuguese", "American"]}, }, "required": ["first_name", "last_name", "nationality"] } # Initialize a pipeline pipe = Pipeline() # Add components to the pipeline pipe.add_component("joiner", BranchJoiner(list[ChatMessage])) pipe.add_component("generator", OpenAIChatGenerator(model="gpt-4.1-mini")) pipe.add_component("validator", JsonSchemaValidator(json_schema=person_schema)) # And connect them pipe.connect("joiner", "generator") pipe.connect("generator.replies", "validator.messages") pipe.connect("validator.validation_error", "joiner") result = pipe.run( data={ "generator": {"generation_kwargs": {"response_format": {"type": "json_object"}}}, "joiner": {"value": [ChatMessage.from_user("Create json from Peter Parker")]}} ) print(json.loads(result["validator"]["validated"][0].text)) # >> {'first_name': 'Peter', 'last_name': 'Parker', 'nationality': 'American', 'name': 'Spider-Man', 'occupation': # >> 'Superhero', 'age': 23, 'location': 'New York City'} ``` Note that `BranchJoiner` can manage only one data type at a time. In this case, `BranchJoiner` is created for passing `list[ChatMessage]`. This determines the type of data that `BranchJoiner` will receive from the upstream connected components and also the type of data that `BranchJoiner` will send through its output. In the code example, `BranchJoiner` receives a looped back `list[ChatMessage]` from the `JsonSchemaValidator` and sends it down to the `OpenAIChatGenerator` for re-generation. We can have multiple loopback connections in the pipeline. In this instance, the downstream component is only one (the `OpenAIChatGenerator`), but the pipeline could have more than one downstream component. ' — no routing/fallback
...lery\repos\haystack\haystack\components\joiners\branch.py:14
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: ' A component that returns a list of semantically similar queries to improve retrieval recall in RAG systems. The component uses a chat generator to expand queries. The chat generator is expected to return a JSON response with the following structure: ```json {"queries": ["expanded query 1", "expanded query 2", "expanded query 3"]} ``` ### Usage example ```python from haystack.components.generators.chat.openai import OpenAIChatGenerator from haystack.components.query import QueryExpander expander = QueryExpander( chat_generator=OpenAIChatGenerator(model="gpt-4.1-mini"), n_expansions=3 ) result = expander.run(query="green energy sources") print(result["queries"]) # Output: ['alternative query 1', 'alternative query 2', 'alternative query 3', 'green energy sources'] # Note: Up to 3 additional queries + 1 original query (if include_original_query=True) # To control total number of queries: expander = QueryExpander(n_expansions=2, include_original_query=True) # Up to 3 total # or expander = QueryExpander(n_expansions=3, include_original_query=False) # Exactly 3 total ``` ' — no routing/fallback
...epos\haystack\haystack\components\query\query_expander.py:55
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: ' Initialize the QueryExpander component. :param chat_generator: The chat generator component to use for query expansion. If None, a default OpenAIChatGenerator with gpt-4.1-mini model is used. :param prompt_template: Custom [PromptBuilder](https://docs.haystack.deepset.ai/docs/promptbuilder) template for query expansion. The template should instruct the LLM to return a JSON response with the structure: `{"queries": ["query1", "query2", "query3"]}`. The template should include 'query' and 'n_expansions' variables. :param n_expansions: Number of alternative queries to generate (default: 4). :param include_original_query: Whether to include the original query in the output. ' — no routing/fallback
...epos\haystack\haystack\components\query\query_expander.py:95
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: 'gpt-4.1-mini' — no routing/fallback
...epos\haystack\haystack\components\query\query_expander.py:115
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: 'gpt-4.1-mini' — no routing/fallback
...\repos\haystack\haystack\components\rankers\llm_ranker.py:21
Use model routing or configuration instead of hardcoded names
MEDIUM D12
Hardcoded model name: ' Ranks documents for a query using a Large Language Model. The LLM is expected to return a JSON object containing ranked document indices. Usage example: ```python from haystack import Document from haystack.components.generators.chat import OpenAIChatGenerator from haystack.components.rankers import LLMRanker chat_generator = OpenAIChatGenerator( model="gpt-4.1-mini", generation_kwargs={ "temperature": 0.0, "response_format": { "type": "json_schema", "json_schema": { "name": "document_ranking", "schema": { "type": "object", "properties": { "documents": { "type": "array", "items": { "type": "object", "properties": {"index": {"type": "integer"}}, "required": ["index"], "additionalProperties": False, }, } }, "required": ["documents"], "additionalProperties": False, }, }, }, }, ) ranker = LLMRanker(chat_generator=chat_generator) documents = [ Document(id="paris", content="Paris is the capital of France."), Document(id="berlin", content="Berlin is the capital of Germany."), ] result = ranker.run(query="capital of Germany", documents=documents) print(result["documents"][0].id) ``` ' — no routing/fallback
...\repos\haystack\haystack\components\rankers\llm_ranker.py:83
Use model routing or configuration instead of hardcoded names
MEDIUM D5
print() used instead of structured logging
...os\haystack\haystack\human_in_the_loop\user_interfaces.py:144
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...os\haystack\haystack\human_in_the_loop\user_interfaces.py:155
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...os\haystack\haystack\human_in_the_loop\user_interfaces.py:156
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...os\haystack\haystack\human_in_the_loop\user_interfaces.py:157
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...os\haystack\haystack\human_in_the_loop\user_interfaces.py:158
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...os\haystack\haystack\human_in_the_loop\user_interfaces.py:161
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...os\haystack\haystack\human_in_the_loop\user_interfaces.py:163
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...os\haystack\haystack\human_in_the_loop\user_interfaces.py:164
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...os\haystack\haystack\human_in_the_loop\user_interfaces.py:194
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D5
print() used instead of structured logging
...os\haystack\haystack\human_in_the_loop\user_interfaces.py:211
Use logging.* or structlog.* for structured, searchable logs
MEDIUM D12
Hardcoded model name: ' A Tool that wraps Haystack Pipelines, allowing them to be used as tools by LLMs. PipelineTool automatically generates LLM-compatible tool schemas from pipeline input sockets, which are derived from the underlying components in the pipeline. Key features: - Automatic LLM tool calling schema generation from pipeline inputs - Description extraction of pipeline inputs based on the underlying component docstrings To use PipelineTool, you first need a Haystack pipeline. Below is an example of creating a PipelineTool ## Usage Example: ```python from haystack import Document, Pipeline from haystack.dataclasses import ChatMessage from haystack.document_stores.in_memory import InMemoryDocumentStore from haystack.components.embedders.sentence_transformers_text_embedder import SentenceTransformersTextEmbedder from haystack.components.embedders.sentence_transformers_document_embedder import ( SentenceTransformersDocumentEmbedder ) from haystack.components.generators.chat import OpenAIChatGenerator from haystack.components.retrievers import InMemoryEmbeddingRetriever from haystack.components.agents import Agent from haystack.tools import PipelineTool # Initialize a document store and add some documents document_store = InMemoryDocumentStore() document_embedder = SentenceTransformersDocumentEmbedder(model="sentence-transformers/all-MiniLM-L6-v2") documents = [ Document(content="Nikola Tesla was a Serbian-American inventor and electrical engineer."), Document( content="He is best known for his contributions to the design of the modern alternating current (AC) " "electricity supply system." ), ] docs_with_embeddings = document_embedder.run(documents=documents)["documents"] document_store.write_documents(docs_with_embeddings) # Build a simple retrieval pipeline retrieval_pipeline = Pipeline() retrieval_pipeline.add_component( "embedder", SentenceTransformersTextEmbedder(model="sentence-transformers/all-MiniLM-L6-v2") ) retrieval_pipeline.add_component("retriever", InMemoryEmbeddingRetriever(document_store=document_store)) retrieval_pipeline.connect("embedder.embedding", "retriever.query_embedding") # Wrap the pipeline as a tool retriever_tool = PipelineTool( pipeline=retrieval_pipeline, input_mapping={"query": ["embedder.text"]}, output_mapping={"retriever.documents": "documents"}, name="document_retriever", description="For any questions about Nikola Tesla, always use this tool", ) # Create an Agent with the tool agent = Agent( chat_generator=OpenAIChatGenerator(model="gpt-4.1-mini"), tools=[retriever_tool] ) # Let the Agent handle a query result = agent.run([ChatMessage.from_user("Who was Nikola Tesla?")]) # Print result of the tool call print("Tool Call Result:") print(result["messages"][2].tool_call_result.result) print("") # Print answer print("Answer:") print(result["messages"][-1].text) ``` ' — no routing/fallback
...en\gallery\repos\haystack\haystack\tools\pipeline_tool.py:22
Use model routing or configuration instead of hardcoded names
MEDIUM D4
Exposed Generic Secret: tok...N }}
...rden\gallery\repos\haystack\.github\workflows\labeler.yml:15
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM D4
Exposed Generic Secret: api...-key
...aystack\docs-website\reference\integrations-api\cohere.md:803
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM D4
Exposed Generic Secret: api...key>
...aystack\docs-website\reference\integrations-api\qdrant.md:581
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM D4
Exposed Generic Secret: api...-key
...ce_versioned_docs\version-2.18\integrations-api\cohere.md:803
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM D4
Exposed Generic Secret: api...key>
...ce_versioned_docs\version-2.18\integrations-api\qdrant.md:581
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM D4
Exposed Generic Secret: api...-key
...ce_versioned_docs\version-2.19\integrations-api\cohere.md:803
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM D4
Exposed Generic Secret: api...key>
...ce_versioned_docs\version-2.19\integrations-api\qdrant.md:581
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM D4
Exposed Generic Secret: api...-key
...ce_versioned_docs\version-2.20\integrations-api\cohere.md:803
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM D4
Exposed Generic Secret: api...key>
...ce_versioned_docs\version-2.20\integrations-api\qdrant.md:581
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM D4
Exposed Generic Secret: api...-key
...ce_versioned_docs\version-2.21\integrations-api\cohere.md:803
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM D4
Exposed Generic Secret: api...key>
...ce_versioned_docs\version-2.21\integrations-api\qdrant.md:581
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM D4
Exposed Generic Secret: api...-key
...ce_versioned_docs\version-2.22\integrations-api\cohere.md:803
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM D4
Exposed Generic Secret: api...key>
...ce_versioned_docs\version-2.22\integrations-api\qdrant.md:581
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM D4
Exposed Generic Secret: api...-key
...ce_versioned_docs\version-2.23\integrations-api\cohere.md:803
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM D4
Exposed Generic Secret: api...key>
...ce_versioned_docs\version-2.23\integrations-api\qdrant.md:581
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM D4
Exposed Generic Secret: api...-key
...ce_versioned_docs\version-2.24\integrations-api\cohere.md:803
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM D4
Exposed Generic Secret: api...key>
...ce_versioned_docs\version-2.24\integrations-api\qdrant.md:581
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM D4
Exposed Generic Secret: api...-key
...ce_versioned_docs\version-2.25\integrations-api\cohere.md:803
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM D4
Exposed Generic Secret: api...key>
...ce_versioned_docs\version-2.25\integrations-api\qdrant.md:581
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM D4
Exposed Generic Secret: api...-key
...ce_versioned_docs\version-2.26\integrations-api\cohere.md:803
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM D4
Exposed Generic Secret: api...key>
...ce_versioned_docs\version-2.26\integrations-api\qdrant.md:581
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM D4
Exposed Generic Secret: tok...oken
...ersioned_docs\version-2.27\haystack-api\connectors_api.md:173
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM D4
Exposed Generic Secret: api...-key
..._versioned_docs\version-2.27\haystack-api\pipeline_api.md:423
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM D4
Exposed Generic Secret: api...-key
...ce_versioned_docs\version-2.27\integrations-api\cohere.md:803
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM D4
Exposed Generic Secret: api...key>
...ce_versioned_docs\version-2.27\integrations-api\qdrant.md:581
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM D4
Exposed Generic Secret: api...rJgP
...n\gallery\repos\haystack\haystack\telemetry\_telemetry.py:55
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM D4
Exposed Generic Secret: tok...ken>
...eleasenotes\notes\add-TEI-embedders-8c76593bc25a7219.yaml:11
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM D4
Exposed Generic Secret: tok...ken>
...eleasenotes\notes\add-TEI-embedders-8c76593bc25a7219.yaml:23
Move to secrets manager or .env file (excluded from VCS)
EU AI Act Article 15OWASP LLM09
MEDIUM D8
Agent class 'LLM' has no defined lifecycle states
...repos\haystack\haystack\components\generators\chat\llm.py:19
Add state machine (ACTIVE/SUSPENDED/RETIRED) for agent lifecycle
MEDIUM D12
Agent class 'TestAgent' has no cost tracking
...llery\repos\haystack\test\components\agents\test_agent.py:186
Track token usage and costs per agent execution
MEDIUM D12
Agent class 'TestAgent' has no cost tracking
...\repos\haystack\test\components\agents\test_agent_hitl.py:55
Track token usage and costs per agent execution
MEDIUM D12
Agent class 'FakeAgent' has no cost tracking
...ry\repos\haystack\test\core\pipeline\features\test_run.py:5057
Track token usage and costs per agent execution
MEDIUM D8
Agent class 'FakeAgent' has no defined lifecycle states
...ry\repos\haystack\test\core\pipeline\features\test_run.py:5057
Add state machine (ACTIVE/SUSPENDED/RETIRED) for agent lifecycle
MEDIUM D3
No concurrency block — parallel deployments possible
...\haystack\.github\workflows\auto_approve_api_ref_sync.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM D3
No concurrency block — parallel deployments possible
...n\gallery\repos\haystack\.github\workflows\branch_off.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM D3
No concurrency block — parallel deployments possible
...allery\repos\haystack\.github\workflows\check_api_ref.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM D3
No concurrency block — parallel deployments possible
...n\gallery\repos\haystack\.github\workflows\ci_metrics.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM D3
No concurrency block — parallel deployments possible
...llery\repos\haystack\.github\workflows\docker_release.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM D14
Push trigger without branch protection guard
...llery\repos\haystack\.github\workflows\docker_release.yml:1
Add if: github.ref == 'refs/heads/main' or restrict push trigger branches
MEDIUM D3
No concurrency block — parallel deployments possible
...ack\.github\workflows\docs-website-test-docs-snippets.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM D3
No concurrency block — parallel deployments possible
...ry\repos\haystack\.github\workflows\docstring_labeler.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM D3
No concurrency block — parallel deployments possible
...lery\repos\haystack\.github\workflows\docusaurus_sync.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM D14
Push trigger without branch protection guard
...lery\repos\haystack\.github\workflows\docusaurus_sync.yml:1
Add if: github.ref == 'refs/heads/main' or restrict push trigger branches
MEDIUM D3
No concurrency block — parallel deployments possible
...s\warden\gallery\repos\haystack\.github\workflows\e2e.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM D3
No concurrency block — parallel deployments possible
...llery\repos\haystack\.github\workflows\github_release.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM D14
Push trigger without branch protection guard
...llery\repos\haystack\.github\workflows\github_release.yml:1
Add if: github.ref == 'refs/heads/main' or restrict push trigger branches
MEDIUM D3
No concurrency block — parallel deployments possible
...rden\gallery\repos\haystack\.github\workflows\labeler.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM D3
continue-on-error: true — pipeline failures silently suppressed
...y\repos\haystack\.github\workflows\license_compliance.yml:51
Remove continue-on-error or scope it to non-critical steps only
MEDIUM D3
No concurrency block — parallel deployments possible
...y\repos\haystack\.github\workflows\license_compliance.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM D3
No concurrency block — parallel deployments possible
...s\haystack\.github\workflows\nightly_testpypi_release.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM D3
No concurrency block — parallel deployments possible
...rden\gallery\repos\haystack\.github\workflows\project.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM D3
No concurrency block — parallel deployments possible
...epos\haystack\.github\workflows\promote_unstable_docs.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM D14
Push trigger without branch protection guard
...epos\haystack\.github\workflows\promote_unstable_docs.yml:1
Add if: github.ref == 'refs/heads/main' or restrict push trigger branches
MEDIUM D3
No concurrency block — parallel deployments possible
...stack\.github\workflows\push_release_notes_to_website.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM D3
No concurrency block — parallel deployments possible
...gallery\repos\haystack\.github\workflows\pypi_release.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM D14
Push trigger without branch protection guard
...gallery\repos\haystack\.github\workflows\pypi_release.yml:1
Add if: github.ref == 'refs/heads/main' or restrict push trigger branches
MEDIUM D3
No concurrency block — parallel deployments possible
...allery\repos\haystack\.github\workflows\release_notes.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM D3
No concurrency block — parallel deployments possible
...epos\haystack\.github\workflows\release_notes_skipper.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM D3
No concurrency block — parallel deployments possible
...\warden\gallery\repos\haystack\.github\workflows\slow.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM D14
Push trigger without branch protection guard
...\warden\gallery\repos\haystack\.github\workflows\slow.yml:1
Add if: github.ref == 'refs/heads/main' or restrict push trigger branches
MEDIUM D3
No concurrency block — parallel deployments possible
...warden\gallery\repos\haystack\.github\workflows\stale.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM D3
continue-on-error: true — pipeline failures silently suppressed
...warden\gallery\repos\haystack\.github\workflows\tests.yml:146
Remove continue-on-error or scope it to non-critical steps only
MEDIUM D3
No concurrency block — parallel deployments possible
...warden\gallery\repos\haystack\.github\workflows\tests.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM D14
Push trigger without branch protection guard
...warden\gallery\repos\haystack\.github\workflows\tests.yml:1
Add if: github.ref == 'refs/heads/main' or restrict push trigger branches
MEDIUM D3
No concurrency block — parallel deployments possible
...ry\repos\haystack\.github\workflows\workflows_linting.yml:1
Add concurrency: group with cancel-in-progress to prevent parallel deploys
MEDIUM D1
Cloud AI endpoint URL hardcoded in source — hinders environment portability
...pos\haystack\test\components\audio\test_whisper_remote.py:112
Move AI service endpoints to environment variables or configuration files
OWASP LLM06
MEDIUM D1
Cloud AI endpoint URL hardcoded in source — hinders environment portability
...test\components\embedders\test_azure_document_embedder.py:206
Move AI service endpoints to environment variables or configuration files
OWASP LLM06
MEDIUM D1
Cloud AI endpoint URL hardcoded in source — hinders environment portability
...\haystack\test\components\generators\test_openai_dalle.py:44
Move AI service endpoints to environment variables or configuration files
OWASP LLM06
MEDIUM D17
No adversarial testing evidence — no red team, no prompt injection tests
Implement adversarial testing for agent systems
MEDIUM D17
No tool-call attack simulation — agent tool calls not tested against adversarial inputs
Implement adversarial testing for agent systems
MEDIUM D17
No multi-agent chaos engineering — agent swarms not stress tested
Implement adversarial testing for agent systems
LOW 23
LOW D14
No environment: block — no required reviewers for deployments
...\haystack\.github\workflows\auto_approve_api_ref_sync.yml:1
Add environment: production with required reviewers in GitHub settings
EU AI Act Article 14
LOW D14
No environment: block — no required reviewers for deployments
...n\gallery\repos\haystack\.github\workflows\branch_off.yml:1
Add environment: production with required reviewers in GitHub settings
EU AI Act Article 14
LOW D14
No environment: block — no required reviewers for deployments
...allery\repos\haystack\.github\workflows\check_api_ref.yml:1
Add environment: production with required reviewers in GitHub settings
EU AI Act Article 14
Show 20 more LOW findings
LOW D14
No environment: block — no required reviewers for deployments
...n\gallery\repos\haystack\.github\workflows\ci_metrics.yml:1
Add environment: production with required reviewers in GitHub settings
EU AI Act Article 14
LOW D14
No environment: block — no required reviewers for deployments
...llery\repos\haystack\.github\workflows\docker_release.yml:1
Add environment: production with required reviewers in GitHub settings
EU AI Act Article 14
LOW D14
No environment: block — no required reviewers for deployments
...ack\.github\workflows\docs-website-test-docs-snippets.yml:1
Add environment: production with required reviewers in GitHub settings
EU AI Act Article 14
LOW D14
No environment: block — no required reviewers for deployments
...ry\repos\haystack\.github\workflows\docstring_labeler.yml:1
Add environment: production with required reviewers in GitHub settings
EU AI Act Article 14
LOW D14
No environment: block — no required reviewers for deployments
...ery\repos\haystack\.github\workflows\docs_search_sync.yml:1
Add environment: production with required reviewers in GitHub settings
EU AI Act Article 14
LOW D14
No environment: block — no required reviewers for deployments
...lery\repos\haystack\.github\workflows\docusaurus_sync.yml:1
Add environment: production with required reviewers in GitHub settings
EU AI Act Article 14
LOW D14
No environment: block — no required reviewers for deployments
...s\warden\gallery\repos\haystack\.github\workflows\e2e.yml:1
Add environment: production with required reviewers in GitHub settings
EU AI Act Article 14
LOW D14
No environment: block — no required reviewers for deployments
...llery\repos\haystack\.github\workflows\github_release.yml:1
Add environment: production with required reviewers in GitHub settings
EU AI Act Article 14
LOW D14
No environment: block — no required reviewers for deployments
...rden\gallery\repos\haystack\.github\workflows\labeler.yml:1
Add environment: production with required reviewers in GitHub settings
EU AI Act Article 14
LOW D14
No environment: block — no required reviewers for deployments
...y\repos\haystack\.github\workflows\license_compliance.yml:1
Add environment: production with required reviewers in GitHub settings
EU AI Act Article 14
LOW D14
No environment: block — no required reviewers for deployments
...rden\gallery\repos\haystack\.github\workflows\project.yml:1
Add environment: production with required reviewers in GitHub settings
EU AI Act Article 14
LOW D14
No environment: block — no required reviewers for deployments
...epos\haystack\.github\workflows\promote_unstable_docs.yml:1
Add environment: production with required reviewers in GitHub settings
EU AI Act Article 14
LOW D14
No environment: block — no required reviewers for deployments
...stack\.github\workflows\push_release_notes_to_website.yml:1
Add environment: production with required reviewers in GitHub settings
EU AI Act Article 14
LOW D14
No environment: block — no required reviewers for deployments
...rden\gallery\repos\haystack\.github\workflows\release.yml:1
Add environment: production with required reviewers in GitHub settings
EU AI Act Article 14
LOW D14
No environment: block — no required reviewers for deployments
...allery\repos\haystack\.github\workflows\release_notes.yml:1
Add environment: production with required reviewers in GitHub settings
EU AI Act Article 14
LOW D14
No environment: block — no required reviewers for deployments
...epos\haystack\.github\workflows\release_notes_skipper.yml:1
Add environment: production with required reviewers in GitHub settings
EU AI Act Article 14
LOW D14
No environment: block — no required reviewers for deployments
...\warden\gallery\repos\haystack\.github\workflows\slow.yml:1
Add environment: production with required reviewers in GitHub settings
EU AI Act Article 14
LOW D14
No environment: block — no required reviewers for deployments
...warden\gallery\repos\haystack\.github\workflows\stale.yml:1
Add environment: production with required reviewers in GitHub settings
EU AI Act Article 14
LOW D14
No environment: block — no required reviewers for deployments
...warden\gallery\repos\haystack\.github\workflows\tests.yml:1
Add environment: production with required reviewers in GitHub settings
EU AI Act Article 14
LOW D14
No environment: block — no required reviewers for deployments
...ry\repos\haystack\.github\workflows\workflows_linting.yml:1
Add environment: production with required reviewers in GitHub settings
EU AI Act Article 14
💡 Recommendationsordered by score impact
#1
Establish a live tool inventory +21 pts
No tool catalog detected. Without a centralized inventory of MCP tools and their schemas, governance policies have nothing to enforce against. Deploy a tool registry with auto-discovery. (3 findings in this dimension)
⚠ The Workaround Tax
Stop paying the Workaround Tax. Relying on prompt-filters and out-of-band monitoring forces your developers to write manual security logic scattered across every agent and service. A centralized gateway enforces policy automatically — at the interception layer, on every tool call, without code changes in your agents.
Current state
15/ 100
✗ UNGOVERNED
D1 Tool Inventory
4/25
D2 Risk Detection
0/20
D4 Credential Management
0/20
D9 Threat Detection
0/20
D3 Policy Coverage
4/20
+ SharkRouter (full deployment)
91/ 100
✓ GOVERNED
D1 Tool Inventory
23 +19
D2 Risk Detection
18 +18
D4 Credential Management
18 +18
D9 Threat Detection
18 +18
D3 Policy Coverage
18 +14
* Projection based on SharkRouter's estimated score. Actual results may vary.  sharkrouter.ai → 15 → 91 · +76 pts
#2
Deploy risk classification for tool calls +20 pts
No risk scoring on tool invocations. Every tool call carries the same implicit trust level. Classify tools by risk (destructive, financial, exfiltration) and enforce approval gates for high-risk categories.
#3
Move credentials to a secrets manager +20 pts
API keys or credentials found in source code. Move to HashiCorp Vault, AWS Secrets Manager, or environment-level secret stores. Rotate all exposed keys immediately. Add .env to .gitignore. (80 findings in this dimension)
#4
Deploy behavioral detection and kill switch +20 pts
No behavioral baselines, no anomaly detection, no auto-suspend capability. A compromised agent can operate indefinitely. Salami slicing across sessions is undetectable. (9 findings in this dimension)
#5
Implement policy enforcement on tool calls +16 pts
No deny/allow/audit policies detected. Agents can invoke any tool without restriction. Deploy an inline policy engine with deny-by-default for destructive and financial tools. (32 findings in this dimension)
Generated by Warden v1.6.0 · Open Source · MIT License · github.com/sharkrouter/warden
Scoring model v4.3 · 17 weighted dimensions · 235 pts · methodology in SCORING.md
Scan data stays on your machine. Email delivery is opt-in only.
When opted in: score + metadata only. Never: keys, logs, paths, or PII.
Privacy policy · To enforce policies on what Warden found → Explore what 91/100 looks like →