Decentralised AI is shifting from isolated agents to networks of interacting agents operating across shared platforms and protocols. This shift creates security challenges beyond traditional cybersecurity and single-agent safety: the free-form communication and tool use that are essential for task generalisation also open new system-level failure modes. These vulnerabilities complicate attribution and oversight, and network effects can turn local issues into persistent, systemic risks (e.g., privacy leaks, jailbreak propagation, distributed attacks, or secret collusion). The workshop will address open challenges in multi-agent security [1, MASEC] as a discipline dedicated to securing interactions among agents, human–AI teams, and institutions, emphasising security–performance–coordination trade-offs, secure interaction protocols and environments, and monitoring and containment mechanisms that remain effective under emergent behaviour. The main focus will lie on threat-model discovery through community interaction.