For university students observing the trajectory of AI, this event serves as a practical demonstration of agentic behavior. Unlike traditional large language models that merely assist with code generation or documentation, an agentic system is designed to pursue an objective—in this case, security auditing—by executing a series of steps independently. It explores the codebase, hypothesizes potential security flaws, verifies those hypotheses against code execution logic, and reports findings without constant human prompting.
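The explore–hypothesize–verify–report cycle described above can be sketched as a simple loop. This is a minimal, purely illustrative sketch, not the actual system discussed in the article: every function name and the toy detection heuristic are hypothetical assumptions made for demonstration.

```python
# Minimal sketch of an agentic audit loop. All names and heuristics here
# are hypothetical and illustrative only -- not the real system's design.

def explore(codebase):
    """Select which files to examine next (here: all of them)."""
    return list(codebase)

def hypothesize(source):
    """Flag a potential flaw (toy heuristic: f-string formatted into SQL)."""
    return "possible SQL injection" if "execute(f" in source else None

def verify(source, hypothesis):
    """Check the hypothesis against the code's logic (toy confirmation)."""
    return hypothesis is not None and "execute(" in source

def audit(codebase):
    """Run the full explore -> hypothesize -> verify -> report cycle."""
    findings = []
    for name in explore(codebase):
        source = codebase[name]
        h = hypothesize(source)
        if h and verify(source, h):
            findings.append((name, h))  # report confirmed findings
    return findings

codebase = {
    "db.py": 'cur.execute(f"SELECT * FROM users WHERE id={uid}")',
    "util.py": "def add(a, b):\n    return a + b",
}
print(audit(codebase))
```

The key point the sketch captures is that the loop, not a human, decides what to inspect and which hypotheses to pursue; a real agentic auditor would replace these toy heuristics with model-driven reasoning at each step.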