Several videos have recently surfaced on platforms like YouTube showing an engineer known as “STS 3D” who built an automated turret powered by ChatGPT. In the videos, the turret, fitted with a dummy rifle, automatically tracks targets and responds to the engineer’s voice commands, firing plastic bullets on cue.
To build the AI turret, STS 3D used OpenAI’s Realtime API for ChatGPT. In one video, he instructs ChatGPT to react as if under attack from the front left and front right; the AI responds by firing the rifle in those directions and then offers further assistance. The engineer next commands the rifle to sweep a wide angle, and ChatGPT fires at five-degree intervals while independently varying the elevation. The AI follows these commands accurately.
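The sweep described above reduces to generating a sequence of (pan, elevation) pairs and firing at each one. A minimal sketch of that logic, assuming a hypothetical `sweep_angles` helper (the actual commands STS 3D's turret accepts are not public):

```python
def sweep_angles(start_deg, end_deg, step_deg=5.0, elevations=(0.0, 5.0)):
    """Generate (pan, elevation) pairs for a horizontal sweep.

    Pan advances in fixed steps while elevation cycles independently,
    mirroring the five-degree-interval sweep shown in the video.
    """
    pairs = []
    pan = float(start_deg)
    i = 0
    while pan <= end_deg:
        pairs.append((pan, elevations[i % len(elevations)]))
        pan += step_deg
        i += 1
    return pairs

if __name__ == "__main__":
    # Sweep from -30° to +30° in 5° steps, alternating elevation.
    for pan, elev in sweep_angles(-30, 30):
        print(f"fire at pan={pan:+.0f}°, elev={elev:.0f}°")
```

The point is that "fire at a wide angle" is trivially decomposable into discrete aim-and-fire commands, which is presumably what the model's tool calls drive.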
In other clips, the engineer shows ChatGPT being programmed to target a specific color. Using a yellow balloon as an example, the turret, guided by a simple webcam, tracks and shoots the balloon on command without delay. The engineer notes that while this method of object tracking is not the most practical, it is simple and reliable.
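Color tracking of this kind is typically just per-channel thresholding plus a centroid. A minimal NumPy sketch, assuming RGB webcam frames and a hypothetical `pixel_to_pan` mapping (STS 3D's actual pipeline is not documented):

```python
import numpy as np

def find_color_target(frame_rgb, target=(255, 255, 0), tol=60, min_pixels=50):
    """Locate a solid-colored target (e.g. a yellow balloon) in an RGB frame.

    Returns the (x, y) pixel centroid of pixels within `tol` of the target
    color on every channel, or None if too few pixels match.
    """
    diff = np.abs(frame_rgb.astype(int) - np.array(target)).max(axis=-1)
    mask = diff < tol
    if mask.sum() < min_pixels:   # ignore noise: require a minimum blob size
        return None
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())

def pixel_to_pan(x, frame_width, fov_deg=60.0):
    """Map a horizontal pixel coordinate to a pan angle relative to center."""
    return (x / frame_width - 0.5) * fov_deg
```

This is why the engineer can call the method “simple and reliable”: there is no learned model in the loop, just a color threshold and a centroid, so it works in real time on any webcam but breaks as soon as lighting or color changes.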
These videos have sparked numerous comments about a dystopian future should this technology be replicated with real weapons. OpenAI appears to share that concern: the company confirmed to Futurism that it has revoked the engineer’s access to ChatGPT, stating that its usage policies prohibit using its services to develop weapons or to automate systems that could harm others.
The engineer has not yet commented on the situation. Notably, OpenAI relaxed its rules on military AI use in 2024, permitting certain collaborations with the US military.
The incident highlights the ethical and safety concerns surrounding AI in weaponry. As the technology becomes more capable and accessible, it raises questions about where the boundaries of AI applications lie and what responsibilities developers and companies bear in regulating its use.

It also underscores why adhering to guidelines and ethical standards matters: while AI offers real benefits in fields such as automation and data analysis, its misuse can lead to unintended and dangerous consequences. Keeping AI development on a responsible track will require developers, companies, and regulatory bodies to set clear guidelines, monitor how AI is applied, and act when those guidelines are violated.
OpenAI’s response to this incident reflects the company’s commitment to ethical AI use. By revoking access to ChatGPT for the engineer, OpenAI is taking a stand against the development of potentially harmful AI applications.
In conclusion, while AI has the potential to transform many aspects of our lives, its development and use demand caution and responsibility. Ensuring that AI technologies are used ethically and safely will require ongoing collaboration and vigilance from all stakeholders.