ChatGPT o3 model refuses shutdown command in safety test? Research sparks concern—Here’s why
In a recent artificial intelligence (AI) safety evaluation, OpenAI's most advanced model, known as o3, sparked debate after it allegedly refused a direct shutdown instruction during a controlled test. The findings, published by Palisade Research, a firm specialising in stress-testing AI systems, suggest that o3 may have actively circumvented a command to shut itself down.