Did ChatGPT's o3 model refuse a shutdown command in a safety test? Research sparks concern; here's why

In a recent artificial intelligence (AI) safety evaluation, OpenAI's most advanced model, known as o3, sparked debate after it allegedly refused a direct shutdown instruction during a controlled test. The findings, published by Palisade Research, a firm specialising in stress-testing AI systems, suggest that o3 may have actively circumvented a command to…


AI needs regulation, but what kind, and how much?

Perhaps the best-known risk is embodied by the killer robots in the “Terminator” films—the idea that AI will turn against its human creators. The tale of the hubristic inventor who loses control of his own creation is centuries old. And in the modern era people are, observes Chris Dixon, a venture capitalist, “trained by Hollywood…
