Placing humanity at AI's core
Drawing on Taoist ideas, the U8 World Innovation Summit in Beijing explores how AI can coexist with human subjectivity and inner fulfillment.
Who bears responsibility?
That expectation found a more immediate echo at U8's youth-focused forum, where young participants from different fields shared how they navigate authorship, responsibility, and choice in an age of intelligent systems.
Among them was Pan Zhoudan, a PhD candidate in engineering science at the University of Oxford. He reframed a common anxiety about AI with a simple question: when people say they "trust AI", who are they really trusting?
Pan noted that we are naturally drawn to what resembles us, and that human-like AI can create a compelling illusion of being understood.
"But resemblance should not be mistaken for accountability," he argued, because in the foreseeable future, AI systems cannot bear legal or moral responsibility. "When something goes wrong, responsibility will still lie with the humans who design, train, and deploy them."
For Pan, trust should ultimately be placed in people, not machines. He cautioned that trust formed through limited human-AI interaction should never substitute for independent judgment.
He doesn't use AI as a shortcut, but as a testing ground. When entering unfamiliar fields, he first builds his own understanding through reading and reflection, then asks AI to challenge his reasoning. Sometimes, he poses the same question to multiple AI systems and compares their responses to sharpen his judgment.
"I treat AI as an audience. If I'm not satisfied with its answer, I'll work out my own and show it back to the system — to see how it responds," Pan said.
In the end, he added, the goal isn't to outsource thinking but to strengthen it — so that in a world of increasingly capable machines, the final responsibility still carries a human name.