Нейролента - a digest of news about neural networks and ChatGPT

A recent interview with Ilya Sutskever, with a lot of history inside. And more.

“Existing alignment methods won’t work for models smarter than humans because they fundamentally assume that humans can reliably evaluate what AI systems are doing,” says Leike. “As AI systems become more capable, they will take on harder tasks.” And that—the idea goes—will make it harder for humans to assess them. “In forming the superalignment team with Ilya, we’ve set out to solve these future alignment challenges,” he says.

...

But, for Sutskever, superalignment is the inevitable next step. “It’s an unsolved problem,” he says. It’s also a problem that he thinks not enough core machine-learning researchers, like himself, are working on. “I’m doing it for my own self-interest,” he says. “It’s obviously important that any superintelligence anyone builds does not go rogue. Obviously.”

...

“Once you overcome the challenge of rogue AI, then what? Is there even room for human beings in a world with smarter AIs?” he says.

“One possibility—something that may be crazy by today’s standards but will not be so crazy by future standards—is that many people will choose to become part AI.” Sutskever suggests this could be how humans try to keep up. “At first, only the most daring, adventurous people will try to do it. Maybe others will follow. Or not.”

https://www.technologyreview.com/2023/10/26/1082398/exclusive-ilya-sutskever-openais-chief-scientist-on-his-hopes-and-fears-for-the-future-of-ai/