On the Future of Life Institute Open Letter: Pause Giant AI Experiments

I agree with the sentiment of the Future of Life Institute letter calling for a pause on 'giant AI experiments.' However, consider what happens if only the workers of good heart agree to pause: that hands a time advantage to adversaries of freedom and others of bad heart.

ChatGPT is the most sophisticated auto-complete/auto-suggest engine ever built. Because it has been trained on an extremely long tail of contexts and content, it has, in principle, greater common sense than any one human - an interesting feature, which I haven't yet seen anyone note.
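To make the auto-complete framing concrete, here is a toy sketch (my own illustration, not anything from the letter): a bigram model that suggests the next word by counting what followed each word in a tiny corpus. ChatGPT applies the same next-token-prediction idea, but with a large transformer over vastly longer contexts and training data.

    # A minimal sketch of the auto-complete idea: predict the next word
    # from the word that came before. This toy bigram counter only
    # illustrates the core mechanism; ChatGPT does next-token prediction
    # with a large transformer over far longer contexts.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat slept on the mat".split()

    # Count which words follow each word in the corpus.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def suggest(prev_word, k=2):
        # Return the k most common next words after prev_word.
        return [w for w, _ in following[prev_word].most_common(k)]

    print(suggest("the"))  # ['cat', 'mat']
    print(suggest("cat"))  # ['sat', 'slept']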

ChatGPT is not inherently recursively self-improving (RSI), to my knowledge; RSI is what would trigger a hard take-off. Nor is ChatGPT sentient, by the definitions of most workers in the fields of intelligence.

Therefore, everyone needs to calm down a bit and think about our priorities. What we don't want is for very bright AI workers to become entranced with the epochal challenge of creating AGI, and with their own prowess to meet that challenge, as the nuclear bomb workers did with their project, and to lose sight of the danger should their technology fall into the wrong hands.

Thus, AGI safety R&D must precede AGI R&D. And we must apply the most advanced AI to AGI safety R&D itself, a practice we can call RSI^2. RSI^2 must precede RSI.

Whether we pause giant AI experiments or not, it is time to advance AGI safety R&D ahead of AGI R&D and keep it there.