I really want to hit on one major point about the military AI arms buildup. I think if you re-read my initial note with a fresh, even, calm perspective, you'll find that I was not rude at all, not even slightly. I'm genuinely puzzled by your reaction. But on to the main point.

Right now it's just ANI in military arms, and it will take a little while to warm up, but everyone agrees -- including responsible generals and program managers with a need to know at DARPA and the DoD, General Mattis, Bob Work, etc. -- that in 10 years AI and super-drones that kill people with no human participation will already be extremely sophisticated and advanced, and in twenty long years, with AGI now on the table, it will be completely out of hand. Do you really think Lockheed Martin and Raytheon and Halliburton won't be the first to know about the AI advances? And the Russians and the Chinese are right there with it on AI -- the Chinese are actually projected to be ahead within 10 years. The slightest mistake could be absolutely deadly; end-of-the-world deadly.

Honestly, how you don't see this with your advanced knowledge of AI and AGI just befuddles me -- a lot of true AI experts, like Stuart Russell (professor of computer science at Cal, my old school), agree with me. Elon Musk agrees with me -- HE SAYS it's the most important problem in the world, an existential threat to our survival as a species within our lifetime -- I'm just agreeing with HIM. And he's a co-founder of OpenAI and the CEO of Neuralink, and not just a "hands-off" leader but an active player in their key decisions, and a fantastic one.