As someone who builds software for a living, I find it interesting that this is an issue for AI in exactly the same way it's an issue for software teams. If you just build what you're asked for, it will be the wrong thing. And even if you're careful, you often learn what people actually want only from their disappointed reaction once you've built something. This is why we build in small chunks and get feedback along the way: people are not good at converting their imaginative vision into written, spoken, or encoded instructions, and there will always be something silently assumed. The AIs have it even harder than human teams, though, since we humans are (for the time being) better at predicting common human oversights and reading between the lines of the instructions. Perhaps one day we'll be able to preprocess our instructions through a "figure out what the human probably meant to say" AI.
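
Just to make that last idea concrete, here's a minimal sketch of what such a preprocessing step might look like. Everything in it is hypothetical: `complete()` is a stand-in for whatever LLM call you have available, and the prompt wording is purely illustrative.

```python
def complete(prompt: str) -> str:
    # Stand-in for a real LLM completion call; not an actual library function.
    raise NotImplementedError("plug in your model's completion API here")

def clarify_intent(raw_instructions: str) -> str:
    """Ask a model to restate instructions with silent assumptions made explicit."""
    prompt = (
        "A user wrote the following instructions. Restate them, filling in "
        "what the user probably meant, and list any assumptions you had to "
        "make along the way:\n\n" + raw_instructions
    )
    return complete(prompt)

# The executing agent would then work from clarify_intent(instructions)
# rather than the raw text, so hidden assumptions surface before anything
# gets built instead of after.
```

The point isn't the specific prompt, just that the clarification pass happens as a separate step whose output a human could review, the same way a dev team plays requirements back to a client before building.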