Working with AI Is a Question-Setting Problem


AI is often described as knowing a lot. That description is technically true, but the more interesting property is that it knows too much to be useful on its own. There are simply too many directions it could go in at once.

Because of that, it can’t really operate without being steered. Every question you ask narrows the space a little. It tells the system which angle to take, what to ignore, and what kind of answer you’re even looking for. You’re not pulling knowledge out of it so much as pointing it somewhere.

In that sense, AI feels very different from earlier tools. With search engines, the hard part was finding information at all. You were trying to dig the right things out of a large but still limited pile. With AI, the problem is almost the opposite. There’s more material than you can possibly work through, and without direction it just spills out in generic ways.

The difference shows up quickly in practice. If you ask a broad or fuzzy question, the response tends to drift. It covers safe ground. It sounds fine, but it doesn’t help you think. When you ask a more specific question, one that calls out a constraint, a trade-off, or something you’re unsure about, the answer changes. The system has less room to wander and is forced to commit to a particular line of reasoning.

I think that’s where the real leverage shows up. You can take one angle, see where it falls apart, adjust the question, and push in a different direction. You’re not trying to cover everything. You’re shaping the space you’re willing to explore, one question at a time.

Over a few iterations, that starts to matter more than the individual answers. The questions you choose determine which options even come into view. Different questions don’t just refine an idea; they often lead you to entirely different solutions or product directions than you would have reached otherwise.