Andrew Marble
marble.onl
andrew@willows.ai
March 8, 2026
Two recent experiences where I had questions needing some external input: One, I saw a word (the word was Pareve but it’s unimportant) and I didn’t know what it meant. I thought it had something to do with food and religious practices and my first thought was to text a chef I knew to ask about it. Of course I quickly realized I could just look it up, which made me lose interest. I would have been interested in the experience and thoughts of someone whose opinions I respected, both as a social thing and to learn something. The dictionary definition I don’t really care about unless I have some acute need to know it.
The second (if you thought the first was boring) was a programming question about preventing None from being cast to NaN when adding a Python list containing integers interspersed with Nones to a pandas DataFrame (spoiler alert, the answer is df["A"] = pd.Series(a, dtype='object')). For this I asked an AI chatbot, got the answer, tested it, and moved on.
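For the curious, the fix above can be sketched as a minimal example (the list a and the column names are illustrative, not from my actual code):

```python
import pandas as pd

a = [1, None, 3]  # integers interspersed with None

df = pd.DataFrame()
df["A"] = a  # default behavior: pandas coerces to float64, so None becomes NaN
df["B"] = pd.Series(a, dtype="object")  # object dtype keeps None as-is

print(df["A"].tolist())  # [1.0, nan, 3.0]
print(df["B"].tolist())  # [1, None, 3]
```

The trade-off is that an object-dtype column gives up vectorized numeric operations, which is fine when you just need the values preserved.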
These are examples of two different kinds of question answering, or problem solving, that often get conflated. The second is very transactional and there’s an imminent need for the answer. The first is relationship based and is about wanting someone’s opinion, without which the fact itself isn’t too consequential.
For transactional question answering, we have dictionaries, encyclopedias, Wikipedia, and now LLMs that can provide reference information. While those sources aren’t interchangeable with opinions, they often get substituted in. How many conversations have been ruined by someone looking up a fact on Wikipedia when you actually wanted to discuss what people knew about it? In relationship based question answering, the question is almost a pretense to be social, share views, and learn something. It’s why we talk to other people and it’s also the basis of most white collar work.
The distinction between question types is becoming more relevant now that people are talking about AI (LLMs) replacing human work. A material, if not dominant, percentage of “questions” we answer while we are working are type 1 human interaction questions rather than type 2 transactional ones. An area where type 1 dominates is strategy.
For as long as there has been AI, there have been claims, often centered around AI being better at making powerpoint presentations, that strategy consulting is about to become obsolete. I don’t think many people involved in the industry (as buyers or sellers) take these too seriously, but strategy consulting is a useful study in why AI answers are often overrated. Forgetting about the cynical “we hired consultants to provide cover for an unpopular decision” variations (which obviously don’t have the same ring if we replace consultants with ChatGPT), consulting is trust and relationship based. Buyers aren’t asking for a correct answer, they are asking for advice from someone whose opinion they respect. They also often, for both catharsis and to clarify their own thinking, want to explain their situation to somebody else, and feel understood. While there is no harm in asking an AI, few rational people are going to give the same weight to what it says as to a trusted advisor; this is just as true for major strategy decisions as it is for personal advice.
Ultimately, most business tasks have a similar component. They rely on judgement, experience, and trust to set a plausible course and correct it when needed, and don’t hinge on determining a correct answer or providing facts. And businesses are organized as groups of people that communicate socially with each other. Perhaps unintuitively, human factors become even more important in procedural organizations like government and military because they don’t have market exposure to provide feedback, and for better or worse rely on human organization.
None of this is to say that people can’t use AI for sub-processing tasks that require a type 2 answer. It is great for this. It just doesn’t replace the social, human, and relationship based aspects of work, whether that is trust, or simply being interested in what someone else says. It doesn’t really matter how good AI systems get; that’s not going to change, and since most white collar work deals with these kinds of problems, there is little danger of it being replaced.