

It appears like reasoning because the LLM is iterating over material that has been previously reasoned out. An LLM can’t reason through a problem that it hasn’t previously seen.
This also isn’t an accurate characterization IMO. LLMs and ML algorithms in general can generalize to unseen problems, even if they aren’t perfect at this; for instance, you’ll find that LLMs can produce commands to control robot locomotion, even on different robot types.
“Reasoning” here is based on chains of thought: the model generates intermediate steps, which then help it produce a more accurate final answer. You can fairly argue that this isn’t reasoning, but it’s not like it’s traversing a fixed knowledge graph or something.
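To make that concrete, here’s a minimal sketch of what chain-of-thought prompting looks like. `generate` and the example question are hypothetical stand-ins (not any particular library’s API); the only point is how the prompt structure changes:

```python
# Minimal chain-of-thought sketch. `generate` is a hypothetical placeholder
# for whatever LLM completion call you actually use.

def generate(prompt: str) -> str:
    """Placeholder for an LLM call (swap in a real API client)."""
    return "<model output>"

question = "A train leaves at 3:40 pm and the trip takes 2 h 35 min. When does it arrive?"

# Direct prompting: the model has to map question -> answer in one step.
direct_answer = generate(f"Q: {question}\nA:")

# Chain of thought: ask for intermediate steps before the final answer.
# Those generated steps condition the tokens that follow, which is why the
# final answer tends to be more accurate -- it's learned text generation,
# not traversal of a fixed knowledge graph.
cot_answer = generate(
    f"Q: {question}\n"
    "A: Let's work through this step by step, then state the final answer."
)

print(direct_answer, cot_answer)
```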
Thanks for the respectful discussion! I work in ML (not LLMs, but computer vision), so of course I’m biased. But I think it’s understandable to dislike ML/AI given the many unsavory practices out there (potential copyright infringement, very high power consumption, etc.).