Do LLMs performing chain-of-thought reasoning exhibit biases analogous to human System 1 "fast thinking" as described by Kahneman?
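One way to make the question concrete is an empirical probe: present a model with Cognitive Reflection Test (CRT) items (Frederick, 2005), whose intuitive "System 1" answers are wrong, and compare direct answers against chain-of-thought answers. The sketch below is illustrative only; `query_llm` is a hypothetical stand-in for whatever model API is available, and the stub returns a canned answer so the example runs end to end.

```python
# Minimal sketch of a CRT-based probe for System 1-like errors.
# Assumptions: `query_llm` is a placeholder, not a real API; replace it
# with an actual model call before drawing any conclusions.

# Classic CRT item: the intuitive answer (10 cents) is wrong; correct is 5.
CRT_ITEMS = [
    {
        "question": (
            "A bat and a ball cost $1.10 in total. The bat costs $1.00 "
            "more than the ball. How much does the ball cost (in cents)?"
        ),
        "intuitive_wrong": "10",
        "correct": "5",
    },
]


def query_llm(prompt: str) -> str:
    # Hypothetical stub: swap in a real inference call here.
    # Returns the intuitive-but-wrong answer so the sketch executes.
    return "10"


def classify(answer: str, item: dict) -> str:
    # Label a raw answer as the intuitive error, the correct response, or other.
    if item["intuitive_wrong"] in answer and item["correct"] not in answer:
        return "system1_error"
    if item["correct"] in answer:
        return "correct"
    return "other"


def run_probe() -> None:
    # Compare a terse "answer only" prompt against an explicit
    # chain-of-thought prompt on each CRT item.
    for item in CRT_ITEMS:
        direct = query_llm(item["question"] + "\nAnswer with a number only.")
        cot = query_llm(item["question"] + "\nThink step by step, then give a number.")
        print("direct:", classify(direct, item),
              "| chain-of-thought:", classify(cot, item))


if __name__ == "__main__":
    run_probe()
```

If the direct condition yields the intuitive error more often than the chain-of-thought condition, that pattern would be consistent with CoT acting as a System 2-like check on a System 1-like default, though a real study would need many items, repeated samples, and controls for answer parsing.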