Can LLMs Truly Reason?
Apple has gotten better at gaslighting AI companies that are pouring everything they have into making LLMs better at reasoning. A six-person research team at Apple recently published a paper titled GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models, which essentially argues that current LLMs cannot reason.
“…current LLMs cannot perform genuine logical reasoning; they replicate reasoning steps from their training data,” the paper reads. This applies to LLMs like OpenAI’s GPT-4o and even the much-touted “thinking and reasoning” model, o1. The research also covered a range of other models, including Llama, Phi, Gemma, and Mistral.