I just finished watching an interview with Yann LeCun—Chief AI Scientist at Meta—in which he basically echoes what we discussed earlier in the M188 video: Large Language Models can't truly reason, because they lack an internal abstract model of reality. They just repeat what's been said before. On the other hand: how many human beings actually reason, rather than just repeating what they've heard?
