Can AI Replace Mathematicians?

As a mathematician, I want to tackle one of the hottest debates of recent times: Can AI actually replace us?

To answer this question properly, we need to rewind the tape a bit and look at the dizzying evolution of AI over the last few years.

If you recall the early "hype" days of AI (I'm talking about the GPT models of 2023 here), the picture was pretty clear: AI was failing math class. I remember those days vividly; especially in abstract topics that demand high-level reasoning, like probability, the model would basically spout nonsense. It struggled even to parse the matrices in my linear algebra questions and botched simple calculations. Back then, sitting in front of the screen, I thought, "This technology will probably never reach the level of a mathematician."

But the landscape today is completely different. There has been a frighteningly rapid improvement in just a few years. We no longer need to write out equations by hand; we can simply upload a photo of the question. Introducing matrices to AI is now child's play. In my recent tests, the old "confusion" is gone, and the answers are much more consistent and sharp, especially in probability. I'm sure everyone has noticed this shift.


So, let’s get to the main question: Is this speed of development enough for AI to snatch the mathematician's seat?

From the outside, you might think things are heading that way and that it will happen very soon. But as someone working in the kitchen of this field, so to speak, I don't think it's possible in the near future.

Yes, AI has advanced significantly in math; it might give correct answers to many questions we ask (even complex ones). But there is a world of difference between "giving an answer" and "producing knowledge." AI currently blends existing information, but it cannot generate new knowledge.

Mathematics isn't just about solving problems; it's about creating problems. AI currently has no real chance of developing a hypothesis from scratch and proving it through intuition. Maybe it can help us with the "grunt work" (the tedious steps of a proof) after we come up with the hypothesis, but managing that process from start to finish is nearly impossible for it right now.
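To make that concrete, here is a toy sketch of the kind of "grunt work" I mean. It's written in Lean 4 purely for illustration (my choice for the example, not anything tied to a particular AI tool): a human states the lemmas, and filling in the routine inductive steps is exactly the sort of mechanical labor an assistant could plausibly take off our hands.

```lean
-- Toy sketch: a human states the lemmas; the routine induction below is
-- the kind of mechanical "grunt work" an AI assistant might help with.
-- (Plain Lean 4, no external libraries assumed.)

-- Trivial by definition: addition on Nat recurses on its second argument.
theorem add_zero_right (n : Nat) : n + 0 = n := rfl

-- The symmetric fact needs a small induction: tedious, but mechanical.
theorem zero_add_left (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih =>
    -- unfold one step of addition, then apply the induction hypothesis
    rw [Nat.add_succ, ih]
```

Deciding that these lemmas are worth stating, and where they fit in a larger argument, is the part that still belongs to the human.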

There’s another very simple and human reason why I don't think this will happen anytime soon: AI has already scanned and "swallowed" almost everything on the internet—articles, theses, books. It holds humanity's entire mathematical corpus in its hands. Yet, it still makes mistakes.

If it’s really going to replace mathematicians, what it needs to learn is something much bigger than datasets. You can upload data to it, but you cannot code mathematical "intuition" or "talent." That is the real issue, and the biggest wall facing those training AI.
