
In addition to the answers that have already been given, I think another reason that mathematics doesn't collapse is that the fundamental content of mathematics is ideas and understanding, not only proofs. If mathematics were done by computers that mindlessly searched for theorems and proofs but sometimes made mistakes in those proofs, then I expect that it would collapse. But usually when a human mathematician proves a theorem, they do it by achieving some new understanding or idea, and usually that idea is "correct" even if the first proof given involving it is not.

One recent and well-publicized story is that told by the late Vladimir Voevodsky in his note *The Origins and Motivations of Univalent Foundations*. Here's a bit of one story that he tells about his own experience:

> my paper "Cohomological Theory of Presheaves with Transfers," ... was written... in 1992-93. [Only] In 1999-2000... did I discover that the proof of a key lemma in my paper contained a mistake and that the lemma, as stated, could not be salvaged. Fortunately, I was able to prove a weaker and more complicated lemma, which turned out to be sufficient for all applications.... This story got me scared. Starting from 1993, multiple groups of mathematicians studied my paper at seminars and used it in their work and none of them noticed the mistake.... A technical argument by a trusted author, which is hard to check and looks similar to arguments known to be correct, is hardly ever checked in detail.

I don't know any of the details of the mathematics in this story, but the fact that he was able to prove a "weaker and more complicated lemma which turned out to be sufficient for all applications" matches my own experience. For instance, while working on a recent project I discovered no fewer than nine mistaken theorem statements (not just mistakes in proofs of correct theorems) in published or almost-published literature, including several by well-known experts (and two by myself). However, in all nine cases it was simple to strengthen the hypotheses or weaken the conclusion so as to make the theorem true, in a way that sufficed for all the applications I know of.
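The repair move just described, strengthening a hypothesis just enough to exclude the failing case, can be illustrated with a toy example in a proof assistant. (The lemma below is my own invented illustration, not from any of the papers mentioned; it assumes Lean 4 with the `omega` tactic, which ships with recent versions of Lean's standard library.)

```lean
-- Over ℕ, subtraction is truncated at zero, so the "obvious" statement
--   ∀ n : ℕ, n - 1 + 1 = n
-- is false: at n = 0 the left-hand side is 0 - 1 + 1 = 1.
-- Strengthening the hypothesis excludes the pathological case,
-- and the repaired lemma suffices wherever n is known to be positive:
theorem sub_one_add_one (n : ℕ) (h : 1 ≤ n) : n - 1 + 1 = n := by
  omega  -- decision procedure for linear arithmetic over ℕ and ℤ
```

The proof assistant is exactly what catches the original, false version: attempting to prove it fails at the `n = 0` case, forcing the hypothesis to be made explicit.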

I would argue that this is because the mistaken statements were based on correct ideas, and the mistakes were simply in making those ideas precise. Or to put it differently, we mathematicians get our intuitions from "well-behaved" objects: sometimes that intuition can be wrong for "pathological" objects we didn't know about, but in such cases we simply alter the definitions to exclude the pathological ones from consideration.

On the other hand, people do sometimes get mistaken ideas. For instance, here's another quote from Voevodsky's article:

> In October 1998, Carlos Simpson ... claimed to provide an argument that implied that the main result of the "∞-groupoids" paper, which Kapranov and I had published in 1989, cannot be true. However, Kapranov and I had considered a similar critique ourselves and had convinced each other that it did not apply. I was sure that we were right until the fall of 2013 (!!). I can see two factors that contributed to this outrageous situation: Simpson claimed to have constructed a counterexample, but he was not able to show where the mistake was in our paper. Because of this, it was not clear whether we made a mistake somewhere in our paper or he made a mistake somewhere in his counterexample. Mathematical research currently relies on a complex system of mutual trust based on reputations. By the time Simpson’s paper appeared, both Kapranov and I had strong reputations. Simpson’s paper created doubts in our result, which led to it being unused by other researchers, but no one came forward and challenged us on it.

In this case I do know something about the mathematics involved, and my own opinion is somewhat different from Voevodsky's. In the 2000s I was a graduate student working on higher category theory, and my impression was that the community of higher category theorists took it for granted that Simpson's counterexample was correct and the Kapranov-Voevodsky paper was wrong, because the claimed KV result contradicted well-known ideas in the field.

The point here is that a community of people developing ideas together is likely to have arrived at correct intuitions, and these intuitions can flag "suspicious" results and lead to increased scrutiny of them. That is, when looking for mistaken ideas (as opposed to technical slips), it makes sense to give differing amounts of scrutiny to different claims based on whether they accord with the intuitions and expectations of experienced practitioners.

So what do you do as a student? In addition to the other good advice that's been given, I think one of your primary goals should be to train your own intuition. That way you will be better able to evaluate whether a given result, or something like it, is probably true, before you decide whether to read and check the proof in detail.

Of course, there is also the position that Voevodsky was led to:

> And I now do my mathematics with a proof assistant. I have a lot of wishes in terms of getting this proof assistant to work better, but at least I don’t have to go home and worry about having made a mistake in my work.

I have a lot of respect for that position; I do plenty of formalization in proof assistants myself, and am very supportive of it. But I don't think that mathematics would be in danger of collapse without formalization, and I feel free to also do plenty of mathematics that would be prohibitively time-consuming to formalize in present-day proof assistants.