Firefox has seen a great deal of development over the last 22 years, both in maintaining the original code and in rewriting large portions. Currently, parts of Firefox are being converted from C++ to modern Rust code.

That said, there are certain core pieces of the browser that retain code from the time it was open-sourced. A reasonable place to look is the HTML parser within the Gecko engine. This makes sense because even early versions of Netscape Navigator needed to be built around the basic functionality of parsing HTML, a standard that has itself evolved over the years in a largely backwards-compatible manner.

Using the current Git repository for Gecko, located on GitHub at mozilla/gecko-projects, you can view the revision history of the HTML parser. The linked file, CParserContext.cpp, is part of that original HTML parsing core, and you can see that its history goes back 22 years.
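The key trick when tracing a file that old is `git log --follow`, which walks history through renames (the Gecko source tree has been reorganized several times, so without it the log stops at the most recent move). Here is a minimal sketch of the mechanism using a throwaway repository; the file name mirrors the parser file discussed above, but the paths are illustrative, not the actual Gecko layout.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo && cd repo
git config user.email demo@example.com
git config user.name demo

# First commit: the file at its original location
echo 'original parser code' > CParserContext.cpp
git add . && git commit -qm 'initial import'

# Second commit: the file is moved, as happens in tree reorganizations
mkdir parser
git mv CParserContext.cpp parser/CParserContext.cpp
git commit -qm 'move parser sources'

# Without --follow, history for the new path starts at the move;
# with --follow, git traces it back through the rename.
count=$(git log --follow --oneline -- parser/CParserContext.cpp | wc -l)
echo "$count"
```

On a real Gecko checkout you would run the same command against the parser file's current path (e.g. `git log --follow --reverse -- <path-to>/CParserContext.cpp`, with `--reverse` putting the oldest commits first).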

My impression of the transition is that many of the same developers simply kept working on what they had been working on within Netscape. This was an organizational transition, not a real change in the people and minds behind the actual code. I think this is an important point to make if one is interested in history as a story about the people who made it, rather than just as a collection of the artifacts those people produced.

This is just an example, and I am sure you can find lots of code in the GitHub repository (which incidentally mirrors the "official" repository, which uses Mercurial for source control) that originated with the release of the Navigator source code in 1998. Just focus on "foundational" functions like parsing and processing of the document model, and much of the already mature code from Netscape's days is likely to still survive.