A linked list is a dynamic data structure used to hold a linearly arranged (typically unordered) collection of data. In the simplest variation, a "head" pointer references the first node in the list. Each node is declared to contain a data portion and a "next" pointer to the succeeding node on the list. If the next pointer of a node is NULL (or some equivalent), that node is the last one on the list. Additional pointers can be included to facilitate faster access to specific points in the list; e.g., a "tail" pointer referencing the last node will speed up appending new nodes to the end of the list.

Why were linked lists developed? Won't an array work just as well or better? While a linked list doesn't support random access to its elements, it does allow the insertion, deletion, or relocation of a node in O(1) time, provided you already hold a pointer to the position involved (locating that position may still take O(n) time). And there are no limits, other than the amount of memory available on the system, to the number of nodes that can be held in the list. So, with regard to access and data movement restrictions, arrays and linked lists would appear to be complementary. Programmers have learned to use whichever data structure best fits the algorithmic situation they are coding.
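The constant-time insertion and deletion just claimed can be illustrated with a small C sketch (the helper names `insert_after` and `delete_after` are mine; both assume you already hold a pointer to the node at the splice point):

```c
#include <stdlib.h>

struct node {
    int data;
    struct node *next;
};

/* Insert a new node immediately after 'pos'. No other nodes move, so the
 * cost is constant regardless of list length -- unlike an array, which
 * must shift every later element to open a gap. */
struct node *insert_after(struct node *pos, int value) {
    struct node *n = malloc(sizeof *n);
    n->data = value;
    n->next = pos->next;
    pos->next = n;
    return n;
}

/* Unlink and free the node that follows 'pos', also in O(1) time. */
void delete_after(struct node *pos) {
    struct node *victim = pos->next;
    if (victim != NULL) {
        pos->next = victim->next;
        free(victim);
    }
}
```

Both operations touch only a fixed number of pointers; the O(n) cost, when there is one, comes from the traversal needed to reach `pos` in the first place.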

Now that we've entered the parallel programming epoch, do linked lists still have a place? Dan Grossman of the University of Washington says, "No." I first heard of this radical idea from a colleague who had given a presentation with Prof. Grossman at a SIGCSE conference. The claim is that a tree can be used anywhere you might want to use a linked list. (In my visually oriented mind, I see an animated linked list and tree singing "Anything You Can Do" from Annie Get Your Gun. Though I doubt either data structure can bake a pie.) Is Dr. Grossman's idea heretical or forward-looking? Let me try to convince you of the latter.