Ars Technica RAM Guide

Part II: Asynchronous and Synchronous DRAM


In Part I of the Ars Technica RAM Guide, I talked about the basic technologies behind SRAM and DRAM, as well as some of the problems with squeezing performance out of DRAM. Since the foundation has already been laid in Part I, I won't really be recapping past material in basic form. I will, however, revisit many of those basic concepts in this edition's overview of current, mainstream DRAM technology, so it helps to have read and understood the entire first edition. If you didn't read all of it, I'd encourage you to do so before continuing. And if you did read all of it, then I won't waste any more of your time with this introduction.

The DRAM Read revisited

In the last edition, I talked about the steps in the DRAM read. Folks who've read about RAM before will notice that I skipped something very important in that discussion, something that has to be covered at some point if you're going to understand things like FPM and EDO DRAM. The topic that I skipped was memory timing, and in particular I didn't include a waveform diagram that shows how the various signals in the steps I outlined have to be timed in relation to each other.

Since asynchronous DRAM doesn't operate based on any kind of common system clock pulse that it shares with the CPU, the timings of the control signals, addresses and data have to be consciously taken into account. To see what I'm talking about, let's look again at the steps I listed for the DRAM read:

1) The row address is placed on the address pins via the address bus.

2) The /RAS pin is activated, which places the row address onto the Row Address Latch.

3) The Row Address Decoder selects the proper row to be sent to the sense amps.

4) The Write Enable (not pictured) is deactivated, so the DRAM knows that it's not being written to.

5) The column address is placed on the address pins via the address bus.

6) The /CAS pin is activated, which places the column address on the Column Address Latch.

7) The /CAS pin also serves as the Output Enable, so once the /CAS signal has stabilized the sense amps place the data from the selected row and column on the Data Out pin so that it can travel the data bus back out into the system.

8) /RAS and /CAS are both deactivated so that the cycle can begin again.
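To make the sequence above concrete, here's a toy model of it in Python. Everything here (the `AsyncDRAM` class, its method names) is invented for illustration; a real DRAM chip is wired logic responding to voltage transitions, not software responding to method calls.

```python
class AsyncDRAM:
    """Toy model of the asynchronous DRAM read sequence."""

    def __init__(self, rows, cols):
        # Each cell is one bit; a real chip stores charge in capacitors.
        self.cells = [[0] * cols for _ in range(rows)]
        self.row_latch = None
        self.sense_amps = None

    def ras_falling(self, row_addr):
        # Steps 1-3: /RAS goes active, latching the row address, and
        # the selected row is read out into the sense amps.
        self.row_latch = row_addr
        self.sense_amps = self.cells[row_addr]

    def cas_falling(self, col_addr, write_enable=False):
        # Steps 4-7: with Write Enable inactive, /CAS latches the column
        # address and doubles as Output Enable, so the selected bit
        # appears on Data Out (here, the return value).
        assert not write_enable  # this is a read
        return self.sense_amps[col_addr]

    def ras_cas_rising(self):
        # Step 8: deactivate both strobes so the cycle can begin again.
        self.row_latch = None
        self.sense_amps = None

dram = AsyncDRAM(4, 4)
dram.cells[2][3] = 1        # plant a bit at row 2, column 3
dram.ras_falling(2)
bit = dram.cas_falling(3)
dram.ras_cas_rising()
print(bit)  # 1
```

The point of the sketch is the ordering: the row work all happens on the /RAS edge, the column work and data output on the /CAS edge, and nothing can restart until both strobes go back inactive.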

For our purposes, there are two main types of delays that we have to take into account. The first type includes the delays that have to take place between successive DRAM reads. You can't just fire off a read and then fire off another one immediately afterwards. Since a DRAM read involves charging and recharging capacitors, and various control signals have to propagate hither and thither so that the chip'll know what it's doing, you have to stick some space in between reads so that all the signals can settle back down and the capacitors can recharge.

Of this first type of in-between-reads delay, there's only one that's going to concern us really, and that's the /RAS and /CAS precharge delay. After /RAS has been active and you deactivate it, you've got to give it some time to juice back up before you can activate it again. The following picture should help you visualize this.

The same goes for the /CAS signal as well, and in fact to visualize the /CAS precharge delay just look at the above picture and replace the term "RAS" with "CAS."
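You can think of the precharge rule as a simple timing check: after /RAS (or /CAS) goes inactive, it has to stay inactive for at least the precharge time before it can be activated again. A minimal sketch, with an invented function name and an illustrative precharge value:

```python
tRP = 50  # ns, /RAS precharge time -- illustrative, not from a datasheet

def can_activate_ras(now_ns, last_deactivated_ns):
    """True once /RAS has had its full precharge rest."""
    return (now_ns - last_deactivated_ns) >= tRP

print(can_activate_ras(100, 60))   # False: only 40 ns of rest so far
print(can_activate_ras(120, 60))   # True: 60 ns >= tRP
```

The same check, with its own precharge value, applies to /CAS.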

If you think about these /RAS and /CAS precharge delays in light of the list of DRAM read steps, you'll immediately see that this rest period limits the number of reads you can do in a given period of time. Specifically, step 8 dictates that you've got to deactivate /RAS and /CAS at the end of each cycle, so the fact that after you deactivate them you've got to wait for them to precharge before you can use them again means you have to wait a while in between reads (or writes, or refreshes, for that matter). So imagine a Richard Simmons video, where he's having you cycle through the above steps by shouting <DISCO>"And one! and two! and three! and four! and five! and six! and seven! and eight! now rest!... Ok, let's do it again people. And one! and two!..."</DISCO>

This precharge time in between reads isn't the only thing that limits DRAM operations, either. The other type of delay that concerns us is internal to a specific read. Just as the in-between-reads delay is associated with deactivating /RAS and /CAS, the inside-the-read delay is associated with activating /RAS and /CAS. For instance, the row access time (tRAC) is the minimum amount of time you have to wait between the moment you activate /RAS and the moment the data you want can appear on the data bus. Likewise, the column access time (tCAC) is the minimum delay between the moment you activate /CAS and the moment the data can appear on the data bus.

Think of tRAC and tCAC as the amount of time it takes the chip to fill an order you just placed at the drive-in window. You place your order (the row and column address of the data you want), and it has to go and fetch the data for you so it can place it on the data pins. (Note: I wish to God I could think of an analogy involving Richard Simmons for this part of it, but my ingenuity fails me.) Anyway, this diagram should help you visualize how the two types of delays work.
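A quick back-of-the-envelope calculation shows how the two types of delay combine to limit how often you can read. Treating the full read cycle time as roughly the access time plus the precharge rest is a simplification (real datasheets specify the cycle time, tRC, directly), and the numbers below are merely illustrative, in the ballpark of a "-60" asynchronous part:

```python
tRAC = 60   # ns, /RAS-to-data access time (inside-the-read delay)
tRP  = 50   # ns, /RAS precharge time (in-between-reads delay)

# A fresh random read can't start until the data is out AND /RAS has
# had its precharge rest, so the two delays add up.
tRC = tRAC + tRP              # approximate read cycle time, ns
reads_per_second = 1e9 / tRC

print(f"cycle time: {tRC} ns")                                     # 110 ns
print(f"peak random reads: {reads_per_second / 1e6:.1f} million/s")
```

So a chip that can get data out in 60 ns still can't sustain anywhere near one read every 60 ns; the precharge rest drags the full cycle out to something like 110 ns.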

Now that we've seen both types of delay in action, let's look at a larger diagram that shows them working together in a series of DRAM reads. (Just click on the thumbnail to get the larger diagram.)

Take a moment to study the above timing diagram so that you understand exactly what's going on in it, and leave it open in a separate browser window as you peruse the next few pages, because it'll form the basis for the rest of this part of the RAM guide. In fact, I'd suggest opening it up in a window next to the description of the DRAM read I provided in Part I, and going through it step by step, locating each step in the read on the timing diagram. (Here's a tip: steps 3 and 4 aren't really depicted in the timing diagram. Actually, step 4 is, insofar as WE stays inactive the whole time.)