As I've said many times before, I'm a software engineer by profession and have no formal training in electrical engineering. So it's not surprising when design decisions I made five or six years ago turn out to be questionable in light of recently acquired understanding. In this case, I'm talking about transmission line behavior.
When I started laying out the Instruction Pointer board six years ago I knew very little about transmission line behavior. I had a vague idea that it was important, knowledge indirectly derived from coworkers' experience with a failed implementation of a RAMbus-based design. But I had no real understanding of what was involved or why an impedance mismatch or improper termination caused such difficulties.
Having now done a fair bit of reading because of the Xilinx Spartan 6 CCLK issue discussed in my two previous posts, I have a greater awareness. And I woke up this morning with the awful thought that the two clock lines in my CPU board stack were everything that the application notes say a clock tree should NOT be. They're not laid out as transmission lines with a defined impedance. They branch off at random points rather than being a daisy-chain. And they run all over the place on every board.
Forget that the design clock rate is a molasses-in-winter 741 kHz at best; it's the rise and fall times of the clock signals that are critical. As currently designed, these clock lines have the potential to ring like wind chimes in a thunderstorm. Redesigning the IP board would mean a lot of effort, and I'd have to scavenge the BSS83 MOSFETs off the current board. I'd want to implement a proper clock tree, which would mean a clock driver on each board, which would mean non-original parts.
Or... I could make the entire board stack look like a lumped circuit. This means the rise and fall times of the clocks need to be longer than the maximum round-trip time from the clock driver chip to the farthest reaches of the board stack and back. Doing some "back of the envelope" calculations:
- propagation on a proper stripline is about 1ns per 6 inches
- the connector carrying the clock signals is on the short edge of the boards
- each board is about 6 inches wide and 4 inches high
- the inter-board connectors will add a couple of inches of length
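Putting those figures together gives a rough round-trip number. The per-board path length and the stack height below are my assumptions for illustration (the post doesn't say how many boards are in the stack or exactly how the worst-case trace runs); only the 1 ns per 6 inches rule, the 6"×4" board size, and the couple of inches per connector come from the list above.

```python
# Back-of-the-envelope round-trip estimate for the clock lines.
# ASSUMPTIONS (mine, not from the post): a 4-board stack, and a
# worst-case per-board path of width + height (6" + 4").

NS_PER_INCH = 1.0 / 6.0   # ~1 ns per 6 inches on a proper stripline

BOARD_PATH_IN = 6.0 + 4.0  # assumed worst-case trace run per board
CONNECTOR_IN = 2.0         # inter-board connector adds a couple of inches
NUM_BOARDS = 4             # assumed stack height

one_way_in = NUM_BOARDS * (BOARD_PATH_IN + CONNECTOR_IN)
one_way_ns = one_way_in * NS_PER_INCH
round_trip_ns = 2 * one_way_ns

print(f"one-way path: {one_way_in:.0f} in = {one_way_ns:.1f} ns")
print(f"round trip:   {round_trip_ns:.1f} ns")
```

Under those assumptions the round trip comes out in the mid-teens of nanoseconds, which sets the floor for how slow the clock edges need to be.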
I started thinking about how I was going to accomplish this. I could use an RC circuit and maybe a constant-current source to generate the ramp. CLK1 connects to 28 devices and CLK2 connects to 51, so even with gate capacitances in the range of 1pF (BSS83) to 8pF (FDV301) each, the sum of these capacitances will be 100pF or more. Thus I'd need to buffer the ramp generator with something like an op-amp driving a matched NPN/PNP pair of transistors to handle the drive current needed.
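The load estimate is easy to bound from the device counts. The per-gate figures are the rough 1 pF and 8 pF values from the post; the actual BSS83/FDV301 mix on each line isn't stated, so this just brackets the range.

```python
# Rough total gate-capacitance load per clock line, bracketed between
# all-BSS83 (~1 pF each) and all-FDV301 (~8 pF each). The real mix on
# each line is not given in the post, so these are bounds, not answers.

CLK1_DEVICES = 28
CLK2_DEVICES = 51

C_MIN_PF = 1.0   # approximate BSS83 gate capacitance
C_MAX_PF = 8.0   # approximate FDV301 gate capacitance

for name, n in [("CLK1", CLK1_DEVICES), ("CLK2", CLK2_DEVICES)]:
    print(f"{name}: {n} gates -> {n * C_MIN_PF:.0f} pF to {n * C_MAX_PF:.0f} pF")
```

Even the optimistic end of the CLK2 range is within striking distance of 100 pF before counting trace capacitance, so the "100pF or more" figure holds.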
As I was pondering this mess I thought to look at the datasheet for the TC4427A clock driver chip I used on my FPGA interface board. This device is designed to drive big power MOSFETs, and thus can handle large capacitive loads. The performance charts show that the rise and fall times driving a 100pF load are about 18ns, and about 30ns for a 470pF load. This is exactly what I need: a stable, high-output driver with relatively slow rise and fall times.
I picked the TC4427A as much because I already had some on hand from a previous project as because I knew I needed something that would drive a high-capacitance load with minimal over- or under-shoot. I never gave any thought to slow rise and fall times. Yet an 18ns transition time should keep the entire board stack in the lumped circuit category and avoid (or minimize) problems resulting from reflections on the clock lines. Woo hoo!
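The conclusion can be sanity-checked against the lumped-circuit criterion stated earlier: rise time longer than the worst-case round trip. The 48-inch one-way path below is my assumption (a guess at the stack's worst-case routing); the 18 ns rise time is the TC4427A datasheet figure for a 100 pF load.

```python
# Sanity check: does an 18 ns edge keep the stack in lumped-circuit
# territory? Criterion from the post: rise time > round-trip delay.
# ASSUMPTION: ~48 inches worst-case one-way path through the stack.

NS_PER_INCH = 1.0 / 6.0       # stripline rule of thumb: 1 ns per 6 inches
ASSUMED_PATH_IN = 48.0        # assumed worst-case one-way path
RISE_TIME_NS = 18.0           # TC4427A rise time into 100 pF (datasheet)

one_way_ns = ASSUMED_PATH_IN * NS_PER_INCH
round_trip_ns = 2 * one_way_ns
margin = RISE_TIME_NS / round_trip_ns

print(f"round trip ≈ {round_trip_ns:.1f} ns, rise time = {RISE_TIME_NS} ns")
print("lumped circuit OK" if RISE_TIME_NS > round_trip_ns
      else "marginal: reflections may still matter")
```

With these numbers the margin is thin (18 ns against a ~16 ns round trip), so the heavier 470 pF-class loading, with its ~30 ns edges, actually works in the design's favor here.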