The concern with clock signals is that their edges define when other signals are sampled. The edges of non-clock signals can bounce all they want with no negative effects as long as they settle to a valid state for at least the minimum set-up period before they're sampled. A clock edge, on the other hand, needs to make a clean transition with no significant noise.
Any time a signal transitions from one state to another (0 to 1, or 1 to 0), the voltage change doesn't appear instantaneously at the other end of the wire. Essentially the wire or PCB track is a transmission line, and the signal propagates down it as a wave over time. When this wave hits any sort of impedance mismatch, a reflection is generated. This includes the far end of the track where it meets the receiving circuit. If the impedance of the receiver is higher than that of the transmission line (and it usually is), then the reflection back toward the driver will be a positive voltage. However, if the driver's impedance is lower than that of the transmission line (and it usually is), it reflects the returning wave as a negative wave, which travels toward the receiver again. These reflected waves can bounce back and forth like a ringing bell for some time, causing the voltages at both the driver and receiver to bounce up and down. If the magnitude of these bounces is great enough, and if they occur around the threshold voltage of the input circuits, what should be a single clock edge can appear to be several.
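To make that bouncing concrete, here's a rough sketch of how the reflections add up at the receiver, using ideal reflection coefficients. The impedances are illustrative assumptions, not measurements from my board: a 50 Ω trace, a strong (10 Ω) driver, and a CMOS input that's effectively an open circuit.

```python
# Sketch of transmission-line reflections using reflection coefficients.
# All impedance values below are assumptions for illustration only.
Z0 = 50.0          # trace characteristic impedance (ohms)
Z_DRIVER = 10.0    # assumed driver output impedance (ohms)
Z_RECEIVER = 1e6   # assumed receiver input impedance (ohms, ~open circuit)

def gamma(z_term, z_line):
    """Reflection coefficient at a termination: +1 for an open, -1 for a short."""
    return (z_term - z_line) / (z_term + z_line)

g_rx = gamma(Z_RECEIVER, Z0)   # near +1: the receiver reflects a positive wave
g_tx = gamma(Z_DRIVER, Z0)     # negative: the driver re-reflects it inverted

# The driver and the line form a voltage divider, so only part of the
# 3.3 V swing is launched down the trace initially.
v_drive = 3.3
wave = v_drive * Z0 / (Z0 + Z_DRIVER)

# Each arriving wave adds (1 + g_rx) * wave at the receiver pin, then
# bounces back scaled by g_rx * g_tx for the next round trip.
v_rx = 0.0
v_history = []
for _ in range(6):
    v_rx += wave * (1 + g_rx)
    v_history.append(v_rx)
    wave *= g_rx * g_tx

print(["%.2f V" % v for v in v_history])  # rings above and below 3.3 V
```

With these numbers the receiver first overshoots well past 3.3 V, then undershoots, with the ringing slowly decaying toward the drive voltage. If those swings cross the input threshold more than once, one edge looks like several.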
This site has a nice graphical depiction of transmission line reflections based on how the ends are terminated.
Why is this a problem? Consider a clock that causes a counter to advance. If a bounce on the clock line causes the receiver to advance the counter twice instead of once, the count will be off. This is especially a problem when the two ends must remain in sync, such as when they're transferring bits serially.
In most circuits, the driver really doesn't care what its output pin does as long as the receiver only sees one clock edge. With an unterminated clock like my 50 MHz oscillator output, the driver actually sees its output pin rise to about half the intended output voltage and hang there briefly. It's the arriving reflected wave which pushes the pin up to its full output voltage. But that normally doesn't matter because the driver doesn't care.
The Spartan 6 FPGA's configuration clock, CCLK, is different. It's configured as both an input and an output. In "master mode" the internal clock generator drives the external I/O pin to transfer data from the Flash ROM to the FPGA's configuration RAM. However, the FPGA's SPI receiver circuitry senses the external I/O pin directly, rather than a buffered output from the clock generator. This means it's sensitive to the wiggles and bounces caused by these reflected waves.
Xilinx knows this is a problem and changed the design in the 7-series chips to address it, but I'm using a Spartan 6 because the 7-series FPGAs only come in BGA packages. To avoid problems, the Spartan‑6 FPGA Configuration User's Guide (UG380) is very specific about the routing and termination of CCLK:
- Route the CCLK net as a 50Ω controlled impedance transmission line.
- Always route the CCLK net without any branching; do not use a star topology.
- Stubs, if necessary, must be shorter than 8 mm (0.3 inches).
- Terminate the end of the CCLK transmission line with a parallel termination of 100Ω to VCCO and 100Ω to GND (the Thevenin equivalent of VCCO/2, and assuming a trace characteristic impedance of 50Ω).
- After configuration in master mode, the CCLK pin is not driven unless it is used in the user design. If unused in the design, it is recommended to drive this pin to a logic level to prevent the pin from floating after configuration has completed.
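The "Thevenin equivalent" claim in the termination bullet is easy to check numerically. A quick sketch, taking the resistor values from the bullet and assuming VCCO = 3.3 V as on my board:

```python
# Thevenin equivalent of the UG380 parallel termination:
# 100 ohms up to VCCO and 100 ohms down to GND.
R_UP, R_DOWN = 100.0, 100.0
VCCO = 3.3  # assumed bank voltage, as on my board

# Thevenin voltage: the divider's open-circuit output.
v_th = VCCO * R_DOWN / (R_UP + R_DOWN)      # VCCO/2 = 1.65 V

# Thevenin resistance: the two resistors in parallel (source shorted).
r_th = (R_UP * R_DOWN) / (R_UP + R_DOWN)    # 50 ohms, matching the trace

print("Thevenin equivalent: %.0f ohms to %.2f V" % (r_th, v_th))
```

So the pair behaves like a single 50 Ω resistor tied to 1.65 V, which is exactly what terminates a 50 Ω trace without reflections.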
So what's this "Thevenin equivalent" jargon? Here's Figure 2-22 from this document, which shows the desired configuration for one CCLK driver (the FPGA in Master mode) driving one CCLK receiver (the Flash chip):
However, I have issues with this. On my board, VCCO_2 is 3.3 volts. Ohm's Law says these resistors are going to draw 16.5mA just sitting there. After configuration the CCLK pin becomes a user I/O, and this arrangement will bias the pin smack in the middle of the digital no-man's land voltage range. That's a major no-no, and driving it to either 3.3V or ground per the fifth bullet's recommendation will double the current to 33mA. That's 109 milliwatts. To give perspective on how much that is, the typical 0402 chip resistor is only rated for a maximum dissipation of 100 mW, so that's right out. A typical 0603 chip resistor is rated for 128 mW, so that's 85% of its rated maximum.
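The arithmetic above is simple enough to sanity-check. A quick sketch, with VCCO = 3.3 V from my board:

```python
# Current and power drawn by the 100-ohm/100-ohm parallel termination.
VCCO = 3.3
R = 100.0

# Pin floating (or tri-stated): the two resistors form a 200-ohm path
# from VCCO to GND.
i_idle = VCCO / (2 * R)        # 16.5 mA, continuously

# Pin driven hard to either rail: one resistor sees the full 3.3 V,
# the other sees 0 V, so the current doubles.
i_driven = VCCO / R            # 33 mA
p_worst = VCCO ** 2 / R        # ~109 mW, all in the one unlucky resistor

print("idle: %.1f mA, driven: %.0f mA, worst resistor: %.0f mW"
      % (i_idle * 1000, i_driven * 1000, p_worst * 1000))
```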
I'm not an electrical engineer, so I went searching for good references on transmission line termination. There are many, but the one I found most helpful is this article (PDF) by Douglas Brooks of UltraCAD Design, Inc. Another is this textbook supplement (PDF) I found at Portland State University. Through these and other articles I came to the conclusion that source termination was likely to be sufficient for my needs, and some simplistic simulations in LTspice support that. My layout includes a footprint for a series termination resistor next to the FPGA's CCLK pin, and I hedged my bets by placing footprints for a pair of 0603 resistors at the Flash ROM, just in case.
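For what it's worth, the idea behind source termination is to size the series resistor so that the driver's output impedance plus the resistor matches the line. A sketch of the reasoning, with an assumed driver impedance (I don't have a firm number for the Spartan-6 output driver, so 20 Ω here is purely illustrative):

```python
# Sizing a series (source) termination resistor.
Z0 = 50.0         # trace characteristic impedance (ohms)
Z_DRIVER = 20.0   # ASSUMED driver output impedance; check the IBIS model

# Make driver + resistor match the line, so the wave reflected from the
# open far end is absorbed when it returns to the source.
r_series = Z0 - Z_DRIVER    # 30 ohms here; round to a standard value

# The matched source launches a half-amplitude step; the open receiver
# end reflects it positively, doubling it to the full swing in one trip.
v_drive = 3.3
v_launch = v_drive * Z0 / (Z0 + Z_DRIVER + r_series)  # 1.65 V on the line
v_receiver = 2 * v_launch                             # 3.3 V at the far end
```

This is also why the driver's own pin sits at half voltage until the reflection returns, as described above: the receiver sees one clean full-swing edge, and the driver doesn't care.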
It may turn out that neither of these steps is necessary. As I understand it (and I may be wrong), reflections and transmission line effects only become problems if the transmission line is longer, time-wise, than a "critical length" equal to half the rise time of the signal. A signal propagates on a stripline transmission line at about 167 ps per inch, which gives rise to the "rule of thumb" that any clock track longer than about 3 inches (1 ns round-trip) requires termination. I've placed the Flash ROM so that the CCLK trace from the FPGA is a bit under one inch long.
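That rule of thumb reduces to a one-line calculation. The 1 ns rise time below is my assumption for illustration, not a datasheet value:

```python
# Critical-length rule of thumb: termination matters when the one-way
# trace delay exceeds about half the signal's rise time.
T_PROP_PER_INCH = 167e-12   # ~167 ps/inch for stripline in FR4
t_rise = 1e-9               # ASSUMED 1 ns rise time; check the driver specs

critical_length = (t_rise / 2) / T_PROP_PER_INCH   # inches, ~3 here

trace_length = 1.0          # my CCLK trace is a bit under an inch
needs_termination = trace_length > critical_length

print("critical length ~%.1f in; termination %s needed"
      % (critical_length, "likely" if needs_termination else "probably not"))
```

By this measure my one-inch trace is comfortably short, though a faster-than-assumed edge rate would shrink the critical length proportionally, which is why the resistor footprints stay on the board.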