This is Part 1 of a multi-part series. Please follow SIGMADESIGN on LinkedIn to read more!
SIGMADESIGN is well-armed with the expertise to tackle any engineering problem, and high-speed digital electronic design is certainly no exception. If your product must work with large volumes of data (such as video, networking, or any component or peripheral of a personal computer or smartphone), competent high-speed digital design will be critical to its success.
As data throughput requirements climb in proportion with our ever-increasing hunger for multimedia content and seemingly-instantaneous data transfer, the operating frequencies (or speeds) of digital signals must also increase. Examples of some high-speed digital interfaces are: HDMI and DisplayPort, both used for sending video to displays; USB, especially USB 3.0+; any interface used for memory or storage (SATA, DDR, etc.); PCIe, which is used for PC expansion cards; Ethernet, used for networking and the Internet; and so on.
Designers of high-speed digital devices must account for factors that can safely be ignored in low-speed products, because the physical behavior of electricity changes as higher-frequency spectral content begins to dominate. To understand the physical and practical engineering consequences of electronic designs with higher digital frequencies, we must cover some complex topics. To do so in a digestible way, I will be writing this as a short series: first, we must understand precisely what digital communication is (regardless of speed), and then we can move on to spectral analysis, electromagnetic compatibility and compliance, and signal integrity.
ELECTROMAGNETIC COUPLING AND INTERFERENCE
Every electronic device involves analog electronics, but using analog signals to communicate information within or between systems has become less common. There are many reasons why digital technologies came to dominate today’s electronics, but we will first try to understand one of them: noise resilience.
One of the four fundamental forces of nature is the electromagnetic force. Wherever electric charge sits or flows, a physical electromagnetic field exists around it (the near field) that affects the behavior of other charged objects in its vicinity. If two wires run alongside each other, their fields will couple and induce voltages in each other, even though there is no conductive path between them.
Usually, this coupling is undesirable (unintended) and produces noise in electronic signals: erroneous, unwanted voltage fluctuations that can produce inaccurate readings. If such noise interferes with a device’s operation, the noise is known as interference. When an emitter causes interference in another device, the emitter is known as the aggressor.
Sometimes, however, this coupling (via far-field electromagnetic radiation) is very much intentional and can achieve useful wireless communication over very long distances: we refer to such systems as radio, and the coupled wires are called antennas. Most wireless communication, including Bluetooth and Wi-Fi, is achieved in this way.
ANALOG AND DIGITAL SIGNALS
Signals are the means by which electronic devices communicate with (or transfer information between) each other. In analog electronics, it is common for the voltage of a signal (the signal’s amplitude) to have a continuous, one-to-one correspondence with some quantifiable, meaningful value. What does that mean? As an example, let’s imagine an analog signal in a refrigerator: a “temperature” signal that informs a controller when to turn the compressor on (or off), so that food neither freezes nor rots. The designer might decide (and document) that zero volts means the temperature is “32 degrees Fahrenheit,” and five volts means “50 degrees Fahrenheit.” Every voltage in between those two values would be defined by a continuous transfer function: if it is linear, then 2.5 volts would mean “41 degrees Fahrenheit.”
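To make that mapping concrete, here is a minimal sketch of the linear transfer function described above (the function name and Python framing are mine, purely for illustration):

```python
def voltage_to_temp(volts):
    """Linear transfer function: 0 V reads as 32 °F, 5 V reads as 50 °F."""
    return 32.0 + 3.6 * volts  # an 18 °F span over 5 V = 3.6 °F per volt

print(voltage_to_temp(0.0))  # 32.0
print(voltage_to_temp(2.5))  # 41.0
print(voltage_to_temp(5.0))  # 50.0
```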
Thus, the means of communication has been well defined, and so far, this approach to conveying information makes good sense. However, analog signals are highly susceptible to interference from noise, whether the aggressor is inside or outside the system. While analog signals are theoretically capable of infinite resolution (able to express a temperature of 39.969623 degrees, for example), noise from both internal and external sources will, in practice, limit that resolution to a certain noise floor.
Perhaps a light switch (an unintentional emitter) is located near our imagined fridge; whenever it is switched, its electromagnetic field interacts with our analog “temperature” signal. Or perhaps the signal is affected by a radio wave sent from a faraway tower (an intentional emitter). Since these aggressors can induce a voltage into our “temperature” signal, the otherwise slowly-changing temperature might seem to suddenly change from 41 to 36 to 49 degrees and back in less than a millisecond! Did the temperature in the fridge really fluctuate so wildly, or has our information been corrupted by this external noise? These false readings could cause the refrigerator to turn its compressor on and off more than necessary, resulting in premature wear. Since voltage translates directly to a quantifiable meaning in analog communication, it is inherently vulnerable to fluctuations from electromagnetically induced noise.
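Reusing the hypothetical transfer function sketched above, we can see just how directly induced noise corrupts the reading: a couple of volts of coupled noise swings the reported temperature by many degrees.

```python
def voltage_to_temp(volts):
    return 32.0 + 3.6 * volts  # the same linear mapping as before

clean = 2.5  # the true signal: 41 °F
for noise in (0.0, -1.4, 2.1):  # hypothetical induced spikes, in volts
    v = clean + noise
    print(f"{v:.1f} V reads as {voltage_to_temp(v):.0f} °F")
# 2.5 V reads as 41 °F; 1.1 V reads as 36 °F; 4.6 V reads as 49 °F
```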
In digital electronics, information is instead transferred exclusively as discrete binary values: at any one moment, the voltage is either low or high. So, in the digital version of our imagined “temperature” signal, a voltage above 3.7 volts would be considered high; a voltage below 1.3 volts would be considered low; anything in between is undefined. At first, this doesn’t seem very useful because we are now limited to only two possible values at any moment. But if we instead think of this as a binary, base-two number system with the only digits being 0 (low) and 1 (high), then we can convey much more information over some period of time.
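In code, a receiver’s decision might look like the following sketch (the thresholds match our example above; real logic families define their own):

```python
def read_logic_level(volts):
    """Classify a voltage against our example input thresholds."""
    if volts > 3.7:
        return 1     # logic high
    if volts < 1.3:
        return 0     # logic low
    return None      # undefined region: not a valid logic level

for v in (0.4, 2.5, 4.8):
    print(f"{v} V -> {read_logic_level(v)}")  # 0, None, 1
```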
FIGURE 1: DIGITAL TEMPERATURE SIGNAL
We’ll call this new, digital “temperature” signal “temperature data”, and then we’ll add a second signal called “temperature clock”. While “temperature data” is transmitting its information, “temperature clock” is a simple square wave at some clock frequency. When “temperature data” is not transmitting, “temperature clock” is simply a constant “high” voltage. We’ll arbitrarily choose a clock frequency of 100 kilohertz—which, by the way, would be considered a low-speed digital interface. Every time the “temperature clock” signal transitions from low to high states (so, every 10 microseconds), we will read the value of “temperature data”: is it high or low? If it’s high, its meaning is 1, and if it’s low, we’ll read it as 0.
Binary is a base-two number system that uses only the digits 0 and 1, unlike the decimal (0 through 9) number system most of us are familiar with. Using binary, we can express any number as a sequence of 0s and 1s, equivalent to the “low” and “high” voltage states of a digital signal. For example, if we want to convey the decimal value of 47 (degrees Fahrenheit), we can use the binary sequence 00101111. So, with our 100 kHz clock frequency, we’ll need to change the voltage on the “temperature data” line to the necessary level every 10 microseconds; in sequence: low, low, high, low, high, high, high, high. (See Figure 1.)
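Here is a minimal sketch of that encoding and clocked sampling (the framing is illustrative only; this is not a real interface):

```python
value = 47                            # decimal temperature to send
bits = format(value, "08b")           # "00101111": eight bits, most significant first
print(bits)

CLOCK_HZ = 100_000                    # our chosen 100 kHz clock
bit_period_us = 1_000_000 / CLOCK_HZ  # 10 microseconds per bit

# At each rising clock edge (every 10 µs), the receiver samples one bit:
for i, bit in enumerate(bits):
    level = "high" if bit == "1" else "low"
    print(f"t = {i * bit_period_us:>4.0f} µs: temperature data is {level}")

# The receiver reassembles the sampled bits back into the number:
print(int(bits, 2))                   # 47
```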
FIGURE 2: TOP TWO LINES SHOW A NOISY DIGITAL SIGNAL; BOTTOM TWO SHOW THAT THE SAME INFORMATION IS READ EVEN WHEN MADE NOISIER. THIS DEMONSTRATES THE NOISE RESILIENCE OF DIGITAL SIGNALING
So, what have we gained by going digital? After all, we’re now taking 80 microseconds (instead of an instantaneous reading) to send the same amount of information, using two conductors (data and clock) instead of just one, and we’re also having to transition voltages quickly between low and high levels within each 10 microsecond period, which comes with its own penalties (that we will discuss later). One answer is resilience to noise: even if electromagnetic interference from other devices (or other signals within the same device) causes the voltage of the “temperature” signal to change by more than a volt, a one will still be a one and a zero will still be a zero, so the information itself will not be corrupted: the temperature reading will remain perfectly correct!
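We can demonstrate this resilience by combining the two sketches above: add up to a volt of random noise to ideal low (0 V) and high (5 V) levels, and the decoded bits never change.

```python
import random

def read_logic_level(volts):
    return 1 if volts > 3.7 else (0 if volts < 1.3 else None)

ideal = [0, 0, 1, 0, 1, 1, 1, 1]              # the bits of 47
levels = [5.0 if b else 0.0 for b in ideal]   # nominal output voltages

noisy = [v + random.uniform(-1.0, 1.0) for v in levels]  # up to ±1 V of noise
decoded = [read_logic_level(v) for v in noisy]

print(decoded == ideal)  # True: 4.0 V is still "high," 1.0 V is still "low"
```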
Hopefully, we now understand one reason why digital communication came to surpass analog. However, there are other, very important reasons that we shouldn’t minimize. For one, when transistors are used to implement digital logic, they can be operated in a way that uses very little power, and they can be made very, very tiny; this enables small, cheap, but very complex integrated circuits that contain hundreds of millions of transistors in a square centimeter of area. From this came programmable devices such as microprocessors, which are enormously powerful and have revolutionized technology. Also, even though we are using more conductors to transfer our digital data now, many devices can share those same conductors in a signal bus. (For example, the electronics inside your car often talk to each other on shared CAN buses.)
NEXT TIME
Now that we understand what digital communication is, please join us next time as we discuss the finer points of spectral analysis (as it pertains to the voltage of digital signals) and continue onward with our exploration of high-speed digital design and the interrelated considerations of electromagnetic compatibility and signal integrity!