RF + DSP = SDR
Digital radio needs a solid digital platform for the evolution to SDRs. VMEbus and next-generation VXIbus promise to supply it.
It's no secret that digitizing RF signals has presented a major challenge for the digital world. Although much progress has been made in digital signal processing (DSP) technology, direct digitization of high-frequency RF and first-order IF signals is still difficult at best. (Of course, if you throw enough money at anything it can usually be made to happen, so "difficult" here means difficult in both technological and economic terms.)
The last few years, however, have seen tremendous progress in clearing up the convoluted RF/digital marriage, along with the evolution of supporting technology and products. Given these developments, it is becoming easier to implement digital technology in basic superheterodyne systems. But when it comes to complex systems that require multiple receivers and the components to support them, the challenge grows accordingly.
Enter the Computer
One prominent approach to implementing digital radio technology is integrating it with a computer subsystem. When talking computers, the first thing that pops into mind is the Intel platform. However, the industry standard for industrial-grade computing is really reduced instruction set computing (RISC)- and Alpha-based systems running platforms such as the VersaModule Eurocard (VME)bus and its subset, the VME eXtensions for Instrumentation (VXI)bus. While other processor and bus architectures (MIPS, x86, CompactPCI, ISA/PCI and PC/104) can be used, the modularity of the VMEbus and the added scalability of the VXIbus make it an attractive platform for digital radio system implementation.
The VMEbus is a subsystem that allows many of the analog stages of an RF system to be replaced by their digital counterparts. The modularity of the VMEbus can be examined by looking at how it addresses the traditional, analog-based radio system.
In analog-based radio systems, the components are traditionally implemented as separate pieces of equipment hardwired together. Each component contains fixed-function analog circuitry. Reconfiguring the typical analog radio requires replacing components (crystals, filters, power components), manually adjusting some type of analog device (a resistor, capacitor or coil) or, in the worst case, replacing internal subsystems such as the demodulator or converter circuitry. Some of these options are tedious, and many require a complete realignment of the receiver section as well. Additionally, interconnect cabling is almost sure to need disconnecting and reconnecting, and soldered components often need to be replaced.
On the other hand, so-called "soft" components (those whose signal processing algorithms are implemented in software) can be reconfigured via software reprogramming. The advantage of this approach is obvious: on-the-fly reconfigurability eliminates the need to manually recalibrate and swap out components.
If, for example, the system needs to be expanded to handle additional downconverters, in a soft radio design it is simply a matter of reprogramming the application software to recognize the additional components and integrate them into the architecture.
Furthermore, as digital technology is applied closer and closer to the RF end, other techniques such as varying the IF and tuning filters can be accomplished with simple reprogramming as well.
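To make the idea concrete, here is a minimal sketch of what "soft" tuning looks like in code. The struct layout, tap count and function name are illustrative assumptions, not taken from any real VME/VXI board support package: the point is that the IF frequency and filter response live in memory, so retuning is a data write rather than a hardware change.

```c
/* Minimal sketch of a "soft" channel: the IF frequency and the FIR
 * coefficients that define the filter are plain data. Names and
 * sizes here are illustrative only. */
#include <string.h>

#define NUM_TAPS 5

struct soft_channel {
    double if_hz;            /* current IF center frequency */
    double taps[NUM_TAPS];   /* FIR coefficients defining the filter */
};

/* Retune the channel: change the digital oscillator frequency and
 * load a new set of filter coefficients in one call. */
void retune(struct soft_channel *ch, double new_if_hz,
            const double *new_taps)
{
    ch->if_hz = new_if_hz;
    memcpy(ch->taps, new_taps, sizeof ch->taps);
}
```

Swapping a crystal and realigning a receiver section collapses into a single function call against this structure.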
The platform
Modularity is the buzzword of choice in many industries today, and it is the enabling concept for software-defined radio (SDR) as well. This article focuses on the VME/VXIbus architecture and how it allows the implementation of the circuitry. And while the VME/VXIbus topology is a desirable approach to SDR implementation, keep in mind that it is by no means the only way.
VME systems are a suitable platform for SDRs because VME is a fairly mature technology, which means design cycles and costs are well understood. But while VMEbus systems are modular in nature, they are not widely scalable, and this factor limits their application.
One attempt to circumvent this scalability limitation has been the development of what is termed the mezzanine bus. Mezzanine boards essentially form a secondary backplane system that simplifies interconnect, offering a single point of connection between the various function boards on the device side and the VME subsystem. A system using only VMEbus boards would require a number of interface buses to connect the function boards together. For example, the analog-to-digital (A/D) converter boards would be connected to the VMEbus downconverters through a parallel multidrop bus, which limits the number of downconverters that can be implemented. Additionally, the downconverters must interface to the demodulators through a lower-speed secondary bus.
The integration of the functional blocks of the system on mezzanine boards eliminates this problem. It works because mezzanine boards can carry a variety of signal processing modules with a common backbone interface: the modules are interfaced to one side of the mezzanine boards, and the other side of the boards is interfaced to the VMEbus via a carrier board. Only the carrier board connects to the backplane, as opposed to several boards without the mezzanine bus. A second advantage of this method is that it keeps bus speeds up and interface complexity to a minimum.
Mezzanine and carrier boards and their functions
Mezzanine boards are basically a set of independent function boards that offer better design flexibility than straight VMEbus boards. Because mezzanine boards are not directly VMEbus-compatible, they enlist the aid of carrier boards to provide the interface to the VMEbus backplane. These carrier boards can be intelligent or simply provide a passive interface; either way, they add modularity to the VMEbus backplane. Because the carrier boards provide a standard interface to the VMEbus backplane on one side and to the mezzanine boards on the other, they allow the designer to concentrate on designing functions onto mezzanine boards rather than dealing with both the functions and the actual board interface.
However, VMEbus is not without its shortcomings. Theoretically, VMEbus specifications max out at 40 MB/s (there is now a VME64 specification that supports a theoretical bandwidth of 80 MB/s). In reality, however, VMEbus generally runs closer to 15 MB/s due to a number of factors: system settling time, backplane propagation delay, circuit delay, register setup and hold times, synchronization times, bus acquisition times, the intermixing of D32, D16 and D8 cards (the system will run only as fast as the slowest component) and other protocol variables.
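The gap between the theoretical and real-world figures is just bus-cycle arithmetic. The sketch below assumes round numbers for illustration (a 32-bit D32 transfer moving 4 bytes per cycle, a 100 ns minimum cycle, and roughly 167 ns of additional per-cycle overhead); these specific timings are assumptions, not measured values from any particular backplane.

```c
/* Throughput in MB/s for a bus moving bytes_per_cycle bytes every
 * cycle_ns nanoseconds. 1 byte per nanosecond equals 1,000 MB/s.
 * Used here only to illustrate why overhead per cycle drags a
 * 40 MB/s theoretical bus down toward 15 MB/s. */
double bus_mb_per_s(double bytes_per_cycle, double cycle_ns)
{
    return bytes_per_cycle / cycle_ns * 1000.0;
}
```

With the assumed numbers, bus_mb_per_s(4, 100) gives the 40 MB/s theoretical rate; stretching the same 4-byte transfer to about 267 ns of total cycle time (settling, arbitration, setup and hold) lands near the 15 MB/s observed in practice.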
Enter the VXIbus
Because of these limitations, an advanced VMEbus-compatible system that would not be hindered by them was developed: the VXIbus. The VXIbus improves upon the VMEbus in a number of ways. It actually includes two buses: the VXIbus proper, a 32-bit VME-compatible bus with the same 40 MB/s maximum transfer rate, and a local bus. The VXIbus also supports high-performance parallel digital signal processing.
The advantage of the local bus is that it is designed to off-load the movement of analog or digital data between sets of modules, which in turn frees the system-wide VXIbus. This provides faster overall system performance because both buses can work independently of one another. It also provides a platform for parallel processing (notably, PC technology has only recently implemented such a platform-wide design). Further, the local bus is an intelligent bus and can move data at upwards of 100 MB/s among VXIbus modules.
The VXIbus also has better uniform specifications for dealing with RF issues such as EMC/RFI shielding and signal coupling, as well as power and cooling requirements. These tight specifications provide a clean, well-defined environment that allows sensitive, high-resolution I/O to coexist on the same board, in close proximity to potentially noisy DSP systems, the bane of mixed-signal design. Additionally, because of these tightly defined specifications, VXI can route data at much higher speeds and with better error correction and integrity than VMEbus.
Today's VXIbus systems make use of these tight integration specifications to mount components, including input/output (I/O), DSPs, direct memory access (DMA) controllers, central processing units (CPUs), memory modules and support peripherals, on a common infrastructure. This is a far cry from the early VXI/VME systems that kludged together a number of boards, interfaces and interconnect, typically yielding an expensive system with sub-par performance.
DSP gets on the VXIbus
The VXI environment is one that can accommodate the wide variance of signal processing demands. It can support both the tightly specified parameters needed for optimal functionality and the dynamic requirements of multiple platforms.
Because VXI is capable of supporting fast real-time processing and I/O, the processor must be tightly coupled to the I/O subsystem. For this reason, typical RISC/CISC (reduced/complex instruction set computer) microprocessors, the processors that run the computer platforms SDRs use, cannot be readily integrated into radio subsystems. Therefore, digital signal processors become the processors of choice for SDRs. The DSP's internal structure supports design parameters such as fast response to interrupts for I/O events, multi-DSP configurations, multiple data/memory bus access at very high speeds, on-chip I/O peripherals and optimized parallel signal processing instruction sets. Additionally, DSPs can be programmed in standard C or assembly language.
The specs
Today's DSPs can provide more than 1,200 million instructions per second (MIPS) of 64-bit performance and reach speeds over 1 GHz. Using multiple/parallel/concurrent configurations, they can deliver upwards of 100 MB/s of throughput. Modern devices and systems can provide impressive performance, such as continuous 512-point fast Fourier transforms (FFTs) on a 1.5 kHz input stream, a continuous 100-tap finite impulse response (FIR) filter on a 300 kHz input stream and a continuous 1,024-point correlation on a 28 kHz input stream.
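The FIR filter named in those benchmark figures is a simple kernel, which is exactly why DSPs with multiply-accumulate hardware handle it so well. Below is a minimal direct-form FIR in plain C (tap count and coefficients are illustrative, and samples before the start of the input are treated as zero); a DSP would run the inner multiply-accumulate loop in dedicated hardware, one tap per cycle.

```c
/* Direct-form FIR filter: each output sample is the dot product of
 * the most recent NTAPS input samples with the coefficient array h.
 * y[i] = sum over k of h[k] * x[i-k], with x[<0] taken as zero. */
#define NTAPS 4

void fir(const double h[NTAPS], const double *x, double *y, int n)
{
    for (int i = 0; i < n; i++) {
        double acc = 0.0;                 /* multiply-accumulate */
        for (int k = 0; k < NTAPS && k <= i; k++)
            acc += h[k] * x[i - k];
        y[i] = acc;
    }
}
```

Feeding an impulse through the filter returns the coefficients themselves, the standard sanity check for an FIR implementation.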
Further, today's devices offer built-in memory control hardware to facilitate concurrent I/O and interprocessor communications. This promotes a significant degree of scalability and frees the CPU to do what it does best: handle arithmetic operations.
One of the more notable features of cutting-edge DSP technology is its robust implementation of multiple communications (COM) ports, each capable of tens of MB/s of throughput. This port design allows the DSP to handle data concurrently and independently of the system's CPU, thereby facilitating parallel processing. These ports also serve as point-to-point links between processors, should data need to be offloaded to support processors.
A key point of this system architecture is that it eliminates the need for complex shared-memory schemes to enable parallel processing. Additionally, COM ports can be interconnected to form a virtually unlimited parallel-processing network across multiple systems.
The ports' second function is to serve as I/O interfaces. Because each port is usually paired with an on-board DMA processor (one DMA channel per port) that supports full-bandwidth interprocessor communications, the communications stay on their own bus and do not burden the CPU. This pairing of port and DMA controller hardware is why 100 MB/s data throughput is obtainable.
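The way a port/DMA pair keeps the CPU unburdened is usually a double-buffer (ping-pong) handoff: while the DMA engine fills one buffer, the CPU processes the other, and the two swap roles at each block boundary. The sketch below models that handoff in plain C; on real hardware the "fill" side would be the DMA controller running without CPU involvement, and all names and sizes here are illustrative.

```c
/* Ping-pong (double-buffer) handoff between a DMA-style producer
 * and a CPU consumer. buf[active] is being filled; the other buffer
 * is free for processing. Sizes and names are illustrative. */
#define BUF_LEN 4

struct pingpong {
    double buf[2][BUF_LEN];
    int active;              /* index of the buffer being filled */
};

/* Simulate the DMA side: deposit a block of samples into the active
 * buffer, then swap so the CPU can consume it. Returns a pointer to
 * the buffer that is now ready for processing. */
double *pp_fill_and_swap(struct pingpong *pp, const double *samples)
{
    int filled = pp->active;
    for (int i = 0; i < BUF_LEN; i++)
        pp->buf[filled][i] = samples[i];
    pp->active ^= 1;         /* "DMA" moves on to the other buffer */
    return pp->buf[filled];  /* CPU processes this one */
}
```

Because the producer never writes the buffer the consumer is reading, no locking is needed as long as the swap happens only at block boundaries.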
Addressing the software
One of the advantages of on-board DMA hardware is that it isolates the applications programmer from the topology of the finished system. This allows applications software to be developed independently of the system topology or hardware configuration, and even allows concurrent software and hardware development. It also facilitates processor-to-processor messaging at the kernel level and supports dynamic allocation of tasks to balance system loading and improve efficiency. A number of Parallel C development environments are designed to facilitate parallel processing applications development. Parallel C lets developers write code for multiprocessor environments much as multitasking code is written for single processors; the difference comes at the final stage, when the code is configured to take advantage of the multiprocessor environment.
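Topology-independent messaging amounts to a level of indirection: application code addresses a logical processor ID, and a routing table filled in at configuration time maps that ID to a physical COM port. The routing-table contents and function names below are made up for illustration; they are not from any real Parallel C kernel.

```c
/* Sketch of topology-independent messaging. The application names
 * only a logical destination; the kernel layer resolves which
 * physical COM port reaches it. Table contents are illustrative. */
#define NPROCS 4

/* route[i] = physical COM port used to reach logical processor i,
 * normally populated by the kernel when the system is configured. */
static const int route[NPROCS] = { 0, 2, 1, 3 };

struct message {
    int dest;        /* logical processor ID */
    int port;        /* physical port, resolved by the kernel */
    double payload;
};

struct message kernel_send(int dest, double payload)
{
    struct message m = { dest, route[dest], payload };
    return m;
}
```

If the hardware topology changes, only the routing table is rebuilt; the application's kernel_send calls are untouched, which is the isolation the paragraph above describes.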
The architecture becomes the module
Most individual devices can be integrated into a standard configuration often referred to as a parallel-processing module. Such a module is usually a system with various configurations of processors, memory and I/O integrated onto a single board. Examples include single- and multi-DSP modules, analog input modules, video-capture modules, dynamic random access memory (DRAM) modules, high-speed serial I/O modules and digital downconverter modules.
In a typical application, one or more modules are inserted onto a carrier board (the VXIbus carrier board's functionality is the same as the VMEbus carrier board's, except that it is designed for the VXIbus interface). When implemented for digital radio applications, a number of advantages become immediately apparent. First, because DSPs are fully programmable, new applications software can be implemented in already-deployed systems: enhancements can be easily added, new feature upgrades can be integrated and, if standards change, the system can be upgraded to the new standard without swapping out components. Second, signal processing operations such as channel decoding, spectral analysis, signal demodulation and voice decoding can be reprogrammed or updated by simply loading a small software program. This approach minimizes downtime, reduces errors and lowers the technical expertise required to modify these deployed systems.
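A standards change without a hardware swap usually comes down to an indirect call: the signal chain invokes the current demodulator through a function pointer, and a field upgrade is a pointer assignment. The demodulator bodies below are trivial placeholders, not real algorithms, and all names are illustrative.

```c
/* Software-swappable demodulator: the receive chain calls through a
 * function pointer, so changing standards is a data update, not a
 * hardware change. Both demodulator bodies are placeholders. */
typedef double (*demod_fn)(double i, double q);

/* Placeholder "AM" detector: squared envelope of the I/Q pair. */
static double demod_am(double i, double q) { return i * i + q * q; }

/* Placeholder standing in for some other demodulation standard. */
static double demod_other(double i, double q) { return i - q; }

struct receiver { demod_fn demod; };

/* "Field upgrade": install a new demodulator in a deployed system. */
void install_demod(struct receiver *rx, demod_fn fn) { rx->demod = fn; }
```

Deployed systems keep running the same receive loop; only the pointer behind it changes when the new standard's software is loaded.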
What it enables
When considering soft radio architecture, there are many obvious advantages, some of which have been detailed in this article. In today's cost-conscious environment, however, none usually carries more weight than eliminating the need to change out hardware. Additional advantages include real-time reconfiguration, upgradeability to new standards, scalability and the ability to handle multiple standards. There is little doubt that soft radio architecture will become ubiquitous as technology continues to refine devices to handle ever-wider bandwidths at ever-increasing speeds. It is not unthinkable that within the next few years, almost all radio systems being deployed will use soft architecture based on ultra-high-speed, highly functional DSPs and peripheral interfaces.
© 2013 Penton Media Inc.