For a while Huawei seemed to have a lock on much of the world's initial 5G infrastructure build-out across the radio access network. Then geopolitical concerns threw that into question. Now, all indications are that we're back to a level playing field, as long as the other players can ramp up quickly on the radio technology. The doors are wide open; however, 5G creates significant new demands on baseband systems.
It’s challenging to know where to start in describing the complexity of 5G demands in the baseband. Certainly, there are enormous processing requirements in supporting massive MIMO channels across multiple high-frequency bands. The baseband must also support a wide range of services, from eMBB to URLLC, in both sub-6 GHz and mmWave spectrum, while continuing to support legacy LTE. That adds up to a wide range of highly varied architectures, algorithms and modes.
To avoid getting lost in trying to consider all options simultaneously, think about a centralized radio access network (C-RAN) configuration. Remote radio units (RRUs) do very basic processing, lower-PHY handling and beamforming near the antenna and hand off most of the processing to a central office. Still, RRU processing must provide significant aggregation to handle that massive MIMO, beamforming and different radio access technologies (RATs) for all those communication links.
Back at the central office, more and faster links to all those RRUs also demand significant aggregation to provide high levels of parallelism. That aggregation should also support Baseband Unit (BBU) pooling, allowing processors to be shared effectively across connections to multiple RRUs, reducing capital and operating costs for network operators.
That’s one part of the challenge. A second part is that radio access networks themselves are evolving rapidly, from the traditional distributed RAN (D-RAN – most of the processing at the base of each cell tower) to centralized (C-RAN – most of the processing in a central or edge office) to completely virtualized V-RAN, all in the spirit of lower cost and greater flexibility. Private networks are another emerging option. Another hot trend is Open RAN (O-RAN), aiming to decouple hardware and software in RANs. Servicing each of these options requires much more flexibility in how compute is distributed and optimized for ROI.
A third consideration is the need to support multi-user and multi-computing tasks. Imagine a 5G signal, 100 MHz wide (the maximum carrier bandwidth in sub-6 GHz spectrum). You could allocate this to a single user, giving them a multi-gigabit link. Or you could aggregate many users in the same bandwidth, each getting a small bandwidth allocation. AR/VR adds another wrinkle, demanding ultra-low-latency links which require short-time resource allocation. Efficient vector compute in such multi-user cases can become very challenging – standard big-vector support (very effective for a single user) is not enough; the underlying hardware architecture needs to be carefully designed to accommodate cases with many small slots.
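To make the trade-off concrete, here is a minimal back-of-the-envelope sketch of the allocation arithmetic described above. The spectral-efficiency figure is an illustrative assumption (a plausible aggregate for massive MIMO with high-order modulation), not a 3GPP-specified value, and the even 200-way split is hypothetical.

```python
# Rough throughput model: throughput ~= bandwidth x spectral efficiency.
# SPECTRAL_EFF is an assumed illustrative figure, not a spec value.

CARRIER_BW_HZ = 100e6   # 100 MHz carrier, the sub-6 GHz maximum
SPECTRAL_EFF = 30.0     # assumed aggregate bits/s/Hz (massive MIMO, high-order QAM)

def throughput_bps(bandwidth_hz: float, spectral_eff: float = SPECTRAL_EFF) -> float:
    """Approximate link throughput for a given bandwidth allocation."""
    return bandwidth_hz * spectral_eff

# One user gets the whole carrier: a multi-gigabit link,
# well served by wide-vector DSP operations.
single_user_bps = throughput_bps(CARRIER_BW_HZ)

# The same carrier split evenly across 200 users: each link is modest,
# but the baseband now schedules 200 small, independent slots per TTI.
n_users = 200
per_user_bps = throughput_bps(CARRIER_BW_HZ / n_users)

print(f"single user: {single_user_bps / 1e9:.1f} Gb/s")
print(f"per user ({n_users} users): {per_user_bps / 1e6:.1f} Mb/s")
```

The total bits per second are identical in both cases; what changes is the shape of the compute, from one long vector job to hundreds of short ones, which is exactly why big-vector hardware alone falls short.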
Finally, of course, 5G is a moving target. 3GPP has ratified Release 15, supporting initial deployment of the eMBB use-case. In Release 16, expected within a few months, they will introduce URLLC, handling ultra-short latencies. Release 17 is expected in 2021, and so on. Hardwired solutions simply won’t be able to keep up. Staying on top of all these releases and adapting to the new challenges they will introduce demands a hardware platform which is fast but built expressly for software-defined operation.
Is this a market worth chasing? Analysts predict consistent RAN capital expenditures of $30B annually and expect 5G base station investment to dominate those expenditures from 2022 onwards, notwithstanding heavily advertised concerns about 5G rollout.
Meeting all of these needs at scale requires a new DSP architecture, optimized specifically for baseband applications and developed in close collaboration with a leading infrastructure equipment maker. The core must be fast enough to meet the highest performance needs in 5G, it must be very flexible in delivering the multi-core and multi-thread support needed for dynamic resource allocation, and it should be very area-efficient in support of the large core clusters that these applications will require.
You should check out CEVA’s officially released XC16 processor. We’re now the top IP provider for eNB (LTE base station) and gNB (5G base station) and we’ve already been adopted in both Nokia and ZTE baseband solutions. If you’re building 5G infrastructure solutions, we’re worth a closer look. You can learn more HERE.
Published on Embedded Computing Design