|Data Throughput - LTE - General|
Even though there is a huge difference between LTE and non-LTE in terms of physical layer implementation, you can think of LTE as just an extension of HSPA, meaning that in LTE you will have an even bigger TBS than HSPA+ Dual Carrier. So in this case, a very high performance IP data server is the most important factor for the throughput test. For the detailed data path for LTE, you can refer to other pages of this site; almost the whole of this site is about LTE.
For example, if you are testing a Category 3 max throughput case (system BW 20 MHz, number of RBs = 100), the ideal max throughput is around 100 Mbps. Even with a PC-to-PC direct connection, it is not easy to achieve this level of throughput. So for this kind of extremely high throughput test, it is a mandatory step to check the PC performance first (both server and client) by connecting the two PCs directly with a crossover LAN cable and running the throughput test. In rare cases, even the quality of the LAN cable can influence the throughput. I recommend using a Cat 6 LAN cable, which supports Gigabit data rates.
In addition, using 64QAM in downlink is very common in LTE. With this kind of very high modulation scheme, the throughput is influenced greatly by channel quality (RF signal quality). A small change in downlink power, fading, or AWGN can create large changes in throughput.
One of the most frequent questions that I get on the throughput test is "What is the ideal throughput under this condition?" In the case of R99 or HSPA, the ideal throughput is described in a neat one-page table and most people know what kind of throughput to expect, but in LTE the situation is much more complicated, since there are several factors that determine the throughput and each of these factors can take many different values. So the number of possible combinations defining the throughput is huge.
The most important factors determining the ideal throughput are as follows: the number of allocated RBs and the MCS (these two together determine the transport block size), and the number of MIMO layers.
The way to calculate the ideal throughput using these factors is explained in "Throughput Calculation Example" on the Quick Reference page.
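The calculation itself boils down to this: look up the transport block size for the chosen MCS index and number of RBs, multiply by the number of MIMO layers, and divide by the 1 ms subframe duration. The sketch below shows the idea with a single TBS entry (75376 bits for I_TBS = 26, N_RB = 100, from 3GPP TS 36.213 Table 7.1.7.2.1-1); a real tool would carry the full table.

```python
# Sketch of the ideal PHY-layer throughput calculation.
# Only one TBS entry is included here for illustration.
TBS_BITS = {(26, 100): 75376}  # (I_TBS, N_RB) -> transport block size in bits

def ideal_phy_mbps(i_tbs, n_rb, layers=1):
    # One transport block per codeword in every 1 ms subframe:
    # bits per ms equals kbit/s, so divide by 1000 to get Mbps.
    return TBS_BITS[(i_tbs, n_rb)] * layers / 1000

print(ideal_phy_mbps(26, 100, layers=2))  # 150.752
```

Note that 2 x 75376 = 150752 bits per TTI exceeds the Category 3 cap of 102048 bits per TTI, which is why such conditions land in the gray cells of the tables below.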
I made several examples of resource allocations and their ideal throughput as follows. These conditions are the most common ones for a maximum throughput test. The values marked in gray cells are those that go over Category 3 LTE device capability. In most cases, when I tried a condition marked in a gray cell with the commercial UEs I tested (Category 3 UEs), I got one of the following results.
i) The throughput degrades a lot (normally much lower than 100 Mbps).
ii) It reaches almost 100 Mbps, but does not go over.
(Thank God! A call drop didn't happen even in these cases.)
<< Downlink Resource Allocation and Throughput >>
<< Uplink Resource Allocation and Throughput >>
Now I know what throughput to expect. Will I get this value if I just plug my device into the test equipment and run FTP/UDP?
In most cases, the answer is NO.
Why not ?
There are a couple of factors you have to keep in mind:
i) The ideal throughput values in the table are on the basis of physical layer operation, not higher layer (e.g., IP layer) throughput.
ii) The ideal throughput values in the table are based on the assumption that there is no physical layer overhead and that these resources can be allocated in every subframe.
When a stream of data comes from the IP layer down to the physical layer, some overhead is added (e.g., PDCP header, RLC header, MAC header). So the IP layer throughput is lower than the physical layer throughput.
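To get a feel for how much these L2 headers cost, here is a minimal sketch. The header sizes are assumptions (typical ballpark values: PDCP ~2 bytes, RLC ~2 bytes, MAC subheader ~3 bytes); actual sizes vary with SN length, segmentation, and padding, and this ignores SIB and scheduling overhead entirely.

```python
# Sketch: IP-layer throughput after L2 header overhead.
# Header sizes below are assumed typical values, not exact figures.
PDCP_HDR, RLC_HDR, MAC_HDR = 2, 2, 3  # bytes (assumptions)

def ip_layer_mbps(phy_mbps, ip_packet_bytes=1500):
    payload = ip_packet_bytes
    total = payload + PDCP_HDR + RLC_HDR + MAC_HDR
    # The PHY pipe carries payload + headers; only the payload counts
    # as IP-layer throughput.
    return phy_mbps * payload / total

print(round(ip_layer_mbps(100.0), 2))  # about 99.54
```

As the numbers show, header overhead alone is small for large packets; the bigger losses in practice come from the other overheads described next.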
What other kinds of overhead can we think of? The following are the most common:
i) SIB transmission
ii) Symbols allocated for PCFICH and PDCCH.
In a subframe where a SIB is transmitted, you cannot allocate the full bandwidth for data transmission. If you could dynamically allocate a slightly smaller number of RBs in these subframes, you would only have to sacrifice the RBs used for SIB transmission, but in most test equipment the situation is even worse. The equipment does not allow this kind of dynamic resource allocation, just to avoid overlapping the SIB and the user data. Instead, it does not transmit any user data at all in a subframe where a SIB is being transmitted. In such a case, SIB transmission can be a pretty huge overhead for the max throughput test.
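The impact of this "blank the whole subframe" behavior is easy to estimate: every subframe lost to a SIB removes 1 ms of user data. The sketch below uses assumed scheduling numbers (SIB1 every 20 ms plus a couple of SI-window subframes per 100 ms); the actual count depends on the SI periodicities configured on your equipment.

```python
# Sketch: throughput loss when the equipment blanks user data in every
# subframe carrying a SIB. The blanked-subframe count is an assumption.
def effective_mbps(phy_mbps, blanked_per_100):
    """blanked_per_100: subframes blanked out of every 100 (i.e. per 100 ms)."""
    return phy_mbps * (100 - blanked_per_100) / 100

# Assumed example: SIB1 in 5 subframes per 100 ms + 2 SI-window subframes
print(effective_mbps(100.0, 7))  # 93.0
```

Even this modest assumption already costs several Mbps, which is one reason measured throughput falls short of the ideal table values.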
Another overhead comes from PCFICH and PDCCH. As you learned in the Downlink Frame Structure section, at least one symbol (and at most 3 symbols) of each subframe is allocated for PCFICH and PDCCH. If you allocate three symbols for PCFICH and PDCCH (i.e., you set the CFI to 3), 3 out of 14 symbols are allocated for non-user data. However, speaking in a purely ideal sense, this overhead does not influence the ideal throughput, since the transport block size determined by 3GPP for each combination of resource allocations already takes this overhead into account. But in reality, if you allocate too large a transport block size (too high an MCS and number of RBs) together with a large control region (e.g., CFI of 2 or 3), it normally leads to a lot of CRC errors, which in turn results in throughput degradation.
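The size of this control region is simple to quantify: with a normal cyclic prefix there are 14 OFDM symbols per 1 ms subframe, so the CFI directly sets the fraction reserved for PCFICH/PDCCH. A minimal sketch:

```python
# Sketch: fraction of a subframe's OFDM symbols reserved for PCFICH/PDCCH,
# assuming normal cyclic prefix (14 symbols per 1 ms subframe).
SYMBOLS_PER_SUBFRAME = 14

def control_symbol_fraction(cfi):
    assert 1 <= cfi <= 3, "CFI must be 1, 2, or 3"
    return cfi / SYMBOLS_PER_SUBFRAME

print(round(control_symbol_fraction(3), 3))  # 0.214
```

So a CFI of 3 reserves over a fifth of the symbols, which is why the 3GPP TBS tables already discount it, and why pairing a maximal TBS with a large CFI squeezes the data region in practice.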
As far as I have tried with commercial devices, the maximum IP layer throughput (UDP) that I achieved was around 90 Mbps with 20 MHz system bandwidth and MIMO. The physical layer throughput approached almost 100 Mbps (only a couple of Mbps lower than 100 Mbps).