Finally AI/ML is getting into 3GPP :-). This note is not about what AI/ML is. I have a whole set of sections about AI/ML itself, and 3GPP has also posted a short article about AI/ML. This note is more about how and where AI/ML will be used within the 3GPP framework.
3GPP AI/ML Framework
As of now (Dec 2021), it is not clearly decided exactly what kind of AI/ML framework will be used in the 5G/NR network. One possible high-level view is as shown below (this is based on TR 37.817 - Figure 4.2-1: Functional Framework for RAN Intelligence).
Principles of 3GPP AI/ML
When I was thinking of applying AI/ML in cellular technology (especially on the RAN side), I had so many questions popping up in my head. What kind of neural network model will be used? How will it be trained? Who (gNB or Core Network) will train the network? What kind of data will be used for training the network? What kind of use cases will there be? and so on... and on... and on. As of now (Dec 2021), I don't have clear answers to these questions. I would need to wait until the 3GPP specification of AI/ML is finalized (Rel 18). But at least I can get a glimpse of the general principles of AI/ML implementation from TR 37.817 - 4.1. It is stated as follows (just reading these bullets gives me a pretty good idea).
- The detailed AI/ML algorithms and models for use cases are implementation specific and out of RAN3 scope.
- The study focuses on AI/ML functionality and corresponding types of inputs/outputs.
- The input/output and the location of the Model Training and Model Inference function should be studied case by case.
- The study focuses on the analysis of data needed at the Model Training function from Data Collection, while the aspects of how the Model Training function uses inputs to train a model are out of RAN3 scope.
- The study focuses on the analysis of data needed at the Model Inference function from Data Collection, while the aspects of how the Model Inference function uses inputs to derive outputs are out of RAN3 scope.
- Where AI/ML functionality resides within the current RAN architecture, depends on deployment and on the specific use cases.
- The Model Training and Model Inference functions should be able to request, if needed, specific information to be used to train or execute the AI/ML algorithm and to avoid reception of unnecessary information. The nature of such information depends on the use case and on the AI/ML algorithm.
- The Model Inference function should signal the outputs of the model only to nodes that have explicitly requested them (e.g. via subscription), or nodes that are subject to actions based on the output from Model Inference.
- An AI/ML model used in a Model Inference function has to be initially trained, validated and tested before deployment.
- The generalized workflow should not prevent to “think beyond” the workflow if the use case requires so.
- User data privacy and anonymisation should be respected during AI/ML operation.
In short, these can be summarized as follows:
- 3GPP would not define the exact neural network model or its implementation. It would define only the interfaces and the types of inputs/outputs of the neural network (Model).
- The whole AI/ML functionality would be composed of several different components (e.g., Data Collection, Model Training, Model Inference, Actor). Where these components will be located will vary depending on the specific use case.
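To make the component structure above more concrete, here is a minimal Python sketch of how the functional framework components could interact. The class and method names are my own illustration (not 3GPP-defined APIs), and the "model" inside is a trivial placeholder, in line with the principle that the actual algorithm is out of 3GPP scope; only the components and the data flowing between them (training data, inference data, output, feedback) mirror the framework.

```python
# Illustrative sketch of the RAN intelligence functional framework.
# Class/method names are hypothetical; 3GPP specifies the components and
# their inputs/outputs, not the model or the implementation.

class DataCollection:
    """Provides input data to Model Training and Model Inference."""
    def __init__(self):
        self.samples = []              # e.g., measurements from UEs / network nodes
    def collect(self, sample):
        self.samples.append(sample)
    def training_data(self):
        return list(self.samples)      # data needed by the Model Training function
    def inference_data(self):
        return self.samples[-1]        # data needed by the Model Inference function

class ModelTraining:
    """Trains the model; how inputs are used to train is out of RAN3 scope."""
    def train(self, training_data):
        # Placeholder "model": always predicts the mean of the training data.
        mean = sum(training_data) / len(training_data)
        return lambda x: mean          # deployed/updated model for Model Inference

class ModelInference:
    """Runs the deployed model on inference data to produce an output."""
    def __init__(self, model):
        self.model = model
    def infer(self, inference_data):
        return self.model(inference_data)

class Actor:
    """Acts on the output and returns feedback to Data Collection."""
    def act(self, output, data_collection):
        data_collection.collect(output)   # feedback loop back into Data Collection
        return output

# Wiring the components together
dc = DataCollection()
for s in [1.0, 2.0, 3.0]:
    dc.collect(s)
model = ModelTraining().train(dc.training_data())
out = ModelInference(model).infer(dc.inference_data())
Actor().act(out, dc)
print(out)   # 2.0 (mean of the collected samples)
```

Note how the separation of concerns matches the principles quoted earlier: Model Training and Model Inference each request only the data they need from Data Collection, and the Actor is the node that consumes the inference output.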