How to Set It Up?
Sionna installation is straightforward once the basic environment is ready. The library is built on TensorFlow, so the first step is preparing a Python setup that matches the supported TensorFlow version. This usually means using Python 3.8 to 3.12 and running the installation on Linux or WSL, since these environments provide the most stable TensorFlow support. GPU acceleration requires an NVIDIA GPU, a suitable CUDA toolkit, and matching cuDNN libraries, but a CPU-only installation works with just pip. Most installation issues come from missing AVX support, unsupported Python versions, or running inside VirtualBox, where GPU access is blocked. With a compatible environment, installing Sionna is as simple as installing TensorFlow and running pip install. In short, the key to a smooth installation is preparing the correct system environment before running any commands.
Official Instruction
For the official installation instructions from the publisher, check out this page: Sionna | Installation
My Setup
As with many open-source tools, installing Sionna does not always go smoothly by simply following the official instructions. Several factors can affect the setup. The Python version must match TensorFlow's supported versions. TensorFlow itself requires AVX support on the CPU, so older hardware or some virtual machines will fail immediately. GPU installation adds more complexity because CUDA, cuDNN, TensorFlow, and the NVIDIA driver must align exactly. Some environments, like VirtualBox, cannot expose a real GPU to the guest OS, so Sionna's GPU mode becomes impossible regardless of configuration. Different Linux distributions may package dependencies differently, which can cause unexpected build errors. Even pip installations can behave differently depending on whether you use the system Python, a virtual environment, or conda. Preparing for Sionna therefore means checking all of these environmental factors first. Once the system environment is aligned with the requirements, installation usually becomes straightforward.
So I thought it would be helpful to walk through my personal setup process to give you some idea of how the installation goes. It may or may not work if you blindly copy and paste the steps described here, depending on your PC hardware, operating system, etc. But it is a good starting point, and you can ask an AI assistant (e.g., ChatGPT, Gemini) when you run into problems.
NOTE : If you are using a PC with Windows only, this will be especially helpful.
NOTE : I first tried to run Sionna on Ubuntu in VirtualBox, but it didn't work due to the factors mentioned above. So I set it up on WSL/Ubuntu on my Windows machine instead.
What I Have
I am using a laptop with Windows 11, so the command-line examples shown here are Windows shell commands.
The following is the system information of the PC and operating system on which Sionna is installed.
|
PS C:\> systeminfo
Host Name: ****
OS Name: Microsoft Windows 11 Home
OS Version: 10.0.26100 N/A Build 26100
OS Manufacturer: Microsoft Corporation
OS Configuration: Standalone Workstation
OS Build Type: Multiprocessor Free
Registered Owner: *****
Registered Organization: HP
Product ID: ******
Original Install Date: 2025-10-15, 6:09:48 PM
System Boot Time: 2025-11-20, 8:42:42 AM
System Manufacturer: HP
System Model: HP OmniBook 5 Laptop 16-af1xxx
System Type: x64-based PC
Processor(s): 1 Processor(s) Installed.
[01]: Intel64 Family 6 Model 181 Stepping 0 GenuineIntel ~2000 Mhz
BIOS Version: Insyde F.04, 2025-07-30
Windows Directory: C:\windows
System Directory: C:\windows\system32
Boot Device: \Device\HarddiskVolume1
System Locale: en-us;English (United States)
Input Locale: en-us;English (United States)
Time Zone: (UTC-05:00) Eastern Time (US & Canada)
Total Physical Memory: 32,218 MB
Available Physical Memory: 10,214 MB
Virtual Memory: Max Size: 64,986 MB
Virtual Memory: Available: 30,509 MB
Virtual Memory: In Use: 34,477 MB
Page File Location(s): C:\pagefile.sys
Domain: WORKGROUP
Logon Server: \\****
Hotfix(s): *****
Network Card(s): 3 NIC(s) Installed.
[01]: Intel(R) Wi-Fi 6E AX211 160MHz
Connection Name: Wi-Fi
DHCP Enabled: Yes
DHCP Server: 10.0.0.1
IP address(es)
****
[02]: Bluetooth Device (Personal Area Network)
Connection Name: Bluetooth Network Connection
Status: Media disconnected
[03]: VirtualBox Host-Only Ethernet Adapter
Connection Name: Ethernet 2
DHCP Enabled: No
IP address(es) ****
Required Security Properties:
Base Virtualization Support
Available Security Properties:
Base Virtualization Support
Secure Boot
DMA Protection
UEFI Code Readonly
SMM Security Mitigations 1.0
Mode Based Execution Control
APIC Virtualization
Services Configured:
Hypervisor enforced Code Integrity
Secure Launch
SMM Firmware Measurement
Services Running:
Hypervisor enforced Code Integrity
Secure Launch
SMM Firmware Measurement
App Control for Business policy: Enforced
App Control for Business user mode policy: Off
Security Features Enabled:
SMM Isolation Level: 30
Hyper-V Requirements: A hypervisor has been detected. Features required for Hyper-V will not be displayed.
|
Enable WSL with Ubuntu
The following is a short introduction to enabling WSL on a Windows system and running it with Ubuntu. There are many command variations; you can try copying and pasting these commands, and ask an AI assistant if anything causes a problem.
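The exact procedure varies with the Windows build, but on current Windows 10/11 a single command installs WSL 2 together with Ubuntu. The following is a typical sequence (run in an elevated PowerShell), not the only one:

```shell
# Run in an elevated (Administrator) PowerShell on Windows
wsl --install -d Ubuntu   # installs WSL 2 plus the Ubuntu distribution
# Reboot when prompted, then verify:
wsl --status              # shows the default distro and WSL version
wsl -l -v                 # lists installed distros and their WSL versions
```

If WSL is already installed, `wsl --install -d Ubuntu` simply adds the Ubuntu distribution.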
Upgrade Ubuntu Package
It may not be strictly required, but it is always good practice to start from the latest packages. (Run these commands inside WSL.)
|
sudo apt update
sudo apt upgrade -y
|
Check out the Ubuntu version
This is the Ubuntu release installed in my WSL:
|
jaeku@jaeku:/mnt/c/Users/jaeku$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 24.04.3 LTS
Release: 24.04
Codename: noble
|
Set Up the Python Environment
Now let's set up a Python environment. (Run these commands inside WSL; the command prompt is not shown here.)
Create a working directory wherever you like. This is my working directory:
|
cd ~
mkdir -p ~/nvidia
cd ~/nvidia
|
Install Python Tools
Install the required Python packages:
|
sudo apt update
sudo apt install -y python3-venv python3-pip
|
- python3-venv → lets you create isolated Python environments
- python3-pip → Python package manager
Create a virtual environment for Sionna
Create a virtual environment for Sionna.
|
python3 -m venv venv-sionna
|
This creates a folder: ~/nvidia/venv-sionna
NOTE : Why do we need this step?
This is not required, but we create a virtual environment for Sionna to keep its Python dependencies isolated from the rest of the system so that installation and upgrades do not cause conflicts. TensorFlow, which Sionna relies on, is particularly sensitive to specific versions of Python, NumPy, and other libraries, and installing it directly into the system Python can easily break other tools or projects. By using a virtual environment, we ensure that Sionna and TensorFlow receive exactly
the versions they require while leaving the system Python untouched. This also allows multiple independent environments to coexist—for example, separate setups for Sionna, PyTorch, Whisper, or other ML frameworks—without interfering with each other. A virtual environment also makes it easy to recreate, reset, or delete the entire setup if needed, providing a clean, reproducible workspace that remains stable even when the underlying WSL or Ubuntu system updates.
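As a concrete illustration of that disposability: using the ~/nvidia layout from above, the whole environment can be deleted and recreated with two commands, after which you simply repeat the activation and pip steps.

```shell
# Throw away and rebuild the virtual environment;
# nothing outside this one folder is touched.
rm -rf ~/nvidia/venv-sionna            # delete the entire environment
python3 -m venv ~/nvidia/venv-sionna   # recreate it from scratch
```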
Activate the virtual environment
Then activate the virtual environment that you just created.
|
source venv-sionna/bin/activate
|
Then your prompt will change to something like this: (venv-sionna) jaeku@jaeku:~/nvidia$
Upgrade pip inside the venv
Upgrade pip inside the venv. Keep in mind that you should run this command within the venv, as shown by the prompt below.
|
(venv-sionna) jaeku@jaeku:~/nvidia$ pip install --upgrade pip
|
Install Sionna (and TensorFlow)
Install Sionna (which pulls in TensorFlow) within the venv. As before, run this command inside the venv, as shown by the prompt below.
|
(venv-sionna) jaeku@jaeku:~/nvidia$ pip install sionna
|
- This will automatically install a compatible TensorFlow 2.x for your Python version.
- On a machine without NVIDIA GPU / CUDA, TensorFlow will run in CPU mode (which is fine for small experiments).
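If a later release ever breaks something, you can pin the exact versions that are known to work together. The version numbers below are simply the ones this walkthrough ended up with, not a general recommendation:

```shell
pip install "sionna==1.2.1" "tensorflow==2.20.0"  # pin known-good versions
pip freeze > requirements.txt                     # snapshot the full environment
# pip install -r requirements.txt                 # reproduce it later / elsewhere
```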
Verify the installation status
You can verify the installation with a simple one-line import test. If the Sionna and TensorFlow versions print and at least /physical_device:CPU:0 appears in the device list, the setup is OK. CUDA-related warnings can be ignored if no GPU is configured.
|
(venv-sionna) jaeku@jaeku:~/nvidia$ python3 -c "import sionna, tensorflow as tf; print('Sionna:', sionna.__version__, 'TF:', tf.__version__); print(tf.config.list_physical_devices())"
2025-12-07 16:33:37.825133: I external/local_xla/xla/tsl/cuda/cudart_stub.cc:31] Could not find cuda drivers on your machine, GPU will not be used.
2025-12-07 16:33:37.841192: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2025-12-07 16:33:38.416914: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2025-12-07 16:33:40.508782: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2025-12-07 16:33:40.511709: I external/local_xla/xla/tsl/cuda/cudart_stub.cc:31] Could not find cuda drivers on your machine, GPU will not be used.
Sionna: 1.2.1 TF: 2.20.0
2025-12-07 16:33:41.684353: E external/local_xla/xla/stream_executor/cuda/cuda_platform.cc:51] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
[PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU')]
|
Key parts:
- Sionna: 1.2.1 TF: 2.20.0 → both libraries import fine inside the venv.
- [PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU')] → TensorFlow sees your CPU and will run on it.
- The CUDA messages ("Could not find cuda drivers on your machine, GPU will not be used." and "failed call to cuInit: ... UNKNOWN ERROR (303)") just mean: "no usable NVIDIA GPU / CUDA in this WSL, so fall back to CPU." That's expected and harmless since I haven't set up GPU/CUDA in WSL.
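If those startup log lines clutter the output, TensorFlow honors the `TF_CPP_MIN_LOG_LEVEL` environment variable; it must be set before TensorFlow is imported. A minimal sketch:

```python
import os

# Hide TensorFlow's C++ startup log noise. Must be set BEFORE `import tensorflow`.
# "0" = all messages, "1" = hide INFO, "2" = hide INFO+WARNING, "3" = errors only
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"

# import tensorflow as tf   # import only after the variable is set
print(os.environ["TF_CPP_MIN_LOG_LEVEL"])  # → 2
```

Alternatively, export the variable in the shell (export TF_CPP_MIN_LOG_LEVEL=2) before running your script.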
Check Out the Operation
Now let's check the operation on a slightly larger scale: a kind of Sionna "hello world".
|
import numpy as np
import tensorflow as tf
from sionna.phy.mapping import Mapper, Demapper, BinarySource
from sionna.phy.channel import AWGN
from sionna.phy.utils import ebnodb2no
# Simple parameters
num_bits_per_symbol = 2 # QPSK
num_bits = 1000
# Blocks
source = BinarySource()
mapper = Mapper("qam", num_bits_per_symbol)
demapper = Demapper("app", "qam", num_bits_per_symbol)
channel = AWGN()
# Generate random bits
b = source([num_bits]) # shape [num_bits]
x = mapper(b) # QAM symbols
ebno_db = 10.0
no = ebnodb2no(ebno_db, num_bits_per_symbol, 1.0)
# Send over AWGN
y = channel(x, no)
# Soft demap
llr = demapper(y, no)
print("b shape:", b.shape)
print("x shape:", x.shape)
print("y shape:", y.shape)
print("llr shape:", llr.shape)
|
The following is the result of running this script (saved as hello_sionna.py):
|
(venv-sionna) jaeku@jaeku:~/nvidia$ python3 hello_sionna.py
2025-12-07 16:44:18.611228: I external/local_xla/xla/tsl/cuda/cudart_stub.cc:31] Could not find cuda drivers on your machine, GPU will not be used.
2025-12-07 16:44:18.627213: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2025-12-07 16:44:19.217875: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2025-12-07 16:44:21.233638: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2025-12-07 16:44:21.236828: I external/local_xla/xla/tsl/cuda/cudart_stub.cc:31] Could not find cuda drivers on your machine, GPU will not be used.
2025-12-07 16:44:23.081131: E external/local_xla/xla/stream_executor/cuda/cuda_platform.cc:51] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
b shape: (1000,)
x shape: (500,)
y shape: (500,)
llr shape: (1000,)
|
Key Points of the Execution Result
- Sionna / TensorFlow Status
- Sionna and TensorFlow ran successfully without errors.
- The script completed and printed tensor shapes, so the Sionna PHY chain is working correctly.
- Execution Mode (CPU vs GPU)
- Messages such as “Could not find cuda drivers on your machine, GPU will not be used” indicate that no CUDA driver or NVIDIA GPU is available in WSL.
- TensorFlow therefore runs in CPU-only mode, which is expected for this setup.
- The cuInit error (UNKNOWN ERROR 303) is just a consequence of CUDA not being configured and can be ignored for CPU-only use.
- CPU Optimizations
- TensorFlow reports that oneDNN custom operations are enabled, meaning it uses optimized CPU kernels.
- The message about AVX2 / AVX_VNNI / FMA indicates that TensorFlow is compiled to take advantage of available CPU instructions where possible.
- Verification via Tensor Shapes
- b shape: (1000,) → 1000 input bits generated by the binary source.
- x shape: (500,) → 500 QPSK symbols after mapping (2 bits per symbol).
- y shape: (500,) → 500 noisy symbols after the AWGN channel.
- llr shape: (1000,) → 1000 soft-output LLRs from the demapper, one per original bit.
- These shapes confirm that the Mapper → Channel → Demapper chain in Sionna is behaving as expected.
- Overall Conclusion
- The environment (venv-sionna) is correctly configured for Sionna.
- The test script validates that Sionna can be used for PHY-level experiments on CPU in WSL.
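The shape bookkeeping above can be reproduced in plain Python. This is not Sionna's implementation, just a hypothetical sketch of a Gray-coded QPSK mapper that shows the same 2-bits-per-symbol ratio:

```python
def qpsk_map(bits):
    """Map bit pairs to Gray-coded QPSK symbols with unit average energy."""
    assert len(bits) % 2 == 0, "QPSK consumes bits two at a time"
    s = 2 ** -0.5  # 1/sqrt(2) amplitude so that |symbol| == 1
    return [complex(s * (1 - 2 * bits[i]), s * (1 - 2 * bits[i + 1]))
            for i in range(0, len(bits), 2)]

bits = [0, 1, 1, 0, 1, 1]       # 6 bits in
symbols = qpsk_map(bits)        # 3 complex symbols out
print(len(bits), len(symbols))  # → 6 3
```

With 1000 input bits this yields exactly 500 symbols, matching the b and x shapes printed by the Sionna script.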