SPDX-License-Identifier: Apache-2.0
Copyright (c) 2021 Intel Corporation
Intel® Smart Edge Open Provisioning Process
- Overview
- Preconditions and software requirements
- Provisioning Process Scenarios
- Provisioning Configuration
- GitHub Credentials
- Docker Pull Rate Limit
- Troubleshooting
Overview
The Intel® Smart Edge Open automated provisioning process relies on the Intel® Edge Software Provisioner (ESP). It provides a method of installing the operating system automatically and deploying the Intel® Smart Edge Open cluster.
The 5G Private Wireless Experience Kit with Integrated RAN provides the pwek_aio_provision.py command-line utility, which uses the
Intel® Edge Software Provisioner toolchain to deliver a smooth installation experience.
The provisioning process requires a temporary provisioning system running on a separate machine that is routable from the subnet in which the provisioned machines are supposed to work.
Preconditions and software requirements
For the preconditions and software requirements, follow the link; all the required information can be found there.
Provisioning Process Scenarios
Default Provisioning Scenario
The default provisioning process consists of the following stages:
- Repository Cloning
- Configuration
- Artifacts Building
- Services Start-Up
- Installation Media Flashing
- System Installation
- Services Shut Down
Repository Cloning
Each of the Intel® Smart Edge Open experience kits comes with its provisioning utility tailored to the kit’s
requirements. This script resides in the root directory of an experience kit repository, and its name matches the
following pattern: <experience-kit-name-abbreviation>_provision.py, e.g., pwek_aio_provision.py.
To be able to run the provisioning utility, clone the chosen experience kit repository. You can check out the main
branch to access the latest experience kit version or select a specific release. In the latter case, it is advised to
use the provisioning instructions published with the release to avoid incompatibilities caused by the process evolution.
For convenience, you can change the current directory to the directory the kit is cloned to, e.g.:
[Provisioning System] # git clone https://github.com/smart-edge-open/private-wireless-experience-kits --recursive --branch=main ~/pwek
[Provisioning System] # cd ~/pwek
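To switch to a specific release instead of the main branch, check out the corresponding tag and update the submodules; the tag name below is a placeholder:
[Provisioning System] # git checkout <release-tag>
[Provisioning System] # git submodule update --init --recursive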
Configuration
The Intel® Smart Edge Open default provisioning process is designed not to require any special configuration steps. The provisioning scripts should work without any configuration options specified. In some environments, however, it may be necessary to customize some parameters. For this purpose, the operator can set the most common ones using the command-line interface. For details, see the Command Line Arguments section.
If there is a need to adjust configuration parameters exposed by the configuration file, then the Custom Provisioning Scenario should be followed.
Artifacts Building
To build the provisioning services, run the following command from the root directory of the Private Wireless Experience Kit
repository. You can also use command-line arguments like --registry-mirror to specify some typical options:
[Provisioning System] # ./pwek_aio_provision.py
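For example, to add a local Docker registry mirror during the build (the URL is a placeholder; see the Docker Pull Rate Limit section for details):
[Provisioning System] # ./pwek_aio_provision.py --registry-mirror=http://example.local:5000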
Services Start-Up
Start the provisioning services:
[Provisioning System] # ./pwek_aio_provision.py --run-esp-for-usb-boot
Installation Media Flashing
To flash the installation image onto the flash drive, insert the drive into a USB port on the provisioning system. The image can be flashed in one of two ways.
- The first way is to use the flashusb.sh script provided by ESP:
[Provisioning System] # cd esp/
[Provisioning System] # ./flashusb.sh --image ../out/Smart_Edge_Open_Private_Wireless_Experience_Kit-efi.img --bios efi
The command should present an interactive menu allowing the selection of the destination device. You can also use the
--dev option to explicitly specify the device.
- The second way is to flash the image manually with dd. First, list the available block devices to identify the inserted drive:
[Provisioning System] # lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 31.1M 1 loop /snap/snapd/10707
loop1 7:1 0 69.9M 1 loop /snap/lxd/19188
loop2 7:2 0 55.4M 1 loop /snap/core18/1944
sda 8:0 0 1.8T 0 disk
├─sda1 8:1 0 512M 0 part /boot/efi
├─sda2 8:2 0 1G 0 part /boot
└─sda3 8:3 0 1.8T 0 part
├─ubuntu--vg-ubuntu--lv-real 253:0 0 880G 0 lvm
│ ├─ubuntu--vg-ubuntu--lv 253:1 0 880G 0 lvm /
│ └─ubuntu--vg-clean 253:3 0 880G 0 lvm
└─ubuntu--vg-clean-cow 253:2 0 400G 0 lvm
└─ubuntu--vg-clean 253:3 0 880G 0 lvm
sdb 8:16 0 1.8T 0 disk
sdc 8:32 1 57.3G 0 disk
├─sdc1 8:33 1 1.1G 0 part
├─sdc2 8:34 1 3.9M 0 part
└─sdc3 8:35 1 56.2G 0 part
The command lists all available block devices. Identify which one is the inserted USB drive, e.g., /dev/sdc, and run the following commands:
[Provisioning System] # cd esp/
[Provisioning System] # USBBLK="/dev/sdc"
[Provisioning System] # MBR_LOCATION="data/usr/share/nginx/html/mbr.bin"
[Provisioning System] # IMAGE="../out/Smart_Edge_Open_Private_Wireless_Experience_Kit-efi.img"
[Provisioning System] # USB_IMG_SIZE=$(du -b ${IMAGE} | awk '{print $1}')
[Provisioning System] # dd if=${IMAGE} status=none | pv -s ${USB_IMG_SIZE} | dd obs=1M oflag=direct status=none of=${USBBLK}
[Provisioning System] # dd bs=440 count=1 conv=notrunc status=none if=${MBR_LOCATION} of=${USBBLK}
Note: /dev/sdc is an example name of the USB flash drive. Use the lsblk command to find the right device on your system.
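Before removing the drive, you can flush any outstanding writes to make sure the image is fully written:
[Provisioning System] # sync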
System Installation
Begin the installation by inserting the flash drive into the target system. Reboot the system, and enter the BIOS to boot from the installation media.
Log Into the System After Reboot
The system will reboot as part of the installation process.
The login screen will display the system’s IP address and the status of the experience kit deployment.
To log into the system, use smartedge-open as both the user name and password.
Check the Status of the Installation
When logging in using a remote console or SSH, a message will be displayed informing about the status of the deployment, for example:
Smart Edge Open Deployment Status: in progress
Three statuses are possible:
- in progress - deployment is in progress
- deployed - deployment was successful; the Private Wireless Experience Kit cluster is ready
- failed - an error occurred during the deployment
Check the installation logs by running the following command:
[Provisioned System] $ sudo journalctl -xefu seo
Alternatively, you can inspect the deployment log found in /opt/seo/logs.
Services Shut Down
Once all the systems are provisioned, shut down the provisioning services:
[Provisioning System] # ./pwek_aio_provision.py --stop-esp
Custom Provisioning Scenario
The custom provisioning scenario is very similar to the default scenario. The only difference is that it uses the configuration file to adjust some of the provisioning parameters.
See the Default Provisioning Scenario for the description of the common stages.
Configuration
Generate a new configuration file as described in the Configuration File Generation section:
[Provisioning System] # ./pwek_aio_provision.py --init-config > custom.yml
Artifacts Building
The provisioning artifacts are built in the same way as in the case of the
default scenario. The only difference is that the custom config file has to be specified using
the --config command-line option:
[Provisioning System] # ./pwek_aio_provision.py --config=custom.yml
Provisioning Configuration
Configuration Methods
The provisioning utility, and consequently the provisioning process, supports two configuration methods:
- Configuration via command line-arguments of the provisioning utility
- Configuration via provisioning configuration YAML file
These methods can be used separately or combined. Configuration options provided as command-line arguments always override the corresponding options provided in the configuration file.
Not all options that can be customized in the configuration file can also be set using command-line arguments. However, the provisioning script is designed to allow the deployment of a standard experience kit cluster using the command-line options only.
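For example, assuming a custom configuration file custom.yml exists, the following invocation uses the file but lets the command-line argument override the GitHub user it defines:
[Provisioning System] # ./pwek_aio_provision.py --config=custom.yml --github-user=<user-name>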
Command Line Arguments
For the description of the options available for the command line use, see the provisioning utility help:
[Provisioning System] # ./pwek_aio_provision.py -h
Configuration File Generation
To generate a custom configuration file, use the --init-config option of the provisioning utility. When handling
this option, the utility prints the experience kit’s default configuration in YAML format to the standard output.
Redirect the output to a file of choice to keep it for further use:
[Provisioning System] # ./pwek_aio_provision.py --init-config > custom.yml
The operator can then modify the file to adjust the needed options. To instruct the provisioning utility to use the custom
configuration file, use the --config option, e.g.:
[Provisioning System] # ./pwek_aio_provision.py --config=custom.yml
Configuration File Summary
For the description of the options available in the configuration file, see the comments within the generated configuration file.
Experience Kit Configuration
For each provisioned experience kit, it is possible to adjust its configuration and specify user-provided files if
needed for a specific deployment variant. Both of these configuration adjustments are currently possible
through the configuration file only. For each of the provisioned experience kits (an item
of the profiles list), it is possible to set its deployment variables through the group_vars and host_vars
objects, and the list of operator-provided files through the sideload list:
profiles:
  - name: Smart_Edge_Open_Private_Wireless_Experience_Kit
    […]
    group_vars:
      groups:
        all:
        controller_group:
        edgenode_group:
    host_vars:
      hosts:
        controller:
        node01:
    sideload: []
The experience kit configuration variables specified in the provisioning configuration override the default values provided by the experience kit, so there is no need to adjust them in the default provisioning scenario.
The operator-provided files specified in the sideload list are read from a local location, copied to the
provisioning artifacts, and finally to the provisioned system.
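A minimal sketch of such an adjustment follows; the variable name and its value are hypothetical and serve only as an illustration (the comments in the generated configuration file describe the exact set of supported variables and the schema of the sideload entries):
profiles:
  - name: Smart_Edge_Open_Private_Wireless_Experience_Kit
    group_vars:
      groups:
        all:
          # hypothetical variable overriding an experience kit default
          example_feature_enable: true
    sideload: []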
GitHub Credentials
Access to some of the experience kits may be limited and controlled using Git credentials. In such a case, the operator has to provide these credentials to the provisioning script.
The first method of providing them is through the github object of a custom
configuration file:
github:
user: '<user-name>'
token: '<user-token>'
The second method is to use the GitHub credentials options of the provisioning script:
[provisioning system] # ./pwek_aio_provision.py -h
[…]
--github-user NAME NAME of the GitHub user to be used to clone required Smart Edge Open repositories
--github-token VALUE GitHub token to be used to clone required Smart Edge Open repositories
[…]
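For example:
[provisioning system] # ./pwek_aio_provision.py --github-user <user-name> --github-token <user-token>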
The credentials are used during the execution of the provisioning script (e.g., pwek_aio_provision.py) as well as in other contexts, like
the provisioning services containers and the installer system, so the operator has to provide them explicitly.
The script will try to verify that it can access all the repositories specified through the configuration file and fail if they cannot be accessed anonymously or with the operator-provided credentials. This verification doesn’t cover every case, so ultimately it is the operator’s responsibility to provide the credentials if needed.
The scenario in which different repositories can be accessed using different credentials is currently not supported. All the repositories must be either public or available for the specific user.
Docker Pull Rate Limit
Local Docker registry mirrors or Docker Hub credentials can be used to mitigate the consequences of the Docker Hub pull rate limit.
Registry Mirror
It is the operator’s responsibility to provide a working Docker registry mirror. When it is up and running, its URL can be provided to the provisioning script.
The first method of providing it is through the docker object of a custom
configuration file:
docker:
registry_mirrors: ['http://example.local:5000']
The second method is to use the --registry-mirror option of the provisioning script:
[provisioning system] # ./pwek_aio_provision.py -h
[…]
--registry-mirror URL
add the URL to the list of local Docker registry mirrors
[…]
If the custom configuration file contains some registry_mirrors list items, then the URL specified using the
--registry-mirror option will be appended to the end of the list.
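For example, if custom.yml already defines one mirror, the following invocation results in a two-element mirror list, with the command-line URL appended after the one from the file (both URLs are placeholders):
[provisioning system] # ./pwek_aio_provision.py --config=custom.yml --registry-mirror http://second.example.local:5000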
It is important to note that the provisioning script won’t configure the provisioning system to use the registry mirrors; it is the operator’s responsibility to set them up. The script only takes care of the configuration of the installer and the provisioned system.
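As a sketch, the provisioning system itself can be pointed at the mirror through the Docker daemon configuration file (the URL is a placeholder; see the official Docker documentation for the authoritative procedure):
[Provisioning System] # cat /etc/docker/daemon.json
{
    "registry-mirrors": ["http://example.local:5000"]
}
[Provisioning System] # systemctl restart docker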
Docker Hub Credentials
The provisioning script allows specifying Docker Hub credentials to be used by the installer when pulling images from Docker Hub.
The first method of providing it is through the docker object of a custom
configuration file:
docker:
dockerhub:
username: "<user-name>"
password: "<user-password>"
The second method is to use the Docker Hub credentials options of the provisioning script:
[provisioning system] # ./pwek_aio_provision.py -h
[…]
--dockerhub-user NAME
NAME of the user to authenticate with DockerHub during Live System stage
--dockerhub-pass VALUE
Password used to authenticate with DockerHub during Live System stage
[…]
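For example:
[provisioning system] # ./pwek_aio_provision.py --dockerhub-user <user-name> --dockerhub-pass <user-password>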
Only the installer will use these credentials; they won’t affect the provisioning system or the provisioned systems.
Troubleshooting
Docker has to be installed
Problem
One of the following error messages may appear when you attempt to run the provisioning script (e.g.,
pwek_aio_provision.py):
Docker has to be installed on this machine for the provisioning process to succeed. […]
Docker CLI has to be installed on this machine for the provisioning process to succeed. […]
The containerd.io runtime has to be installed on this machine for the provisioning process to succeed. […]
Solution
Install the Docker software according to the official instructions: Install Docker Engine on Ubuntu.
The docker-compose tool has to be installed
Problem
The following error message appears when you attempt to run the provisioning script (e.g., pwek_aio_provision.py):
The docker-compose tool has to be installed on this machine for the provisioning process to succeed. […]
Solution
Install the docker-compose tool according to the official instructions:
Install Docker Compose.
Failed to confirm that Docker is configured correctly
Problem
Basic Docker commands have failed to confirm that Docker is installed or has Internet connectivity.
Solution
Verify that both Docker and Docker Compose are installed. See the official Docker documentation for Docker and Docker Compose installation.
Verify the network connection. If a proxy is used, set up the proxy for the Docker daemon and the Docker client.
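A minimal sketch of a proxy setup for the Docker daemon via a systemd drop-in follows (the proxy URL is a placeholder; the official Docker documentation describes the full procedure, including the client-side proxy configuration in ~/.docker/config.json):
[Provisioning System] # mkdir -p /etc/systemd/system/docker.service.d
[Provisioning System] # cat > /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://proxy.example.local:3128"
Environment="HTTPS_PROXY=http://proxy.example.local:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
EOF
[Provisioning System] # systemctl daemon-reload
[Provisioning System] # systemctl restart docker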
To investigate further, run the reported failing command manually.
Installation image couldn’t be found in the expected location
Problem
The following error is displayed when you attempt to run the provisioning script (e.g., pwek_aio_provision.py):
ERROR: Installation image couldn't be found in the expected location
<img-file-path>
Solution
Retry the build attempt by rerunning the same provisioning command:
[Provisioning System] # ./pwek_aio_provision.py […]
If it doesn’t help, retry the build one more time with the --cleanup flag added. This option forces a complete
rebuild.
[Provisioning System] # ./pwek_aio_provision.py --cleanup […]
ESP script failed
Problem
One of the following error messages is displayed, and the execution of a provisioning script stops:
ERROR: ESP script failed: build.sh
ERROR: ESP script failed: makeusb.sh
ERROR: ESP script failed: run.sh
Solution
Retry the build attempt by rerunning the same provisioning command with the --cleanup flag added:
[Provisioning System] # ./pwek_aio_provision.py --cleanup […]
If it doesn’t help, you can inspect the ESP logs, typically located in the ./esp/builder.log file.