
Hypervisor

Hypervisor Overview

 

A hypervisor, also known as a virtual machine monitor (VMM) or virtualizer, is a type of computer software, firmware or hardware that creates and runs virtual machines. 

​

  • Host machine: the computer on which a hypervisor runs one or more virtual machines

  • Guest machine: each virtual machine

 

The hypervisor presents the guest operating systems with a virtual operating platform and manages the execution of the guest operating systems. Unlike an emulator, the guest executes most instructions on the native hardware. Multiple instances of a variety of operating systems may share the virtualized hardware resources: for example, Linux, Windows, and macOS instances can all run on a single physical machine. 


This contrasts with operating-system–level virtualization, where all instances (usually called containers) must share a single kernel, though the guest operating systems can differ in user space, such as different Linux distributions with the same kernel.

​

The term hypervisor is a variant of supervisor, a traditional term for the kernel of an operating system: the hypervisor is the supervisor of the supervisors.

​

Most-used hypervisors:

​

  • Hyper-V

  • VirtualBox

  • VMware

  • KVM (Kernel-based Virtual Machine)

  • Proxmox VE (Virtual Environment) 

​

Type 1 Hypervisor

​

A type 1 hypervisor, or a bare metal hypervisor, interacts directly with the underlying machine hardware. A bare metal hypervisor is installed directly on the host machine’s physical hardware, not through an operating system. In some cases, a type 1 hypervisor is embedded in the machine’s firmware.

​

The type 1 hypervisor negotiates directly with server hardware to allocate dedicated resources to VMs. It can also flexibly share resources, depending on various VM requests.

​

Type 2 Hypervisor

​

A type 2 hypervisor, or hosted hypervisor, interacts with the underlying host machine hardware through the host machine’s operating system. You install it on the machine, where it runs as an application.

​

The type 2 hypervisor negotiates with the operating system to obtain underlying system resources. However, the host operating system prioritizes its own functions and applications over the virtual workloads.

​

Type 1 (Bare Metal) vs Type 2 (Hosted)​

​

Type 1 Hypervisor

  • Also Known as: Bare Metal Hypervisor

  • Runs on: Underlying physical host machine hardware

  • Best Suited for: Large, resource-intensive, or fixed-use workloads

  • Can it negotiate dedicated resources?: Yes

  • Knowledge Required: System administrator-level knowledge

  • Examples: VMware ESXi, KVM, Microsoft Hyper-V

​​

Type 2 Hypervisor

  • Also Known as: Hosted hypervisor

  • Runs on: Underlying operating system (host OS)

  • Best Suited for: Desktop and development environments

  • Can it negotiate dedicated resources?: No

  • Knowledge Required: Basic user knowledge

  • Examples: VMware Workstation, Oracle VM VirtualBox, Microsoft Virtual PC

​

KVM (Kernel-based Virtual Machine)

 

KVM is an open-source virtualization technology that turns the Linux kernel into a hypervisor. It is an alternative to proprietary virtualization technologies, such as those offered by VMware.

​

VMWare

​

  • VMware ESXi is a bare-metal hypervisor that installs directly onto a physical server. With direct access to and control of the underlying resources, VMware ESXi partitions hardware to consolidate applications and cut costs.

  • VMware Fusion is a hosted (type 2) hypervisor for macOS that runs Windows and other operating systems as virtual machines on a Mac.

WSL2 (Windows Subsystem for Linux) | Linux Distros

 

Installing Linux Distros on WSL2 (Windows Subsystem for Linux)

​​

  • Enable Virtualization in BIOS

  • Enable Hyper-V / Virtual Machine Platform in Windows Features

  • Install Windows Subsystem For Linux

    • Method 1: Microsoft store

    • Method 2: WSL Command line interface

    • Method 3: Winget

​

Method 2: WSL Command line interface

​

The same distros available in the Microsoft Store can be installed directly from the command line using:

 

  • PowerShell terminal

  • Command Prompt "CMD"

  • Windows Terminal (recommended, but it must be downloaded separately)

​

Installing WSL using PowerShell

​​

  • Run PowerShell as Administrator 

  • To see all available distros: wsl --list --online

    • Or simply the command: wsl -l -o

  • ​(Optional) See all your currently installed distros: wsl -l -v

  • To install the Distro: wsl --install -d <DistroName>

    • Example 1: wsl --install -d Ubuntu

    • Example 2: wsl --install -d Debian

    • Example 3: wsl --install -d kali-linux

    • Example 4: wsl --install -d Ubuntu-24.04

    • Example 5: wsl --install -d SUSE-Linux-Enterprise-15-SP5

  • When finished, you will be prompted to create:

    • Username

    • Password

      • They don’t need to match your Windows user credential​​

​

PowerShell Commands

​

  • Accessing Default Distro | Command: wsl

  • Accessing a specific Distro | Command: wsl -d <DistroName>

    • Example: wsl -d Ubuntu

  • List of Distros installed | Command: wsl --list​

  • See all Commands | Command: wsl --help

  • See all your currently installed distros and which version of WSL they are using | Command: wsl -l -v

  • Updating WSL | Command: wsl --update

  • Shutting down WSL | Command: wsl --shutdown 

  • Uninstalling a Distro | Command: wsl --unregister <DistroName>

    • Example: wsl --unregister Ubuntu

​

Issues

​

  • Error '0x8000ffff' | If you are facing the '0x8000ffff' error in the installation: "An error occurred during installation. Distribution Name: 'Ubuntu' Error Code: 0x8000ffff"

    • You can try to update the WSL to solve it | Command: wsl --update

  • Virtualization Issues | You can check whether virtualization is enabled with the following command

    • Open PowerShell | Command: systeminfo.exe (check the "Hyper-V Requirements" section of the output)

  • Reinstall WSL (Windows Subsystem for Linux) on Windows Server 2022 | If you need to reinstall WSL to fix unknown errors:

    • Go to 'Remove Roles and Features'

    • Remove 'Windows Subsystem for Linux Service'

    • Reboot the system

    • Add 'Windows Subsystem for Linux Service' again via 'Add Roles and Features'

Windows Terminal | Installation

​​

Windows Clients (Windows 10, Windows 11)

 

Microsoft Store Link: https://aka.ms/terminal

​

Windows Server (Windows Server 2022)

​

Since Windows Server no longer supports the Microsoft Store, you need to install it with PowerShell (downloading the package from Microsoft's GitHub).

​

Methods:

  • GitHub

  • Winget

  • Chocolatey

  • Scoop

​

GitHub Method

​

  • Prerequisite: install the "C++ Runtime framework packages for Desktop Bridge". Microsoft provides these C++ Runtime framework packages so that desktop applications distributed through the Microsoft Store can reference the C++ runtime.

    • Command to Download: Invoke-WebRequest -Uri https://aka.ms/Microsoft.VCLibs.x64.14.00.Desktop.appx -outfile Microsoft.VCLibs.x64.14.00.Desktop.appx

      • Version: x64

      • Parameter Uri: Location of the package

      • Parameter -OutFile: name of the local file to create

      • Outcome: the file is created in the current directory (for example, if you run the command in C:\Users\Administrator, the file will be there)

    • Command to Execute the installation: Add-AppxPackage Microsoft.VCLibs.x64.14.00.Desktop.appx
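
The Windows Terminal package itself still needs to be downloaded and registered; the GitHub release asset is not named in this guide, so the URL and file name below are placeholders to replace with the current .msixbundle from the microsoft/terminal releases page:

---
# Download the Windows Terminal package (replace <msixbundle-release-url> with the
# current .msixbundle asset from https://github.com/microsoft/terminal/releases)
Invoke-WebRequest -Uri <msixbundle-release-url> -OutFile Microsoft.WindowsTerminal.msixbundle

# Register the downloaded package
Add-AppxPackage Microsoft.WindowsTerminal.msixbundle
---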

​

Hyper-V | Virtual Machine with Nested Virtualization

​​

Nested Virtualization is a feature that allows you to run Hyper-V inside of a Hyper-V virtual machine (VM).

​

Prerequisites:

  • Intel processor with VT-x and EPT technology

    • The Hyper-V host must be either Windows Server 2016 or later, or Windows 10 or later

    • VM configuration version 8.0 or higher

  • AMD EPYC / Ryzen processor or later

    • The Hyper-V host must be either Windows Server 2022 or later, or Windows 11 or later

    • VM configuration version 9.3 or higher

​

Enable Nested Virtualization

  • Create a virtual machine

  • While the virtual machine is in the OFF state

  • Open Powershell

  • Command: Set-VMProcessor -VMName <VMName> -ExposeVirtualizationExtensions $true

  • Start the virtual machine
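
A minimal sketch of the whole sequence from an elevated PowerShell prompt on the Hyper-V host, assuming a VM named "NestedLab" (hypothetical name):

---
# The VM must be off before the virtualization extensions can be exposed
Stop-VM -Name NestedLab

# Expose the virtualization extensions to the guest
Set-VMProcessor -VMName NestedLab -ExposeVirtualizationExtensions $true

# Start the VM again; Hyper-V can now be installed inside the guest
Start-VM -Name NestedLab
---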

​

Disable Nested Virtualization

  • While the virtual machine is in the OFF state

  • Open Powershell

  • Command: Set-VMProcessor -VMName <VMName> -ExposeVirtualizationExtensions $false

​

Networking options

There are two options for networking with nested virtual machines:

  • MAC address spoofing

  • NAT networking

​

MAC address spoofing
In order for network packets to be routed through two virtual switches, MAC address spoofing must be enabled on the first (L1) level of virtual switch. To enable MAC address spoofing, run the following PowerShell command.

  • Open Powershell

  • Command: Get-VMNetworkAdapter -VMName <VMName> | Set-VMNetworkAdapter -MacAddressSpoofing On

​

NAT Networking | Network Address Translation (NAT)
The second option relies on network address translation (NAT). This approach is best suited for cases where MAC address spoofing isn't possible, like in a public cloud environment.

First, a virtual NAT switch must be created in the host virtual machine (the "middle" VM). The following example creates a new internal switch named VmNAT and creates a NAT object for all IP addresses in the 192.168.100.0/24 subnet.

  • Open Powershell

  • Command: New-VMSwitch -Name VmNAT -SwitchType Internal

  • Command: New-NetNat -Name LocalNAT -InternalIPInterfaceAddressPrefix "192.168.100.0/24"

Next, assign an IP address to the net adapter:

  • Command: Get-NetAdapter "vEthernet (VmNat)" | New-NetIPAddress -IPAddress 192.168.100.1 -AddressFamily IPv4 -PrefixLength 24

Each nested virtual machine must have an IP address and gateway assigned to it. The gateway IP must point to the NAT adapter from the previous step. You may also want to assign a DNS server:

  • Command: Get-NetAdapter "vEthernet (VmNat)" | New-NetIPAddress -IPAddress 192.168.100.2 -DefaultGateway 192.168.100.1 -AddressFamily IPv4 -PrefixLength 24

  • Command: netsh interface ip add dnsserver "vEthernet (VmNat)" address=<my DNS server>

Docker | Virtualization

​​

What is Docker?

​

Docker is a platform that helps developers build, test, and deploy applications quickly and efficiently.
 

  • Containers | Docker uses containers, which are standardized components that combine application code with the operating system (OS) libraries and dependencies

  • Isolation | Docker's virtualization facilities isolate applications from the operating environment, including the network, process trees, user IDs, and mounted file systems

  • Portability | Docker containers are lightweight and portable, so they can be easily deployed across different environments

  • Efficiency | Docker containers can run efficiently in different environments, and multiple containers can run on the same hardware

  • Speed | Docker can significantly reduce the time between writing code and running it in production

  • Cloud | Docker containers can operate the same way whether deployed on premises or using cloud computing technology

  • Hybrid Cloud | Docker apps that use multiple cloud offerings can be considered hybrid cloud or multi-cloud

 

Virtualization | Is Docker a Hypervisor?

​

No, Docker is not a hypervisor, but it does use the Docker Engine to power virtualization.

 

Hypervisors and Docker are not the same thing and cannot be used interchangeably. People often confuse the two because both relate to virtualization.

​

  • Docker | An open platform that helps developers build, ship, and run applications. Docker containers are isolated environments that include the dependencies needed to run an application. Docker is installed on each server and provides commands for building, starting, and stopping containers

  • Hypervisor | Software that creates, manages, and runs virtual machines (VMs). A single hypervisor can manage multiple VMs on a single server, with each VM running hundreds of applications. There are two types of hypervisors: type 1, also known as a bare metal hypervisor, runs directly on the host hardware, while type 2 is a hosted hypervisor

Docker | Install Docker Engine

​​​

​This section describes how to install Docker Engine on Linux, also known as Docker CE. Docker Engine is also available through Docker Desktop for:

  • Windows

  • macOS

  • Linux

​

Docker Engine vs Docker Desktop

​

  • Docker Engine is the core technology that powers Docker containers

  • Docker Desktop for Linux provides a user-friendly graphical interface that simplifies the management of containers and services.

    • It includes Docker Engine as this is the core technology

    • It also comes with additional features like Docker Scout and Docker Extensions

​

Docker on Linux | Supported Platforms

​

  • CentOS | x86_64 / amd64 | arm64 / aarch64 | ppc64le

  • Debian | x86_64 / amd64 | arm64 / aarch64 | arm (32-bit) | ppc64le

  • Fedora | x86_64 / amd64 | arm64 / aarch64 | ppc64le

  • Raspberry Pi OS (32-bit) | arm (32-bit)

  • RHEL | x86_64 / amd64 (Experimental) | arm64 / aarch64 (Experimental) | s390x

  • SLES | s390x

  • Ubuntu | x86_64 / amd64 | arm64 / aarch64 | arm (32-bit) | ppc64le | s390x

  • Binaries | x86_64 / amd64 | arm64 / aarch64 | arm (32-bit)

​

Docker Engine on Linux 64-bit (Ubuntu)

​

Requirements

​

  • Operational System:

    • Ubuntu Noble 24.04 (LTS)

    • Ubuntu Jammy 22.04 (LTS)

    • Ubuntu Focal 20.04 (LTS)

  • Old Docker versions: you need to uninstall them first

  • Firewall:

    • Docker is only compatible with "iptables-nft" and "iptables-legacy"

    • Make sure that any firewall rulesets you use are created with iptables or ip6tables

    • If you use ufw or firewalld to manage firewall settings, be aware that when you expose container ports using Docker, these ports bypass your firewall rules

​

Installing on Ubuntu

​

  • Step 1 | Add Docker's official GPG key

    • Update repository | Command: sudo apt-get update

    • Install Certificates | Command: sudo apt-get install ca-certificates curl

    • Create the keyrings directory | Command: sudo install -m 0755 -d /etc/apt/keyrings

    • Downloading | Command: sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc

    • Changing permission | Command: sudo chmod a+r /etc/apt/keyrings/docker.asc

  • Step 2 | Add Docker's apt repository and install the Docker packages

    • Add Docker's repository to your APT sources and refresh the package index (see the consolidated sketch after this list)

    • Install the latest version | Command: sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

  • Step 3 | Verify the Docker Engine installation

    • Run the "hello-world" image | Command: sudo docker run hello-world

      • This command downloads a test image and runs it in a container. When the container runs, it prints a confirmation message and exits
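
For reference, a consolidated sketch of the full installation, based on Docker's official Ubuntu instructions, including the apt repository setup referenced in Step 2:

---
# Step 1: add Docker's official GPG key
sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Step 2: add Docker's apt repository to the APT sources, then install the packages
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Step 3: verify the installation with the hello-world image
sudo docker run hello-world
---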

​

Docker Desktop on Linux 64-bit

​

Requirements

​

  • 64-bit kernel and CPU support for virtualization

  • KVM virtualization support

  • QEMU must be version 5.2 or later

  • systemd init system

  • Gnome, KDE, or MATE Desktop environment

  • At least 4 GB of RAM.

  • Enable configuring ID mapping in user namespaces

​

* QEMU is a generic and open source machine emulator and virtualizer (www.qemu.org)

Docker and Docker Compose | Commands

Command | Description | Type | Example | Comments
docker create <image-name> | Create a Docker container | Container Management | - | -
docker inspect <container-id-or-name> | Inspect a running container | Container Management | - | -
docker kill <container-id-or-name> | Kill a container | Container Management | - | -
docker logs -f <container-id-or-name> | Follow the logs of a container | Container Management | - | -
docker pause <container-id-or-name> | Pause a container | Container Management | - | -
docker ps | List running containers | Container Management | - | -
docker ps -a | List all containers | Container Management | sudo docker ps -a | -
docker rename <container-id-or-name> <new-name> | Rename a container | Container Management | - | -
docker restart <container-id-or-name> | Restart a container | Container Management | - | -
docker rm --force <container_id_or_name> | Stop and remove a container | Container Management | sudo docker rm --force ubuntu | -
docker rm <container-id-or-name> | Remove a stopped container | Container Management | sudo docker rm ubuntu | -
docker run --restart unless-stopped <image-name> | Run a container and automatically (re)start it on boot or after a crash | Container Management | sudo docker run --restart unless-stopped adguard/adguardhome | -
docker run -d <image-name> | Run a container in the background | Container Management | sudo docker run -d adguard/adguardhome | -
docker run -it <image-name> | Run a container in interactive mode | Container Management | - | -
docker run -p <host-port>:<container-port> <image-name> | Run a container with a port mapping | Container Management | sudo docker run -p 53:53/tcp adguard/adguardhome | -
docker run <image-name> | Run a container from an image | Container Management | - | -
docker start <container-id-or-name> | Start an existing container | Container Management | - | -
docker stop <container-id-or-name> | Stop a container | Container Management | - | -
docker unpause <container-id-or-name> | Unpause a container | Container Management | - | -
docker wait <container-id-or-name> | Wait until a running container stops | Container Management | - | -
docker build -t <image-name> . | Build an image from a Dockerfile in the current directory | Image Management | - | -
docker commit <container-name-or-id> <image-name> | Create a Docker image from a running container | Image Management | - | -
docker image prune | Remove all unused images | Image Management | - | -
docker images | List local images | Image Management | - | -
docker rmi <image-name> | Delete a local image | Image Management | - | -
docker run hello-world | Verify the Docker installation | Installation | sudo docker run hello-world | Verify that Docker is successfully installed
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container_name> | Show a container's IP address | Network Management | sudo docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ubuntu | -
docker network connect <docker-network-name> <container-id-or-name> | Connect a container to a Docker network | Network Management | - | -
docker network create --subnet <subnet> --gateway <gateway> <docker-network-name> | Create a Docker network | Network Management | - | -
docker network disconnect <docker-network-name> <container-id-or-name> | Disconnect a container from a Docker network | Network Management | - | -
docker network inspect <docker-network-name> | Inspect a Docker network | Network Management | - | -
docker network ls | List all Docker networks | Network Management | - | -
docker network rm <docker-network-name> | Remove a Docker network | Network Management | - | -

Docker vs Docker Compose

​​

Both docker run and docker compose can be used to run a Docker container, but that is where the similarities end.

​

The key differences between docker run and docker compose:

  • docker run is entirely command-line based and can only start one container at a time

  • docker compose reads its configuration from a YAML file and, once configured, can start multiple containers with a single command

​

​The only requirement for Docker Compose is docker, as docker compose is another Docker client that allows you to define everything in a yaml file.

  • Compose v1 was originally written in Python, so it can be installed in a Python virtual environment. Tutorials written for Compose v1 use the hyphenated syntax "docker-compose"

  • Compose v2 was written in Go and is a plugin for Docker. You use the syntax "docker compose", but you can easily add an alias named "docker-compose" (the Compose v1 command name) if you prefer

​

How it works in practice

​

Imagine you wanted to use an Nginx container to test your website on your desktop computer. There are two available approaches:

​

  • Approach 1 | Docker | Start the Nginx container on the command line with docker run

  • Approach 2 | Docker Compose | Start Nginx with docker-compose up and a preconfigured YAML file

​

Approach 1 | Docker Run Syntax

​

Command: docker run -d --rm --name=my-website --cpus=1.5 --memory=2048m -p 80:80 -v $PWD/website:/usr/share/nginx/html/ nginx:latest

​

Approach 2 | Docker-Compose Syntax

​

The docker-compose command codes all runtime configuration data in an aptly named YAML file called docker-compose.yaml.

A docker-compose.yaml file equivalent to the docker run command above would look like this:

​

---
version: '3.9'

services:
  nginx-service:
    container_name: my-website
    image: nginx:latest
    cpus: 1.5
    mem_limit: 2048m
    ports:
      - "80:80"
    volumes:
      - $PWD/website:/usr/share/nginx/html
---

​

Once the docker-compose.yaml file is configured, the described Docker container can be started simply by running the docker-compose up command in the same folder as the file:

 

Command: docker-compose up -d

PVE | Proxmox Virtual Environment​

​

Proxmox Virtual Environment is a complete, open-source server management platform for enterprise virtualization. It tightly integrates the KVM hypervisor and Linux Containers (LXC), software-defined storage and networking functionality, on a single platform. With the integrated web-based user interface you can manage VMs and containers, high availability for clusters, or the integrated disaster recovery tools with ease.

​​

Features

​

By combining two virtualization technologies on a single platform, Proxmox VE is giving maximum flexibility to your production environment. Use KVM full virtualization for Windows and Linux images, and lightweight containers to run conflict-free Linux applications.

​

Proxmox Commands

Command | Description | Type | Example | Comments
pveam available | Show available templates | Installation | pveam available | Categories: system, turnkeylinux, and mail; the system section provides templates for Ubuntu, Debian, and others
pveam download <storage_node> <template> | Download a template | Installation | pveam download local ubuntu-20.04-standard_20.04-1_amd64.tar.gz | The out-of-the-box storage is "local"; the template name comes from the "pveam available" output
pveam update | Update the list of available appliances and templates | Installation | pveam update | Refreshes the appliance/template index, not Proxmox itself
lxc-info -n <container_id> | Show information (including IP address) of a specific container | Network | lxc-info -n 100 | -
lxc-ls -f | Show IP address(es) of all containers | Network | lxc-ls -f | -
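
Once a template has been downloaded with pveam, it can be turned into a container with pct create. A minimal sketch; the VMID 101, hostname, storage name, and network settings are assumptions to adapt to your environment:

---
# Create an unprivileged container from the downloaded template
pct create 101 local:vztmpl/ubuntu-20.04-standard_20.04-1_amd64.tar.gz \
  --hostname test-ct \
  --memory 1024 \
  --storage local-lvm \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --unprivileged 1

# Start the container and open a shell inside it
pct start 101
pct enter 101
---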

Proxmox | No Subscription

​

The default Proxmox installation comes with the Enterprise repository enabled. This is only useful if you have a subscription.

Since most people don't pay for a subscription, every automatic or manual update will be rejected as unauthorized by the Enterprise repository, and you will see the error:

TASK ERROR: command 'apt-get update' failed: exit code 100

To stop this error, you just need to disable the Enterprise repository and enable the "No-Subscription" repository.

Enabling the No-Subscription Repository

​

  • Select PVE Node, go to Updates > Repository

  • Select the Enterprise Component line

  • Press Disable

  • After that, Press Add

  • Click OK on the subscription pop-up (disabling the repository does not stop this pop-up from appearing)

  • Select "No-Subscription" Repository, and press "Add"

  • Restart the Proxmox

  • Go to PVE Node > Updates 

  • Press "Refresh"

  • The update should now run without errors
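
The same change can also be made from the node's shell instead of the web UI. A sketch, assuming a Proxmox VE release based on Debian Bookworm (adjust the codename and file names to your version):

---
# Disable the enterprise repository by commenting out its entry
sed -i 's/^deb/# deb/' /etc/apt/sources.list.d/pve-enterprise.list

# Add the no-subscription repository
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
  > /etc/apt/sources.list.d/pve-no-subscription.list

# Refresh the package index
apt-get update
---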

Proxmox | Default Firewall Rules

​​

The following traffic is filtered by the default firewall configuration:

​

Datacenter incoming/outgoing DROP/REJECT


If the input or output policy for the firewall is set to DROP or REJECT, the following traffic is still allowed for all Proxmox VE hosts in the cluster:

  • Traffic over the loopback interface

  • Already established connections

  • Traffic using the IGMP protocol

  • TCP traffic from management hosts to port 8006 in order to allow access to the web interface

  • TCP traffic from management hosts to the port range 5900 to 5999 allowing traffic for the VNC web console

  • TCP traffic from management hosts to port 3128 for connections to the SPICE proxy

  • TCP traffic from management hosts to port 22 to allow ssh access

  • UDP traffic in the cluster network to ports 5405-5412 for corosync

  • UDP multicast traffic in the cluster network

  • ICMP traffic type 3 (Destination Unreachable), 4 (congestion control) or 11 (Time Exceeded)

​

The following traffic is dropped, but not logged even with logging enabled:

  • TCP connections with invalid connection state

  • Broadcast, multicast and anycast traffic not related to corosync, i.e., not coming through ports 5405-5412

  • TCP traffic to port 43

  • UDP traffic to ports 135 and 445

  • UDP traffic to the port range 137 to 139

  • UDP traffic from source port 137 to the port range 1024 to 65535

  • UDP traffic to port 1900

  • TCP traffic to port 135, 139 and 445

  • UDP traffic originating from source port 53

​

The rest of the traffic is dropped or rejected, respectively, and also logged. This may vary depending on the additional options enabled in Firewall → Options, such as NDP, SMURFS and TCP flag filtering.

 

  • Inspect the output | Command: iptables-save

​

VM/CT incoming/outgoing DROP/REJECT


This drops or rejects all the traffic to the VMs, with some exceptions for DHCP, NDP, Router Advertisement, MAC and IP filtering depending on the set configuration. The same rules for dropping/rejecting packets are inherited from the datacenter, while the exceptions for accepted incoming/outgoing traffic of the host do not apply.

 

Again, you can use iptables-save (see above) to inspect all rules and chains applied.

​

Logging of firewall rules

​

By default, all logging of traffic filtered by the firewall rules is disabled. To enable logging, the loglevel for incoming and/or outgoing traffic has to be set in Firewall → Options. This can be done for the host as well as for the VM/CT firewall individually. By this, logging of Proxmox VE’s standard firewall rules is enabled and the output can be observed in Firewall → Log. Further, only some dropped or rejected packets are logged for the standard rules (see default firewall rules).

loglevel does not affect how much of the filtered traffic is logged. It changes a LOGID appended as prefix to the log output for easier filtering and post-processing.

​

loglevel is one of the following flags:

    

  • loglevel: nolog | LOGID: — 

  • loglevel: emerg | LOGID: 0

  • loglevel: alert | LOGID: 1

  • loglevel: crit | LOGID: 2

  • loglevel: err | LOGID: 3

  • loglevel: warning | LOGID: 4

  • loglevel: notice | LOGID: 5

  • loglevel: info | LOGID: 6

  • loglevel: debug | LOGID: 7

​

Ports used by Proxmox VE

​

  • Web interface: 8006 (TCP, HTTP/1.1 over TLS)

  • VNC Web console: 5900-5999 (TCP, WebSocket)

  • SPICE proxy: 3128 (TCP)

  • sshd (used for cluster actions): 22 (TCP)

  • rpcbind: 111 (UDP)

  • sendmail: 25 (TCP, outgoing)

  • corosync cluster traffic: 5405-5412 UDP

  • Live migration (VM memory and local-disk data): 60000-60050 (TCP)​

Proxmox | SSH

​

Accessing Proxmox CT (Container) using SSH with Root Login

​

If you can connect to your Proxmox container over SSH but the root password is rejected, it is because root login over SSH is disabled by default in the container and you need to enable it.

​

  • Select the CT and open its console (or, from the PVE node's shell, enter the container with: pct enter <vmid>)

  • Edit the SSH Configuration | Command: nano /etc/ssh/sshd_config

  • Find the line | Line: #PermitRootLogin prohibit-password

  • Change to | Line: PermitRootLogin yes

    • Don't forget to remove "#"

  • Press Ctrl+X and Save it

  • Restart the SSH service | Command: systemctl restart sshd

  • And try to access the Container again | Command: ssh root@<ip-address>

  • Type your "root" password
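
Equivalently, the change can be scripted inside the container instead of editing the file by hand; a minimal sketch:

---
# Allow root login with a password over SSH (run inside the container)
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config

# Restart the SSH service and verify the effective setting
systemctl restart sshd
sshd -T | grep -i permitrootlogin
---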

Proxmox | Executing Script in Containers Remotely

​

How to run a PVE Node Script on to all Containers

​​

You just need to create a script:

Make the desired script available to the containers without copying it into them | Line: exec 3</home/update.sh

Find all Containers | Line: pct list | awk '/[0-9]/{print $1}'

Execute the Script in all available Containers | Line: pct exec "$CT" bash /proc/self/fd/3

​

Example:

---

#!/bin/bash

# Make the script available to the containers (exposed on file descriptor 3)
exec 3</home/update.sh

# Execute the script in all containers
for CT in $(pct list | awk '/[0-9]/{print $1}')
do
    pct exec "$CT" bash /proc/self/fd/3
done

---

Proxmox | Disk Resize

How to resize a disk

​

  • Select the CT Container

  • Go to Resources

  • Select the Disk

  • Go to Volume Action > Resize

  • Enter the amount by which you want to increase the disk (in GiB)

  • Click Resize disk
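
The same operation can also be done from the PVE node's shell with pct resize; a sketch assuming container 103 and its root disk (rootfs):

---
# Grow the root disk of container 103 by 10 GiB
pct resize 103 rootfs +10G

# Confirm the new size
pct config 103 | grep rootfs
---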

Proxmox | Notification | SMTP with GMail

How to setup GMail SMTP

​

  • Select the Datacenter, go to Notification

  • Select Add, and SMTP

  • Fill in the fields below:

    • Endpoint Name | Example: SMTP_Gmail

    • Enable | Check it

    • Server | GMail SMTP Server: smtp.gmail.com

    • Port | Default 465

    • Authenticate | Check it

    • Username | Your email account | <your-account@gmail.com>

    • Password | Gmail App Password (this is not your regular Gmail password)

    • From Address | Your email account | <your-account@gmail.com>

    • Recipient(s) | Select your user (configured previously)

  • Press Add

Testing the GMail SMTP

​

  • Select the Datacenter, go to Notification

  • Select the new SMTP created, and click "Test"

  • Click "Yes" on the pop-up

  • If the configuration was done correctly, you will receive a confirmation email

Proxmox | CRON (Job Scheduler)

​

The cron command-line utility is a job scheduler on Linux-like operating systems. Cron is most suitable for scheduling repetitive tasks. Users who set up and maintain software environments use cron to schedule jobs, also known as cron jobs, to run periodically at fixed times, dates, or intervals.

​

Hint: the job definition looks complicated at first glance, but it is simpler than it seems.

​

Job Definition


┌ --------------- Minute (0 - 59)
| ┌ ------------ Hour (0 - 23)
| | ┌ --------- Day of Month (1 - 31)
| | | ┌ ------ Month (1 - 12) OR jan,feb,mar,apr ...
| | | | ┌ ---- Day of Week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
| | | | |
* * * * * Command to be executed

​​

Examples of Job Definition

​

0 0 1 1 * | Run once a year at midnight of 1 January
0 0 1 * * | Run once a month at midnight of the first day of the month
0 0 * * 0 | Run once a week at midnight on Sunday
0 0 * * * | Run once a day at midnight
0 * * * * | Run once an hour at the beginning of the hour

​

Examples of whole syntax

​

Start Container 103 every day at 06:00 AM | Line: 0 6 * * * /usr/sbin/pct start 103

Stop Container 103 every day at 11:00 PM | Line: 0 23 * * * /usr/sbin/pct stop 103
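
Put together, the corresponding crontab entries (added with crontab -e on the PVE node, as described below) would look like this:

---
# m h dom mon dow  command
# Start Container 103 every day at 06:00 AM
0 6 * * * /usr/sbin/pct start 103

# Stop Container 103 every day at 11:00 PM
0 23 * * * /usr/sbin/pct stop 103
---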

​

How to start a CT (Container) using CRON correctly

​

  • On the Top Node (PVE Node), open Shell terminal

  • Check your VMID | Command: pct list

  • Get the VMID of your (CT) Container

    • Example: 103

  • Open the Crontab | Command: crontab -e​

  • Select the editor | "1" = "/bin/nano <--- easiest"

  • Add this line at the end of the file | 0 6 * * * /usr/sbin/pct start 103

    • This line starts Container 103 every day at 06:00 AM

    • Explanation: <Job Definition> <Command> | <0 6 * * *> </usr/sbin/pct start 103>

      • Command: </usr/sbin/pct> <start> <103> | <Manage Linux Container Tool> <Action> <VMID Container ID> 

  • Save it | Ctrl+X, Select "Yes" and Press enter​

  • You will see "crontab: installing new crontab"

  • You don't need to do anything else

References: Microsoft (www.microsoft.com); GitHub (github.com); Proxmox (www.proxmox.com)
