Monday, 2 December 2024

Create a directory from an existing disk in a Proxmox server

Scenario:
 I have an NVMe disk with an ext4 partition that was previously used as a directory in a Proxmox server. After reinstalling Proxmox, I’m unable to add the drive as a directory because the disk selection page shows the message "No disks unused." I need to retain the data on the drive since it contains backups of all the VMs and cannot be formatted.



The issue occurs because Proxmox VE's disk selection only offers blank, unused disks; a drive that already carries a partition and filesystem is not listed. To safely add the drive back as a directory without losing the data, follow these steps:

Steps to Add the Existing Disk

  1. Verify the Filesystem and Mount the Disk

    • SSH into your Proxmox server.

    • Identify your NVMe drive and its partition using:


      lsblk

      or

      fdisk -l

      Look for the partition (e.g., /dev/nvme0n1p1).

    • Check the filesystem to ensure it's still intact:


      file -s /dev/nvme0n1p1

      If it shows as an ext4 filesystem, you're good to proceed.

    • Mount the partition to a temporary directory to confirm its contents:


      mkdir /mnt/temp
      mount /dev/nvme0n1p1 /mnt/temp
      ls /mnt/temp

      Ensure you see the backup files.



  2. Create a Permanent Mount Point

    • Decide where you want to mount the drive. For example:


      mkdir /mnt/nvme-backups
    • Edit /etc/fstab to automatically mount the partition on boot:


      nano /etc/fstab

      Add an entry similar to this:


      /dev/nvme0n1p1 /mnt/nvme-backups ext4 defaults 0 2

      Replace /dev/nvme0n1p1 with your actual device path.


    • Mount the disk:


      mount -a
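As a more robust alternative, the fstab entry can reference the filesystem's UUID instead of the device path, since /dev/nvme* names can change between boots. A sketch (the UUID below is a made-up placeholder; read the real one with blkid):

```shell
# On the Proxmox host, get the real UUID with: blkid -s UUID -o value /dev/nvme0n1p1
# The value below is a placeholder for illustration only.
UUID="3f1c9a2e-7b4d-4e61-9c0d-0123456789ab"
printf 'UUID=%s /mnt/nvme-backups ext4 defaults 0 2\n' "$UUID"
# Append the printed line to /etc/fstab in place of the device-path entry, then run: mount -a
```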

       


  3. Add the Directory to Proxmox Storage

    • Go to the Proxmox web interface.
    • Navigate to Datacenter > Storage > Add > Directory.
    • In the "Directory" field, input the mount point path (e.g., /mnt/nvme-backups).
    • Select the desired content types (e.g., VZDump backup file for backups).
  4. Test the Setup

    • Check if the backups are accessible in Proxmox.
    • Ensure the directory is listed in Datacenter > Storage and shows the correct size and usage.
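The same storage entry can also be created from the shell with pvesm; a minimal sketch, assuming the mount point above (the storage ID "nvme-backups" is an example):

```shell
# Register the mounted directory as a Proxmox storage of type "dir" for backups
pvesm add dir nvme-backups --path /mnt/nvme-backups --content backup

# List all storages with their size and usage to confirm it was added
pvesm status
```

These commands must run on the Proxmox host itself.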

Sunday, 1 December 2024

Change the static IP of an XCP-ng server



SSH to the server.

sudo -s

Type the password for the xoa user.

$ xoa network static

? Static IP for this machine 192.168.0.10/24
? Gateway 192.168.0.1
? IP of the DNS server 8.8.8.8




Saturday, 2 November 2024

Create a Windows VM in Azure using Terraform




Creating a Windows VM in Azure using Terraform is a great way to manage your infrastructure as code. Here's a step-by-step guide to help you get started:

Step 1: Prerequisites

  1. Azure Subscription: Ensure you have an active Azure subscription.

  2. Terraform: Install Terraform on your local machine.

  3. Azure CLI: Install and configure the Azure CLI.

Step 2: Create Terraform Configuration Files

Create a directory for your Terraform configuration files and navigate to it:

sh
mkdir terraform-windows-vm
cd terraform-windows-vm

Create a main.tf file and add the following Terraform code:

hcl
# Provider configuration
provider "azurerm" {
  features {}
}

# Resource group
resource "azurerm_resource_group" "rg" {
  name     = "terraform-windows-vm-rg"
  location = "East US"
}

# Virtual network
resource "azurerm_virtual_network" "vnet" {
  name                = "terraform-vnet"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
}

# Subnet
resource "azurerm_subnet" "subnet" {
  name                 = "terraform-subnet"
  address_prefixes     = ["10.0.1.0/24"]
  virtual_network_name = azurerm_virtual_network.vnet.name
  resource_group_name  = azurerm_resource_group.rg.name
}

# Public IP
resource "azurerm_public_ip" "public_ip" {
  name                = "terraform-public-ip"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  allocation_method   = "Dynamic"
}

# Network interface
resource "azurerm_network_interface" "nic" {
  name                = "terraform-nic"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.subnet.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.public_ip.id
  }
}

# Storage account
resource "azurerm_storage_account" "storage" {
  name                     = "terraformstorage" # must be globally unique: lowercase letters and digits only
  location                 = azurerm_resource_group.rg.location
  resource_group_name      = azurerm_resource_group.rg.name
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

# Virtual machine
resource "azurerm_windows_virtual_machine" "vm" {
  name                  = "terraform-vm"
  location              = azurerm_resource_group.rg.location
  resource_group_name   = azurerm_resource_group.rg.name
  network_interface_ids = [azurerm_network_interface.nic.id]
  size                  = "Standard_D2_v4"
  admin_username        = "adminuser"
  admin_password        = "Password1234!" # use a variable or Key Vault in real deployments
  computer_name         = "terraform-vm"

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
    disk_size_gb         = 30
  }

  source_image_reference {
    publisher = "MicrosoftWindowsServer"
    offer     = "WindowsServer"
    sku       = "2022-datacenter"
    version   = "latest"
  }
}

Step 3: Initialize Terraform

Run the following command to initialize Terraform and download the required providers:

sh
terraform init

Step 4: Create Execution Plan

Run the following command to create an execution plan:

sh
terraform plan

Step 5: Apply the Execution Plan

Run the following command to apply the execution plan and create the resources in Azure:

sh
terraform apply

Step 6: Verify the Results

Once the resources are created, you can verify the results by checking the Azure portal or using Azure CLI commands.
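For the CLI route, one option is az vm show; a sketch using the resource names from main.tf above (requires an authenticated Azure CLI session):

```shell
# Show the VM's power state and public IP (names match the Terraform config above)
az vm show \
  --resource-group terraform-windows-vm-rg \
  --name terraform-vm \
  --show-details \
  --query "{name:name, powerState:powerState, publicIps:publicIps}" \
  --output table
```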




Sunday, 1 September 2024

Task and Thread in C#

 





In C#, threads and tasks are both used for asynchronous programming and parallel execution, but they serve different purposes and provide different levels of abstraction and control. Let's explore the differences between them:

Threads

  1. Low-Level Concept:

    • A thread is the basic unit of execution within a process.

    • It represents a separate path of execution in the application.

  2. Creation:

    • Threads can be created and managed using the System.Threading.Thread class.

    • Example:

      csharp
      using System;
      using System.Threading;
      
      class Program
      {
          static void Main()
          {
              Thread thread = new Thread(new ThreadStart(DoWork));
              thread.Start();
          }
      
          static void DoWork()
          {
              Console.WriteLine("Work on a separate thread");
          }
      }
      
  3. Control:

    • You have fine-grained control over the thread’s lifecycle (e.g., start, sleep, join; note that Thread.Abort is obsolete in modern .NET).

    • Example:

      csharp
      thread.Join(); // Wait for the thread to finish
      
      
  4. State Management:

    • Requires manual state management and synchronization (e.g., using locks, mutexes).

Tasks

  1. High-Level Abstraction:

    • A task is a higher-level abstraction provided by the Task Parallel Library (TPL) in the System.Threading.Tasks namespace.

    • It represents an asynchronous operation and is designed to simplify writing concurrent code.

  2. Creation:

    • Tasks can be created using the Task class.

    • Example:

      csharp
      using System;
      using System.Threading.Tasks;
      
      class Program
      {
          static async Task Main()
          {
              Task task = Task.Run(() => DoWork());
              await task;
          }
      
          static void DoWork()
          {
              Console.WriteLine("Work on a separate task");
          }
      }
      
  3. Control:

    • Tasks are easier to manage, with built-in support for continuations and cancellation.

    • Example:

      csharp
      Task task = Task.Run(() => DoWork());
      task.Wait(); // Wait for the task to finish

  4. State Management:

    • The TPL provides built-in mechanisms for state management and synchronization, reducing the complexity of concurrent programming.

Key Differences

Feature           | Thread                          | Task
------------------|---------------------------------|-------------------------------------------------------
Abstraction Level | Low-level                       | High-level
Namespace         | System.Threading                | System.Threading.Tasks
Creation          | new Thread(...)                 | Task.Run(...), Task.Factory.StartNew(...)
Control           | Start, Sleep, Join, Abort       | Wait, ContinueWith, CancellationToken
State Management  | Manual synchronization required | Built-in support for synchronization and continuations
Use Case          | Fine-grained control needed     | Simplified asynchronous programming, parallelism

Summary

  • Threads: Lower-level, more control, requires manual synchronization, used for precise thread management.

  • Tasks: Higher-level abstraction, easier to use, built-in support for synchronization and continuation, ideal for parallelism and asynchronous programming.

Using tasks is generally recommended for modern C# programming because they are easier to manage and provide more features for handling asynchronous operations efficiently. 

Thursday, 1 August 2024

Kubernetes easy installation guide

 Install Kubernetes guide:



After lots of research, I've found an easy-to-follow tutorial for installing a Kubernetes cluster. Here's the link to the video: 

https://www.youtube.com/watch?v=I9goyp8mWfs

https://www.itsgeekhead.com/tuts/kubernetes-129-ubuntu-22-04-3/



UBUNTU SERVER LTS 24.04.0 - https://ubuntu.com/download/server

KUBERNETES 1.30.1         - https://kubernetes.io/releases/

CONTAINERD 1.7.18         - https://containerd.io/releases/

RUNC 1.2.0-rc.1               - https://github.com/opencontainers/runc/releases

CNI PLUGINS 1.5.0         - https://github.com/containernetworking/plugins/releases

CALICO CNI 3.28.0         - https://docs.tigera.io/calico/3.27/getting-started/kubernetes/quickstart


3 NODES, 2 vCPU, 8 GB RAM, 50GB Disk EACH

k8s-control   10.10.10.2

k8s-01         10.10.10.3

k8s-02         10.10.10.4




### ALL:


sudo su


printf "\n10.10.10.2 k8s-control\n10.10.10.3 k8s-01\n10.10.10.4 k8s-02\n\n" >> /etc/hosts


printf "overlay\nbr_netfilter\n" >> /etc/modules-load.d/containerd.conf


modprobe overlay

modprobe br_netfilter


printf "net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\nnet.bridge.bridge-nf-call-ip6tables = 1\n" >> /etc/sysctl.d/99-kubernetes-cri.conf


sysctl --system


wget https://github.com/containerd/containerd/releases/download/v1.7.18/containerd-1.7.18-linux-amd64.tar.gz -P /tmp/

tar Cxzvf /usr/local /tmp/containerd-1.7.18-linux-amd64.tar.gz

wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service -P /etc/systemd/system/

systemctl daemon-reload

systemctl enable --now containerd


wget https://github.com/opencontainers/runc/releases/download/v1.2.0-rc.1/runc.amd64 -P /tmp/

install -m 755 /tmp/runc.amd64 /usr/local/sbin/runc


wget https://github.com/containernetworking/plugins/releases/download/v1.5.0/cni-plugins-linux-amd64-v1.5.0.tgz -P /tmp/

mkdir -p /opt/cni/bin

tar Cxzvf /opt/cni/bin /tmp/cni-plugins-linux-amd64-v1.5.0.tgz


mkdir -p /etc/containerd

containerd config default | tee /etc/containerd/config.toml   <<<<<<<<<<< manually edit and change SystemdCgroup to true (not systemd_cgroup)

vi /etc/containerd/config.toml

systemctl restart containerd
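The manual SystemdCgroup edit can also be scripted with sed, assuming the default config generated above:

```shell
# Flip the runc cgroup driver to systemd in the generated containerd config
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

# Restart containerd so the change takes effect
systemctl restart containerd
```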



swapoff -a  <<<<<<<< just disable it in /etc/fstab instead
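To disable swap in /etc/fstab as the note suggests, one option is to comment out any swap entries with sed:

```shell
# Comment out every fstab line that mounts swap, so it stays off after reboot
sed -i '/\sswap\s/ s/^/#/' /etc/fstab

# Turn swap off for the current boot as well
swapoff -a
```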


apt-get update

apt-get install -y apt-transport-https ca-certificates curl gpg


mkdir -p -m 755 /etc/apt/keyrings

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list



apt-get update


reboot


sudo su


  apt-get update

  apt-get install -y kubelet kubeadm kubectl

  apt-mark hold kubelet kubeadm kubectl


# check swap config, ensure swap is 0

free -m



### ONLY ON CONTROL NODE .. control plane install:

                kubeadm init --pod-network-cidr 10.10.0.0/16 --kubernetes-version 1.30.1 --node-name k8s-control


                export KUBECONFIG=/etc/kubernetes/admin.conf


                # add Calico 3.28.0 CNI

                kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/tigera-operator.yaml

                wget https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/custom-resources.yaml

                vi custom-resources.yaml <<<<<< edit the CIDR for pods if its custom

                kubectl apply -f custom-resources.yaml


                # get worker node commands to run to join additional nodes into cluster

                kubeadm token create --print-join-command

                ###



### ONLY ON WORKER nodes

Run the command from the token create output above
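For reference, the printed join command has this shape (the token and CA cert hash here are placeholders; use the values from your own output):

```shell
kubeadm join 10.10.10.2:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:<hash-from-your-output>
```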

Sunday, 9 June 2024

How to delete local-lvm storage and resize local storage to use the full disk space?

1) Delete the local-lvm storage from the Proxmox interface: "Datacenter" > Storage. Select the local-lvm storage and click the "Remove" button.

2) Change the "local" directory content options by clicking the "Edit" button and selecting Disk image (and/or Container and Snippets).


3) Now open the Proxmox server ("pve") shell.


     Check the volume group space using one of the commands: "vgs" or "lvs"

     Extend the logical volume (LV): if there is free space in the "pve" volume group, you can extend the LV for the root file system. The commands below add all the available space.

Now run these commands. Warning: lvremove permanently deletes the "data" LV and everything on it (any VM disks stored on local-lvm), so only proceed if that volume is empty or backed up.

lvremove /dev/pve/data

lvresize -l +100%FREE /dev/pve/root

resize2fs /dev/mapper/pve-root



:) All done. Verify the increased local storage (from 100 GB to 500 GB in my case): 



Thursday, 16 May 2024

SSH to a Linux machine without using a password



To connect over SSH to a Linux machine (a Raspberry Pi in my example) from a PC without typing a username and password, you can set up SSH key-based authentication. This method involves generating a public/private key pair on your PC and copying the public key to the Raspberry Pi. Here’s a step-by-step guide:

Step 1: Generate SSH Key Pair on Your PC

  1. Open a terminal on your PC.

    • On Linux or macOS, open the terminal.
    • On Windows, you can use PowerShell or Git Bash.
  2. Generate the key pair:

    ssh-keygen -t rsa -b 4096 -C "your_email@example.com"
    • The -t rsa -b 4096 options specify the type of encryption (RSA) and the key size (4096 bits).
    • The -C option adds a comment (usually your email) to the key for identification purposes.
  3. Follow the prompts:

    • When prompted to "Enter file in which to save the key," you can press Enter to accept the default location (usually ~/.ssh/id_rsa).
    • Choose whether to set a passphrase. A passphrase adds an extra layer of security, but for passwordless login, you can leave it empty by pressing Enter.

Step 2: Copy the Public Key to the Raspberry Pi

  1. Transfer the public key:

    • Use the ssh-copy-id command to copy the public key to your Raspberry Pi. Replace pi@raspberrypi with your actual username and Raspberry Pi hostname or IP address:
      ssh-copy-id pi@raspberrypi
    • If you haven't changed the default user, pi is the username, and raspberrypi is the hostname. If you've changed them, use your custom values.
  2. Enter the password:

    • You will need to enter the password for the Raspberry Pi user one last time. After this, the public key will be added to the ~/.ssh/authorized_keys file on the Raspberry Pi.

Step 3: Connect to the Raspberry Pi Using SSH

  1. Initiate the SSH connection:
    ssh pi@raspberrypi
    • You should now be able to connect without being prompted for a password.

Step 4: (Optional) Adjust SSH Configuration for Convenience

  1. Edit your SSH config file:

    • Open the SSH config file in a text editor:
      nano ~/.ssh/config
    • Add the following configuration:
      Host raspberrypi
          HostName raspberrypi
          User pi
          IdentityFile ~/.ssh/id_rsa
    • Adjust HostName and User according to your setup.
  2. Save and exit:

    • Press Ctrl+X to exit, Y to confirm changes, and Enter to save.
  3. Now you can simply connect using:

    ssh raspberrypi

Troubleshooting Tips

  • Ensure SSH is enabled on the Raspberry Pi: You can enable SSH via the Raspberry Pi configuration tool or by placing an empty file named ssh (without any extension) on the boot partition of the SD card.
  • Correct file permissions: Ensure that your .ssh directory and files have the correct permissions:
    chmod 700 ~/.ssh
    chmod 600 ~/.ssh/id_rsa
    chmod 644 ~/.ssh/id_rsa.pub
    chmod 600 ~/.ssh/authorized_keys

By following these steps, you should be able to SSH into your Raspberry Pi from your PC without needing to enter a username and password each time.
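To confirm that key-based login really works without falling back to a password prompt, you can force non-interactive mode:

```shell
# BatchMode disables password prompts, so this fails fast if the key isn't accepted
ssh -o BatchMode=yes pi@raspberrypi 'echo key-based login OK'
```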

Scenario:   I have an NVMe disk with an ext4 partition that was previously used as a directory in a Proxmox server. After reinstalling Proxm...