Bench disk perf

dd if=/dev/zero of=/tmp/benchfile bs=4k count=2000000 && sync; rm /tmp/benchfile
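
The same one-liner can be wrapped in a small sketch (the function name is mine; conv=fsync makes dd flush to disk before reporting, which gives a more honest throughput figure):

```shell
# Write-benchmark sketch: writes <count> 4 KiB blocks to a scratch file,
# forces a flush, prints dd's summary line, then cleans up.
bench_write() {
  local count=${1:-2000000}            # 2,000,000 x 4 KiB ~= 7.6 GiB
  dd if=/dev/zero of=/tmp/benchfile bs=4k count="$count" conv=fsync 2>&1 |
    tail -n 1                          # GNU dd prints throughput on the last line
  rm -f /tmp/benchfile
}

bench_write 1000                       # small run for illustration
```

Adjust the /tmp path to land on the filesystem you actually want to measure.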

Monitor disk I/O

iostat

Get reads/s and writes/s

apt install sysstat

iostat -m -z -d 1
  • -m : display statistics in megabytes per second
  • -z : show only active devices
  • -d : device utilization report; the trailing 1 refreshes it every second

Common storage devices

A device can be:

  • Disk
  • SSD
  • SD card (mmc)

Find devices' location

List where your devices are located.
Devices are recognized at the hardware level by the kernel and then exposed on the system as device files (because on Linux, everything is a file) by udev (userspace /dev, the systemd-udevd.service); this is independent of any filesystem.
dmesg -T

sudo udevadm monitor

Burn image (iso) to device

dd \
if=/home/baptiste/Downloads/2019-09-26-raspbian-buster-lite.img \
of=/dev/mmcblk0 \
bs=64K \
conv=noerror,sync \
status=progress
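
To check the write succeeded, a hedged sketch (verify_image is a made-up helper): it hashes the image and the first image-sized chunk of the target, so it also works against plain files:

```shell
# Compare an image file against what was actually written to a device.
# Usage: verify_image <image> <device-or-file>
verify_image() {
  local img=$1 dev=$2
  local bytes a b
  bytes=$(stat -c %s "$img")                          # image size in bytes
  a=$(sha256sum < "$img" | cut -d' ' -f1)
  b=$(head -c "$bytes" "$dev" | sha256sum | cut -d' ' -f1)
  [ "$a" = "$b" ] && echo "OK: device matches image" || echo "MISMATCH"
}

# e.g. sudo sh -c 'verify_image raspbian.img /dev/mmcblk0'
```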

The Arch Linux wiki has a complete reference on dd.

Manage MBR

Remove only the MBR of a disk.
How to zero the first 512 bytes (the MBR size) of a disk

dd \
if=/dev/zero \
of=/dev/mmcblk0 \
bs=512 count=1 \
conv=noerror,sync \
status=progress
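
Before wiping, it is prudent to keep a copy of those 512 bytes so the operation can be undone; a minimal sketch (the helper name and backup path are mine):

```shell
# Save the first 512 bytes (MBR: 446 bytes of boot code + the partition table),
# so the wipe can be undone later with: dd if=mbr.backup of=/dev/mmcblk0
backup_mbr() {
  local dev=$1 out=${2:-mbr.backup}
  dd if="$dev" of="$out" bs=512 count=1 status=none
  echo "saved $(stat -c %s "$out") bytes to $out"
}
```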

Fdisk

List disks

fdisk -l

lsblk

Manage partitions

fdisk /dev/sdx

Inside fdisk : m (help), n (new partition), p (print the table), t (change a partition's type), 8e (the "Linux LVM" type code), w (write changes and quit).
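
The same keystrokes can be scripted non-interactively with sfdisk (also part of util-linux). As a sketch, it can even be tried safely against a disposable image file instead of a real device:

```shell
# Build a throwaway 20 MiB "disk" and give it one Linux LVM (8e) partition.
truncate -s 20M /tmp/disk.img
echo 'type=8e' | sfdisk /tmp/disk.img   # one input line = one whole-disk partition
sfdisk -d /tmp/disk.img                 # dump the resulting partition table
```

Replace /tmp/disk.img with /dev/sdx (and add sudo) to partition a real disk.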

Delete partition

Follow the on-screen instructions.

cfdisk /dev/sdx

Initialize a disk or PARTITION for use by LVM
/dev/sdx : file system path of a physical disk
/dev/sdxX : file system path of a partition of a physical disk

LVM

What is it?

LVM stands for Logical Volume Management.
Basically, LVM has 3 nested levels:

  • Physical volume (pv)
  • Volume Group (vg)
  • Logical volume (lv), which is the only one you can mount on a system

Goal

Gracefully manage filesystems, and make future resizing tasks easy.

Prerequisites

  • Have root or sudo access on the system.
  • Have access to the server's hardware or hypervisor (e.g. VMWare access to manage a virtual machine's disks)

Case 1 : Enlarge existing mounted LV

Case 1.1 : Can increase existing disk size

Go to vcenter, enlarge the existing disk capacity you want.

In my example the disk I added is addressed by udev as /dev/sdi, so remember to replace sdi and adapt to your needs! ;)

Rescan physical volume

echo 1 > /sys/class/block/sdX/device/rescan

For example, with /dev/sdi:

echo 1 > /sys/class/block/sdi/device/rescan

Resize physical volume

pvresize /dev/sdi

Extend the logical volume

lvextend -l +100%FREE /dev/mapper/vg_backup-lv_backup

Resize the filesystem

resize2fs /dev/mapper/vg_backup-lv_backup

Verify

df -h

Case 1.2 : Can not increase existing disk size

Create physical volume

pvcreate /dev/sdi

Extend the volume group

vgextend vg_backup /dev/sdi

Extend the logical volume

lvextend -l +100%FREE /dev/mapper/vg_backup-lv_backup

Resize the filesystem

resize2fs /dev/mapper/vg_backup-lv_backup

Case 2 : Create a new mount point

Add the disk

Go to vcenter, add your disk.

In my example the disk I added is addressed by udev as /dev/sdi, so remember to replace sdi and adapt to your needs! ;)

Important : to be able to grow the disk directly in VMWare later and then only run the extend commands (lvextend + resize2fs), DO NOT create a partition.

  • Good : /dev/sdi
  • Bad : /dev/sdi1
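
If the freshly added disk does not show up, the kernel can be asked to rescan its SCSI hosts, a common trick on VMWare guests. A sketch (the function name and the sysfs-root parameter are mine, added so the loop can be exercised on a fake tree):

```shell
# Rescan every SCSI host so newly attached disks appear without a reboot.
rescan_scsi() {
  local sys=${1:-/sys}
  for host in "$sys"/class/scsi_host/host*; do
    [ -e "$host" ] || continue           # no SCSI hosts: nothing to do
    echo '- - -' > "$host/scan"          # "- - -" = all channels, targets, LUNs
  done
}

# as root: rescan_scsi    # then check with lsblk
```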

Create the new directory

mkdir /backup

Create the physical volume

pvcreate /dev/sdi
pvdisplay

Create the volume group

vgcreate vg_backup /dev/sdi
vgdisplay

Create the logical volume

lvcreate --extents +100%FREE --name lv_backup vg_backup
lvdisplay

Create a file system

mkfs.ext4 /dev/mapper/vg_backup-lv_backup

Mount the logical volume in /backup

mount /dev/mapper/vg_backup-lv_backup /backup

Verify

df -h

Make it persistent

To make the mount persistent across reboots, edit /etc/fstab:
copy an existing line and replace the device with your LV path.

vim /etc/fstab

Add something like this :

/dev/mapper/vg_backup-lv_backup /backup         ext4    errors=remount-ro 0       1
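
The fstab line can also be appended idempotently; a small sketch (the helper name is made up, and the file path is a parameter so it can be rehearsed on a copy first):

```shell
# Append an fstab entry only if that exact line is not already present.
add_fstab_entry() {
  local entry=$1 fstab=${2:-/etc/fstab}
  grep -qxF "$entry" "$fstab" 2>/dev/null || echo "$entry" >> "$fstab"
}

# Rehearse against a scratch file instead of the real /etc/fstab:
add_fstab_entry '/dev/mapper/vg_backup-lv_backup /backup ext4 errors=remount-ro 0 1' /tmp/fstab.test
```

Running it twice leaves a single entry, so it is safe in provisioning scripts.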

Verify

Reboot the system (if possible, if it doesn't break anything running)
And then

df -h

Case 3 : Switch from Root FS to a specific LV

Bonus : no additional space consumed, and permissions are preserved ;)

Example with an existing GitLab instance wrongly installed under / (/var/opt/gitlab)

gitlab-ctl stop

pvcreate /dev/sdb
vgcreate vg_gitlab_data /dev/sdb
lvcreate --extents +100%FREE --name lv_gitlab_data vg_gitlab_data
mkfs.ext4 /dev/mapper/vg_gitlab_data-lv_gitlab_data

mkdir /var/opt/gitlab-temp
mv /var/opt/gitlab /var/opt/gitlab-temp

mkdir /var/opt/gitlab
mount /dev/mapper/vg_gitlab_data-lv_gitlab_data /var/opt/gitlab

mv /var/opt/gitlab-temp/gitlab/* /var/opt/gitlab
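
Caveat: the * glob does not match hidden files, so dotfiles under gitlab-temp/gitlab would be left behind. A safer sketch (the helper name is mine) copies with cp -a and a trailing /., which does include dotfiles:

```shell
# Move a directory's full contents (dotfiles included) into another directory.
# -a preserves ownership, permissions and timestamps; '/.' grabs hidden files.
move_contents() {
  local src=$1 dst=$2
  cp -a "$src"/. "$dst"/ && rm -rf "$src"
}

# move_contents /var/opt/gitlab-temp/gitlab /var/opt/gitlab
```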

gitlab-ctl start
gitlab-ctl reconfigure

Case 4 : Properly remove an already unplugged disk

I suddenly unplugged /dev/sdi.
As a result, I can not lvremove my logical volume.

lvremove /dev/vg_backup/lv_backup 

  /dev/vg_backup/lv_backup: read failed after 0 of 4096 at 1395860111360: Input/output error
  /dev/vg_backup/lv_backup: read failed after 0 of 4096 at 1395860168704: Input/output error
  /dev/vg_backup/lv_backup: read failed after 0 of 4096 at 0: Input/output error
  /dev/vg_backup/lv_backup: read failed after 0 of 4096 at 4096: Input/output error
  Volume group "vg_backup" not found
  Skipping volume group vg_backup

Find my logical volume

I found my lv using dmsetup (low level logical volume management).

dmsetup info

Remove it properly

dmsetup remove vg_backup-lv_backup

Ensure no error remains.

lvdisplay

vgdisplay

pvdisplay

Case 5 : Remove a bad disk with the least data loss on other PVs

pvdisplay
Couldn't find device with uuid EvbqlT-AUsZ-MfKi-ZSOz-Lh6L-Y3xC-KiLcYx.
  --- Physical volume ---
  PV Name               /dev/sdb1
  VG Name               vg_srvlinux
  PV Size               931.51 GiB / not usable 4.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              238466
  Free PE               0
  Allocated PE          238466
  PV UUID               xhwmxE-27ue-dHYC-xAk8-Xh37-ov3t-frl20d

  --- Physical volume ---
  PV Name               unknown device
  VG Name               vg_srvlinux
  PV Size               465.76 GiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              119234
  Free PE               0
  Allocated PE          119234
  PV UUID               EvbqlT-AUsZ-MfKi-ZSOz-Lh6L-Y3xC-KiLcYx
vgreduce --removemissing --force vg_srvlinux
  Couldn't find device with uuid EvbqlT-AUsZ-MfKi-ZSOz-Lh6L-Y3xC-KiLcYx.
  Removing partial LV LogVol00.
  Logical volume "LogVol00" successfully removed
  Wrote out consistent volume group vg_srvlinux
pvdisplay

 --- Physical volume ---
  PV Name               /dev/sdb1
  VG Name               vg_srvlinux
  PV Size               931.51 GiB / not usable 4.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              238466
  Free PE               238466
  Allocated PE          0
  PV UUID               xhwmxE-27ue-dHYC-xAk8-Xh37-ov3t-frl20d

Case 6 : Reduce a LV size (tricky)

If you want to remove a physical disk contained in a LV, see this Tutorial.

Ceph

Installation and other methods

baremetal, ansible, salt, puppet, k8s...
https://docs.ceph.com/en/latest/install/

Cheat sheet

https://sabaini.at/pages/ceph-cheatsheet.html

S3

Aws client


Installation


sudo dnf install awscli

# or
pip3 install awscli

# or
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

Verify your installation with

aws --version

Configuration


You can configure your client with two methods

First, with the aws configure command: you only have to fill in the required information.

Second, modify ~/.aws/config and ~/.aws/credentials.

Example of config file (the client also works with an empty file):

[default]
region = us-east-1
s3 =
  #endpoint_url = http://localhost:6007
  signature_version = s3v4
  max_concurrent_requests = 20
  max_queue_size = 100
  multipart_threshold = 1GB
  # Edit the multipart_chunksize value according to the file sizes you want to upload.
  # This configuration allows uploading files up to 10GB (100 requests * 100MB).
  # For example, setting it to 10GB allows you to upload files up to 1TB.
  multipart_chunksize = 100MB

s3api =
  #endpoint_url = http://localhost:6007

Example of credentials file :

[default]
aws_access_key_id = 1234567890
aws_secret_access_key = 0987654321

You can verify your configuration with this command :

aws --endpoint-url http://${ENDPOINT}/ s3 ls --human-readable

Useful commands

For working with aws-cli and OpenIO, you first need this environment variable : export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt

  • List the content of the S3 endpoint : aws --endpoint-url http://${ENDPOINT}/ s3 ls --human-readable
  • Create a bucket : aws --endpoint-url http://${ENDPOINT}/ s3 mb s3://${BUCKET_NAME}
  • Upload a file : aws --endpoint-url http://${ENDPOINT}/ s3 cp /path/to/file s3://${BUCKET_NAME}
  • Download a file : aws --endpoint-url http://${ENDPOINT}/ s3 cp s3://${BUCKET_NAME}/file .
  • Delete an object : aws --endpoint-url http://${ENDPOINT}/ s3 rm s3://${BUCKET_NAME}/file

Use case

File hosting

Upload

aws --endpoint-url https://IP:PORT/ s3 cp ~/Downloads/pic.png s3://communication

Manage access (ACL)

Think about the rights you want to grant.
Possible values (doc):

  • private
  • public-read
  • public-read-write
  • authenticated-read
  • aws-exec-read
  • bucket-owner-read
  • bucket-owner-full-control

And set them !
For example : public-read

aws s3api --endpoint-url http://IP:PORT/ put-object-acl --bucket communication --key pic.png --acl public-read

Check access (ACL)

To get the access list associated with your object, you can issue this command:

aws s3api --endpoint-url http://IP:PORT/ get-object-acl --bucket communication --key pic.png
{
    "Owner": {
        "DisplayName": "communication",
        "ID": "communication-user"
    },
    "Grants": [
        {
            "Grantee": {
                "Type": "Group",
                "URI": "http://acs.amazonaws.com/groups/global/AllUsers"
            },
            "Permission": "READ"
        },
        {
            "Grantee": {
                "DisplayName": "communication",
                "ID": "communication-user",
                "Type": "CanonicalUser"
            },
            "Permission": "FULL_CONTROL"
        }
    ]
}

NFS

Server (debian 10)

There are multiple ports to open between server and client...
Which ports do I need to open in the firewall to use NFS?

apt install nfs-kernel-server

systemctl status rpcbind
systemctl status nfs-mountd.service
systemctl status nfs-server

Choose the folders you want to share

vim /etc/exports
cat /etc/default/nfs-kernel-server

Check supported NFS versions

MANPAGER=cat man rpc.nfsd | grep -i version --color

Client

Nothing special to install; just mount the remote export over the network :D

mount 10.10.10.10:/remote_path /local_path

mount 10.10.10.10:/backup /backup
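
To make this NFS mount survive a reboot, a line of this shape can go in /etc/fstab (IP and paths are the example ones; _netdev delays the mount until the network is up):

```shell
# /etc/fstab entry matching the mount above:
# 10.10.10.10:/backup  /backup  nfs  defaults,_netdev  0  0
```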

Stress test

dd if=/dev/zero of=/backup/benchfile bs=4k count=2000000 && sync
