# Storage

# Bench disk perf

```shell
dd if=/dev/zero of=/tmp/benchfile bs=4k count=2000000 && sync; rm /tmp/benchfile
```

# Monitor disk I/O

# iostat

Get reads/s and writes/s.

```shell
apt install sysstat
```

```shell
iostat -m -z -d 1
```

  • -m : report throughput in MB/s
  • -z : omit devices with no activity
  • -d : display the device utilization report; the trailing 1 is the refresh interval in seconds

# Common

A storage device can be:

  • Disk
  • SSD
  • SD card (mmc)

# Find devices' location

List where your devices are located.
Devices are detected by the kernel and then exposed on the system as files (because on Linux, everything is a file) by udev (userspace /dev, handled by systemd-udevd.service). This happens independently of any filesystem.

```shell
dmesg -T
```

```shell
sudo udevadm monitor
```
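As a quick illustration of "everything is a file": a device node can be inspected like any other path. A minimal sketch, using /dev/null since it exists on every system (a disk would report "block special file" instead):

```shell
# Device nodes are ordinary directory entries; stat reports their file type.
stat -c '%F' /dev/null   # character special file
```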

# Burn image (iso) to device

```shell
dd \
if=/home/baptiste/Downloads/2019-09-26-raspbian-buster-lite.img \
of=/dev/mmcblk0 \
bs=64K \
conv=noerror,sync \
status=progress
```
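After burning, it is worth checking that the device content matches the image. A sketch of the technique using ordinary temp files as stand-ins for the image and the device (in real use, replace them with your .img file and e.g. /dev/mmcblk0):

```shell
IMG=/tmp/demo.img        # stand-in for the source image
DEV=/tmp/demo-device     # stand-in for the target device

dd if=/dev/urandom of="$IMG" bs=1M count=2 status=none
cp "$IMG" "$DEV"         # stand-in for the real dd burn above

# Compare only the first image-size bytes: the device is usually larger than the image.
cmp -n "$(stat -c %s "$IMG")" "$IMG" "$DEV" && echo "image verified"
```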

Full details in the Arch Linux wiki documentation on dd.

# Manage MBR

Remove only the MBR of a disk.
How to zero the first 512 bytes (the MBR size) of a disk:

```shell
dd \
if=/dev/zero \
of=/dev/mmcblk0 \
bs=512 count=1 \
conv=noerror,sync \
status=progress
```
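Before zeroing, consider saving the MBR so it can be restored later. A sketch with a temp file standing in for the real disk (replace it with e.g. /dev/mmcblk0 in real use):

```shell
DISK=/tmp/fake-disk                     # stand-in for the real disk
dd if=/dev/urandom of="$DISK" bs=512 count=4 status=none

# Save the first 512 bytes (the MBR) to a backup file ...
dd if="$DISK" of=/tmp/mbr-backup.bin bs=512 count=1 status=none

# ... restore later by swapping if= and of=:
# dd if=/tmp/mbr-backup.bin of="$DISK" bs=512 count=1

stat -c %s /tmp/mbr-backup.bin   # 512
```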

# Fdisk

# List disks

```shell
fdisk -l
```

```shell
lsblk
```

# Manage partitions

```shell
fdisk /dev/sdX
```

Inside fdisk, the useful keys are:

  • m : print help
  • n : create a new partition
  • p : print the partition table
  • t : change a partition's type (8e = Linux LVM)
  • w : write changes and exit

# Delete partition

Follow the instructions.

```shell
cfdisk /dev/sdX
```

Initialize a disk or a PARTITION for use by LVM:

  • /dev/sdX : filesystem path of a whole physical disk
  • /dev/sdXN : filesystem path of a partition of a physical disk

# LVM

# What is it?

LVM stands for Logical Volume Management.
Basically, LVM has 3 nested levels:

  • Physical volume (pv)
  • Volume Group (vg)
  • Logical volume (lv) which is the only one you can mount on a system
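Each LV ends up exposed under two equivalent paths, /dev/&lt;vg&gt;/&lt;lv&gt; and /dev/mapper/&lt;vg&gt;-&lt;lv&gt; (note: a hyphen inside a VG or LV name is doubled in the mapper name). A naming sketch using the names that appear later on this page:

```shell
VG=vg_backup
LV=lv_backup

echo "/dev/$VG/$LV"          # /dev/vg_backup/lv_backup
echo "/dev/mapper/$VG-$LV"   # /dev/mapper/vg_backup-lv_backup
```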

# Goal

Gracefully manage filesystems, and make future resizing tasks easy.

# Prerequisites

  • Have root or sudo access on the system.
  • Have access to the machine's hardware (e.g. VMware access to manage a virtual machine's disks).

# Case 1 : Enlarge existing mounted LV

# Case 1.1 : Can increase existing disk size

Go to vCenter and enlarge the existing disk you want.

In my example the disk is addressed by udev as /dev/sdi, so remember to replace sdi to match your setup! 😉

# Rescan physical volume

```shell
echo 1 > /sys/class/block/sdX/device/rescan

echo 1 > /sys/class/block/sdi/device/rescan
```

# Resize physical volume

```shell
pvresize /dev/sdi
```

# Extend the logical volume

```shell
lvextend -l +100%FREE /dev/mapper/vg_backup-lv_backup
```

# Resize the filesystem

```shell
resize2fs /dev/mapper/vg_backup-lv_backup
```

# Verify

```shell
df -h
```

# Case 1.2 : Can not increase existing disk size

# Create physical volume

```shell
pvcreate /dev/sdi
```

# Extend the volume group

```shell
vgextend vg_backup /dev/sdi
```

# Extend the logical volume

```shell
lvextend -l +100%FREE /dev/mapper/vg_backup-lv_backup
```

# Resize the filesystem

```shell
resize2fs /dev/mapper/vg_backup-lv_backup
```

# Case 2 : Create a new mount point

# Add the disk

Go to vCenter and add your disk.

In my example the disk I added is addressed by udev as /dev/sdi, so remember to replace sdi to match your setup! 😉

Important : to be able to grow the disk directly in VMware later and then just run the extend commands (lvextend + resize2fs), DO NOT create a partition.

  • Good : /dev/sdi
  • Bad : /dev/sdi1

# Create the new directory

```shell
mkdir /backup
```

# Create the physical volume

```shell
pvcreate /dev/sdi
pvdisplay
```

# Create the volume group

```shell
vgcreate vg_backup /dev/sdi
vgdisplay
```

# Create the logical volume

```shell
lvcreate --extents +100%FREE --name lv_backup vg_backup
lvdisplay
```

# Create a file system

```shell
mkfs.ext4 /dev/mapper/vg_backup-lv_backup
```

# Mount the logical volume in /backup

```shell
mount /dev/mapper/vg_backup-lv_backup /backup
```

# Verify

```shell
df -h
```

# Make it persistent

To make the mount persistent across reboots, add an entry to /etc/fstab (copy an existing line and replace the device with your LV path).

```shell
vim /etc/fstab
```

Add something like this :

```
/dev/mapper/vg_backup-lv_backup /backup         ext4    errors=remount-ro 0       1
```
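Before rebooting, a quick sanity check on the new entry can save you a failed boot. A minimal sketch that only verifies the line has the expected 6 fstab fields (it does not check that the device exists); the /tmp/fstab.test path is just for illustration:

```shell
cat > /tmp/fstab.test <<'EOF'
/dev/mapper/vg_backup-lv_backup /backup ext4 errors=remount-ro 0 1
EOF

# Every non-empty, non-comment fstab line must have exactly 6 fields.
awk 'NF && $1 !~ /^#/ && NF != 6 { bad = 1 } END { print bad ? "malformed" : "ok" }' /tmp/fstab.test
```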
# Verify

Reboot the system if possible (i.e. if it doesn't break anything running), then:

```shell
df -h
```

# Case 3 : Switch from Root FS to a specific LV

Bonus : no additional space consumed, and permissions preserved 😉

Example with an existing, wrongly configured GitLab whose data is mounted under / (/var/opt/gitlab) :

```shell
gitlab-ctl stop

pvcreate /dev/sdb
vgcreate vg_gitlab_data /dev/sdb
lvcreate --extents +100%FREE --name lv_gitlab_data vg_gitlab_data
mkfs.ext4 /dev/mapper/vg_gitlab_data-lv_gitlab_data

# Move the data aside, then mount the new LV in its place
mkdir /var/opt/gitlab-temp
mv /var/opt/gitlab /var/opt/gitlab-temp

mkdir /var/opt/gitlab
mount /dev/mapper/vg_gitlab_data-lv_gitlab_data /var/opt/gitlab

# Note: a bare * does not match dotfiles; check nothing is left behind afterwards
mv /var/opt/gitlab-temp/gitlab/* /var/opt/gitlab

gitlab-ctl start
gitlab-ctl reconfigure
```

# Case 4 : Properly remove an already unplugged disk

I suddenly unplugged /dev/sdi,
so I can no longer lvremove my logical volume.

```shell
lvremove /dev/vg_backup/lv_backup
```

```
  /dev/vg_backup/lv_backup: read failed after 0 of 4096 at 1395860111360: Input/output error
  /dev/vg_backup/lv_backup: read failed after 0 of 4096 at 1395860168704: Input/output error
  /dev/vg_backup/lv_backup: read failed after 0 of 4096 at 0: Input/output error
  /dev/vg_backup/lv_backup: read failed after 0 of 4096 at 4096: Input/output error
  Volume group "vg_backup" not found
  Skipping volume group vg_backup
```

# Find my logical volume

I found my lv using dmsetup (the low-level device-mapper management tool).

```shell
dmsetup info
```

# Remove it properly

```shell
dmsetup remove vg_backup-lv_backup
```

# Ensure no error remains

```shell
lvdisplay

vgdisplay

pvdisplay
```

# Case 5 : Remove a bad disk with the least data loss on other PVs

```shell
pvdisplay
```

```
  Couldn't find device with uuid EvbqlT-AUsZ-MfKi-ZSOz-Lh6L-Y3xC-KiLcYx.
  --- Physical volume ---
  PV Name               /dev/sdb1
  VG Name               vg_srvlinux
  PV Size               931.51 GiB / not usable 4.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              238466
  Free PE               0
  Allocated PE          238466
  PV UUID               xhwmxE-27ue-dHYC-xAk8-Xh37-ov3t-frl20d

  --- Physical volume ---
  PV Name               unknown device
  VG Name               vg_srvlinux
  PV Size               465.76 GiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              119234
  Free PE               0
  Allocated PE          119234
  PV UUID               EvbqlT-AUsZ-MfKi-ZSOz-Lh6L-Y3xC-KiLcYx
```
```shell
vgreduce --removemissing --force vg_srvlinux
```

```
  Couldn't find device with uuid EvbqlT-AUsZ-MfKi-ZSOz-Lh6L-Y3xC-KiLcYx.
  Removing partial LV LogVol00.
  Logical volume "LogVol00" successfully removed
  Wrote out consistent volume group vg_srvlinux
```
```shell
pvdisplay
```

```
  --- Physical volume ---
  PV Name               /dev/sdb1
  VG Name               vg_srvlinux
  PV Size               931.51 GiB / not usable 4.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              238466
  Free PE               238466
  Allocated PE          0
  PV UUID               xhwmxE-27ue-dHYC-xAk8-Xh37-ov3t-frl20d
```

# Case 6 : Reduce an LV's size (tricky)

If you want to remove a physical disk contained in an LV, follow the dedicated tutorial.

# Ceph

# Installation and other methods

Bare metal, Ansible, Salt, Puppet, k8s...
https://docs.ceph.com/en/latest/install/

# Cheat sheet

https://sabaini.at/pages/ceph-cheatsheet.html

# S3

# AWS client


# Installation


```shell
sudo dnf install awscli

# or
pip3 install awscli

# or
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
```

Verify your installation with

```shell
aws --version
```

# Configuration


You can configure your client in two ways.

First, with the aws configure command: just fill in the required information.

Second, by editing ~/.aws/config and ~/.aws/credentials directly.

Example of config file (it also works if the file is left empty) :

```
[default]
region = us-east-1
s3 =
  #endpoint_url = http://localhost:6007
  signature_version = s3v4
  max_concurrent_requests = 20
  max_queue_size = 100
  multipart_threshold = 1GB
  # Edit the multipart_chunksize value according to the file sizes that you want to upload.
  # The present configuration allows uploading files up to 10GB (100 requests * 100MB).
  # For example, setting it to 10GB allows you to upload files up to 1TB.
  multipart_chunksize = 100MB

s3api =
  #endpoint_url = http://localhost:6007
```

Example of credentials file :

```
[default]
aws_access_key_id = 1234567890
aws_secret_access_key = 0987654321
```

You can verify your configuration with this command :

```shell
aws --endpoint-url http://${ENDPOINT}/ s3 ls --human-readable
```

# Useful commands

| Action | Command |
| --- | --- |
| Required environment variable when using aws-cli with OpenIO | `export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt` |
| List the content of the S3 endpoint | `aws --endpoint-url http://${ENDPOINT}/ s3 ls --human-readable` |
| Create a bucket | `aws --endpoint-url http://${ENDPOINT}/ s3 mb s3://${BUCKET_NAME}` |
| Upload a file | `aws --endpoint-url http://${ENDPOINT}/ s3 cp /path/to/file s3://${BUCKET_NAME}` |
| Download a file | `aws --endpoint-url http://${ENDPOINT}/ s3 cp s3://${BUCKET_NAME}/file .` |
| Delete an object | `aws --endpoint-url http://${ENDPOINT}/ s3 rm s3://${BUCKET_NAME}/file` |

# Use case

# File hosting

# Upload

```shell
aws --endpoint-url https://IP:PORT/ s3 cp ~/Downloads/pic.png s3://communication
```
# Manage access (ACL)

Think about the rights you want to give.
Possible values (see the AWS documentation):

  • private
  • public-read
  • public-read-write
  • authenticated-read
  • aws-exec-read
  • bucket-owner-read
  • bucket-owner-full-control

And set them! For example: public-read

```shell
aws s3api --endpoint-url http://IP:PORT/ put-object-acl --bucket communication --key pic.png --acl public-read
```
# Check access (ACL)

To get the access list associated with your object, you can issue this command:

```shell
aws s3api --endpoint-url http://IP:PORT/ get-object-acl --bucket communication --key pic.png
```
```json
{
    "Owner": {
        "DisplayName": "communication",
        "ID": "communication-user"
    },
    "Grants": [
        {
            "Grantee": {
                "Type": "Group",
                "URI": "http://acs.amazonaws.com/groups/global/AllUsers"
            },
            "Permission": "READ"
        },
        {
            "Grantee": {
                "DisplayName": "communication",
                "ID": "communication-user",
                "Type": "CanonicalUser"
            },
            "Permission": "FULL_CONTROL"
        }
    ]
}
```


# NFS

# Server (Debian 10)

There are multiple ports to open between server and client; see "Which ports do I need to open in the firewall to use NFS?".

```shell
apt install nfs-kernel-server

systemctl status rpcbind
systemctl status nfs-mountd.service
systemctl status nfs-server
```

Choose the folders you want to share.

```shell
vim /etc/exports
cat /etc/default/nfs-kernel-server
```
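An /etc/exports entry looks like this (a sketch; the path, client network, and options are assumptions to adapt to your setup):

```
/backup 10.10.10.0/24(rw,sync,no_subtree_check)
```

After editing, apply the new exports with `exportfs -ra` and inspect them with `exportfs -v`.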

Check the supported NFS versions.

```shell
MANPAGER=cat man rpc.nfsd | grep -i version --color
```

# Client

There are multiple ports to open between server and client; see "Which ports do I need to open in the firewall to use NFS?".

Nothing to configure, apart from mounting over the network 😄 (on Debian, mounting NFS shares requires the nfs-common package).

```shell
mount 10.10.10.10:/remote_path /local_path

mount 10.10.10.10:/backup /backup
```
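To make the NFS mount persistent across reboots, it can also go in /etc/fstab (a sketch reusing the example above; `_netdev` tells the system to wait for the network before mounting):

```
10.10.10.10:/backup /backup nfs defaults,_netdev 0 0
```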

Stress test

```shell
dd if=/dev/zero of=/backup/benchfile bs=4k count=2000000 && sync
```