July 27, 2022

OpenShift 4.10 I: S2I, start-build, BuildConfig and Deployment

$ oc new-app --name=php-helloworld --image-stream=php:7.3 https://github.com/magnuskkarlsson/DO180-apps#s2i --context-dir=php-helloworld

$ oc start-build buildconfig.build.openshift.io/nodejs-dev

$ oc logs -f buildconfig.build.openshift.io/nodejs-dev
...
Push successful

$ oc logs -f deployment.apps/nodejs-dev
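To see what new-app generated behind the scenes (the BuildConfig, ImageStream, Deployment, and Service all take the name given with --name), a quick look:

$ oc status

$ oc get buildconfig,imagestream,deployment,service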

OpenShift 4.10 I: Create an OCP Application from Image, S2I, Template

From Image

$ oc new-project myproj01

$ oc new-app --name=httpd-24 --image=registry.access.redhat.com/ubi8/httpd-24 --labels app=httpd-24

$ oc get all

$ oc logs pod/httpd-24-9fb54567d-n9slj

$ oc expose service/httpd-24

$ oc get all

$ curl http://httpd-24-myproj01.apps-crc.testing/

$ oc exec pod/httpd-24-9fb54567d-n9slj -- ps -aux
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
1000650+       1  0.0  0.2 391296 21980 ?        Ss   19:54   0:00 httpd -D FOREGROUND

$ oc describe pod/httpd-24-9fb54567d-n9slj

Source-to-Image (S2I)

$ oc new-project myproj03

$ oc get is -n openshift

$ oc new-app --name=ruby-hello-world --labels app=myapp --image-stream=ruby https://github.com/openshift/ruby-hello-world

$ oc get all

$ oc logs -f pod/ruby-hello-world-1-build

$ oc describe pod/ruby-hello-world-1-build

$ oc describe service/ruby-hello-world

$ oc expose service/ruby-hello-world

$ curl http://ruby-hello-world-myproj03.apps-crc.testing/

$ oc get buildconfig
NAME               TYPE     FROM   LATEST
ruby-hello-world   Source   Git    1

$ oc start-build ruby-hello-world
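start-build queues a new build; its progress, and the rollout of the resulting image, can be followed with (assuming new-app created a Deployment with the same ruby-hello-world name, which is its default behavior):

$ oc logs -f buildconfig/ruby-hello-world

$ oc rollout status deployment/ruby-hello-world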

From Template

$ oc new-project myproj02

$ oc get templates -n openshift

$ oc get templates -n openshift | grep mysql
mysql-ephemeral                                 MySQL database service, without persistent storage. For more information abou...   8 (3 generated)   3
mysql-persistent                                MySQL database service, with persistent storage. For more information about u...   9 (3 generated)   4

$ oc describe template mysql-persistent -n openshift

$ oc new-app --name=app-db --template=mysql-persistent \
  --param MYSQL_USER=myuser \
  --param MYSQL_PASSWORD=redhat123 \
  --param MYSQL_ROOT_PASSWORD=redhat123 \
  --param MYSQL_DATABASE=items \
  --labels app=app-db

$ oc get events

$ oc describe service/mysql
Name:              mysql
Namespace:         myproj02
Labels:            app=app-db
                   template=mysql-persistent-template
Annotations:       openshift.io/generated-by: OpenShiftNewApp
                   template.openshift.io/expose-uri: mysql://{.spec.clusterIP}:{.spec.ports[?(.name=="mysql")].port}
Selector:          name=mysql
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.217.4.67
IPs:               10.217.4.67
Port:              mysql  3306/TCP
TargetPort:        3306/TCP
Endpoints:         10.217.0.106:3306
Session Affinity:  None
Events:            <none>

$ oc get pvc
NAME    STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql   Bound    pv0023   100Gi      RWO,ROX,RWX                   2m59s

$ oc describe pvc mysql
Name:          mysql
Namespace:     myproj02
StorageClass:  
Status:        Bound
Volume:        pv0023
Labels:        app=app-db
               template=mysql-persistent-template
Annotations:   openshift.io/generated-by: OpenShiftNewApp
               pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      100Gi
Access Modes:  RWO,ROX,RWX
VolumeMode:    Filesystem
Used By:       mysql-1-qncwj
Events:        <none>

$ oc port-forward pod/mysql-1-qncwj 3306:3306

$ mysql --host=127.0.0.1 --port=3306 --user=myuser --password=redhat123 --database=items --execute="show databases;"
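When done experimenting, the practice projects from this post can be removed in one command, which deletes all the resources created in them:

$ oc delete project myproj01 myproj02 myproj03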

OpenShift 4.10 I: OCP Manifest/Custom Resources Documentation

$ oc api-resources 
NAME                                  SHORTNAMES       APIVERSION                                    NAMESPACED   KIND
bindings                                               v1                                            true         Binding
componentstatuses                     cs               v1                                            false        ComponentStatus
configmaps                            cm               v1                                            true         ConfigMap
endpoints                             ep               v1                                            true         Endpoints
...

$ oc explain pod
KIND:     Pod
VERSION:  v1

DESCRIPTION:
     Pod is a collection of containers that can run on a host. This resource is
     created by clients and scheduled onto hosts.

FIELDS:
   apiVersion	<string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

   kind	<string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

   metadata	<Object>
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

   spec	<Object>
     Specification of the desired behavior of the pod. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

   status	<Object>
     Most recently observed status of the pod. This data may not be up to date.
     Populated by the system. Read-only. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

$ oc explain pod.spec 
KIND:     Pod
VERSION:  v1

RESOURCE: spec <Object>

DESCRIPTION:
     Specification of the desired behavior of the pod. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

     PodSpec is a description of a pod.

FIELDS:
   activeDeadlineSeconds	<integer>
     Optional duration in seconds the pod may be active on the node relative to
     StartTime before the system will actively try to mark it failed and kill
     associated containers. Value must be a positive integer.

   affinity	<Object>
     If specified, the pod's scheduling constraints
...

$ oc explain pod.spec --recursive 
KIND:     Pod
VERSION:  v1

RESOURCE: spec <Object>

DESCRIPTION:
     Specification of the desired behavior of the pod. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

     PodSpec is a description of a pod.

FIELDS:
   activeDeadlineSeconds	<integer>
   affinity	<Object>
      nodeAffinity	<Object>
         preferredDuringSchedulingIgnoredDuringExecution	<[]Object>
            preference	<Object>
               matchExpressions	<[]Object>
                  key	<string>
                  operator	<string>
                  values	<[]string>
               matchFields	<[]Object>
                  key	<string>
                  operator	<string>
                  values	<[]string>
            weight	<integer>
         requiredDuringSchedulingIgnoredDuringExecution	<Object>
...
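The same drill-down works for any resource reported by oc api-resources, not only Pod; for example, the container fields inside a Deployment's pod template, or the spec of an OpenShift Route:

$ oc explain deployment.spec.template.spec.containers

$ oc explain route.spec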

OpenShift 4.10 I: Create an OCP Application

First create a new OCP project.

$ oc new-project myapp

Create an application from an image.

$ oc new-app --name=todonodejs \
  --image=quay.io/redhattraining/do180-todonodejs-12 \
  --env MYSQL_ENV_MYSQL_DATABASE=tododb \
  --env MYSQL_ENV_MYSQL_USER=user1 \
  --env MYSQL_ENV_MYSQL_PASSWORD=redhat123 \
  --labels app=todonodejs

Create an application based on source code in a git repository - Source-to-Image (S2I).

$ oc new-app --name=nodejs-dev \
  --image-stream=nodejs:16-ubi8 \
  https://github.com/magnuskkarlsson/DO180-apps#troubleshoot-review \
  --context-dir=nodejs-app

Create an application from an existing template.

$ oc new-app --name=mysql --template=mysql-persistent \
  --param MYSQL_USER=user1 \
  --param MYSQL_PASSWORD=redhat123 \
  --param MYSQL_ROOT_PASSWORD=redhat123 \
  --param MYSQL_DATABASE=tododb \
  --labels app=todonodejs
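Every variant of oc new-app also accepts -o yaml, which prints the generated resource definitions instead of creating them, so they can be reviewed or stored in version control first. A sketch using the image-based example above (the file name is arbitrary):

$ oc new-app --name=todonodejs --image=quay.io/redhattraining/do180-todonodejs-12 -o yaml > todonodejs.yaml

$ oc apply -f todonodejs.yaml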

OpenShift 4.10 I: Custom Container Images without Dockerfile/Containerfile

$ podman run -d --name httpd-24 -p 8080:8080 registry.access.redhat.com/ubi9/httpd-24

$ podman exec httpd-24 /bin/bash -c 'echo "custom httpd image" > /var/www/html/index.html'

$ curl http://127.0.0.1:8080/ 
custom httpd image

$ podman diff httpd-24

$ podman commit --author 'Magnus K Karlsson' httpd-24 httpd-24-custom

$ podman images
REPOSITORY                                       TAG         IMAGE ID      CREATED        SIZE
localhost/httpd-24-custom                        latest      0eb89261860f  2 minutes ago  387 MB

$ podman tag localhost/httpd-24-custom quay.io/magnus_k_karlsson/httpd-24-custom:1.0

$ podman images
REPOSITORY                                       TAG         IMAGE ID      CREATED        SIZE
localhost/httpd-24-custom                        latest      0eb89261860f  5 minutes ago  387 MB
quay.io/magnus_k_karlsson/httpd-24-custom        1.0         0eb89261860f  5 minutes ago  387 MB

$ podman login quay.io
Username: magnus_k_karlsson 
Password: 
Login Succeeded!

$ podman push quay.io/magnus_k_karlsson/httpd-24-custom:1.0

$ podman pull quay.io/magnus_k_karlsson/httpd-24-custom:1.0

$ podman run -d --name httpd-24-custom -p 18080:8080 quay.io/magnus_k_karlsson/httpd-24-custom:1.0
4ca8325b0670b1b1175e8eaac442987f4cfa7f37d677eeec8dbbde9f1d0ec77e
$ curl http://127.0.0.1:18080/
custom httpd image

$ podman stop -a

$ podman rm -a

$ podman save -o httpd-24-custom.tar localhost/httpd-24-custom

$ podman rmi -a

$ podman load -i httpd-24-custom.tar

$ podman run -d --name httpd-24-custom -p 8080:8080 localhost/httpd-24-custom

$ podman logs httpd-24-custom
$ curl http://127.0.0.1:8080/
custom httpd image

OpenShift 4.10 I: Common podman commands

$ podman pull registry.access.redhat.com/ubi9/httpd-24:latest
$ podman images registry.access.redhat.com/ubi9/httpd-24:latest
$ podman rmi registry.access.redhat.com/ubi9/httpd-24:latest
$ skopeo inspect docker://registry.access.redhat.com/ubi9/httpd-24:latest
$ podman inspect registry.access.redhat.com/ubi9/httpd-24:latest

$ podman search --list-tags registry.access.redhat.com/ubi9/httpd-24

$ podman run -d --name httpd-24 -p 8080:8080 registry.access.redhat.com/ubi9/httpd-24:latest 

$ podman exec -it httpd-24 /bin/bash

$ podman ps -a
$ podman logs httpd-24
$ podman inspect httpd-24

$ podman top httpd-24
$ podman stats

$ podman stop httpd-24
$ podman start httpd-24
$ podman restart httpd-24

$ podman kill httpd-24
$ podman rm httpd-24

$ podman kill -s [SIGTERM|SIGINT|SIGKILL] httpd-24
$ kill -l
 1) SIGHUP	 2) SIGINT	 3) SIGQUIT	 4) SIGILL	 5) SIGTRAP
 6) SIGABRT	 7) SIGBUS	 8) SIGFPE	 9) SIGKILL	10) SIGUSR1
11) SIGSEGV	12) SIGUSR2	13) SIGPIPE	14) SIGALRM	15) SIGTERM
...

Use SIGTERM first, then try SIGINT; only if both fail, try again with SIGKILL.
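Applied to the httpd-24 container from above, the escalation looks like this; check between attempts whether the container is still running:

$ podman kill -s SIGTERM httpd-24
$ podman ps --filter name=httpd-24

# Still running? Try SIGINT, and as a last resort SIGKILL.
$ podman kill -s SIGINT httpd-24
$ podman kill -s SIGKILL httpd-24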

$ podman run --rm -it registry.access.redhat.com/ubi9/ubi:latest /bin/bash

$ podman commit httpd-24 httpd-24-custom

$ podman save -o mysql.tar registry.redhat.io/rhel8/mysql-80
$ podman load -i mysql.tar

$ podman history registry.access.redhat.com/ubi9/httpd-24:latest

OpenShift 4.10 I: Understand Rootless Containers

$ podman search ubi
NAME                                                DESCRIPTION
registry.access.redhat.com/ubi7                     The Universal Base Image is designed and engineered to be the base layer for 
registry.access.redhat.com/ubi7/ubi                 The Universal Base Image is designed and engineered to be the base layer 
registry.access.redhat.com/ubi8/ubi                 Provides the latest release of the Red Hat Universal Base Image 8
registry.access.redhat.com/ubi8                     The Universal Base Image is designed and engineered to be the base layer 
registry.access.redhat.com/ubi9/ubi                 rhcc_registry.access.redhat.com_ubi9/ubi
registry.access.redhat.com/ubi9                     rhcc_registry.access.redhat.com_ubi9

$ podman run --name as-user --rm --interactive --tty registry.access.redhat.com/ubi9/ubi:latest /bin/bash

[root@60e643438db3 /]# whoami 
root
[root@60e643438db3 /]# id
uid=0(root) gid=0(root) groups=0(root)
[root@60e643438db3 /]# sleep 1000

From another terminal window, run 

$ ps -aux | grep 'sleep 1000'
student    23933  0.0  0.0   5300  1368 pts/0    S+   12:18   0:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000

$ sudo podman run --name as-root --rm --interactive --tty registry.access.redhat.com/ubi9/ubi:latest /bin/bash

[root@ff6d34b2a1e0 /]# whoami 
root
[root@ff6d34b2a1e0 /]# id
uid=0(root) gid=0(root) groups=0(root)
[root@ff6d34b2a1e0 /]# sleep 1000

From another terminal window, run 

$ ps -aux | grep 'sleep 1000'
root       24134  0.0  0.0   5300  1368 pts/0    S+   12:24   0:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000
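Note the difference on the host: inside both containers the process runs as root, but the rootless container's sleep is owned by the unprivileged student user, while the sudo-started container's sleep is owned by root. podman top can show both views side by side; user/pid are the in-container values and huser/hpid the host-side ones:

$ podman top as-user user huser pid hpid

$ sudo podman top as-root user huser pid hpid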

OpenShift 4.10 I: Container mysql with Persistent Storage, Unshare and SELinux

$ podman search mysql
NAME                                                  DESCRIPTION
registry.access.redhat.com/rhscl/mysql-56-rhel7       MySQL 5.6 SQL database server
registry.access.redhat.com/rhscl/mysql-57-rhel7       Docker image for running MySQL 5.7 server. This image can provide database
registry.access.redhat.com/rhscl/mysql-80-rhel7       This container image provides a containerized packaging of the MySQL mysqld

$ podman run -d --name mysql-80-rhel7 \
  -p 13306:3306 \
  -e MYSQL_ROOT_PASSWORD=redhat123 \
  -e MYSQL_DATABASE=items \
  -e MYSQL_USER=myuser \
  -e MYSQL_PASSWORD=redhat123 \
  registry.access.redhat.com/rhscl/mysql-80-rhel7

$ podman exec mysql-80-rhel7 ps -aux | grep mysql
mysql          1  0.9 20.2 1616588 303784 ?      Ssl  11:30   0:00 /opt/rh/rh-mysql80/root/usr/libexec/mysqld --defaults-file=/etc/my.cnf

$ podman exec mysql-80-rhel7 id mysql
uid=27(mysql) gid=27(mysql) groups=27(mysql),0(root)

$ podman exec mysql-80-rhel7 cat /etc/my.cnf.d/base.cnf
[mysqld]
datadir = /var/lib/mysql/data
...

$ podman stop mysql-80-rhel7 
$ podman rm mysql-80-rhel7 

$ mkdir /home/student/mysql-80-rhel7
$ podman unshare chown -R 27:27 /home/student/mysql-80-rhel7
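The chown runs inside podman unshare because, in the rootless user namespace, container UID 27 (mysql) maps to one of the student user's subordinate UIDs on the host. The mapping, and the directory as seen from inside the namespace, can be checked with:

$ podman unshare cat /proc/self/uid_map

$ podman unshare ls -ald /home/student/mysql-80-rhel7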

$ podman run -d --name mysql-80-rhel7 \
  -p 13306:3306 \
  -e MYSQL_ROOT_PASSWORD=redhat123 \
  -e MYSQL_DATABASE=items \
  -e MYSQL_USER=myuser \
  -e MYSQL_PASSWORD=redhat123 \
  -v /home/student/mysql-80-rhel7:/var/lib/mysql/data:Z \
  registry.access.redhat.com/rhscl/mysql-80-rhel7

$ podman logs mysql-80-rhel7

$ vim db.sql
CREATE TABLE items.product (id int NOT NULL, name varchar(255) DEFAULT NULL, PRIMARY KEY (id));
INSERT INTO items.product (id, name) VALUES (1,'Bar');
SELECT * FROM items.product;

$ podman cp db.sql mysql-80-rhel7:/tmp

$ podman exec mysql-80-rhel7 /bin/bash -c \
  'mysql --host=127.0.0.1 --port=3306 --database=items --user=myuser --password=redhat123 < /tmp/db.sql'

$ podman exec mysql-80-rhel7 /bin/bash -c \
  "mysql --host=127.0.0.1 --port=3306 --database=items --user=myuser --password=redhat123 --execute='SELECT * FROM items.product';"

$ podman stop mysql-80-rhel7; podman rm mysql-80-rhel7

$ ls -aldZ /home/student/mysql-80-rhel7
drwxr-xr-x. 3 296634 296634 system_u:object_r:container_file_t:s0:c493,c605 102 Jul 25 13:40 /home/student/mysql-80-rhel7
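Two things to note: the numeric owner 296634 is the host-side subordinate UID that container UID 27 maps to, and the container_file_t type with a unique category pair is the SELinux label applied by the :Z volume option. Seen from inside the user namespace, the directory is owned by UID 27 again:

$ podman unshare ls -aldZ /home/student/mysql-80-rhel7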

OpenShift 4.10 I: Red Hat Official Container Repository/Registry

Installation

$ sudo dnf install container-tools

Container repositories/registries are configured in:

$ cat /etc/containers/registries.conf
...
unqualified-search-registries = ["registry.fedoraproject.org", "registry.access.redhat.com", "registry.centos.org", "quay.io", "docker.io"]
...

$ man 5 containers-registries.conf
...
       Container engines will use the $HOME/.config/containers/registries.conf if it exists, otherwise  they
       will use /etc/containers/registries.conf
...
   EXAMPLE
              unqualified-search-registries = ["example.com"]

              [[registry]]
              prefix = "example.com/foo"
              insecure = false
              blocked = false
              location = "internal-registry-for-example.com/bar"

              [[registry.mirror]]
              location = "example-mirror-0.local/mirror-for-foo"

              [[registry.mirror]]
              location = "example-mirror-1.local/mirrors/foo"
              insecure = true

       Given the above, a pull of example.com/foo/image:latest will try:
           1. example-mirror-0.local/mirror-for-foo/image:latest
           2. example-mirror-1.local/mirrors/foo/image:latest
           3. internal-registry-for-example.net/bar/image:latest
...
   EXAMPLE
       The  following  example  configuration  defines  two searchable registries, one insecure registry, and two
       blocked registries.

              [registries.search]
              registries = ['registry1.com', 'registry2.com']

              [registries.insecure]
              registries = ['registry3.com']

              [registries.block]
              registries = ['registry.untrusted.com', 'registry.unsafe.com']
...

$ echo 'unqualified-search-registries = ["registry.redhat.io"]' > /home/student/.config/containers/registries.conf
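To confirm that the per-user file now overrides the system-wide one, check what podman reports as its effective search registries:

$ cat /home/student/.config/containers/registries.conf

$ podman info | grep -A 3 registries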

Browse official Red Hat Container Catalog

https://catalog.redhat.com/

https://catalog.redhat.com/software

https://catalog.redhat.com/software/containers/explore

Home > Software > Container images

Example Apache httpd: https://catalog.redhat.com/software/containers/ubi9/httpd-24/61a60c3e3e9240fca360f74a

Using Red Hat login
  Registry: registry.redhat.io
  Container: registry.redhat.io/ubi9/httpd-24

Unauthenticated
  Registry: registry.access.redhat.com
  Container: registry.access.redhat.com/ubi9/httpd-24

"Although both registry.access.redhat.com and registry.redhat.io hold essentially the same container images, some images that require a subscription are only available from registry.redhat.io."

https://access.redhat.com/RegistryAuthentication

"Red Hat Quay is a private container registry that stores, builds, and deploys container images."

https://www.redhat.com/en/resources/quay-datasheet

OpenShift 4.10 I: Overview Container

Namespaces

"Namespaces are a feature of the Linux kernel that partitions kernel resources such that one set of processes sees one set of resources while another set of processes sees a different set of resources."

"Resources may exist in multiple spaces. Examples of such resources are process IDs, hostnames, user IDs, file names, and some names associated with network access, and interprocess communication."

https://en.wikipedia.org/wiki/Linux_namespaces
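A running container's namespaces can be inspected directly from /proc; for example, assuming the httpd-24 container from the earlier examples is still running:

$ podman exec httpd-24 ls -l /proc/1/ns

$ lsns -p $(podman inspect -f '{{.State.Pid}}' httpd-24)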

Control groups (cgroups)

"cgroups (abbreviated from control groups) is a Linux kernel feature that limits, accounts for, and isolates the resource usage (CPU, memory, disk I/O, network, etc.) of a collection of processes."

https://en.wikipedia.org/wiki/Cgroups
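podman's resource flags (such as --memory) are implemented with cgroups; a minimal illustration, where the container name limited is only for this example:

$ podman run -d --name limited --memory 512m registry.access.redhat.com/ubi9/ubi sleep infinity

$ podman stats --no-stream limited

$ podman rm -f limited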

Seccomp

"seccomp (short for secure computing mode) is a computer security facility in the Linux kernel. seccomp allows a process to make a one-way transition into a "secure" state where it cannot make any system calls except exit(), sigreturn(), read() and write() to already-open file descriptors."

https://en.wikipedia.org/wiki/Seccomp

SELinux

"Security-Enhanced Linux (SELinux) is a Linux kernel security module that provides a mechanism for supporting access control security policies, including mandatory access controls (MAC)."

https://en.wikipedia.org/wiki/Security-Enhanced_Linux

OpenShift 4.10 I: Write Dockerfile, Build, Tag and Push

Dockerfile/Containerfile

There is no Dockerfile syntax man page on RHEL.

https://learn.redhat.com/t5/Containers-DevOps-OpenShift/Is-there-Dockerfile-format-or-example-in-RHEL-man-page-or/td-p/16739

$ sudo dnf provides "*Dockerfile"
Not root, Subscription Management repositories not updated
buildah-tests-1:1.24.2-4.el9_0.x86_64 : Tests for buildah
Repo        : @System
Matched from:
Other       : *Dockerfile

$ rpm -ql buildah-tests | egrep "Dockerfile|Containerfile"
/usr/share/buildah/test/system/bud/add-chmod/Dockerfile
/usr/share/buildah/test/system/bud/add-chmod/Dockerfile.bad
...
Dockerfile instructions, with explanation and example:

  • FROM - Base image. Example: FROM registry.redhat.io/ubi8/ubi:8.5
  • MAINTAINER - Sets the image author. Example: MAINTAINER Magnus K Karlsson <magnus.k.karlsson@antigo.se>
  • LABEL - Adds metadata to an image. Example: LABEL com.example.version="0.0.1-beta"
  • ARG - "Defines a variable that users can pass at build-time to the builder with the docker build command using the --build-arg <varname>=<value> flag." Example: ARG user1=someuser
  • ENV - Sets the environment variable <key> to the value <value>. Examples: ENV MY_NAME="John Doe" and ENV PORT=8080
  • RUN - Executes a command in a new layer on top of the current image. Example: RUN dnf install -y httpd
  • USER - "Use USER to change to a non-root user". "Avoid switching USER back and forth frequently". Example: USER apache
  • EXPOSE - Documents the port(s) the container listens on. Example: EXPOSE ${PORT}
  • ADD or COPY - Copies files into the image; "generally speaking, COPY is preferred". ADD also supports local tar file auto-extraction into the image, as in ADD rootfs.tar.xz /, and remote URLs, as in ADD http://example.com/foobar /. Example: ADD files.tar.gz ${APACHE_HOME}
  • WORKDIR - Sets the working directory. Example: WORKDIR ${APACHE_HOME}
  • VOLUME - Defines a volume mount point. Example: VOLUME ${APACHE_HOME}/data
  • ENTRYPOINT - "set the image's main command ... then use CMD as the default flags". Example: ENTRYPOINT ["/usr/sbin/httpd"]
  • CMD - Default command, or default arguments to ENTRYPOINT. Example: CMD ["sh", "my-start.sh"]

Reference:

https://docs.docker.com/engine/reference/builder/

https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#dockerfile-instructions

Examples

The order of instructions matters for USER: first create the container user (with a specific uid and gid), then switch to it with the USER instruction. All later RUN instructions, and the container at runtime, execute as that user.

Note, however, that files added with COPY and ADD are owned by root by default regardless of USER; use the --chown flag (for example COPY --chown=myservice:myservice ...) or an explicit chown in a RUN instruction to give them to the dedicated user.

FROM registry.access.redhat.com/ubi9/ubi
MAINTAINER Magnus K Karlsson <magnus.k.karlsson@antigo.se>
ENV PORT 8080
RUN dnf install -y httpd && \
  sed -i "s/Listen 80/Listen ${PORT}/g" /etc/httpd/conf/httpd.conf && \
  chown -R apache:apache /etc/httpd/logs/ && \
  chown -R apache:apache /run/httpd/
USER apache
EXPOSE ${PORT}
COPY ./index.html /var/www/html
CMD ["httpd", "-D", "FOREGROUND"]
FROM registry.redhat.io/ubi8/ubi:8.5
MAINTAINER Magnus K Karlsson <magnus.k.karlsson@antigo.se>

ARG MYSERVICE_VERSION=1.0.0
ENV MYSERVICE_HOME=/opt/myservice

RUN yum install -y java-1.8.0-openjdk-devel

RUN groupadd -g 2001 myservice && \
  useradd -u 2001 -g 2001 myservice && \
  mkdir -p ${MYSERVICE_HOME} && \
  chown -R myservice:myservice ${MYSERVICE_HOME} && \
  chmod -R 755 ${MYSERVICE_HOME}

USER myservice
EXPOSE 8080

ADD myservice-${MYSERVICE_VERSION}.tar.gz ${MYSERVICE_HOME}
ADD myservice-start.sh ${MYSERVICE_HOME}

WORKDIR ${MYSERVICE_HOME}

VOLUME ${MYSERVICE_HOME}/data

CMD ["sh", "myservice-start.sh"]

Build, Tag and Push

If you use the official Red Hat registry, or any other registry that requires authentication, you must first log in.

$ podman login registry.redhat.io --username you@domain.com

$ podman login quay.io --username you_username

Build, tag and push

$ podman build -t httpd-24-custom:1.0 -f Dockerfile .

$ podman tag localhost/httpd-24-custom quay.io/magnus_k_karlsson/httpd-24-custom:1.0

$ podman push quay.io/magnus_k_karlsson/httpd-24-custom:1.0

Then run the pushed image to verify it; the container name and host port below simply mirror the earlier example in this post:
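$ podman run -d --name httpd-24-custom -p 18080:8080 quay.io/magnus_k_karlsson/httpd-24-custom:1.0

$ curl http://127.0.0.1:18080/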

Common Git Commands

Installation RHEL 9.0 and Fedora 35

$ sudo dnf install git

Configuration

$ git config --global --add user.email "you@example.com"
$ git config --global --add user.name "Username"

# Cache for 1 hour
$ git config --global credential.helper "cache --timeout=3600"

# Cache for 1 day
$ git config --global credential.helper "cache --timeout=86400"

# Cache for 1 week
$ git config --global credential.helper "cache --timeout=604800"

$ git config --global -l
user.email="you@example.com"
user.name="Username"
credential.helper=cache

$ git config --global --unset user.email
$ git config --global --unset user.name
$ git config --global --unset credential.helper

Working with github.com

GitHub has disabled basic authentication (username and password) for Git over HTTPS and now requires personal access tokens instead.

https://github.com/settings/tokens

"Personal access tokens function like ordinary OAuth access tokens. They can be used instead of a password for Git over HTTPS, or can be used to authenticate to the API over Basic Authentication."

You use such a token in place of your normal password when logging in.

$ git clone https://github.com/magnuskkarlsson/DO180-apps.git

$ cd DO180-apps/

$ git status

Create a local branch and push it to the remote.

$ git checkout -b branch_delete

$ git push -u origin branch_delete
username for 'https://github.com': <"Username">
Password for 'https://you@github.com': <Your Personal Access Token>

$ git branch -a

Delete the local branch and the remote branch.

$ git checkout master

$ git branch -d branch_delete

$ git push origin --delete branch_delete

You cannot delete a file from the remote directly; remove it locally with git rm, then commit and push the deletion.

$ touch FOO

$ git add FOO

$ git commit -m "added FOO"

$ git push -u origin master 

$ git rm FOO

$ git add .

$ git commit -m "deleted FOO"

$ git push -u origin master 

July 21, 2022

RHEL 9.0 Container Tools, Podman and Networking

Podman v4.0 Networking

Podman v4.0 supports two network back ends for containers, Netavark and CNI. Starting with RHEL 9, systems use Netavark by default.

$ podman info 
host:
...
  networkBackend: netavark
...

$ podman network ls
NETWORK ID    NAME        DRIVER
2f259bab93aa  podman      bridge

$ podman network inspect podman 
[
     {
          "name": "podman",
          "id": "2f259bab93aaaaa2542ba43ef33eb990d0999ee1b9924b557b7be53c0b7a1bb9",
          "driver": "bridge",
          "network_interface": "podman0",
          "created": "2022-07-21T15:43:10.660389642+02:00",
          "subnets": [
               {
                    "subnet": "10.88.0.0/16",
                    "gateway": "10.88.0.1"
               }
          ],
          "ipv6_enabled": false,
          "internal": false,
          "dns_enabled": false,
          "ipam_options": {
               "driver": "host-local"
          }
     }
]

Existing containers on the host that use the default Podman network cannot resolve each other's hostnames, because DNS is not enabled on the default network.

Use the podman network create command to create a DNS-enabled network.

$ podman network create --gateway 10.87.0.1 --subnet 10.87.0.0/16 db_net
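Unlike the default podman network, networks created this way have DNS enabled, which can be verified with (it should report "dns_enabled": true):

$ podman network inspect db_net | grep dns_enabled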

Non-Root User

Red Hat recommends using a non-privileged user to manage and run containers.

You need to log in as an interactive user:

$ ssh student@192.168.122.33

Connecting Two Containers with Networking

$ podman search ubi
NAME                                                DESCRIPTION
registry.access.redhat.com/ubi8/ubi                 Provides the latest release of the Red Hat Universal Base Image 8
registry.access.redhat.com/ubi9/ubi                 rhcc_registry.access.redhat.com_ubi9/ubi
...

$ vim Dockerfile
FROM registry.access.redhat.com/ubi9/ubi:latest
RUN dnf install -y python3 iputils procps-ng
CMD ["/bin/bash", "-c", "sleep infinity"]

$ podman build -t python3:0.2 -f Dockerfile .

$ podman run -d --name python3-01 localhost/python3:0.2

$ podman run -d --name python3-02 localhost/python3:0.2

$ podman exec -it python3-01 ps -aux
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root           1  0.0  0.1   4912  1336 ?        Ss   14:13   0:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/s
root           2  0.0  0.4  15048  5764 pts/0    Rs+  14:13   0:00 ps -aux

$ podman exec -it python3-01 ping -c 3 python3-02
ping: python3-02: Name or service not known

$ podman kill python3-01 python3-02

$ podman rm python3-01 python3-02

$ podman network create backend

$ podman run -d --name python3-01 --network backend localhost/python3:0.2

$ podman run -d --name python3-02 --network backend localhost/python3:0.2

$ podman exec -it python3-01 ping -c 3 python3-02
PING python3-02.dns.podman (10.89.0.3) 56(84) bytes of data.
64 bytes from 10.89.0.3 (10.89.0.3): icmp_seq=1 ttl=64 time=0.061 ms

$ podman exec -it python3-02 ping -c 3 python3-01
PING python3-01.dns.podman (10.89.0.2) 56(84) bytes of data.
64 bytes from 10.89.0.2 (10.89.0.2): icmp_seq=1 ttl=64 time=0.032 ms

$ podman inspect python3-01
...
          "NetworkSettings": {
...
               "Networks": {
                    "backend": {
                         "EndpointID": "",
                         "Gateway": "10.89.0.1",
                         "IPAddress": "10.89.0.2",
                         "IPPrefixLen": 24,
                         "IPv6Gateway": "",
                         "GlobalIPv6Address": "",
                         "GlobalIPv6PrefixLen": 0,
                         "MacAddress": "fe:98:7f:9b:c2:6d",
                         "NetworkID": "backend",
                         "DriverOpts": null,
                         "IPAMConfig": null,
                         "Links": null,
                         "Aliases": [
                              "72b3e9a0d515"
                         ]
                    }
               }
          },
...

$ podman network create db_net
$ podman network connect db_net python3-01
$ podman network connect db_net python3-02

RHEL 9.0 Container Tools, Podman, Volume, SELinux and Systemd

Introduction Container Tools

Container Management Tools

  • podman manages containers and container images.
  • skopeo inspects, copies, deletes, and signs images.
  • buildah creates container images.

Red Hat Official Container Repos:

  • registry.redhat.io for containers that are based on official Red Hat products.
  • registry.connect.redhat.com for containers that are based on third-party products.

The default configuration file for container registries is the /etc/containers/registries.conf file.

Red Hat recommends using a non-privileged user to manage and run containers.

Getting Started with Container Tools

You need to log in as an interactive user:

$ ssh student@192.168.122.33

$ sudo dnf install container-tools

$ man 5 containers-registries.conf
...
       Container engines will use the $HOME/.config/containers/registries.conf if it exists, otherwise they will use /etc/containers/registries.conf
...

$ podman info 
...
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - quay.io
  - docker.io
...

$ mkdir ~/.config/containers/
$ cp /etc/containers/registries.conf ~/.config/containers/registries.conf
$ vim ~/.config/containers/registries.conf
$ diff ~/.config/containers/registries.conf /etc/containers/registries.conf
22c22
< unqualified-search-registries = ["registry.access.redhat.com"]
---
> unqualified-search-registries = ["registry.fedoraproject.org", "registry.access.redhat.com", "registry.centos.org", "quay.io", "docker.io"]

$ podman search httpd
NAME                                                                         DESCRIPTION
registry.access.redhat.com/ubi9/httpd-24                                     rhcc_registry.access.redhat.com_ubi9/httpd-24
registry.access.redhat.com/rhscl/httpd-24-rhel7                              Apache HTTP 2.4 Server
registry.access.redhat.com/ubi8/httpd-24                                     Platform for running Apache httpd 2.4 or building httpd-based applicatio

$ skopeo inspect docker://registry.access.redhat.com/ubi8/httpd-24

$ podman pull registry.access.redhat.com/ubi8/python-38:latest

$ podman images

$ podman search ubi8

Building Custom Images

$ vim Dockerfile
FROM registry.access.redhat.com/ubi8/ubi
RUN dnf install -y python36 procps-ng
CMD ["/bin/bash", "-c", "sleep infinity"]

$ podman build --help
...
Examples:
  podman build .
  podman build --creds=username:password -t imageName -f Containerfile.simple .
...

$ podman build -t python36:0.1 -f Dockerfile .

$ podman images
REPOSITORY                           TAG         IMAGE ID      CREATED             SIZE
localhost/python36                   0.1         99d353d9a60e  About a minute ago  443 MB
registry.access.redhat.com/ubi8/ubi  latest      2fd9e1478809  4 weeks ago         225 MB

$ podman inspect localhost/python36:0.1
...
          "History": [
...
               {
                    "created": "2022-07-20T13:07:44.802532647Z",
                    "created_by": "/bin/sh -c dnf install -y python36 procps-ng",
                    "comment": "FROM registry.access.redhat.com/ubi8/ubi:latest"
               },
               {
                    "created": "2022-07-20T13:07:50.558640619Z",
                    "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\", \"-c\", \"sleep infinity\"]",
                    "empty_layer": true
               }
          ],
...

$ podman run -d --name python36 localhost/python36:0.1

$ podman ps
CONTAINER ID  IMAGE                   COMMAND               CREATED        STATUS            PORTS       NAMES
914c01e88482  localhost/python36:0.1  /bin/bash -c slee...  2 minutes ago  Up 2 minutes ago              python36

$ podman logs python36

$ podman exec --help
Run a process in a running container

Description:
  Execute the specified command inside a running container.


Usage:
  podman exec [options] CONTAINER [COMMAND [ARG...]]

Examples:
  podman exec -it ctrID ls
...

$ podman exec -it python36 ps -aux

Running MariaDB with Persistent Volume and Modified User Namespace

$ podman search mariadb
NAME                                                       DESCRIPTION
registry.access.redhat.com/rhscl/mariadb-101-rhel7         MariaDB server 10.1 for OpenShift and general usage
registry.access.redhat.com/rhscl/mariadb-100-rhel7         MariaDB 10.0 SQL database server
registry.access.redhat.com/openshift3/mariadb-apb          Ansible Playbook Bundle application definition for 
registry.access.redhat.com/rhscl/mariadb-102-rhel7         MariaDB is a multi-user, multi-threaded SQL database server. The container image provides a containerized packaging of the MariaDB mysqld daemon and client application. The mysqld server daemon accepts connections from clients and provides access to content from MariaDB databases on behalf of the clients.
registry.access.redhat.com/rhosp12/openstack-mariadb       Red Hat OpenStack Container image for openstack-mariadb

$ skopeo inspect docker://registry.access.redhat.com/rhscl/mariadb-102-rhel7
...
        "usage": "docker run -d -e MYSQL_USER=user -e MYSQL_PASSWORD=pass -e MYSQL_DATABASE=db -p 3306:3306 rhscl/mariadb-102-rhel7",
...
        "HOME=/var/lib/mysql",
...

$ podman run -d --name mariadb-102-rhel7 \
  -p 3306:3306 \
  --env MYSQL_ROOT_PASSWORD=redhat123 \
  --env MYSQL_DATABASE=mydb \
  --env MYSQL_USER=myuser \
  --env MYSQL_PASSWORD=redhat123 \
  registry.access.redhat.com/rhscl/mariadb-102-rhel7

$ podman ps
$ podman logs mariadb-102-rhel7

$ sudo dnf provides mysql
...
mysql-8.0.28-1.el9.x86_64 : MySQL client programs and shared libraries

$ sudo dnf install -y mysql

$ mysql --host=127.0.0.1 --port=3306 --user=myuser --password=redhat123 --execute='show databases;' mydb

$ podman exec -it mariadb-102-rhel7 ps -aux 
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
mysql          1  0.1  5.2 1544128 67856 ?       Ssl  13:31   0:00 /opt/rh/rh-mariadb102/root/usr/libexec/mysqld --defaults-fil
mysql        237  0.0  0.2  51748  3320 pts/0    Rs+  13:35   0:00 ps -aux

$ podman exec -it mariadb-102-rhel7 id mysql
uid=27(mysql) gid=27(mysql) groups=27(mysql),0(root)

$ podman unshare --help
Run a command in a modified user namespace

Description:
  Runs a command in a modified user namespace.

Usage:
  podman unshare [options] [COMMAND [ARG...]]

Examples:
  podman unshare id
  podman unshare cat /proc/self/uid_map,
  podman unshare podman-script.sh

$ mkdir /home/student/mariadb-102-rhel7-data
$ podman unshare chown -R 27:27 /home/student/mariadb-102-rhel7-data

$ podman stop mariadb-102-rhel7
$ podman rm mariadb-102-rhel7

$ podman run -d --name mariadb-102-rhel7 \
  -p 3306:3306 \
  --env MYSQL_ROOT_PASSWORD=redhat123 \
  --env MYSQL_DATABASE=mydb \
  --env MYSQL_USER=myuser \
  --env MYSQL_PASSWORD=redhat123 \
  -v /home/student/mariadb-102-rhel7-data:/var/lib/mysql:Z \
  registry.access.redhat.com/rhscl/mariadb-102-rhel7

$ podman logs mariadb-102-rhel7

$ mysql --host=127.0.0.1 --port=3306 --user=myuser --password=redhat123 --execute='show databases;' mydb

Running Apache with Persistent Volume and as a Systemd User Service

$ podman search httpd
NAME                                                                         DESCRIPTION
registry.access.redhat.com/rhscl/httpd-24-rhel7                              Apache HTTP 2.4 Server
registry.access.redhat.com/ubi9/httpd-24                                     rhcc_registry.access.redhat.com_ubi9/httpd-24
registry.access.redhat.com/ubi8/httpd-24                                     Platform for running Apache httpd 2.4 or building httpd-
...

$ mkdir /home/student/httpd-24-data
$ echo "HELLO WORLD" > /home/student/httpd-24-data/index.html

$ podman run -d --name httpd-24 \
  -p 8080:8080 \
  -v /home/student/httpd-24-data:/var/www/html:Z \
  registry.access.redhat.com/ubi8/httpd-24 
  
$ podman ps
$ podman logs httpd-24
$ curl http://127.0.0.1:8080/
HELLO WORLD

$ man podman-generate-systemd
...
              $ sudo podman generate systemd --new --files --name bb310a0780ae
...
       To run the user services placed in $HOME/.config/systemd/user on first login of that user, enable the  service  with
       --user flag.

              $ systemctl --user enable <.service>

       The  systemd user instance is killed after the last session for the user is closed. The systemd user instance can be
       kept running even after the user logs out by enabling lingering using

              $ loginctl enable-linger <username>
...

$ podman generate systemd --new --files --name httpd-24

$ mkdir -p /home/student/.config/systemd/user

$ mv /home/student/container-httpd-24.service /home/student/.config/systemd/user

$ podman stop httpd-24
$ podman rm httpd-24

$ systemctl --user daemon-reload 

$ systemctl --user enable --now container-httpd-24.service

$ systemctl --user status container-httpd-24.service
$ podman ps
$ podman logs httpd-24
$ curl http://127.0.0.1:8080/

$ sudo loginctl enable-linger student

$ sudo loginctl show-user student 
...
Linger=yes

July 19, 2022

RHEL 9.0 Finding Files with find

$ man find
...
       -name pattern
              Base  of  file  name  (the path with the leading directories removed) matches shell pattern pattern. 

       A numeric argument n can be specified to tests (like -amin, -mtime, -gid, -inum, -links, -size, -uid and -used) as

       +n     for greater than n,

       -n     for less than n,

       n      for exactly n.

       -mmin n
              File's data was last modified less than, more than or exactly n minutes ago.

       -perm -mode
              All of the permission bits mode are set for the file.  Symbolic modes are accepted in this form, and this is usu‐
              ally the way in which you would want to use them.  You must specify `u', `g' or `o' if you use a  symbolic  mode.
              See the EXAMPLES section for some illustrative examples.

       -size n[cwbkMG]
              File uses less than, more than or exactly n units of space, rounding up.  The following suffixes can be used:

              `b'    for 512-byte blocks (this is the default if no suffix is used)

              `c'    for bytes

              `w'    for two-byte words

              `k'    for kibibytes (KiB, units of 1024 bytes)

              `M'    for mebibytes (MiB, units of 1024 * 1024 = 1048576 bytes)

              `G'    for gibibytes (GiB, units of 1024 * 1024 * 1024 = 1073741824 bytes)

              The size is simply the st_size member of the struct stat populated by the lstat (or stat) system call, rounded up
              as  shown  above.  In other words, it's consistent with the result you get for ls -l.  Bear in mind that the `%k'
              and `%b' format specifiers of -printf handle sparse files differently.  The `b' suffix  always  denotes  512-byte
              blocks and never 1024-byte blocks, which is different to the behaviour of -ls.

              The + and - prefixes signify greater than and less than, as usual; i.e., an exact size of n units does not match.
              Bear in mind that the size  is  rounded  up  to  the  next  unit.   Therefore  -size -1M  is  not  equivalent  to
              -size -1048576c.  The former only matches empty files, the latter matches files from 0 to 1,048,575 bytes.

       -gid n File's numeric group ID is less than, more than or exactly n.

       -type c
              File is of type c:

              b      block (buffered) special

              c      character (unbuffered) special

              d      directory

              p      named pipe (FIFO)

              f      regular file

              l      symbolic link; this is never true if the -L option or the -follow option is in effect, unless the symbolic
                     link is broken.  If you want to search for symbolic links when -L is in effect, use -xtype.

              s      socket

              D      door (Solaris)

              To search for more than one type at once, you can supply the combined list of type letters separated by  a  comma
              `,' (GNU extension).

       -uid n File's numeric user ID is less than, more than or exactly n.
...

$ sudo find /etc -name '*pass*'
$ sudo find /home -user student
$ sudo find /home -group student 
$ sudo find -uid 1000
$ sudo find -gid 1000
$ sudo find /etc -perm 644 -ls
$ sudo find / -size +1G 2> /dev/null

When used with / or - signs, the 0 value works as a wildcard because it means any permission.

$ sudo find / -perm -1000 -ls

To search for files for which the user has read permissions, or the group has at least read permissions, or others have at least write permission:

$ sudo find /home -perm /442 -ls

To search for all files whose content changed exactly 120 minutes ago:

$ sudo find / -mmin 120

To search for all files whose content changed more than 200 minutes ago:

$ sudo find / -mmin +200

The following example lists files that changed less than 150 minutes ago:

$ sudo find / -mmin -150

Search for all directories in the /etc directory:

$ sudo find /etc -type d

Search for all soft links in the / directory:

$ sudo find / -type l

Search for all block devices in the /dev directory:

$ sudo find /dev -type b

Search for all regular files with more than one hard link:

$ sudo find / -type f -links +1

RHEL 9.0 Archive and Compress Using tar with gzip, bzip2 and xz

$ sudo dnf install tar bzip2

$ man tar
...
    -c or --create : Create an archive file.
    -t or --list : List the contents of an archive.
    -x or --extract : Extract an archive.

   Compression options
       -z, --gzip, --gunzip, --ungzip
              Filter the archive through gzip(1).
       -j, --bzip2
              Filter the archive through bzip2(1).
       -J, --xz
              Filter the archive through xz(1).
       -Z, --compress, --uncompress
              Filter the archive through compress(1).

       -p, --preserve-permissions, --same-permissions
              extract information about file permissions (default for superuser)

   Extended file attributes
       --acls Enable POSIX ACLs support.
       --no-acls
              Disable POSIX ACLs support.
       --selinux
              Enable SELinux context support.
       --no-selinux
              Disable SELinux context support.
       --xattrs
              Enable extended attributes support.

       -C, --directory=DIR
              Change to DIR before performing any operations.  This option is order-sensitive, i.e. it affects all options that follow.
...

$ tar -cvf mybackup.tar anaconda-ks.cfg FOO
$ tar -czvf mybackup.tar.gz anaconda-ks.cfg FOO /etc
$ tar -cjvf mybackup.tar.bz2 anaconda-ks.cfg FOO
$ tar -cJvf mybackup.tar.xz anaconda-ks.cfg FOO

$ tar -tvf mybackup.tar
$ tar -tzvf mybackup.tar.gz
$ tar -tjvf mybackup.tar.bz2
$ tar -tJvf mybackup.tar.xz

$ tar -xzvf backup.tar.gz -C /tmp

RHEL 9.0 Configure Networking from the Command Line

Show

$ sudo nmcli device status 
DEVICE  TYPE      STATE      CONNECTION 
enp1s0  ethernet  connected  enp1s0     
lo      loopback  unmanaged  --       

$ sudo nmcli connection show 
NAME    UUID                                  TYPE      DEVICE 
enp1s0  05b0507e-85d1-330e-836f-40dec2d378c6  ethernet  enp1s0 

$ sudo nmcli connection show --active

$ sudo nmcli connection show enp1s0

Add Static IPv4 Connection

$ sudo nmcli connection add con-name enp1s0-stat ifname enp1s0 type ethernet ipv4.method manual ipv4.addresses 192.168.122.100/24 ipv4.gateway 192.168.122.1 ipv4.dns 192.168.122.1

$ sudo nmcli connection up enp1s0-stat

Add Dynamic IPv4 Connection

$ sudo nmcli connection add con-name enp1s0-dyn ifname enp1s0 type ethernet ipv4.method auto

$ sudo nmcli connection up enp1s0-dyn

Miscellaneous

Starting in Red Hat Enterprise Linux 8, ifcfg format configuration files and the /etc/sysconfig/network-scripts/ directory are deprecated. NetworkManager now uses an INI-style key file format, which is a key-value pair structure to organize properties. NetworkManager stores network profiles in the /etc/NetworkManager/system-connections/ directory. For compatibility with earlier versions, ifcfg format connections in the /etc/sysconfig/network-scripts/ directory are still recognized and loaded.

Changes made with the nmcli con mod name command are stored in the /etc/NetworkManager/system-connections/ directory.
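For example, the static connection added earlier ends up as an INI-style keyfile; the exact file name follows the connection id, so for enp1s0-stat it should be something like:

$ sudo cat /etc/NetworkManager/system-connections/enp1s0-stat.nmconnection

$ sudo nmcli -f ipv4 connection show enp1s0-stat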

$ sudo man 5 NetworkManager.conf

$ sudo hostnamectl set-hostname host.example.com

$ sudo hostnamectl status

$ sudo nmcli connection mod ID +ipv4.dns IP

$ sudo cat /etc/resolv.conf

Modify the new connection so that it also uses the IP address 10.0.1.1/24.

$ sudo nmcli connection mod "lab" +ipv4.addresses 10.0.1.1/24

Configure the hosts file so that you can reference the 10.0.1.1 IP address with the private name.

$ sudo echo "10.0.1.1 private" >> /etc/hosts

RHEL 9.0 Troubleshooting Networking

Troubleshoot Connectivity Between Hosts

$ man ping
...
       -c count
           Stop after sending count ECHO_REQUEST packets. With deadline
           option, ping waits for count ECHO_REPLY packets, until the timeout
           expires.
...

$ ping -c 3 8.8.8.8

Troubleshoot Router Issues

$ ip route show
default via 192.168.122.1 dev enp1s0 proto dhcp metric 100 
192.168.122.0/24 dev enp1s0 proto kernel scope link src 192.168.122.58 metric 100 

Use the ip command -6 option to show the IPv6 routing table. 
$ ip -6 route

$ tracepath access.redhat.com
 1?: [LOCALHOST]                      pmtu 1500
 1:  fedora                                                0.451ms 
 1:  fedora                                                0.316ms 
 2:  home                                                  6.161ms 
...

Troubleshoot Port and Service Issues

Well-known names for standard ports are listed in the /etc/services file. The ss command is used to display socket statistics. The ss command replaces the older netstat tool, from the net-tools package, which might be more familiar to some system administrators but is not always installed.
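A common first check is to list only listening TCP sockets with numeric ports and the owning processes, and to look up a well-known port in /etc/services:

$ sudo ss -tlnp

$ grep -w 3306 /etc/services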

$ ss -ta
$ ss --tcp --all

-n, --numeric       don't resolve service names
-t, --tcp           display only TCP sockets
-u, --udp           display only UDP sockets
-a, --all           display all sockets
-l, --listening     display listening sockets
-p, --processes     show process using socket
-A, --query=QUERY, --socket=QUERY
   QUERY := {all|inet|tcp|mptcp|udp|raw|unix|unix_dgram|unix_stream|unix_seqpacket|packet|netlink|vsock_stream|vsock_dgram|tipc}[,QUERY]

$ ss -a -A inet
Netid      State       Recv-Q      Send-Q                   Local Address:Port                  Peer Address:Port        Process      
icmp6      UNCONN      0           0                                    *:ipv6-icmp                        *:*                        
udp        ESTAB       0           0                192.168.122.58%enp1s0:bootpc               192.168.122.1:bootps                   
udp        UNCONN      0           0                            127.0.0.1:323                        0.0.0.0:*                        
udp        UNCONN      0           0                                [::1]:323                           [::]:*                        
tcp        LISTEN      0           128                            0.0.0.0:ssh                        0.0.0.0:*                        
tcp        ESTAB       0           0                       192.168.122.58:ssh                  192.168.122.1:38358                    
tcp        LISTEN      0           128                               [::]:ssh                           [::]:*    

RHEL 9.0 Getting Basic Network Settings

$ ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:d2:30:c4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.58/24 brd 192.168.122.255 scope global dynamic noprefixroute enp1s0
       valid_lft 3184sec preferred_lft 3184sec
    inet6 fe80::5054:ff:fed2:30c4/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
  • state UP - An active interface is UP.
  • link/ether 52:54:00:d2:30:c4 - The link/ether string specifies the hardware (MAC) address of the device.
  • inet 192.168.122.58/24 - The inet string shows an IPv4 address, its network prefix length, and scope.

$ ip -s link show enp1s0
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:d2:30:c4 brd ff:ff:ff:ff:ff:ff
    RX:  bytes packets errors dropped  missed   mcast           
      33591874   26436      0   17583       0       0 
    TX:  bytes packets errors dropped carrier collsns           
       1399002    7811      0       0       0       0 

If the destination network does not match a more specific entry, then the packet is routed using the 0.0.0.0/0 default entry. This default route points to the gateway router on a local subnet that the host can reach.
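To see which route, device, and source address the kernel would actually pick for a given destination, the routing table can be queried directly:

$ ip route get 8.8.8.8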

$ ip route show
default via 192.168.122.1 dev enp1s0 proto dhcp metric 100 
192.168.122.0/24 dev enp1s0 proto kernel scope link src 192.168.122.58 metric 100 

A more complex example.

$ ip route show
default via 192.168.1.1 dev wlp2s0 proto dhcp metric 600 
192.168.1.0/24 dev wlp2s0 proto kernel scope link src 192.168.1.142 metric 600 
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown 
192.168.130.0/24 dev crc proto kernel scope link src 192.168.130.1 linkdown 

Getting DNS configuration.

$ cat /etc/resolv.conf 
# Generated by NetworkManager
search 0-01.mkk.se
nameserver 192.168.122.1

July 14, 2022

RHEL 9.0 Boot in emergency.target

Introduction

# systemctl list-units --type target --all 
  UNIT
  emergency.target
  rescue.target
...

# systemctl get-default 
multi-user.target

Boot in emergency.target

When the boot-loader menu appears, press any key to interrupt the countdown, except Enter.

Use the cursor keys to highlight the default boot-loader entry.

Press e to edit the current entry.

Use the cursor keys to navigate to the line that starts with the linux text.

Press Ctrl+e to move the cursor to the end of the line.

Append the systemd.unit=emergency.target text to the end of the line.

Press Ctrl+x to boot using the modified configuration.

# mount -o remount,rw /

# mount -a

# vim /etc/fstab

# systemctl daemon-reload

# mount -a

# systemctl reboot

RHEL 9.0 Reset the Root Password

When the boot-loader menu appears, press any key to interrupt the countdown, except the Enter key.

Use the cursor keys to highlight the rescue kernel boot-loader entry (the one with the word rescue in its name).

Press e to edit the current entry.

Use the cursor keys to navigate to the line that starts with the linux text.

Press Ctrl+e to move the cursor to the end of the line.

Append the rd.break text to the end of the line.

Press Ctrl+x to boot using the modified configuration.

Press Enter to enter the maintenance mode.

sh-5.1# mount -o remount,rw /sysroot

sh-5.1# chroot /sysroot

sh-5.1# passwd root

sh-5.1# touch /.autorelabel

RHEL 9.0 Stratis

Introduction

# man stratis
...
EXAMPLES
       Example 1. Creating a Stratis pool

       stratis pool create mypool /dev/sdb /dev/sdc

       Example 2. Creating an encrypted pool

       stratis key set --capture-key someKeyDescription

       stratis pool create --key-desc someKeyDescription mypool /dev/sdb /dev/sdc

       Example 3. Creating a filesystem from a pool

       stratis filesystem create mypool data1
...

# lsblk /dev/vdc -pf
NAME     FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINTS
/dev/vdc              

Configure

# dnf install -y stratis-cli stratisd

# systemctl enable --now stratisd

# stratis pool create mypool /dev/vdc

# stratis filesystem create mypool data1

# lsblk /dev/stratis/mypool/data1 --output UUID
UUID
e119c223-029f-4b45-a204-3672e37c556f

# find /usr/share/doc/ -type f | xargs grep x-systemd.requires
grep: /usr/share/doc/python3-setuptools/python: No such file or directory
grep: 2: No such file or directory
grep: sunset.rst: No such file or directory
/usr/share/doc/systemd/NEWS:        * New /etc/fstab options x-systemd.requires= and
/usr/share/doc/systemd/NEWS:          x-systemd.requires-mounts-for= are now supported to express

# mkdir /stratis

# vim /etc/fstab
...
UUID=e119c223-029f-4b45-a204-3672e37c556f   /stratis    xfs   defaults,x-systemd.requires=stratisd.service    0 0

# mount -a

Test

# echo "FOO" > /stratis/foo; cat /stratis/foo

RHEL 9.0 Virtual Data Optimizer (VDO)

Introduction

# dnf install -y vdo kmod-kvdo

Prerequisite

# parted /dev/vdb print
Model: Virtio Block Device (virtblk)
Disk /dev/vdb: 10.7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start  End  Size  File system  Name  Flags

Configure

# parted /dev/vdb mkpart first 0G 10G
# parted /dev/vdb set 1 lvm on

# pvcreate /dev/vdb1
# vgcreate myvg-vdo /dev/vdb1 

# lvcreate --name mylv-vdo --size 5G --type vdo myvg-vdo

# lsblk /dev/vdb -fp
NAME                                  FSTYPE FSVER LABEL UUID                                   FSAVAIL FSUSE% MOUNTPOINTS
/dev/vdb                                                                                                       
└─/dev/vdb1                           LVM2_m LVM2        3O1e3e-NMIt-1y5q-jvWk-ONQh-doOn-3V719c                
  └─/dev/mapper/myvg--vdo-vpool0_vdata
                                                                                                               
    └─/dev/mapper/myvg--vdo-vpool0-vpool
                                                                                                               
      └─/dev/mapper/myvg--vdo-mylv--vdo

# mkfs.xfs /dev/mapper/myvg--vdo-mylv--vdo

# mkdir /myvg--vdo-mylv--vdo
# vim /etc/fstab
...
/dev/mapper/myvg--vdo-mylv--vdo   /myvg--vdo-mylv--vdo    xfs   defaults    0 0

# mount -a

Test

# echo "FOO" > /myvg--vdo-mylv--vdo/foo; cat /myvg--vdo-mylv--vdo/foo
FOO

RHEL 9.0 LVM, Extend and Swap

Introduction

Logical Volume Manager (LVM)

Physical Volumes (PVs)

Volume Groups (VGs)

Logical Volumes (LVs)

Create Partition Table, PV, VG and LV

# lsblk -fp
NAME                          FSTYPE      FSVER    LABEL UUID                                   FSAVAIL FSUSE% MOUNTPOINTS
/dev/sr0                                                                                                       
/dev/vda                                                                                                       
├─/dev/vda1                   xfs                        e8e38d31-36a2-4ad7-9668-94023cd80424    817.8M    19% /boot
└─/dev/vda2                   LVM2_member LVM2 001       y0wzxQ-mGYD-OfjS-LxuL-l2gJ-Rfgt-h5UgR6                
  ├─/dev/mapper/rhel_rhel9-root
  │                           xfs                        221f6235-b21f-48e6-befc-489e271de1f0     15.9G     6% /
  └─/dev/mapper/rhel_rhel9-swap
                              swap        1              245cf443-6a9e-4d32-b7ad-0cbf15a9020d                  [SWAP]
/dev/vdb                                                                                                       
/dev/vdc             

# man parted 
...
              mklabel label-type
                     Create a new disklabel (partition table) of label-type.   label-type  should  be  one  of  "aix",
                     "amiga", "bsd", "dvh", "gpt", "loop", "mac", "msdos", "pc98", or "sun".

              mkpart [part-type name fs-type] start end
                     Create  a  new partition. part-type may be specified only with msdos and dvh partition tables, it
                     should be one of "primary", "logical", or "extended".  name is required for GPT partition  tables
                     and  fs-type  is  optional.   fs-type  can  be  one  of "btrfs", "ext2", "ext3", "ext4", "fat16",
                     "fat32", "hfs", "hfs+", "linux-swap", "ntfs", "reiserfs", "udf", or "xfs".
...
              set partition flag state
                     Change the state of the flag on partition to state.  Supported flags are: "boot", "root", "swap",
                     "hidden", "raid", "lvm",  "lba",  "legacy_boot",  "irst",  "msftres",  "esp",  "chromeos_kernel",
                     "bls_boot" and "palo".  state should be either "on" or "off".
...

# parted /dev/vdb mklabel gpt

# parted /dev/vdb mkpart first 0G 3G
# parted /dev/vdb set 1 lvm on 

# parted /dev/vdb mkpart second 3G 6G
# parted /dev/vdb set 2 lvm on 

# parted /dev/vdb print
Model: Virtio Block Device (virtblk)
Disk /dev/vdb: 10.7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name    Flags
 1      1049kB  3000MB  2999MB               first   lvm
 2      3000MB  6000MB  3000MB               second  lvm

# lsblk -fp
NAME                          FSTYPE      FSVER    LABEL UUID                                   FSAVAIL FSUSE% MOUNTPOINTS
...
/dev/vdb                                                                                                       
├─/dev/vdb1                                                                                                    
└─/dev/vdb2                     

# pvcreate /dev/vdb1 /dev/vdb2

# vgcreate myvg01 /dev/vdb1

# lvcreate --name mylv01 --size 2.7G myvg01

# lsblk /dev/vdb -fp
NAME                          FSTYPE      FSVER    LABEL UUID                                   FSAVAIL FSUSE% MOUNTPOINTS
/dev/vdb                                                                                                       
├─/dev/vdb1                   LVM2_member LVM2 001       soNUus-2dYc-cTHE-OTXg-Ks1y-hQ6U-TLMgGz                
│ └─/dev/mapper/myvg01-mylv01 xfs                        5ac89db3-6bec-41f1-866d-e6afc3241ccd                  
└─/dev/vdb2                   LVM2_member LVM2 001       MIXCZT-j2af-G0e7-dN30-yeCW-NL9W-m73AFA           

# mkfs.xfs /dev/mapper/myvg01-mylv01

# mkdir /myvg01-mylv01

# vim /etc/fstab
...
/dev/mapper/myvg01-mylv01   /myvg01-mylv01    xfs   defaults    0 0

# mount -a

# echo "FOO" > /myvg01-mylv01/foo; cat /myvg01-mylv01/foo
FOO

Extend VG, LV and Resize Filesystem

# vgextend myvg01 /dev/vdb2

# lvextend --size +2.7G --resizefs /dev/mapper/myvg01-mylv01

# df -h /myvg01-mylv01/
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/myvg01-mylv01  5.4G   72M  5.4G   2% /myvg01-mylv01

Create Swap

# parted /dev/vdb mkpart third 6G 8G

# parted /dev/vdb set 3 lvm on

# pvcreate /dev/vdb3

# vgcreate myvg02 /dev/vdb3

# lvcreate --name mylv02 --size 1.8G myvg02

# mkswap /dev/mapper/myvg02-mylv02

# free 
               total        used        free      shared  buff/cache   available
Mem:         1301304      193984      770756        6976      336564      952776
Swap:        2097148           0     2097148

# swapon /dev/mapper/myvg02-mylv02

# free 
               total        used        free      shared  buff/cache   available
Mem:         1301304      194772      769936        6976      336596      951988
Swap:        3985400           0     3985400

# vim /etc/fstab
...
/dev/mapper/myvg02-mylv02   none    swap    defaults    0 0

# swapon -a

Note: mount -a does not activate swap entries in /etc/fstab; swapon -a does.
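
A quick check that the fstab swap entry is picked up:

# swapon --show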

July 12, 2022

RHEL 9.0 Install NFS 4 Server and Client. Configure Mount and Automount Direct and Indirect Map

RHEL 9.0 Install NFS 4 Server

Lets start with one server and install NFS 4.

# dnf install -y nfs-utils

# man 5 exports
...
       root_squash
              Map requests from uid/gid 0 to the anonymous uid/gid. Note that this does not apply to any other uids or gids  that  might  be
              equally sensitive, such as user bin or group staff.
...
EXAMPLE
       # sample /etc/exports file
       /               master(rw) trusty(rw,no_root_squash)
...

Before configuring the NFS 4 server, we will create a couple of directories with specific file permissions.

The users created below get fixed UIDs and GIDs, since these need to match on the client machines.

# mkdir -p /nfs-share/john
# mkdir -p /nfs-share/jane
# mkdir -p /nfs-share/alice
# mkdir /nfs-share/tmp

# groupadd --gid 1101 john
# groupadd --gid 1102 jane
# groupadd --gid 1103 alice

# useradd --uid 1101 --gid 1101 john
# useradd --uid 1102 --gid 1102 jane
# useradd --uid 1103 --gid 1103 alice

# chown john:john /nfs-share/john
# chown jane:jane /nfs-share/jane
# chown alice:alice /nfs-share/alice

# chmod 750 /nfs-share/john
# chmod 750 /nfs-share/jane
# chmod 750 /nfs-share/alice
# chmod 1777 /nfs-share/tmp

# cp /etc/skel/.bash* /nfs-share/john/
# cp /etc/skel/.bash* /nfs-share/jane/
# cp /etc/skel/.bash* /nfs-share/alice/

# chown john:john /nfs-share/john/.bash*
# chown jane:jane /nfs-share/jane/.bash*
# chown alice:alice /nfs-share/alice/.bash*

And now for the NFS 4 Server configuration.

# vim /etc/exports
/nfs-share/john *(rw,root_squash) 
/nfs-share/jane *(rw,root_squash) 
/nfs-share/alice *(rw,root_squash) 

# systemctl enable --now nfs-server.service 

# firewall-cmd --add-service=nfs; firewall-cmd --add-service=nfs --permanent

Install NFS 4 on RHEL 9.0 Client

# dnf install -y nfs-utils

NFSv3 uses the RPC protocol, which requires the file server to run the rpcbind service. An NFSv3 client connects to rpcbind on port 111 on the server to request NFS service, and the server responds with the current port for the NFS service. Use the showmount command to query the available exports on an RPC-based NFSv3 server.

# showmount --exports server

NFSv4 introduced an export tree that contains all of the paths for the server's exported directories.

$ sudo mount 192.168.122.76:/ /mnt
$ ls /mnt/
nfs-share
$ sudo umount /mnt

There are 4 different ways to mount NFS shares.

Way 1: Temporary Mount

$ sudo mkdir -p /nfs-share/tmp
$ sudo mount -t nfs -o rw,sync 192.168.122.76:/nfs-share/tmp /nfs-share/tmp

$ sudo mount | grep 192.168.122.76
192.168.122.76:/ on /mnt type nfs4 (rw,relatime,vers=4.2,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.122.58,local_lock=none,addr=192.168.122.76)

$ sudo umount /nfs-share/tmp

Way 2: Permanent Mount

$ sudo mkdir -p /nfs-share/tmp
$ sudo vim /etc/fstab
...
192.168.122.76:/nfs-share/tmp   /nfs-share/tmp    nfs   rw,sync   0 0

$ sudo mount -a
$ sudo systemctl daemon-reload

Way 3 and Way 4: Automount Direct Map and Automount Indirect Map

Difference between Automount Direct Map and Indirect Map

A direct map serves a well-known, unchanging mount point whose full path is known beforehand. An indirect map is the opposite: entries are mounted under a base directory, e.g. user home directories under /home, where you do not know beforehand which user will log in to a specific server.
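
For illustration only (hypothetical export and server name), the two map styles differ in how the key is written. A direct map uses the full absolute path as its key:

/-                /etc/auto.direct                          (master map)
/data/projects    -rw,sync    server:/export/projects       (map file /etc/auto.direct)

An indirect map uses a key relative to the base directory given in the master map:

/shares           /etc/auto.shares                          (master map)
projects          -rw,sync    server:/export/projects       (map file /etc/auto.shares, mounted as /shares/projects)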

Way 3: Automount Direct Map

$ sudo dnf install -y autofs nfs-utils

$ man 5 auto.master
...
       For direct maps the mount point is always specified as:

       /-
...
EXAMPLE
         /-        auto.data
         /home     /etc/auto.home
         /mnt      yp:mnt.map

       This will generate two mountpoints for /home and /mnt and install direct mount triggers for each entry in the di‐
       rect mount map auto.data.  All accesses to /home will lead to the consultation of the map in  /etc/auto.home  and
       all  accesses  to /mnt will consult the NIS map mnt.map.  All accesses to paths in the map auto.data will trigger
       mounts when they are accessed and the Name Service Switch configuration will be used to locate the source of  the
       map auto.data.

       To  avoid  making edits to /etc/auto.master, /etc/auto.master.d may be used.  Files in that directory must have a
       ".autofs" suffix, e.g.  /etc/auto.master.d/extra.autofs.  Such files contain lines of the same format as the  au‐
       to.master file, e.g.

         /foo    /etc/auto.foo
         /baz    yp:baz.map
...

$ sudo vim /etc/auto.master.d/nfs-share-direct-tmp.autofs
/-    /etc/auto.nfs-share-direct-tmp

$ sudo vim /etc/auto.nfs-share-direct-tmp
/nfs-share-direct/tmp    -rw,sync    192.168.122.76:/nfs-share/tmp

$ sudo systemctl enable --now autofs

$ sudo mount | grep nfs-share-direct-tmp
/etc/auto.nfs-share-direct-tmp on /nfs-share-direct/tmp type autofs (rw,relatime,fd=17,pgrp=6250,timeout=300,minproto=5,maxproto=5,direct,pipe_ino=74858)

$ echo "HELLO" > /nfs-share-direct/tmp/HELLO

$ cat /nfs-share-direct/tmp/HELLO
HELLO

Way 4: Automount Indirect Map

$ sudo dnf install -y autofs nfs-utils

$ sudo vim /etc/auto.master.d/nfs-share-indirect-tmp.autofs
/nfs-share-indirect   /etc/auto.nfs-share-indirect-tmp

/nfs-share-indirect is the base for the final mount point. The file it points to is called the map file.

# vim /etc/auto.nfs-share-indirect-tmp
tmp   -rw,sync    192.168.122.76:/nfs-share/tmp

The final mount point (path) is the base path from the master map combined with the key from the map file, e.g. /nfs-share-indirect/tmp.

Both the directory /nfs-share-indirect and /nfs-share-indirect/tmp are created and removed automatically by the autofs service.

# systemctl enable --now autofs

$ man 5 autofs
...
              -fstype=
                     is used to specify a filesystem type if the filesystem is not of the default NFS type.  This option
                     is processed by the automounter and not by the mount command.

              -strict
                     is  used  to treat errors when mounting file systems as fatal. This is important when multiple file
                     systems should be mounted (`multi-mounts'). If this option is given, no file system is  mounted  at
                     all if at least one file system can't be mounted.
...
EXAMPLE
       Indirect map:

         kernel    -ro,soft            ftp.kernel.org:/pub/linux
         boot      -fstype=ext2        :/dev/hda1
         windoze   -fstype=smbfs       ://windoze/c
         removable -fstype=ext2        :/dev/hdd
         cd        -fstype=iso9660,ro  :/dev/hdc
         floppy    -fstype=auto        :/dev/fd0
         server    -rw,hard            / -ro myserver.me.org:/ \
                                       /usr myserver.me.org:/usr \
                                       /home myserver.me.org:/home

       In the first line we have a NFS remote mount of the kernel directory on ftp.kernel.org.  This  is  mounted  read-
       only.   The  second  line  mounts an ext2 volume from a local ide drive.  The third makes a share exported from a
       Windows machine available for automounting.  The rest should be fairly self-explanatory. The last entry (the last
       three lines) is an example of a multi-map (see below).

       If  you use the automounter for a filesystem without access permissions (like vfat), users usually can't write on
       such a filesystem because it is mounted as user  root.   You  can  solve  this  problem  by  passing  the  option
       gid=<gid>,  e.g. gid=floppy. The filesystem is then mounted as group floppy instead of root. Then you can add the
       users to this group, and they can write to the filesystem. Here's an example entry for an autofs map:

         floppy-vfat  -fstype=vfat,sync,gid=floppy,umask=002  :/dev/fd0

       Direct map:

         /nfs/apps/mozilla             bogus:/usr/local/moxill
         /nfs/data/budgets             tiger:/usr/local/budgets
         /tst/sbin                     bogus:/usr/sbin

FEATURES
   Map Key Substitution
       An & character in the location is expanded to the value of the key field that matched the  line  (which  probably
       only makes sense together with a wildcard key).

   Wildcard Key
       A map key of * denotes a wild-card entry. This entry is consulted if the specified key does not exist in the map.
       A typical wild-card entry looks like this:

         *         server:/export/home/&

       The special character '&' will be replaced by the provided key.  So, in the example above, a lookup for  the  key
       'foo' would yield a mount of server:/export/home/foo.
...

To map users' home directories.

$ sudo vim /etc/auto.master.d/nfs-share-indirect-home.autofs
/home   /etc/auto.nfs-share-indirect-home

$ vim /etc/auto.nfs-share-indirect-home
*   -rw,sync    192.168.122.76:/nfs-share/&

# systemctl enable --now autofs

# groupadd --gid 1101 john
# useradd --uid 1101 --gid 1101 john
# passwd john
# su - john 

$ echo "JOHN" > john
$ pwd
/home/john

July 10, 2022

Networking Basics

Netmask (n 1s, the rest 0s)

Network address (all host bits are 0s)

Broadcast address (all host bits are 1s)

Address range for hosts on subnet (Network address + 1 to Broadcast address - 1)

Number of hosts in network (2^h - 2)


IP address: 192.168.122.58/24 

Netmask: 255.255.255.0

Network: 192.168.122.0

Broadcast: 192.168.122.255

Range: 192.168.122.1 - 192.168.122.254

Number of hosts: 2^(32-24) - 2 = 254

IP address: 172.168.181.23/19

1010 1100 . 1010 1000 . 1011 0101 . 0001 0111
1010 1100 . 1010 1000 . 101                     19 first bits

Netmask (n 1s, the rest 0s)

1111 1111 . 1111 1111 . 1110 0000 . 0000 0000   255.255.224.0

Network address	(all host bits are 0s)

1010 1100 . 1010 1000 . 1010 0000 . 0000 0000   172.168.160.0

Broadcast address (all host bits are 1s)

1010 1100 . 1010 1000 . 1011 1111 . 1111 1111   172.168.191.255

Address range for hosts on subnet (Network address + 1 to Broadcast address - 1)

172.168.160.1 - 172.168.191.254

Number of hosts in network (2^h - 2)

2^(32-19) - 2 = 2^13 - 2 = 8190

IP address: 192.168.1.100/25

1100 0000 . 1010 1000 . 0000 0001 . 0110 0100
1100 0000 . 1010 1000 . 0000 0001 . 0           25 first bits

Netmask (n 1s, the rest 0s)

1111 1111 . 1111 1111 . 1111 1111 . 1000 0000   255.255.255.128

Network address	(all host bits are 0s)

1100 0000 . 1010 1000 . 0000 0001 . 0000 0000   192.168.1.0

Broadcast address (all host bits are 1s)

1100 0000 . 1010 1000 . 0000 0001 . 0111 1111   192.168.1.127

Address range for hosts on subnet (Network address + 1 to Broadcast address - 1)

192.168.1.1 - 192.168.1.126

Number of hosts in network (2^h - 2)

2^(32-25) - 2 = 2^7 - 2 = 126

IP address: 172.16.5.34/26

1010 1100 . 0001 0000 . 0000 0101 . 0010 0010
1010 1100 . 0001 0000 . 0000 0101 . 00          26 first bits

Netmask (n 1s, the rest 0s)

1111 1111 . 1111 1111 . 1111 1111 . 1100 0000   255.255.255.192

Network address	(all host bits are 0s)

1010 1100 . 0001 0000 . 0000 0101 . 0000 0000   172.16.5.0

Broadcast address (all host bits are 1s)

1010 1100 . 0001 0000 . 0000 0101 . 0011 1111   172.16.5.63

Address range for hosts on subnet (Network address + 1 to Broadcast address - 1)

172.16.5.1 - 172.16.5.62

Number of hosts in network (2^h - 2)

2^(32-26) - 2 = 2^6 - 2 = 62
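
The calculations above can be cross-checked from the shell with Python's ipaddress module (just one way to verify, shown here for the last example):

$ python3 -c 'import ipaddress; n = ipaddress.ip_network("172.16.5.34/26", strict=False); print(n.network_address, n.broadcast_address, n.netmask, n.num_addresses - 2)'
172.16.5.0 172.16.5.63 255.255.255.192 62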

RHEL 9.0 Managing Date, Time and Time Zone

$ sudo timedatectl 
               Local time: Sun 2022-07-10 18:27:46 CEST
           Universal time: Sun 2022-07-10 16:27:46 UTC
                 RTC time: Sun 2022-07-10 16:27:46
                Time zone: Europe/Stockholm (CEST, +0200)
System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no

$ sudo timedatectl -h
timedatectl [OPTIONS...] COMMAND ...

Query or change system time and date settings.

Commands:
  status                   Show current time settings
  show                     Show properties of systemd-timedated
  set-time TIME            Set system time
  set-timezone ZONE        Set system time zone
  list-timezones           Show known time zones
  set-local-rtc BOOL       Control whether RTC is in local time
  set-ntp BOOL             Enable or disable network time synchronization

systemd-timesyncd Commands:
  timesync-status          Show status of systemd-timesyncd
  show-timesync            Show properties of systemd-timesyncd

Options:
  -h --help                Show this help message
     --version             Show package version
     --no-pager            Do not pipe output into a pager
     --no-ask-password     Do not prompt for password
  -H --host=[USER@]HOST    Operate on remote host
  -M --machine=CONTAINER   Operate on local container
     --adjust-system-clock Adjust system clock when changing local RTC mode
     --monitor             Monitor status of systemd-timesyncd
  -p --property=NAME       Show only properties by this name
  -a --all                 Show all properties, including empty ones
     --value               When showing properties, only print the value

See the timedatectl(1) man page for details.
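
The set-timezone command from the help output above changes the time zone directly (Europe/Stockholm is the zone already shown in the status output):

$ sudo timedatectl set-timezone Europe/Stockholm
$ timedatectl | grep "Time zone"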

$ sudo timedatectl list-timezones
$ sudo timedatectl set-ntp true
$ sudo chronyc sources -v

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current best, '+' = combined, '-' = not combined,
| /             'x' = may be in error, '~' = too variable, '?' = unusable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^- ntp1.vmar.se                  2   7   377   114   +527us[ +527us] +/-   28ms
^- ec2-16-16-55-166.eu-nort>     2   7   377   116  -1587us[-1283us] +/-   43ms
^* time.cloudflare.com           3   7   377   115   +519us[ +823us] +/- 2212us
^- lul1.ntp.netnod.se            1   7   377   115  -3057us[-2753us] +/-   14ms

$ sudo cat /etc/chrony.conf
$ sudo man 5 chrony.conf
$ sudo systemctl status chronyd.service 
● chronyd.service - NTP client/server
     Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
...

RHEL 9.0 Managing journalctl

$ sudo journalctl -p err
$ sudo journalctl --since "2022-07-01" --until "2022-07-10 15:00:00"
       -S, --since=, -U, --until=
           Start showing entries on or newer than the specified date, or on or older than the specified date, respectively.
           Date specifications should be of the format "2012-10-30 18:17:16". If the time part is omitted, "00:00:00" is
           assumed.
$ sudo journalctl _PID=1
$ sudo journalctl _UID=81
EXAMPLES
...
         _SYSTEMD_UNIT=name.service
             + UNIT=name.service _PID=1
             + OBJECT_SYSTEMD_UNIT=name.service _UID=0
             + COREDUMP_UNIT=name.service _UID=0 MESSAGE_ID=fc2e22bc6ee647b6b90729ab34a250b1
...
$ sudo cat /etc/systemd/journald.conf 
...
# See journald.conf(5) for details.

[Journal]
#Storage=auto
$ sudo man 5 journald.conf
...
       Storage=
           Controls where to store journal data. One of "volatile", "persistent", "auto" and "none". If "volatile", journal
           log data will be stored only in memory, i.e. below the /run/log/journal hierarchy (which is created if needed).
...
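To make the journal survive reboots, set Storage=persistent (or keep Storage=auto and create /var/log/journal); the restart below applies the change. A minimal sketch:

$ sudo mkdir -p /var/log/journal
$ sudo vim /etc/systemd/journald.conf
...
[Journal]
Storage=persistent
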
$ sudo systemctl restart systemd-journald.service

RHEL 9.0 Manage Systemd Units

$ sudo systemctl list-units --type service --all

Systemd units can be of three kinds:

  • Service units have a .service extension and represent system services.
  • Socket units have a .socket extension and represent inter-process communication (IPC) sockets that systemd should monitor.
  • Path units have a .path extension and delay the activation of a service until a specific file-system change occurs.

Custom or override systemd units are stored in:

/etc/systemd/system/

System default or RPM-installed systemd units are stored in:

/usr/lib/systemd/system/

$ sudo systemctl cat sshd.service
$ sudo systemctl edit sshd.service
$ sudo systemctl daemon-reload
$ sudo systemctl status sshd.service 
$ sudo systemctl restart sshd.service 
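
systemctl edit creates a drop-in file at /etc/systemd/system/sshd.service.d/override.conf; anything placed there overrides the packaged unit. A minimal sketch of such an override (the Restart settings are only an example, not part of the original unit):

[Service]
Restart=on-failure
RestartSec=5s

After saving, run daemon-reload and restart as shown above.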

Masking a unit completely disables it, so that any start operation on it fails.

$ sudo systemctl mask sendmail.service
$ sudo systemctl unmask sendmail.service
$ sudo systemctl enable httpd.service
$ sudo systemctl disable httpd.service 
$ sudo systemctl status httpd.service 
$ sudo systemctl is-enabled httpd.service 

RHEL 9.0 Administratively Log Out Users

# man w
...
NAME
       w - Show who is logged on and what they are doing.
...
# w
 14:08:39 up 3 min,  2 users,  load average: 0.03, 0.08, 0.03
USER     TTY        LOGIN@   IDLE   JCPU   PCPU WHAT
student  pts/0     14:04    1:16   0.07s  0.05s sshd: student [priv]
root     pts/1     14:07    1.00s  0.04s  0.01s w
# man pgrep
...
NAME
       pgrep,  pkill, pidwait - look up, signal, or wait for processes based on name and other at‐
       tributes
...
# pgrep -l -u student
1725 systemd
1728 (sd-pam)
1735 sshd
1736 bash
# kill -l
 1) SIGHUP	 2) SIGINT	 3) SIGQUIT	 4) SIGILL	 5) SIGTRAP
 6) SIGABRT	 7) SIGBUS	 8) SIGFPE	 9) SIGKILL	10) SIGUSR1
11) SIGSEGV	12) SIGUSR2	13) SIGPIPE	14) SIGALRM	15) SIGTERM

Use SIGTERM first, then try SIGINT; only if both fail, try again with SIGKILL.

# pkill -SIGKILL -u student

Verify that all of the user's processes are terminated with pgrep and w.
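
A minimal check:

# pgrep -l -u student
# w

pgrep should now return nothing, and student should no longer appear in the w output.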

July 9, 2022

RHEL 9.0 Bash Completion and in VIM use Space Instead of Tab

$ sudo dnf install vim-enhanced bash-completion -y
$ sudo vim /etc/vimrc
...
set tabstop=2
set shiftwidth=2
set expandtab

Set default editor to vim.

$ sudo vim /etc/profile
...
export EDITOR=vim

After installing bash-completion you need to log out and then log in again before completion takes effect.
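
A quick way to confirm that completion is active in the new shell (a minimal check, assuming bash-completion 2.x, which defines the _init_completion helper):

$ type -t _init_completion

This prints "function" when bash-completion is loaded.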