Posted by: admin | On: 17 October 2019

Download the oc command-line client from the Minishift web console (link in the top-right corner).
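Alternatively, Minishift bundles the oc client and can print the command that puts it on your PATH (minishift oc-env is the standard helper; in PowerShell, pipe its output through Invoke-Expression):

minishift oc-env
# In PowerShell, apply it in one go:
& minishift oc-env | Invoke-Expression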

This template sets up a complete CI/CD environment for you, with Gogs, Jenkins, and SonarQube, backed by a PostgreSQL database.

I’ve encountered startup issues on my desktop: a setup with at least 10 GB of RAM is required. If some components fail on first start, restarting the processes individually should get it working :)
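If only one component is stuck after increasing the RAM, it can be redeployed on its own rather than recreating the whole stack (a sketch, assuming the deployment names created by the demo template and that you are in the project it was created in):

# Redeploy individual components one at a time
oc rollout latest dc/jenkins
oc rollout latest dc/gogs
oc rollout latest dc/sonarqube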

oc process -f https://raw.githubusercontent.com/OpenShiftDemos/openshift-cd-demo/master/cicd-gogs-template.yaml | oc create -f -

oc process -f  https://raw.githubusercontent.com/siamaksade/openshift-cd-demo/ocp-4.1/cicd-template.yaml | oc create -f -
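Both templates are parameterized, so the defaults can be overridden at processing time; for example, enabling the Quay.io integration (a sketch; myuser and mypassword are placeholders for your own Quay.io credentials):

oc process -f https://raw.githubusercontent.com/siamaksade/openshift-cd-demo/ocp-4.1/cicd-template.yaml \
  -p ENABLE_QUAY=true -p QUAY_USERNAME=myuser -p QUAY_PASSWORD=mypassword \
  | oc create -f -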


# Create Projects
oc new-project dev --display-name="Tasks - Dev"
oc new-project stage --display-name="Tasks - Stage"
oc new-project cicd --display-name="CI/CD"

# Grant Jenkins Access to Projects
oc policy add-role-to-group edit system:serviceaccounts:cicd -n dev
oc policy add-role-to-group edit system:serviceaccounts:cicd -n stage
# Deploy Demo
oc new-app -n cicd -f https://raw.githubusercontent.com/siamaksade/openshift-cd-demo/ocp-4.1/cicd-template.yaml
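# Optional sanity check (not part of the original demo script):
# watch the CI/CD components come up before moving on
oc get pods -n cicd -w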
# the YAML template is shown below

apiVersion: v1
kind: Template
labels:
  template: cicd
  group: cicd
metadata:
  annotations:
    iconClass: icon-jenkins
    tags: instant-app,jenkins,gogs,nexus,cicd
  name: cicd
message: "Use the following credentials for login:\nJenkins: use your OpenShift credentials\nNexus: admin/admin123\nSonarQube: admin/admin\nGogs Git Server: gogs/gogs"
parameters:
- displayName: DEV project name
  value: dev
  name: DEV_PROJECT
  required: true
- displayName: STAGE project name
  value: stage
  name: STAGE_PROJECT
  required: true
- displayName: Ephemeral
  description: Use no persistent storage for Gogs and Nexus
  value: "true"
  name: EPHEMERAL
  required: true
- description: Webhook secret
  from: '[a-zA-Z0-9]{8}'
  generate: expression
  name: WEBHOOK_SECRET
  required: true
- displayName: Integrate Quay.io
  description: Integrate image build and deployment with Quay.io
  value: "false"
  name: ENABLE_QUAY
  required: true
- displayName: Quay.io Username
  description: Quay.io username to push the images to tasks-sample-app repository on your Quay.io account
  name: QUAY_USERNAME
- displayName: Quay.io Password
  description: Quay.io password to push the images to tasks-sample-app repository on your Quay.io account
  name: QUAY_PASSWORD
- displayName: Quay.io Image Repository
  description: Quay.io repository for pushing Tasks container images
  name: QUAY_REPOSITORY
  required: true
  value: tasks-app
objects:
- apiVersion: v1
  groupNames: null
  kind: RoleBinding
  metadata:
    name: default_admin
  roleRef:
    name: admin
  subjects:
  - kind: ServiceAccount
    name: default
# Pipeline
- apiVersion: v1
  kind: BuildConfig
  metadata:
    annotations:
      pipeline.alpha.openshift.io/uses: '[{"name": "jenkins", "namespace": "", "kind": "DeploymentConfig"}]'
    labels:
      app: cicd-pipeline
      name: cicd-pipeline
    name: tasks-pipeline
  spec:
    triggers:
      - type: GitHub
        github:
          secret: ${WEBHOOK_SECRET}
      - type: Generic
        generic:
          secret: ${WEBHOOK_SECRET}
    runPolicy: Serial
    source:
      type: None
    strategy:
      jenkinsPipelineStrategy:
        env:
        - name: DEV_PROJECT
          value: ${DEV_PROJECT}
        - name: STAGE_PROJECT
          value: ${STAGE_PROJECT}
        - name: ENABLE_QUAY
          value: ${ENABLE_QUAY}
        jenkinsfile: |-
          def mvnCmd = "mvn -s configuration/cicd-settings-nexus3.xml"

          pipeline {
            agent {
              label 'maven'
            }
            stages {
              stage('Build App') {
                steps {
                  git branch: 'eap-7', url: 'http://gogs:3000/gogs/openshift-tasks.git'
                  sh "${mvnCmd} install -DskipTests=true"
                }
              }
              stage('Test') {
                steps {
                  sh "${mvnCmd} test"
                  step([$class: 'JUnitResultArchiver', testResults: '**/target/surefire-reports/TEST-*.xml'])
                }
              }
              stage('Code Analysis') {
                steps {
                  script {
                    sh "${mvnCmd} sonar:sonar -Dsonar.host.url=http://sonarqube:9000 -DskipTests=true"
                  }
                }
              }
              stage('Archive App') {
                steps {
                  sh "${mvnCmd} deploy -DskipTests=true -P nexus3"
                }
              }
              stage('Build Image') {
                steps {
                  sh "cp target/openshift-tasks.war target/ROOT.war"
                  script {
                    openshift.withCluster() {
                      openshift.withProject(env.DEV_PROJECT) {
                        openshift.selector("bc", "tasks").startBuild("--from-file=target/ROOT.war", "--wait=true")
                      }
                    }
                  }
                }
              }
              stage('Deploy DEV') {
                steps {
                  script {
                    openshift.withCluster() {
                      openshift.withProject(env.DEV_PROJECT) {
                        openshift.selector("dc", "tasks").rollout().latest();
                      }
                    }
                  }
                }
              }
              stage('Promote to STAGE?') {
                agent {
                  label 'skopeo'
                }
                steps {
                  timeout(time:15, unit:'MINUTES') {
                      input message: "Promote to STAGE?", ok: "Promote"
                  }

                  script {
                    openshift.withCluster() {
                      if (env.ENABLE_QUAY.toBoolean()) {
                        withCredentials([usernamePassword(credentialsId: "${openshift.project()}-quay-cicd-secret", usernameVariable: "QUAY_USER", passwordVariable: "QUAY_PWD")]) {
                          sh "skopeo copy docker://quay.io/${QUAY_USERNAME}/${QUAY_REPOSITORY}:latest docker://quay.io/${QUAY_USERNAME}/${QUAY_REPOSITORY}:stage --src-creds \"$QUAY_USER:$QUAY_PWD\" --dest-creds \"$QUAY_USER:$QUAY_PWD\" --src-tls-verify=false --dest-tls-verify=false"
                        }
                      } else {
                        openshift.tag("${env.DEV_PROJECT}/tasks:latest", "${env.STAGE_PROJECT}/tasks:stage")
                      }
                    }
                  }
                }
              }
              stage('Deploy STAGE') {
                steps {
                  script {
                    openshift.withCluster() {
                      openshift.withProject(env.STAGE_PROJECT) {
                        openshift.selector("dc", "tasks").rollout().latest();
                      }
                    }
                  }
                }
              }
            }
          }
      type: JenkinsPipeline
- apiVersion: v1
  kind: ConfigMap
  metadata:
    labels:
      app: cicd-pipeline
      role: jenkins-slave
    name: jenkins-slaves
  data:
    maven-template: |-
      <org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
        <inheritFrom></inheritFrom>
        <name>maven</name>
        <privileged>false</privileged>
        <alwaysPullImage>false</alwaysPullImage>
        <instanceCap>2147483647</instanceCap>
        <idleMinutes>0</idleMinutes>
        <label>maven</label>
        <serviceAccount>jenkins</serviceAccount>
        <nodeSelector></nodeSelector>
        <customWorkspaceVolumeEnabled>false</customWorkspaceVolumeEnabled>
        <workspaceVolume class="org.csanchez.jenkins.plugins.kubernetes.volumes.workspace.EmptyDirWorkspaceVolume">
          <memory>false</memory>
        </workspaceVolume>
        <volumes />
        <containers>
          <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
            <name>jnlp</name>
            <image>openshift/jenkins-agent-maven-35-centos7</image>
            <privileged>false</privileged>
            <alwaysPullImage>false</alwaysPullImage>
            <workingDir>/tmp</workingDir>
            <command></command>
            <args>${computer.jnlpmac} ${computer.name}</args>
            <ttyEnabled>false</ttyEnabled>
            <resourceRequestCpu>200m</resourceRequestCpu>
            <resourceRequestMemory>512Mi</resourceRequestMemory>
            <resourceLimitCpu>2</resourceLimitCpu>
            <resourceLimitMemory>4Gi</resourceLimitMemory>
            <envVars/>
          </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
        </containers>
        <envVars/>
        <annotations/>
        <imagePullSecrets/>
      </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
    skopeo-template: |-
      <org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
        <inheritFrom></inheritFrom>
        <name>skopeo</name>
        <privileged>false</privileged>
        <alwaysPullImage>false</alwaysPullImage>
        <instanceCap>2147483647</instanceCap>
        <idleMinutes>0</idleMinutes>
        <label>skopeo</label>
        <serviceAccount>jenkins</serviceAccount>
        <nodeSelector></nodeSelector>
        <customWorkspaceVolumeEnabled>false</customWorkspaceVolumeEnabled>
        <workspaceVolume class="org.csanchez.jenkins.plugins.kubernetes.volumes.workspace.EmptyDirWorkspaceVolume">
          <memory>false</memory>
        </workspaceVolume>
        <volumes />
        <containers>
          <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
            <name>jnlp</name>
            <image>docker.io/siamaksade/jenkins-slave-skopeo-centos7</image>
            <privileged>false</privileged>
            <alwaysPullImage>false</alwaysPullImage>
            <workingDir>/tmp</workingDir>
            <command></command>
            <args>${computer.jnlpmac} ${computer.name}</args>
            <ttyEnabled>false</ttyEnabled>
            <envVars/>
          </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
        </containers>
        <envVars/>
        <annotations/>
        <imagePullSecrets/>
      </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
# Setup Demo
- apiVersion: batch/v1
  kind: Job
  metadata:
    name: cicd-demo-installer
  spec:
    activeDeadlineSeconds: 400
    completions: 1
    parallelism: 1
    template:
      spec:
        containers:
        - env:
          - name: CICD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          command:
          - /bin/bash
          - -x
          - -c
          - |
            # adjust jenkins
            oc set resources dc/jenkins --limits=cpu=2,memory=2Gi --requests=cpu=100m,memory=512Mi
            oc label dc jenkins app=jenkins --overwrite 

            # setup dev env
            oc import-image wildfly --from=openshift/wildfly-120-centos7 --confirm -n ${DEV_PROJECT} 

            if [ "${ENABLE_QUAY}" == "true" ] ; then
              # cicd
              oc create secret generic quay-cicd-secret --from-literal="username=${QUAY_USERNAME}" --from-literal="password=${QUAY_PASSWORD}" -n ${CICD_NAMESPACE}
              oc label secret quay-cicd-secret credential.sync.jenkins.openshift.io=true -n ${CICD_NAMESPACE}

              # dev
              oc create secret docker-registry quay-cicd-secret --docker-server=quay.io --docker-username="${QUAY_USERNAME}" --docker-password="${QUAY_PASSWORD}" --docker-email=cicd@redhat.com -n ${DEV_PROJECT}
              oc new-build --name=tasks --image-stream=wildfly:latest --binary=true --push-secret=quay-cicd-secret --to-docker --to='quay.io/${QUAY_USERNAME}/${QUAY_REPOSITORY}:latest' -n ${DEV_PROJECT}
              oc new-app --name=tasks --docker-image=quay.io/${QUAY_USERNAME}/${QUAY_REPOSITORY}:latest --allow-missing-images -n ${DEV_PROJECT}
              oc set triggers dc tasks --remove-all -n ${DEV_PROJECT}
              oc patch dc tasks -p '{"spec": {"template": {"spec": {"containers": [{"name": "tasks", "imagePullPolicy": "Always"}]}}}}' -n ${DEV_PROJECT}
              oc delete is tasks -n ${DEV_PROJECT}
              oc secrets link default quay-cicd-secret --for=pull -n ${DEV_PROJECT}

              # stage
              oc create secret docker-registry quay-cicd-secret --docker-server=quay.io --docker-username="${QUAY_USERNAME}" --docker-password="${QUAY_PASSWORD}" --docker-email=cicd@redhat.com -n ${STAGE_PROJECT}
              oc new-app --name=tasks --docker-image=quay.io/${QUAY_USERNAME}/${QUAY_REPOSITORY}:stage --allow-missing-images -n ${STAGE_PROJECT}
              oc set triggers dc tasks --remove-all -n ${STAGE_PROJECT}
              oc patch dc tasks -p '{"spec": {"template": {"spec": {"containers": [{"name": "tasks", "imagePullPolicy": "Always"}]}}}}' -n ${STAGE_PROJECT}
              oc delete is tasks -n ${STAGE_PROJECT}
              oc secrets link default quay-cicd-secret --for=pull -n ${STAGE_PROJECT}
            else
              # dev
              oc new-build --name=tasks --image-stream=wildfly:latest --binary=true -n ${DEV_PROJECT}
              oc new-app tasks:latest --allow-missing-images -n ${DEV_PROJECT}
              oc set triggers dc -l app=tasks --containers=tasks --from-image=tasks:latest --manual -n ${DEV_PROJECT}

              # stage
              oc new-app tasks:stage --allow-missing-images -n ${STAGE_PROJECT}
              oc set triggers dc -l app=tasks --containers=tasks --from-image=tasks:stage --manual -n ${STAGE_PROJECT}
            fi

            # dev project
            oc expose dc/tasks --port=8080 -n ${DEV_PROJECT}
            oc expose svc/tasks -n ${DEV_PROJECT}
            oc set probe dc/tasks --readiness --get-url=http://:8080/ws/demo/healthcheck --initial-delay-seconds=30 --failure-threshold=10 --period-seconds=10 -n ${DEV_PROJECT}
            oc set probe dc/tasks --liveness  --get-url=http://:8080/ws/demo/healthcheck --initial-delay-seconds=180 --failure-threshold=10 --period-seconds=10 -n ${DEV_PROJECT}
            oc rollout cancel dc/tasks -n ${STAGE_PROJECT}

            # stage project
            oc expose dc/tasks --port=8080 -n ${STAGE_PROJECT}
            oc expose svc/tasks -n ${STAGE_PROJECT}
            oc set probe dc/tasks --readiness --get-url=http://:8080/ws/demo/healthcheck --initial-delay-seconds=30 --failure-threshold=10 --period-seconds=10 -n ${STAGE_PROJECT}
            oc set probe dc/tasks --liveness  --get-url=http://:8080/ws/demo/healthcheck --initial-delay-seconds=180 --failure-threshold=10 --period-seconds=10 -n ${STAGE_PROJECT}
            oc rollout cancel dc/tasks -n ${DEV_PROJECT}

            # deploy gogs
            HOSTNAME=$(oc get route jenkins -o template --template='{{.spec.host}}' | sed "s/jenkins-${CICD_NAMESPACE}.//g")
            GOGS_HOSTNAME="gogs-$CICD_NAMESPACE.$HOSTNAME"

            if [ "${EPHEMERAL}" == "true" ] ; then
              oc new-app -f https://raw.githubusercontent.com/siamaksade/gogs-openshift-docker/master/openshift/gogs-template.yaml \
                  --param=GOGS_VERSION=0.11.34 \
                  --param=DATABASE_VERSION=9.6 \
                  --param=HOSTNAME=$GOGS_HOSTNAME \
                  --param=SKIP_TLS_VERIFY=true
            else
              oc new-app -f https://raw.githubusercontent.com/siamaksade/gogs-openshift-docker/master/openshift/gogs-persistent-template.yaml \
                  --param=GOGS_VERSION=0.11.34 \
                  --param=DATABASE_VERSION=9.6 \
                  --param=HOSTNAME=$GOGS_HOSTNAME \
                  --param=SKIP_TLS_VERIFY=true
            fi

            sleep 5

            if [ "${EPHEMERAL}" == "true" ] ; then
              oc new-app -f https://raw.githubusercontent.com/siamaksade/sonarqube/master/sonarqube-template.yml --param=SONARQUBE_MEMORY_LIMIT=2Gi
            else
              oc new-app -f https://raw.githubusercontent.com/siamaksade/sonarqube/master/sonarqube-persistent-template.yml --param=SONARQUBE_MEMORY_LIMIT=2Gi
            fi

            oc set resources dc/sonardb --limits=cpu=200m,memory=512Mi --requests=cpu=50m,memory=128Mi
            oc set resources dc/sonarqube --limits=cpu=1,memory=2Gi --requests=cpu=50m,memory=128Mi

            if [ "${EPHEMERAL}" == "true" ] ; then
              oc new-app -f https://raw.githubusercontent.com/OpenShiftDemos/nexus/master/nexus3-template.yaml --param=NEXUS_VERSION=3.13.0 --param=MAX_MEMORY=2Gi
            else
              oc new-app -f https://raw.githubusercontent.com/OpenShiftDemos/nexus/master/nexus3-persistent-template.yaml --param=NEXUS_VERSION=3.13.0 --param=MAX_MEMORY=2Gi
            fi

            oc set resources dc/nexus --requests=cpu=200m --limits=cpu=2

            GOGS_SVC=$(oc get svc gogs -o template --template='{{.spec.clusterIP}}')
            GOGS_USER=gogs
            GOGS_PWD=gogs

            oc rollout status dc gogs

            _RETURN=$(curl -o /tmp/curl.log -sL --post302 -w "%{http_code}" http://$GOGS_SVC:3000/user/sign_up \
              --form user_name=$GOGS_USER \
              --form password=$GOGS_PWD \
              --form retype=$GOGS_PWD \
              --form email=admin@gogs.com)

            sleep 5

            if [ $_RETURN != "200" ] && [ $_RETURN != "302" ] ; then
              echo "ERROR: Failed to create Gogs admin"
              cat /tmp/curl.log
              exit 255
            fi

            sleep 10

            cat <<EOF > /tmp/data.json
            {
              "clone_addr": "https://github.com/OpenShiftDemos/openshift-tasks.git",
              "uid": 1,
              "repo_name": "openshift-tasks"
            }
            EOF

            _RETURN=$(curl -o /tmp/curl.log -sL -w "%{http_code}" -H "Content-Type: application/json" \
            -u $GOGS_USER:$GOGS_PWD -X POST http://$GOGS_SVC:3000/api/v1/repos/migrate -d @/tmp/data.json)

            if [ $_RETURN != "201" ] ;then
              echo "ERROR: Failed to import openshift-tasks GitHub repo"
              cat /tmp/curl.log
              exit 255
            fi

            sleep 5

            cat <<EOF > /tmp/data.json
            {
              "type": "gogs",
              "config": {
                "url": "https://openshift.default.svc.cluster.local/apis/build.openshift.io/v1/namespaces/$CICD_NAMESPACE/buildconfigs/tasks-pipeline/webhooks/${WEBHOOK_SECRET}/generic",
                "content_type": "json"
              },
              "events": [
                "push"
              ],
              "active": true
            }
            EOF

            _RETURN=$(curl -o /tmp/curl.log -sL -w "%{http_code}" -H "Content-Type: application/json" \
            -u $GOGS_USER:$GOGS_PWD -X POST http://$GOGS_SVC:3000/api/v1/repos/gogs/openshift-tasks/hooks -d @/tmp/data.json)

            if [ $_RETURN != "201" ] ; then
              echo "ERROR: Failed to set webhook"
              cat /tmp/curl.log
              exit 255
            fi

            oc label dc sonarqube "app.kubernetes.io/part-of"="sonarqube" --overwrite
            oc label dc sonardb "app.kubernetes.io/part-of"="sonarqube" --overwrite
            oc label dc jenkins "app.kubernetes.io/part-of"="jenkins" --overwrite
            oc label dc nexus "app.kubernetes.io/part-of"="nexus" --overwrite
            oc label dc gogs "app.kubernetes.io/part-of"="gogs" --overwrite
            oc label dc gogs-postgresql "app.kubernetes.io/part-of"="gogs" --overwrite

          image: quay.io/openshift/origin-cli:v4.0
          name: cicd-demo-installer-job
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
        restartPolicy: Never
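Once the installer job finishes, pushing to the openshift-tasks repository in Gogs fires the webhook created above, or the pipeline can be kicked off by hand (assuming the default cicd project name):

# Trigger the tasks-pipeline BuildConfig manually and track the build
oc start-build tasks-pipeline -n cicd
oc get builds -n cicd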

Posted by: admin | On: 16 October 2019

Installation on Windows 10

* You must be able to run PowerShell as administrator

1 – Install Hyper-V in Windows 10. See Windows instructions.
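If Hyper-V is not enabled yet, it can be turned on from an elevated PowerShell (a standard Windows feature command; a reboot is required afterwards):

# Enable the Hyper-V feature from an elevated PowerShell, then reboot
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All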

2 – Add the user to the local Hyper-V Administrators group.

PS command :


([adsi]"WinNT://./Hyper-V Administrators,group").Add("WinNT://$env:UserDomain/$env:Username,user")

or manually via Control Panel -> Administrative Tools


3 – Add an External Virtual Switch. => (on the left: Virtual Switch Manager)
=> double-check the name of the ASSOCIATED network adapter
First, identify the network adapter to use.
PS command :
Get-NetAdapter


Then store which one you want to use for Minishift network access.
PS command :
$net = Get-NetAdapter -Name 'Wi-Fi'

Finally, create a virtual switch for Hyper-V:
New-VMSwitch -Name "External VM Switch" -AllowManagementOS $True -NetAdapterName $net.Name

I named it "External VM Switch" but the name doesn’t matter.
From now on, PowerShell is no longer required; you can use either PowerShell or cmd.exe.
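To confirm the switch exists, list the Hyper-V virtual switches (Get-VMSwitch ships with the Hyper-V PowerShell module):

# List the Hyper-V virtual switches to confirm the new one was created
Get-VMSwitch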


2) Starting Up Minishift

  • Tip: to get Minishift started on Windows, apply the following command first

minishift config set skip-check-hyperv-driver true
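You can check what was stored with the view subcommand of minishift config:

minishift config view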

Minishift has to be aware of the external virtual switch to use.
You can set its name as a Minishift command argument or as an environment variable.

In theory, the following should be enough, but it will not be the case on Windows 10:
minishift.exe config set hyperv-virtual-switch "External VM Switch"
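Alternatively, the switch name can be supplied through an environment variable (a sketch, assuming Minishift's convention of reading any config option from a MINISHIFT_-prefixed variable):

# PowerShell: expose the virtual switch name to Minishift via the environment
$env:MINISHIFT_HYPERV_VIRTUAL_SWITCH = "External VM Switch"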

minishift start


* PowerShell output

-- Starting profile 'minishift'
-- Check if deprecated options are used ... OK
-- Checking if https://github.com is reachable ... OK
-- Checking if requested OpenShift version 'v3.11.0' is valid ... OK
-- Checking if requested OpenShift version 'v3.11.0' is supported ... OK
-- Checking if requested hypervisor 'hyperv' is supported on this platform ... OK
-- Checking if Powershell is available ... OK
-- Checking if Hyper-V driver is installed ... SKIP
-- Checking if Hyper-V driver is configured to use a Virtual Switch ... SKIP
-- Checking if user is a member of the Hyper-V Administrators group ... SKIP
-- Checking the ISO URL ... OK
-- Downloading OpenShift binary 'oc' version 'v3.11.0'
53.59 MiB / 53.59 MiB [===================================================================================] 100.00% 0s
-- Downloading OpenShift v3.11.0 checksums ... OK
-- Checking if provided oc flags are supported ... OK
-- Starting the OpenShift cluster using 'hyperv' hypervisor ...
-- Minishift VM will be configured with ...
   Memory:    4 GB
   vCPUs :    2
   Disk size: 20 GB

   Downloading ISO 'https://github.com/minishift/minishift-centos-iso/releases/download/v1.15.0/minishift-centos7.iso'
   355.00 MiB / 355.00 MiB [


This command will download the Minishift ISO, but it will then fail on Windows 10 when it tries to retrieve the downloaded ISO, because the path used by the running script assumes a Linux filesystem layout.
After the failure, a simple workaround is to move the downloaded ISO wherever you like and reference it when you run the Minishift start command.
That gives:
minishift start --hyperv-virtual-switch "External VM Switch" --iso-url file://D:/minishift/minishift-centos7.iso
where iso-url is the file URI of the Minishift ISO.
The startup will take some time. At the end you should see something like:


3) Stop Minishift

When you are done using Minishift, in most cases you don’t want to delete all the files created to start it.
You just want to release the memory/CPU resources allocated to Minishift by stopping it:
minishift stop

4) Delete Minishift

A weird issue with Minishift, or the need to start from scratch? Delete the files associated with the Minishift instance/cluster: minishift delete --force

5) Problems starting up Minishift?


Q: I get a broad error but I don’t see the cause.
A: Increase the logging verbosity by adding these arguments to the minishift command: --show-libmachine-logs -v5

Q: I feel that I fixed what was wrong or missing, but the Minishift startup still fails.
A: Delete the startup files and restart Minishift from scratch.
This command will help:
minishift delete --force

6) Opening the Minishift console

Either use the URL provided in the logs or run the console command:

minishift console
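To only print the URL without opening a browser, pass the --url flag:

minishift console --url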


Installing Istio is the next step!


Other sources: the official RHEL documentation, plus:

https://www.marksei.com/openshift-minishift-widnows/

http://myjavaadventures.com/blog/2018/11/15/installer-minishift/

https://blog.dbi-services.com/openshift-on-my-windows-10-laptop-with-minishift/

https://www.informatiweb.net/tutoriels/informatique/11-virtualisation/268–virtualbox-comment-utiliser-et-configurer-les-differents-modes-d-acces-reseau-d-une-machine-virtuelle–2.html#host-only-adapter

https://access.redhat.com/documentation/en-us/red_hat_container_development_kit/3.3/html/getting_started_guide/getting_started_with_container_development_kit