Posted by : admin | On : 25 April 2025

IT security

 

Introduction

IT security has become an indispensable part of protecting information, for individuals as well as for companies. To assess the degree of security to apply, we must first define how important the information to be protected is, and which technical means we want to put in place to protect it.

To secure information systems, the approach consists of:

  • assessing the risks and their criticality: which risks and which threats, on which data and which activity, with which consequences?
This is known as "risk mapping". The quality of the security that will be implemented depends on the quality of this mapping.
  • finding and selecting the countermeasures: what will be secured, when and how?
This is the difficult stage of security choices: in a context of limited resources (in time, skills and money), only some solutions can be implemented.
  • implementing the protections, and verifying their effectiveness.
This is the culmination of the analysis phase, and it is where the protection of the information system really begins. A frequent weakness of this phase is omitting to verify that the protections are actually effective (tests of operation in degraded mode, data recovery tests, malicious attack tests, etc.).

IT security can be addressed at several levels.

In this article we will look more specifically at application-level security, which we can implement with encryption and hashing algorithms. We will explore the Java Security API (JCA, Java Cryptography Architecture) and the Cipher class.


To start with, a good introduction to the field of IT security is an article I found interesting on Techno-Science: http://www.techno-science.net/?onglet=glossaire&definition=6154 and http://igm.univ-mlv.fr/~dr/XPOSE2006/depail/fonctionnement.html

Asymmetric cryptography

Digital signature concepts are mainly based on asymmetric cryptography. This technique makes it possible to encrypt with one key and decrypt with another, the two being independent. For example, imagine that Bob wants to send secret messages to Alice. To do so, they will use asymmetric cryptography. Alice first generates a key pair: a private key (in red) and a public key (in green). These keys have particular properties with regard to the algorithms used: a message encrypted with one key can only be decrypted with the other. They are one-way functions.

Alice generates a key pair

Alice then sends the public key (in green) to Bob. With this key, Bob can encrypt a text and send it to Alice.

Bob encrypts with Alice's public key

By using Alice's public key, Bob is certain of two things:

  • Nobody can read the message, since it is encrypted.
  • Only Alice can decrypt the message, since she is the only one who possesses the private key.

We have just addressed the need for data confidentiality. But asymmetric cryptography can be used in another way: one can also use the private key to encrypt, the public key then being used to decrypt. A message encrypted this way is readable by anyone holding the public key, which is not very useful if confidentiality is the goal. On the other hand, only one person can have encrypted that message: Alice. So if a message can be decrypted with Alice's public key, Alice is necessarily the person who encrypted it.
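To illustrate the confidentiality scenario with the JCA, here is a minimal, self-contained sketch (a hypothetical example: the RSA key size and OAEP padding are choices of mine, not prescribed by the article):

import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;

public class AsymmetricDemo {
    public static void main(String[] args) throws Exception {
        // Alice generates her key pair and publishes the public key
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair aliceKeys = kpg.generateKeyPair();

        // Bob encrypts with Alice's public key
        Cipher cipher = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        cipher.init(Cipher.ENCRYPT_MODE, aliceKeys.getPublic());
        byte[] ciphertext = cipher.doFinal("secret for Alice".getBytes(StandardCharsets.UTF_8));

        // Only Alice, holder of the private key, can decrypt
        cipher.init(Cipher.DECRYPT_MODE, aliceKeys.getPrivate());
        System.out.println(new String(cipher.doFinal(ciphertext), StandardCharsets.UTF_8));
    }
}

In practice RSA only encrypts small payloads: real systems encrypt a random symmetric key this way and encrypt the data itself with that key.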

Hash functions

I will now describe the mechanisms that let us make sure data has not been modified: hash functions. A hash function is a one-way process that produces a sequence of bytes (a digest, or fingerprint) characterizing a set of data. For a given input, the digest obtained is always the same. In the context of digital signatures, we are particularly interested in cryptographic hash functions. These guarantee that it is infeasible to craft an input producing the same digest as another input. We can therefore use these functions to check the integrity of a document. The two most widely used algorithms are MD5 and SHA. Note that MD5 is no longer considered secure by specialists: a Chinese team managed to find a full collision, that is, two data sets producing the same digest, without resorting to brute force. Today it is notably possible to create two HTML pages with different content but identical MD5 digests (in particular by using <meta> tags, invisible in the browser). Document forgery could therefore be possible.
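The properties described above are easy to observe with the JCA MessageDigest class; here is a minimal sketch (a hypothetical example, SHA-256 chosen for illustration):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class DigestDemo {
    public static void main(String[] args) throws Exception {
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        // the same input always produces the same digest
        byte[] d1 = sha256.digest("hello world".getBytes(StandardCharsets.UTF_8));
        // a one-character change produces a completely different digest
        byte[] d2 = sha256.digest("hello world!".getBytes(StandardCharsets.UTF_8));
        System.out.println(toHex(d1));
        System.out.println(toHex(d2));
    }

    private static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }
}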

Signing a document

Signing a document uses both asymmetric cryptography and hash functions. It is the combination of these two techniques that gives a signature its 5 characteristics (authentic, unforgeable, non-reusable, unalterable, irrevocable). Imagine that Alice wants to send a signed document to Bob.

  • First, she generates the digest of the document using a hash function.
  • Then, she encrypts this digest with her private key.

Alice signs the document to send it to Bob

  • She thereby obtains the signature of her document, and sends both elements to Bob.

Alice sends both elements to Bob

  • To check the validity of the document, Bob first decrypts the signature using Alice's public key. If this fails, the document was not sent by Alice.
  • Next, Bob generates the digest of the document he received, using the same hash function as Alice (we assume they follow a protocol agreed on beforehand).
  • Then, he compares the digest he computed with the one extracted from the signature.

Bob verifies Alice's signature

  • If the two digests are identical, the signature is valid. We are then sure that:
    • it was Alice who sent the document,
    • the document has not been modified since Alice signed it.
  • Otherwise, it can mean that:
    • the document was modified after its signature by Alice, or
    • this is not the document that Alice signed.

A concrete sketch of this sign/verify flow follows below.
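Here is a minimal sketch of the flow with the JCA Signature class (a hypothetical, self-contained example; SHA256withRSA computes the document digest and encrypts it with the private key in a single operation):

import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignatureDemo {
    public static void main(String[] args) throws Exception {
        // Alice generates her key pair (the public key is shared with Bob)
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair aliceKeys = kpg.generateKeyPair();

        byte[] document = "contract: Alice pays Bob 100 euros".getBytes(StandardCharsets.UTF_8);

        // Alice signs: SHA-256 digest of the document, encrypted with her private key
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(aliceKeys.getPrivate());
        signer.update(document);
        byte[] signature = signer.sign();

        // Bob verifies: he recomputes the digest and compares it with the one
        // recovered from the signature using Alice's public key
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(aliceKeys.getPublic());
        verifier.update(document);
        System.out.println("Signature valid? " + verifier.verify(signature));
    }
}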

 

 

 

import java.io.UnsupportedEncodingException;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

/**
 * <p>
 * step 1: create a singleton of the service
 * step 2: use the Java Security API (JCA, Java Cryptography Architecture) to obtain a message digest object for the supplied algorithm
 * step 3: convert the plain data to its byte representation using the UTF-8 encoding and generate the array of bytes representing the digested (hashed) password value
 * step 4: create a String representation of the byte array representing the digested password value
 * step 5: store the newly generated hash in a String
 * </p>
 * @author artaud antoine
 */
public final class PasswordServiceEncryption
{
	private static PasswordServiceEncryption instance;

	private PasswordServiceEncryption()
	{
	}

	// step 1: create a singleton of the service
	public static synchronized PasswordServiceEncryption getInstance()
	{
		if (instance == null)
		{
			instance = new PasswordServiceEncryption();
		}
		return instance;
	}

	// note: despite its name, this method hashes the input rather than encrypting it
	public synchronized String encrypt(String plaintext) throws NoSuchAlgorithmException, UnsupportedEncodingException {
		// step 2: obtain a message digest object for the supplied algorithm through the JCA
		MessageDigest md = MessageDigest.getInstance("SHA-512");
		md.reset();
		// step 3: convert the plain data to its byte representation (UTF-8) and digest it
		md.update(plaintext.getBytes("UTF-8"));
		byte[] raw = md.digest();
		// steps 4 and 5: encode the digested bytes as a Base64 String
		// (java.util.Base64 replaces the internal sun.misc.BASE64Encoder used originally)
		String hash = Base64.getEncoder().encodeToString(raw);
		return hash;
	}
}
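Usage is then simply (hypothetical caller):

String hash = PasswordServiceEncryption.getInstance().encrypt("myPassword");

Note that for real password storage, a salted and deliberately slow algorithm (bcrypt, PBKDF2, scrypt...) is preferable to a bare SHA-512 digest.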

Posted by : admin | On : 16 November 2019

npm install -g npm@latest
npm install -g @angular/cli
ng new mon-premier-projet
cd mon-premier-projet
ng serve --open
ng new mon-projet-angular --style=scss --skip-tests=true

npm install bootstrap@3.3.7 --save

ng serve

import { Component } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.scss']
})
export class AppComponent {
  title = 'app';
}

ng generate component mon-premier 

lastUpdate = new Promise((resolve, reject) => {
    const date = new Date();
    setTimeout(
      () => {
        resolve(date);
      }, 2000
    );
  });

<p>Mis à jour : {{ lastUpdate | async | date: 'yMMMMEEEEd' | uppercase }}</p>

app/services
export class AppareilService {

}

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';

import { AppComponent } from './app.component';
import { MonPremierComponent } from './mon-premier/mon-premier.component';
import { AppareilComponent } from './appareil/appareil.component';
import { FormsModule } from '@angular/forms';
import { AppareilService } from './services/appareil.service';

@NgModule({
  declarations: [
    AppComponent,
    MonPremierComponent,
    AppareilComponent
  ],
  imports: [
    BrowserModule,
    FormsModule
  ],
  providers: [
    AppareilService
  ],
  bootstrap: [AppComponent]
})
export class AppModule { }
In a component, the service is injected through the constructor:

constructor(private appareilService: AppareilService) {
    setTimeout(
      () => {
        this.isAuth = true;
      }, 4000
    );
  }

export class AppareilService {
  appareils = [
    {
      name: 'Machine à laver',
      status: 'éteint'
    },
    {
      name: 'Frigo',
      status: 'allumé'
    },
    {
      name: 'Ordinateur',
      status: 'éteint'
    }
  ];
}

Routes in app.module.ts
import { NgModule } from '@angular/core';

import { AppComponent } from './app.component';
import { MonPremierComponent } from './mon-premier/mon-premier.component';
import { AppareilComponent } from './appareil/appareil.component';
import { FormsModule } from '@angular/forms';
import { AppareilService } from './services/appareil.service';
import { AuthComponent } from './auth/auth.component';
import { AppareilViewComponent } from './appareil-view/appareil-view.component';
import { Routes, RouterModule } from '@angular/router';

const appRoutes: Routes = [
  { path: 'appareils', component: AppareilViewComponent },
  { path: 'auth', component: AuthComponent },
  { path: '', component: AppareilViewComponent }
];

// Then register them in the module:
// imports: [ BrowserModule, FormsModule, RouterModule.forRoot(appRoutes) ]

<router-outlet></router-outlet>

import { Component, OnInit } from '@angular/core';
import { AuthService } from '../services/auth.service';

@Component({
  selector: 'app-auth',
  templateUrl: './auth.component.html',
  styleUrls: ['./auth.component.scss']
})
export class AuthComponent implements OnInit {

  authStatus: boolean;

  constructor(private authService: AuthService) { }

  ngOnInit() {
    this.authStatus = this.authService.isAuth;
  }

  onSignIn() {
    this.authService.signIn().then(
      () => {
        console.log('Sign in successful!');
        this.authStatus = this.authService.isAuth;
      }
    );
  }

  onSignOut() {
    this.authService.signOut();
    this.authStatus = this.authService.isAuth;
  }

}
<h2>Authentification</h2>
<button class="btn btn-success" *ngIf="!authStatus" (click)="onSignIn()">Se connecter</button>
<button class="btn btn-danger" *ngIf="authStatus" (click)="onSignOut()">Se déconnecter</button>

Posted by : admin | On : 17 October 2019

Reading

https://thierry-leriche-dessirier.developpez.com/tutoriels/java/vertx/discuter-via-event-bus/

Posted by : admin | On : 17 October 2019

Download the oc command-line tool from the Minishift console (top right corner).

This template will set up a complete CI/CD environment for you, with Gogs, Jenkins and SonarQube, backed by PostgreSQL.

I encountered startup issues on my desktop: a setup with at least 10 GB of RAM is required; then restart each process individually and it should work :)

oc process -f https://raw.githubusercontent.com/OpnShiftDemos/openshift-cd-demo/master/cicd-gogs-template.yaml | oc create -f -

oc process -f  https://raw.githubusercontent.com/siamaksade/openshift-cd-demo/ocp-4.1/cicd-template.yaml | oc create -f -

 

 

# Create Projects
oc new-project dev --display-name="Tasks - Dev"
oc new-project stage --display-name="Tasks - Stage"
oc new-project cicd --display-name="CI/CD"

# Grant Jenkins Access to Projects
oc policy add-role-to-group edit system:serviceaccounts:cicd -n dev
oc policy add-role-to-group edit system:serviceaccounts:cicd -n stage
# Deploy Demo
oc new-app -n cicd -f https://raw.githubusercontent.com/siamaksade/openshift-cd-demo/ocp-4.1/cicd-template.yaml
# the YAML below


 

 

 

apiVersion: v1
kind: Template
labels:
  template: cicd
  group: cicd
metadata:
  annotations:
    iconClass: icon-jenkins
    tags: instant-app,jenkins,gogs,nexus,cicd
  name: cicd
message: "Use the following credentials for login:\nJenkins: use your OpenShift credentials\nNexus: admin/admin123\nSonarQube: admin/admin\nGogs Git Server: gogs/gogs"
parameters:
- displayName: DEV project name
  value: dev
  name: DEV_PROJECT
  required: true
- displayName: STAGE project name
  value: stage
  name: STAGE_PROJECT
  required: true
- displayName: Ephemeral
  description: Use no persistent storage for Gogs and Nexus
  value: "true"
  name: EPHEMERAL
  required: true
- description: Webhook secret
  from: '[a-zA-Z0-9]{8}'
  generate: expression
  name: WEBHOOK_SECRET
  required: true
- displayName: Integrate Quay.io
  description: Integrate image build and deployment with Quay.io
  value: "false"
  name: ENABLE_QUAY
  required: true
- displayName: Quay.io Username
  description: Quay.io username to push the images to tasks-sample-app repository on your Quay.io account
  name: QUAY_USERNAME
- displayName: Quay.io Password
  description: Quay.io password to push the images to tasks-sample-app repository on your Quay.io account
  name: QUAY_PASSWORD
- displayName: Quay.io Image Repository
  description: Quay.io repository for pushing Tasks container images
  name: QUAY_REPOSITORY
  required: true
  value: tasks-app
objects:
- apiVersion: v1
  groupNames: null
  kind: RoleBinding
  metadata:
    name: default_admin
  roleRef:
    name: admin
  subjects:
  - kind: ServiceAccount
    name: default
# Pipeline
- apiVersion: v1
  kind: BuildConfig
  metadata:
    annotations:
      pipeline.alpha.openshift.io/uses: '[{"name": "jenkins", "namespace": "", "kind": "DeploymentConfig"}]'
    labels:
      app: cicd-pipeline
      name: cicd-pipeline
    name: tasks-pipeline
  spec:
    triggers:
      - type: GitHub
        github:
          secret: ${WEBHOOK_SECRET}
      - type: Generic
        generic:
          secret: ${WEBHOOK_SECRET}
    runPolicy: Serial
    source:
      type: None
    strategy:
      jenkinsPipelineStrategy:
        env:
        - name: DEV_PROJECT
          value: ${DEV_PROJECT}
        - name: STAGE_PROJECT
          value: ${STAGE_PROJECT}
        - name: ENABLE_QUAY
          value: ${ENABLE_QUAY}
        jenkinsfile: |-
          def mvnCmd = "mvn -s configuration/cicd-settings-nexus3.xml"

          pipeline {
            agent {
              label 'maven'
            }
            stages {
              stage('Build App') {
                steps {
                  git branch: 'eap-7', url: 'http://gogs:3000/gogs/openshift-tasks.git'
                  sh "${mvnCmd} install -DskipTests=true"
                }
              }
              stage('Test') {
                steps {
                  sh "${mvnCmd} test"
                  step([$class: 'JUnitResultArchiver', testResults: '**/target/surefire-reports/TEST-*.xml'])
                }
              }
              stage('Code Analysis') {
                steps {
                  script {
                    sh "${mvnCmd} sonar:sonar -Dsonar.host.url=http://sonarqube:9000 -DskipTests=true"
                  }
                }
              }
              stage('Archive App') {
                steps {
                  sh "${mvnCmd} deploy -DskipTests=true -P nexus3"
                }
              }
              stage('Build Image') {
                steps {
                  sh "cp target/openshift-tasks.war target/ROOT.war"
                  script {
                    openshift.withCluster() {
                      openshift.withProject(env.DEV_PROJECT) {
                        openshift.selector("bc", "tasks").startBuild("--from-file=target/ROOT.war", "--wait=true")
                      }
                    }
                  }
                }
              }
              stage('Deploy DEV') {
                steps {
                  script {
                    openshift.withCluster() {
                      openshift.withProject(env.DEV_PROJECT) {
                        openshift.selector("dc", "tasks").rollout().latest();
                      }
                    }
                  }
                }
              }
              stage('Promote to STAGE?') {
                agent {
                  label 'skopeo'
                }
                steps {
                  timeout(time:15, unit:'MINUTES') {
                      input message: "Promote to STAGE?", ok: "Promote"
                  }

                  script {
                    openshift.withCluster() {
                      if (env.ENABLE_QUAY.toBoolean()) {
                        withCredentials([usernamePassword(credentialsId: "${openshift.project()}-quay-cicd-secret", usernameVariable: "QUAY_USER", passwordVariable: "QUAY_PWD")]) {
                          sh "skopeo copy docker://quay.io/${QUAY_USERNAME}/${QUAY_REPOSITORY}:latest docker://quay.io/${QUAY_USERNAME}/${QUAY_REPOSITORY}:stage --src-creds \"$QUAY_USER:$QUAY_PWD\" --dest-creds \"$QUAY_USER:$QUAY_PWD\" --src-tls-verify=false --dest-tls-verify=false"
                        }
                      } else {
                        openshift.tag("${env.DEV_PROJECT}/tasks:latest", "${env.STAGE_PROJECT}/tasks:stage")
                      }
                    }
                  }
                }
              }
              stage('Deploy STAGE') {
                steps {
                  script {
                    openshift.withCluster() {
                      openshift.withProject(env.STAGE_PROJECT) {
                        openshift.selector("dc", "tasks").rollout().latest();
                      }
                    }
                  }
                }
              }
            }
          }
      type: JenkinsPipeline
- apiVersion: v1
  kind: ConfigMap
  metadata:
    labels:
      app: cicd-pipeline
      role: jenkins-slave
    name: jenkins-slaves
  data:
    maven-template: |-
      <org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
        <inheritFrom></inheritFrom>
        <name>maven</name>
        <privileged>false</privileged>
        <alwaysPullImage>false</alwaysPullImage>
        <instanceCap>2147483647</instanceCap>
        <idleMinutes>0</idleMinutes>
        <label>maven</label>
        <serviceAccount>jenkins</serviceAccount>
        <nodeSelector></nodeSelector>
        <customWorkspaceVolumeEnabled>false</customWorkspaceVolumeEnabled>
        <workspaceVolume class="org.csanchez.jenkins.plugins.kubernetes.volumes.workspace.EmptyDirWorkspaceVolume">
          <memory>false</memory>
        </workspaceVolume>
        <volumes />
        <containers>
          <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
            <name>jnlp</name>
            <image>openshift/jenkins-agent-maven-35-centos7</image>
            <privileged>false</privileged>
            <alwaysPullImage>false</alwaysPullImage>
            <workingDir>/tmp</workingDir>
            <command></command>
            <args>${computer.jnlpmac} ${computer.name}</args>
            <ttyEnabled>false</ttyEnabled>
            <resourceRequestCpu>200m</resourceRequestCpu>
            <resourceRequestMemory>512Mi</resourceRequestMemory>
            <resourceLimitCpu>2</resourceLimitCpu>
            <resourceLimitMemory>4Gi</resourceLimitMemory>
            <envVars/>
          </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
        </containers>
        <envVars/>
        <annotations/>
        <imagePullSecrets/>
      </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
    skopeo-template: |-
      <org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
        <inheritFrom></inheritFrom>
        <name>skopeo</name>
        <privileged>false</privileged>
        <alwaysPullImage>false</alwaysPullImage>
        <instanceCap>2147483647</instanceCap>
        <idleMinutes>0</idleMinutes>
        <label>skopeo</label>
        <serviceAccount>jenkins</serviceAccount>
        <nodeSelector></nodeSelector>
        <customWorkspaceVolumeEnabled>false</customWorkspaceVolumeEnabled>
        <workspaceVolume class="org.csanchez.jenkins.plugins.kubernetes.volumes.workspace.EmptyDirWorkspaceVolume">
          <memory>false</memory>
        </workspaceVolume>
        <volumes />
        <containers>
          <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
            <name>jnlp</name>
            <image>docker.io/siamaksade/jenkins-slave-skopeo-centos7</image>
            <privileged>false</privileged>
            <alwaysPullImage>false</alwaysPullImage>
            <workingDir>/tmp</workingDir>
            <command></command>
            <args>${computer.jnlpmac} ${computer.name}</args>
            <ttyEnabled>false</ttyEnabled>
            <envVars/>
          </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
        </containers>
        <envVars/>
        <annotations/>
        <imagePullSecrets/>
      </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
# Setup Demo
- apiVersion: batch/v1
  kind: Job
  metadata:
    name: cicd-demo-installer
  spec:
    activeDeadlineSeconds: 400
    completions: 1
    parallelism: 1
    template:
      spec:
        containers:
        - env:
          - name: CICD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          command:
          - /bin/bash
          - -x
          - -c
          - |
            # adjust jenkins
            oc set resources dc/jenkins --limits=cpu=2,memory=2Gi --requests=cpu=100m,memory=512Mi
            oc label dc jenkins app=jenkins --overwrite 

            # setup dev env
            oc import-image wildfly --from=openshift/wildfly-120-centos7 --confirm -n ${DEV_PROJECT} 

            if [ "${ENABLE_QUAY}" == "true" ] ; then
              # cicd
              oc create secret generic quay-cicd-secret --from-literal="username=${QUAY_USERNAME}" --from-literal="password=${QUAY_PASSWORD}" -n ${CICD_NAMESPACE}
              oc label secret quay-cicd-secret credential.sync.jenkins.openshift.io=true -n ${CICD_NAMESPACE}

              # dev
              oc create secret docker-registry quay-cicd-secret --docker-server=quay.io --docker-username="${QUAY_USERNAME}" --docker-password="${QUAY_PASSWORD}" --docker-email=cicd@redhat.com -n ${DEV_PROJECT}
              oc new-build --name=tasks --image-stream=wildfly:latest --binary=true --push-secret=quay-cicd-secret --to-docker --to='quay.io/${QUAY_USERNAME}/${QUAY_REPOSITORY}:latest' -n ${DEV_PROJECT}
              oc new-app --name=tasks --docker-image=quay.io/${QUAY_USERNAME}/${QUAY_REPOSITORY}:latest --allow-missing-images -n ${DEV_PROJECT}
              oc set triggers dc tasks --remove-all -n ${DEV_PROJECT}
              oc patch dc tasks -p '{"spec": {"template": {"spec": {"containers": [{"name": "tasks", "imagePullPolicy": "Always"}]}}}}' -n ${DEV_PROJECT}
              oc delete is tasks -n ${DEV_PROJECT}
              oc secrets link default quay-cicd-secret --for=pull -n ${DEV_PROJECT}

              # stage
              oc create secret docker-registry quay-cicd-secret --docker-server=quay.io --docker-username="${QUAY_USERNAME}" --docker-password="${QUAY_PASSWORD}" --docker-email=cicd@redhat.com -n ${STAGE_PROJECT}
              oc new-app --name=tasks --docker-image=quay.io/${QUAY_USERNAME}/${QUAY_REPOSITORY}:stage --allow-missing-images -n ${STAGE_PROJECT}
              oc set triggers dc tasks --remove-all -n ${STAGE_PROJECT}
              oc patch dc tasks -p '{"spec": {"template": {"spec": {"containers": [{"name": "tasks", "imagePullPolicy": "Always"}]}}}}' -n ${STAGE_PROJECT}
              oc delete is tasks -n ${STAGE_PROJECT}
              oc secrets link default quay-cicd-secret --for=pull -n ${STAGE_PROJECT}
            else
              # dev
              oc new-build --name=tasks --image-stream=wildfly:latest --binary=true -n ${DEV_PROJECT}
              oc new-app tasks:latest --allow-missing-images -n ${DEV_PROJECT}
              oc set triggers dc -l app=tasks --containers=tasks --from-image=tasks:latest --manual -n ${DEV_PROJECT}

              # stage
              oc new-app tasks:stage --allow-missing-images -n ${STAGE_PROJECT}
              oc set triggers dc -l app=tasks --containers=tasks --from-image=tasks:stage --manual -n ${STAGE_PROJECT}
            fi

            # dev project
            oc expose dc/tasks --port=8080 -n ${DEV_PROJECT}
            oc expose svc/tasks -n ${DEV_PROJECT}
            oc set probe dc/tasks --readiness --get-url=http://:8080/ws/demo/healthcheck --initial-delay-seconds=30 --failure-threshold=10 --period-seconds=10 -n ${DEV_PROJECT}
            oc set probe dc/tasks --liveness  --get-url=http://:8080/ws/demo/healthcheck --initial-delay-seconds=180 --failure-threshold=10 --period-seconds=10 -n ${DEV_PROJECT}
            oc rollout cancel dc/tasks -n ${STAGE_PROJECT}

            # stage project
            oc expose dc/tasks --port=8080 -n ${STAGE_PROJECT}
            oc expose svc/tasks -n ${STAGE_PROJECT}
            oc set probe dc/tasks --readiness --get-url=http://:8080/ws/demo/healthcheck --initial-delay-seconds=30 --failure-threshold=10 --period-seconds=10 -n ${STAGE_PROJECT}
            oc set probe dc/tasks --liveness  --get-url=http://:8080/ws/demo/healthcheck --initial-delay-seconds=180 --failure-threshold=10 --period-seconds=10 -n ${STAGE_PROJECT}
            oc rollout cancel dc/tasks -n ${DEV_PROJECT}

            # deploy gogs
            HOSTNAME=$(oc get route jenkins -o template --template='{{.spec.host}}' | sed "s/jenkins-${CICD_NAMESPACE}.//g")
            GOGS_HOSTNAME="gogs-$CICD_NAMESPACE.$HOSTNAME"

            if [ "${EPHEMERAL}" == "true" ] ; then
              oc new-app -f https://raw.githubusercontent.com/siamaksade/gogs-openshift-docker/master/openshift/gogs-template.yaml \
                  --param=GOGS_VERSION=0.11.34 \
                  --param=DATABASE_VERSION=9.6 \
                  --param=HOSTNAME=$GOGS_HOSTNAME \
                  --param=SKIP_TLS_VERIFY=true
            else
              oc new-app -f https://raw.githubusercontent.com/siamaksade/gogs-openshift-docker/master/openshift/gogs-persistent-template.yaml \
                  --param=GOGS_VERSION=0.11.34 \
                  --param=DATABASE_VERSION=9.6 \
                  --param=HOSTNAME=$GOGS_HOSTNAME \
                  --param=SKIP_TLS_VERIFY=true
            fi

            sleep 5

            if [ "${EPHEMERAL}" == "true" ] ; then
              oc new-app -f https://raw.githubusercontent.com/siamaksade/sonarqube/master/sonarqube-template.yml --param=SONARQUBE_MEMORY_LIMIT=2Gi
            else
              oc new-app -f https://raw.githubusercontent.com/siamaksade/sonarqube/master/sonarqube-persistent-template.yml --param=SONARQUBE_MEMORY_LIMIT=2Gi
            fi

            oc set resources dc/sonardb --limits=cpu=200m,memory=512Mi --requests=cpu=50m,memory=128Mi
            oc set resources dc/sonarqube --limits=cpu=1,memory=2Gi --requests=cpu=50m,memory=128Mi

            if [ "${EPHEMERAL}" == "true" ] ; then
              oc new-app -f https://raw.githubusercontent.com/OpenShiftDemos/nexus/master/nexus3-template.yaml --param=NEXUS_VERSION=3.13.0 --param=MAX_MEMORY=2Gi
            else
              oc new-app -f https://raw.githubusercontent.com/OpenShiftDemos/nexus/master/nexus3-persistent-template.yaml --param=NEXUS_VERSION=3.13.0 --param=MAX_MEMORY=2Gi
            fi

            oc set resources dc/nexus --requests=cpu=200m --limits=cpu=2

            GOGS_SVC=$(oc get svc gogs -o template --template='{{.spec.clusterIP}}')
            GOGS_USER=gogs
            GOGS_PWD=gogs

            oc rollout status dc gogs

            _RETURN=$(curl -o /tmp/curl.log -sL --post302 -w "%{http_code}" http://$GOGS_SVC:3000/user/sign_up \
              --form user_name=$GOGS_USER \
              --form password=$GOGS_PWD \
              --form retype=$GOGS_PWD \
              --form email=admin@gogs.com)

            sleep 5

            if [ $_RETURN != "200" ] && [ $_RETURN != "302" ] ; then
              echo "ERROR: Failed to create Gogs admin"
              cat /tmp/curl.log
              exit 255
            fi

            sleep 10

            cat <<EOF > /tmp/data.json
            {
              "clone_addr": "https://github.com/OpenShiftDemos/openshift-tasks.git",
              "uid": 1,
              "repo_name": "openshift-tasks"
            }
            EOF

            _RETURN=$(curl -o /tmp/curl.log -sL -w "%{http_code}" -H "Content-Type: application/json" \
            -u $GOGS_USER:$GOGS_PWD -X POST http://$GOGS_SVC:3000/api/v1/repos/migrate -d @/tmp/data.json)

            if [ $_RETURN != "201" ] ;then
              echo "ERROR: Failed to import openshift-tasks GitHub repo"
              cat /tmp/curl.log
              exit 255
            fi

            sleep 5

            cat <<EOF > /tmp/data.json
            {
              "type": "gogs",
              "config": {
                "url": "https://openshift.default.svc.cluster.local/apis/build.openshift.io/v1/namespaces/$CICD_NAMESPACE/buildconfigs/tasks-pipeline/webhooks/${WEBHOOK_SECRET}/generic",
                "content_type": "json"
              },
              "events": [
                "push"
              ],
              "active": true
            }
            EOF

            _RETURN=$(curl -o /tmp/curl.log -sL -w "%{http_code}" -H "Content-Type: application/json" \
            -u $GOGS_USER:$GOGS_PWD -X POST http://$GOGS_SVC:3000/api/v1/repos/gogs/openshift-tasks/hooks -d @/tmp/data.json)

            if [ $_RETURN != "201" ] ; then
              echo "ERROR: Failed to set webhook"
              cat /tmp/curl.log
              exit 255
            fi

            oc label dc sonarqube "app.kubernetes.io/part-of"="sonarqube" --overwrite
            oc label dc sonardb "app.kubernetes.io/part-of"="sonarqube" --overwrite
            oc label dc jenkins "app.kubernetes.io/part-of"="jenkins" --overwrite
            oc label dc nexus "app.kubernetes.io/part-of"="nexus" --overwrite
            oc label dc gogs "app.kubernetes.io/part-of"="gogs" --overwrite
            oc label dc gogs-postgresql "app.kubernetes.io/part-of"="gogs" --overwrite

          image: quay.io/openshift/origin-cli:v4.0
          name: cicd-demo-installer-job
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
        restartPolicy: Never

Posted by : admin | On : 16 October 2019

Installation on Windows 10

* Be able to run PowerShell as administrator

1 – Install Hyper-V in Windows 10. See Windows instructions.

2 – Add the user to the local Hyper-V Administrators group.

PS command :

 

([adsi]"WinNT://./Hyper-V Administrators,group").Add("WinNT://$env:UserDomain/$env:Username,user")

Or manually via Control Panel -> Administrative Tools.

 

 

 

 

– Add an External Virtual Switch. => (on the left => Virtual Switch Manager)
=> make sure to check the name of the ASSOCIATED network adapter
Identify first the net adapter to use.
PS command :
Get-NetAdapter

 

Then store which one you want to use for Minishift network access.
PS command :
$net = Get-NetAdapter -Name 'Wi-Fi'

At last create a virtual switch for Hyper-V :
New-VMSwitch -Name "External VM Switch" -AllowManagementOS $True -NetAdapterName $net.Name

I named it "External VM Switch" but the name doesn't matter.
From now on PowerShell is not required any longer. You can use either PowerShell or cmd.exe.

 

 

2) Starting Up Minishift

  • Tip: to start on Windows, apply the following command

minishift config set skip-check-hyperv-driver true

Minishift has to be aware of the external virtual switch to use.
You can set its name from a Minishift command argument or as an environment variable.

In theory that should be enough, but it will not be the case on Windows 10:
minishift.exe config set hyperv-virtual-switch "External VM Switch"

minishift start

 

* Output power shell

– Starting profile 'minishift'
– Check if deprecated options are used … OK
– Checking if https://github.com is reachable … OK
– Checking if requested OpenShift version 'v3.11.0' is valid … OK
– Checking if requested OpenShift version 'v3.11.0' is supported … OK
– Checking if requested hypervisor 'hyperv' is supported on this platform … OK
– Checking if Powershell is available … OK
– Checking if Hyper-V driver is installed … SKIP
– Checking if Hyper-V driver is configured to use a Virtual Switch … SKIP
– Checking if user is a member of the Hyper-V Administrators group … SKIP
– Checking the ISO URL … OK
– Downloading OpenShift binary 'oc' version 'v3.11.0'
53.59 MiB / 53.59 MiB [===================================================================================] 100.00% 0s
– Downloading OpenShift v3.11.0 checksums … OK
– Checking if provided oc flags are supported … OK
– Starting the OpenShift cluster using 'hyperv' hypervisor …
– Minishift VM will be configured with …
Memory:    4 GB
vCPUs :    2
Disk size: 20 GB

Downloading ISO 'https://github.com/minishift/minishift-centos-iso/releases/download/v1.15.0/minishift-centos7.iso'
355.00 MiB / 355.00 MiB [

 

This command will download the Minishift ISO, but on Windows 10 it will then fail when it tries to retrieve the downloaded ISO, because the path used by the startup script assumes a Linux filesystem layout.
After the failure, a simple workaround is to move the downloaded ISO wherever you like and point to it when you run Minishift's start command.
That gives:
minishift start --hyperv-virtual-switch "External VM Switch" --iso-url file://D:/minishift/minishift-centos7.iso
where iso-url refers to the file URI of the Minishift ISO.
The startup will take some time. At the end you should see something like:

 

3) Stop Minishift

Once you have finished using Minishift, in most cases you don't want to delete all the files created to start it.
You just want to release the memory/CPU resources allocated to Minishift, by stopping it:
minishift stop

4) Delete Minishift

A weird issue with Minishift, or the need to start from scratch: delete the files associated with the Minishift instance/cluster: minishift delete --force

5) Problems starting up Minishift?

 

Q: I get a broad error but I don't see the cause.
A: Increase the logging verbosity by adding these arguments to the minishift command: --show-libmachine-logs -v5

Q: I feel that I fixed what was wrong or missing, but the Minishift startup still fails.
A: Delete the startup files and restart Minishift from scratch.
This command will help:
minishift delete --force

6) Opening the Minishift console

Either use the URL provided in the logs, or use the console option, i.e. minishift console:

minishift-logging

Install ISTIO: next step!

 

Other sources: the RHEL official doc, and:

https://www.marksei.com/openshift-minishift-widnows/

http://myjavaadventures.com/blog/2018/11/15/installer-minishift/

https://blog.dbi-services.com/openshift-on-my-windows-10-laptop-with-minishift/

https://www.informatiweb.net/tutoriels/informatique/11-virtualisation/268–virtualbox-comment-utiliser-et-configurer-les-differents-modes-d-acces-reseau-d-une-machine-virtuelle–2.html#host-only-adapter

https://access.redhat.com/documentation/en-us/red_hat_container_development_kit/3.3/html/getting_started_guide/getting_started_with_container_development_kit

Posted by : admin | On : 20 August 2019

vimeo

https://vimeo.com/39796236

source code https://github.com/DoctusHartwald/ticket-monster

 

sample pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<!--
 Copyright 2016-2017 Red Hat, Inc, and individual contributors.

 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
 You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
-->
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>com.example</groupId>
  <artifactId>fruits</artifactId>
  <version>15-SNAPSHOT</version>

  <name>Simple Fruits Application</name>
  <description>Spring Boot - CRUD Booster</description>

  <properties>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>

    <maven.min.version>3.3.9</maven.min.version>
    <postgresql.version>9.4.1212</postgresql.version>
    <openjdk18-openshift.version>1.3</openjdk18-openshift.version>

    <spring-boot.version>2.1.3.RELEASE</spring-boot.version>
    <spring-boot-bom.version>2.1.3.Final-redhat-00001</spring-boot-bom.version>
    <maven-surefire-plugin.version>2.20</maven-surefire-plugin.version>
    <fabric8-maven-plugin.version>3.5.40</fabric8-maven-plugin.version>
    <fabric8.openshift.trimImageInContainerSpec>true</fabric8.openshift.trimImageInContainerSpec>
    <fabric8.skip.build.pom>true</fabric8.skip.build.pom>
    <fabric8.generator.from>
      registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift:${openjdk18-openshift.version}
    </fabric8.generator.from>
  </properties>

  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>me.snowdrop</groupId>
        <artifactId>spring-boot-bom</artifactId>
        <version>${spring-boot-bom.version}</version>
        <type>pom</type>
        <scope>import</scope>
      </dependency>
    </dependencies>
  </dependencyManagement>

  <dependencies>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter</artifactId>
    </dependency>

    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-web</artifactId>
    </dependency>

    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-data-jpa</artifactId>
    </dependency>

<!-- TODO: ADD Actuator dependency here -->

    <!-- Testing -->
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-test</artifactId>
      <scope>test</scope>
    </dependency>

<!-- /actuator/health 
management.endpoints.web.exposure.include=*
/actuator/metrics

/acutuator/metrics/[metric-name]
/actuator/beans
Jaeger and Tracing
-->
<dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
  <dependency>
          <groupId>org.postgresql</groupId>
          <artifactId>postgresql</artifactId>
          <version>${postgresql.version}</version>
          <scope>runtime</scope>
        </dependency>

  </dependencies>

  <build>
    <plugins>
      <plugin>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-maven-plugin</artifactId>
        <version>${spring-boot.version}</version>
        <configuration>
          <profiles>
            <profile>local</profile>
          </profiles>
          <classifier>exec</classifier>
        </configuration>
        <executions>
          <execution>
            <goals>
              <goal>repackage</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>

  <profiles>
    <profile>
      <id>local</id>
      <activation>
        <activeByDefault>true</activeByDefault>
      </activation>
      <dependencies>
        <dependency>
          <groupId>com.h2database</groupId>
          <artifactId>h2</artifactId>
          <scope>runtime</scope>
        </dependency>
      </dependencies>
    </profile>
    <profile>
      <id>openshift</id>
      <dependencies>
        <!-- TODO: ADD PostgreSQL database dependency here -->
      </dependencies>
      <build>
        <plugins>
          <plugin>
            <groupId>io.fabric8</groupId>
            <artifactId>fabric8-maven-plugin</artifactId>
            <version>${fabric8-maven-plugin.version}</version>
            <executions>
              <execution>
                <id>fmp</id>
                <goals>
                  <goal>resource</goal>
                  <goal>build</goal>
                </goals>
              </execution>
            </executions>
          </plugin>
        </plugins>
      </build>
    </profile>
  </profiles>
</project>

test class sample

import static org.junit.Assert.assertTrue;

import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public class ApplicationTest {

    @Autowired
    private FruitRepository fruitRepository;

    @Before
    public void beforeTest() {
    }

    @Test
    public void testGetAll() {
        assertTrue(fruitRepository.findAll().spliterator().getExactSizeIfKnown() == 3);
    }

    @Test
    public void getOne() {
        assertTrue(fruitRepository.findById(1).orElse(null) != null);
    }
}


Spring REST services
package com.example.service;

import java.util.List;
import java.util.Objects;
import java.util.Spliterator;
import java.util.stream.Collectors;
import java.util.stream.StreamSupport;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.MediaType;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.PutMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.bind.annotation.ResponseStatus;

@Controller
@RequestMapping(value = "/api/fruits")
public class FruitController {

    private final FruitRepository repository;

    @Autowired
    public FruitController(FruitRepository repository) {
        this.repository = repository;
    }

    @ResponseBody
    @GetMapping(produces = MediaType.APPLICATION_JSON_VALUE)
    public List<Fruit> getAll() {
        return StreamSupport
                .stream(repository.findAll().spliterator(), false)
                .collect(Collectors.toList());
    }
@ResponseBody
    @ResponseStatus(HttpStatus.CREATED)
    @PostMapping(consumes = MediaType.APPLICATION_JSON_VALUE, produces = MediaType.APPLICATION_JSON_VALUE)
    public Fruit post(@RequestBody(required = false) Fruit fruit) {
        verifyCorrectPayload(fruit);

        return repository.save(fruit);
    }

    @ResponseBody
    @GetMapping(value = "/{id}", produces = MediaType.APPLICATION_JSON_VALUE)
    public Fruit get(@PathVariable("id") Integer id) {
        verifyFruitExists(id);

        return repository.findById(id).orElse(null);
    }

    // Minimal stand-ins for the helper methods used above (hypothetical; the
    // original booster defines richer validation with dedicated exceptions)
    private void verifyFruitExists(Integer id) {
        if (!repository.existsById(id)) {
            throw new IllegalArgumentException(String.format("Fruit with id=%d was not found", id));
        }
    }

    private void verifyCorrectPayload(Fruit fruit) {
        if (Objects.isNull(fruit)) {
            throw new IllegalArgumentException("Fruit payload must be provided");
        }
    }
}
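The controller and the test above assume a Fruit JPA entity and a Spring Data FruitRepository that are not shown in the post; a minimal sketch could be (hypothetical, modeled on the booster's naming):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import org.springframework.data.repository.CrudRepository;

@Entity
public class Fruit {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Integer id;
    private String name;

    public Fruit() {
    }

    public Fruit(String name) {
        this.name = name;
    }

    public Integer getId() { return id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

interface FruitRepository extends CrudRepository<Fruit, Integer> { }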

fabric8 example

apiVersion: v1
kind: Deployment
metadata:
  name: ${project.artifactId}
spec:
  template:
    spec:
      containers:
        - env:
            - name: DB_USERNAME
              valueFrom:
                 secretKeyRef:
                   name: my-database-secret
                   key: user
            - name: DB_PASSWORD
              valueFrom:
                 secretKeyRef:
                   name: my-database-secret
                   key: password
            - name: JAVA_OPTIONS
              value: "-Dspring.profiles.active=openshift"
OpenShift properties
spring.datasource.url=jdbc:postgresql://${MY_DATABASE_SERVICE_HOST}:${MY_DATABASE_SERVICE_PORT}/my_data
spring.datasource.username=${DB_USERNAME}
spring.datasource.password=${DB_PASSWORD}
spring.datasource.driver-class-name=org.postgresql.Driver
spring.jpa.hibernate.ddl-auto=create

Posted by : admin | On : 6 May 2019

Extract of the ratings file used in the workshop below (MovieLens-style format: user, movie, rating, timestamp, tab-separated):

196	242	3	881250949
186	302	3	891717742
22	377	1	878887116
244	51	2	880606923
166	346	1	886397596
298	474	4	884182806
115	265	2	881171488
253	465	5	891628467
305	451	3	886324817
6	86	3	883603013
62	257	2	879372434
286	1014	5	879781125
200	222	5	876042340
210	40	3	891035994
224	29	3	888104457
303	785	3	879485318
122	387	5	879270459
194	274	2	879539794
291	1042	4	874834944
234	1184	2	892079237
119	392	4	886176814
167	486	4	892738452
299	144	4	877881320
291	118	2	874833878
308	1	4	887736532
95	546	2	879196566
38	95	5	892430094
102	768	2	883748450
63	277	4	875747401
160	234	5	876861185
50	246	3	877052329
301	98	4	882075827
225	193	4	879539727
290	88	4	880731963
97	194	3	884238860
157	274	4	886890835
181	1081	1	878962623
278	603	5	891295330
276	796	1	874791932
7	32	4	891350932
10	16	4	877888877
284	304	4	885329322
201	979	2	884114233
276	564	3	874791805
287	327	5	875333916
246	201	5	884921594
242	1137	5	879741196
249	241	5	879641194
99	4	5	886519097
178	332	3	882823437
251	100	4	886271884
81	432	2	876535131
260	322	4	890618898
25	181	5	885853415
59	196	5	888205088
72	679	2	880037164
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>org.test</groupId>
    <artifactId>initiation-spark-java</artifactId>
    <version>1.0.0-SNAPSHOT</version>

    <repositories>
        <repository>
            <id>Apache Spark temp - Release Candidate repo</id>
            <url>https://repository.apache.org/content/repositories/orgapachespark-1080/</url>
        </repository>
    </repositories>
    <dependencies>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>1.3.1</version>
            <!--<scope>provided</scope>--><!-- this part was omitted in our project so that we can launch it from Maven -->
        </dependency>

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.11</artifactId>
            <version>1.3.1</version>
            <!--<scope>provided</scope>-->
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <!-- we want JDK 1.8 source and binary compatibility -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.2</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
package spark;

import java.io.Serializable;
import java.time.LocalDateTime;

// Spark requires serializable structures!!
public class Rating implements Serializable {

    public int rating;
    public long movie;
    public long user;
    public LocalDateTime timestamp;

    public Rating(long user, long movie, int rating, LocalDateTime timestamp) {
        this.user = user;
        this.movie = movie;
        this.rating = rating;
        this.timestamp = timestamp;
    }
}
package spark;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

import java.net.URISyntaxException;
import java.nio.file.Paths;
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.util.Comparator;
import java.util.Scanner;

/**
 * Computes the mean, min, max and number of ratings of user #200.
 */
public class Workshop1 {

    public void run() throws URISyntaxException {
        SparkConf conf = new SparkConf().setAppName("Workshop").setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);
        String ratingsPath = Paths.get(getClass().getResource("/ratings.txt").getPath()).toString();

        JavaRDD<Rating> ratings = sc.textFile(ratingsPath)
                .map(line -> line.split("\\t"))
                .map(row -> new Rating(
                        Long.parseLong(row[0]),
                        Long.parseLong(row[1]),
                        Integer.parseInt(row[2]),
                        LocalDateTime.ofInstant(Instant.ofEpochSecond(Long.parseLong(row[3])), ZoneId.systemDefault()) // timestamps are epoch seconds; no *1000 needed

                ));

        double mean = ratings
                .filter(rating -> rating.user == 200)
                .mapToDouble(rating -> rating.rating)
                .mean();

        double max = ratings
                .filter(rating -> rating.user == 200)
                .mapToDouble(rating -> rating.rating)
                .max(Comparator.<Double>naturalOrder());

        double min = ratings
                .filter(rating -> rating.user == 200)
                .mapToDouble(rating -> rating.rating)
                .min(Comparator.<Double>naturalOrder());

        double count = ratings
                .filter(rating -> rating.user == 200)
                .count();

        System.out.println("mean: " + mean);
        System.out.println("max: " + max);
        System.out.println("min: " + min);
        System.out.println("count: " + count);
        // keep the JVM alive so the Spark web UI remains reachable
        Scanner s = new Scanner(System.in);
        s.hasNextLine();
    }

    public static void main(String... args) throws URISyntaxException {
        new Workshop1().run();
    }
}

 

 

Posted by : admin | On : 3 May 2019

https://javaetmoi.com/2015/04/initiation-apache-spark-en-java-devoxx/

https://github.com/arey/initiation-spark-java/blob/master/src/main/java/org/devoxx/spark/lab/devoxx2015/FirstRDD.java

Posted by : admin | On : 12 April 2018

 

https://www.youtube.com/watch?v=vq2nnJ4g6N0&t=6787s

https://www.youtube.com/watch?v=u4alGiomYP4

https://www.youtube.com/watch?v=fTUwdXUFfI8

https://github.com/random-forests/tutorials/blob/master/ep7.ipynb

Dataset :

http://archive.ics.uci.edu/ml/machine-learning-databases/undocumented/connectionist-bench/sonar/sonar.all-data

https://github.com/selva86/datasets/blob/master/Sonar.csv

 

 

support material

https://www.edureka.co/blog/perceptron-learning-algorithm/

https://github.com/tensorflow/tensorflow

https://www.edureka.co/blog/deep-learning-tutorial?utm_source=youtube&utm_campaign=deep-learning-180717-wr&utm_medium=description

Sample Sonar and mine

https://www.edureka.co/blog/perceptron-learning-algorithm/

* data

https://www.kaggle.com/mattcarter865/mines-vs-rocks/data


 

SOURCE  : Youtube link

https://www.youtube.com/watch?v=yX8KuPZCAMo

 

example 1

import tensorflow as tf

node1 = tf.constant(3.0, tf.float32)
node2 = tf.constant(4.0, tf.float32)
#print(node1, node2)

sess = tf.Session()
#print(sess.run([node1, node2]))
#sess.close()

with tf.Session() as sess:
    # write the graph so that TensorBoard can display it
    file_writer = tf.summary.FileWriter('C:\\Users\\aartaud\\pycharmproject\\Tensorflow\\graph', sess.graph)
    output = sess.run([node1, node2])
    print(output)

 

cd C:\Users\aartaud\pycharmproject\

tensorboard --logdir=Tensorflow

check localhost:6006

(this builds the computation graph only; no result is computed, it is just to understand the computation)

 

example 2

import tensorflow as tf

a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)

adder_node = a + b

sess = tf.Session()

print(sess.run(adder_node, {a: [1, 3], b: [2, 4]}))

example 3

import tensorflow as tf

# Model parameters
W = tf.Variable([.3], tf.float32)
b = tf.Variable([-.3], tf.float32)

# Inputs and outputs
x = tf.placeholder(tf.float32)

linear_model = W * x + b
y = tf.placeholder(tf.float32)

# Loss function
squared_delta = tf.square(linear_model - y)
loss = tf.reduce_sum(squared_delta)

# Optimizer
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)

init = tf.global_variables_initializer()

sess = tf.Session()
sess.run(init)

for i in range(1000):
    sess.run(train, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]})

print(sess.run([W, b]))
#print(sess.run(loss, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]}))
# build the model, compute the loss and train the model

example 4

 

 

import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split

# Read the dataset
def read_dataset():
    df = pd.read_csv("C:\\Users\\aartaud\\PycharmProjects\\Tensorflow\\sonar.csv")
    #print(len(df.columns))
    X = df[df.columns[0:60]].values
    y = df[df.columns[60]]

    # Encode the dependent variable
    encoder = LabelEncoder()
    encoder.fit(y)
    y = encoder.transform(y)
    Y = one_hot_encode(y)
    print(X.shape)
    return (X, Y)

# Define the one-hot encoder function
def one_hot_encode(labels):
    n_labels = len(labels)
    n_unique_labels = len(np.unique(labels))
    one_hot_encode = np.zeros((n_labels, n_unique_labels))
    one_hot_encode[np.arange(n_labels), labels] = 1
    return one_hot_encode

# Read the dataset
X, Y = read_dataset()

# Shuffle the dataset to mix up the rows
X, Y = shuffle(X, Y, random_state=1)

# Split the dataset into a train and a test part
train_x, test_x, train_y, test_y = train_test_split(X, Y, test_size=0.20, random_state=415)

# Inspect the shapes of the training and testing sets
print(train_x.shape)
print(train_y.shape)
print(test_x.shape)

# Define the important parameters and variables to work with the tensors
learning_rate = 0.3
training_epochs = 1000  # number of iterations to minimize the error
cost_history = np.empty(shape=[1], dtype=float)
n_dim = X.shape[1]
print("n_dim", n_dim)
n_class = 2  # mine and rock, so 2 classes
model_path = "C:\\Users\\aartaud\\PycharmProjects\\Tensorflow\\NMI"

# Define the number of hidden layers and the number of neurons for each layer
n_hidden_1 = 60
n_hidden_2 = 60
n_hidden_3 = 60
n_hidden_4 = 60

x = tf.placeholder(tf.float32, [None, n_dim])
W = tf.Variable(tf.zeros([n_dim, n_class]))
b = tf.Variable(tf.zeros([n_class]))
y_ = tf.placeholder(tf.float32, [None, n_class])

# Define the model
def multilayer_perceptron(x, weights, biases):
    # hidden layer with sigmoid activation function
    layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
    layer_1 = tf.nn.sigmoid(layer_1)

    # hidden layer with sigmoid activation
    layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
    layer_2 = tf.nn.sigmoid(layer_2)

    # hidden layer with sigmoid activation
    layer_3 = tf.add(tf.matmul(layer_2, weights['h3']), biases['b3'])
    layer_3 = tf.nn.sigmoid(layer_3)

    # hidden layer with RELU activation
    layer_4 = tf.add(tf.matmul(layer_3, weights['h4']), biases['b4'])
    layer_4 = tf.nn.relu(layer_4)

    # output layer with linear activation
    out_layer = tf.matmul(layer_4, weights['out']) + biases['out']
    return out_layer

# Define the weights and biases for each layer
weights = {
    'h1': tf.Variable(tf.truncated_normal([n_dim, n_hidden_1])),
    'h2': tf.Variable(tf.truncated_normal([n_hidden_1, n_hidden_2])),
    'h3': tf.Variable(tf.truncated_normal([n_hidden_2, n_hidden_3])),
    'h4': tf.Variable(tf.truncated_normal([n_hidden_3, n_hidden_4])),
    'out': tf.Variable(tf.truncated_normal([n_hidden_4, n_class]))
}

biases = {
    'b1': tf.Variable(tf.truncated_normal([n_hidden_1])),
    'b2': tf.Variable(tf.truncated_normal([n_hidden_2])),
    'b3': tf.Variable(tf.truncated_normal([n_hidden_3])),
    'b4': tf.Variable(tf.truncated_normal([n_hidden_4])),
    'out': tf.Variable(tf.truncated_normal([n_class]))
}

# Initialize all variables
init = tf.global_variables_initializer()

saver = tf.train.Saver()

# Call the model defined above
y = multilayer_perceptron(x, weights, biases)

# Define the cost function and the optimizer
cost_function = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y, labels=y_))
training_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost_function)

sess = tf.Session()
sess.run(init)

# Compute the cost and the accuracy for each epoch
mse_history = []
accuracy_history = []

for epoch in range(training_epochs):
    sess.run(training_step, feed_dict={x: train_x, y_: train_y})
    cost = sess.run(cost_function, feed_dict={x: train_x, y_: train_y})
    cost_history = np.append(cost_history, cost)
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    # difference between the correct output and the model's one
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    #print("Accuracy: ", sess.run(accuracy, feed_dict={x: test_x, y_: test_y}))

    pred_y = sess.run(y, feed_dict={x: test_x})
    mse = tf.reduce_mean(tf.square(pred_y - test_y))
    mse_ = sess.run(mse)
    mse_history.append(mse_)
    accuracy = sess.run(accuracy, feed_dict={x: train_x, y_: train_y})
    accuracy_history.append(accuracy)

    print('epoch:', epoch, '-', 'cost:', cost, '- MSE:', mse_, '- Train Accuracy:', accuracy)

save_path = saver.save(sess, model_path)
print("Model saved in file: %s" % save_path)

# Plot the MSE and accuracy graphs
plt.plot(mse_history, 'r')
plt.show()
plt.plot(accuracy_history)
plt.show()

# Print the final accuracy
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print("Test accuracy:", sess.run(accuracy, feed_dict={x: test_x, y_: test_y}))

# Print the final mean squared error
pred_y = sess.run(y, feed_dict={x: test_x})
mse = tf.reduce_mean(tf.square(pred_y - test_y))
print("MSE: %.4f" % sess.run(mse))

 

 

Restore the model

import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split
import random

### READ CSV function: reuse read_dataset() / one_hot_encode() and the model
### definition (placeholders x and y_, weights, biases, y) from the script above;
### read_dataset() is assumed here to also return the raw labels y1 ###

X, Y, y1 = read_dataset()
model_path = "C:\\Users\\aartaud\\PycharmProjects\\Tensorflow\\NMI"
learning_rate = 0.3
training_epoch = 1000
cost_history = np.empty(shape=[1], dtype=float)
n_dim = 60
n_class = 2

init = tf.global_variables_initializer()
saver = tf.train.Saver()
sess = tf.Session()
sess.run(init)
saver.restore(sess, model_path)  # HERE IT IS !!!

prediction = tf.argmax(y, 1)
correct_prediction = tf.equal(prediction, tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

#print(accuracy_run)
print('*************************************')
print("0 stands for M, i.e. a mine, & 1 stands for a rock")
print('*************************************')
for i in range(93, 101):
    prediction_run = sess.run(prediction, feed_dict={x: X[i].reshape(1, 60)})
    # the expected one-hot label is fed as well (assumed completion of the
    # truncated original line) to compute the per-sample accuracy
    accuracy_run = sess.run(accuracy, feed_dict={x: X[i].reshape(1, 60), y_: Y[i].reshape(1, 2)})
    print("original class:", y1[i], "predicted value:", prediction_run)

Posted by : admin | On : 29 November 2017

https://www.youtube.com/watch?v=miD9QwMKv48

https://github.com/apache/geode

How to create a SpringBoot Gemfire RESTful API
- Link: http://javasampleapproach.com/spring-…

Spring Data REST provides a mechanism for creating and retrieving objects from Gemfire storage.
In this article, JavaSampleApproach will show you how to create a Gemfire RESTful API.

Related Articles:
– How to start Embedded Gemfire Application with SpringBoot
Link: http://javasampleapproach.com/spring-…
– Infinispan Cache Solution | Spring Cache | Spring Boot
Link: http://javasampleapproach.com/spring-…

1. Technologies for SpringBoot Gemfire Restful Api

– Java 1.8
– Maven 3.3.9
– Spring Tool Suite – Version 3.8.1.RELEASE

2. Steps to do

– Create a Spring Boot project
– Create a Java model class (sketched below)
– Configure the Gemfire storage
– Create a Gemfire repository (sketched below)
– Run & check the result
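For the model class and repository steps, a minimal sketch might look like this (hypothetical names, assuming Spring Data GemFire and Spring Data REST are on the classpath; exact package names vary across versions):

import org.springframework.data.annotation.Id;
import org.springframework.data.gemfire.mapping.annotation.Region;
import org.springframework.data.repository.CrudRepository;
import org.springframework.data.rest.core.annotation.RepositoryRestResource;

// Model class stored in the "customers" Gemfire region
@Region("customers")
public class Customer {

    @Id
    private Long id;
    private String name;

    public Customer(Long id, String name) {
        this.id = id;
        this.name = name;
    }

    public Long getId() { return id; }
    public String getName() { return name; }
}

// Exposed automatically by Spring Data REST under /customers
@RepositoryRestResource(collectionResourceRel = "customers", path = "customers")
interface CustomerRepository extends CrudRepository<Customer, Long> { }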