Posted by : admin | On : 9 March 2010


 

In this article, we will walk through some of the capabilities provided by the Spring Batch framework.

 

Why use Spring Batch? Personally, when I sit down to write batch code, I get frustrated by the amount of code needed to do simple things: I/O, managing batches, handling error cases…

 

Spring Batch lets us use all the capabilities of Spring and provides a powerful tool with a few lines of XML configuration.

 

Source :

 

You can check out the source in the Eclipse IDE using SVN; here are the addresses of the source:

 

 

http://code.google.com/p/artaud/source/browse/#svn/springbatch

 

https://github.com/aartaud300/artaud

https://github.com/DoctusHartwald/artaud

In this article we’ll walk through these cases:

 

-Case 1 : Hello World
-Case 2 : Import CSV => Database
-Case 3 : Export Database => flat file
-Case 4 : Export Database => XML
-Case 5 : Couple Spring Batch with an Apache Velocity template
-Case 6 : Export our database to a Jasper Report in PDF

 

Technologies used:

 

I assume the reader has a good knowledge of basic Spring and Spring annotations.

 

Spring

  1. Spring Annotation
  2. SpringBatch
  3. Spring OXM
  4. Spring Test

 

Related frameworks:

  1. Apache Velocity
  2. XStream / StAX (for XML)
  3. JasperReports

 

Database : MySQL

 

ENVIRONMENT

 

On Windows:

 

JDK 5 or later
Apache Maven
MySQL or another database provider
An Internet connection

 

Eclipse

 

On Linux :

 

JDK 5 or later
Apache Maven
MySQL or another database provider:
sudo apt-get install apache2 mysql-server php5 php5-mysql phpmyadmin

 

Eclipse

 

APACHE MAVEN

In your ~/.bashrc add the following lines.

 

 

 

export JAVA_HOME=/usr/lib/jvm/java-6-sun/jre
export PATH=${PATH}:$JAVA_HOME/bin
export M2_HOME=/usr/local/apache-maven/apache-maven-2.2.1
export M2=$M2_HOME/bin
export MAVEN_OPTS="-Xms512m -Xmx1024m"
export PATH=${PATH}:$M2
export ROO_HOME=/home/<USER>/dev/spring-roo-1.0.2.RELEASE
export ROO_OPTS="-Droo.bright=true"

 

RUN THE PROJECT

See job.sh in the documentation/ folder; download the source code here.

Configuration of Maven2

First, let’s look at the pom.xml:

 

 

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
 <modelVersion>4.0.0</modelVersion>
 <groupId>org.batch</groupId>
 <artifactId>springbatch</artifactId>
 <name>SpringbatchDemo</name>
 <version>1.0</version>
 <packaging>jar</packaging>
 <description>Spring Batch sample project</description>
 <properties>
 <spring.version>2.5.6</spring.version>
 <!--<spring-batch.version>1.0.1.RELEASE</spring-batch.version>-->
 <spring-batch.version>2.1.0.RELEASE</spring-batch.version>
 <mysql-version>5.1.6</mysql-version>
 </properties>
 <dependencies>
 <!-- MySQL -->
 <dependency>
 <groupId>mysql</groupId>
 <artifactId>mysql-connector-java</artifactId>
 <version>${mysql-version}</version>
 </dependency>
 <!-- SPRING -->
 <dependency>
 <groupId>org.springframework</groupId>
 <artifactId>spring-jdbc</artifactId>
 <version>${spring.version}</version>
 </dependency>
 <dependency>
 <groupId>org.springframework.batch</groupId>
 <artifactId>spring-batch-core</artifactId>
 <version>${spring-batch.version}</version>
 </dependency>
 <dependency>
 <groupId>org.springframework</groupId>
 <artifactId>spring-aop</artifactId>
 <version>${spring.version}</version>
 </dependency>
 <dependency>
 <groupId>org.springframework</groupId>
 <artifactId>spring-core</artifactId>
 <version>${spring.version}</version>
 </dependency>
 <dependency>
 <groupId>org.springframework</groupId>
 <artifactId>spring-beans</artifactId>
 <version>${spring.version}</version>
 </dependency>
 <dependency>
 <groupId>org.springframework</groupId>
 <artifactId>spring-context</artifactId>
 <version>${spring.version}</version>
 </dependency>
 <dependency>
 <groupId>org.springframework</groupId>
 <artifactId>spring-aspects</artifactId>
 <version>${spring.version}</version>
 </dependency>
 <dependency>
 <groupId>org.springframework</groupId>
 <artifactId>spring</artifactId>
 <version>${spring.version}</version>
 </dependency>
 <dependency>
 <groupId>org.springframework</groupId>
 <artifactId>spring-jms</artifactId>
 <version>${spring.version}</version>
 </dependency>
 <dependency>
 <groupId>org.springframework.ws</groupId>
 <artifactId>spring-oxm</artifactId>
 <version>1.5.5</version>
 <optional>true</optional>
 </dependency>
 <dependency>
 <groupId>org.springframework.osgi</groupId>
 <artifactId>spring-osgi-core</artifactId>
 <version>1.1.2</version>
 <optional>true</optional>
 </dependency>
 <!-- Log  -->
 <dependency>
 <groupId>commons-logging</groupId>
 <artifactId>commons-logging</artifactId>
 <version>1.1.1</version>
 </dependency>
 <dependency>
 <groupId>log4j</groupId>
 <artifactId>log4j</artifactId>
 <version>1.2.14</version>
 <optional>true</optional>
 </dependency>
 <!-- Junit -->
 <dependency>
 <groupId>junit</groupId>
 <artifactId>junit</artifactId>
 <version>4.4</version>
 </dependency>
 <dependency>
 <groupId>org.springframework</groupId>
 <artifactId>org.springframework.test</artifactId>
 <version>2.5.6</version>
 <scope>test</scope>
 </dependency>
 <!-- Jasper report -->
 <dependency>
 <groupId>jasperreports</groupId>
 <artifactId>jasperreports</artifactId>
 <version>3.1.2</version>
 </dependency>
 <!-- XML STAX -->
 <dependency>
 <groupId>stax</groupId>
 <artifactId>stax</artifactId>
 <version>1.2.0</version>
 </dependency>
 <dependency>
 <groupId>org.springframework.batch</groupId>
 <artifactId>spring-batch-test</artifactId>
 <version>${spring-batch.version}</version>
 </dependency>
 <dependency>
 <groupId>commons-lang</groupId>
 <artifactId>commons-lang</artifactId>
 <version>2.4</version>
 </dependency>
 <!-- Velocity -->
 <dependency>
 <groupId>org.apache.velocity</groupId>
 <artifactId>velocity</artifactId>
 <version>1.6.3</version>
 </dependency>
 <dependency>
 <groupId>org.apache.velocity</groupId>
 <artifactId>velocity-tools</artifactId>
 <version>1.3</version>
 </dependency>
 </dependencies>
 <distributionManagement>
 <repository>
 <id>spring-release</id>
 <name>Spring Release Repository</name>
 <url>s3://maven.springframework.org/release</url>
 </repository>
 <snapshotRepository>
 <id>spring-snapshot</id>
 <name>Spring Snapshot Repository</name>
 <url>s3://maven.springframework.org/snapshot</url>
 </snapshotRepository>
 </distributionManagement>
 <repositories>
 <repository>
 <id>objectstyle</id>
 <name>ObjectStyle.org Repository</name>
 <url>http://objectstyle.org/maven2</url>
 <snapshots>
 <enabled>false</enabled>
 </snapshots>
 </repository>
 <repository>
 <id>com.springsource.repository.bundles.release</id>
 <name>EBR Spring Release Repository</name>
 <url>http://repository.springsource.com/maven/bundles/release</url>
 </repository>
 <repository>
 <id>com.springsource.repository.bundles.external</id>
 <name>EBR External Release Repository</name>
 <url>http://repository.springsource.com/maven/bundles/external</url>
 </repository>
 <repository>
 <id>codehaus</id>
 <name>Maven Codehaus repository</name>
 <url>http://repository.codehaus.org/</url>
 </repository>
 </repositories>
 <pluginRepositories>
 <pluginRepository>
 <id>com.springsource.repository.bundles.milestone</id>
 <name>SpringSource Enterprise Bundle Repository - SpringSource Bundle Milestones</name>
 <url>http://repository.springsource.com/maven/bundles/milestone</url>
 <snapshots>
 <enabled>false</enabled>
 </snapshots>
 </pluginRepository>
 <pluginRepository>
 <id>com.springsource.repository.bundles.snapshot</id>
 <name>SpringSource Enterprise Bundle Repository - SpringSource Bundle Snapshots</name>
 <url>http://repository.springsource.com/maven/bundles/snapshot</url>
 </pluginRepository>
 </pluginRepositories>
 <ciManagement>
 <system>Hudson</system>
 </ciManagement>

 <build>
 <plugins>
 <plugin>
 <groupId>org.apache.maven.plugins</groupId>
 <artifactId>maven-compiler-plugin</artifactId>
 <configuration>
 <source>1.5</source>
 <target>1.5</target>
 </configuration>
 </plugin>
 <plugin>
 <artifactId>maven-surefire-plugin</artifactId>
 <version>2.4.3</version>
 <configuration>
 </configuration>
 </plugin>
 <!--
 Uncomment this block if you want to run it with
 -Dexec.mainClass="org.springframework.batch.core.launch.support.CommandLineJobRunner"
 -Dexec.args="com/batch/simpletask/simpletaskletcontext.xml
 simpleJob"
 -->
 <plugin>
 <groupId>org.codehaus.mojo</groupId>
 <artifactId>exec-maven-plugin</artifactId>
 <executions>
 <execution>
 <id>glue-processing</id>
 <phase>install</phase>
 <goals>
 <goal>exec</goal>
 </goals>
 </execution>
 </executions>
 <configuration>
 <debug>true</debug>
 <executable>java</executable>
 <mainClass>app_example2.Example2</mainClass>
 <!--
 <mainClass>org.springframework.batch.core.launch.support.CommandLineJobRunner</mainClass>
 <arguments>
 <argument>com/batch/simpletask/simpletaskletcontext.xml</argument>
 <argument>simpleJob</argument> </arguments>
 -->
 </configuration>
 </plugin>
 </plugins>
 </build>
 </project>

 

 

We’ll use Spring annotations coupled with standard Spring to build our sample project.

Create a package com.batch under src/main/java.

These two classes, AppJobExecutionListener and ItemFailureLoggerListener, let us log the job execution.

 

 
 package com.batch;
 import org.apache.log4j.Logger;
 import org.springframework.batch.core.BatchStatus;
 import org.springframework.batch.core.JobExecution;
 import org.springframework.batch.core.JobExecutionListener;
 import org.springframework.stereotype.Component;
 @Component("appJobExecutionListener")
 public class AppJobExecutionListener implements JobExecutionListener {
 private final static Logger logger = Logger
 .getLogger(AppJobExecutionListener.class);
 public void afterJob(JobExecution jobExecution) {
 if (jobExecution.getStatus() == BatchStatus.COMPLETED) {
 logger.info("Job completed: " + jobExecution.getJobId());
 } else if (jobExecution.getStatus() == BatchStatus.FAILED) {
 logger.info("Job failed: " + jobExecution.getJobId());
 }
 }
 public void beforeJob(JobExecution jobExecution) {
 // the job has not run yet at this point, so log the start rather than the status
 logger.info("Job starting: " + jobExecution.getJobId());
 }
 }
 package com.batch;
 import org.apache.log4j.Logger;
 import org.springframework.batch.core.listener.ItemListenerSupport;
 import org.springframework.stereotype.Component;
 @Component("itemFailureLoggerListener")
 public class ItemFailureLoggerListener extends ItemListenerSupport {
 private final static Logger logger = Logger
 .getLogger(ItemFailureLoggerListener.class);
 public void onReadError(Exception ex) {
 logger.error("Encountered error on read", ex);
 }
 public void onWriteError(Exception ex, Object item) {
 logger.error("Encountered error on write", ex);
 }
 }

 

 

First example in Spring Batch

Hello World in Spring Batch

Create a package named com.batch.simpletask under src/main/java.

 

 

package com.batch.simpletask;
 import org.springframework.batch.core.StepContribution;
 import org.springframework.batch.core.scope.context.ChunkContext;
 import org.springframework.batch.core.step.tasklet.Tasklet;
 import org.springframework.batch.repeat.RepeatStatus;
 public class HelloTask implements Tasklet {
 private String taskStartMessage;
 public void setTaskStartMessage(String taskStartMessage) {
 this.taskStartMessage = taskStartMessage;
 }
 public RepeatStatus execute(StepContribution arg0, ChunkContext arg1)
 throws Exception {
 System.out.println(taskStartMessage);
 return RepeatStatus.FINISHED;
 }
 }

 

 

package com.batch.simpletask;
 import java.util.Calendar;
 import org.springframework.batch.core.StepContribution;
 import org.springframework.batch.core.scope.context.ChunkContext;
 import org.springframework.batch.core.step.tasklet.Tasklet;
 import org.springframework.batch.repeat.RepeatStatus;
 public class TimeTask implements Tasklet {
 public RepeatStatus execute(StepContribution arg0, ChunkContext arg1)
 throws Exception {
 System.out.println(Calendar.getInstance().getTime());
 return RepeatStatus.FINISHED;
 }
 }

 

 

Spring Configuration

 

 
 <?xml version="1.0" encoding="UTF-8"?>
 <beans xmlns="http://www.springframework.org/schema/beans"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:aop="http://www.springframework.org/schema/aop"
 xmlns:tx="http://www.springframework.org/schema/tx" xmlns:batch="http://www.springframework.org/schema/batch"
 xmlns:context="http://www.springframework.org/schema/context"
 xsi:schemaLocation="
 http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
 http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.0.xsd
 http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.0.xsd
 http://www.springframework.org/schema/batch http://www.springframework.org/schema/batch/spring-batch-2.0.xsd
 http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-2.5.xsd">
 <!-- 1) USE ANNOTATIONS TO IDENTIFY AND WIRE SPRING BEANS. -->
 <!-- We tell Spring to scan for @Component annotations under the com.batch package -->
 <context:component-scan base-package="com.batch" />
 <bean id="transactionManager"
 class="org.springframework.batch.support.transaction.ResourcelessTransactionManager">
 </bean>
 <!-- 3) JOB REPOSITORY - WE USE IN-MEMORY REPOSITORY FOR OUR EXAMPLE -->
 <bean id="jobRepository"
 class="org.springframework.batch.core.repository.support.MapJobRepositoryFactoryBean">
 <property name="transactionManager" ref="transactionManager" />
 </bean>
 <!-- 4) LAUNCH JOBS FROM A REPOSITORY -->
 <bean id="jobLauncher"
 class="org.springframework.batch.core.launch.support.SimpleJobLauncher">
 <property name="jobRepository" ref="jobRepository" />
 </bean>
 <!-- 5) Beans representing the 2 job steps. -->
 <!-- Step1 - print hello world -->
 <bean id="helloTask" class="com.batch.simpletask.HelloTask">
 <property name="taskStartMessage" value="Hello World - the time is now " />
 </bean>
 <!-- Step2 - print current time -->
 <bean id="timeTask" class="com.batch.simpletask.TimeTask" />
 <!-- 6) FINALLY OUR JOB DEFINITION. THIS IS A 2 STEP JOB -->
 <batch:job id="simpleJob">
 <batch:listeners>
 <batch:listener ref="appJobExecutionListener" />
 </batch:listeners>
 <batch:step id="step1" next="step2">
 <batch:tasklet ref="helloTask" />
 </batch:step>
 <batch:step id="step2">
 <batch:tasklet ref="timeTask" />
 </batch:step>
 </batch:job>
 </beans>

 

 

Test our Case 1:

 

 package com.batch.simpletask;
 import org.apache.log4j.Logger;
 import org.apache.log4j.PropertyConfigurator;
 import org.junit.Before;
 import org.junit.Test;
 import org.junit.runner.RunWith;
 import org.springframework.batch.core.Job;
 import org.springframework.batch.core.JobParameters;
 import org.springframework.batch.core.launch.JobLauncher;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.test.context.ContextConfiguration;
 import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
 import org.springframework.util.StopWatch;
 @ContextConfiguration(locations = "classpath*:com/batch/simpletask/simpletaskletcontext.xml")
 @RunWith(SpringJUnit4ClassRunner.class)
 public class SimpleTaskletTestCase {
 private final static Logger logger = Logger
 .getLogger(SimpleTaskletTestCase.class);
 @Autowired
 private JobLauncher launcher;
 @Autowired
 private Job job;
 private JobParameters jobParameters = new JobParameters();
 @Before
 public void setup() {
 //PropertyConfigurator.configure("C:/Users/admin/workspace-test/springbatch2/src/com/batch/log4j.properties");
 }
 @Test
 public void testLaunchJob() throws Exception {
 StopWatch sw = new StopWatch();
 sw.start();
 launcher.run(job, jobParameters);
 sw.stop();
 logger.info(">>> TIME ELAPSED:" + sw.prettyPrint());
 }
 @Autowired
 public void setLauncher(JobLauncher bootstrap) {
 this.launcher = bootstrap;
 }
 @Autowired
 public void setJob(Job job) {
 this.job = job;
 }
 }

 

 

Install a LAMP server

  • on Linux :

 

sudo apt-get install apache2 mysql-server php5 php5-mysql phpmyadmin

 

Check that the processes are running:

 

 

sudo /etc/init.d/apache2 start
 sudo /etc/init.d/mysql restart

 

  • on Windows

Download EasyPHP or XAMPP and launch the application.

Connect to MySQL

Connect to MySQL either from the command prompt or via phpMyAdmin: http://localhost/phpmyadmin

Create a new user

  • via phpmyadmin

Enter login: test / password: test and tick the right privileges on the database, as in the following screenshot.

  • via the prompt:

 

sudo mysql -u root -p

 

CREATE USER 'test'@'%' IDENTIFIED BY '***';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, FILE, INDEX, ALTER,
  CREATE TEMPORARY TABLES, CREATE VIEW, EVENT, TRIGGER, SHOW VIEW,
  CREATE ROUTINE, ALTER ROUTINE, EXECUTE
  ON *.* TO 'test'@'%' IDENTIFIED BY '***'
  WITH MAX_QUERIES_PER_HOUR 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0 MAX_USER_CONNECTIONS 0;

 

 

Create a Database

  • create a new database named seamdb:
 CREATE DATABASE `seamdb`;

Create a new Table

[sql]

create table ledger (
  ID INT NOT NULL AUTO_INCREMENT,
  rcv_dt date,
  mbr_nm VARCHAR(100) not null,
  chk_nbr VARCHAR(10) not null,
  chk_dt date,
  pymt_typ VARCHAR(50) not null,
  dpst_amt double,
  pymt_amt double,
  comments VARCHAR(100),
  PRIMARY KEY (ID)
);

[/sql]

Example 2: CSV to Database

Create a new package com.batch.todb in src/main/java.

Create a model bean called Ledger:

 

package com.batch.todb;

import java.util.Date;

public class Ledger {

    private int id;
    private Date receiptDate;
    private String memberName;
    private String checkNumber;
    private Date checkDate;
    private String paymentType;
    private double depositAmount;
    private double paymentAmount;
    private String comments;

    public int getId() { return id; }
    public void setId(int id) { this.id = id; }

    public Date getReceiptDate() { return receiptDate; }
    public void setReceiptDate(Date receiptDate) { this.receiptDate = receiptDate; }

    public String getMemberName() { return memberName; }
    public void setMemberName(String memberName) { this.memberName = memberName; }

    public String getCheckNumber() { return checkNumber; }
    public void setCheckNumber(String checkNumber) { this.checkNumber = checkNumber; }

    public Date getCheckDate() { return checkDate; }
    public void setCheckDate(Date checkDate) { this.checkDate = checkDate; }

    public String getPaymentType() { return paymentType; }
    public void setPaymentType(String paymentType) { this.paymentType = paymentType; }

    public double getDepositAmount() { return depositAmount; }
    public void setDepositAmount(double depositAmount) { this.depositAmount = depositAmount; }

    public double getPaymentAmount() { return paymentAmount; }
    public void setPaymentAmount(double paymentAmount) { this.paymentAmount = paymentAmount; }

    public String getComments() { return comments; }
    public void setComments(String comments) { this.comments = comments; }
}

package com.batch.todb;

public interface LedgerDAO {

    public void save(final Ledger note);

}

package com.batch.todb;

import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.PreparedStatementSetter;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Component
public class LedgerDAOImpl extends JdbcTemplate implements LedgerDAO {

    @Autowired
    public void setDataSource(DataSource dataSource) {
        super.setDataSource(dataSource);
    }

    @Transactional(propagation = Propagation.REQUIRED)
    public void save(final Ledger item) {
        super.update(
            "insert into ledger (rcv_dt, mbr_nm, chk_nbr, chk_dt, pymt_typ, dpst_amt, pymt_amt, comments) values(?,?,?,?,?,?,?,?)",
            new PreparedStatementSetter() {
                public void setValues(PreparedStatement stmt) throws SQLException {
                    stmt.setDate(1, new java.sql.Date(item.getReceiptDate().getTime()));
                    stmt.setString(2, item.getMemberName());
                    stmt.setString(3, item.getCheckNumber());
                    stmt.setDate(4, new java.sql.Date(item.getCheckDate().getTime()));
                    stmt.setString(5, item.getPaymentType());
                    stmt.setDouble(6, item.getDepositAmount());
                    stmt.setDouble(7, item.getPaymentAmount());
                    stmt.setString(8, item.getComments());
                }
            });
    }
}

package com.batch.todb;

import java.text.DecimalFormat;
import java.text.ParseException;
import org.springframework.batch.item.file.mapping.FieldSetMapper;
import org.springframework.batch.item.file.transform.FieldSet;
import org.springframework.stereotype.Component;

@Component("ledgerMapper")
public class LedgerMapper implements FieldSetMapper {

    // month/day/year; note "MM/dd/yy" -- the original "mm/DD/yy" would read minutes and day-of-year
    private final static String DATE_PATTERN = "MM/dd/yy";
    private final static String DOLLAR_PATTERN = "$###,###.###";

    public Object mapFieldSet(FieldSet fs) {
        Ledger item = new Ledger();
        int idx = 0;
        item.setReceiptDate(fs.readDate(idx++, DATE_PATTERN));
        item.setMemberName(fs.readString(idx++));
        item.setCheckNumber(fs.readString(idx++));
        item.setCheckDate(fs.readDate(idx++, DATE_PATTERN));
        item.setPaymentType(fs.readString(idx++));
        // deposit amount
        try {
            DecimalFormat fmttr = new DecimalFormat(DOLLAR_PATTERN);
            Number number = fmttr.parse(fs.readString(idx++));
            item.setDepositAmount(number.doubleValue());
        } catch (ParseException e) {
            item.setDepositAmount(0);
        }
        // payment amount
        try {
            DecimalFormat fmttr = new DecimalFormat(DOLLAR_PATTERN);
            Number number = fmttr.parse(fs.readString(idx++));
            item.setPaymentAmount(number.doubleValue());
        } catch (ParseException e) {
            item.setPaymentAmount(0);
        }
        return item;
    }
}
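Taken in isolation, the JDK parsing that LedgerMapper relies on can be checked with plain Java. Note the explicit US symbols: DecimalFormat's grouping and decimal separators are locale-dependent, so on a French locale "$1,234.50" would not parse as intended. This snippet is only an illustration, not part of the project source:

```java
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;

public class LedgerParsingDemo {

    // same dollar pattern as LedgerMapper, pinned to US separators
    static double parseDollar(String raw) {
        DecimalFormat fmt = new DecimalFormat("$###,###.###",
                DecimalFormatSymbols.getInstance(Locale.US));
        try {
            return fmt.parse(raw).doubleValue();
        } catch (ParseException e) {
            return 0; // mirror the mapper's fallback of 0
        }
    }

    // month/day/year, e.g. 01/15/10 -- "MM/dd/yy", not "mm/DD/yy"
    static Date parseDate(String raw) throws ParseException {
        return new SimpleDateFormat("MM/dd/yy").parse(raw);
    }

    public static void main(String[] args) throws ParseException {
        System.out.println(parseDollar("$1,234.50")); // 1234.5
        System.out.println(parseDate("01/15/10"));
    }
}
```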

package com.batch.todb;

import java.util.Iterator;
import java.util.List;
import org.springframework.batch.item.ItemWriter;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component("itemWriter")
public class LedgerWriter implements ItemWriter {

    @Autowired
    private LedgerDAO itemDAO;

    public void write(List items) throws Exception {
        for (Iterator<Ledger> iterator = items.iterator(); iterator.hasNext();) {
            Ledger item = iterator.next();
            itemDAO.save(item);
        }
    }
}
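For reference, here is a hypothetical input file matching the column order LedgerMapper reads (dates as MM/dd/yy; the dollar amounts are quoted because they contain the delimiter). The actual file name and reader configuration live in contextToDB.xml, which is in the project source:

```csv
01/15/10,John Doe,1001,01/14/10,CHECK,"$1,250.00",$0.00
02/03/10,Jane Smith,1002,02/01/10,CHECK,$0.00,"$340.25"
```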

Launch the JUnit test

In src/test/java create this test class.

We’ll launch our test using the Spring Test framework with these annotations:

[java]

@ContextConfiguration(locations = "classpath:com/batch/todb/contextToDB.xml")
@RunWith(SpringJUnit4ClassRunner.class)
@TransactionConfiguration(transactionManager = "transactionManager", defaultRollback = false)

[/java]

[java]

package com.batch.todb;

import org.apache.log4j.Logger;
import org.apache.log4j.PropertyConfigurator;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.AbstractTransactionalJUnit4SpringContextTests;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.transaction.TransactionConfiguration;
import org.springframework.util.StopWatch;

@ContextConfiguration(locations = "classpath:com/batch/todb/contextToDB.xml")
@RunWith(SpringJUnit4ClassRunner.class)
@TransactionConfiguration(transactionManager = "transactionManager", defaultRollback = false)
public class ToDBBatchTestCase extends AbstractTransactionalJUnit4SpringContextTests {

    private final static Logger logger = Logger.getLogger(ToDBBatchTestCase.class);

    @Autowired
    private JobLauncher launcher;

    @Autowired
    private Job job;

    private JobParameters jobParameters = new JobParameters();

    @Before
    public void setup() {
        //PropertyConfigurator.configure("C:/Users/admin/workspace-test/springbatch2/src/com/batch/log4j.properties");
    }

    @Test
    public void testLaunchJob() throws Exception {
        StopWatch sw = new StopWatch();
        sw.start();
        launcher.run(job, jobParameters);
        sw.stop();
        logger.info(">>> TIME ELAPSED:" + sw.shortSummary());
    }

    @Autowired
    public void setLauncher(JobLauncher bootstrap) {
        this.launcher = bootstrap;
    }

    @Autowired
    public void setJob(Job job) {
        this.job = job;
    }
}

[/java]

Case 3 : Export Database => flat file

Case 4 : Export Database => XML

Case 5 : Couple Spring Batch and  Velocity template.

The sources are located in com/batch/velocity.

The Apache Velocity framework comes from the Jakarta project and is a powerful tool for generating SQL code, Java classes, C++, whatever you need.

Aim:

In this case study I propose to look at how, with Spring Batch, we can couple the two frameworks together.

We could also use different implementations of ItemProcessor<T,V> between the ItemReader<T> and ItemWriter<T> provided by Spring Batch, but that is not the purpose of this case study.

The purpose is to respond quickly to our need: generate a bunch of files according to a pattern file.

For instance, you may want to generate "a large and complex stub class" or "the same SQL code a number of times".

The problem is that in a project time is precious, so you don’t want to waste it on long and repetitive tasks. I think you get what I mean ;)

In this sample I start from a CSV file and want to replace two variables, $name and $project, using this pattern file (the .vm file in Apache Velocity).

The raw input I have:

-Input :

Apache Velocity,Jakarta

Spring batch,Spring source

Spring Roo,Spring Source

-Output expected by the client :

Hello from $name in the $project project.
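Velocity aside, the substitution the job performs on each CSV row amounts to the following plain-JDK sketch (an illustration only; in the project the work is done by the Velocity engine and the .vm template):

```java
public class TemplateSubstitutionDemo {

    // the pattern from the .vm file
    static final String TEMPLATE = "Hello from $name in the $project project.";

    // replace the two variables the way Velocity would for one CSV row
    static String render(String name, String project) {
        return TEMPLATE.replace("$name", name).replace("$project", project);
    }

    public static void main(String[] args) {
        // first input row: "Apache Velocity,Jakarta"
        System.out.println(render("Apache Velocity", "Jakarta"));
        // prints: Hello from Apache Velocity in the Jakarta project.
    }
}
```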

Spring configuration

What we need is to configure a velocityEngine and an ItemReader to read our CSV.

Configuration of the Velocity engine:

[xml]

<bean id="velocityEngine"
 class="org.springframework.ui.velocity.VelocityEngineFactoryBean">
 <property name="velocityProperties">
 <value>
 resource.loader=class
 class.resource.loader.class=org.apache.velocity.runtime.resource.loader.ClasspathResourceLoader
 </value>
 </property>
</bean>

[/xml]

Configuration of the ItemReader

[xml]

<bean id="itemReader" class="org.springframework.batch.item.file.FlatFileItemReader">
 <property name="resource" value="classpath:com/batch/velocity/project.txt" />
 <!-- <property name="linesToSkip" value="1" /> -->
 <property name="lineMapper">
 <bean class="org.springframework.batch.item.file.mapping.DefaultLineMapper">
 <property name="lineTokenizer">
 <bean class="org.springframework.batch.item.file.transform.DelimitedLineTokenizer">
 <property name="names" value="name,project" />
 </bean>
 </property>
 <property name="fieldSetMapper" ref="projectMapper" />
 </bean>
 </property>
</bean>

[/xml]

Full Spring configuration

[xml]

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:p="http://www.springframework.org/schema/p"
 xmlns:aop="http://www.springframework.org/schema/aop" xmlns:tx="http://www.springframework.org/schema/tx"
 xmlns:batch="http://www.springframework.org/schema/batch"
 xmlns:context="http://www.springframework.org/schema/context"
 xmlns:jms="http://www.springframework.org/schema/jms" xmlns:amq="http://activemq.apache.org/schema/core"
 xmlns:util="http://www.springframework.org/schema/util"
 xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
 http://www.springframework.org/schema/batch http://www.springframework.org/schema/batch/spring-batch-2.1.xsd
 http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd
 http://www.springframework.org/schema/jms http://www.springframework.org/schema/jms/spring-jms.xsd
 http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd
 http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util.xsd">
 <!-- 1) USE ANNOTATIONS TO CONFIGURE SPRING BEANS -->
 <context:component-scan base-package="com.batch.velocity" />
 <!-- Author Artaud Antoine -->
 <bean id="velocityEngine"
 class="org.springframework.ui.velocity.VelocityEngineFactoryBean">
 <property name="velocityProperties">
 <value>
 resource.loader=class
 class.resource.loader.class=org.apache.velocity.runtime.resource.loader.ClasspathResourceLoader
 </value>
 </property>
 </bean>
 <bean id="itemReader" class="org.springframework.batch.item.file.FlatFileItemReader">
 <property name="resource" value="classpath:com/batch/velocity/project.txt" />
 <!-- <property name="linesToSkip" value="1" /> -->
 <property name="lineMapper">
 <bean class="org.springframework.batch.item.file.mapping.DefaultLineMapper">
 <property name="lineTokenizer">
 <bean class="org.springframework.batch.item.file.transform.DelimitedLineTokenizer">
 <property name="names" value="name,project" />
 </bean>
 </property>
 <property name="fieldSetMapper" ref="projectMapper" />
 </bean>
 </property>
 </bean>
</beans>

[/xml]

Case 6 : export our database to a Jasper Report in PDF.

Case 7 : Couple Spring Batch and Velocity templates + processing

-Aim : use the two frameworks together and add some basic processing to the data you receive, from a CSV file for instance.

The XML configuration is the same as previously.

- The only difference resides in the .vm template (VTL) that we write for Apache Velocity.

[xml]

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:p="http://www.springframework.org/schema/p"
 xmlns:aop="http://www.springframework.org/schema/aop" xmlns:tx="http://www.springframework.org/schema/tx"
 xmlns:batch="http://www.springframework.org/schema/batch"
 xmlns:context="http://www.springframework.org/schema/context"
 xmlns:jms="http://www.springframework.org/schema/jms" xmlns:amq="http://activemq.apache.org/schema/core"
 xmlns:util="http://www.springframework.org/schema/util"
 xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
 http://www.springframework.org/schema/batch http://www.springframework.org/schema/batch/spring-batch-2.1.xsd
 http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd
 http://www.springframework.org/schema/jms http://www.springframework.org/schema/jms/spring-jms.xsd
 http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd
 http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util.xsd">
 <!-- 1) USE ANNOTATIONS TO CONFIGURE SPRING BEANS -->
 <context:component-scan base-package="com.batch.velocity.genClass" />
 <!-- Author Artaud Antoine -->
 <bean id="velocityEngine"
 class="org.springframework.ui.velocity.VelocityEngineFactoryBean">
 <property name="velocityProperties">
 <value>
 resource.loader=class
 class.resource.loader.class=org.apache.velocity.runtime.resource.loader.ClasspathResourceLoader
 </value>
 </property>
 </bean>
 <bean id="itemReader" class="org.springframework.batch.item.file.FlatFileItemReader">
 <property name="resource" value="classpath:com/batch/velocity/genClass/dataTest.csv" />
 <!-- <property name="linesToSkip" value="1" /> -->
 <property name="lineMapper">
 <bean class="org.springframework.batch.item.file.mapping.DefaultLineMapper">
 <property name="lineTokenizer">
 <bean class="org.springframework.batch.item.file.transform.DelimitedLineTokenizer">
 <!-- <property name="delimiter" value="MonDel"/> -->
 <property name="names" value="champ1,data1Type,documentation1,champ2,data2Type,documentation2" />
 </bean>
 </property>
 <property name="fieldSetMapper" ref="genMapperFactoryMapper" />
 </bean>
 </property>
 </bean>
</beans>

[/xml]

Apache Velocity template

What interests us is the bold part of the code below: it tells Apache Velocity to call this custom method on the data that you supply in the template model.

See /src/test/ressources/com/batch/velocity/genClass/example2.vm

[bash]

/**
 *
 */
public class $name {
#foreach( $element in $mylist )

    private $element.data1Type $element.champ1;

    /**
     * Map : $element.champ1,$element.data1Type,$element.documentation1
     * To  : $element.champ2,$element.data2Type,$element.documentation2
     * $element.documentation1
     * @return the $element.champ1
     */
    public $element.data1Type <strong>get${utility.getInputU($element.champ1)}</strong>() {
        return $element.champ1;
    }

    /**
     * Map : $element.champ1,$element.data1Type,$element.documentation1
     * To  : $element.champ2,$element.data2Type,$element.documentation2
     * $element.documentation1
     * @param $element.champ1 the $element.champ1 to set
     * #set( $setter = "set" )
     */
    public void <strong>set${utility.getInputU($element.champ1)}</strong>($element.data1Type $element.champ1) {
        this.$element.champ1 = $element.champ1;
    }
#end
}

[/bash]

Below is the Java code that is called by Apache Velocity:

[java]

public class GeneratorUtility {

    /**
     * Called from the template as: get${utility.getInputU($element.champ1)}
     * @param pInput the input whose first letter is to be capitalized
     */
    public String <strong>getInputU</strong>(String pInput) {
        return StringUtils.capitalizeFirstLetter(pInput);
    }
}

[/java]

Junit Test

[java]

@ContextConfiguration(locations = "classpath:com/batch/velocity/genClass/contextVelocity.xml")
@RunWith(SpringJUnit4ClassRunner.class)
public class VelocityCodeTemplateGenClassGenerator {

    @Autowired
    private VelocityEngine velocityEngine;

    @Autowired
    private FlatFileItemReader vFlatFileItemReader;

    @Test
    public void testFlatReaderItemToVelocityProcessor() {
        System.out.println("[INFO] ----------------------------------------------------------------------------------");
        System.out.println("[INFO] Running test : testFlatReaderItemToVelocityProcessor / " + getClass().getName());
        System.out.println("[INFO] ----------------------------------------------------------------------------------");

        vFlatFileItemReader.open(new ExecutionContext());
        boolean hasNext = true;
        GenMapperFactory vGenMapperFactory = null;
        GeneratorUtility vUtility = new GeneratorUtility();
        List<GenMapperFactory> vGenMapperFactoryList = new ArrayList<GenMapperFactory>();

        while (hasNext) {
            try {
                vGenMapperFactory = (GenMapperFactory) vFlatFileItemReader.read();
            } catch (UnexpectedInputException e) {
                e.printStackTrace();
            } catch (ParseException e) {
                e.printStackTrace();
            } catch (Exception e) {
                e.printStackTrace();
            }
            if (vGenMapperFactory == null) {
                // the reader returns null once the file is exhausted
                hasNext = false;
            } else {
                vGenMapperFactoryList.add(vGenMapperFactory);
                Map<String, Object> model = new HashMap<String, Object>();
                model.put("utility", vUtility);
                model.put("mylist", vGenMapperFactoryList);
                model.put("name", "ClassDeTest");
                String text = VelocityEngineUtils.mergeTemplateIntoString(velocityEngine, "com/batch/velocity/genClass/example2.vm", model);
                System.out.println(" " + text);
                System.out.println(vGenMapperFactory.toString());
            }
        }
    }
}

[/java]


Posted by : admin | On : 2 December 2011

The Large Hadron Collider (LHC) is a gigantic scientific instrument located near Geneva, straddling the French-Swiss border about 100 metres underground. It is a particle accelerator with which physicists study the smallest known particles: the fundamental constituents of matter. The LHC will revolutionise our understanding of the world, from the infinitely small inside atoms to the infinitely large of the Universe.

Two beams of subatomic particles from the "hadron" family (protons or lead ions) travel in opposite directions inside the circular accelerator, gaining energy on every lap. By colliding the two beams head-on at close to the speed of light and at very high energies, the LHC recreates the conditions that existed just after the Big Bang. Teams of physicists from around the world analyse the particles produced by these collisions using special detectors.

There are many theories about the outcome of these collisions. In any case, physicists expect a new era of physics, bringing new knowledge about how the Universe works. For decades, physicists have relied on the Standard Model of particle physics to try to understand the fundamental laws of Nature. But this model is insufficient. The experimental data obtained at the LHC's very high energies will push back the frontiers of knowledge, challenging both those who seek to confirm current theories and those who dream of new paradigms.

Source: CERN


Posted by : admin | On : 29 March 2024

IT security

Introduction

Nowadays IT security has become an indispensable element in protecting information, for individuals as well as for companies. To assess the degree of security to apply, we must first define how important the information to be protected is, and which technical means we want to put in place to protect it.

To secure information systems, the approach consists of:

  • assessing the risks and their criticality: which risks and which threats, on which data and which activity, with which consequences?
This is called "risk mapping". The quality of the security that will be put in place depends on the quality of this mapping.
  • searching for and selecting countermeasures: what will we secure, when and how?
This is the difficult stage of security choices: in a context of limited resources (time, skills and money), only some solutions can be implemented.
  • implementing the protections, and verifying their effectiveness.
This is the culmination of the analysis phase, and where the protection of the information system really begins. A frequent weakness of this phase is omitting to verify that the protections are actually effective (degraded-mode operation tests, data recovery tests, malicious attack tests, etc.).

IT security can be addressed at several levels.

In this article we will look more specifically at application-level security, which we can implement through encryption and hashing algorithms. We will explore the Java security API (JCA, Java Cryptography Architecture) as well as the Cipher class.

To start with, a good introduction to the field of IT security is an article I found interesting on Techno-Science: http://www.techno-science.net/?onglet=glossaire&definition=6154 http://igm.univ-mlv.fr/~dr/XPOSE2006/depail/fonctionnement.html

Asymmetric cryptography

Digital signature concepts are mainly based on asymmetric cryptography. This technique makes it possible to encrypt with one key and decrypt with another, the two being independent. For example, imagine that Bob wants to send secret messages to Alice. They will use asymmetric cryptography for this. Alice first generates a pair of keys: a private key (in red) and a public key (in green). These keys have particular properties with respect to the algorithms used: a message encrypted with one key can only be decrypted with the other key. They are one-way functions.

Alice generates a key pair

Alice then sends the public key (in green) to Bob. With this key, Bob can encrypt a text and send it to Alice.

Bob encrypts with Alice's public key

By using Alice's public key, Bob is certain of two things:

  • Nobody can read the message, since it is encrypted.
  • Only Alice can decrypt the message, because she is the only one who holds the private key.

We have just answered the need for data confidentiality. But asymmetric cryptography can be used in another way: one can also use the private key to encrypt, the public key then being used to decrypt. A message encrypted this way is readable by anyone holding the public key, which is not very useful if confidentiality is the goal. On the other hand, only one person could have encrypted that message: Alice. So if a message can be decrypted with Alice's public key, she is necessarily the person who encrypted it.
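As a minimal sketch of this exchange with the JCA `Cipher` API (the class and method names here are my own illustrative choices, and `"RSA"` resolves to the provider's default padding):

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;

public class AsymmetricDemo {

    // Alice's key pair: a public key (shared with Bob) and a private key (kept secret)
    public static KeyPair newKeyPair() {
        try {
            KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
            gen.initialize(2048);
            return gen.generateKeyPair();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // Bob encrypts with Alice's PUBLIC key; only Alice's PRIVATE key can decrypt.
    // Returns the decrypted text to show the round trip works.
    public static String roundTrip(String message, KeyPair alice) {
        try {
            Cipher cipher = Cipher.getInstance("RSA");
            cipher.init(Cipher.ENCRYPT_MODE, alice.getPublic());
            byte[] secret = cipher.doFinal(message.getBytes(StandardCharsets.UTF_8));

            cipher.init(Cipher.DECRYPT_MODE, alice.getPrivate());
            return new String(cipher.doFinal(secret), StandardCharsets.UTF_8);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        KeyPair alice = newKeyPair();
        System.out.println(roundTrip("message secret pour Alice", alice));
    }
}
```

Swapping the roles of the two keys (encrypting with the private key) is what the signature mechanism below builds on.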

Hash functions

I will now describe the mechanisms that ensure data has not been modified: hash functions. A hash function is a one-way process producing a sequence of bytes (a fingerprint) that characterises a set of data. For a given input, the resulting fingerprint is always the same. In the context of digital signatures, we are particularly interested in cryptographic hash functions. These make it infeasible to craft an input producing the same fingerprint as another input, so we can use them to verify the integrity of a document. The two most widely used algorithm families are MD5 and SHA. Note that MD5 is no longer considered secure by specialists: a Chinese team managed to find a full collision, i.e. two data sets producing the same fingerprint, without resorting to brute force. Today it is notably possible to create two HTML pages with different content but identical MD5 fingerprints (using, among other things, <meta> tags, which are invisible in the browser). Document forgery could therefore be possible.
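A minimal sketch of computing such a fingerprint with the JCA `MessageDigest` API (the choice of SHA-256 and the helper names below are mine, not prescribed by the article):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class DigestDemo {

    // Render a byte array as the usual lowercase hex string
    public static String hex(byte[] raw) {
        StringBuilder sb = new StringBuilder();
        for (byte b : raw) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    // One-way fingerprint: same input -> same digest, and no practical way back
    public static String sha256(String data) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            return hex(md.digest(data.getBytes(StandardCharsets.UTF_8)));
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        // The same input always yields the same fingerprint...
        System.out.println(sha256("document v1"));
        // ...while a single changed character yields a completely different one
        System.out.println(sha256("document v2"));
    }
}
```

Comparing the two printed digests illustrates the avalanche effect that makes fingerprints usable for integrity checks.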

Signing a document

Signing a document uses both asymmetric cryptography and hash functions. It is the combination of these two techniques that gives us the five characteristics of a signature (authentic, unforgeable, non-reusable, unalterable, irrevocable). Imagine that Alice wants to send a signed document to Bob.

  • First, she generates the fingerprint of the document using a hash function.
  • Then, she encrypts this fingerprint with her private key.

Alice signs the document to send it to Bob

  • She thus obtains the signature of her document, and sends both elements to Bob.

Alice sends the two elements to Bob

  • To check the validity of the document, Bob must first decrypt the signature using Alice's public key. If this does not work, the document was not sent by Alice.
  • Next, Bob generates the fingerprint of the document he received, using the same hash function as Alice (we assume they follow a protocol agreed beforehand).
  • Then, he compares the fingerprint he generated with the one extracted from the signature.

Bob verifies Alice's signature

  • If the two fingerprints are identical, the signature is valid. We can then be sure that:
    • Alice is the one who sent the document,
    • the document has not been modified since Alice signed it.
  • Otherwise, this can mean that:
    • the document has been modified since Alice signed it,
    • this is not the document that Alice signed.
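The hash-then-encrypt flow above can be sketched with the JCA `Signature` API, which performs both steps in one call (the `SHA256withRSA` algorithm, the 2048-bit key size and the method names are illustrative choices of mine, not fixed by the article):

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;

public class SignatureDemo {

    public static KeyPair newKeyPair() {
        try {
            KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
            gen.initialize(2048);
            return gen.generateKeyPair();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // Alice's side: hash the document (SHA-256) and encrypt the digest with her private key
    public static byte[] sign(byte[] document, PrivateKey privateKey) {
        try {
            Signature signer = Signature.getInstance("SHA256withRSA");
            signer.initSign(privateKey);
            signer.update(document);
            return signer.sign();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // Bob's side: recompute the digest and compare it with the one in the signature
    public static boolean verify(byte[] document, byte[] signature, PublicKey publicKey) {
        try {
            Signature verifier = Signature.getInstance("SHA256withRSA");
            verifier.initVerify(publicKey);
            verifier.update(document);
            return verifier.verify(signature);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        KeyPair alice = newKeyPair();
        byte[] doc = "contrat".getBytes(StandardCharsets.UTF_8);
        byte[] signature = sign(doc, alice.getPrivate());

        // Intact document: the fingerprints match
        System.out.println("valid:    " + verify(doc, signature, alice.getPublic()));
        // Tampered document: verification fails
        byte[] tampered = "contrat modifie".getBytes(StandardCharsets.UTF_8);
        System.out.println("tampered: " + verify(tampered, signature, alice.getPublic()));
    }
}
```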


import java.io.UnsupportedEncodingException;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

/**
 * <p>
 * step 1: create a singleton of the service
 * step 2: use the Java security API (JCA, Java Cryptography Architecture) to obtain a message digest instance for the supplied algorithm
 * step 3: convert the plain data to its byte representation using the UTF-8 encoding and feed it to the digest
 * step 4: the digest produces an array of bytes representing the hashed password value
 * step 5: encode the newly generated hash into a String
 * </p>
 * @author artaud antoine
 */
public final class PasswordServiceEncryption
{
	private static PasswordServiceEncryption instance;

	private PasswordServiceEncryption()
	{
	}

	// step 1: create a singleton of the service
	public static synchronized PasswordServiceEncryption getInstance()
	{
		if (instance == null)
		{
			instance = new PasswordServiceEncryption();
		}
		return instance;
	}

	public synchronized String encrypt(String plaintext) throws NoSuchAlgorithmException, UnsupportedEncodingException {
		// step 2: JCA message digest instance for the supplied algorithm
		MessageDigest md = MessageDigest.getInstance("SHA-512");
		md.reset();
		// step 3: convert the plain data to its byte representation (UTF-8)
		md.update(plaintext.getBytes("UTF-8"));
		// step 4: an array of bytes representing the digested (hashed) password value
		byte[] raw = md.digest();
		// step 5: Base64-encode the hash into a String
		// (java.util.Base64 replaces the non-public sun.misc.BASE64Encoder)
		String hash = Base64.getEncoder().encodeToString(raw);
		return hash;
	}
}


Posted by : admin | On : 16 November 2019

npm install -g npm@latest
npm install -g @angular/cli
ng new mon-premier-projet
cd mon-premier-projet
ng serve --open
ng new mon-projet-angular --style=scss --skip-tests=true

npm install bootstrap@3.3.7 --save

ng serve

import { Component } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.scss']
})
export class AppComponent {
  title = 'app';
}

ng generate component mon-premier 

lastUpdate = new Promise((resolve, reject) => {
    const date = new Date();
    setTimeout(
      () => {
        resolve(date);
      }, 2000
    );
  });

<p>Mis à jour : {{ lastUpdate | async | date: 'yMMMMEEEEd' | uppercase }}</p>

app/services
export class AppareilService {

}

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';

import { AppComponent } from './app.component';
import { MonPremierComponent } from './mon-premier/mon-premier.component';
import { AppareilComponent } from './appareil/appareil.component';
import { FormsModule } from '@angular/forms';
import { AppareilService } from './services/appareil.service';

@NgModule({
  declarations: [
    AppComponent,
    MonPremierComponent,
    AppareilComponent
  ],
  imports: [
    BrowserModule,
    FormsModule
  ],
  providers: [
    AppareilService
  ],
  bootstrap: [AppComponent]
})
export class AppModule { }
constructor(private appareilService: AppareilService) {
    setTimeout(
      () => {
        this.isAuth = true;
      }, 4000
    );
  }

export class AppareilService {
  appareils = [
    {
      name: 'Machine à laver',
      status: 'éteint'
    },
    {
      name: 'Frigo',
      status: 'allumé'
    },
    {
      name: 'Ordinateur',
      status: 'éteint'
    }
  ];
}

The routes in app.module.ts
import { NgModule } from '@angular/core';

import { AppComponent } from './app.component';
import { MonPremierComponent } from './mon-premier/mon-premier.component';
import { AppareilComponent } from './appareil/appareil.component';
import { FormsModule } from '@angular/forms';
import { AppareilService } from './services/appareil.service';
import { AuthComponent } from './auth/auth.component';
import { AppareilViewComponent } from './appareil-view/appareil-view.component';
import { Routes } from '@angular/router';

const appRoutes: Routes = [
  { path: 'appareils', component: AppareilViewComponent },
  { path: 'auth', component: AuthComponent },
  { path: '', component: AppareilViewComponent }
];

<router-outlet></router-outlet>

import { Component, OnInit } from '@angular/core';
import { AuthService } from '../services/auth.service';

@Component({
  selector: 'app-auth',
  templateUrl: './auth.component.html',
  styleUrls: ['./auth.component.scss']
})
export class AuthComponent implements OnInit {

  authStatus: boolean;

  constructor(private authService: AuthService) { }

  ngOnInit() {
    this.authStatus = this.authService.isAuth;
  }

  onSignIn() {
    this.authService.signIn().then(
      () => {
        console.log('Sign in successful!');
        this.authStatus = this.authService.isAuth;
      }
    );
  }

  onSignOut() {
    this.authService.signOut();
    this.authStatus = this.authService.isAuth;
  }

}
<h2>Authentification</h2>
<button class="btn btn-success" *ngIf="!authStatus" (click)="onSignIn()">Se connecter</button>
<button class="btn btn-danger" *ngIf="authStatus" (click)="onSignOut()">Se déconnecter</button>


Posted by : admin | On : 17 October 2019

Reading

https://thierry-leriche-dessirier.developpez.com/tutoriels/java/vertx/discuter-via-event-bus/


Posted by : admin | On : 17 October 2019

Download the oc command-line client from the Minishift console (top right corner).

This template sets up a complete CI/CD stack for you, with Gogs, Jenkins and SonarQube backed by PostgreSQL.

I encountered startup issues on my desktop: a setup with at least 10 GB of RAM is required; then restart each process individually and it should work :)

oc process -f https://raw.githubusercontent.com/OpnShiftDemos/openshift-cd-demo/master/cicd-gogs-template.yaml | oc create -f -

oc process -f  https://raw.githubusercontent.com/siamaksade/openshift-cd-demo/ocp-4.1/cicd-template.yaml | oc create -f -

 

 

# Create Projects
oc new-project dev --display-name="Tasks - Dev"
oc new-project stage --display-name="Tasks - Stage"
oc new-project cicd --display-name="CI/CD"

# Grant Jenkins Access to Projects
oc policy add-role-to-group edit system:serviceaccounts:cicd -n dev
oc policy add-role-to-group edit system:serviceaccounts:cicd -n stage
# Deploy Demo
oc new-app -n cicd -f https://raw.githubusercontent.com/siamaksade/openshift-cd-demo/ocp-4.1/cicd-template.yaml
# the YAML below


 

 

 

apiVersion: v1
kind: Template
labels:
  template: cicd
  group: cicd
metadata:
  annotations:
    iconClass: icon-jenkins
    tags: instant-app,jenkins,gogs,nexus,cicd
  name: cicd
message: "Use the following credentials for login:\nJenkins: use your OpenShift credentials\nNexus: admin/admin123\nSonarQube: admin/admin\nGogs Git Server: gogs/gogs"
parameters:
- displayName: DEV project name
  value: dev
  name: DEV_PROJECT
  required: true
- displayName: STAGE project name
  value: stage
  name: STAGE_PROJECT
  required: true
- displayName: Ephemeral
  description: Use no persistent storage for Gogs and Nexus
  value: "true"
  name: EPHEMERAL
  required: true
- description: Webhook secret
  from: '[a-zA-Z0-9]{8}'
  generate: expression
  name: WEBHOOK_SECRET
  required: true
- displayName: Integrate Quay.io
  description: Integrate image build and deployment with Quay.io
  value: "false"
  name: ENABLE_QUAY
  required: true
- displayName: Quay.io Username
  description: Quay.io username to push the images to tasks-sample-app repository on your Quay.io account
  name: QUAY_USERNAME
- displayName: Quay.io Password
  description: Quay.io password to push the images to tasks-sample-app repository on your Quay.io account
  name: QUAY_PASSWORD
- displayName: Quay.io Image Repository
  description: Quay.io repository for pushing Tasks container images
  name: QUAY_REPOSITORY
  required: true
  value: tasks-app
objects:
- apiVersion: v1
  groupNames: null
  kind: RoleBinding
  metadata:
    name: default_admin
  roleRef:
    name: admin
  subjects:
  - kind: ServiceAccount
    name: default
# Pipeline
- apiVersion: v1
  kind: BuildConfig
  metadata:
    annotations:
      pipeline.alpha.openshift.io/uses: '[{"name": "jenkins", "namespace": "", "kind": "DeploymentConfig"}]'
    labels:
      app: cicd-pipeline
      name: cicd-pipeline
    name: tasks-pipeline
  spec:
    triggers:
      - type: GitHub
        github:
          secret: ${WEBHOOK_SECRET}
      - type: Generic
        generic:
          secret: ${WEBHOOK_SECRET}
    runPolicy: Serial
    source:
      type: None
    strategy:
      jenkinsPipelineStrategy:
        env:
        - name: DEV_PROJECT
          value: ${DEV_PROJECT}
        - name: STAGE_PROJECT
          value: ${STAGE_PROJECT}
        - name: ENABLE_QUAY
          value: ${ENABLE_QUAY}
        jenkinsfile: |-
          def mvnCmd = "mvn -s configuration/cicd-settings-nexus3.xml"

          pipeline {
            agent {
              label 'maven'
            }
            stages {
              stage('Build App') {
                steps {
                  git branch: 'eap-7', url: 'http://gogs:3000/gogs/openshift-tasks.git'
                  sh "${mvnCmd} install -DskipTests=true"
                }
              }
              stage('Test') {
                steps {
                  sh "${mvnCmd} test"
                  step([$class: 'JUnitResultArchiver', testResults: '**/target/surefire-reports/TEST-*.xml'])
                }
              }
              stage('Code Analysis') {
                steps {
                  script {
                    sh "${mvnCmd} sonar:sonar -Dsonar.host.url=http://sonarqube:9000 -DskipTests=true"
                  }
                }
              }
              stage('Archive App') {
                steps {
                  sh "${mvnCmd} deploy -DskipTests=true -P nexus3"
                }
              }
              stage('Build Image') {
                steps {
                  sh "cp target/openshift-tasks.war target/ROOT.war"
                  script {
                    openshift.withCluster() {
                      openshift.withProject(env.DEV_PROJECT) {
                        openshift.selector("bc", "tasks").startBuild("--from-file=target/ROOT.war", "--wait=true")
                      }
                    }
                  }
                }
              }
              stage('Deploy DEV') {
                steps {
                  script {
                    openshift.withCluster() {
                      openshift.withProject(env.DEV_PROJECT) {
                        openshift.selector("dc", "tasks").rollout().latest();
                      }
                    }
                  }
                }
              }
              stage('Promote to STAGE?') {
                agent {
                  label 'skopeo'
                }
                steps {
                  timeout(time:15, unit:'MINUTES') {
                      input message: "Promote to STAGE?", ok: "Promote"
                  }

                  script {
                    openshift.withCluster() {
                      if (env.ENABLE_QUAY.toBoolean()) {
                        withCredentials([usernamePassword(credentialsId: "${openshift.project()}-quay-cicd-secret", usernameVariable: "QUAY_USER", passwordVariable: "QUAY_PWD")]) {
                          sh "skopeo copy docker://quay.io/${QUAY_USERNAME}/${QUAY_REPOSITORY}:latest docker://quay.io/${QUAY_USERNAME}/${QUAY_REPOSITORY}:stage --src-creds \"$QUAY_USER:$QUAY_PWD\" --dest-creds \"$QUAY_USER:$QUAY_PWD\" --src-tls-verify=false --dest-tls-verify=false"
                        }
                      } else {
                        openshift.tag("${env.DEV_PROJECT}/tasks:latest", "${env.STAGE_PROJECT}/tasks:stage")
                      }
                    }
                  }
                }
              }
              stage('Deploy STAGE') {
                steps {
                  script {
                    openshift.withCluster() {
                      openshift.withProject(env.STAGE_PROJECT) {
                        openshift.selector("dc", "tasks").rollout().latest();
                      }
                    }
                  }
                }
              }
            }
          }
      type: JenkinsPipeline
- apiVersion: v1
  kind: ConfigMap
  metadata:
    labels:
      app: cicd-pipeline
      role: jenkins-slave
    name: jenkins-slaves
  data:
    maven-template: |-
      <org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
        <inheritFrom></inheritFrom>
        <name>maven</name>
        <privileged>false</privileged>
        <alwaysPullImage>false</alwaysPullImage>
        <instanceCap>2147483647</instanceCap>
        <idleMinutes>0</idleMinutes>
        <label>maven</label>
        <serviceAccount>jenkins</serviceAccount>
        <nodeSelector></nodeSelector>
        <customWorkspaceVolumeEnabled>false</customWorkspaceVolumeEnabled>
        <workspaceVolume class="org.csanchez.jenkins.plugins.kubernetes.volumes.workspace.EmptyDirWorkspaceVolume">
          <memory>false</memory>
        </workspaceVolume>
        <volumes />
        <containers>
          <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
            <name>jnlp</name>
            <image>openshift/jenkins-agent-maven-35-centos7</image>
            <privileged>false</privileged>
            <alwaysPullImage>false</alwaysPullImage>
            <workingDir>/tmp</workingDir>
            <command></command>
            <args>${computer.jnlpmac} ${computer.name}</args>
            <ttyEnabled>false</ttyEnabled>
            <resourceRequestCpu>200m</resourceRequestCpu>
            <resourceRequestMemory>512Mi</resourceRequestMemory>
            <resourceLimitCpu>2</resourceLimitCpu>
            <resourceLimitMemory>4Gi</resourceLimitMemory>
            <envVars/>
          </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
        </containers>
        <envVars/>
        <annotations/>
        <imagePullSecrets/>
      </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
    skopeo-template: |-
      <org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
        <inheritFrom></inheritFrom>
        <name>skopeo</name>
        <privileged>false</privileged>
        <alwaysPullImage>false</alwaysPullImage>
        <instanceCap>2147483647</instanceCap>
        <idleMinutes>0</idleMinutes>
        <label>skopeo</label>
        <serviceAccount>jenkins</serviceAccount>
        <nodeSelector></nodeSelector>
        <customWorkspaceVolumeEnabled>false</customWorkspaceVolumeEnabled>
        <workspaceVolume class="org.csanchez.jenkins.plugins.kubernetes.volumes.workspace.EmptyDirWorkspaceVolume">
          <memory>false</memory>
        </workspaceVolume>
        <volumes />
        <containers>
          <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
            <name>jnlp</name>
            <image>docker.io/siamaksade/jenkins-slave-skopeo-centos7</image>
            <privileged>false</privileged>
            <alwaysPullImage>false</alwaysPullImage>
            <workingDir>/tmp</workingDir>
            <command></command>
            <args>${computer.jnlpmac} ${computer.name}</args>
            <ttyEnabled>false</ttyEnabled>
            <envVars/>
          </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
        </containers>
        <envVars/>
        <annotations/>
        <imagePullSecrets/>
      </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
# Setup Demo
- apiVersion: batch/v1
  kind: Job
  metadata:
    name: cicd-demo-installer
  spec:
    activeDeadlineSeconds: 400
    completions: 1
    parallelism: 1
    template:
      spec:
        containers:
        - env:
          - name: CICD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          command:
          - /bin/bash
          - -x
          - -c
          - |
            # adjust jenkins
            oc set resources dc/jenkins --limits=cpu=2,memory=2Gi --requests=cpu=100m,memory=512Mi
            oc label dc jenkins app=jenkins --overwrite 

            # setup dev env
            oc import-image wildfly --from=openshift/wildfly-120-centos7 --confirm -n ${DEV_PROJECT} 

            if [ "${ENABLE_QUAY}" == "true" ] ; then
              # cicd
              oc create secret generic quay-cicd-secret --from-literal="username=${QUAY_USERNAME}" --from-literal="password=${QUAY_PASSWORD}" -n ${CICD_NAMESPACE}
              oc label secret quay-cicd-secret credential.sync.jenkins.openshift.io=true -n ${CICD_NAMESPACE}

              # dev
              oc create secret docker-registry quay-cicd-secret --docker-server=quay.io --docker-username="${QUAY_USERNAME}" --docker-password="${QUAY_PASSWORD}" --docker-email=cicd@redhat.com -n ${DEV_PROJECT}
              oc new-build --name=tasks --image-stream=wildfly:latest --binary=true --push-secret=quay-cicd-secret --to-docker --to='quay.io/${QUAY_USERNAME}/${QUAY_REPOSITORY}:latest' -n ${DEV_PROJECT}
              oc new-app --name=tasks --docker-image=quay.io/${QUAY_USERNAME}/${QUAY_REPOSITORY}:latest --allow-missing-images -n ${DEV_PROJECT}
              oc set triggers dc tasks --remove-all -n ${DEV_PROJECT}
              oc patch dc tasks -p '{"spec": {"template": {"spec": {"containers": [{"name": "tasks", "imagePullPolicy": "Always"}]}}}}' -n ${DEV_PROJECT}
              oc delete is tasks -n ${DEV_PROJECT}
              oc secrets link default quay-cicd-secret --for=pull -n ${DEV_PROJECT}

              # stage
              oc create secret docker-registry quay-cicd-secret --docker-server=quay.io --docker-username="${QUAY_USERNAME}" --docker-password="${QUAY_PASSWORD}" --docker-email=cicd@redhat.com -n ${STAGE_PROJECT}
              oc new-app --name=tasks --docker-image=quay.io/${QUAY_USERNAME}/${QUAY_REPOSITORY}:stage --allow-missing-images -n ${STAGE_PROJECT}
              oc set triggers dc tasks --remove-all -n ${STAGE_PROJECT}
              oc patch dc tasks -p '{"spec": {"template": {"spec": {"containers": [{"name": "tasks", "imagePullPolicy": "Always"}]}}}}' -n ${STAGE_PROJECT}
              oc delete is tasks -n ${STAGE_PROJECT}
              oc secrets link default quay-cicd-secret --for=pull -n ${STAGE_PROJECT}
            else
              # dev
              oc new-build --name=tasks --image-stream=wildfly:latest --binary=true -n ${DEV_PROJECT}
              oc new-app tasks:latest --allow-missing-images -n ${DEV_PROJECT}
              oc set triggers dc -l app=tasks --containers=tasks --from-image=tasks:latest --manual -n ${DEV_PROJECT}

              # stage
              oc new-app tasks:stage --allow-missing-images -n ${STAGE_PROJECT}
              oc set triggers dc -l app=tasks --containers=tasks --from-image=tasks:stage --manual -n ${STAGE_PROJECT}
            fi

            # dev project
            oc expose dc/tasks --port=8080 -n ${DEV_PROJECT}
            oc expose svc/tasks -n ${DEV_PROJECT}
            oc set probe dc/tasks --readiness --get-url=http://:8080/ws/demo/healthcheck --initial-delay-seconds=30 --failure-threshold=10 --period-seconds=10 -n ${DEV_PROJECT}
            oc set probe dc/tasks --liveness  --get-url=http://:8080/ws/demo/healthcheck --initial-delay-seconds=180 --failure-threshold=10 --period-seconds=10 -n ${DEV_PROJECT}
            oc rollout cancel dc/tasks -n ${STAGE_PROJECT}

            # stage project
            oc expose dc/tasks --port=8080 -n ${STAGE_PROJECT}
            oc expose svc/tasks -n ${STAGE_PROJECT}
            oc set probe dc/tasks --readiness --get-url=http://:8080/ws/demo/healthcheck --initial-delay-seconds=30 --failure-threshold=10 --period-seconds=10 -n ${STAGE_PROJECT}
            oc set probe dc/tasks --liveness  --get-url=http://:8080/ws/demo/healthcheck --initial-delay-seconds=180 --failure-threshold=10 --period-seconds=10 -n ${STAGE_PROJECT}
            oc rollout cancel dc/tasks -n ${DEV_PROJECT}

            # deploy gogs
            HOSTNAME=$(oc get route jenkins -o template --template='{{.spec.host}}' | sed "s/jenkins-${CICD_NAMESPACE}.//g")
            GOGS_HOSTNAME="gogs-$CICD_NAMESPACE.$HOSTNAME"

            if [ "${EPHEMERAL}" == "true" ] ; then
              oc new-app -f https://raw.githubusercontent.com/siamaksade/gogs-openshift-docker/master/openshift/gogs-template.yaml \
                  --param=GOGS_VERSION=0.11.34 \
                  --param=DATABASE_VERSION=9.6 \
                  --param=HOSTNAME=$GOGS_HOSTNAME \
                  --param=SKIP_TLS_VERIFY=true
            else
              oc new-app -f https://raw.githubusercontent.com/siamaksade/gogs-openshift-docker/master/openshift/gogs-persistent-template.yaml \
                  --param=GOGS_VERSION=0.11.34 \
                  --param=DATABASE_VERSION=9.6 \
                  --param=HOSTNAME=$GOGS_HOSTNAME \
                  --param=SKIP_TLS_VERIFY=true
            fi

            sleep 5

            if [ "${EPHEMERAL}" == "true" ] ; then
              oc new-app -f https://raw.githubusercontent.com/siamaksade/sonarqube/master/sonarqube-template.yml --param=SONARQUBE_MEMORY_LIMIT=2Gi
            else
              oc new-app -f https://raw.githubusercontent.com/siamaksade/sonarqube/master/sonarqube-persistent-template.yml --param=SONARQUBE_MEMORY_LIMIT=2Gi
            fi

            oc set resources dc/sonardb --limits=cpu=200m,memory=512Mi --requests=cpu=50m,memory=128Mi
            oc set resources dc/sonarqube --limits=cpu=1,memory=2Gi --requests=cpu=50m,memory=128Mi

            if [ "${EPHEMERAL}" == "true" ] ; then
              oc new-app -f https://raw.githubusercontent.com/OpenShiftDemos/nexus/master/nexus3-template.yaml --param=NEXUS_VERSION=3.13.0 --param=MAX_MEMORY=2Gi
            else
              oc new-app -f https://raw.githubusercontent.com/OpenShiftDemos/nexus/master/nexus3-persistent-template.yaml --param=NEXUS_VERSION=3.13.0 --param=MAX_MEMORY=2Gi
            fi

            oc set resources dc/nexus --requests=cpu=200m --limits=cpu=2

            GOGS_SVC=$(oc get svc gogs -o template --template='{{.spec.clusterIP}}')
            GOGS_USER=gogs
            GOGS_PWD=gogs

            oc rollout status dc gogs

            _RETURN=$(curl -o /tmp/curl.log -sL --post302 -w "%{http_code}" http://$GOGS_SVC:3000/user/sign_up \
              --form user_name=$GOGS_USER \
              --form password=$GOGS_PWD \
              --form retype=$GOGS_PWD \
              --form email=admin@gogs.com)

            sleep 5

            if [ $_RETURN != "200" ] && [ $_RETURN != "302" ] ; then
              echo "ERROR: Failed to create Gogs admin"
              cat /tmp/curl.log
              exit 255
            fi

            sleep 10

            cat <<EOF > /tmp/data.json
            {
              "clone_addr": "https://github.com/OpenShiftDemos/openshift-tasks.git",
              "uid": 1,
              "repo_name": "openshift-tasks"
            }
            EOF

            _RETURN=$(curl -o /tmp/curl.log -sL -w "%{http_code}" -H "Content-Type: application/json" \
            -u $GOGS_USER:$GOGS_PWD -X POST http://$GOGS_SVC:3000/api/v1/repos/migrate -d @/tmp/data.json)

            if [ $_RETURN != "201" ] ;then
              echo "ERROR: Failed to import openshift-tasks GitHub repo"
              cat /tmp/curl.log
              exit 255
            fi

            sleep 5

            cat <<EOF > /tmp/data.json
            {
              "type": "gogs",
              "config": {
                "url": "https://openshift.default.svc.cluster.local/apis/build.openshift.io/v1/namespaces/$CICD_NAMESPACE/buildconfigs/tasks-pipeline/webhooks/${WEBHOOK_SECRET}/generic",
                "content_type": "json"
              },
              "events": [
                "push"
              ],
              "active": true
            }
            EOF

            _RETURN=$(curl -o /tmp/curl.log -sL -w "%{http_code}" -H "Content-Type: application/json" \
            -u $GOGS_USER:$GOGS_PWD -X POST http://$GOGS_SVC:3000/api/v1/repos/gogs/openshift-tasks/hooks -d @/tmp/data.json)

            if [ $_RETURN != "201" ] ; then
              echo "ERROR: Failed to set webhook"
              cat /tmp/curl.log
              exit 255
            fi

            oc label dc sonarqube "app.kubernetes.io/part-of"="sonarqube" --overwrite
            oc label dc sonardb "app.kubernetes.io/part-of"="sonarqube" --overwrite
            oc label dc jenkins "app.kubernetes.io/part-of"="jenkins" --overwrite
            oc label dc nexus "app.kubernetes.io/part-of"="nexus" --overwrite
            oc label dc gogs "app.kubernetes.io/part-of"="gogs" --overwrite
            oc label dc gogs-postgresql "app.kubernetes.io/part-of"="gogs" --overwrite

          image: quay.io/openshift/origin-cli:v4.0
          name: cicd-demo-installer-job
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
        restartPolicy: Never

Posted by : admin | On : 16 October 2019

Installation on Windows 10

* You must be able to run PowerShell as administrator.

1 – Install Hyper-V in Windows 10. See Windows instructions.

2 – Add the user to the local Hyper-V Administrators group.

PS command :

 

([adsi]"WinNT://./Hyper-V Administrators,group").Add("WinNT://$env:UserDomain/$env:Username,user")

Or do it manually via Control Panel -> Administrative Tools.


3 – Add an External Virtual Switch (on the left: Virtual Switch Manager).
Double-check the name of the ASSOCIATED network adapter.
First identify the network adapter to use.
PS command :
Get-NetAdapter

 

Then store which one you want to use for Minishift network access.
PS command :
$net = Get-NetAdapter -Name 'Wi-Fi'

At last create a virtual switch for Hyper-V :
New-VMSwitch -Name "External VM Switch" -AllowManagementOS $True -NetAdapterName $net.Name

I named it "External VM Switch" but the name doesn't matter.
From now on, PowerShell is no longer required; you can use either PowerShell or cmd.exe.

 

 

2) Starting Up Minishift

  • Tip: to start on Windows, apply the following command:

minishift config set skip-check-hyperv-driver true

Minishift has to be aware of the external virtual switch to use.
You can set its name from a Minishift command argument or as an environment variable.

In theory that should be enough, but it will not be the case on Windows 10:
minishift.exe config set hyperv-virtual-switch "External VM Switch"

minishift start

 

* PowerShell output

– Starting profile 'minishift'
– Check if deprecated options are used … OK
– Checking if https://github.com is reachable … OK
– Checking if requested OpenShift version 'v3.11.0' is valid … OK
– Checking if requested OpenShift version 'v3.11.0' is supported … OK
– Checking if requested hypervisor 'hyperv' is supported on this platform … OK
– Checking if Powershell is available … OK
– Checking if Hyper-V driver is installed … SKIP
– Checking if Hyper-V driver is configured to use a Virtual Switch … SKIP
– Checking if user is a member of the Hyper-V Administrators group … SKIP
– Checking the ISO URL … OK
– Downloading OpenShift binary 'oc' version 'v3.11.0'
53.59 MiB / 53.59 MiB [===================================================================================] 100.00% 0s
– Downloading OpenShift v3.11.0 checksums … OK
– Checking if provided oc flags are supported … OK
– Starting the OpenShift cluster using 'hyperv' hypervisor …
– Minishift VM will be configured with …
Memory:    4 GB
vCPUs :    2
Disk size: 20 GB

Downloading ISO 'https://github.com/minishift/minishift-centos-iso/releases/download/v1.15.0/minishift-centos7.iso'
355.00 MiB / 355.00 MiB [

 

This command downloads the Minishift ISO but fails when it tries to retrieve the downloaded ISO on Windows 10, because the path used by the running script assumes a Linux filesystem layout.
After the failure, a simple workaround is to move the downloaded ISO wherever you like and to point to it when executing Minishift's start command.
That gives:
minishift start --hyperv-virtual-switch "External VM Switch" --iso-url file://D:/minishift/minishift-centos7.iso
where iso-url should refer to the file URI of the Minishift ISO.
The startup will take some time. At the end you should see something like:

 

3) Stop Minishift

When you are finished using Minishift, in most cases you don't want to delete all the files created to start it.
You just want to release the memory/CPU resources allocated to Minishift by stopping it:
minishift stop

4) Delete Minishift

A weird issue with Minishift, or the need to start from scratch? Delete the files associated with the Minishift instance/cluster: minishift delete --force

5) Problems starting up Minishift?

 

Q: I get a broad error but I don't see the cause.
A: Increase the logging verbosity by adding the arguments --show-libmachine-logs -v5 to the minishift command.

Q: I feel that I fixed what was wrong or missing but the Minishift startup still fails.
A: Delete the startup files and restart Minishift from scratch.
This command will help:
minishift delete --force

6) Opening the Minishift console

Either use the URL provided in the logs, or run the console command: minishift console


Next step: install Istio!

 

Other sources: the official RHEL documentation

https://www.marksei.com/openshift-minishift-widnows/

http://myjavaadventures.com/blog/2018/11/15/installer-minishift/

https://blog.dbi-services.com/openshift-on-my-windows-10-laptop-with-minishift/

https://www.informatiweb.net/tutoriels/informatique/11-virtualisation/268–virtualbox-comment-utiliser-et-configurer-les-differents-modes-d-acces-reseau-d-une-machine-virtuelle–2.html#host-only-adapter

https://access.redhat.com/documentation/en-us/red_hat_container_development_kit/3.3/html/getting_started_guide/getting_started_with_container_development_kit

Posted by : admin | On : 20 August 2019

vimeo

https://vimeo.com/39796236

source code https://github.com/DoctusHartwald/ticket-monster

 

sample pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<!--
 Copyright 2016-2017 Red Hat, Inc, and individual contributors.

 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
 You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
-->
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>com.example</groupId>
  <artifactId>fruits</artifactId>
  <version>15-SNAPSHOT</version>

  <name>Simple Fruits Application</name>
  <description>Spring Boot - CRUD Booster</description>

  <properties>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>

    <maven.min.version>3.3.9</maven.min.version>
    <postgresql.version>9.4.1212</postgresql.version>
    <openjdk18-openshift.version>1.3</openjdk18-openshift.version>

    <spring-boot.version>2.1.3.RELEASE</spring-boot.version>
    <spring-boot-bom.version>2.1.3.Final-redhat-00001</spring-boot-bom.version>
    <maven-surefire-plugin.version>2.20</maven-surefire-plugin.version>
    <fabric8-maven-plugin.version>3.5.40</fabric8-maven-plugin.version>
    <fabric8.openshift.trimImageInContainerSpec>true</fabric8.openshift.trimImageInContainerSpec>
    <fabric8.skip.build.pom>true</fabric8.skip.build.pom>
    <fabric8.generator.from>
      registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift:${openjdk18-openshift.version}
    </fabric8.generator.from>
  </properties>

  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>me.snowdrop</groupId>
        <artifactId>spring-boot-bom</artifactId>
        <version>${spring-boot-bom.version}</version>
        <type>pom</type>
        <scope>import</scope>
      </dependency>
    </dependencies>
  </dependencyManagement>

  <dependencies>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter</artifactId>
    </dependency>

    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-web</artifactId>
    </dependency>

    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-data-jpa</artifactId>
    </dependency>

<!-- TODO: ADD Actuator dependency here -->

    <!-- Testing -->
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-test</artifactId>
      <scope>test</scope>
    </dependency>

<!-- /actuator/health
management.endpoints.web.exposure.include=*
/actuator/metrics
/actuator/metrics/[metric-name]
/actuator/beans
Jaeger and Tracing
-->
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>

    <dependency>
      <groupId>org.postgresql</groupId>
      <artifactId>postgresql</artifactId>
      <version>${postgresql.version}</version>
      <scope>runtime</scope>
    </dependency>

  </dependencies>

  <build>
    <plugins>
      <plugin>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-maven-plugin</artifactId>
        <version>${spring-boot.version}</version>
        <configuration>
          <profiles>
            <profile>local</profile>
          </profiles>
          <classifier>exec</classifier>
        </configuration>
        <executions>
          <execution>
            <goals>
              <goal>repackage</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>

  <profiles>
    <profile>
      <id>local</id>
      <activation>
        <activeByDefault>true</activeByDefault>
      </activation>
      <dependencies>
        <dependency>
          <groupId>com.h2database</groupId>
          <artifactId>h2</artifactId>
          <scope>runtime</scope>
        </dependency>
      </dependencies>
    </profile>
    <profile>
      <id>openshift</id>
      <dependencies>
        <!-- TODO: ADD PostgreSQL database dependency here -->
      </dependencies>
      <build>
        <plugins>
          <plugin>
            <groupId>io.fabric8</groupId>
            <artifactId>fabric8-maven-plugin</artifactId>
            <version>${fabric8-maven-plugin.version}</version>
            <executions>
              <execution>
                <id>fmp</id>
                <goals>
                  <goal>resource</goal>
                  <goal>build</goal>
                </goals>
              </execution>
            </executions>
          </plugin>
        </plugins>
      </build>
    </profile>
  </profiles>
</project>

test class sample

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public class ApplicationTest {

    @Autowired
    private FruitRepository fruitRepository;

    @Before
    public void beforeTest() {
    }

    @Test
    public void testGetAll() {
      assertTrue(fruitRepository.findAll().spliterator().getExactSizeIfKnown() == 3);
    }

    @Test
    public void getOne() {
      assertTrue(fruitRepository.findById(1).orElse(null) != null);
    }
}


Spring REST services
package com.example.service;

import java.util.List;
import java.util.Objects;
import java.util.Spliterator;
import java.util.stream.Collectors;
import java.util.stream.StreamSupport;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.MediaType;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.PutMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.bind.annotation.ResponseStatus;

@Controller
@RequestMapping(value = "/api/fruits")
public class FruitController {

    private final FruitRepository repository;

    @Autowired
    public FruitController(FruitRepository repository) {
        this.repository = repository;
    }

    @ResponseBody
    @GetMapping(produces = MediaType.APPLICATION_JSON_VALUE)
    public List<Fruit> getAll() {
        return StreamSupport
                .stream(repository.findAll().spliterator(), false)
                .collect(Collectors.toList());
    }
@ResponseBody
    @ResponseStatus(HttpStatus.CREATED)
    @PostMapping(consumes = MediaType.APPLICATION_JSON_VALUE, produces = MediaType.APPLICATION_JSON_VALUE)
    public Fruit post(@RequestBody(required = false) Fruit fruit) {
        verifyCorrectPayload(fruit);

        return repository.save(fruit);
    }

    @ResponseBody
    @GetMapping(value = "/{id}", produces = MediaType.APPLICATION_JSON_VALUE)
    public Fruit get(@PathVariable("id") Integer id) {
        verifyFruitExists(id);

        return repository.findById(id).orElse(null);
    }

}

Fabric8 example

apiVersion: v1
kind: Deployment
metadata:
  name: ${project.artifactId}
spec:
  template:
    spec:
      containers:
        - env:
            - name: DB_USERNAME
              valueFrom:
                 secretKeyRef:
                   name: my-database-secret
                   key: user
            - name: DB_PASSWORD
              valueFrom:
                 secretKeyRef:
                   name: my-database-secret
                   key: password
            - name: JAVA_OPTIONS
              value: "-Dspring.profiles.active=openshift"
OpenShift properties
spring.datasource.url=jdbc:postgresql://${MY_DATABASE_SERVICE_HOST}:${MY_DATABASE_SERVICE_PORT}/my_data
spring.datasource.username=${DB_USERNAME}
spring.datasource.password=${DB_PASSWORD}
spring.datasource.driver-class-name=org.postgresql.Driver
spring.jpa.hibernate.ddl-auto=create
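The secretKeyRef entries above assume a my-database-secret already exists in the project. A minimal sketch of that Secret follows; the user/password values are placeholders, not values taken from the tutorial:

```yaml
# Secret read by the DB_USERNAME / DB_PASSWORD env vars above.
# The stringData values are placeholders, not real credentials.
apiVersion: v1
kind: Secret
metadata:
  name: my-database-secret
type: Opaque
stringData:
  user: luke
  password: secret
```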

Posted by : admin | On : 6 May 2019

Sample ratings.txt data (tab-separated: user, movie, rating, timestamp) used by the Spark workshop below:

196	242	3	881250949
186	302	3	891717742
22	377	1	878887116
244	51	2	880606923
166	346	1	886397596
298	474	4	884182806
115	265	2	881171488
253	465	5	891628467
305	451	3	886324817
6	86	3	883603013
62	257	2	879372434
286	1014	5	879781125
200	222	5	876042340
210	40	3	891035994
224	29	3	888104457
303	785	3	879485318
122	387	5	879270459
194	274	2	879539794
291	1042	4	874834944
234	1184	2	892079237
119	392	4	886176814
167	486	4	892738452
299	144	4	877881320
291	118	2	874833878
308	1	4	887736532
95	546	2	879196566
38	95	5	892430094
102	768	2	883748450
63	277	4	875747401
160	234	5	876861185
50	246	3	877052329
301	98	4	882075827
225	193	4	879539727
290	88	4	880731963
97	194	3	884238860
157	274	4	886890835
181	1081	1	878962623
278	603	5	891295330
276	796	1	874791932
7	32	4	891350932
10	16	4	877888877
284	304	4	885329322
201	979	2	884114233
276	564	3	874791805
287	327	5	875333916
246	201	5	884921594
242	1137	5	879741196
249	241	5	879641194
99	4	5	886519097
178	332	3	882823437
251	100	4	886271884
81	432	2	876535131
260	322	4	890618898
25	181	5	885853415
59	196	5	888205088
72	679	2	880037164
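Before running the Spark job below, the same aggregation can be sanity-checked in plain shell. A minimal sketch: the two sample rows are copied from the data above, and only user 200's ratings are aggregated (in the full dataset user 200 has many more rows, so the real numbers differ):

```shell
# Two sample rows copied from the dataset above (user<TAB>movie<TAB>rating<TAB>timestamp).
printf '200\t222\t5\t876042340\n251\t100\t4\t886271884\n' > ratings.txt

# Mean / min / max / count of the ratings of user 200, mirroring the Spark job.
awk -F'\t' '$1 == 200 {
  sum += $3; n++
  if (min == "" || $3 < min) min = $3
  if ($3 > max) max = $3
}
END { if (n) printf "mean=%.2f min=%d max=%d count=%d\n", sum/n, min, max, n }' ratings.txt
# → mean=5.00 min=5 max=5 count=1
```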
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>org.test</groupId>
    <artifactId>initiation-spark-java</artifactId>
    <version>1.0.0-SNAPSHOT</version>

    <repositories>
        <repository>
            <id>Apache Spark temp - Release Candidate repo</id>
            <url>https://repository.apache.org/content/repositories/orgapachespark-1080/</url>
        </repository>
    </repositories>
    <dependencies>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>1.3.1</version>
            <!--<scope>provided</scope>--><!-- this part was omitted in our project so the project can be launched directly from Maven -->
        </dependency>

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.11</artifactId>
            <version>1.3.1</version>
            <!--<scope>provided</scope>-->
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <!-- we want JDK 1.8 source and binary compatibility -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.2</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
package spark;

import java.io.Serializable;
import java.time.LocalDateTime;

// Spark requires serializable structures!
public class Rating implements Serializable {

    public int rating;
    public long movie;
    public long user;
    public LocalDateTime timestamp;

    public Rating(long user, long movie, int rating, LocalDateTime timestamp) {
        this.user = user;
        this.movie = movie;
        this.rating = rating;
        this.timestamp = timestamp;
    }
}
package spark;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

import java.net.URISyntaxException;
import java.nio.file.Paths;
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.util.Comparator;
import java.util.Scanner;

/**
 * Computes the mean, min, max and number of ratings of user #200.
 */
public class Workshop1 {

    public void run() throws URISyntaxException {
        SparkConf conf = new SparkConf().setAppName("Workshop").setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);
        String ratingsPath = Paths.get(getClass().getResource("/ratings.txt").getPath()).toString();

        JavaRDD<Rating> ratings = sc.textFile(ratingsPath)
                .map(line -> line.split("\\t"))
                .map(row -> new Rating(
                        Long.parseLong(row[0]),
                        Long.parseLong(row[1]),
                        Integer.parseInt(row[2]),
                        LocalDateTime.ofInstant(Instant.ofEpochSecond(Long.parseLong(row[3])), ZoneId.systemDefault())

                ));

        double mean = ratings
                .filter(rating -> rating.user == 200)
                .mapToDouble(rating -> rating.rating)
                .mean();

        double max = ratings
                .filter(rating -> rating.user == 200)
                .mapToDouble(rating -> rating.rating)
                .max(Comparator.<Double>naturalOrder());

        double min = ratings
                .filter(rating -> rating.user == 200)
                .mapToDouble(rating -> rating.rating)
                .min(Comparator.<Double>naturalOrder());

        double count = ratings
                .filter(rating -> rating.user == 200)
                .count();

        System.out.println("mean: " + mean);
        System.out.println("max: " + max);
        System.out.println("min: " + min);
        System.out.println("count: " + count);
        // Block until input so the Spark web UI stays available for inspection.
        Scanner s = new Scanner(System.in);
        s.hasNextLine();
    }

    public static void main(String... args) throws URISyntaxException {
        new Workshop1().run();
    }
}
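A note on the data format: the fourth column is a Unix timestamp in seconds, so it can be passed to Instant.ofEpochSecond as-is; treating it as milliseconds would shift every rating into January 1970. A quick sanity check in shell (assumes GNU date):

```shell
# 876042340 is the timestamp of one of the sample ratings above;
# read as epoch seconds it lands in late 1997, matching the dataset's era.
date -u -d @876042340 +%Y-%m-%d
# → 1997-10-05
```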

 

 

Posted by : admin | On : 3 May 2019

https://javaetmoi.com/2015/04/initiation-apache-spark-en-java-devoxx/

https://github.com/arey/initiation-spark-java/blob/master/src/main/java/org/devoxx/spark/lab/devoxx2015/FirstRDD.java

Posted by : admin | On : 23 November 2018

Bluetooth install tutorial

https://www.testsavisetcompagnie.fr/blea-sur-raspberry-pi-zero-w/

Posted by : admin | On : 15 November 2018

Installation

Time: 5–10 minutes with a fiber connection.

Download Gladys from the site, along with Etcher.

Select the zip image, then the "drive" (the micro SD card).

Go to http://gladys.local/installation in your browser; Gladys will then install everything necessary for you.

Connect to your Raspberry Pi: download a network scanner and connect to your Wi-Fi.

In my house, for instance, it's 192.168.1.99.

 

ssh pi@192.168.1.99
The authenticity of host '192.168.1.99 (192.168.1.99)' can't be established.
ECDSA key fingerprint is SHA256:xxxxxxxYvQTxxxxxxxxXgibtw.
Are you sure you want to continue connecting (yes/no)? yes

The default password is raspberry.

Linux gladys 4.14.30-v7+ #1102 SMP Mon Mar 26 16:45:49 BST 2018 armv7l

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Sun Apr  8 12:51:13 2018 from 192.168.0.23

SSH is enabled and the default password for the 'pi' user has not been changed.
This is a security risk – please login as the 'pi' user and type 'passwd' to set a new password.

 

A previous install may have left a stale SSH key; remove it with:

ssh-keygen -f "/root/.ssh/known_hosts" -R 192.168.1.99

 

Source with further details on the SSH connection: https://the-raspberry.com/ssh-raspberry-pi

Update your Gladys install

/home/pi/rpi-update.sh

Install 433–868 MHz

If you want a cheap DIY option, take an Arduino Uno and follow this tutorial.

If you want to buy a fully built device, buy an RF Player RF1000 or an RfxCom.

https://www.aeq-web.com/arduino-10mw-cc1101-ism-rf-transceiver/?lang=en

Install Heating

JST ZH 1.5 mm 3-pin female connector

DS18B20 sensor from DALLAS

http://www.touteladomotique.com/index.php?option=com_content&view=article&id=1820:fabriquer-une-sonde-de-temperature-pour-un-micromodule-qubino&catid=82:diy&Itemid=87

 

https://community.gladysproject.com/t/zwave-qubino-fil-pilote-pilotage-xiaomi-sensor/3664/41
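Once the w1-gpio and w1-therm kernel modules are loaded on the Pi, the DS18B20 shows up under /sys/bus/w1/devices. A minimal read sketch; the 28-* sensor id in the usage comment is a made-up example, yours will differ:

```shell
# Parse a DS18B20 reading from its w1_slave sysfs file: the second line
# of the file ends with t=<temperature in milli-degrees Celsius>.
read_ds18b20() {
  awk -F't=' '/t=/ { printf "%.1f\n", $2 / 1000 }' "$1"
}

# Typical call on a Raspberry Pi (the 28-* id is an example, not a real sensor):
# read_ds18b20 /sys/bus/w1/devices/28-000005e2fdc3/w1_slave
```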


Goal: code a Qubino box

 

https://jsfiddle.net/LePetitGeek/803e2ud3/

GUI

Let users easily switch from one mode to another.

https://github.com/GladysProject/Gladys/blob/master/views/boxs/device-room.ejs

  <div class="box-body ng-cloak">

        <div ng-show="vm.selectRoom" class="ng-cloak">
            <p>Choose the room you want to display in this box:</p>
            <div class="row">
                <div class="col-xs-offset-2 col-xs-6">
                    <select ng-model="vm.selectedRoomId" class="form-control">
                        <option ng-repeat="room in vm.rooms" value="{{room.id}}">{{room.name}}</option>
                    </select>
                </div>
                <div class="col-xs-2"> <button class="btn btn-success btn-flat" ng-click="vm.selectRoomId(vm.selectedRoomId);">Save</button>
                </div>
            </div>
        </div>

        <div ng-show="!vm.selectRoom">
            <div class="table-responsive">
                        <table class="table">
                            <tbody>
                                <tr ng-show="type.display" ng-repeat="type in vm.room.deviceTypes" class="ng-cloak">
                                    <td>
                                        <span ng-show="{{type.deviceTypeName != null}}">{{type.deviceTypeName}}</span>
                                        <span ng-show="{{type.deviceTypeName == null}}">{{type.name}} <span ng-show="{{type.type != 'binary' && type.type.length}}"> - {{type.type}}</span></span>
                                    </td>
                                    <td>
                                        <!-- If the deviceType is a sensor, display last data -->
                                        <div ng-show="type.sensor == 1 && type.type != 'binary'">{{type.lastValue }} {{type.unit}}</div>
                                        <div ng-show="type.sensor == 1 && type.type == 'binary'">
                                            <i ng-show="type.lastValue == 1" class="fa fa-circle" aria-hidden="true"></i>
                                            <i ng-show="type.lastValue == 0" class="fa fa-circle-o" aria-hidden="true"></i>
                                        </div>

                                        <!-- If the deviceType is not a sensor and is not a binary, display input field -->
                                        <form class="form-inline" ng-show="!type.sensor && type.type != 'binary'"  >

                                            <slider id="blue" ng-model="type.lastValue" min="type.min" step="1" max="type.max" value="type.lastValue" ng-model-options='{ debounce: 100 }' ng-change="vm.changeValue(type, type.lastValue);" ></slider>

                                        </form>
                                        <!-- If the deviceType is not a sensor and is  a binary, display toogle -->
                                        <div class="toogle" ng-click="vm.changeValue(type, !type.lastValue);">
                                                <input type="checkbox" ng-show="!type.sensor && type.type == 'binary'" ng-model="type.lastValue" ng-true-value="1" ng-false-value="0" class="toogle-checkbox toogle-blue" />
                                                <label class="toogle-label" for="mytoogle" ng-show="!type.sensor && type.type == 'binary'"></label>
                                        </div>
                                    </td>
                                </tr>
                            </tbody>
                        </table>

            </div>
        </div>
    </div>
</div>
At the model level

gladys.utils.sql('SELECT device, type, category, tag, sensor, unit, min, max, lastValue FROM devicetype')

	.then((rows) => {
		console.log(rows);

	})
	.catch((err) => {
		console.log(err);
	});
Response in the pm2 logs:
0|gladys   | [ RowDataPacket {
0|gladys   |     device: 1,
0|gladys   |     type: 'zwave',
0|gladys   |     category: null,
0|gladys   |     tag: null,
0|gladys   |     sensor: 0,
0|gladys   |     unit: null,
0|gladys   |     min: 0,
0|gladys   |     max: 99,
0|gladys   |     lastValue: null } ]

Scripting on Gladys

Architecturally speaking, Gladys is built on a core API and exposes its data model under api/models.

For example, to retrieve events with their associated user, you can do:

gladys.utils.sql(' SELECT datetime,value,user FROM event  ')
	.then((rows) => {
		console.log(rows);

	})
	.catch((err) => {
		console.log(err);
	});

 

Calling modules

It's much the same. For example, take the weather module: reading the doc, the then corresponds to the promise, i.e. the API's return; otherwise it crashes.

var options = {
  latitude: 45,
  longitude: 45
};

gladys.weather.get(options)
        .then((result) =>{
           console.log(result.temperature);
           console.log(result.weather);
           console.log(result.humidity);
        })
        .catch(console.log);

 

https://howtomechatronics.com/tutorials/arduino/arduino-wireless-communication-nrf24l01-tutorial/

Gladys Gateway

The big new feature! A turnkey Gladys gateway, accessible from outside your home.

I noticed there was a main problem with Gladys: it's easy to access your Raspberry Pi installation when you are at home, but when you are outside it's hard to make Gladys publicly accessible without having security issues: bots trying to hack your Raspberry Pi, that kind of creepy stuff.

So I thought about it, and decided to build the Gladys Gateway: the first end-to-end encrypted gateway for home automation. It's a web-based UI accessible at gateway.gladysproject.com that allows you to control your Gladys instance from anywhere in the world, without having to open your local network to the public, so your Raspberry Pi stays safe.

 

 

https://gateway.gladysproject.com/login