Posted by: admin | On: 23 November 2018

Bluetooth installation tutorial:

https://www.testsavisetcompagnie.fr/blea-sur-raspberry-pi-zero-w/

Posted by: admin | On: 15 November 2018

Installation

 

Time required: 5-10 minutes with a fibre connection.

Download the Gladys image from the site, along with Etcher.

In Etcher, select the zip image, then select the micro SD card as the target "drive".

Then open http://gladys.local/installation in your browser; Gladys will install everything it needs for you.

 

Connect to your Raspberry Pi: download a network scanner, connect to your Wi-Fi and find the Pi's IP address.

In my house, for instance, it is 192.168.1.99.
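
If you prefer not to install a network scanner, the minimal Python sketch below resolves the Pi's address from your computer, assuming mDNS (.local) resolution works on your network; the hostname gladys.local is an assumption based on the installation URL above. Otherwise fall back to a scanner or your router's DHCP table.

# find_pi.py - minimal sketch: resolve the Raspberry Pi's IP address
# Assumes mDNS resolution is available and the Pi answers to "gladys.local".
import socket

try:
    ip = socket.gethostbyname("gladys.local")
    print("Raspberry Pi found at", ip)
except socket.gaierror:
    print("gladys.local not resolvable; use a network scanner or check your router's DHCP leases")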

 

ssh pi@192.168.1.99
The authenticity of host '192.168.1.99 (192.168.1.99)' can't be established.
ECDSA key fingerprint is SHA256:xxxxxxxYvQTxxxxxxxxXgibtw.
Are you sure you want to continue connecting (yes/no)? yes

Default password: raspberry

Linux gladys 4.14.30-v7+ #1102 SMP Mon Mar 26 16:45:49 BST 2018 armv7l

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Sun Apr  8 12:51:13 2018 from 192.168.0.23

SSH is enabled and the default password for the 'pi' user has not been changed.
This is a security risk - please login as the 'pi' user and type 'passwd' to set a new password.

 

If a previous installation left a stale SSH host key, remove it:

ssh-keygen -f "/root/.ssh/known_hosts" -R 192.168.1.99

 

Source for further details on the SSH connection: https://the-raspberry.com/ssh-raspberry-pi

Update your Gladys install

/home/pi/rpi-update.sh

Install 433-868 MHz

If you want to DIY on the cheap, take an Arduino Uno and follow this tutorial.

If you prefer a ready-made device, buy an RFPlayer RF1000 or an RFXCOM.

https://www.aeq-web.com/arduino-10mw-cc1101-ism-rf-transceiver/?lang=en
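
Another cheap option for experimenting before wiring an Arduino or buying an RFPlayer is a plain 433 MHz receiver module on a Raspberry Pi GPIO pin, read from Python. The sketch below is a minimal example assuming the rpi-rf package is installed and the receiver's data pin is on GPIO 27 (adapt to your wiring); it only handles simple OOK remote codes, not 868 MHz protocols.

# rx_433.py - minimal sketch: listen for 433 MHz codes with the rpi-rf package
# Assumes a basic 433 MHz receiver module with its data pin on GPIO 27.
import time
from rpi_rf import RFDevice

rfdevice = RFDevice(27)
rfdevice.enable_rx()
last_timestamp = None

try:
    while True:
        if rfdevice.rx_code_timestamp != last_timestamp:
            last_timestamp = rfdevice.rx_code_timestamp
            print("received code", rfdevice.rx_code,
                  "pulselength", rfdevice.rx_pulselength,
                  "protocol", rfdevice.rx_proto)
        time.sleep(0.01)
finally:
    rfdevice.cleanup()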

Install Heating

JST ZH 1.5 mm 3-pin female connector

DS18B20 temperature sensor from Dallas Semiconductor

http://www.touteladomotique.com/index.php?option=com_content&view=article&id=1820:fabriquer-une-sonde-de-temperature-pour-un-micromodule-qubino&catid=82:diy&Itemid=87

 

https://community.gladysproject.com/t/zwave-qubino-fil-pilote-pilotage-xiaomi-sensor/3664/41
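
As a side note, the DS18B20 can also be wired directly to the Raspberry Pi and read over 1-Wire, which is handy for testing the probe before connecting it to the Qubino module. A minimal sketch, assuming the 1-Wire interface is enabled (dtoverlay=w1-gpio in /boot/config.txt) and the w1-gpio / w1-therm kernel modules are loaded:

# ds18b20.py - minimal sketch: read a DS18B20 over 1-Wire on the Raspberry Pi
# Assumes the 1-Wire overlay is enabled and the default kernel drivers are loaded.
import glob

def read_temperature():
    # each DS18B20 shows up as a 28-* directory under /sys/bus/w1/devices
    device_files = glob.glob("/sys/bus/w1/devices/28-*/w1_slave")
    if not device_files:
        raise RuntimeError("no DS18B20 found, check wiring and 1-Wire configuration")
    with open(device_files[0]) as f:
        lines = f.readlines()
    if "YES" not in lines[0]:
        raise RuntimeError("CRC check failed, retry")
    # the second line ends with t=<temperature in millidegrees Celsius>
    return int(lines[1].split("t=")[1]) / 1000.0

print("Temperature:", read_temperature(), "°C")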


Objective: code a Qubino box

 

https://jsfiddle.net/LePetitGeek/803e2ud3/

GUI

Let users switch easily from one mode to another.

https://github.com/GladysProject/Gladys/blob/master/views/boxs/device-room.ejs

  <div class="box-body ng-cloak">

        <div ng-show="vm.selectRoom" class="ng-cloak">
            <p>Choose the room you want to display in this box:</p>
            <div class="row">
                <div class="col-xs-offset-2 col-xs-6">
                    <select ng-model="vm.selectedRoomId" class="form-control">
                        <option ng-repeat="room in vm.rooms" value="{{room.id}}">{{room.name}}</option>
                    </select>
                </div>
                <div class="col-xs-2"> <button class="btn btn-success btn-flat" ng-click="vm.selectRoomId(vm.selectedRoomId);">Save</button>
                </div>
            </div>
        </div>

        <div ng-show="!vm.selectRoom">
            <div class="table-responsive">
                        <table class="table">
                            <tbody>
                                <tr ng-show="type.display" ng-repeat="type in vm.room.deviceTypes" class="ng-cloak">
                                    <td>
                                        <span ng-show="{{type.deviceTypeName != null}}">{{type.deviceTypeName}}</span>
                                        <span ng-show="{{type.deviceTypeName == null}}">{{type.name}} <span ng-show="{{type.type != 'binary' && type.type.length}}"> - {{type.type}}</span></span>
                                    </td>
                                    <td>
                                        <!-- If the deviceType is a sensor, display last data -->
                                        <div ng-show="type.sensor == 1 && type.type != 'binary'">{{type.lastValue }} {{type.unit}}</div>
                                        <div ng-show="type.sensor == 1 && type.type == 'binary'">
                                            <i ng-show="type.lastValue == 1" class="fa fa-circle" aria-hidden="true"></i>
                                            <i ng-show="type.lastValue == 0" class="fa fa-circle-o" aria-hidden="true"></i>
                                        </div>

                                        <!-- If the deviceType is not a sensor and is not a binary, display input field -->
                                        <form class="form-inline" ng-show="!type.sensor && type.type != 'binary'"  >

                                            <slider id="blue" ng-model="type.lastValue" min="type.min" step="1" max="type.max" value="type.lastValue" ng-model-options='{ debounce: 100 }' ng-change="vm.changeValue(type, type.lastValue);" ></slider>

                                        </form>
                                        <!-- If the deviceType is not a sensor and is  a binary, display toogle -->
                                        <div class="toogle" ng-click="vm.changeValue(type, !type.lastValue);">
                                                <input type="checkbox" ng-show="!type.sensor && type.type == 'binary'" ng-model="type.lastValue" ng-true-value="1" ng-false-value="0" class="toogle-checkbox toogle-blue" />
                                                <label class="toogle-label" for="mytoogle" ng-show="!type.sensor && type.type == 'binary'"></label>
                                        </div>
                                    </td>
                                </tr>
                            </tbody>
                        </table>

            </div>
        </div>
    </div>
</div>
At the model level

gladys.utils.sql('SELECT device, type, category, tag, sensor, unit, min, max, lastValue FROM devicetype')

	.then((rows) => {
		console.log(rows);

	})
	.catch((err) => {
		console.log(err);
	});
Response in the pm2 logs:
0|gladys   | [ RowDataPacket {
0|gladys   |     device: 1,
0|gladys   |     type: 'zwave',
0|gladys   |     category: null,
0|gladys   |     tag: null,
0|gladys   |     sensor: 0,
0|gladys   |     unit: null,
0|gladys   |     min: 0,
0|gladys   |     max: 99,
0|gladys   |     lastValue: null } ]

Scripting on Gladys

Architecturally speaking, Gladys is built on a core API (api/core) and exposes its data model under api/models.

For example, to retrieve events with the associated user, you can do:

gladys.utils.sql(' SELECT datetime,value,user FROM event  ')
	.then((rows) => {
		console.log(rows);

	})
	.catch((err) => {
		console.log(err);
	});

 

Calling a module

It works much the same way. Take the weather module, for example: reading the docs, the then callback corresponds to the promise, i.e. the value returned by the API; if you skip it, the call fails.

var options = {
  latitude: 45,
  longitude: 45
};

gladys.weather.get(options)
        .then((result) =>{
           console.log(result.temperature);
           console.log(result.weather);
           console.log(result.humidity);
        })
        .catch(console.log);

 

https://howtomechatronics.com/tutorials/arduino/arduino-wireless-communication-nrf24l01-tutorial/

Gladys Gateway

The big news! A turnkey gateway to reach Gladys from outside your home.

I noticed there was a main problem with Gladys: it's easy to access your Raspberry Pi installation when you are at home, but when you are outside it's hard to make Gladys publicly accessible without security issues: bots trying to hack your Raspberry Pi, that kind of creepy stuff.

So I thought about it, and decided to build the Gladys Gateway: the first end-to-end encrypted gateway for home automation. It's a web-based UI accessible at gateway.gladysproject.com that allows you to control your Gladys instance from anywhere in the world, without having to open your local network to the public, so your Raspberry Pi stays safe.

 

 

https://gateway.gladysproject.com/login

Posted by: admin | On: 12 April 2018

 

https://www.youtube.com/watch?v=vq2nnJ4g6N0&t=6787s

https://www.youtube.com/watch?v=u4alGiomYP4

https://www.youtube.com/watch?v=fTUwdXUFfI8

https://github.com/random-forests/tutorials/blob/master/ep7.ipynb

Dataset:

http://archive.ics.uci.edu/ml/machine-learning-databases/undocumented/connectionist-bench/sonar/sonar.all-data

https://github.com/selva86/datasets/blob/master/Sonar.csv
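
Before feeding it to the network below, it can help to sanity-check the sonar data. A quick look, assuming you downloaded sonar.all-data from the UCI link above (208 rows, 60 numeric features plus an R/M label column, no header row):

# sonar_peek.py - minimal sketch: sanity-check the sonar dataset
# Assumes sonar.all-data was downloaded from the UCI link above (no header row).
import pandas as pd

df = pd.read_csv("sonar.all-data", header=None)
print(df.shape)               # expected: (208, 61)
print(df[60].value_counts())  # expected: M (mines) and R (rocks) labels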

 

 

support material

https://www.edureka.co/blog/perceptron-learning-algorithm/

https://github.com/tensorflow/tensorflow

https://www.edureka.co/blog/deep-learning-tutorial?utm_source=youtube&utm_campaign=deep-learning-180717-wr&utm_medium=description

Sample: sonar and mines


* data

https://www.kaggle.com/mattcarter865/mines-vs-rocks/data


 

Source: YouTube link

https://www.youtube.com/watch?v=yX8KuPZCAMo

 

example 1

import tensorflow as tf

node1 = tf.constant(3.0, tf.float32)
node2 = tf.constant(4.0, tf.float32)
#print(node1, node2)

sess = tf.Session()
#print(sess.run([node1, node2]))
#sess.close()

with tf.Session() as sess:
    # write the graph to disk so it can be inspected in TensorBoard
    File_Writer = tf.summary.FileWriter('C:\\Users\\aartaud\\pycharmproject\\Tensorflow\\graph', sess.graph)
    output = sess.run([node1, node2])
    print(output)

 

cd C:\Users\aartaud\pycharmproject\

tensorboard --logdir="Tensorflow"

Then open localhost:6006 in your browser.

(This only shows the computation graph; no result is computed. It is just for understanding the computation.)

 

example 2

import tensorflow as tf

a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)

adder_node = a + b

sess = tf.Session()

print(sess.run(adder_node, {a: [1, 3], b: [2, 4]}))

example 3

import tensorflow as tf

# Model parameters
W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)

# Inputs and outputs
x = tf.placeholder(tf.float32)

linear_model = W * x + b
y = tf.placeholder(tf.float32)

# Loss function
squared_delta = tf.square(linear_model - y)
loss = tf.reduce_sum(squared_delta)

# Optimizer
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)

init = tf.global_variables_initializer()

sess = tf.Session()
sess.run(init)

for i in range(1000):
    sess.run(train, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]})

# After 1000 steps, W converges to about -1 and b to about 1
# (the data is fitted exactly by y = -x + 1)
print(sess.run([W, b]))
#print(sess.run(loss, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]}))
# build the model, calculate the loss and train the model

example 4

 

 

import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split

# Reading the dataset
def read_dataset():
    df = pd.read_csv("C:\\Users\\aartaud\\PycharmProjects\\Tensorflow\\sonar.csv")
    #print(len(df.columns))
    X = df[df.columns[0:60]].values
    y = df[df.columns[60]]

    # Encode the dependent variable
    encoder = LabelEncoder()
    encoder.fit(y)
    y = encoder.transform(y)
    Y = one_hot_encode(y)
    print(X.shape)
    return (X, Y)

# Define the encoder function
def one_hot_encode(labels):
    n_labels = len(labels)
    n_unique_labels = len(np.unique(labels))
    one_hot_encode = np.zeros((n_labels, n_unique_labels))
    one_hot_encode[np.arange(n_labels), labels] = 1
    return one_hot_encode

# Read the dataset
X, Y = read_dataset()

# Shuffle the dataset to mix up the rows
X, Y = shuffle(X, Y, random_state=1)

# Split the dataset into train and test parts
train_x, test_x, train_y, test_y = train_test_split(X, Y, test_size=0.20, random_state=415)

# Inspect the shape of the training and testing sets
print(train_x.shape)
print(train_y.shape)
print(test_x.shape)

# Define the important parameters and variables to work with the tensors
learning_rate = 0.3
training_epochs = 1000  # number of iterations to minimize the error
cost_history = np.empty(shape=[1], dtype=float)
n_dim = X.shape[1]

print("n_dim", n_dim)
n_class = 2  # mine and rock, so it's 2
model_path = "C:\\Users\\aartaud\\PycharmProjects\\Tensorflow\\NMI"

# Define the number of hidden layers and the number of neurons for each layer
n_hidden_1 = 60
n_hidden_2 = 60
n_hidden_3 = 60
n_hidden_4 = 60

x = tf.placeholder(tf.float32, [None, n_dim])
W = tf.Variable(tf.zeros([n_dim, n_class]))
b = tf.Variable(tf.zeros([n_class]))
y_ = tf.placeholder(tf.float32, [None, n_class])

# Define the model
def multilayer_perceptron(x, weights, biases):
    # Hidden layer with sigmoid activation
    layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
    layer_1 = tf.nn.sigmoid(layer_1)

    # Hidden layer with sigmoid activation
    layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
    layer_2 = tf.nn.sigmoid(layer_2)

    # Hidden layer with sigmoid activation
    layer_3 = tf.add(tf.matmul(layer_2, weights['h3']), biases['b3'])
    layer_3 = tf.nn.sigmoid(layer_3)

    # Hidden layer with RELU activation
    layer_4 = tf.add(tf.matmul(layer_3, weights['h4']), biases['b4'])
    layer_4 = tf.nn.relu(layer_4)

    # Output layer with linear activation
    out_layer = tf.matmul(layer_4, weights['out']) + biases['out']
    return out_layer

# Define the weights and biases for each layer
weights = {
    'h1': tf.Variable(tf.truncated_normal([n_dim, n_hidden_1])),
    'h2': tf.Variable(tf.truncated_normal([n_hidden_1, n_hidden_2])),
    'h3': tf.Variable(tf.truncated_normal([n_hidden_2, n_hidden_3])),
    'h4': tf.Variable(tf.truncated_normal([n_hidden_3, n_hidden_4])),
    'out': tf.Variable(tf.truncated_normal([n_hidden_4, n_class]))
}

biases = {
    'b1': tf.Variable(tf.truncated_normal([n_hidden_1])),
    'b2': tf.Variable(tf.truncated_normal([n_hidden_2])),
    'b3': tf.Variable(tf.truncated_normal([n_hidden_3])),
    'b4': tf.Variable(tf.truncated_normal([n_hidden_4])),
    'out': tf.Variable(tf.truncated_normal([n_class]))
}

# Initialize all variables
init = tf.global_variables_initializer()

saver = tf.train.Saver()

# Call the model defined above
y = multilayer_perceptron(x, weights, biases)

# Define the cost function and the optimizer
cost_function = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y, labels=y_))
training_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost_function)

sess = tf.Session()
sess.run(init)

# Calculate the cost and the accuracy for each epoch
mse_history = []
accuracy_history = []

for epoch in range(training_epochs):
    sess.run(training_step, feed_dict={x: train_x, y_: train_y})
    cost = sess.run(cost_function, feed_dict={x: train_x, y_: train_y})
    cost_history = np.append(cost_history, cost)
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    # difference between the correct output and the model's output
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    #print("Accuracy:", sess.run(accuracy, feed_dict={x: test_x, y_: test_y}))

    pred_y = sess.run(y, feed_dict={x: test_x})
    mse = tf.reduce_mean(tf.square(pred_y - test_y))
    mse_ = sess.run(mse)
    mse_history.append(mse_)
    accuracy = sess.run(accuracy, feed_dict={x: train_x, y_: train_y})
    accuracy_history.append(accuracy)

    print('epoch:', epoch, '-', 'cost:', cost, '- MSE:', mse_, '- Train Accuracy:', accuracy)

save_path = saver.save(sess, model_path)
print("Model saved in file: %s" % save_path)

# Plot the MSE and accuracy graphs
plt.plot(mse_history, 'r')
plt.show()
plt.plot(accuracy_history)
plt.show()

# Print the final accuracy
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print("Test accuracy:", sess.run(accuracy, feed_dict={x: test_x, y_: test_y}))

# Print the final mean squared error
pred_y = sess.run(y, feed_dict={x: test_x})
mse = tf.reduce_mean(tf.square(pred_y - test_y))
print("MSE: %.4f" % sess.run(mse))

 

 

Restore the model

import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split
import random 

### READ CSV function: same read_dataset() and one_hot_encode() as above,
### except that this version also returns the original integer labels y1 ###

X, Y, y1 = read_dataset()
model_path = "C:\\Users\\aartaud\\PycharmProjects\\Tensorflow\\NMI"
learning_rate = 0.3
training_epochs = 1000
cost_history = np.empty(shape=[1], dtype=float)
n_dim = 60
n_class = 2

# (re-declare x, y_, weights, biases and y = multilayer_perceptron(x, weights, biases)
#  exactly as in the training script, so the graph matches the saved checkpoint)

init = tf.global_variables_initializer()
saver = tf.train.Saver()
sess = tf.Session()
sess.run(init)
saver.restore(sess, model_path)  # HERE IT IS!

prediction = tf.argmax(y, 1)
correct_prediction = tf.equal(prediction, tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

print('*************************************')
print("0 stands for M, i.e. mine, and 1 stands for R, i.e. rock")
print('*************************************')
for i in range(93, 101):
    prediction_run = sess.run(prediction, feed_dict={x: X[i].reshape(1, 60)})
    accuracy_run = sess.run(accuracy, feed_dict={x: X[i].reshape(1, 60), y_: Y[i].reshape(1, 2)})
    #print(accuracy_run)
    print("Original class:", y1[i], "Predicted value:", prediction_run)