Wednesday, June 22, 2016

A possible way to store solar power for when the sun isn't shining.

I first read about Peltier tiles when I read about Ann Makosinski inventing the hollow flashlight, which lights up using electricity generated by Peltier tiles from the temperature difference between body heat (hand heat) and room air: http://www.dailymail.co.uk/news/article-2351791/Ann-Makosinski-Canadian-girl-invents-flashlight-powered-body-heat-earns-spot-Google-Science-Fair-finals.html

And then I read about Vancouver's Austin Wang altering the genes of bacteria to create strains that are 20 times more efficient at breaking down waste while generating electricity...

Then I saw a video on YouTube about solar panels and solar power.

Here's the idea: when the sun is shining, we generate electricity for mass consumption and use the excess to heat something that stores heat well overnight (like water).  Then, when the sun isn't shining, we apply that heated water to one side of a set of Peltier tiles to generate electricity.  I don't know the efficiency of Peltier tiles, but if they can light a few LEDs for a flashlight using only the small difference between body heat (hand heat) and room temperature, the difference between solar-heated water and night-time air should be much greater.
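To put rough numbers on that, here is a minimal back-of-envelope sketch.  The Seebeck coefficient and internal resistance below are values I assumed purely for illustration, not specs of any real Peltier tile.

# rough, illustrative estimate of open-circuit voltage and power from one Peltier tile
# all numbers below are assumptions for illustration only
seebeck_coefficient = 0.05 # volts per kelvin for the whole module (assumed)
internal_resistance = 2.0  # ohms (assumed)

def peltier_power(hot_side_c, cold_side_c):
 delta_t = hot_side_c - cold_side_c
 open_circuit_v = seebeck_coefficient * delta_t
 # maximum power delivered into a matched load is V^2 / (4 * R_internal)
 max_power_w = open_circuit_v ** 2 / (4.0 * internal_resistance)
 return open_circuit_v, max_power_w

# hand heat vs room temperature (the flashlight case), then solar-heated water vs night air
for hot, cold in [(34.0, 24.0), (70.0, 15.0)]:
 v, p = peltier_power(hot, cold)
 print("dT = %2.0f C -> about %.2f V open circuit, %.3f W into a matched load" % (hot - cold, v, p))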

Disclaimer: I am not an inventor.  This is just an idea, and I don't have the equipment or the know-how to build a working model.  The most I have done with solar is building a backyard hot dog cooker shaped like a cone that focuses sunlight onto a rod where the hot dog stood.  I hope someone can take this idea and test it out.

Wednesday, June 8, 2016

Roll of Toilet Paper Anaglyph Red/Cyan 3D sunglasses.

To properly view this, use Red/Cyan 3D glasses.

Friday, June 3, 2016

Neural Networks used to get numbers for lotteries (6/49, LottoMax, Powerball, Mega Millions)

To run these neural networks:
You need Anaconda (Python 2.7) installed.
The libraries needed are neurolab, numpy, and termcolor (the scripts also use urllib2, os, and zipfile, but those come with Python's standard library).  Use pip to install the extra libraries after Anaconda is installed; for example, run "pip install neurolab" from a terminal/command window, swapping in whichever library name you need.
After you have the above installed,
you can download and extract this file: http://bakon.ca/gimplearn/viewtopic.php?f=5&t=283
It'll have 4 folders, one for each lottery type.
While inside one of the lottery folders, you can run the following from a command/terminal window:

"python create.py" to initially create the neural network. This only needs to run once.

"python get_history.py" to grab historical data for that type of lottery from the internet into that working folder. This can be run whenever there's a new draw result for that lottery type.

"python run.py" to continually train the network and simulate to get output of simulation (numbers with probabilities) that you can use to play the lottery with.  This process loops forever so Control-C to break out of it it'll save the network in file called "one.net" if "one.net" is corrupted by accident because of Control-C process, just copy "one.net.bk" over to "one.net".

Good luck.


Like us on our NeuralNetworksLottery Facebook page!

Why a neural network will/won't work for predicting the lottery based on past results

Neural networks are great at learning repeatable trends/patterns when it comes to guessing numbers.

Imagine the lottery machines were controlled by robotic arms, each ball draw was timed exactly the same way every time the machine started up, and the power feeding the machine was perfectly constant, never disturbed by fluctuations in the supply such as someone nearby turning on a stove or oven.  Then, whatever the physical rules are, because everything is kept constant, a neural network could be trained on historical data to predict the next draw from the current one.  The net would learn the physical laws, or come close to them, and significantly improve its chances.

However, this isn't the case in real life.  People operate the machines, maybe even different people each time; they move differently each time, unlike robots; they might place the balls in the machine differently each time.  The power the machine uses isn't constant either: there are fluctuations in the supply, however small, that can change how the lottery balls bounce.

So the hope here is that, even though there are these fluctuations, there is some sort of average: each draw is different, but each one stays close to some average behaviour.  It is only a hope, not something proven.  If there is such an average way of placing the balls and an average way the results come out, then the neural net might slightly improve its chances: as training reduces its error, its prediction moves closer to that average.  So it is not an accurate prediction, but perhaps one with a slightly higher chance of occurring.
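Here is a tiny sketch of that "average" argument, using made-up noisy targets rather than lottery data: a predictor trained to keep reducing its squared error ends up near the average of the noisy outcomes rather than nailing any single one.  (The numbers and learning rate are arbitrary choices for illustration.)

import numpy as np

np.random.seed(0)
true_average = 25.0
noisy_draws = true_average + np.random.uniform(-20, 20, size=1000) # stand-in for noisy ball results

prediction = 0.0 # start from an arbitrary guess
learning_rate = 0.001
for epoch in range(20):
 for draw in noisy_draws:
  # one gradient step on the squared error (prediction - draw)**2
  prediction -= learning_rate * 2.0 * (prediction - draw)

print("learned prediction: %.2f (the true average was %.2f)" % (prediction, true_average))

The learned prediction lands close to 25 rather than on any particular draw, which is about the most a network can hope for when the targets are dominated by noise.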

If you're interested in using neural networks to predict the Canadian LottoMax and 6/49 or the American Powerball and Mega Millions, you can go to the links below:
Mega Millions (neural network)
Powerball (neural network)
Canadian 6/49 and LottoMax (neural network) 

How to run neural networks to guess lottery numbers.

First you have to install Anaconda.
Anaconda lets you run the Python programming language.
You can download and install Anaconda from this page: https://www.continuum.io/downloads
Choose your operating system, then choose Python 2.7.
After installing Anaconda, install the neurolab, numpy, and termcolor libraries.
To install neurolab, numpy, and termcolor, open a command/terminal window, type in each line below, and let it install automatically after each one.
pip install neurolab
pip install numpy
pip install termcolor

After that, you can go to the following pages to get the Python code for the "create.py" file and the "run.py" file, plus the lottery history data, from different pages depending on which type of lottery you want.
Mega Millions (neural network)
Powerball (neural network)
Canadian 6/49 and LottoMax (neural network)

Fancy Neural Network to predict Mega Millions White balls.

(Note: You'll need to install Anaconda Python 2.7, neurolab library, numpy library and termcolor library)

Fancy Neural Network to predict Mega Millions white ball numbers based on historical results.

How to run the network.

First create a folder on your desktop or wherever; let's call this folder MEGA_NN.
Download http://www.txlottery.org/export/sites/lottery/Games/Mega_Millions/Winning_Numbers/download.html
and save it in the MEGA_NN folder as megamillions.csv.
This file has Mega Millions' historical results.

Now create a file in MEGA_NN folder and name it "create.py" and paste this code into it.


import neurolab as nl
# 75 inputs (one per possible white ball number, values 0 to 1), 75 hidden nodes, 75 output nodes
net = nl.net.newff([[0,1]] * 75, [75,75])
net.save("one.net")
 
Now create a file in MEGA_NN folder and name it  "run.py" and paste this code into it

import neurolab as nl
import numpy as np
from termcolor import colored

#read inputs and targets
f = open("megamillions.csv",'r')
input = []
line_count = 0
for line in f.readlines():
 line = line.split(',')
 line_count += 1
 if line_count > 0: #don't skip line 1
  
  lineinput = [0] * 75
  for i in range(4,9):
   lineinput[int(line[i])-1] = 1
  input.append(lineinput)

#set target outputs for training to be from row 1 to end of list, skipping first row 0
target = input[1:len(input)]
#set siminput to be last row of input to use for simulation to predict output
siminput = [input[len(input)-1]]

#set input, make input one row less since the last row wouldn't have target output
input = input[0:len(input)-1]


#make it numpy
target = np.array(target)
input = np.array(input)
siminput = np.array(siminput)


def output(o):
 array = o[0]
 order = array.argsort()
 #ranks = order.argsort()
 
 for i in range(74,-1,-1):
  index = order[i] + 1
  print colored(str(index),"green") + colored("(" + str(int(array[index-1] * 1000)/1000.0) + ")","red"),
  if i % 5 == 0:
   print
 print 
 
net = nl.load("one.net")
print colored("Mega Million","blue") + colored(" number","green") + colored("(probability)","red")
out = net.sim(siminput)
output(out)
while True:
 net = nl.load("one.net")
 print "Training..."
 error = net.train(input,target,epochs=10,show=1)
 
 # with the line below we could tell training to stop once the error drops below a goal,
 # but since we're inside a forever loop, there's no need to specify one
 # error = net.train(input,target,epochs=10,show=1,goal=0.01)
 
 # fail-safe: save an extra backup copy first, so if Ctrl-C lands mid-save we still have a working copy of the latest network
 net.save("one.net.bk")
 net.save("one.net")
 
 print colored("Mega Million","blue") + colored(" number","green") + colored("(probability)","red")
 out = net.sim(siminput)
 
 #output for display purposes only
 output(out)
Now we have all the files we need.
You'll have to update the megamillions.csv file whenever a new Mega Millions result comes out, to keep the information up to date.

Now, first we run from a command/terminal window: python create.py
This will create the initial network (untrained).

Now we run from a command/terminal window: python run.py
and let that run; as it trains more, the error will drop and its prediction becomes more accurate/fancy based on the historical data.

It'll train for 10 epochs, then simulate once and show its prediction, and repeat this process forever.

You can always press Control-C to exit the run process; the network trained up to that point will be saved in one.net (the neural net file).

If you have bad luck and happen to press Control-C while it's saving one.net, you'll end up with a corrupted one.net.  That's okay: if the program complains about not being able to read one.net, just copy one.net.bk over to one.net.

The output lists the numbers from highest probability down to lowest, so even though it shows all 75 numbers, you might only want to play the first 5 (as your chosen Mega Millions white balls).
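If you just want those top picks pulled out automatically instead of reading them off the screen, here is a minimal sketch (the helper name is mine, and it assumes "out" is the simulation result from run.py above):

import numpy as np

def top_picks(out, how_many=5):
 # out[0] holds one output value per ball number; argsort ascending, then take the last few
 scores = np.array(out[0])
 best_indexes = scores.argsort()[-how_many:][::-1]
 return [int(i) + 1 for i in best_indexes] # +1 because ball numbers start at 1

# example with a made-up 75-number output array (in run.py you would pass the real "out")
fake_out = [np.random.rand(75)]
print("top 5 picks: %s" % top_picks(fake_out))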

Fancy Neural Network to predict Powerball white ball numbers.

This post is also posted on GIMP LEARN forum - Anything goes

(Note: You'll need to install Anaconda Python 2.7, neurolab library, numpy library and termcolor library)

Fancy Neural Network to predict Powerball white ball numbers based on historical results.

How to run the network.

First create a folder on your desktop or wherever; let's call this folder POWER_NN.
Download http://www.powerball.com/powerball/winnums-text.txt
and save it in the POWER_NN folder as winnums-text.txt.
This file has Powerball's historical results.

Now create a file in POWER_NN folder and name it "create.py" and paste this code into it.

import neurolab as nl
# 69 inputs (one per possible white ball number, values 0 to 1), 69 hidden nodes, 69 output nodes
net = nl.net.newff([[0,1]] * 69, [69,69])
net.save("one.net")
 
Now create a file in POWER_NN folder and name it  "run.py" and paste this code into it

import neurolab as nl
import numpy as np
from termcolor import colored

#read inputs and targets
f = open("winnums-text.txt",'r')
input = []
line_count = 0
for line in f.readlines():
 line = line.split('  ')
 line_count += 1
 if line_count > 1: #skip line 1
  
  lineinput = [0] * 69
  for i in range(1,6):
   lineinput[int(line[i])-1] = 1
  input.append(lineinput)
input.reverse() #powerball shows results in reverse order from newest to oldest so we will reverse this list

#set target outputs for training to be from row 1 to end of list, skipping first row 0
target = input[1:len(input)]
#set siminput to be last row of input to use for simulation to predict output
siminput = [input[len(input)-1]]

#set input, make input one row less since the last row wouldn't have target output
input = input[0:len(input)-1]


#make it numpy
target = np.array(target)
input = np.array(input)
siminput = np.array(siminput)


def output(o):
 array = o[0]
 order = array.argsort()
 #ranks = order.argsort()
 
 for i in range(68,-1,-1):
  index = order[i] + 1
  print colored(str(index),"green") + colored("(" + str(int(array[index-1] * 1000)/1000.0) + ")","red"),
  if i % 5 == 0:
   print
 print 
 
net = nl.load("one.net")
print colored("Powerball","blue") + colored(" number","green") + colored("(probability)","red")
out = net.sim(siminput)
output(out)
while True:
 net = nl.load("one.net")
 print "Training..."
 error = net.train(input,target,epochs=10,show=1)
 
 # with the line below we could tell training to stop once the error drops below a goal,
 # but since we're inside a forever loop, there's no need to specify one
 # error = net.train(input,target,epochs=10,show=1,goal=0.01)
 
 # fail-safe: save an extra backup copy first, so if Ctrl-C lands mid-save we still have a working copy of the latest network
 net.save("one.net.bk")
 net.save("one.net")
 
 print colored("Powerball","blue") + colored(" number","green") + colored("(probability)","red")
 out = net.sim(siminput)
 
 #output for display purposes only
 output(out)
 

Now we have all the files we need.
You'll have to update the winnums-text.txt file whenever a new Powerball result comes out, to keep the information up to date.

Now, first we run from a command/terminal window: python create.py
This will create the initial network (untrained).

Now we run from a command/terminal window: python run.py
and let that run; as it trains more, the error will drop and its prediction becomes more accurate/fancy based on the historical data.

It'll train for 10 epochs, then simulate once and show its prediction, and repeat this process forever.

You can always press Control-C to exit the run process; the network trained up to that point will be saved in one.net (the neural net file).

If you have bad luck and happen to press Control-C while it's saving one.net, you'll end up with a corrupted one.net.  That's okay: if the program complains about not being able to read one.net, just copy one.net.bk over to one.net.

The output lists the numbers from highest probability down to lowest, so even though it shows all 69 numbers, you might only want to play the first 5 (as your chosen Powerball white balls).


Thursday, June 2, 2016

Fancy Neural Net Quickpicks for LottoMax and Lottery 6/49

I call it "Fancy" because it uses Neural Networks to come up with numbers but I don't think it's any better than normal quickpicks (but you never know right?).

I take 6/49 results from this page: http://lotto.bclc.com/winning-numbers/lotto-649-and-extra.html (at the bottom of the page there's a "Download Lotto 6/49" button)
and LottoMax results from this page: http://lotto.bclc.com/winning-numbers/lotto-max-and-extra.html (at the bottom of the page there's a "Download LOTTO MAX" button).

I create a neural network using Python's neurolab library, with the code below in a file called create.py:
 
import neurolab as nl

net = nl.net.newff([[0,1]] * 49, [49,49])

net.save("one.net")

This would create a network of  49 input nodes accepting values from 0 to 1, 49 hidden nodes, and 49 output nodes.

I then read the 649.csv file (downloaded above) into rows of 49 inputs, all 0 except for the drawn numbers, whose positions are set to 1, to feed into the network.  The target output for training is simply the next draw, so each input row takes the row below it as its target output,
and that gets fed into the created neural net ("one.net") for training.
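For example, a single draw becomes a 49-long row of zeros with ones at the drawn numbers; a minimal sketch of that encoding (with a made-up draw) is:

drawn_numbers = [3, 11, 19, 27, 40, 46] # made-up example draw
row = [0] * 49
for n in drawn_numbers:
 row[n - 1] = 1 # index 0 stands for ball number 1
print(row)
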
I use the below code for "649.csv".

import neurolab as nl
import numpy as np


#read inputs and targets
f = open("649.csv",'r')
input = []
line_count = 0
for line in f.readlines():
 line = line.split(',')
 line_count += 1
 if line_count > 1: #skip line 1
  
  lineinput = [0] * 49
  for i in range(4,10):
   lineinput[int(line[i])-1] = 1
  input.append(lineinput)

#set target outputs for training to be from row 1 to end of list, skipping first row 0
target = input[1:len(input)]
#set siminput to be last row of input to use for simulation to predict output
siminput = [input[len(input)-1]]

#set input, make input one row less since the last row wouldn't have target output
input = input[0:len(input)-1]


#make it numpy
target = np.array(target)
input = np.array(input)
siminput = np.array(siminput)


def output(o):
 array = o[0]
 order = array.argsort()
 #ranks = order.argsort()
 
 for i in range(48,-1,-1):
  index = order[i] + 1
  print str(index) + "(" + str(int(array[index-1] * 1000)/1000.0) + ")",
  if i % 5 == 0:
   print
 print 
 
net = nl.load("one.net")
print "Lotto 649 number(probability)"
out = net.sim(siminput)
output(out)
while True:
 net = nl.load("one.net")
 print "Training..."
 error = net.train(input,target,epochs=10,show=1)
 
 # with the line below we could tell training to stop once the error drops below a goal,
 # but since we're inside a forever loop, there's no need to specify one
 # error = net.train(input,target,epochs=10,show=1,goal=0.01)
 
 # fail-safe: save an extra backup copy first, so if Ctrl-C lands mid-save we still have a working copy of the latest network
 net.save("one.net.bk")
 net.save("one.net")
 
 print "Lotto 649 number(probability)"
 out = net.sim(siminput)
 
 #output for display purposes only
 output(out)
 

And I save the code above as run.py.
Then, to run it, I run "python create.py" from the command line; after that finishes, I run "python run.py" and let it train as long as I like.  Each time it trains, the error is reduced, and after every 10 epochs it simulates with the updated trained network, using the last row of the csv file as input, and that gives the output for our fancy quickpick numbers.

Similarly, I use the below code for LOTTOMAX.csv.


import neurolab as nl
import numpy as np


#read inputs and targets
f = open("LOTTOMAX.csv",'r')
input = []
line_count = 0
for line in f.readlines():
 line = line.split(',')
 line_count += 1
 if line_count > 1: #skip line 1
  
  lineinput = [0] * 49
  for i in range(4,11):
   lineinput[int(line[i])-1] = 1
  input.append(lineinput)

#set target outputs for training to be from row 1 to end of list, skipping first row 0
target = input[1:len(input)]
#set siminput to be last row of input to use for simulation to predict output
siminput = [input[len(input)-1]]

#set input, make input one row less since the last row wouldn't have target output
input = input[0:len(input)-1]


#make it numpy
target = np.array(target)
input = np.array(input)
siminput = np.array(siminput)


def output(o):
 array = o[0]
 order = array.argsort()
 #ranks = order.argsort()
 
 for i in range(48,-1,-1):
  index = order[i] + 1
  print str(index) + "(" + str(int(array[index-1] * 1000)/1000.0) + ")",
  if i % 5 == 0:
   print
 print 
 
net = nl.load("one.net")
print "Lotto MAX number(probability)"
out = net.sim(siminput)
output(out)
while True:
 net = nl.load("one.net")
 print "Training..."
 error = net.train(input,target,epochs=10,show=1)
 
 # with the line below we could tell training to stop once the error drops below a goal,
 # but since we're inside a forever loop, there's no need to specify one
 # error = net.train(input,target,epochs=10,show=1,goal=0.01)
 
 # fail-safe: save an extra backup copy first, so if Ctrl-C lands mid-save we still have a working copy of the latest network
 net.save("one.net.bk")
 net.save("one.net")
 
 print "Lotto MAX number(probability)"
 out = net.sim(siminput)
 
 #output for display purposes only
 output(out)
 


And that's it.  You can copy the code above into the same folders as the downloadable .csv files from the bclc.com site and run these neural networks yourself, or change the structure of the neural network configuration in create.py.
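If you do want to experiment with the structure, a hypothetical variation of create.py might look like the sketch below; the layer sizes are arbitrary choices of mine, and the only thing that has to stay fixed is the 49 outputs (one per possible number):

import neurolab as nl
# hypothetical alternative structure: 49 inputs, two hidden layers (98 and 49 nodes), 49 outputs
net = nl.net.newff([[0, 1]] * 49, [98, 49, 49])
net.save("one.net")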

The information above was mainly intended for me, so that if I ever factory reset my laptop I'd still have what I need here to run my fancy neural net quickpicks for Lottery 6/49 and LottoMax.

HOW TO RUN this neural net.
Install Anaconda by getting it from here: https://www.continuum.io/downloads
Choose Python 2.7 (at least, that's what I use).
Once installed, from a terminal/command window run "pip install neurolab".
Also from a terminal/command window run "pip install numpy".
Copy the code from above into the respective filenames (create.py and run.py).  The create.py and run.py that read from 649.csv should go in the same folder as the 649.csv file,
and the create.py and run.py that read from LOTTOMAX.csv should go in the same folder as the LOTTOMAX.csv file.
I would just create 2 separate folders (one for the 649 files and one for the LOTTOMAX files).
Download the 649.csv and LOTTOMAX.csv files from bclc.com and put them in the created folders.
While inside the 649 folder, for example, run "python create.py" then "python run.py".
Similarly for LOTTOMAX: while inside that folder, run "python create.py" then "python run.py".