From Pavlov Dogs to Deep Learning:

Evolution of AI and the AI scientist

Dr. Dmitry O. Gorodnichy, 20 June 2018

www.IVIM.ca/dg/ai (https://bookdown.org/gorodnichy/ai1)

Disclaimer

Official CBSA video on Improving Technology for the border:
https://www.cbsa-asfc.gc.ca/multimedia/it-pt/menu-eng.html

This presentation is not associated with CBSA.
All references to CBSA are obtained from the public domain.

Outline

Defining AI

What’s Common?

USSR: Cybernetics Center in Kiev
Germany: Institut für Neuroinformatik in Bochum
USA: Brain Team in Google

It’s all about the same!

My former colleagues: Ivakhnenko (whose GMDH method is now an R package: library(GMDH)), Kussul, Reznik

What I really want to know

I want to be able to predict the future.

1980: Observation
1999: Hypothesis
2018: Validation
2037: Application?

I want to be able to predict the future.

1980: Scientific calculators made for everyone. No code sharing.
1999: Larger population of programmers. Most AI coding done in-house; C++ the language of choice.
2018: Deep Learning is everyone’s “scientific calculator”. AI code is reusable, modular, scalable, shareable; R / RStudio the IDE of choice.
2037: AI everywhere. Full automation: AI helping build AI.

“But something in our minds will always stay”

Our Objective:

To build highly intelligent systems

Many religions, one God.
Many notations, one AI.
“One Vision”

This is beyond Nouse (Intelligent Vision Interface)

Discovery Channel showing NRC M-50: https://youtu.be/L0EYYIuYp4M
Credits: NRC-CNRC, 2003


This is about all Recognition systems

… which we built in the past, and

… which we can build now and in the future

Presentation to Director of National Intelligence / Intelligence Advanced Research Projects Activity (DNI / IARPA), Washington DC, November 2006

URL: http://videorecognition.com/doc/presentations/2006-10-VRS-4VACE-briefed.pdf

Video Recognition Systems of NRC  

Project Leader:  Dr. Dmitry Gorodnichy

DNI IARPA VACE, Phase III Meeting 
Washington DC, 1 November 2006

How did we do it then?

Presentation at the Canadian Conference on Artificial Intelligence (AI 2005), Victoria, May 2005

URL: http://videorecognition.com/doc/presentations/2005-AI-PavlovDogs.pdf, NRC 48209

Projection learning vs correlation learning: 
from Pavlov dogs to face recognition.  

Project Leader: D.O. Gorodnichy 

AI'05 Workshop on Correlation learning, 
Victoria, BC, Canada, 
May 8-12, 2005. 
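
The contrast in that talk fits in a few lines of algebra: correlation (Hebbian) learning sums outer products of the stored patterns, while projection learning uses the pseudo-inverse to build the orthogonal projector onto their span. A minimal R sketch, with random ±1 patterns as stand-in data (the MASS package provides ginv):

library(MASS)   # for ginv(), the Moore-Penrose pseudo-inverse

# Store m binary patterns of size N in an associative memory
set.seed(1)
N <- 64; m <- 10
V <- matrix(sample(c(-1, 1), N * m, replace = TRUE), nrow = N)  # patterns as columns

W_corr <- V %*% t(V) / N   # correlation (Hebbian) learning: sum of outer products
W_proj <- V %*% ginv(V)    # projection learning: projector onto span of the patterns

# The projection matrix recalls every stored pattern exactly:
max(abs(W_proj %*% V - V)) # ~ 0, up to numerical precision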

From Past to Present to Future

Different terminology, same idea.

Now with a much more efficient programming workflow and support!

How do we do it now?

Intro: TensorFlow with Keras in RStudio
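
Setup first (a minimal sketch: install_keras() is the keras package’s helper that installs the TensorFlow backend):

install.packages("keras")   # R interface to Keras
library(keras)
install_keras()             # one-time installation of the TensorFlow backend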

Example 1: FRiV redone with Keras

# Assumed inputs (placeholder values, e.g., 28x28 grayscale image patches)
input_shape <- c(28, 28, 1)
batch_size  <- 128
epochs      <- 12

# Define model
model <- keras_model_sequential() %>%
  layer_conv_2d(filters = 32, kernel_size = c(3,3), activation = 'relu',
                input_shape = input_shape) %>%
  layer_conv_2d(filters = 64, kernel_size = c(3,3), activation = 'relu') %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>% 
  layer_dropout(rate = 0.25) %>%
  layer_flatten() %>%
  layer_dense(units = 128, activation = 'relu') %>%
  layer_dropout(rate = 0.5) %>%
  layer_dense(units = 10, activation = 'softmax')
  
# Compile model
model %>% compile(
  loss = loss_categorical_crossentropy,
  optimizer = optimizer_adadelta(),
  metrics = c('accuracy')
)

# Train model
model %>% fit(
  x_train, y_train, validation_split = 0.2,
  batch_size = batch_size, epochs = epochs 
)
...
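
After training, the model can be scored and queried. A minimal sketch, assuming held-out x_test / y_test were prepared the same way as x_train / y_train:

# Evaluate on held-out data, then predict class labels
model %>% evaluate(x_test, y_test)
probs <- model %>% predict(x_test)        # matrix of per-class probabilities
preds <- apply(probs, 1, which.max) - 1   # predicted labels, 0-based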

Example 2: Predicting the future (sequential data)

Data from Open Government -> Historical Border Wait Times:

library(data.table); library(ggplot2); library(lubridate)
library(magrittr);   library(stringr)
dt <- fread("http://cbsa-asfc.gc.ca/data/bwt-taf-2016-07-01--2016-09-30-en.csv")

Cleaning and selecting data:

dt <- dt[str_detect(Location, "BC")] # subset: BWT in British Columbia
dt[, Updated := as.POSIXct(Updated, format = "%Y-%m-%d %H:%M ")]  # parse timestamps
dt$BWT <- dt$`Travellers Flow` %>%        # convert flow text to minutes of delay
  str_replace("No delay", "0") %>%
  str_replace("Closed", "0") %>%
  str_replace("Not applicable", "0") %>%
  str_replace("Missed entry", "NA") %>%
  as.numeric()                            # "NA" strings become real NAs

dt <- dt[, .(Updated, BWT, Location)]
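
A quick sanity check on the cleaned table:

head(dt)          # Updated (POSIXct), BWT (delay in minutes), Location
summary(dt$BWT)   # NAs correspond to "Missed entry" records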

Example 2: Visualized

theme_set(theme_minimal())
ggplot(dt[Updated < dmy("01/09/2016")], aes(Updated, BWT, col = Location)) +
  geom_step() + facet_grid(Location ~ .)

Knowing past data, can we use “Deep Learning” to predict future data?

Your answer?
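
The GRU below consumes fixed-length windows of the series. One way to produce them is the sliding-window generator pattern from Chollet & Allaire’s “Deep Learning with R”; the sketch below is one possible definition of the train_gen, val_gen, and val_steps used next. The window length, batch size, 80/20 split, and zero-imputation of NAs are all assumptions:

# For simplicity, model a single crossing: keep the first BC location only
d1 <- dt[Location == Location[1]][order(Updated)]
data <- as.matrix(d1[, .(BWT)])
data[is.na(data)] <- 0        # crude imputation of "Missed entry" gaps (assumption)

lookback <- 48                # each sample sees the previous 48 readings (assumption)
batch_size <- 32

# Generator returning batches of (samples, targets):
# samples: batch_size x lookback x 1; targets: the reading right after each window
generator <- function(data, lookback, min_index, max_index, batch_size) {
  i <- min_index + lookback
  function() {
    if (i + batch_size - 1 > max_index) i <<- min_index + lookback  # wrap around
    rows <- i:(i + batch_size - 1)
    i <<- i + batch_size
    samples <- array(0, dim = c(length(rows), lookback, ncol(data)))
    for (j in seq_along(rows))
      samples[j, , ] <- data[(rows[j] - lookback):(rows[j] - 1), ]
    list(samples, data[rows, 1])
  }
}

n <- nrow(data)
split <- round(0.8 * n)       # 80/20 train/validation split (assumption)
train_gen <- generator(data, lookback, 1, split, batch_size)
val_gen   <- generator(data, lookback, split + 1, n, batch_size)
val_steps <- (n - split - lookback) %/% batch_size
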
model2 <- keras_model_sequential() %>%
  layer_gru(units = 32, dropout = 0.2, recurrent_dropout = 0.2,
            input_shape = list(NULL, ncol(data))) %>%  # windows of any length, one feature
  layer_dense(units = 1)

model2 %>% compile(
  optimizer = optimizer_rmsprop(),  loss = "mae"
)

model2 %>% fit_generator(
  train_gen,  steps_per_epoch = 500,  epochs = 40,
  validation_data = val_gen,  validation_steps = val_steps
)
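
Once trained, a one-step-ahead forecast from the most recent window (a sketch, reusing the data matrix and lookback assumed above):

# Predict the next border wait time from the latest lookback readings
last_window <- array(tail(data, lookback), dim = c(1, lookback, ncol(data)))
model2 %>% predict(last_window)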

Conclusions

Looking into the Future

What’s common ?

It will be possible

… to automatically deduce that

two books in different languages (e.g., “Book of Joy” by Dalai Lama and Desmond Tutu, written in 2015, and “Deux Vies” by Concordia Antarova, written in Russian in 1955) are about the same…

… through intelligent NN-based text analysis (e.g., RNN, LSTM, PINN).

THANK YOU !

This presentation available online: www.IVIM.ca/dg/ai

The RMarkdown / R source of this document: https://raw.githubusercontent.com/gorodnichy/LA-Rmd-Shiny/master/D/pres_AI-NRC-2018.Rmd

Related:

Contact: dg@ivim.ca (dmitry@gorodnichy.ca)