Dr. Dmitry O. Gorodnichy, 20 June 2018
www.IVIM.ca/dg/ai (https://bookdown.org/gorodnichy/ai1)
Official CBSA video on Improving Technology for the border:
https://www.cbsa-asfc.gc.ca/multimedia/it-pt/menu-eng.html
This presentation is not associated with the CBSA.
All references to the CBSA come from the public domain.
What’s Common?
USSR | Germany | USA |
---|---|---|
Cybernetics Center in Kiev | Institut für Neuroinformatik in Bochum | Brain Team in Google |
My former colleagues: Ivakhnenko (see library(GMDH)), Kussul, Reznik
1980 | 1999 | 2018 | 2037 |
---|---|---|---|
Observation | Hypothesis | Validation | Application |
Scientific calculators made for everyone | Larger population of programmers | Deep Learning is everyone’s “scientific calculator” | AI everywhere |
No code sharing | Most AI coding done in-house; C++ the language of choice | AI code reusable, modular, scalable, shareable; R / RStudio the IDE of choice | Full automation: AI helping build AI |
… | … | … | ? |
To build highly intelligent systems
… which we built in the past, and
… which we can build now and in the future
URL: http://videorecognition.com/doc/presentations/2006-10-VRS-4VACE-briefed.pdf
Video Recognition Systems of NRC
Project Leader: Dr. Dmitry Gorodnichy
DNI IARPA VACE, Phase III Meeting
Washington DC, 1 November 2006
Dmitry O. Gorodnichy, Associative neural networks as means for low-resolution video-based recognition, International Joint Conference on Neural Networks (IJCNN’05), NRC 48217.
D.O. Gorodnichy, A.M. Reznik (1997), “Increasing Attraction of Pseudo-Inverse Autoassociative Networks”, Neural Processing Letters, vol. 5, no. 2, pp. 123-127.
URL: http://videorecognition.com/doc/presentations/2005-AI-PavlovDogs.pdf, NRC 48209
Projection learning vs correlation learning: from Pavlov dogs to face recognition.
Project Leader: D.O. Gorodnichy
AI'05 Workshop on Correlation learning,
Victoria, BC, Canada,
May 8-12, 2005.
Different terminology - same idea
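To make “same idea” concrete, here is a minimal sketch of the two learning rules (not from the slides; the formulas are the standard associative-memory ones): correlation (Hebbian) learning stores prototypes V via W = VV'/n, while projection (pseudo-inverse) learning uses W = VV+, which is what gives pseudo-inverse networks the larger attraction basins studied in the 1997 paper above.

library(MASS)   # ginv(): Moore-Penrose pseudo-inverse

# m bipolar (+1/-1) prototypes of dimension n, stored as columns of V (made-up data)
n <- 64; m <- 5
set.seed(1)
V <- matrix(sample(c(-1, 1), n * m, replace = TRUE), nrow = n)

W_corr <- V %*% t(V) / n    # correlation (Hebbian) learning
W_proj <- V %*% ginv(V)     # projection (pseudo-inverse) learning

# One synchronous recall step from a corrupted copy of prototype 1:
probe <- V[, 1]; probe[1:8] <- -probe[1:8]   # flip 8 of 64 bits
recalled <- sign(W_proj %*% probe)
mean(recalled == V[, 1])    # fraction of bits recovered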
library(keras) - now with a much more efficient programming workflow and support!
They are the XXI century’s “scientific calculator” for everyone.
library(keras): installing it also installs library(tensorflow). The %>% (pipe) workflow:
# Define model (assumes MNIST-style data already prepared:
# input_shape, x_train, y_train, batch_size, epochs defined beforehand)
model <- keras_model_sequential() %>%
  layer_conv_2d(filters = 32, kernel_size = c(3, 3), activation = 'relu',
                input_shape = input_shape) %>%
  layer_conv_2d(filters = 64, kernel_size = c(3, 3), activation = 'relu') %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_dropout(rate = 0.25) %>%
  layer_flatten() %>%
  layer_dense(units = 128, activation = 'relu') %>%
  layer_dropout(rate = 0.5) %>%
  layer_dense(units = 10, activation = 'softmax')

# Compile model
model %>% compile(
  loss = loss_categorical_crossentropy,
  optimizer = optimizer_adadelta(),
  metrics = c('accuracy')
)

# Train model
model %>% fit(
  x_train, y_train, validation_split = 0.2,
  batch_size = batch_size, epochs = epochs
)
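Once trained, the same pipe workflow scores and applies the model. A brief usage sketch, assuming x_test and y_test were prepared the same way as the training data (they are not defined above):

model %>% evaluate(x_test, y_test)    # held-out loss and accuracy
preds <- model %>% predict(x_test)    # one row of 10 class probabilities per image
head(max.col(preds) - 1)              # most likely digit (0-9) for the first images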
...
Data from Open Government -> Historical Border Wait Times:
library(data.table);library(ggplot2);library(lubridate);
library(magrittr); library(stringr);
dt <- fread("http://cbsa-asfc.gc.ca/data/bwt-taf-2016-07-01--2016-09-30-en.csv")
Cleaning and selecting data:
dt <- dt[str_detect(Location, "BC")] # subset: BWT in British Columbia
dt[, Updated := as.POSIXct(Updated, format = "%Y-%m-%d %H:%M")]
dt$BWT <- str_replace(dt$`Travellers Flow`, "No delay", "0") %>%
str_replace("Closed", "0") %>% str_replace("Not applicable", "0") %>%
str_replace("Missed entry", "NA") %>% as.numeric()
dt <- dt[, .(Updated, BWT, Location)]
theme_set(theme_minimal())
ggplot(dt[Updated < dmy_hm("01/09/2016 00:00")], aes(Updated, BWT, col = Location)) +
geom_step() + facet_grid(Location ~ .)
Knowing past data, can we use “Deep Learning” to predict future data?
Your answer?

# GRU-based model (note: dt must first be converted to a numeric matrix;
# train_gen, val_gen and val_steps are assumed to exist - see the sketch below)
model2 <- keras_model_sequential() %>%
  layer_gru(units = 32, dropout = 0.2, recurrent_dropout = 0.2,
            input_shape = list(NULL, dim(dt)[[-1]])) %>%
  layer_dense(units = 1)

model2 %>% compile(
  optimizer = optimizer_rmsprop(), loss = "mae"
)

model2 %>% fit_generator(
  train_gen, steps_per_epoch = 500, epochs = 40,
  validation_data = val_gen, validation_steps = val_steps
)
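The generators above are not defined on the slide. A minimal sketch of one, modeled on the sliding-window generator pattern from Chollet & Allaire’s Deep Learning with R (all names and numbers here - data, lookback, delay, the index ranges - are assumptions, with `data` a numeric matrix of the BWT series whose first column is the target):

generator <- function(data, lookback, delay, min_index, max_index,
                      batch_size = 128) {
  i <- min_index + lookback
  function() {
    # wrap around once the end of this index range is reached
    if (i + batch_size - 1 > max_index - delay)
      i <<- min_index + lookback
    rows <- i:min(i + batch_size - 1, max_index - delay)
    i <<- i + length(rows)
    samples <- array(0, dim = c(length(rows), lookback, dim(data)[[-1]]))
    targets <- array(0, dim = length(rows))
    for (j in seq_along(rows)) {
      indices <- (rows[[j]] - lookback):(rows[[j]] - 1)
      samples[j, , ] <- data[indices, ]
      targets[[j]] <- data[rows[[j]] + delay, 1]   # BWT `delay` steps ahead
    }
    list(samples, targets)
  }
}

# e.g., look back 48 observations to predict 12 steps ahead (assumed values)
train_gen <- generator(data, lookback = 48, delay = 12,
                       min_index = 1, max_index = 3000)
val_gen   <- generator(data, lookback = 48, delay = 12,
                       min_index = 3001, max_index = 4000)
val_steps <- ceiling((4000 - 3001 - 48) / 128)   # batches per validation pass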
Looking into the Future
What’s common?
… to automatically deduce that
two books in different languages (e.g., “Book of Joy” by the Dalai Lama and Desmond Tutu, written in 2015, and “Deux Vies” (“Two Lives”) by Concordia Antarova, written in Russian in 1955) are about the same…
… through intelligent NN-based text analysis (e.g., RNN, LSTM, PINN).
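As an illustration only (none of this is from the slides; max_words, max_len, the layer sizes and the book1_ids/book2_ids inputs are all assumptions): an LSTM encoder in keras that maps a tokenized text to a fixed-length vector, so that two books whose vectors are close, e.g. by cosine similarity, can be flagged as being about the same thing.

library(keras)

max_words <- 10000   # vocabulary size (assumed)
max_len   <- 500     # tokens kept per text (assumed)

# Encoder: sequence of token ids -> 16-dimensional text embedding
encoder <- keras_model_sequential() %>%
  layer_embedding(input_dim = max_words, output_dim = 64,
                  input_length = max_len) %>%
  layer_lstm(units = 32) %>%
  layer_dense(units = 16)

# Cosine similarity between two encoded texts (book1_ids, book2_ids:
# hypothetical 1 x max_len integer matrices of token ids)
# e1 <- encoder %>% predict(book1_ids)
# e2 <- encoder %>% predict(book2_ids)
# sum(e1 * e2) / (sqrt(sum(e1^2)) * sqrt(sum(e2^2)))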
This presentation available online: www.IVIM.ca/dg/ai
Experimental beta versions:
The RMarkdown/ R source of this document: https://raw.githubusercontent.com/gorodnichy/LA-Rmd-Shiny/master/D/pres_AI-NRC-2018.Rmd
Related:
Contact: dg@ivim.ca (dmitry@gorodnichy.ca)