It’s not a secret that deep learning has already revolutionized several perception fields, such as vision, language, and speech understanding, and keeps pushing the frontiers. Meanwhile, one important data type, which includes time series, digital signals, and any sequential observations, is still mainly processed with rather standard mathematical and algorithmic routines. In this talk, we will review the main sources of time series in the world, the "basic" algorithms, and how exactly they might be improved on or replaced by different neural network architectures. Apart from the models’ details, we will also study the typical tasks that have to be solved while working with time series, such as classification, prediction, anomaly detection, and simulation, and how exactly deep learning can be leveraged to solve them at the state-of-the-art level. Some previous experience with time series/signal processing is useful, but not required.