Deep Learning is all the hype these days, beating another record almost every week. And it’s not just for Google, Facebook, Microsoft & Co. – it can work just fine with not-so-big data and moderate resources, too. Deep learning frameworks abound, and we will see more and more applications integrating deep learning in one way or another. However, writing code for deep learning is not just coding – it really helps to have a basic understanding of what’s going on beneath the surface.
In this session, Sigrid Keydana, a data scientist with the DACH-based IT consulting company Trivadis, explains the indispensable bits of matrix algebra and calculus you should know. She also offers some tips and tricks to get started with deep learning frameworks like Keras, PyTorch and TensorFlow.
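To give a flavour of the kind of calculus the session refers to, here is a minimal sketch (an illustrative example, not taken from the talk): the gradient of a toy squared-error loss, derived by hand with the chain rule and checked numerically – exactly the computation that frameworks like Keras, PyTorch and TensorFlow automate for you.

```python
import numpy as np

# Toy squared-error loss f(w) = (w·x - y)^2 for a single data point.
def loss(w, x, y):
    return (w @ x - y) ** 2

# Analytic gradient via the chain rule: d/dw (w·x - y)^2 = 2 (w·x - y) x.
# This is the step a framework's autodiff performs for every layer.
def grad(w, x, y):
    return 2 * (w @ x - y) * x

x = np.array([1.0, 2.0])
y = 3.0
w = np.array([0.5, 0.5])

# Sanity check: compare against a central finite-difference estimate.
eps = 1e-6
numeric = np.array([
    (loss(w + eps * np.eye(2)[i], x, y) - loss(w - eps * np.eye(2)[i], x, y)) / (2 * eps)
    for i in range(2)
])

print(grad(w, x, y))                       # analytic gradient
print(np.allclose(numeric, grad(w, x, y))) # True
```

In a deep network the same chain rule is applied layer by layer (backpropagation); the frameworks mentioned in the talk build and traverse that computation graph automatically.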
Sigrid Keydana is a data scientist with the DACH-based IT consulting company Trivadis.
In the field of data science and machine learning, she focuses on deep learning (concepts and frameworks), statistical learning and statistics, natural language processing, and software development using R. She has a broad background in software development (especially Java and functional programming languages like Scheme and Haskell), database administration, IT architecture and performance optimization. She writes a blog and is active on Twitter as @zkajdan.