Privacy-preserving machine learning is a subfield of machine learning in which models are trained in such a way that the privacy of the training data is preserved. Various approaches exist but are not yet well established, while privacy considerations are becoming ever more important. One approach is federated learning, a form of decentralized training in which the data stays at its place of origin and only model or gradient updates are exchanged. Another is differentially private stochastic gradient descent (DP-SGD), in which the learning algorithm of the neural network is modified so that no single training example affects the model too much; as a result, only limited inferences can be drawn from the trained model about the data it was trained on. In this talk we will come to understand both approaches and look at how to implement them with the help of TensorFlow.
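To make the DP-SGD idea concrete, here is a minimal sketch in plain NumPy rather than TensorFlow Privacy's actual implementation: per-example gradients are clipped to a fixed norm and calibrated Gaussian noise is added before the update, so that no single example can move the model too much. The toy linear-regression setup and all parameter values (`clip_norm`, `noise_mult`, and so on) are illustrative assumptions, not values from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_mult=1.1):
    """One DP-SGD step for linear regression with squared loss.

    Each example's gradient is clipped to L2 norm <= clip_norm; the
    clipped gradients are summed, Gaussian noise with standard deviation
    noise_mult * clip_norm is added, and the noisy sum is averaged.
    """
    n = len(X)
    # Per-example gradients of 0.5 * (x.w - y)^2 with respect to w
    grads = (X @ w - y)[:, None] * X
    # Clip each example's gradient to bound its influence
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # Sum, add calibrated Gaussian noise, then average and step
    noisy_sum = grads.sum(axis=0) + rng.normal(
        0.0, noise_mult * clip_norm, size=w.shape)
    return w - lr * noisy_sum / n

# Toy data generated from y = 2*x1 - x2 plus small label noise
X = rng.normal(size=(256, 2))
y = X @ np.array([2.0, -1.0]) + 0.1 * rng.normal(size=256)

w = np.zeros(2)
for _ in range(200):
    w = dp_sgd_step(w, X, y)
print(w)  # approximately [2, -1], blurred by the injected noise
```

In TensorFlow the same pattern is provided ready-made, e.g. by the `DPKerasSGDOptimizer` in the TensorFlow Privacy library, which handles per-example clipping and noise addition inside a Keras training loop.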