GANs schön kompliziert: Applications of Generative Adversarial Networks

Citable link (URI): http://hdl.handle.net/10900/135001
http://nbn-resolving.de/urn:nbn:de:bsz:21-dspace-1350010
http://dx.doi.org/10.15496/publikation-76352
Document type: Dissertation
Publication date: 2023-01-03
Language: English
Faculty: 7 Mathematisch-Naturwissenschaftliche Fakultät
Department: Computer Science
Advisor: Macke, Jakob H. (Prof. Dr.)
Date of oral examination: 2022-12-09
DDC classification: 500 - Natural sciences
Keywords:
GANs
neuroscience
Bayesian inference
probabilistic machine learning
statistical models
License: http://tobias-lib.uni-tuebingen.de/doku/lic_mit_pod.php?la=de http://tobias-lib.uni-tuebingen.de/doku/lic_mit_pod.php?la=en

Abstract:

Scientific research progresses via model-building. Researchers attempt to build realistic models of real-world phenomena, ranging from bacterial growth to galactic motion, and study these models as a means of understanding those phenomena. Making these models as realistic as possible, however, often involves fitting them to experimentally measured data. Recent advances in experimental methods have allowed for the collection of large-scale datasets, while advances in computational capacity have allowed for more complex model-building. The confluence of these two factors accounts for the rise of machine learning methods as powerful tools, both for building models and for fitting them to large-scale datasets. In this thesis, we use a particular machine learning technique: generative adversarial networks (GANs). GANs are a flexible and powerful tool, capable of fitting a wide variety of models. We explore the properties of GANs that underpin this flexibility, and show how we can capitalize on them in different scientific applications, beyond the image- and text-generating applications for which they are well known. Here we present three different applications of GANs. First, we show how GANs can be used as generative models of neural spike trains, and how they capture more features of these spike trains than other approaches. We also show how this could provide insight into how information about stimuli is encoded in the spike trains. Second, we demonstrate how GANs can be used as density estimators for extending simulation-based Bayesian inference to high-dimensional parameter spaces. In this setting, we also show how GANs bridge Bayesian inference methods and variational inference with autoencoders, and we use them to fit complex climate models to data. Finally, we use GANs to infer synaptic plasticity rules for biological rate networks directly from data. We then show how GANs can be used to test the robustness of the inferred rules to differences in data and network initialisation. Overall, we repurpose GANs in new ways for a variety of scientific domains, and show that they confer specific advantages over state-of-the-art methods in each of these domains.
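
As an illustrative aside (not taken from the thesis itself), the sketch below shows the basic adversarial training loop that the applications above build on: a generator is trained to produce samples that a discriminator cannot distinguish from real data. The toy 1-D "data" distribution, network sizes, and hyperparameters are assumptions chosen purely for demonstration; the sketch uses PyTorch.

    # Minimal GAN sketch (illustrative only, not code from the thesis).
    # A generator learns to mimic samples from a toy 1-D Gaussian "simulator",
    # while a discriminator learns to tell real samples from generated ones.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    def sample_real(n):
        # Hypothetical stand-in for real data (mean 2.0, std 0.5).
        return torch.randn(n, 1) * 0.5 + 2.0

    generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
    discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

    opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    for step in range(2000):
        real = sample_real(64)
        fake = generator(torch.randn(64, 8))  # map noise to samples

        # Discriminator update: push real samples toward label 1, fakes toward 0.
        d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
                 bce(discriminator(fake.detach()), torch.zeros(64, 1))
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()

        # Generator update: make the discriminator assign label 1 to fakes.
        g_loss = bce(discriminator(fake), torch.ones(64, 1))
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()

    # The mean of generated samples should drift toward the data mean (2.0).
    print(generator(torch.randn(1000, 8)).mean().item())

The same adversarial objective generalizes to the settings mentioned in the abstract by swapping the toy data for spike trains, simulator outputs, or network activity, and by conditioning the networks appropriately.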
