TU Berlin


Projects of the previous funding period (2010 - 2014)

Pillar A - Temporal aspects and dynamics

The goal of this research area is to develop new modeling techniques for understanding neural computation on temporal sequences (stimuli, neural responses, actions) and under non-stationarity.
Some of the projects aim at developing new methods for the analysis of behavioral data and/or brain signals recorded with non-invasive techniques, while others are concerned with new methods for the model-based analysis of neural coding at the level of spikes and local field potentials. The main work within these projects is theoretical: their main purpose is to adapt mathematical and computational techniques developed in other fields to neural modeling. Still, all projects have a clear perspective on how their results apply to sensory computation in behavioral paradigms.

Pillar B - Understanding local computation: invasive studies

This Pillar collects collaborative projects between theory and experiment that include an essential invasive component and require computational models reaching down to the neuron level. Most projects relate neural signals to behavior, either directly or via human psychophysics studies.

Pillar C - Understanding global computation: non-invasive studies

This research area collects collaborative projects between theory and experiment in which the experimental part is essentially non-invasive and related to human neuroscience. All of these projects involve behavioral paradigms, combined with visual psychophysics experiments, fMRI studies, or EEG measurements, and the computational models relate to this kind of data. Some of the projects will largely involve neural populations and will make contact with pattern recognition and inference methods from machine learning, while others will forge links to reinforcement learning and Markov Decision Processes, both modeling tools widely used in the engineering-oriented machine learning community.
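To illustrate the kind of modeling tool referred to here, the following is a minimal sketch of value iteration on a toy Markov Decision Process. The two states, two actions, transition probabilities, rewards, and discount factor are invented for illustration; they do not correspond to any data or model from these projects.

```python
import numpy as np

# Toy MDP (all numbers are hypothetical, chosen only for illustration).
# P[a, s, s'] = probability of moving from state s to s' under action a
P = np.array([
    [[0.9, 0.1],   # action 0
     [0.2, 0.8]],
    [[0.5, 0.5],   # action 1
     [0.4, 0.6]],
])
# R[a, s] = expected immediate reward for taking action a in state s
R = np.array([
    [1.0, 0.0],    # action 0
    [0.0, 2.0],    # action 1
])
gamma = 0.9        # discount factor

V = np.zeros(2)    # value estimate per state
for _ in range(500):
    # Bellman optimality update:
    # V(s) <- max_a [ R(a, s) + gamma * sum_s' P(a, s, s') V(s') ]
    Q = R + gamma * (P @ V)       # Q[a, s], action values
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break                     # converged
    V = V_new

policy = Q.argmax(axis=0)         # greedy action in each state
```

For this toy problem the iteration converges geometrically (at rate gamma) to the optimal state values, and the greedy policy read off from `Q` prefers the action with the higher long-run return in each state.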
