Neural Modeling of Speech Processing and Speech Learning: An Introduction
Neuronale Modellierung der Sprachverarbeitung und des Sprachlernens: Eine Einführung
German version of "the book" (buy from Springer Verlag or read on SpringerLink)
English version of "the book" (buy from Springer Verlag)
Source Code for NENGO Examples (for book chapter 7)
All simulation examples are written in Python and can be run, for example, in a Jupyter notebook as included in the Anaconda distribution.
To install Nengo in this environment, see www.nengo.ai -> download.
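To illustrate the representation principle behind Examples 01 and 02 (a neuron population encodes a sine-wave input and a linear decoder reads it back out), here is a minimal sketch in plain NumPy rather than Nengo. All names and parameter values are illustrative choices, not taken from the book or the notebooks:

```python
# Sketch of the NEF representation principle (cf. Examples 01/02):
# a population of rectified-linear "neurons" with random tuning
# encodes a scalar, and least-squares decoders reconstruct it.
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 50

# Random preferred directions (+1/-1), gains, and biases (illustrative).
encoders = rng.choice([-1.0, 1.0], size=n_neurons)
gains = rng.uniform(0.5, 2.0, size=n_neurons)
biases = rng.uniform(-1.0, 1.0, size=n_neurons)

def rates(x):
    """Rectified-linear tuning curves for a scalar input x."""
    return np.maximum(0.0, gains * encoders * x + biases)

# Solve for linear decoders by least squares over sample points in [-1, 1].
xs = np.linspace(-1, 1, 200)
A = np.array([rates(x) for x in xs])              # (200, n_neurons) activities
decoders, *_ = np.linalg.lstsq(A, xs, rcond=None)

# Decode a sine wave from the population activity.
t = np.linspace(0, 1, 100)
signal = np.sin(2 * np.pi * t)
decoded = np.array([rates(x) @ decoders for x in signal])
print(np.max(np.abs(decoded - signal)))  # reconstruction error, typically small
```

With many neurons the reconstruction error shrinks, which is the contrast between Example 01 (two neurons) and Example 02 (many neurons); the Nengo notebooks show the same effect with spiking neurons.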
- Example 01 (chapter 7.1.2): representation of a sine-wave input by just two neurons. See: example01.ipynb
- Example 02 (chapter 7.1.2): representation of a sine-wave input by many neurons. See: example02.ipynb
- Example 03 (chapter 7.1.3): simple transformation (identity) between neuron ensembles. Download: example03.ipynb
- Example 04 (chapter 7.1.3): complex transformation (square, square root) between neuron ensembles and addition of outputs. Download: example04.ipynb
- Example 05 (chapter 7.1.4): recurrent neuron ensemble as short term memory. See: example05.ipynb
- Example 06 (chapter 7.1.4): recurrent neuron ensemble as oscillator. See: example06.ipynb
- Example 07 (chapter 7.2.1): similarity plot of temporal succession of four S-Pointers (concepts). Download: example07.ipynb
- Example 08 (chapter 7.2.2): binding of words and syntax markers to a sentence followed by unbinding. Download: example08.ipynb
- Example 09 (chapter 7.2.3): simple transformation (identity) of S-Pointers from buffer A to buffer B. Download: example09.ipynb
- Example 10 (chapter 7.2.3): complex transformation of S-Pointers (using associative memory) from buffer A to buffer B. Download: example10.ipynb
- Example 11 (chapter 7.2.3): superposition of S-Pointers (addition). Download: example11.ipynb
- Example 12 (chapter 7.3.1): question answering based on visual input (concept generation, one answer, buffers only). Download: example12.ipynb
- Example 13 (chapter 7.3.1): question answering based on visual input (concept generation, four answers, buffers only). Download: example13.ipynb
- Example 14 (chapter 7.3.1): question answering based on visual input (concept generation, four answers, one memory). Download: example14.ipynb
- Example 15 (chapter 7.3.1): question answering based on visual input (concept generation, four answers, three memories). Download: example15.ipynb
- Example 16 (chapter 7.3.3): question answering based on visual input (generation of phonological form). Download: example16.ipynb
- Example 17 (chapter 7.3.3): question answering based on visual input (syllable sequencing). Download: example17.ipynb
- Example 18 (chapter 7.4.3): simulating a phonological and a semantic S-Pointer network: similarity of items (mean values). Download: xxx.ipynb
- Example 19 (chapter 7.4.3): simulating a phonological and a semantic S-Pointer network: similarity of items (sample items). Download: xxx.ipynb
- Example 20 (chapter 7.4.5): simulating inverse binding for "apple" in semantic network. Download: xxx.ipynb
- Example 21 (chapter 7.4.5): simulating inverse binding for "almond" in semantic network. Download: xxx.ipynb
- Example 22 (chapter 7.4.5): simulating inverse binding for "apple" in semantic network using clean-up. Download: xxx.ipynb
- Example 23 (chapter 7.4.6): simulating inverse binding for "fruits" in semantic network using clean-up. Download: xxx.ipynb
- Example 24 (chapter 7.4.6): simulating inverse binding for "apple" in semantic network using modified clean-up. Download: xxx.ipynb
- Example 25 (chapter 7.4.6): simulating inverse binding for "almond" in semantic network using modified clean-up. Download: xxx.ipynb
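The recurrent ensembles of Examples 05 and 06 both exploit the same idea: feeding an ensemble's decoded output back to its own input gives it dynamics. As a rough illustration of the short-term memory case, here is the ideal dynamics (dx/dt = u) in a plain NumPy Euler loop, not a spiking Nengo model; time step and pulse length are illustrative choices:

```python
# Sketch of the recurrent short-term memory idea (cf. Example 05):
# an integrator ramps up while an input pulse is present and then
# holds the stored value once the input is removed.
import numpy as np

dt = 0.001            # simulation time step in seconds (illustrative)
x = 0.0               # value stored by the recurrent ensemble
history = []

for step in range(2000):              # simulate 2 s
    t = step * dt
    u = 1.0 if t < 0.5 else 0.0      # input pulse for the first 0.5 s
    x = x + dt * u                   # integrator dynamics dx/dt = u
    history.append(x)

print(history[-1])   # ramps to ~0.5 during the pulse, then held
```

In the Nengo notebook the feedback runs through a synapse and spiking neurons, so the stored value drifts slightly instead of being held perfectly; the oscillator of Example 06 uses the same recurrence with a rotational feedback matrix.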
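The binding, unbinding, and clean-up operations used in the S-Pointer examples (08, 20-25) can be sketched without Nengo: binding is circular convolution, unbinding is circular convolution with the approximate inverse (the involution of the vector), and clean-up picks the most similar vocabulary item. The vocabulary items and dimensionality below are illustrative, not the book's:

```python
# Sketch of S-Pointer binding/unbinding and clean-up in plain NumPy.
import numpy as np

rng = np.random.default_rng(1)
D = 256  # pointer dimensionality (illustrative)

def unit(v):
    return v / np.linalg.norm(v)

def bind(a, b):
    """Circular convolution via FFT."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

def inverse(a):
    """Approximate inverse (involution): [a0, a_{D-1}, ..., a1]."""
    return np.concatenate(([a[0]], a[:0:-1]))

def similarity(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

APPLE = unit(rng.standard_normal(D))
FRUIT = unit(rng.standard_normal(D))

bound = bind(APPLE, FRUIT)                 # e.g. a role-filler binding
recovered = bind(bound, inverse(FRUIT))    # noisy copy of APPLE

print(similarity(recovered, APPLE))        # well above chance
print(similarity(recovered, FRUIT))        # near zero

# Clean-up: replace the noisy result by the best-matching vocabulary item.
vocab = {"APPLE": APPLE, "FRUIT": FRUIT}
best = max(vocab, key=lambda k: similarity(recovered, vocab[k]))
print(best)
```

Unbinding only ever yields a noisy approximation, which is why Examples 22-25 add (modified) clean-up memories on top of the inverse binding.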