Additional exercises for Fundamentals of Computational Neuroscience as a warm-up for the final
a) Visualize a feedforward network and a recurrent network with the following weight matrix:

        ( 2  4  3 )
    w = ( 1  1  0 )
        ( 3  0  2 )
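A quick way to check a hand drawing, sketched here in Python with numpy, networkx, and matplotlib (the tooling is my own choice, not part of the exercise). The code draws the recurrent reading of the matrix, taking w_ij as the weight of the connection from node j to node i (one common convention; check which convention your course uses). The feedforward reading connects three input units to three separate output units with the same weights.

    import matplotlib.pyplot as plt
    import networkx as nx
    import numpy as np

    # Weight matrix from the exercise; w[i, j] is read here as the
    # connection strength from node j to node i (convention may differ).
    w = np.array([[2, 4, 3],
                  [1, 1, 0],
                  [3, 0, 2]])

    G = nx.DiGraph()
    for i in range(w.shape[0]):
        for j in range(w.shape[1]):
            if w[i, j] != 0:
                G.add_edge(j, i, weight=w[i, j])  # directed edge j -> i

    pos = nx.circular_layout(G)
    nx.draw_networkx(G, pos, node_color="lightblue", node_size=800)
    nx.draw_networkx_edge_labels(G, pos,
                                 edge_labels=nx.get_edge_attributes(G, "weight"))
    plt.axis("off")
    plt.show()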
Can a multilayer feedforward mapping network approximate an exponential function?
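The answer is yes on any bounded interval, by the universal approximation property of networks with one hidden layer of sigmoidal units. As a numerical illustration (my own sketch, assuming Python/NumPy; gradient descent is used purely as a convenient way to find the weights for this demo), the following fits a one-hidden-layer tanh network to exp(x) on [-2, 2]:

    import numpy as np

    rng = np.random.default_rng(0)

    # Target: exp(x) sampled on [-2, 2].
    x = np.linspace(-2, 2, 200).reshape(-1, 1)
    y = np.exp(x)

    # One hidden layer of tanh units feeding a linear output unit.
    H = 20
    W1 = rng.normal(0, 1, (1, H)); b1 = np.zeros(H)
    W2 = rng.normal(0, 1, (H, 1)); b2 = np.zeros(1)

    lr = 0.01
    for step in range(20000):
        h = np.tanh(x @ W1 + b1)      # hidden activations
        yhat = h @ W2 + b2            # network output
        err = yhat - y                # prediction error
        # Gradient descent on the mean squared error.
        gW2 = h.T @ err / len(x); gb2 = err.mean(0)
        gh = (err @ W2.T) * (1 - h**2)
        gW1 = x.T @ gh / len(x); gb1 = gh.mean(0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1

    print("max |error| on the grid:", float(np.abs(yhat - y).max()))

The error can be made as small as desired with more hidden units and training; outside the training interval the fit degrades, since the approximation guarantee only holds on compact sets.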
Consider the mapping function given by the following truth table:

x1  x2  x3 | y1  y2
-------------------
 1   1   0 |  1   1
 1   0   0 |  0   1
 0   1   0 |  1   0
 0   0   1 |  1   1
a) Is this function linearly separable? (A programmatic check is sketched after c).)
b) Draw a network architecture that can, in principle, implement this function.
c) Specify the weights and thresholds that represent the above function.
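Questions a) and c) can be checked with a small brute-force search (a sketch of my own, assuming Python/NumPy, with a threshold unit defined as y = 1 if w·x > theta). If the search finds weights and a threshold for each output unit, that output is linearly separable, and the values found answer c):

    import itertools
    import numpy as np

    X = np.array([[1, 1, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
    Y = np.array([[1, 1], [0, 1], [1, 0], [1, 1]])

    candidates = [-1.0, -0.5, 0.0, 0.5, 1.0]  # small search grid
    for out in range(Y.shape[1]):
        target = Y[:, out].astype(bool)
        found = None
        for ws in itertools.product(candidates, repeat=3):
            for theta in candidates:
                if np.array_equal(X @ np.array(ws) > theta, target):
                    found = (ws, theta)
                    break
            if found:
                break
        if found:
            print(f"y{out + 1}: weights = {found[0]}, threshold = {found[1]}")
        else:
            print(f"y{out + 1}: no solution on this grid")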
Specify a supervised learning rule for feedforward mapping networks that does not use error backpropagation.
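One standard answer is the perceptron (delta) rule: each weight is changed in proportion to the presynaptic activity and the error observed directly at the output node it feeds, so no error signal has to be propagated back through hidden layers. A minimal sketch, assuming Python/NumPy, binary threshold units, and the truth table from the previous exercise:

    import numpy as np

    X = np.array([[1, 1, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
    Y = np.array([[1, 1], [0, 1], [1, 0], [1, 1]])

    rng = np.random.default_rng(1)
    W = rng.normal(0, 0.1, (3, 2))  # input-to-output weights
    b = np.zeros(2)                 # bias = negative threshold
    eta = 0.1                       # learning rate

    for epoch in range(100):
        for xp, yp in zip(X, Y):
            out = (xp @ W + b > 0).astype(float)  # threshold units
            delta = yp - out                      # error at the output
            W += eta * np.outer(xp, delta)        # delta rule update
            b += eta * delta

    print((X @ W + b > 0).astype(int))  # should reproduce Y

Because each output of this table is linearly separable, the perceptron convergence theorem guarantees that the rule finds a solution.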
Given are the patterns x1 = (1, 1, -2)' and x2 = (-1.5, 2, -0.5)', which we want to imprint into an attractor network with Hebbian learning. Give the weight matrix after learning.
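A numerical check of the hand calculation, sketched in Python/NumPy. This assumes the basic Hebbian imprinting rule w_ij = (1/N) * sum over patterns of x_i x_j, with self-connections set to zero, which is a common convention for attractor (Hopfield-type) networks; drop the 1/N if your course defines the rule without normalization:

    import numpy as np

    x1 = np.array([1.0, 1.0, -2.0])
    x2 = np.array([-1.5, 2.0, -0.5])

    N = len(x1)
    # Sum of outer products of the imprinted patterns, normalized by N.
    W = (np.outer(x1, x1) + np.outer(x2, x2)) / N
    np.fill_diagonal(W, 0)  # no self-connections
    print(W)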