FINAL REPORT (August 1991)

EPSRC Grant GR/G 31291 (16.07.90 to 15.07.91)

Building a Learning System using Probabilistic RAM Nets

T.G. Clarkson, Department of Electronic and Electrical Engineering, King's College London
J.G. Taylor Department of Mathematics, King's College London

Introduction

The probabilistic random access memory (pRAM) is an electronic device with intrinsically neuron-like behaviour. Networks of these devices can be used to realise in hardware most of the standard connectionist learning paradigms, and the close analogies between pRAMs and neural firing behaviour also raise the possibility of using neurobiological insights within the context of electronic networks more directly than has hitherto been feasible.

Results of the research

The overall objectives of this project were to investigate in detail the information-processing capabilities of pRAM nets, using both computer simulation and mathematical analysis, and to develop neurocomputing hardware based on these models which has the potential to perform useful work in a variety of pattern-recognition applications. These objectives have been achieved successfully.

The original objectives which have been fully achieved during the period of this grant are itemised below:

Further descriptions of the tasks above and the results obtained are given in this report and in the published papers included in the appendices.

Objectives Partially Met

Those original objectives which have been only partially met are:

The first of these, the integration of the pRAM network with a pattern-recognition system, has not yet been implemented. However, during the course of the year an agreement has been established with the Defence Research Agency (RAE Farnborough) to integrate a pRAM network, based on the new pRAM modules developed during the year, with an image processing system in order to classify selected objects within the image in real time. This work will be spread over a two-year period starting in December 1991.

The second original objective (topographic maps) was set aside since pRAMs have been shown not to be ideally suited to this task. Work on other forms of learning was much more fruitful and led to a distinctive rôle for our research, so that the development of topographic maps is now of lower priority.

Exploitation of the results

Results from the research have been used to set up agreements to exploit and market the pRAM device. Since we collaborate in pRAM research with Dr D Gorse at University College London, UCLi (a UCL company promoting and managing applied contracts and technology transfer) have undertaken to pursue patents on our behalf and to provide legal assistance in discussions with companies concerning the rights to designs incorporating pRAMs.

Papers

Seventeen papers have been published, are in press, or are under review in this period. The titles are given in Section 6 below.

Patents

The following patents were filed before this research grant started; full applications for these have been made during the course of the year.

The following patents have been filed during the year as a result of this research.

New devices

A new generation of learning pRAMs has been designed and these are being built by Plessey (Swindon). A commercial agreement is being set up with Plessey concerning potential exploitation of the design. Further pRAM designs have been completed and decisions are now being made concerning the fabrication of these.

Systems developed

During the year, an agreement has been established with the Defence Research Agency (RAE Farnborough) to integrate a pRAM network, based on the new pRAM modules, with an image processing system in order to classify selected objects within the image in real time; this work will be spread over a two-year period starting in December 1991. The system at RAE Farnborough is very similar to an image processing system developed at King's College, funded by SERC under grant GR/F 62520, "An Integrated Graphics Processor" (3 years from 1.10.89 to 30.09.92). A similar integration of pRAM modules into this design is therefore taking place.

UCLi have developed a marketing strategy for pRAMs and approaches to companies are being made. However, the pattern so far has been that of companies approaching us concerning our inventions. It is very likely that exploitation of systems using pRAMs will take place.

Background to the pRAM

Networks of Boolean units (RAMs) were first studied by Kauffman [1] in the context of genetic nets, and were subsequently used by Aleksander [2] in image classification applications. It was noted that such deterministic units could be regarded as having 'neuron-like' properties (in fact subsuming the category of McCulloch-Pitts binary decision neurons (BDNs), since a RAM can perform any Boolean function of its inputs, whilst a BDN can perform only those which are linearly separable), but no attempt was made to model biological neurons closely. The units mentioned above were constructed from conventional RAMs, with either 0 or 1 stored at a given address. Aleksander [3] extended the RAM model to that of a 'probabilistic logic node' (PLN): in addition to storing 0 or 1, a PLN memory location could be in a 'u' state, in which it was equally likely to output 0 or 1 when addressed. The natural limit of the multi-state PLN is to let the memory contents take on a continuous range of values from the interval [0,1]. This generalisation defines the pRAM model, a neurobiologically motivated extension of the PLN, which has been shown by Gorse and Taylor [4] to display dynamical behaviour identical to that of the Taylor model [5] of the neuron.
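The progression from RAM to PLN to pRAM described above can be made concrete in a few lines of Python. The sketch below is an illustrative reconstruction, not the hardware design: the class and method names are ours, and the convention that the first input bit is the most significant address bit is an assumption.

```python
import random

class PRAM:
    """Minimal sketch of a pRAM: one probability per input address.

    A RAM stores 0 or 1 at each address; a PLN adds a third 'u' state
    (output 0 or 1 with equal probability). The pRAM generalises both
    by storing any alpha in [0, 1], the probability of emitting a 1
    when that address is selected.
    """

    def __init__(self, n_inputs, fill=0.5):
        # 2**n_inputs memory locations, initialised to 'maximum noise'
        self.alpha = [fill] * (2 ** n_inputs)

    def address(self, bits):
        # Binary input vector -> integer memory address
        a = 0
        for b in bits:
            a = (a << 1) | (1 if b else 0)
        return a

    def fire(self, bits, rng=random.random):
        # Output 1 with probability alpha[address], else 0
        return 1 if rng() < self.alpha[self.address(bits)] else 0

# A location holding exactly 0 or 1 behaves like an ordinary RAM cell:
p = PRAM(2)
p.alpha[3] = 1.0   # address (1, 1) always fires
p.alpha[0] = 0.0   # address (0, 0) never fires
```

Intermediate values of alpha recover the PLN's 'u' state (alpha = 0.5) and everything in between, which is the generalisation the pRAM model adds.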

Recent developments in learning have indicated that stochastic activity in the neural units is important for allowing a fuller exploration of the state space, and for providing a capacity for generalisation, but most connectionist models which utilise noise do so by introducing it at the threshold rather than the synaptic level, which is biologically unrealistic. In contrast, the Taylor model emphasises the role of synaptic noise, in keeping with the properties of real neurons. The pRAM model also has the advantage of coding signals in terms of pulse trains (as real neurons do) rather than real-valued variables. This method of information transmission can be shown to be more reliable and to lead to a simple way of calculating time-derivatives of neural signals [6]. It also allows for sensitivity to the temporal aspects of patterns, down to the tens-of-microseconds range for pulse lengths of milliseconds, as in the Mead 'silicon ear' [7]. It may be noted at this point that the most generally adopted connectionist models, those using some form of back error propagation (a process with no biological precedent), have recently suffered setbacks associated with their very lengthy (and occasionally unreliable) training procedures, which may make such models unsuitable for real-time applications.

It has recently been shown [8] that pRAMs can implement a range of learning algorithms, from the type of gradient descent training typified by the back error propagation models mentioned above, through more biologically realistic reward/penalty approaches, to fully unsupervised competitive learning of the Kohonen type. pRAMs thus represent a very flexible approach to the construction of adaptive artificial networks. Moreover, the more biologically realistic training algorithms (reinforcement and competitive learning) are hardware implementable in pRAM technology at all stages of the learning process.

pRAM developments resulting from this research programme

The following sub-sections indicate the variety of developments to the pRAM which have occurred during the past year.

pRAM network connected to a workstation

The original 2-pRAM board built by Clarkson [9], which connects to an IBM-PC compatible computer, is too small for realistic networks, as only two neurons were provided. The first VLSI 4-pRAMs [10] have therefore been built onto a VME card [Appendix 1] for use with the Sun workstations provided by this grant. These are basic pRAM devices and are used in the forward path of the network; the learning task is performed by the workstation, and the new 'weights' are written into the hardware pRAMs as required. With these devices a network of up to ten nodes can be built, with the connectivity determined externally by the workstation. In this way a number of different connectivities can be investigated with a single set of pRAMs.

Half-tone image processing using pRAMs

The 4-pRAM devices have found applications in the processing of half-tone images [11] where learning behaviour is not required, but the non-linear and especially the stochastic features of pRAMs are useful.

Image recognition

One of the stated objectives of this research was to test a pRAM net in realistic image recognition applications using the Sun host. This has yet to be implemented but remains of considerable interest to us. The approach from RAE Farnborough (Section 2.2.3) has led us to develop a pRAM interface to a custom processing system first; this has deferred the implementation of the Sun interface, whose estimated completion was probably ambitious within the 12-month time-scale. This system is to use the new modular pRAM design so as to allow networks in excess of 1000 neurons to be built. These will be interfaced to the Sun by means of the VME rack, as in Section 4.1. The cost of this interface is small and will be borne by the Department.

Learning in pRAMs

Experimentation with the 4-pRAM network has paved the way for the design of a 'learning pRAM', in which all stages of the learning procedure are implemented via pRAM technology [12,13]. Computer simulations of pRAM learning have been performed in order to establish a firm foundation for the subsequent hardware implementations. This work has helped us to achieve the aims of the programme, which were stated as follows:

These topics are explained in the sections below.

Reward/penalty learning

Reward/penalty learning can be implemented in two ways: as a supervised or as an unsupervised training method. An example of the supervised method is given in the next section, and VLSI devices have been built which use unsupervised, or local, reward/penalty learning. A full explanation of this work is contained in papers 1, 3, 4, 5, 8, 9, 10 and 13 listed in Section 6 below.
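As an illustration of the reward/penalty idea, the sketch below shows one plausible update for a single addressed memory location: on reward the stored probability moves towards the output actually emitted, and on penalty it moves towards the opposite output. The function name, the learning rates and the exact form of the penalty term are our assumptions for illustration, not the published rule; the full algorithm is given in the papers cited above.

```python
def reward_penalty_update(alpha, addr, output, reward, rho=0.1, lam=0.05):
    """One reward/penalty step for a single pRAM memory location (sketch).

    alpha  : list of stored firing probabilities, one per address
    addr   : the memory location addressed on this trial
    output : the binary output (0 or 1) actually emitted
    reward : True if the environment rewarded the action, False if
             it penalised it
    rho    : learning rate for the reward term (our choice)
    lam    : learning rate for the penalty term (our choice)
    """
    a = float(output)
    if reward:
        # Reward: make the emitted output more likely at this address
        alpha[addr] += rho * (a - alpha[addr])
    else:
        # Penalty: move towards the complementary output instead
        alpha[addr] += lam * ((1.0 - a) - alpha[addr])
    # Keep the stored value inside the valid probability range
    alpha[addr] = min(1.0, max(0.0, alpha[addr]))
    return alpha[addr]
```

Because only the addressed location is modified, the rule is local in exactly the sense that makes it attractive for hardware implementation: no global error signal needs to be propagated between pRAMs.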

Training in noise

When trained in noise, the pRAM exhibits generalisation behaviour. It is also successful in extracting and classifying noisy images. An example of this is given in Appendix 2 and in paper 14 listed in Section 6.

The integrating pRAM

The i-pRAM allows systems to handle real-valued input and output. This is achieved by integrating spike trains over a period and converting the mean spike frequency to a real-valued number. Details of the i-pRAM are given in papers 7, 9 and 13 of Section 6.
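The integration step described above can be illustrated directly: a binary spike train is summed over an observation window and the count converted to a mean rate in [0, 1], which can in turn be re-expressed as a pulse train at the output. The function names and the fixed-length window are our assumptions; the cited papers give the actual i-pRAM design.

```python
import random

def integrate_spikes(spike_train):
    # i-pRAM input stage (sketch): integrate a binary spike train over
    # an observation window and convert the count to a mean rate in [0, 1].
    return sum(spike_train) / len(spike_train)

def emit_spikes(rate, n_steps, rng=random.random):
    # Output stage (sketch): a real value in [0, 1] is re-expressed as
    # a stochastic pulse train with that mean firing probability.
    return [1 if rng() < rate else 0 for _ in range(n_steps)]

# A rate survives a round trip through spike-train form, up to
# sampling noise that shrinks as the window is lengthened:
rate = integrate_spikes(emit_spikes(0.25, 50000))   # close to 0.25
```

The trade-off visible here is the one the i-pRAM faces in hardware: longer integration windows give more accurate real-valued estimates but slower response to changes in the input.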

The noisy leaky integrator pRAM

This is the latest and most biologically realistic derivative of the pRAM. The structure is shown in Appendix 3 and its temporal processing capability is described in papers 7, 13 and 17 of Section 6.
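A minimal sketch of the idea behind this variant is given below: the unit's internal activity decays geometrically between inputs, accumulates each new input, and is perturbed by noise before being compared with a firing threshold. The decay constant, the Gaussian form of the noise and the reset-on-firing rule are our assumptions for illustration; the published structure is shown in Appendix 3.

```python
import random

def nli_pram_step(v, x, decay=0.9, noise_sd=0.05, threshold=1.0):
    """One time step of a noisy leaky integrator (illustrative sketch).

    v : internal activity carried over from the previous step
    x : the input arriving at this step

    The activity leaks (geometric decay), accumulates the input, and
    is perturbed by Gaussian noise; a spike is emitted when the noisy
    activity reaches the threshold, after which the activity resets.
    """
    v = decay * v + x + random.gauss(0.0, noise_sd)
    if v >= threshold:
        return 0.0, 1          # reset and fire
    return v, 0                # sub-threshold: remember and stay silent
```

Because sub-threshold activity persists between steps, the unit's output depends on the recent history of its inputs and not just their instantaneous values, which is the source of the temporal processing capability described in the cited papers.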

The modular pRAM architecture

In order to develop scalable architectures (possibly involving pyramid structures as proposed by Aleksander) for recognising visual and temporal patterns using reinforcement training, a modular architecture was developed. This allows a large number of neurons to be implemented in hardware with a reconfigurable connectivity between neurons [ ].

It was envisaged that a net of 256 8-input 'learning pRAMs' would be built. In the event, a module containing 256 4-input learning pRAMs has been built. In early 1992, modules containing 256 8-input non-learning pRAMs will also be fabricated, funded by a grant from the University of London. These will operate at high speed and will store the previously learned behaviour of a pRAM net in their 'weight memory'.

Temporal sequences

Another area investigated was the analysis of temporal patterns. The temporally summating pRAM (i-pRAM) and the noisy-leaky integrating pRAM are used here, where the temporal pattern is stored by modification of the weights on the input lines (references above).

Equipment

A SUN Sparcstation 4/330M was originally requested to host a network of pRAM VLSI devices and to provide extra computing power for computer simulations. The feature of this workstation which made it applicable to our research was the VME backplane, which allows plug-in boards to interface to the processor at high speed.

At the time the grant was announced, it was possible to purchase two Sun IPC workstations and an independent S-Bus to VME adaptor at the same price. This allows one machine to be used for simulation and VLSI design and the other machine to be interfaced to the hardware. The two machines have been connected by Ethernet and will eventually be joined to the College's Ethernet back-bone to facilitate communications between the Departments of Electronic and Electrical Engineering and Mathematics.

Publication of Results

The following 17 papers have been sent to international journals and conferences and have been published, are in press, or are under review (as indicated), and 6 patent applications have been filed (Section 2.2.2).

  1. "Training Strategies for Probabilistic RAMs", Gorse D and Taylor J G, Parallel Processing in Neural Systems and Computers, Eds, Eckmiller, Hartmann and Hauske, Elsevier, 1990.
  2. "A general model of stochastic neural processing", Gorse D and Taylor J G, Biol. Cybern. 63, 299-306, 1990.
  3. "pRAM Automata", Clarkson T G, Taylor J G and Gorse D, IEEE International Workshop on Cellular Neural Networks and their Applications (CNNA '90), Budapest, 235-243, December 16-19, 1990.
  4. "Biologically Plausible Learning in Hardware Realisable Nets", Clarkson T G, Gorse D, Taylor J G, Proceedings of ICANN91 Conference, Helsinki, 195-199, 24-28 June 1991.
  5. "A Serial Update VLSI Architecture for the Learning Probabilistic RAM Neuron", Clarkson T G, Ng C K, Gorse D, Taylor J G, Proceedings of ICANN91 Conference, Helsinki, 1573-1576, 24-28 June 1991.
  6. "A Hybrid Serial/Parallel Architecture for Learning RAM-based Neural Networks", Clarkson T G and Ng C K, Proc. 2nd International Conference on Microelectronics for Neural Networks, Munich, 183-191, 16-18 October 1991, Kyril & Method Verlag (Munich).
  7. "Applications of the pRAM", Clarkson T G, Gorse D, Guan Y and Taylor J G, ICANN '91 Conference, Singapore, 18-22 November 1991.
  8. "A Probabilistic RAM Controller with Local Reinforcement Learning", Ng C K and Clarkson T G, ICANN '91 Conference, Singapore, 18-22 November 1991.
  9. "From Wetware to Hardware: Reverse Engineering using Probabilistic RAMs", Clarkson T G, Taylor J G and Gorse D, Special Issue: "Recent Advances in Neural Nets", International Journal of Intelligent Systems, Freund, London - in press.
  10. "The pRAM as a hardware-realisable neuron", Clarkson T G, in Series in Neural Networks, Springer. (Accepted for publication in '92).
  11. "Training strategies for Probabilistic RAMs", Gorse D and Taylor J G, Neural Networks, in press.
  12. "Universal Associative Stochastic Learning Automata", Gorse D and Taylor J G, Neural Networks World, in press.
  13. "Learning Probabilistic RAM Nets Using VLSI Structures", Clarkson T G, Gorse D, Taylor J G, Ng C K, Special Issue on Artificial Neural Networks, IEEE Transactions on Computers. Revised Aug 1991 - in press.
  14. "Generalisation in Probabilistic RAM Nets", Clarkson T G, Gorse D and Taylor J G, IEEE Transactions on Neural Networks - under review.
  15. "Image Conversion Using Probabilistic RAM Nets", Clarkson T G and Guan Y, IEEE Transactions on Circuits and Systems for Video Technology - under review.
  16. "VLSI Implementation of a Digital Neuron", Ng C K and Clarkson T G, Int. J. Computer-Aided VLSI Design - under review.
  17. "Learning Sequential Structures with recurrent pRAM Nets", Gorse D and Taylor J G - under review.

Conclusion

The research grant has been successful in advancing pRAM research. The equipment purchased will continue to serve this end and will become the hub of pRAM research at King's College. This success can be measured by the number of papers and patents arising during the year, the development of new devices, and the imminent implementation of pRAM systems.

References

[1] S A Kauffman, "Metabolic stability and epigenesis in randomly connected genetic nets", J Theor Biol, 22, 437-467, 1969.
[2] I Aleksander, I V Thomas and P A Bowden, "WISARD: a radical step forward in pattern recognition", Sensor Review, 120-124, July 1984.
[3] I Aleksander, "The logic of connectionist systems", in Neural Computing Architectures, I Aleksander (ed), MIT Press, 1989, pp 133-155.
[4] D Gorse and J G Taylor, "On the identity and properties of noisy neural and probabilistic RAM nets", Phys Lett, A131, 326-332, 1988; D Gorse and J G Taylor, "An analysis of noisy RAM and neural nets", Physica, D34, 90-114, 1989.
[5] J G Taylor, "Spontaneous behaviour in neural networks", J Theor Biol, 36, 513-528, 1972.
[6] M Gluck, D B Parker and E S Reifsnider, "Learning with temporal derivatives in pulse-coded neuronal systems", Advances in Neural Information Processing Systems 1, ed. D S Touretzky, Morgan Kaufmann Pub., 1989.
[7] J Lazzaro and C Mead, "A Silicon Model of Auditory Localisation", Neural Computation, 1, 39-46, 1989.
[8] D Gorse and J G Taylor, "A gradient descent algorithm for probabilistic random access memories" (submitted to Network); D Gorse and J G Taylor, "Training strategies for probabilistic RAMs", to appear in: Proceedings of the ICNC Conference, Dusseldorf, March 1990.
[9] T G Clarkson, D Gorse and J G Taylor, "Hardware realisable models of neural processing", in: Proceedings of the First IEE International Conference on Artificial Neural Networks, London, October 1989.
[10] T G Clarkson, J G Taylor and D Gorse, "pRAM Automata", IEEE International Workshop on Cellular Neural Networks and their Applications (CNNA '90), Budapest, 235-243, December 16-19, 1990.
[11] T G Clarkson and Y Guan, "Image Conversion Using Probabilistic RAM Nets", IEEE Transactions on Circuits and Systems for Video Technology, under review.
[12] T G Clarkson and C K Ng, "A Hybrid Serial/Parallel Architecture for Learning RAM-based Neural Networks", Proc. 2nd International Conference on Microelectronics for Neural Networks, Munich, 183-191, 16-18 October 1991, Kyril & Method Verlag (Munich).
[13] T G Clarkson, C K Ng, D Gorse, J G Taylor, "A Serial Update VLSI Architecture for the Learning Probabilistic RAM Neuron", Proceedings of ICANN91 Conference, Helsinki, 1573-1576, 24-28 June 1991.

Appendix 1

VME to pRAM Interface Board

Appendix 2

Global reward/penalty learning and data recovery in the presence of noise

Appendix 3

The noisy leaky integrator pRAM

Appendix 4

Papers relating to the pRAM

August 1991