Key concepts in neural networks include similarity, generalization, invariance, and training. Some neural networks are supposed to be able to classify objects according to hidden similarities. All of these concepts are called into question by a consideration first put forward by Watanabe: from a purely logical point of view, similarity is an entirely arbitrary notion. It can be shown that this implies that the notion of invariance is also arbitrary, and that so-called hidden similarities and generalization cannot exist without some external criteria. Such criteria are either implicit in the training algorithms or must be imposed explicitly. This places severe limitations on what neural networks can accomplish. There are, however, some positive implications: neural networks can be designed to classify objects into arbitrary classes. Applications to optical neural networks and examples are presented.
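Watanabe's argument is often stated as the Ugly Duckling theorem: if a predicate is taken extensionally (as a subset of the universe of possible objects) and no predicate is weighted above any other, then every pair of distinct objects shares exactly the same number of predicates, so no pair is objectively "more similar" than any other. A minimal sketch of this counting argument, using a hypothetical universe of 3-bit feature vectors (all names and parameters here are illustrative, not from the paper):

```python
from itertools import product, combinations

# Sketch of the Ugly Duckling theorem: a "predicate" is any subset
# of the universe of possible objects. Without an external weighting
# of predicates, all pairs of distinct objects are equally similar.

BITS = 3
universe = list(product([0, 1], repeat=BITS))  # all 2**3 = 8 possible objects
N = len(universe)

def shared_predicates(i, j):
    """Count the predicates (subsets of the universe, encoded as
    bitmasks over the 8 objects) that are true of both object i
    and object j."""
    return sum(
        1
        for mask in range(2 ** N)               # enumerate all 256 predicates
        if (mask >> i) & 1 and (mask >> j) & 1  # predicate holds for both
    )

counts = {(i, j): shared_predicates(i, j)
          for i, j in combinations(range(N), 2)}
print(set(counts.values()))  # one single value for all 28 pairs: 2**(N-2) = 64
```

Every pair shares exactly 2^(N-2) predicates, which is why any preference among groupings must come from external criteria, here modeled as a weighting over predicates, rather than from logic alone.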
Topic:
Neural Networks and Applications
Source information:
Source: Proceedings of SPIE, the International Society for Optical Engineering