This work presents a class of functions serving
as generalized neuron models to be used in artificial neural
networks. They are cast in the common framework of computing a
similarity function, which yields a flexible definition of the neuron
as a pattern recognizer. This similarity view endows the models with a
clear conceptual interpretation and serves as a unifying framework for
many existing neural models, including those classically used in the
Multilayer Perceptron (MLP) and most of those used in Radial Basis
Function (RBF) networks. These families of models are conceptually
unified and their relation is clarified. The possibilities of deriving
new instances are then explored, and several neuron models, each
representative of its family, are proposed.
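To make the similarity view concrete, the following minimal sketch (in Python; the function names and the particular similarity choices are illustrative assumptions, not taken from this work) shows how an MLP-style unit and an RBF-style unit both arise as instances of a similarity function s(x, w) between an input x and a stored pattern w:

```python
# Illustrative sketch only: a neuron viewed as a similarity function
# s(x, w) measuring how much the input x resembles the stored pattern w.
# MLP- and RBF-style units appear as two particular choices of s.
import numpy as np

def mlp_like_similarity(x, w, b=0.0):
    """Inner-product similarity followed by a squashing function,
    as in a classical MLP unit."""
    return np.tanh(np.dot(x, w) + b)

def rbf_like_similarity(x, w, sigma=1.0):
    """Distance-based similarity: close to 1 when x is near the
    center w, decaying with Euclidean distance, as in an RBF unit."""
    return np.exp(-np.sum((x - w) ** 2) / (2.0 * sigma ** 2))

x = np.array([0.2, 0.8, -0.1])
w = np.array([0.3, 0.7, 0.0])
print(mlp_like_similarity(x, w), rbf_like_similarity(x, w))
```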
In addition, the similarity view leads naturally to further extensions
of the models to handle heterogeneous information, that is,
information coming from sources radically different in character,
including continuous and discrete (ordinal) numerical quantities,
nominal (categorical) quantities, and fuzzy quantities. Missing data
are also treated explicitly as such.
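As an illustration of how such a heterogeneous similarity could be assembled, the sketch below averages per-attribute partial similarities in a Gower-like fashion; this particular aggregation, and every name in the code, are our assumptions for illustration, since the text does not fix a formula here. Missing values are excluded from the average rather than imputed, matching the idea of treating them explicitly as such:

```python
# Illustrative sketch of a heterogeneous similarity (assumed Gower-style
# aggregation). Each attribute contributes a partial similarity in
# [0, 1]; missing values (None) are excluded from the average.

def partial_similarity(a, b, kind, value_range=1.0):
    if a is None or b is None:
        return None  # missing value: contributes nothing
    if kind == "continuous":
        return 1.0 - abs(a - b) / value_range  # range-normalized
    if kind == "nominal":
        return 1.0 if a == b else 0.0  # simple overlap for categories
    raise ValueError(f"unsupported kind: {kind}")

def heterogeneous_similarity(x, w, kinds, ranges):
    parts = [partial_similarity(a, b, k, r)
             for a, b, k, r in zip(x, w, kinds, ranges)]
    known = [p for p in parts if p is not None]
    return sum(known) / len(known) if known else 0.0

# One continuous attribute (range 10), one nominal, one missing:
x = [3.0, "red", None]
w = [5.0, "red", "yes"]
print(heterogeneous_similarity(
    x, w, ["continuous", "nominal", "nominal"], [10.0, 1.0, 1.0]))
```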
A neuron of this class is called a heterogeneous neuron, and any
network making use of them is a Heterogeneous Neural Network (HNN),
regardless of its specific architecture or
learning algorithm. In this work, however, the experiments are restricted to
feed-forward networks, as the initial focus of study. The learning
procedures include a variety of techniques, broadly divided into
derivative-based methods (such as conjugate gradient) and evolutionary
ones (such as genetic algorithms).
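For the evolutionary route, the following toy sketch (hypothetical, not the actual training procedure of this work) shows a simple genetic algorithm adjusting one neuron's stored pattern; such methods need no derivatives, which matters when the similarity function is not differentiable:

```python
# Toy sketch of evolutionary training (hypothetical): a genetic
# algorithm evolving the weight vector of one similarity neuron so
# that it responds strongly to a target input pattern.
import random

TARGET = [0.3, 0.7, 0.0]

def fitness(w):
    # Higher when w is closer to the target pattern (RBF-like response).
    return -sum((wi - ti) ** 2 for wi, ti in zip(w, TARGET))

def mutate(w, rate=0.1):
    return [wi + random.gauss(0.0, rate) for wi in w]

population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                     # truncation selection
    population = parents + [mutate(random.choice(parents))
                            for _ in range(15)]  # mutation-only offspring
print(max(population, key=fitness))
```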