This paper introduces a general class of neuron
models that accept heterogeneous inputs in the form of mixtures of
continuous (crisp or fuzzy) numbers, linguistic information, and
discrete (either ordinal or nominal) quantities, with provision also
for missing information. Their internal stimulation
is based on an explicit {\it similarity relation}
between the input and weight tuples (which are also heterogeneous).
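Schematically, and with notation assumed here rather than taken from
the models themselves, such a neuron computes
\[
  y_i \;=\; g\bigl(s(\mathbf{x}, \mathbf{w}_i)\bigr),
\]
where $s$ is the similarity relation between the (possibly incomplete)
input tuple $\mathbf{x}$ and the weight tuple $\mathbf{w}_i$, and $g$
is an activation function.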
The framework is comprehensive, and several models can
be derived as instances; in particular, two commonly used
models are shown to compute a specific similarity function
provided all inputs are real-valued and complete. An example family of models
defined by composition of a Gower-based similarity with a
sigmoid function is shown to lead to network designs
({\em Heterogeneous Neural Networks}) capable of learning from
non-trivial data sets with remarkable effectiveness, comparable
to that of classical models.
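As an illustrative sketch only (the partial similarities $s_k$ and
missingness indicators $\delta_k$ are assumed notation, following
Gower's coefficient rather than any specific model above), such a
similarity over $n$ heterogeneous variables can be written as
\[
  s(\mathbf{x}, \mathbf{w}) \;=\;
  \frac{\sum_{k=1}^{n} \delta_k\, s_k(x_k, w_k)}{\sum_{k=1}^{n} \delta_k},
\]
where each $s_k \in [0,1]$ is a partial similarity chosen per variable
type (for instance, $1 - |x_k - w_k|/R_k$ for a real-valued variable
with range $R_k$, or an equality indicator for a nominal one) and
$\delta_k = 0$ whenever $x_k$ or $w_k$ is missing; the neuron output
is then obtained by passing $s(\mathbf{x}, \mathbf{w})$ through a
sigmoid.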