A well-known problem of software development is that the discipline has very little scientific foundation. Electrical engineers can use physics as the foundation of their activities. As a result, the measures they use in their daily work, such as the volt and the ampere, correspond to well-defined phenomena (electric potential, electric current) and satisfy the rules and properties defined by theories of measurement, such as the representational theory [11]. The models they use, such as Ohm's law, are experimentally validated and expressed in mathematical form.
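For concreteness, Ohm's law is a one-line example of such a validated model, relating the two measures just mentioned:

```latex
V = I \cdot R
```

Here V is the electric potential in volts, I the current in amperes, and R the resistance in ohms; the law can be checked directly against measurements. Software engineering has few, if any, relationships of this kind.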
In software engineering we have a hard time defining objects and properties (what exactly is the size of a software program? And its complexity?). Not surprisingly, measuring an ill-defined property is a hard task (are lines of code a valid measure of size?). Models relating properties are therefore hard to find and validate.
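To make the ambiguity of "lines of code" concrete, here is a minimal sketch (the helper name loc_counts is ours, and the two conventions shown are only two of many in use) of how different counting rules disagree on the size of the same fragment:

```python
def loc_counts(source: str) -> tuple[int, int]:
    """Return (physical lines, non-blank non-comment lines).

    Both are legitimate 'LOC' conventions, yet they rarely agree:
    the first counts every line, the second drops blank lines and
    full-line comments.
    """
    lines = source.splitlines()
    physical = len(lines)
    effective = sum(
        1
        for line in lines
        if line.strip() and not line.strip().startswith("#")
    )
    return physical, effective


example = """\
# Compute n! iteratively.

def factorial(n):
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result
"""

print(loc_counts(example))  # (7, 5): two defensible size measures, two answers
```

If even these two simple conventions diverge on a seven-line fragment, comparing LOC figures across projects, languages, or tools is clearly delicate.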
Some models, for instance the relationship between effort and size, correspond to the common perception of engineers (larger programs require more effort) but are hard to express in a general form (is the relationship linear or exponential?). Other models are merely envisaged and still very fuzzy (what is the relationship between the effort spent on integration testing and the post-release defect rate?).
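As an illustration (the specific form below is the one popularized by the COCOMO family of cost models, which this section does not cite, so take it as an example rather than a settled answer), such effort-size models are often written as a power law, with the exponent deciding between the candidate shapes:

```latex
\mathit{Effort} = a \cdot \mathit{Size}^{\,b}
```

With b = 1 the relationship is linear; b > 1 encodes the intuition that effort grows faster than size. Which values of a and b hold, and whether any single pair holds in general, is exactly the kind of empirical question the field struggles to answer.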
Initially, the software engineering community attacked the problem of defining software measures as a hard-science problem. The Software Science approach by Halstead [15] in the mid-1970s can be mentioned as an example of the optimism of that period. However, these approaches completely lacked validation and overlooked the importance of human factors.
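For reference, the core Software Science measures, as commonly stated from [15], are computed purely from token counts, with n1 and n2 the numbers of distinct operators and operands and N1 and N2 their total occurrences; no empirical calibration is involved, which is precisely where the missing validation bites:

```latex
n = n_1 + n_2, \qquad N = N_1 + N_2, \qquad V = N \log_2 n, \qquad
D = \frac{n_1}{2} \cdot \frac{N_2}{n_2}, \qquad E = D \cdot V
```

Here V is the program "volume", D its "difficulty", and E the predicted mental effort, quantities whose correspondence to any well-defined human phenomenon was asserted rather than demonstrated.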