This switch is the equivalent of the -CF command line option. It sets the minimal precision of floating-point constants. Supported values are 32, 64 and DEFAULT. A value of 80 (Extended precision) is not supported for implementation reasons.
Note that this has nothing to do with the precision used in calculations: there, the type of the variable determines which precision is used. This switch only determines the precision with which a constant declaration is stored:
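A minimal sketch of such a declaration (the constant name is illustrative):

```
{$MINFPCONSTPREC 64}
const
  MyFloat = 1.23;
```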
The constant will then be stored with 64-bit precision.