bitToNumberVector
The bitToNumberVector function converts a Bit Vector into a Number Vector.
The Bit Vector is treated as a series of 48-bit groups, and each 48-bit group is converted
into a Number as follows: (a) the first 8 bits become the signed exponent, and
(b) the trailing 40 bits become the signed mantissa. Only Bit Vectors whose length is an exact multiple of 48 bits can be converted.
If a Bit Vector whose length is not a multiple of 48 bits is passed, an
error message is returned; otherwise the Bit Vector is converted into a Number Vector
containing one Number for every 48 bits in the input Bit Vector. For instance,
passing a Bit Vector of length 144 returns a Number Vector of length 3,
while passing a Bit Vector of length 100 returns an error message.

Usage

When a Bit Vector has been evolved as a genome by a genetic algorithm,
the bitToNumberVector function is an efficient way to convert the Bit
Vector genome into a Number Vector for direct use in solving the target problem.
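To make the 48-bit group layout concrete, the following Python sketch decodes a Bit Vector according to the rules described above. It is an illustration only, not the AIS implementation: reading both fields as two's-complement values and combining them as mantissa * 2**exponent are assumptions, and the function and variable names are hypothetical.

def bit_to_number_vector(bits):
    """Decode a list of 0/1 integers, 48 bits per Number (sketch only).

    The first 8 bits of each group are read as a signed exponent and the
    trailing 40 bits as a signed mantissa, as described above. Treating
    both fields as two's-complement and combining them as
    mantissa * 2**exponent are assumptions made for illustration.
    """
    if len(bits) % 48 != 0:
        raise ValueError("Bit Vector length must be an exact multiple of 48 bits")

    def signed(field):
        # Read a bit field as a two's-complement signed integer.
        value = int("".join(str(b) for b in field), 2)
        if field[0] == 1:                  # sign bit set -> negative value
            value -= 1 << len(field)
        return value

    numbers = []
    for i in range(0, len(bits), 48):
        group = bits[i:i + 48]
        exponent = signed(group[:8])       # first 8 bits: signed exponent
        mantissa = signed(group[8:])       # trailing 40 bits: signed mantissa
        numbers.append(mantissa * 2.0 ** exponent)
    return numbers

# A 144-bit input yields 3 numbers; a 100-bit input raises an error,
# mirroring the behavior described above.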
Expression:

(bitToNumberVector bitVector)
(bitToNumberVector bitVector numVector)
Arguments

Name        Type        Description
bitVector   BitVector   A Bit Vector to be converted into a Number Vector.
numVector   NumVector   (Optional) A Number Vector to receive the converted bits from the Bit Vector.
Returns:

A Number Vector.
The data types of the function arguments are: Vector, NumVector, BitVector.