bitToIntegerVector
The bitToIntegerVector function converts a Bit Vector into an Integer Vector.
The Bit Vector is treated as a series of 32-bit groups. Each 32-bit group is converted
into an Integer as follows: (a) the first bit becomes the sign, and (b) the trailing
31 bits become the magnitude. Only Bit Vectors whose length is an exact multiple of 32 bits can be converted.
If a Bit Vector whose length is not an exact multiple of 32 bits is passed, an
error message is returned; otherwise, the Bit Vector is converted into an Integer
Vector containing one Integer for every 32 bits in the input Bit Vector.
For instance, passing a Bit Vector of length 96 returns an Integer Vector of
length 3, while passing a Bit Vector of length 100 returns an error message.

Usage

When a Bit Vector has been evolved as a genome by a genetic algorithm, the
bitToIntegerVector function is an efficient way to convert the Bit
Vector genome into an Integer Vector for direct use in solving the target problem.
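The grouping rule described above can be sketched outside of AIS Lisp. The following Python function (the name and the list-of-bits representation are hypothetical, chosen only for illustration) decodes each 32-bit group as a sign bit followed by a 31-bit magnitude, and rejects lengths that are not an exact multiple of 32:

```python
def bit_to_integer_vector(bits):
    """Conceptual sketch of bitToIntegerVector: decode a bit vector
    (a list of 0/1 values) into a list of integers, 32 bits per integer."""
    # Reject lengths that are not an exact multiple of 32 bits,
    # mirroring the error behavior described above.
    if len(bits) % 32 != 0:
        raise ValueError("Bit Vector length must be an exact multiple of 32")
    result = []
    for i in range(0, len(bits), 32):
        group = bits[i:i + 32]
        # The first bit of each group is the sign ...
        sign = -1 if group[0] else 1
        # ... and the trailing 31 bits are the magnitude.
        magnitude = 0
        for b in group[1:]:
            magnitude = (magnitude << 1) | b
        result.append(sign * magnitude)
    return result
```

For example, a 96-bit input of all zeros yields a three-element result of zeros, matching the length-96 example above.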
Expression:

(bitToIntegerVector bitVector)
(bitToIntegerVector bitVector intVector)

Arguments:

Name       Type       Description
bitVector  BitVector  A Bit Vector to be converted into an Integer Vector.
intVector  IntVector  (Optional) An Integer Vector to receive the converted bits from the Bit Vector.

Returns:

Returns an Integer Vector.
Analytic Information Server (AIS), AIS Component Systems