What is the meaning of the following pseudo-code?:
BYTE1(v2)
or BYTE1(v2)
HIBYTE(v2)
Is there any explanation for those macros? How can I implement those macros in C code?
BYTE1(v2)
is the second byte of the value v2. According to the reference it is zero-indexed. It is defined as:
#define BYTEn(x, n) (*((_BYTE*)&(x)+n))
#define BYTE1(x) BYTEn(x, 1) // byte 1 (counting from 0)
For example, BYTE1(0x1213141516)
is 0x15 (assuming little-endian byte order).
HIBYTE(v2)
is the high byte of the value v2, defined as:
#define HIBYTE(x) (*((_BYTE*)&(x)+1))
For example, HIBYTE(0x1213)
is 0x12 (assuming little-endian byte order).
Open your IDA installation folder and look at plugins\defs.h; this file contains all of the macros used by the Hex-Rays decompiler. It can also be found on GitHub, as linked in arman's answer.
Important -- this definition has changed in recent versions of IDA, both in defs.h
and in the decompiler output.
As of (some version of IDA between 7.1 and 7.5) HIBYTE means something different: the most significant byte of the whole value (the highest, or last, byte). E.g., for an __int32 it now means BYTE3, and for an __int64 it means BYTE7.
This is contrary to the default Windows definition and to older versions of IDA.
// minwindef.h
auto result_win = static_cast<BYTE>(static_cast<uintptr_t>(x) >> 8 & 0xff);
// ida_defs_70.h
auto result_ida70 = *(reinterpret_cast<uint8*>(&x)+1);
// ida_defs_75.h
auto result_ida75 = *(reinterpret_cast<uint8*>(&x)+(sizeof x/sizeof(uint8) - 1));
Make sure that you are using the definition that applies to your version of IDA, which can be found in defs.h
in your IDA executable's path under plugins/