
I am working on an algorithm whose spec says that it accepts a 32-bit input (long type).

My actual data is a 14-character string, e.g. 11:12:12:04:DD

I have created sub arrays like

subarray1[4] = {'1', '1', ':', '1'}; // 32 bits

Can I pass this subarray to my algorithm?

Actually, when I print the subarray as a string it gives garbage values, but after increasing the size to 5 it prints fine. But now it has grown to 5 bytes (more than required by the algorithm).

How can I create a 32-bit value from my data above so that it can be passed to the algorithm?

gnat

1 Answer


Quick heads-up: this site is actually about Software Engineering. You won't get your code written for you here. If you have problems with code that you have written, refer to Code Review SE or Stack Overflow. Check out the guidelines on how to format a successful question.

The outermost problem here is that you do not know how the input data relates to the desired output. This is a requirements/design problem, and it cannot, and should not, be solved by you alone.

You need to get back to whoever gave you this assignment and ask for an explanation, e.g.

How do I go from 00:11:22:33 to the 32-bit value? And what is the value in this case? May I have a couple of examples?

The likely explanation, which I cannot confirm since it is just a possibility, is that the format is similar to the MAC address representation: xx:xx:xx:xx are four bytes represented in hexadecimal.

So the inner problem here is that you need to split the string into its constituent parts

01:ab:38:af:17   => 01, ab, 38, af, 17

which depends on what language you're working with. Then you need to convert each component into its 8-bit integer representation, where ff is 0xff, or 255, i.e. 11111111 in binary. Finally, you must know for certain how the bytes are assembled: for example, should 0f:00:f0 become 000011110000000011110000, or rather 111100000000000000001111? Is the first byte the least significant one, or the most significant one? This must be addressed in the design specs, which should have been given to you. You cannot decide this by yourself (and if you do, you have a 50% chance of being wrong, and a 100% chance of being held responsible if you are. In some firms you'll be held accountable even if you are right, for having covered someone else's lack of professionalism).

The fact that you were not told is a third problem worth tackling, as it may indicate some bad procedural juju in your development process. On the other hand, in some firms and with some coworkers, it might be seen as throwing someone under the bus. Your call. You may want to bring the matter to Workplace SE.

Then we have a fourth problem, which might not be real and may just arise from my interpretation that 11:12:12:04:DD is actually the five bytes 0x11, 0x12, 0x12, 0x04 and 0xDD in either LSB or MSB order: five bytes will never fit into 32 bits, since 5*8 is 40 bits.

Either the specs have "grown" to 64-bit integers, or there's something badly messed up in designer land.

LSerni