
Just so you know, I looked at my code and found a few other "omissions" - I must really stop trying to pull all-nighters in series - it was even *more* embarrassing - glad nobody could see what I saw!!! If I drank, I'd have thought I was drunk when I wrote that... guess I got no real excuse, then, huh?


For what it's worth - to clarify everything (and try to restore some shred of my dignity - no hope there!) - the four-byte hexadecimal values look like this in your Hex Editor:
[00] [00] [05] [BA]
...and it's pretty straightforward to convert those values from hexadecimal to decimal. But when you're in DBPro, those bytes are read as decimal values (not hexadecimal, even though that's how the Hex Editor displays them). NOW what do you do? Your byte values will read [0] [0] [5] [186] in DBPro.

The secret is in knowing which "places" the bytes occupy - and performing a little "scalar" multiplication before adding them all back together. "Scalar" here refers to the scale - the order of magnitude - of each place, not necessarily "a quantity that has a magnitude but no direction". (Although the two definitions come pretty close to one another.)

But what should those multiplicative values be? That's the easy part - once you see it, the rest falls into place. (Anybody here who has studied a little number theory will be with me on this part.)
If you think about it, the byte values are retrieved as decimal values, but their *places* are still laid out in hexadecimal. No, I haven't gone mad: it's the truth! The places are in hex, but the values are in decimal.
So, if you open a file in your Hex Editor and see the four-byte pattern [03][D1][09][A3], what do you think those four (byte) values will be when you read them into DBPro variables? They will be the decimal values [3][209][9][163]. Trust me - go on and try it. I'll wait...
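
If you want to try it yourself, here's a rough sketch of that test in DBPro. The file name (and the file number) are just placeholders - point it at any file whose first four bytes you've already peeked at in your Hex Editor:

rem Quick test: read four bytes and see what DBPro hands you
rem ("test.bin" is just a placeholder file name)
open to read 1, "test.bin"
read byte 1, b1
read byte 1, b2
read byte 1, b3
read byte 1, b4
close file 1

rem These print as plain decimal numbers - e.g. 3, 209, 9 and 163
print str$(b1) + " " + str$(b2) + " " + str$(b3) + " " + str$(b4)
wait key
end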


But what should they be? How do you get the correct decimal value from those four bytes? Well, if you look at my explanation under the +Code+ button above, you'll find a formula for turning those four (hexadecimal) bytes into a single decimal value:

...is the relationship recognizable? I'm getting some pretty good "deer in the headlights" looks from the middle of the classroom here. You see, each decimal value is the FULL value (for its place) of the two hexadecimal digits sitting in that byte - in that place. In this case we know there are four places, and each byte holds the full value of the two hexadecimal digits it occupies in our Hex Editor. So all we need to know is which "place" a value occupies, in order to know what order of magnitude it belongs to - and therefore what to multiply it by:
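
Here's a little sketch of that multiplication in DBPro, using the [03][D1][09][A3] example from above (the variable names are just mine). The four places, left to right, are worth 16777216, 65536, 256 and 1:

rem The bytes read in as decimal 3, 209, 9, 163
b1 = 3
b2 = 209
b3 = 9
b4 = 163

rem Multiply each byte by the "worth" of its place, then add them up
value = (b1 * 16777216) + (b2 * 65536) + (b3 * 256) + b4

rem 50331648 + 13697024 + 2304 + 163 = 64031139 (hex 03D109A3)
print str$(value)
wait key
end

(One small caveat, as far as I know: DBPro integers are signed 32-bit, so a value whose first byte is 128 or higher will wrap around negative. That won't bite you on typical MIDI chunk lengths, but it's worth keeping in the back of your mind.)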

Magic, right? Yep, just like magic. And once you know the trick, well, it still mystifies others.

...And the Big-Endian vs. Little-Endian thing is a Mac vs. Intel thing: for a given data type (which takes up a certain amount of memory, regardless of its actual value), the data can be stored "Little End First", like the Intel people do it, or "Big End First", like the Mac guys used to. In "Big-Endian" format, our number above is stored as [03][D1][09][A3]. In "Little-Endian" format, the bytes are stored in the reverse order: [A3][09][D1][03]. That's it - that's the big difference.
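
If you ever do run into a little-endian value, the only change is which byte gets which multiplier - the last byte you read gets the biggest one. A quick sketch with the same example bytes, stored little end first:

rem Little-endian storage of the same number: [A3][09][D1][03]
rem Read in decimal, that's 163, 9, 209, 3
b1 = 163
b2 = 9
b3 = 209
b4 = 3

rem The FIRST byte is now the "ones" place, the LAST byte the biggest place
value = b1 + (b2 * 256) + (b3 * 65536) + (b4 * 16777216)

rem ...which comes out to the same 64031139 as before
print str$(value)
wait key
end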

Since I'm not teaching Basic Algorithms this summer, we won't need to discuss it any further. (Unless you really want to, that is...)


Thank you for your time and attention. Sorry I almost caused a stir about nothing. I included the above message to clarify what was going on, and to (hopefully) de-mystify a few things about MIDI data encoding. Multi-byte values in a MIDI file (the four-byte chunk lengths, for example) are written in "big-endian" format. And once you know that, you're ready to parse MIDI at the file level...
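
To make that last bit concrete, here's a minimal sketch of the very first step - reading the "MThd" tag and its four-byte, big-endian chunk length from the start of a MIDI file. The file name is a placeholder, and ReadBE32 is just a helper name I made up:

rem Sketch: read the header chunk of a MIDI file ("song.mid" is a placeholder)
open to read 1, "song.mid"

rem The first four bytes are the ASCII tag "MThd" (77, 84, 104, 100)
read byte 1, t1
read byte 1, t2
read byte 1, t3
read byte 1, t4

rem The next four bytes are the chunk length - big-endian, and always 6 for "MThd"
headerLength = ReadBE32(1)
print "Header chunk length: " + str$(headerLength)

close file 1
wait key
end

rem Reads the next four bytes from file f as one big-endian value
function ReadBE32(f)
   read byte f, b1
   read byte f, b2
   read byte f, b3
   read byte f, b4
   value = (b1 * 16777216) + (b2 * 65536) + (b3 * 256) + b4
endfunction value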