It's always possible that the hardware employed somehow influenced the results. So here is what was used:
- Arduino Uno compatible microcontroller board
- Data logger shield with SD card capabilities
- Vishay 38 kHz IR receiver
- High-power, 940 nm infrared LED for the code emitter
SEGMENT BOUNDARY DETERMINATION
The start of each song/segment was determined simply by a departure from the intermission routines, while the end of each song appears to be signified by the trailer sequence: 98 25 0D 48 10 D0 42 F3 FF 24 51
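The trailer-based end-of-segment test can be sketched as a simple scan over the received code bytes. The trailer bytes come from the notes above; the function and variable names here are illustrative, not part of any existing script.

```python
# End-of-segment trailer observed in the recordings (from the notes above).
TRAILER = [0x98, 0x25, 0x0D, 0x48, 0x10, 0xD0, 0x42, 0xF3, 0xFF, 0x24, 0x51]

def find_trailer(codes):
    """Return the index where the trailer sequence starts, or -1 if absent."""
    n = len(TRAILER)
    for i in range(len(codes) - n + 1):
        if codes[i:i + n] == TRAILER:
            return i
    return -1

# Two arbitrary codes, then the trailer: it should be found at index 2.
codes = [0x12, 0x34] + TRAILER + [0x00]
print(find_trailer(codes))  # 2
```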
The portions of the recordings containing the various songs were extracted from full session recordings. No adjustment of the segments' timestamps has been done, i.e., none of the recordings is set to start at time=0. The PlaybackMouseEarFile script allows an offset time to be set, so this shouldn't pose much of a problem, but it is something to be aware of.
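If rebasing a segment to time=0 is ever wanted, it amounts to subtracting the first timestamp from every record. The (timestamp_ms, code) record layout below is an assumption for illustration; the actual file format and PlaybackMouseEarFile's offset handling may differ.

```python
# Sketch: shift a segment's timestamps so playback starts at t=0.
# Records are assumed to be (timestamp_ms, code) pairs -- an illustrative
# layout, not necessarily the real recording format.

def rebase(records):
    """Subtract the first timestamp from every record."""
    if not records:
        return []
    t0 = records[0][0]
    return [(t - t0, code) for t, code in records]

# A segment extracted mid-session, starting around the 120.5 s mark:
segment = [(120500, 0x98), (120620, 0x25), (122010, 0x0D)]
print(rebase(segment))  # [(0, 152), (120, 37), (1510, 13)]
```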
The run times of the MWM recordings often don't exactly match the run times of the videos, and several factors come into play. Videos may not begin exactly with the start of the song, and it's common for the MWM emitters to begin sending codes a second or so before they are intended to be executed, which can easily add a second or two to the front end of a recording. It's also quite common for MWM shows to begin with a series of initialization commands that must be enacted before the first actual performance code is executed. On the back end, as noted above, each segment ends with the 98 25 0D 48 10 D0 42 F3 FF 24 51 trailer, and the FF code alone should be good for an extra 1.5 sec.

So it's understandable when the MWM run time is longer than the audio/video. However, there are a couple of segments for which the MWM run time is shorter than the audio/video. That I can't explain at this time; these recordings haven't really been studied yet.
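The expected-discrepancy reasoning above can be turned into a rough sanity check: a recording should be no shorter than its video, and no longer than the lead-in plus trailer slack. The 1.5 s figure for the FF code comes from the notes; the 2 s lead-in allowance is my own rough assumption.

```python
# Sketch: flag MWM segments whose run time disagrees with the audio/video
# beyond the expected slack. Allowance values are assumptions, not measured.

TRAILER_ALLOWANCE_S = 1.5   # the FF code in the trailer (from the notes above)
LEAD_IN_ALLOWANCE_S = 2.0   # assumed slack for early codes / initialization

def run_time_gap(mwm_duration_s, video_duration_s):
    """Positive: MWM recording is longer than the video, as expected.
    Negative: MWM is shorter, which the notes can't yet explain."""
    return mwm_duration_s - video_duration_s

def within_expected(mwm_duration_s, video_duration_s):
    gap = run_time_gap(mwm_duration_s, video_duration_s)
    return 0 <= gap <= LEAD_IN_ALLOWANCE_S + TRAILER_ALLOWANCE_S

print(within_expected(183.0, 180.5))  # True: 2.5 s longer, within allowance
print(within_expected(178.0, 180.5))  # False: recording shorter than video
```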