Life can be rough for bass players, given the wavelength and speed of sound - you have to play ahead of the beat just for a note to show up on time! :-)

10ms of added latency is very irritating - even adding 4ms feels clunky on guitar/headphones, and that's typical for playing through a DAW and digital effects.

Drawing on a touchscreen is still like drawing underwater, and touch-to-sound latency is still too high in many cases. I can almost tolerate a bit of latency playing a weighted keyboard because I imagine that a hammer is hitting a string, so I adjust my playing accordingly. The keyboard also seems to trigger before the note bottoms out, which helps a bit (except when you're playing very quietly).

There was a terrible idea in the CRT era that when a user clicked on a menu you could do all sorts of computation to update menu contents and still have time to draw the menu before the next ~80Hz refresh. Even if you missed a frame, "humans couldn't notice it." Unfortunately this failed horribly in practice because it frequently pushed things over the edge, resulting in two frames of latency (or many more if latency wasn't managed well), which made things feel sluggish and unresponsive.

Even in 2019, though we have 120+ Hz user interfaces, asynchronous refresh, multi-gigahertz processors, and insane GPUs, menus are still clunky. Every time I click on a menu in macOS (Safari, for example, which I am using as I write this) I can almost hear it go "kachunk." Not even 10ms. (Maybe the delay is an intentional fade-in or…) Moral of the story: 10ms is terrible, and every bit of delay you add to a UI can push it (and your users) closer to the edge.

> Organ players have to deal with that kind of latency on their instrument, but it's a hard skill to learn, and limits the music that can be performed on the instrument.

Say, following a drummer is out of the question.

Field musicians (marching band, drum & bugle corps, etc.) have a similar issue because of the sheer size of the "stage": the distances between performers are large enough that the speed of sound actually causes notes to be off-beat to the audience if they're "on-beat" to the performer. Instead, each performer has to basically ignore one's own ears (at least for rhythm; one should by all means use one's ears for matching pitch, though in theory a sufficiently-fast-moving performer might need to pitch up/down relative to what one hears to account for Doppler shifts) and watch someone up front (e.g. a drum major or conductor). In your typical orchestra or symphonic band (or even rock/jazz/etc. band), the percussion section is typically positioned rear-most on the stage for this exact reason (and in this case, it is okay to listen for the beat, since the sound of said beat would be coming from behind you).

So for an organ player (organist?), your best bet would be to place the drummer behind the organ, then give the organist a mirror to watch the drummer's sticks and keep time with the visual beat. That way, by the time the organ's sound actually comes out, the drummer's sound will have arrived and they'll be in sync (assuming you have the distance right between the drums and organ; this is something you'll probably have to tune for every stage, given air-pressure-related differences in the speed of sound).
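To put rough numbers on the acoustics in the last few comments, here is a minimal sketch, assuming the usual dry-air approximation for the speed of sound (about 331.3 + 0.606 * T m/s at T degrees Celsius); the 10 m and 50 m distances and the 100 ms organ delay are invented for illustration, not figures from the thread:

```python
# Rough numbers for the stage-acoustics comments above.
# Assumption (mine, not the commenters'): speed of sound in dry air
# is approximately 331.3 + 0.606 * T m/s at T degrees Celsius.

def speed_of_sound(t_celsius=20.0):
    """Approximate speed of sound in dry air, in m/s."""
    return 331.3 + 0.606 * t_celsius

def travel_delay_ms(distance_m, t_celsius=20.0):
    """Milliseconds for sound to cover distance_m."""
    return 1000.0 * distance_m / speed_of_sound(t_celsius)

def sync_distance_m(latency_ms, t_celsius=20.0):
    """How far behind the organ to place the drummer so the drum sound's
    travel time cancels a given organ action latency."""
    return speed_of_sound(t_celsius) * latency_ms / 1000.0

print(travel_delay_ms(10))    # ~29 ms across a 10 m stage
print(travel_delay_ms(50))    # ~146 ms across a 50 m field: badly off-beat
print(sync_distance_m(100))   # a hypothetical 100 ms organ delay needs ~34 m
```

For scale: at 120 bpm a sixteenth note lasts 125 ms, so ~146 ms across a field is more than a whole subdivision late, which is why field musicians watch rather than listen.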
I shouldn't but I will anyway: people who are 100% sure of things are generally wrong, and this case is no exception. I'm being facetious here, but what if I told you that nearly all machines are virtual machines? Your CPU doesn't really execute instructions; it interprets them by first translating them to microcode. There's nothing inherent to the idea of not coding directly at the lowest level possible that forces your programs to have bad performance. I mean, Erlang uses a virtual machine called BEAM, Erlang programs sit at a remarkably high level of abstraction, and it's not like Erlang systems are known for their chronically high latency.
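To make the "nearly all machines are virtual machines" point concrete, here is a toy sketch in the same spirit; the two-instruction ISA and its micro-ops are invented for the example and bear no resemblance to real microcode:

```python
# A pretend "CPU" that doesn't execute its visible instructions directly:
# it first expands each one into simpler micro-ops, then interprets those.

MICROCODE = {
    # each visible instruction decodes to a list of micro-ops
    "INC": [("load", None), ("add", 1), ("store", None)],
    "DEC": [("load", None), ("add", -1), ("store", None)],
}

def run(program, acc=0):
    reg = 0  # hidden scratch register, invisible to the programmer
    for instr in program:
        for op, arg in MICROCODE[instr]:  # "translate to microcode"
            if op == "load":
                reg = acc
            elif op == "add":
                reg += arg
            elif op == "store":
                acc = reg
    return acc

print(run(["INC", "INC", "DEC"]))  # -> 1
```

The outer loop "executes" INC and DEC only in the sense that it interprets them, which is roughly the commenter's point about hardware.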