Hey, so I wrote a pretty long but working Enigma program on my TI-82 that encodes and decodes a string input, like the Enigma machines used by the Germans in WWII. So far I've overcome several issues and it seems like I've fixed all the bugs, but one thing I noticed is that the time it takes to encode/decode one letter increases with each letter. After a letter is encoded, the output letter gets appended to the end of a string, and the whole string is displayed on the screen. The only thing that really changes between iterations is the length of this output string.
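The relevant part of the loop boils down to this pattern (a rough Python sketch just to show the structure, not my actual TI-BASIC; `enigma_encode` is a stand-in for the rotor/plugboard logic):

```python
def enigma_encode(ch):
    # placeholder for the actual rotor/plugboard substitution
    return ch

def encode_message(message):
    output = ""                      # everything encoded so far
    for ch in message:
        encoded = enigma_encode(ch)  # roughly constant work per letter
        output = output + encoded    # rebuilds the whole string: O(len(output))
        print(output)                # redraws the whole string: O(len(output))
    return output
```

So the only per-iteration work that grows as the message gets longer is the append to the output string and displaying that string again.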
It seems to take about 0.05 s longer with EACH letter. So after e.g. 10 letters it takes 0.5 s longer, and within 300 letters the time to en/decode a single letter roughly quadruples, from ~0.7 s per letter to over 2 seconds.
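If my guess is right and each letter adds a roughly constant extra delay d on top of a base time t0 per letter, then a message of n letters should take about n*t0 + d*n*(n-1)/2 in total, i.e. the whole run grows quadratically with the message length even though each individual step only gets a little slower.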
I don't think it has to do with the Enigma code itself, but with the string size. But is one extra letter per iteration really that significant? Does it make that much of a difference, adding 1.3 s to each iteration after 300 letters?