I'm writing a program that converts numbers from base 10 to other bases. While testing it by converting back to base 10, I noticed that numbers like 1/3 and pi come out with 14 digits. To eliminate rounding error with repeating decimals, I want the calculator to round off to one digit less than its maximum accuracy; that way the output displays as a repeating decimal without any extra "pieces" tacked on the end, and I avoid the phenomenon where .99999… gets output as 0.0000… (where the  is in the base^(-1) position).
My first thought, after the round( function (which failed outright), was simply to tell the calculator

:.1F->F
:10F->F

(F is the fractional part of the number I'm converting.) The idea was to turn .33333333333333 into .03333333333333 and then back into .33333333333330, but for some reason that didn't work.
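My guess as to why: if the calculator stores numbers in floating point, i.e. a fixed-length mantissa times a power of ten, then multiplying by .1 only shifts the exponent; all 14 mantissa digits are kept, so nothing ever gets truncated. Python's decimal module can mimic that base-10 arithmetic. A sketch (the 14-digit precision is my assumption about the calculator, not something I can verify here):

```python
from decimal import Decimal, getcontext

getcontext().prec = 14            # mimic a calculator with 14 significant digits

F = Decimal(1) / Decimal(3)       # 0.33333333333333
shifted = Decimal('0.1') * F      # 0.033333333333333: same digits, exponent shifted
back = Decimal('10') * shifted    # multiplying back restores F exactly
assert back == F                  # so .1F followed by 10F drops nothing

# Rounding to one fewer SIGNIFICANT digit, on the other hand, does the job:
getcontext().prec = 13
assert +F == Decimal('0.3333333333333')   # unary + re-rounds to 13 digits
```

In other words, the precision lives in significant digits, not decimal places, which would explain why shifting the decimal point has no effect.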
I'm at a loss; does anyone have a better understanding of the calculator's floating-point decimals than I do?
I also want my program to calculate its own maximum accuracy; as of now, that has to be entered manually.
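For that last part, a standard trick is to probe the precision at run time: repeatedly divide a test value by 10 until adding it to 1 no longer changes the sum; the number of divisions is the number of digits the machine carries. I can only sketch it in Python here (detect_digits is my own name, and decimal stands in for the calculator's base-10 arithmetic):

```python
from decimal import Decimal, getcontext

def detect_digits(one, ten):
    """Divide eps by ten until adding it to one no longer registers;
    the number of divisions is the working decimal precision."""
    eps = one
    digits = 0
    while one + eps > one:
        eps = eps / ten
        digits += 1
    return digits

getcontext().prec = 14                         # stand-in for the calculator
print(detect_digits(Decimal(1), Decimal(10)))  # prints 14
print(detect_digits(1.0, 10.0))                # roughly 16 for binary doubles
```

On the calculator itself, something like 1->E:0->D:While 1+E>1:E/10->E:D+1->D:End should leave the digit count in D, though I haven't tested that form.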