Can't remember when or where I saw it, but I do seem to recall reading that when a 68k calculator evaluates a function on a matrix A, the actual steps are something like:
1. find an eigendecomposition (A=VΛV⁻¹, with Λ the diagonal matrix of eigenvalues and V the matrix whose columns are the corresponding eigenvectors);
2. evaluate the function (noninteger powers, exponential/logarithm, trig functions, etc.) at each of the eigenvalues, thus forming the diagonal matrix f(Λ);
3. multiply back by V and V⁻¹ to form the result: f(A)=Vf(Λ)V⁻¹,
which is also the reason why these functions will fail on so-called "defective" matrices, e.g. [[2,1,0],[0,2,1],[0,0,2]] (a so-called "Jordan block", with the eigenvalue 2 of algebraic multiplicity 3 but only a single eigenvector).
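The three steps above can be sketched in NumPy (a hypothetical reconstruction of the procedure, not TI's actual code), together with a cross-check against a truncated power series and a demonstration of why the defective Jordan block breaks it:

```python
import numpy as np

def matfunc(A, f):
    """Apply a scalar function to a matrix via eigendecomposition:
    A = V Λ V⁻¹  ⇒  f(A) = V f(Λ) V⁻¹."""
    lam, V = np.linalg.eig(A)                      # step 1: eigendecomposition
    return V @ np.diag(f(lam)) @ np.linalg.inv(V)  # steps 2 and 3

A = np.array([[2.0, 1.0], [1.0, 3.0]])
expA = matfunc(A, np.exp)

# Cross-check against a truncated power series  Σ A^k / k!
series = np.zeros_like(A)
term = np.eye(2)
for k in range(1, 30):
    series += term
    term = term @ A / k
print(np.allclose(expA, series))

# The defective Jordan block: the computed eigenvector matrix V is
# numerically singular, so inverting it (step 3) is hopeless.
J = np.array([[2.0, 1.0, 0.0], [0.0, 2.0, 1.0], [0.0, 0.0, 2.0]])
_, V = np.linalg.eig(J)
print(np.linalg.cond(V))   # enormous condition number
```

On the Jordan block, `np.linalg.eig` returns three nearly parallel eigenvector columns, which is the numerical face of "only a single eigenvector".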
Of course, a power series calculation (in a suitably modified form) is more efficient for specific functions like the matrix exponential, but it looks like TI went for generality in how it evaluates functions of a matrix.
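For comparison, here is a naive truncated Taylor series for the matrix exponential (just the plain sum, not the "suitably modified form" production implementations use, such as scaling-and-squaring). Unlike the eigendecomposition route, it handles the defective Jordan block without complaint:

```python
import numpy as np

def exp_series(A, terms=40):
    """Naive truncated Taylor series: exp(A) ≈ Σ_{k<terms} A^k / k!
    (no scaling-and-squaring, so only sensible for small ‖A‖)."""
    acc = np.zeros_like(A, dtype=float)
    term = np.eye(A.shape[0])
    for k in range(1, terms + 1):
        acc += term
        term = term @ A / k
    return acc

# Sanity check with an exact answer: N is nilpotent, so exp(N) = I + N.
N = np.array([[0.0, 1.0], [0.0, 0.0]])
print(np.allclose(exp_series(N), [[1.0, 1.0], [0.0, 1.0]]))

# No trouble with the defective Jordan block mentioned above.
J = np.array([[2.0, 1.0, 0.0], [0.0, 2.0, 1.0], [0.0, 0.0, 2.0]])
print(exp_series(J))
```

The series needs no eigenvectors at all, which is exactly why it survives the defective case, at the cost of being tied to one particular function rather than an arbitrary scalar f.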