I did. I've switched to using Blender so that there won't be any discrepancies.
In the 1st frame of the image below I have the mesh in Blender. The parameters were already as you see them, but I applied the object transform anyway just to rule anything out. I then exported this to a Collada file.
The 2nd frame is the mesh after I uploaded it to SL, saved a Collada file from SL, then imported it back into Blender. You can see that the scale transform parameters have changed, and the shading on the mesh is different because the vertex normals are pointing in a different direction. This will cause problems with normal maps, like I showed in my original post.
The 3rd frame is again the mesh from SL. If I apply the object transforms to the mesh, it fixes any shading issues because the process re-averages the normals. At this point the mesh looks basically like the one we first exported from Blender.
The 4th frame is interesting. This is the mesh from SL, but instead of applying the transforms I have just reset the scale (Alt+S while in Object Mode). You can see that the mesh is stretched to fit inside the volume of a cube. If SL computes the normals at this stage, this is how you end up with the result in the 2nd frame.
If a mesh fills more of the cubic volume, SL doesn't have to stretch it as much, so the issue with the vertex normals is less pronounced. It's just very evident in this case because the mesh is thin on one axis.
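To make the point concrete, here's a toy sketch (not SL's actual code, and the triangle/scale values are made up for illustration): if you recompute a face normal after a non-uniform scale, you get a different direction than the original normal, which is exactly why computing normals on the cube-stretched mesh gives the shading in the 2nd frame.

```python
# Toy illustration: recomputing a normal after a non-uniform scale
# produces a different direction than the original normal.

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(v):
    length = (v[0]**2 + v[1]**2 + v[2]**2) ** 0.5
    return (v[0]/length, v[1]/length, v[2]/length)

def face_normal(p0, p1, p2):
    # Normal of a triangle from the cross product of two edges.
    return normalize(cross(sub(p1, p0), sub(p2, p0)))

# A triangle tilted slightly off-axis, on a mesh that is thin in Z.
tri = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.1), (0.0, 1.0, 0.0)]

# Hypothetical non-uniform scale, like stretching the thin axis
# so the mesh fills a unit cube.
scale = (1.0, 1.0, 10.0)
stretched = [(x*scale[0], y*scale[1], z*scale[2]) for x, y, z in tri]

n_before = face_normal(*tri)        # normal of the original triangle
n_after = face_normal(*stretched)   # normal recomputed after stretching
print(n_before)
print(n_after)
```

The two printed normals point in noticeably different directions; averaging vertex normals from the stretched geometry bakes that error into the shading, which matches what the 2nd frame shows.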