Gather round kids, for I bring you another fascinating tale of the infinite quest for AR.
Where did we leave off last time? Ah, yes, the UDP packet wars. Those were actually quite boring and are solved by now.
So what remains? Yes, the math intrigues.
On the AR device we have an object pose, which is basically the virtual object's position in world space.
But don't we need the camera, uncle Pica?
Yes, we need that - we need the camera position in the ARCore world, and the center of the object.
We get that by taking the anchorPose and composing it with the camera pose.
Code: Select all
 anchorPose= anchorPose.compose(camera.getPose());
Now we have the center of our little world - in coordinates relative to the camera.
To make our lives easier we also send the camera rotation as a quaternion to the system:
Code: Select all
 camera.getPose().getRotationQuaternion(rotQuaternion,0);
Now the data arrives on the Spring side of town...
Code: Select all
function setCamera(cam_mat, rot_quat)
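Before diving in, the shapes I assume for the two arguments (example values are made up; the actual parsing happened back in the UDP chapter): cam_mat as a row-major 4x4 table with the translation in the fourth column, rot_quat as {x, y, z, w}.
Code: Select all
	-- assumed shapes of the incoming data (example values only)
	local example_cam_mat = {
		{1, 0, 0,  0.3},
		{0, 1, 0,  1.5},
		{0, 0, 1, -0.7},
		{0, 0, 0,  1  },
	}
	local example_rot_quat = {0, 0, 0, 1} -- identity rotation (x, y, z, w)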
We first determine the biggest dimension of the map
Code: Select all
	MAX_MAP_SIZE = math.max(mapSizeZ,mapSizeX)
From this we calculate the scale factor - ARCore's dimensions are in meters
Code: Select all
	--scaleFactor = (original scale (1m) / size of the real-world square (e.g. 2m)) * biggest map dimension in elmos
	scaleFactor = ((1/SIZE_SPRING_SQUARE)* MAX_MAP_SIZE)
We scale the matrix so that the object's distance from the camera comes out in Spring's scale (elmos) instead of meters
	
Code: Select all
	world_mat= scaleMatrice(cam_mat, scaleFactor,scaleFactor,scaleFactor)
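For the curious: a scaleMatrice helper along these lines would do the job - a minimal sketch, assuming it simply pre-multiplies the 4x4 pose with a diagonal scale matrix diag(sx, sy, sz, 1); the real one may differ:
Code: Select all
	-- hypothetical sketch: pre-multiply the pose with diag(sx, sy, sz, 1),
	-- so the metric translation part ends up in elmos
	function scaleMatrice(mat, sx, sy, sz)
		local scale = {sx, sy, sz, 1}
		local result = matrix(4,4,0)
		for i=1,4 do
			for j=1,4 do
				result[i][j] = mat[i][j] * scale[i]
			end
		end
		return result
	end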
Now, to get the position of the camera in the world, we have to calculate the adjugate matrix, which is done by computing the minor matrices and the cofactors.
Code: Select all
	--calculate the adjugate - the matrix of cofactors (transposed below)
	minor_mat= matrix(4,4,0)
	for i=1,#minor_mat do
		for j=1,#minor_mat do
			-- determinant of the minor times the cofactor sign (-1)^(i+j)
			-- note: in Lua, -1^i parses as -(1^i), so the sign must be written as ((-1)^(i+j))
			minor_mat[i][j] = matrix.det(getMinorMat(world_mat,i,j)) * ((-1)^(i+j))
		end
	end
	
	--transpose the cofactor matrix into the adjugate
	adjunct = matrix.transpose(minor_mat)
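getMinorMat strikes out row i and column j and returns the remaining 3x3; it is not shown above, so here is a minimal sketch of such a helper, assuming the same matrix library constructor as above:
Code: Select all
	-- build the 3x3 minor of a 4x4 matrix by removing row i and column j
	function getMinorMat(mat, i, j)
		local minor = matrix(3,3,0)
		local mi = 1
		for row = 1, 4 do
			if row ~= i then
				local mj = 1
				for col = 1, 4 do
					if col ~= j then
						minor[mi][mj] = mat[row][col]
						mj = mj + 1
					end
				end
				mi = mi + 1
			end
		end
		return minor
	end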
Finally we compute the inverse matrix to get the coordinates of the camera relative to the object in the world
	
Code: Select all
inverse_mat = (1/matrix.det(world_mat))* adjunct
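To check that the adjugate-and-determinant dance really produced an inverse, one can multiply the two and compare against the identity - a small sanity-check sketch with plain loops, so it does not depend on any particular matrix library:
Code: Select all
	-- sanity check: world_mat * inverse_mat should be (close to) the identity
	local function productIsIdentity(a, b, eps)
		eps = eps or 0.0001
		for i=1,4 do
			for j=1,4 do
				local sum = 0
				for k=1,4 do
					sum = sum + a[i][k] * b[k][j]
				end
				local expected = (i == j) and 1 or 0
				if math.abs(sum - expected) > eps then
					return false
				end
			end
		end
		return true
	end
	assert(productIsIdentity(world_mat, inverse_mat), "world_mat * inverse_mat is not the identity")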
Because the inverse matrix might not be normalized, we do
Code: Select all
	--normalize the matrix
	norm = 1/inverse_mat[4][4]
	inverse_mat= inverse_mat * norm
Now we can reap what we sowed - we get the camera position from the inverse and plug that into the camera state.
Code: Select all
	 camState= Spring.GetCameraState()
	camState.px= inverse_mat[1][4]
	camState.pz= inverse_mat[3][4]
	camState.py= inverse_mat[2][4]
For the quaternion to Euler angles conversion - radians as we imagine them - I found a nice little JavaScript function.
Code: Select all
	camState.rx, camState.rz,	camState.ry= quaternionToEulerAngle(rot_quat[1],rot_quat[2],rot_quat[3],rot_quat[4])
	
	Spring.SetCameraState(camState)
end
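The conversion itself is the textbook quaternion-to-Euler formula; a minimal Lua sketch of such a quaternionToEulerAngle (argument order x, y, z, w, matching what getRotationQuaternion writes into the array; the ported JavaScript original may differ in details):
Code: Select all
function quaternionToEulerAngle(x, y, z, w)
	local atan2 = math.atan2 or math.atan -- Lua 5.1 has atan2, 5.3+ takes atan(y, x)

	-- rotation around x (roll)
	local sinr = 2 * (w * x + y * z)
	local cosr = 1 - 2 * (x * x + y * y)
	local roll = atan2(sinr, cosr)

	-- rotation around y (pitch), clamped to avoid NaN at the poles
	local sinp = 2 * (w * y - z * x)
	if sinp > 1 then sinp = 1 elseif sinp < -1 then sinp = -1 end
	local pitch = math.asin(sinp)

	-- rotation around z (yaw)
	local siny = 2 * (w * z + x * y)
	local cosy = 1 - 2 * (y * y + z * z)
	local yaw = atan2(siny, cosy)

	return roll, pitch, yaw -- all in radians
end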
Also it fails, of course it fails.