====== Fiducial tracking ======

In augmented reality and virtual reality applications, **fiducials** are often manually applied to objects in a scene so that the objects can be recognized in images of that scene -> [[wp>Fiduciary_marker]]

  * [[https://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_feature2d/py_feature_homography/py_feature_homography.html#feature-homography|Feature Matching + Homography to find Objects]] at OpenCV Python
  * [[https://github.com/mattvenn/fiducial|opencv python fiducial demo]]: Python code using numpy & cv2.
  * [[http://note.sonots.com/SciSoftware/haartraining.html|Tutorial: OpenCV haartraining (Rapid Object Detection With A Cascade of Boosted Classifiers Based on Haar-like Features)]]
  * [[http://reactivision.sourceforge.net/|reacTIVision]]: a toolkit for tangible multi-touch surfaces.
    * reacTIVision is an open-source, cross-platform computer vision framework for the fast and robust **tracking of fiducial markers** attached to physical objects, as well as for multi-touch finger tracking. It was mainly designed as a toolkit for the rapid development of table-based tangible user interfaces ([[/glossaire/TUI|TUI]]) and multi-touch interactive surfaces.

{{ http://raw.githubusercontent.com/mattvenn/fiducial/master/artags/artag_rotate.jpg?200}}

The [[http://www.artag.net/|ARTag]] "Magic Lens" and "Magic Mirror" systems use arrays of square ARTag markers added to objects or the environment. A computer vision algorithm calculates the camera "pose" from these markers in real time, so the CG (computer graphics) virtual camera can be aligned with the real one.

Read more on the [[http://funlab.fr/funwiki/doku.php?id=projets:fiducial|FunLab wiki]].
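As background for the OpenCV links above: once marker (or feature) correspondences between a reference image and a camera frame are known, the object is located by estimating a **homography**. Below is a minimal numpy-only sketch of the direct linear transform (DLT) that ''cv2.findHomography'' performs internally (OpenCV adds RANSAC for outlier rejection). The function name and the example point pairs are illustrative, not from the linked tutorials.

<code python>
import numpy as np

def find_homography(src, dst):
    """Estimate the 3x3 homography H mapping src points to dst points
    (at least 4 correspondences) with the direct linear transform.

    Each pair (x, y) -> (u, v) contributes two linear constraints on the
    9 entries of H; the solution is the null vector of the stacked
    constraint matrix, taken from the SVD."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # Null vector of A = row of V^T for the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so that H[2, 2] == 1

# Example: a unit square rotated 90 degrees and translated by (2, 0).
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(2, 0), (2, 1), (1, 1), (1, 0)]
H = find_homography(src, dst)

# Map an interior point through H (homogeneous coordinates).
p = H @ np.array([0.5, 0.5, 1.0])
print(p[:2] / p[2])  # close to [1.5, 0.5]
</code>

In a real tracking loop the correspondences would come from a feature matcher (e.g. ORB + BFMatcher, as in the OpenCV tutorial) or from decoded marker corners, and the resulting H is what lets the renderer warp virtual content onto the tracked object.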