X3D AR Requirements and Use cases

From Web3D.org
Revision as of 14:46, 23 October 2012 by Endovert (Talk | contribs)


Requirements and use cases of X3D functions to support AR and MR visualization

By Augmented Reality Working Group, Web3D Consortium

August 17, 2011

Last update: June 20, 2012

1. Requirements

1.1 Functional Requirements

The extended X3D specification for supporting AR and MR visualization must include the following functions and features:

  • Use live video stream as a texture in the X3D scene.
  • Use live video stream as a background of the X3D scene.
  • Retrieve tracking information of the position and orientation of physical objects (such as the camera device and markers).
  • Use tracking information to change the position and orientation of arbitrary nodes in the X3D scene.
  • Synchronize the video image with the tracking information.
  • Retrieve calibration information of the camera device providing the video stream.
  • Use calibration information to set properties of (virtual) camera nodes.
  • Specify a key color for chroma keying the live video stream texture, making pixels of that color appear transparent.
  • Specify a group of nodes as representatives of physical objects, and render those nodes into the depth buffer but not into the color buffer. As a result, the background video is revealed in those parts where physical objects are rendered, producing correct occlusion between physical and virtual objects.
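Several of the requirements above can be illustrated with a minimal scene sketch. The node and field names below (LiveVideoBackground, TrackingSensor, and the role/source values) are hypothetical placeholders for illustration only, not part of the current X3D specification; the working group's actual node proposals may differ.

```
<!-- Hypothetical sketch: live video as scene background, with tracking
     information routed to a Transform. Node/field names are placeholders. -->
<Scene>
  <!-- Live video stream used as the scene background (hypothetical node) -->
  <LiveVideoBackground DEF='camBg' source='BACK_FACING_CAMERA'/>

  <!-- Tracked position/orientation of a physical marker (hypothetical node) -->
  <TrackingSensor DEF='markerTracker' role='POSITION_ORIENTATION'/>

  <!-- Virtual object whose pose follows the tracked marker -->
  <Transform DEF='tracked'>
    <Shape>
      <Appearance><Material diffuseColor='1 0 0'/></Appearance>
      <Box size='0.1 0.1 0.1'/>
    </Shape>
  </Transform>

  <ROUTE fromNode='markerTracker' fromField='position_changed'
         toNode='tracked' toField='set_translation'/>
  <ROUTE fromNode='markerTracker' fromField='orientation_changed'
         toNode='tracked' toField='set_rotation'/>
</Scene>
```

Note that the sensor is identified by its generic role rather than by a device ID, in line with the device-independence guideline in section 1.2.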

1.2 Non-functional Requirements

The extended X3D specification for supporting AR and MR visualization must meet the following guidelines:

  • Try to reuse/extend existing nodes as much as possible

To guarantee backward compatibility, specify a default value/behavior for each new field or feature. For consistency, avoid mixing multiple functions into a single node.

  • Device independence must be maintained

The scene description should be independent of the hardware/software environment (type of tracker, camera device, browser, etc.). Detailed hardware configuration should be adapted to, or reconfigured for, the user's hardware/software environment. The scene description should specify only the generic type/role of an interface (e.g. position tracker, orientation tracker, video source), identifying devices by high-level features (usage or generic setup, e.g. main camera, front-facing camera, back-facing camera) rather than by low-level features (e.g. UUID, device number, port).

  • Balance between simplicity and detail control

Specify default values/behaviors to provide simplicity while still allowing detailed control. Follow the naming conventions of the current specification.

  • New features must include examples/use cases that demonstrate their compatibility with other features.

2. Use cases

The functions and features could be used in the following use cases:

- Augmented Reality applications, where the live video stream is shown in the background and the 3D scene is rendered as registered to the physical space seen in the live video stream. (Correct occlusion between virtual and physical objects can be achieved by preparing 3D models of the physical objects and specifying them as representatives of those physical objects.)
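The occlusion technique mentioned in this use case can be sketched as a group of proxy geometry rendered into the depth buffer only. The node name PhantomGroup and its depthOnly field are hypothetical illustrations, assuming a physical table in front of the camera:

```
<!-- Hypothetical: proxy geometry matching a physical table top, rendered
     into the depth buffer only. Virtual objects behind the table are
     hidden, and the background video shows through where it stands. -->
<PhantomGroup depthOnly='true'>
  <Transform translation='0 -0.4 -1'>
    <Shape>
      <Box size='1.2 0.02 0.8'/>  <!-- sized to the real table top -->
    </Shape>
  </Transform>
</PhantomGroup>
```

The proxy model must be registered to the same tracked coordinate frame as the rest of the scene for the occlusion boundary to line up with the physical object in the video.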

- Augmented Virtuality (or virtual studio) applications, where a live video stream of physical objects can be placed within the 3D scene. (Only the foreground objects in the live video stream are shown, provided the video is captured with a color matte behind them.)
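The color-matte case might be expressed by a chroma-key field on a live video texture. Both LiveVideoTexture and its keyColor field are hypothetical names for illustration, with pure green assumed as the matte color:

```
<!-- Hypothetical: live video texture with green pixels made transparent,
     placed on a flat plane inside the virtual scene. -->
<Shape>
  <Appearance>
    <!-- keyColor is a hypothetical chroma-key field -->
    <LiveVideoTexture source='STUDIO_CAMERA' keyColor='0 1 0'/>
  </Appearance>
  <Rectangle2D size='1.6 0.9'/>  <!-- plane carrying the keyed video -->
</Shape>
```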
