= Requirements and use cases of X3D functions to support AR and MR visualization =
 
By [http://www.web3d.org/x3d/wiki/index.php/X3D_and_Augmented_Reality Augmented Reality Working Group], Web3D Consortium
 
August 17, 2011
 
Last update: June 20, 2012
  
 
== 1. Requirements ==
 
=== 1.1 Functional Requirements ===
 
The new set of X3D specifications for supporting AR and MR visualization must include the following functions and features (illustrative sketches follow the list):

* Use live video stream as a texture in the X3D scene.
* Use live video stream as a background of the X3D scene.
* Retrieve tracking information of the position and orientation of physical objects (such as the camera device and markers).
* Use tracking information to change the position and orientation of arbitrary nodes in the X3D scene.
* Synchronize the video image and the tracking information.
* Retrieve calibration information of the camera device providing the video stream.
* Use calibration information to set the properties of (virtual) camera nodes.
* Specify a key color for chroma keying the live video stream texture, making pixels of that color appear transparent.
* Specify a group of nodes as representatives of physical objects, and render those nodes into the depth buffer only, not into the color buffer. As a result, the background video is revealed in the parts where the physical objects are rendered, producing correct occlusion between physical and virtual objects.
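
To make these requirements concrete, here is a minimal sketch of an AR scene. The CameraSensor and TrackingSensor nodes, their fields, and the set_image field on Background are hypothetical placeholders, not part of the current X3D specification; the working group may settle on different names and fields. Viewpoint, Transform, and the ROUTE mechanism are standard.

 <Scene>
   <!-- Hypothetical sensor exposing the live camera image and its calibration -->
   <CameraSensor DEF='cam' description='back facing camera'/>
   <!-- Hypothetical sensor exposing the tracked pose of a physical marker -->
   <TrackingSensor DEF='tracker' description='marker'/>
 
   <Background DEF='bg'/>                      <!-- live video as background (set_image is hypothetical) -->
   <Viewpoint DEF='view' fieldOfView='0.785'/> <!-- driven by camera calibration -->
 
   <Transform DEF='content'>                   <!-- driven by tracking information -->
     <Shape>
       <Appearance><Material diffuseColor='0.8 0.1 0.1'/></Appearance>
       <Box size='0.05 0.05 0.05'/>
     </Shape>
   </Transform>
 
   <!-- Video frames and tracking data are delivered as events, which keeps
        the image and the pose on a common time base for synchronization -->
   <ROUTE fromNode='cam' fromField='image_changed' toNode='bg' toField='set_image'/>
   <ROUTE fromNode='cam' fromField='fieldOfView_changed' toNode='view' toField='set_fieldOfView'/>
   <ROUTE fromNode='tracker' fromField='position_changed' toNode='content' toField='set_translation'/>
   <ROUTE fromNode='tracker' fromField='orientation_changed' toNode='content' toField='set_rotation'/>
 </Scene>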
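
The chroma-keying and occlusion requirements can be sketched the same way. GhostGroup below stands for a hypothetical grouping node whose children are rendered into the depth buffer only, keyColor for a hypothetical MovieTexture extension field, and the live:// URL is likewise only a placeholder; MovieTexture and Rectangle2D themselves are standard nodes.

 <!-- Stand-in geometry for a physical object: written to the depth buffer only,
      so the background video shows through while still occluding virtual objects
      (GhostGroup is hypothetical) -->
 <GhostGroup>
   <Shape>
     <Box size='0.1 0.1 0.1'/>
   </Shape>
 </GhostGroup>
 
 <!-- Chroma keying: pixels matching keyColor are treated as transparent
      (keyColor and the live:// URL scheme are hypothetical) -->
 <Shape>
   <Appearance>
     <MovieTexture url='"live://camera"' keyColor='0 1 0'/>
   </Appearance>
   <Rectangle2D size='1.6 0.9'/>
 </Shape>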
  
 
=== 1.2 Non-functional Requirements ===
 
The new set of X3D specifications for supporting AR and MR visualization must meet the following guidelines:

* Try to reuse/extend existing nodes as much as possible

In order to guarantee backward compatibility, specify the default value/behavior for each new field/feature.
 
For consistency, mixing multiple functions into a single node should be avoided.
 
* Device independence must be maintained

The scene description should be independent of the hardware/software environment (type of tracker, camera device, browser, etc.). Detailed hardware configuration should be adapted to, or reconfigured for, the user's hardware/software environment. The scene description should only specify the generic type/role of an interface (e.g. position tracker, orientation tracker, video source). Devices should be identified by high-level features (usage or generic role, e.g. main camera, front-facing camera, back-facing camera), not by low-level features (e.g. UUID, device number, port), as sketched below.
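
For instance, a scene might request a camera by role rather than by identifier, again using the hypothetical CameraSensor node from section 1.1; the second form is shown only as the anti-pattern to avoid.

 <!-- Portable: selection by high-level role -->
 <CameraSensor description='front facing camera'/>
 
 <!-- Not portable: selection by a low-level device identifier (to be avoided;
      the device field and its value are hypothetical) -->
 <CameraSensor device='usb:046d:0825'/>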
 
* Balance between simplicity and detailed control

Specify default values/behaviors to provide simplicity while still allowing detailed control. Follow the naming conventions of the current specification.

* New features must include examples/use cases that validate their compatibility with other features.
 
== 2. Use cases ==
 
These functions and features can be used in the following use cases:

* Augmented Reality applications, where the live video stream is shown in the background and the 3D scene is rendered registered to the physical space seen in the video. (Correct occlusion between virtual and physical objects can be achieved by preparing 3D models of the physical objects and specifying them as representatives of those objects.)

* Augmented Virtuality (or virtual studio) applications, where a live video stream of physical objects is placed within the 3D scene, as sketched below. (Only the foreground objects in the live video stream are shown if the video scene is prepared with a color matte in the background.)
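
A minimal sketch of the virtual-studio case, reusing the hypothetical keyColor field and live:// placeholder URL from section 1: the actor is filmed against a green matte, and the keyed video is placed on a quad inside the virtual set.

 <Transform translation='0 1.2 -2'>
   <Shape>
     <Appearance>
       <!-- live video of the actor; green matte pixels become transparent
            (keyColor and the live:// URL scheme are hypothetical) -->
       <MovieTexture url='"live://studio-camera"' keyColor='0 1 0'/>
     </Appearance>
     <Rectangle2D size='1 2'/>
   </Shape>
 </Transform>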
