Difference between revisions of "Discussions for Merging X3D AR Proposals"


Revision as of 16:25, 18 October 2012

As described in Plans for Merging X3D AR Proposals, here we discuss and produce a merged proposal for each functional component by investigating its features stepwise.

1. Camera video stream image into the scene (texture and background)

Node structure

There are three options to choose from when designing the new node structure for supporting a camera video stream in an X3D scene.

Option 1. Describe sensors explicitly

  • Define a node that represents the camera/image sensor, then route its output to other nodes (e.g. the PixelTexture node, or a new background node such as ImageBackground or MovieBackground)

All three proposals (KC1, KC2, and IR) support this model, with slightly different details.

  • Pros.
    • Open to other uses in the future (more extensible)
  • Cons.
    • Relatively more complicated for scene authors to write and for browsers to implement
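A minimal sketch of how Option 1 might look in an X3D scene. The CameraSensor node name, its fields, and the ROUTE field names below are illustrative assumptions, not taken verbatim from the KC1, KC2, or IR proposals (which differ in these details); the live image is routed to a standard PixelTexture on ordinary geometry:

```xml
<!-- Hypothetical explicit-sensor layout; node and field names are illustrative only. -->
<Scene>
  <!-- A node representing the physical camera/image sensor. -->
  <CameraSensor DEF='LiveCamera' enabled='true'/>
  <Shape>
    <Appearance>
      <!-- The live image is delivered into a standard PixelTexture. -->
      <PixelTexture DEF='VideoTexture'/>
    </Appearance>
    <Box size='4 3 0.01'/>
  </Shape>
  <!-- Each new camera frame updates the texture via a ROUTE. -->
  <ROUTE fromNode='LiveCamera' fromField='value_changed'
         toNode='VideoTexture' toField='set_image'/>
</Scene>
```

Because the sensor is its own node, the same output could later be routed elsewhere (e.g. to a background node), which is the extensibility advantage noted above.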


Option 2. Describe sensors implicitly

  • Define a node that represents a "background" or "texture" dedicated to showing user media (either from a camera device or a user-selected file)

KC1 proposes this option as an alternative, with a simpler structure for browser implementation and scene writing.

  • Pros.
    • Simpler from the content creator's perspective
    • Easier to implement and test, since there is less interaction with other nodes
  • Cons.
    • Single-purpose node, which might not be usable for much else
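Option 2 could instead look like the following sketch. The MovieBackground node here is one of the candidate names mentioned above, but its url='"camera"' convention for selecting live user media (versus a file URL for prerecorded video) is an illustrative assumption, not a field defined by any of the proposals:

```xml
<!-- Hypothetical implicit layout: the background node itself owns the camera feed. -->
<Scene>
  <!-- url='"camera"' is an illustrative convention meaning "use live user media";
       a file URL would select a prerecorded video instead. -->
  <MovieBackground url='"camera"'/>
  <Shape>
    <Appearance><Material diffuseColor='0.8 0.2 0.2'/></Appearance>
    <Sphere radius='1'/>
  </Shape>
</Scene>
```

No ROUTE is needed: the single node both acquires and displays the media, which is why this option is simpler to author and to implement but harder to repurpose.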


Option 3. Allowing both

  • Pros.
    • Lets users choose the option that meets their needs
  • Cons.
    • Browser developers bear the cost of implementing both


Selecting a video source

  • Reference: Adobe Flash camera access and the HTML5 getUserMedia() API

The scene author does not know the hardware setup of the machine viewing the scene, and accessing the camera on the user's device could be a privacy issue. Both Adobe Flash and HTML5 deal with this by asking the user to allow the browser to use the camera input. In addition, they also ask the user which camera or video file to use.
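As a point of reference, the permission-and-selection flow described above is roughly what the HTML5 getUserMedia() API provides. The sketch below (plain browser JavaScript; buildConstraints is an illustrative helper, not part of the API) shows how an X3D browser embedded in an HTML page might request a specific camera:

```javascript
// Build getUserMedia() constraints: a specific camera if the user picked one,
// otherwise any available video input. Audio is not needed for a background feed.
function buildConstraints(deviceId) {
  return {
    video: deviceId ? { deviceId: { exact: deviceId } } : true,
    audio: false,
  };
}

// Ask the browser for a camera stream. The browser itself shows the
// permission prompt; a rejected promise means the user denied access
// (or no matching device exists).
async function acquireCameraStream(deviceId) {
  return navigator.mediaDevices.getUserMedia(buildConstraints(deviceId));
}
```

The important point for the merged proposal is that device selection and permission both happen in the user agent, outside the scene file, so the X3D content never needs to name concrete hardware.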


2. Tracking (including support for general tracking devices)

3. Camera calibration (viewpoints)

4. Others (color-keying, depth occlusion)