Comparison of X3D AR Proposals

Comparison between existing proposals - Working Draft

Augmented Reality Working Group, Web3D Consortium

Jan 25, 2012


1. Introduction

This document compares the existing proposals for extending X3D to support augmented and mixed reality visualization. Three main proposals are compared in terms of requirements: two from the Korea Chapter (A, B) and one from Instant Reality (C).


2. Using Live Video stream as a texture

2.1 Proposal A

This proposal introduces a new sensor node, CameraSensor (previously named LiveCamera), for retrieving live video data from a camera device and routing the video stream to a PixelTexture node. The X3D browser is in charge of implementing and handling the devices and mapping the video data to the CameraSensor node inside the X3D scene. The video stream itself is provided as the value (SFImage) field of the node, which is updated every frame by the browser implementation according to the camera data.

CameraSensor : X3DDirectSensorNode {
   SFImage    [out] value
   SFBool     [out] on          FALSE
   SFMatrix4f [out] projmat     "1 0 0 0 …"
   SFBool     [out] tracking    FALSE
   SFVec3f    [out] position
   SFRotation [out] orientation
}
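
As a minimal usage sketch (the DEF names 'camSensor' and 'tex' are illustrative, not part of the proposal), the value field can be routed to a PixelTexture applied to some geometry:

<CameraSensor DEF='camSensor'/>

<Shape>
    <Appearance>
        <PixelTexture DEF='tex'/>
    </Appearance>
    <Box size='2 2 2'/>
</Shape>

<ROUTE fromNode='camSensor' fromField='value' toNode='tex' toField='image'/>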

While this is straightforward, routing SFImage values might lead to performance and implementation problems. As an alternative, the same proposal also proposes extending the behavior of the existing MovieTexture node to support a live video stream within the node. The proposed behavior is for the X3D browser to let the user select a file or a camera device for the MovieTexture node in the scene if the url field of the node is empty (or filled with a special token value, such as 'USER_CUSTOMIZED').


<Appearance>
      <MovieTexture loop='true'   url=''/> 
</Appearance>

While this approach avoids performance problems by not exposing SFImage fields updated in real time, it lacks support for using the live video stream for other purposes, such as the background. This is partially addressed by adding a new MovieBackground node, which behaves similarly to MovieTexture but uses the user-selected movie file or live camera video stream to fill the background of the 3D scene.

2.2 Proposal B

The proposal from Gerard Kim, of the Korea Chapter, proposes a new sensor node, …

2.3 Proposal C

This proposal proposes a general-purpose IOSensor node, which allows external devices (e.g., joysticks and cameras) to be accessed inside the X3D scene.

<IOSensor system='auto' type='' name='' triggerName='Interaction' maxValuesPerTrigger='1' description='' enabled='TRUE' logFeature='' />


The camera sensor (including marker tracking) is loaded through an instance of IOSensor, by defining the type of the sensor and its fields.

<IOSensor DEF='VisionLib' type='VisionLib' configFile='TutorialMarkerTracking_OneMarker.pm'>
    <field accessType='outputOnly' name='VideoSourceImage' type='SFImage'/>
    <field accessType='outputOnly' name='TrackedObject1Camera_ModelView' type='SFMatrix4f'/>
    <field accessType='outputOnly' name='TrackedObject1Camera_PrincipalPoint' type='SFVec2f'/>
    <field accessType='outputOnly' name='TrackedObject1Camera_FOV_horizontal' type='SFFloat'/>
    <field accessType='outputOnly' name='TrackedObject1Camera_FOV_vertical' type='SFFloat'/>
    <field accessType='outputOnly' name='TrackedObject1Camera_CAM_aspect' type='SFFloat'/>
</IOSensor>

Using the camera image for texture is nothing more than routing the VideoSourceImage field of the IOSensor node to a PixelTexture node.
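
For example (a sketch; the DEF name 'camTex' for a PixelTexture inside some Appearance is illustrative and not part of the proposal):

<ROUTE fromNode='VisionLib' fromField='VideoSourceImage' toNode='camTex' toField='image'/>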

2.4 Discussion

Proposals A and B propose a new node specific to a camera, while C proposes a more generic node type applicable to a variety of sensors. The trade-off between simplicity and flexibility/extensibility needs further discussion.



3. Using Live Video stream as a background

3.1 Proposal A

The proposal introduces a MovieBackground node, extended from the Background node to support a 'liveSource' field, which is assigned a CameraSensor node (as described in 2.1) from which the background receives the live video stream. Once the 'liveSource' field is assigned a valid CameraSensor node, the background image is updated according to the live video stream from that node. For other uses, it could also have a url field to which a general movie clip source can be assigned and used as a background.

MovieBackground : X3DBackgroundNode {
     ... // same as the original Background node
     SFString [in] url
     SFNode   [in] liveSource
}
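
A hypothetical usage sketch (assuming the liveSource field can be filled by node containment via containerField and reusing a DEF'd CameraSensor named 'camSensor' — both assumptions, not part of the proposal text):

<CameraSensor DEF='camSensor'/>

<MovieBackground>
    <CameraSensor USE='camSensor' containerField='liveSource'/>
</MovieBackground>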

Similar to the case in 2.1, the proposal also suggests an alternative approach in which the MovieBackground node doesn't explicitly need a CameraSensor node, but instead lets the browser ask the user to choose the movie source (including a camera device) when the url field is left empty (or filled with a special token value, such as 'USER_CUSTOMIZED').


3.3 Proposal C

This proposal deals with this problem similarly to the case of using the camera image as a texture. It proposes a PolygonBackground node, which represents a background that renders a single polygon using the specified material. It allows the aspect ratio of the background image to be defined independently of the actual window size. Different modes are available to fit the image in the window (vertical or horizontal).


<PolygonBackground positions='0 0, 1 0, 1 1, 0 1' texCoords='0 0 0, 1 0 0, 1 1 0, 0 1 0' normalizedX='TRUE' normalizedY='TRUE' fixedImageSize='0,0' zoomFactor='1.0' tile='TRUE' doCleanup='TRUE' mode='VERTICAL' clearStencilBitplanes='-1' isDefault='FALSE' description='' triggerName='Synchronize' logFeature='' />

Using the proposed PolygonBackground node, the image from the camera is simply routed to the texture used by the background: the image assigned to the image outslot of the IOSensor is routed to the texture in the appearance of the PolygonBackground node.


<PolygonBackground fixedImageSize='640,480' mode='VERTICAL'>
    <Appearance>
        <PixelTexture DEF='tex' autoScale='false'/>
        <TextureTransform scale='1 -1'/>
    </Appearance>
</PolygonBackground>

<ROUTE fromNode='VisionLib' fromField='VideoSourceImage' toNode='tex' toField='image'/>

To make the polygon for the background fill the viewport, the PolygonBackground's fixedImageSize field is used to describe the aspect ratio of the image, and the mode field is set to 'VERTICAL' or 'HORIZONTAL', which describes how the polygon fits the viewport.


3.4 Discussion

Proposal A proposes a dedicated node for movie backgrounds, while proposal C proposes a multi-purpose PolygonBackground node. While the latter gives more flexibility, it requires more details to be elaborated, compared to the former, which is simpler. Again, the trade-off between simplicity and flexibility/extensibility needs further discussion.


4. Supporting color keying in texture

4.1 Proposal A

This proposal adds a 'keyColor' field to the MovieTexture node, which indicates the color to be rendered as transparent, in order to provide a chroma key effect on the movie texture. The browser is in charge of rendering the parts of the MovieTexture matching the key color as transparent, and browsers that do not support this feature can simply fall back to rendering the MovieTexture in the normal way (i.e., showing the texture as is).

MovieTexture : X3DTexture2DNode {
     ... // same as the MovieTexture node described in 2.1
     SFColor [in] keyColor
}
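
For example, a green-screen movie could be keyed out as follows (a sketch assuming the keyColor field above; the url is illustrative):

<Appearance>
    <MovieTexture loop='true' url='"actor_greenscreen.mpg"' keyColor='0 1 0'/>
</Appearance>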


4.3 Proposal C

This proposal doesn't include a direct solution for this case. The most closely related functions in this proposal are the ColorMaskMode and BlendMode nodes, used as children of the Appearance node.

The ColorMaskMode node masks specific color channels, which results in color changes across the whole image. Rather than making pixels of the key color appear transparent, ColorMaskMode changes the color of every pixel.

<ColorMaskMode maskR='TRUE' maskG='TRUE' maskB='TRUE' maskA='TRUE' logFeature='' />

The BlendMode node gives general control over the alpha blending function. However, it has no function that compares the source image against a given key color, which would be necessary for a proper color keying result.

<BlendMode srcFactor='src_alpha' destFactor='one_minus_src_alpha' color='1 1 1' colorTransparency='0' alphaFunc='none' alphaFuncValue='0' equation='none' logFeature='' />


5. Retrieving tracking information

5.1 Proposal A

This proposal suggests using the same CameraSensor node that is used for retrieving the live video stream to also retrieve tracking information. As described in 2.1, the proposed CameraSensor node includes 'position' and 'orientation' fields that represent the tracking information of the camera motion.

CameraSensor : X3DDirectSensorNode {
   SFImage    [out] value
   SFBool     [out] on          FALSE
   SFMatrix4f [out] projmat     "1 0 0 0 …"
   SFBool     [out] tracking    FALSE
   SFVec3f    [out] position
   SFRotation [out] orientation
}

The method is limited in that it does not support tracking information for general objects other than the camera itself.

5.3 Proposal C

For retrieving tracking information, this proposal uses the same IOSensor node used for retrieving the camera image. The TrackedObject1Camera_ModelView field of the IOSensor node represents the transformation matrix of the tracked position of the tracked object (a visual marker).

<IOSensor DEF='VisionLib' type='VisionLib' configFile='TutorialMarkerTracking_OneMarker.pm'>
    <field accessType='outputOnly' name='VideoSourceImage' type='SFImage'/>
    <field accessType='outputOnly' name='TrackedObject1Camera_ModelView' type='SFMatrix4f'/>
    <field accessType='outputOnly' name='TrackedObject1Camera_PrincipalPoint' type='SFVec2f'/>
    <field accessType='outputOnly' name='TrackedObject1Camera_FOV_horizontal' type='SFFloat'/>
    <field accessType='outputOnly' name='TrackedObject1Camera_FOV_vertical' type='SFFloat'/>
    <field accessType='outputOnly' name='TrackedObject1Camera_CAM_aspect' type='SFFloat'/>
</IOSensor>

The node could support multiple tracked objects by changing the configFile (the TutorialMarkerTracking_OneMarker.pm file in the sample code) and defining additional ModelView fields for the tracked objects, as sketched below.
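
For instance, a second marker might be exposed like this (a hypothetical sketch; the configuration file name and the TrackedObject2 field name are illustrative and not defined in the proposal):

<IOSensor DEF='VisionLib' type='VisionLib' configFile='TwoMarkerTracking.pm'>
    <field accessType='outputOnly' name='VideoSourceImage' type='SFImage'/>
    <field accessType='outputOnly' name='TrackedObject1Camera_ModelView' type='SFMatrix4f'/>
    <field accessType='outputOnly' name='TrackedObject2Camera_ModelView' type='SFMatrix4f'/>
</IOSensor>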

5.4 Discussion

While both propose retrieving tracking information from a node that represents a camera sensor, proposal A gives the tracking information of the camera, while C deals with the tracking information of the tracked object. This makes proposal C more extensible in terms of supporting multiple tracked objects. However, the method of defining tracked objects and markers through a proprietary configuration file needs to be revised for standardization.


6. Using tracking information to change 3D scene

6.1 Proposal A

This proposal uses routing to link tracking information from the CameraSensor node to a Viewpoint node's position and orientation. This could also be extended by a MatrixViewpoint node (described in 8.1), which could have a field identifying the corresponding CameraSensor node, producing the same result without explicitly routing the corresponding fields.
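
A minimal sketch of such routing (the DEF names 'camSensor' and 'arViewpoint' are illustrative):

<CameraSensor DEF='camSensor'/>
<Viewpoint DEF='arViewpoint'/>

<ROUTE fromNode='camSensor' fromField='position'    toNode='arViewpoint' toField='set_position'/>
<ROUTE fromNode='camSensor' fromField='orientation' toNode='arViewpoint' toField='set_orientation'/>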

6.3 Proposal C

This proposal uses routing to link tracking information from the IOSensor node to the Transform node of the corresponding virtual object.

<MatrixTransform DEF='TransformRelativeToCam'> 
    <Shape> 
        <Appearance> 
            <Material diffuseColor='1 0.5 0' /> 
        </Appearance> 
        <Teapot size='5 5 5' /> 
    </Shape> 
</MatrixTransform> 

<ROUTE fromNode='VisionLib' fromField='Camera_ModelView' toNode='TransformRelativeToCam' toField='set_matrix'/> 

For routing a transform matrix to a transform node, this proposal also proposes a MatrixTransform node that takes a transform matrix directly, rather than using position and orientation fields.

MatrixTransform : X3DGroupingNode {
 ...
 SFBool     [in,out] render TRUE
 SFMatrix4f [in,out] matrix identity
}


6.4 Discussion

While both proposals rely on routing to apply tracking results to the 3D scene, as discussed in 5.4, proposal A focuses on updating the Viewpoint node, while proposal C uses the tracking result to update a virtual object (or the scene). Proposal C also proposes a new type of transform node for dealing with transformation matrices, while A sticks to the traditional position and orientation vectors.


7. Retrieving camera calibration information

7.1 Proposal A

This proposal suggests using the same CameraSensor node that is used for retrieving the live video stream to also retrieve camera calibration information. As described in 2.1, the proposed CameraSensor node includes a 'projmat' field which represents the calibration information of the CameraSensor.


8. Using calibration information to set properties of (virtual) camera

8.1 Proposal A

This proposal suggests a MatrixViewpoint node, a child of a scene node, which represents a virtual viewpoint calibrated according to the corresponding physical live video camera (on the user's computer). The 'projmat' field represents the internal parameters (projection matrix) of the MatrixViewpoint. The 'position' and 'orientation' fields represent the three-dimensional position and orientation of the viewpoint within the virtual space. The 'cameraSensor' field references a CameraSensor node from which the viewpoint parameters (including the projmat, position and orientation fields) of the MatrixViewpoint are updated. Once the 'cameraSensor' field is assigned a valid CameraSensor node, the viewpoint parameters are updated according to the values from that node. Otherwise, each parameter of the MatrixViewpoint node can be routed from a corresponding source of calibrated values.

MatrixViewpoint : X3DViewpointNode {
     SFMatrix4f [in,out] projmat
     SFVec3f    [in,out] position
     SFRotation [in,out] orientation
     SFNode     [in,out] cameraSensor
}
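
A sketch of the routing variant (the DEF names 'camSensor' and 'arView' are illustrative):

<CameraSensor DEF='camSensor'/>
<MatrixViewpoint DEF='arView'/>

<ROUTE fromNode='camSensor' fromField='projmat'     toNode='arView' toField='projmat'/>
<ROUTE fromNode='camSensor' fromField='position'    toNode='arView' toField='position'/>
<ROUTE fromNode='camSensor' fromField='orientation' toNode='arView' toField='orientation'/>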


8.3 Proposal C

Viewpoint : X3DViewpointNode {
  ...
  SFString [in,out] fovMode        VERTICAL
  SFVec2f  [in,out] principalPoint 0 0
  SFFloat  [in,out] aspect         1.0
}

The new fields provide a more general camera model than the standard Viewpoint. The 'principalPoint' field defines the relative position of the principal point. If the principal point is not equal to zero, the viewing frustum parameters (left, right, top, bottom) are simply shifted in the camera's image plane. A value of x = 2 means the left value is equal to the default right value. A value of x = -2 means the right value is equal to the default. If the principal point is not equal to zero, the 'fieldOfView' value is not equal to the real field of view of the camera; otherwise it complies with the default settings.

To extend this idea, the 'fovMode' field defines whether the field of view is measured vertically, horizontally or in the smaller direction, which is important for correctly parameterizing the aforementioned cinematographic camera. The 'aspect' field defines the aspect ratio for the viewing angle defined by the 'fieldOfView' range. This setting is independent of the current aspect ratio of the window, but reflects the aspect ratio of the actual capturing device. This extension allows cameras with a non-square pixel format to be modeled, i.e. it defines the (width / height) of a pixel.
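
As an illustration of how these extended fields might be fed by calibration data, the outslots of the IOSensor from 5.3 could be routed to them (a sketch only; it assumes the units and conventions of the VisionLib fields match those of the Viewpoint fields, which would need to be confirmed):

<Viewpoint DEF='calibratedView' fovMode='VERTICAL'/>

<ROUTE fromNode='VisionLib' fromField='TrackedObject1Camera_FOV_vertical'   toNode='calibratedView' toField='fieldOfView'/>
<ROUTE fromNode='VisionLib' fromField='TrackedObject1Camera_PrincipalPoint' toNode='calibratedView' toField='principalPoint'/>
<ROUTE fromNode='VisionLib' fromField='TrackedObject1Camera_CAM_aspect'     toNode='calibratedView' toField='aspect'/>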

In addition to the Viewpoint extension, a new camera node named Viewfrustum is included. This node has the two input/output fields 'modelview' and 'projection' of type SFMatrix4f. With the Viewfrustum node, a camera position and projection can be defined using a standard projection/modelview matrix pair.

Viewfrustum : X3DViewpointNode {
  ...
  SFMatrix4f [in,out] modelview  (identity)
  SFMatrix4f [in,out] projection (identity)
}


9. Specifying nodes as physical object representatives

9.1 Proposal A

This proposal suggests a GhostGroup node for indicating that its child nodes are representatives of physical objects, for visualizing correct occlusion. The proposed node extends the Group node so that the geometries of its child nodes are rendered as ghost objects: the browser renders the child nodes only into the depth buffer and not into the color buffer. As a result, the portion of the live video image corresponding to the ghost object is visualized with correct depth values, forming correct occlusion with other virtual objects.

GhostGroup : X3DGroupingNode {
     ... // same fields as the original Group node
}
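
A usage sketch (the geometry is an illustrative stand-in for a real physical object, e.g. a table visible in the camera image):

<GhostGroup>
    <Transform translation='0 -0.4 0'>
        <Shape>
            <Box size='1.2 0.8 0.6'/>
        </Shape>
    </Transform>
</GhostGroup>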


9.3 Proposal C

See http://www.web3d.org/x3d/wiki/index.php/X3D_and_Augmented_Reality