Summary Of Old AR Proposals

Existing Proposals

Instant Reality

Instant Reality is a Mixed Reality framework developed and maintained by Fraunhofer IGD. It uses X3D as its application description language and thus also provides corresponding nodes and concepts for developing AR/MR applications. The full node documentation can be found on IR-Docs.

There are also several tutorials on vision-based tracking with Instant Reality. They describe, for example, the IOSensor node for retrieving the camera streams and the tracking results of the vision subsystem, the new PolygonBackground node for displaying the camera images behind the virtual objects, and some useful camera extensions to the X3D Viewpoint node: Tracking-Tutorial.
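For illustration, here is a minimal sketch of how such an IOSensor is typically declared in those tutorials. The exact type, configuration file, and output field names depend on the tracking configuration and should be taken from the Tracking-Tutorial; the names below are assumptions:

 <IOSensor DEF='VisionSensor' type='VisionLib' configFile='MarkerTracking.pm'>
   <!-- camera image delivered by the vision subsystem -->
   <field accessType='outputOnly' name='VideoSourceImage' type='SFImage'/>
   <!-- tracking result (model-view matrix) of the first tracked object -->
   <field accessType='outputOnly' name='TrackedObject1Camera_ModelView' type='SFMatrix4f'/>
 </IOSensor>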

In addition, several papers on AR and MR visualization have already been published at the Web3D conferences, discussing, for example, occlusions, shadows, and lighting in MR scenes in the context of X3D.

Further ideas on realistic MR rendering in X3D are outlined in chapter 6 (p. 163 ff.), and especially in section 6.4, of the following PhD thesis by Y. Jung: PDF.

The screenshots below show several issues in MR visualization. From top left to bottom right: (a) real image of a room; (b) real scene augmented with a virtual character (note that the character appears to be in front of the table); (c) augmentation with additional occlusion handling (note that the character still seems to float on the floor); (d) augmentation with occlusion and shadows (applied via differential rendering).

[Image missing: MR visualization screenshots (a)-(d)]

In the following, an example of achieving occlusion effects between real and virtual objects in AR/MR scenes is shown, given that the real 3D object for which occlusions should be handled already exists as a 3D model (given as a Shape in this example). Here, the invisible ghosting objects (denoting real scene geometry) are simply created by rendering them before the virtual objects (by setting the Appearance node's "sortKey" field to '-1') without writing any color values to the framebuffer (via the ColorMaskMode node), thereby initially stamping out the depth buffer.

 <Shape>
   <!-- ghosting geometry of the real object: rendered first, with color writes disabled -->
   <Appearance sortKey='-1'>
     <ColorMaskMode maskR='false' maskG='false' maskB='false' maskA='false'/>
   </Appearance>
   ...
 </Shape>

To show the camera's image in the background, we use the aforementioned PolygonBackground node. By setting its "fixedImageSize" field, the aspect ratio of the image can be defined. Depending on how you want the background image to fit into the window, you need to set the mode field to 'VERTICAL' or 'HORIZONTAL'.

 <PolygonBackground fixedImageSize='640,480' mode='VERTICAL'>
   <Appearance>
     <PixelTexture2D DEF='tex'/>
   </Appearance>
 </PolygonBackground> 
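The camera image delivered by the vision subsystem can then be routed to this texture, e.g. as follows (assuming the IOSensor declaration and field name sketched further above):

 <ROUTE fromNode='VisionSensor' fromField='VideoSourceImage' toNode='tex' toField='image'/>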

As mentioned above, more on that can be found in the corresponding tutorials, e.g. here.

Korean Chapter

The Korea chapter has been keenly interested in the standardization of augmented reality in many aspects, including AR-based content. This interest is especially due to the recent worldwide emergence of mobile AR services and the realization (in both academia and industry) that there is a definite need for exchanging service content across different platforms.

Three main proposals have been made within the Korean chapter, by: (1) Gerard Kim from Korea University (also representing KIST), (2) Gun A. Lee (formerly with ETRI, now with HITLabNZ), and (3) Woontack Woo of the Gwangju Institute of Science and Technology.

Here, we briefly describe each proposal and provide links to documents with more detailed descriptions. These short summaries also try to highlight how each proposal differs from the others, not in a critical sense, but as a way to suggest alternatives.

(1) Gerry Kim's proposal can be highlighted by the following features:

- Extension of existing X3D "sensors" and formalisms to represent physical objects serving as proxies for virtual objects

- The physical objects and virtual objects are tied using the "routes" (e.g. virtual objects' parent coordinate system being set to that of the corresponding physical object).

- The example below shows a construct which is a simple extension of the "VisibilitySensor" attached to a marker. The rough semantics are to attach a sphere to a marker when the marker is visible. The visibility would be determined by the browser using a particular tracker. In this simple case, a simple marker description is given through the "Marker" node.

<Scene>
 <Group>
  <Marker DEF='HIRO' enabled='true' filename='C:\hiro.patt'/>
  <VisibilitySensor DEF='Visibility' description='activate if seen' enabled='true'/>
  <Transform DEF='BALL'>
   <Shape>
    <Appearance>
     <Material/>
    </Appearance>
    <Sphere/>
   </Shape>
  </Transform>
 </Group>
 <ROUTE fromNode='Visibility' fromField='visible' toNode='BALL' toField='visible'/>
</Scene>

- Different types of sensors can be newly defined, or existing ones extended, to describe various AR content. These include proximity sensors, range sensors, etc.

- Different physical object descriptions will be needed at the right level of abstraction (such as the "Marker" node in the example above). These include descriptions for image patches, 3D objects, GPS locations, natural features (e.g. points, lines), etc.
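A purely hypothetical sketch of this idea, mirroring the Marker example above (none of the following nodes or fields are defined anywhere; they are illustrative assumptions only):

 <!-- hypothetical physical-object description based on a GPS location -->
 <GPSLocation DEF='ENTRANCE' enabled='true' latitude='37.5665' longitude='126.978'/>
 <!-- hypothetical range sensor that fires when the user is near that location -->
 <RangeSensor DEF='NearEntrance' range='10'/>
 <Transform DEF='SIGN'>
   <Shape>
     <Appearance>
       <Material/>
     </Appearance>
     <Box/>
   </Shape>
 </Transform>
 <ROUTE fromNode='NearEntrance' fromField='isActive' toNode='SIGN' toField='visible'/>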

(2) Gun Lee's proposal

- Extension of the TextureBackground and MovieTexture nodes to handle the video background for a video see-through AR implementation.

- Introduction of a node called "LiveCam" representing the video capture or vision-based sensing in a video see-through AR implementation.

- The video background would be supplied with the video image and/or camera parameters routed from the "LiveCam" node.

- Extension of the virtual Viewpoint to accommodate more detailed camera parameters and to set it according to the parameters of the "LiveCam".
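A hypothetical sketch of how these pieces might be wired together (the "LiveCam" node and the routed field names are assumptions derived from the bullet points above, not part of any specification):

 <!-- hypothetical capture/sensing node representing the live camera -->
 <LiveCam DEF='cam'/>
 <!-- extended background texture fed with the live video image -->
 <TextureBackground>
   <PixelTexture DEF='videoBG' containerField='backTexture'/>
 </TextureBackground>
 <Viewpoint DEF='arView'/>
 <ROUTE fromNode='cam' fromField='image' toNode='videoBG' toField='image'/>
 <!-- camera parameters (here only the vertical field of view) drive the virtual viewpoint -->
 <ROUTE fromNode='cam' fromField='fieldOfView' toNode='arView' toField='fieldOfView'/>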

Slides from Web3D Tech Talk at SIGGRAPH Asia 2010

(3) Woo's proposal

- Woo proposes to use XML as meta descriptors, together with existing standards (e.g. X3D, COLLADA), for describing the augmentation information itself.

- As for the context (condition) for augmentation, a clear specification following a "5W" approach is proposed: namely who, when, where, what, and how.

- "who" part specifies the owner/author of the contents.

- "when" part specifies content creation time.

- "where" part specifies the location of the physical object to which an augmentation is attached.

- "what" part specifies the what is to be augmented content (augmentation information).

- "how" part specifies dynamic part (behavior) of the content.