= Calendar: Meetings and Events =

== Meetings ==
Our monthly teleconference meeting for X3D and Mixed/Augmented Reality is usually held on the third Wednesday of each month, at 12 am (00:00) US Pacific / 9 am Germany / 5 pm Korea.

The schedule is subject to change based on the time zones of the expected attendees. The meeting is held on the Web3D Consortium teleconference line.

Our next teleconference meeting is:
TBA

== Events ==
*Web3D and ISO/IEC JTC1 SC24 WG9 meetings, Jan 26-28, 2015, Seoul, Korea
*Web3D and ISO/IEC JTC1 SC24 WG9 meetings, Jan 20-22, 2014, Seoul, Korea
*[http://www.perey.com/ARStandards/eighth-ar-standards-community-meeting AR Standards Community Meeting, March 1-2, 2013, Barcelona, Spain]
*Web3D and ISO/IEC JTC1 SC24 WG9 meetings, Jan 28-31, 2013, Seoul, Korea
*AR Standards Community Meeting, November 8-9, 2012, Atlanta, GA, USA
*ISO/IEC JTC1 SC24 Plenary and WG6, WG9 meetings, August 20-24, 2012, Brussels, Belgium
*[http://s2012.siggraph.org/attendees/birds-feather Augmented and Mixed Reality Web3D BOF at SIGGRAPH 2012, August 8, 2012, Los Angeles, CA, USA]
*[http://web3d2012.org Web3D Conference, August 4-5, 2012, Los Angeles, CA, USA]
*OGC TP/PC Meeting, AR Working Group, Sept 19-23, 2011, Boulder, CO, USA
*W3C TPAC Meeting, W3C AR Community Group, Oct 31 - Nov 4, 2011, Santa Clara, CA, USA
*AR Standards Community Meeting, Oct 24-25, 2011, Basel, Switzerland
*ISO JTC Meeting, Nov 7-10, 2011, San Diego, CA, USA
*[http://www.perey.com/ARStandards/third-international-ar-standards-meeting/ Third International AR Standards Community Meeting, June 15-17, 2011, Taichung, Taiwan]
*[http://web3d2011.org/ Augmented/Mixed Reality Workshop at Web3D Conference, 10:50AM-12:10PM local time, June 21, 2011, Paris, France]
*[http://www.siggraph.org/s2011/for_attendees/birds-feather Augmented and Mixed Reality Web3D BOF at SIGGRAPH 2011, August 2011, Vancouver, Canada]
*SC24 Augmented and Mixed Reality Study Group Meeting at the SC24 Plenary and Working Group Meetings, August 21, 2011, Rapid City, South Dakota, USA

<!-- [[Upcoming X3D events]] -->

= Charter =

== Overview ==
The Mixed and Augmented Reality (MAR) Working Group focuses on utilizing and extending X3D capabilities to support augmented reality (AR) and mixed reality (MR) applications.

''Discussion.'' These X3D AR discussions were initially held as part of a special interest group. Now that we have determined that sufficient interest exists to modify the X3D Specification, this effort is moving forward through the formation of the X3D AR Working Group.

== Goals ==
Planned goals of the MAR WG include:
*Collect requirements and describe typical use cases for using X3D in AR/MR applications
*Produce and propose X3D components for AR/MR scenes and applications
*Produce sample AR/MR applications using X3D to demonstrate how this functionality can work correctly

== Tasks ==
*Investigate state-of-the-art technologies related to X3D and AR/MR to collect requirements and use cases
**Archive and distribute collected requirements and use cases through the MAR WG wiki page
*Hold regular meetings and workshops to motivate discussion and editing of AR/MR-related X3D specification proposals
**Regular meetings will be held through teleconferencing, and workshops will be planned through regular meetings
*Promote X3D in the AR/MR field by developing and distributing promotional materials for AR applications based on X3D
**Promotional materials include sample applications, video clips, documents, and images distributed on the web

== Deliverables and Timeline ==
[http://web3d.org/x3d/wiki/index.php/Plans_for_Merging_X3D_AR_Proposals Plans for Merging X3D AR Proposals] describes our detailed path forward through this challenging space.
*August 2011: [http://www.web3d.org/x3d/wiki/index.php/X3D_AR_Requirements_and_Use_cases Use cases and requirements of X3D for AR/MR applications]
*March 2012: [http://web3d.org/x3d/wiki/index.php/Comparison_of_X3D_AR_Proposals Comparison of existing proposals] from the Fraunhofer team, Dr. Gun Lee and Dr. Gerry Kim
*August 2012: SIGGRAPH public progress review
*February 2013: [http://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review Merged proposal for X3D AR Extensions] draft ready for member review
*March 2013: Public expert review and comment period (details at [http://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review this link])
*March 2014: Proposal revision
*Through June 2015: Build example scenes and draft specification prose for new functionality and encodings, including XML validation
*From July 2015: Start discussion in the X3D WG for inclusion in a future version of X3D

== Participants ==
*Anita Havele
*Don Brutzman
*Gerard J. Kim
*Gun Lee
*Leonard Daly
*Myeongwon Lee
*Oliver Neubauer
*Sabine Webel
*Timo Engelke
*Yvonne Jung

== Working Group Meeting Routine ==
Regular meetings are held monthly via teleconference.
Participation is open to everyone via the Web3D teleconference line.

Non-members can [mailto:anita.havele@web3d.org?subject=AR%20teleconference%20request request access information] for this call, or [http://web3d.org/membership/join join Web3D] for full member privileges!

Meeting agendas and minutes are also distributed on the [mailto:x3d-public@web3d.org?subject=X3D%20AR%20Working%20Group x3d-public@web3d.org] mailing list and [http://www.web3d.org/membership/login/list_archives archived online].

= Augmented Reality Roadmap for X3D =
The [[Augmented Reality Roadmap for X3D]] is a document charting shared strategies and our way forward.

= History and Background Information =
The Web3D Consortium formed a special interest group on AR initiatives in July 2009, which worked to help create the AR Technology Road Map.

Several Web3D Consortium member projects showcase the feasibility of AR in X3D, particularly the open-source [http://www.x3dom.org X3DOM] framework produced by [http://www.igd.fhg.de/www/igd-a4 Fraunhofer IGD].
*X3DOM can serve as an out-of-the-box, standards-based solution for AR developers (a minimal embedding sketch follows this list).
*X3D and X3DOM continue to improve scripting options for X3D with HTML5, which holds good promise for AR applications.
*X3DOM is being considered for potential standardization in a Mobile and/or Augmented Reality (AR) profile for X3D.
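As a concrete illustration, a minimal X3DOM page embeds a declarative X3D scene directly in HTML and renders it via WebGL, with no plugin required. This is a sketch only; the script and stylesheet paths follow the x3dom.org download convention and may differ between releases.

<pre>
<html>
  <head>
    <title>Minimal X3DOM scene</title>
    <!-- x3dom runtime; exact paths may vary by release -->
    <script src='https://www.x3dom.org/download/x3dom.js'></script>
    <link rel='stylesheet' href='https://www.x3dom.org/download/x3dom.css'/>
  </head>
  <body>
    <!-- The x3d element below is picked up and rendered by x3dom.js -->
    <x3d width='400px' height='300px'>
      <scene>
        <shape>
          <appearance>
            <material diffuseColor='1 0 0'></material>
          </appearance>
          <box></box>
        </shape>
      </scene>
    </x3d>
  </body>
</html>
</pre>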

Our [http://www.web3d.kr Web3D Korea Chapter members] from ETRI are working on Mixed Reality visualization in X3D.
*The Consortium has been working closely with the W3C HTML5 WG to align our standards for 3D visualization on the Web.

Additional details are available at:
*[http://www.web3d.org/x3d/wiki/index.php/X3D_and_HTML5 X3D and HTML5]
*[http://www.web3d.org/x3d/wiki/index.php/X3D_and_HTML_FAQ X3D and HTML FAQ]
*[http://www.x3dom.org X3DOM]

The Web3D Consortium is also engaged with the [http://www.arstandards.org AR Standards Community], [http://www.opengeospatial.org OGC], [http://www.khronos.org Khronos] and [http://w3c.org W3C] organizations for applying and adapting X3D.
*The Web3D Consortium started an AR working group in March 2011 to develop a roadmap for AR standardization. We encourage your participation.
*The technology discussions and meetings will be conducted publicly so that the widest possible comment and reaction can be considered from the community. Feedback from this community will help X3D quickly and stably adopt new technologies to provide ongoing archival value for all 3D graphics.
*Meanwhile, Web3D Members still retain important membership rights of proposing significant new technology and considering patented technologies within a "safe haven" prior to public release.

The [http://www.web3d.org/x3d/wiki/images/3/32/X3dProfilePossibilitiesMobileHtml5AR.2010June29.pdf X3D Mobile Profile slideset] from the 2010 Mobile X3D ISO Workshop is also linked; it describes how Mobile, HTML5 and possibly Augmented Reality (AR) components can be aligned together.

Many new Web3D capabilities are becoming available. There has never been a better time to discuss X3D technologies and to join the [http://web3d.org/membership/join Web3D Consortium]. Get involved in these early discussions to create the AR Technology Road Map. We look forward to your participation.

[http://www.web3d.org/wiki/index.php/Summary_Of_Old_AR_Proposals Summary of Old AR Proposals]

== Developing X3D AR Specification - Proposals ==
The working group has reviewed the existing proposals and summarized them in [[Comparison of X3D AR Proposals]].
Based on this comparison, the working group is now preparing [[Plans for Merging X3D AR Proposals]].
The working group is looking for public feedback on the unified AR proposal, which captures the essential features for MR/AR visualization.
The unified AR proposal for public review can be found [http://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review here].

== X3D Earth Working Group ==
The X3D Earth Working Group has expanded and refined a proposal by Dr. Myeong Won Lee for a new
[http://www.web3d.org/membership/login/memberwiki/index.php/X3D_v3.3_Specification_Changes#GpsSensor_node GpsSensor node].
Due to several overlapping technical issues, that group has asked to collaborate with the Augmented Reality group on the final design for this node.
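A rough sketch of how such a node might be used in a scene follows. The node and field names here (GpsSensor, position_changed) are assumptions for illustration only; the authoritative interface is defined in the member-wiki proposal linked above.

<pre>
<!-- Hypothetical usage sketch: node and field names are illustrative,
     not the final specification -->
<Transform DEF='GeolocatedModel'>
  <Inline url='"model.x3d"'/>
</Transform>
<GpsSensor DEF='Gps' enabled='true'/>
<!-- Drive the model's position from the receiver's reported location;
     a real design would also address geodetic-to-local coordinate conversion -->
<ROUTE fromNode='Gps' fromField='position_changed'
       toNode='GeolocatedModel' toField='set_translation'/>
</pre>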

= Interoperability with other AR and Web Standards =
Several discussions at the Web3D Conference, SIGGRAPH and ISO-SC24 meetings about recent AR standards developments continue to improve and refine our strategy on interoperability with other standards.

Establishing common implementations and examples is important for demonstrating successful interoperable capabilities for the specification. Continued collaboration and outreach to other AR standards groups is essential. Our common goal remains maximum interoperability with all Web standards.
*W3C Augmented Reality Community Group
*OGC ARML WG
*AR Standards Community Group
*ISO-SC24
*Khronos

Prompted by recent work in the Web3D MAR WG and the realization that the current state of AR content models is not comprehensive, ISO Standards Committee SC24, which administers review of X3D as an ISO standard, has established a new Working Group for Augmented and Mixed Reality.

This group conducted a survey of the current state of the art in AR/MR standardization. A summary of the main findings:
*A need for clear and precise definitions of terms
*A need for a reference architecture with the following features:
**Separation of the content and the browser/player/application
**Extensible and general enough to accommodate new future technologies (e.g. display devices, tracking algorithms, sensors)
**Defined at the right abstraction level to be platform/vendor independent
**Clear interface definitions among the subsystems
**A proposal to develop a protocol between the AR/MR engine and the object tracking/recognition subsystem, independent of the algorithms used
**Reuse of existing standards as much as possible (see below)
*A content model, based on the underlying reference architecture, that is comprehensive (e.g. scene/world model, interaction, rich augmentation methods and styling options, representation of extensive types of physical real-world objects)
*A need for a rich and sophisticated scene/world model
**An X3D-based approach seems promising for providing a sophisticated world model (scene graph structure) and many media objects for augmentation; a proposal to extend the X3D standards
*A need for representation of sensors and physical objects
**A proposal for a merged abstraction of physical objects and separate sensors as "objects with virtual sensors", extending the virtual sensors of X3D
*A need for sophisticated representation of "places of interest" (POI)
**A proposal to use and extend the OGC KML standards
*A need for extensive styling of 2D/3D information
**A proposal to use and extend HTML5
*A need to abstract AR/MR interaction behaviors
**Complicated behaviors to be handled by scripts and a DOM-like approach; a proposal to extend X3DOM for this purpose (see the scripting sketch below)
*Other supporting functionality needs:
**Inclusion and specification of real-world capture cameras/sensors
**Moving texture/background functionality for video see-through AR
**Handling of depth data and occlusion effects
**Specification of virtual/real light sources and rendering methods
Based on these findings, the group proposes to derive an AR content model as an extension of a virtual world, with provisions for representing physically-sensed objects. The provisions refer to ways to specify the physical augmentation "targets" without specific sensor information, and ways to (intuitively) tie or associate them with their virtual counterparts. This will result in vendor independence, ease of use and support for extensibility.
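The "scripts and DOM-like approach" finding can be illustrated with X3DOM, where X3D nodes are live DOM elements and behaviors are written as ordinary page script. A minimal sketch, with an illustrative element id and values:

<pre>
<x3d width='400px' height='300px'>
  <scene>
    <transform id='augmentation' translation='0 0 0'>
      <shape>
        <appearance><material diffuseColor='0 0 1'></material></appearance>
        <sphere></sphere>
      </shape>
    </transform>
  </scene>
</x3d>
<script>
  // x3dom observes DOM attribute mutations, so updating the attribute
  // moves the rendered object; no X3D-specific API is required.
  document.getElementById('augmentation')
          .setAttribute('translation', '0 1.5 0');
</script>
</pre>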

The ISO AR standardization proposal recommends:
*Merging HTML and X3D (X3DOM / Declarative 3D) for abstract content components for 2D and 3D augmentation.
*OGC KML for describing POIs and sensed physical objects.
*A scripting approach for non-standard complex content behaviors and for the use of remote cloud services.
Current technical work within the Web3D MAR WG includes harmonizing these proposals to best fit AR capabilities into X3D scenes. This work will be formally written up as the Augmented Reality (AR) Component of the X3D Specification.

= Participation and Liaisons =
*Christine Perey's AR Standards Community group
*Other partnerships can also be considered as appropriate.

Of note, the Web3D Consortium Intellectual Property Rights (IPR) Policy insists on open, royalty-free (RF) specifications. These coexist effectively with the Web Architecture and many different business models.
<hr />
<div>= Calendar: Meetings and Events =<br />
<br />
== Meetings ==<br />
Our monthly teleconference meeting for X3D and Augmented Reality is usually:<br />
On 3rd Wednesday of a month, at 12am (00:00) US Pacific / 9am Germany / 5pm Korea<br />
<br />
The schedule is subject to change based on the time zone for most expected attendees. The meeting is held on Web3D Consortium teleconference line.<br />
<br />
Our next teleconference meeting is:<br />
*Feb 26 Wed 2014 at 12am (00:00) US Pacific / 9am Germany / 5pm Korea<br />
<br />
== Public Review of AR Proposal ==<br />
We are now going under the process of collecting feedback on the AR proposal which the WG is working on. The public reviewing is held during March, 2013.<br />
Details can be found at [http://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review this link].<br />
<br />
== Events ==<br />
*Web 3D and ISO/IEC JTC1 SC24 WG9 meetings, Jan 20-22, 2014, Seoul, Korea<br />
*[http://www.perey.com/ARStandards/eighth-ar-standards-community-meeting AR Standards Community Meeting, March 1-2, 2013, Barcelona, Spain]<br />
*Web 3D and ISO/IEC JTC1 SC24 WG9 meetings, Jan 28-31, 2013, Seoul, Korea<br />
*AR Standards Community Meeting - November 8-9, 2012 - Atlanta, US.<br />
*ISO/IEC JTC1 SC24 Plenary and WG6, WG9 meetings, August 20-24, 2012, Brussels, Belgium <br />
*[http://s2012.siggraph.org/attendees/birds-feather Augmented and Mixed Reality Web3D BOF at SIGGRAPH 2012, August 8, 2012, Los Angeles, CA, USA]<br />
*[http://web3d2012.org Web3D Conference, August 4-5, 2012, Los Angeles, CA, USA]<br />
*OGC TP/PC Meeting - AR Working Group - Sept 19-23, 2011, Boulder, CO<br />
*W3C TPAC Meeting W3C AR Community Group - Oct 31- Nov 4, 2011, Santa Clara, CA<br />
*AR Standards Community Meeting - Oct 24, 25, 2011 - Basel, Switzerland<br />
*ISO JTC Meeting - Nov 7 - 10, 2011 - San Diego, CA<br />
*[http://www.perey.com/ARStandards/third-international-ar-standards-meeting/ Third International AR Standards Community Meeting, June 15-17, Taichung, Taiwan]<br />
*[http://web3d2011.org/ Augmented/Mixed Reality Workshop at Web3D Conference, 10:50AM-12:10PM Local Time, June 21, Paris, France]<br />
*[http://www.siggraph.org/s2011/for_attendees/birds-feather Augmented and Mixed Reality Web3D BOF at SIGGRAPH 2011, August 2011, Vancouver, Canada]<br />
*SC24 Augmented and Mixed Reality Study Group Meeting @ SC24 Plenary and Working Group Meetings, August 21 2011, Rapid City, South Dakota, USA<br />
<br />
<!-- [[Upcoming X3D events]] --><br />
<br />
= Charter =<br />
<br />
== Overview ==<br />
The Augmented Reality Continuum (ARC) Working Group focuses on utilizing and extending X3D capabilities to support augmented reality (AR) and mixed reality (MR) applications.<br />
<br />
''Discussion.'' These X3D AR discussions were initially held as part of a special interest group. Now that we have determined sufficient exists to modify the X3D Specification, this effort is moving forward by forming the X3D AR Working Group.<br />
<br />
== Goals ==<br />
Planned goals of ARC WG include:<br />
*Collect requirements and describe typical use cases for using X3D in AR/MR applications<br />
*Produce and propose X3D components for AR/MR scenes and applications<br />
*Produce sample AR/MR applications using X3D to demonstrate how this functionality can work correctly<br />
<br />
== Tasks ==<br />
*Investigate state-of-the-art technologies related to X3D and AR/MR to collect requirements and use cases<br />
**Archive and distribute collected requirements and use cases through ARC WG wiki page<br />
*Hold regular meetings and workshops to motivate discussions and editing of AR/MR related X3D specification proposals<br />
**Regular meetings will be held through teleconferencing and workshops will be planned through regular meetings<br />
*Promote X3D in AR/MR field by developing and distributing promotional materials of AR applications based on X3D<br />
**Promotional materials include sample applications, video clips, documents, images distributed on the web<br />
<br />
== Deliverables and Timeline ==<br />
[http://web3d.org/x3d/wiki/index.php/Plans_for_Merging_X3D_AR_Proposals Plans for Merging X3D AR Proposals] describes our detailed path forward through this challenging space.<br />
*August 2011: [http://www.web3d.org/x3d/wiki/index.php/X3D_AR_Requirements_and_Use_cases Use cases and requirements of X3D for AR/MR application] <br />
*March 2012: [http://web3d.org/x3d/wiki/index.php/Comparison_of_X3D_AR_Proposals Comparison of existing proposals] from Fraunhofer team, Dr. Gun Lee and Dr. Gerry Kim<br />
*August 2012: SIGGRAPH public progress review<br />
*February 2013: [http://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review Merged proposal for X3D AR Extensions] draft ready for member review<br />
*March 2013: Public expert review and comment period<br />
*March 2014: Proposal revision<br />
*May 2014: build example scenes and draft specification prose for new functionality and encodings, including XML validation<br />
*June 2014: Start discussion at X3D WG for inclusion in a future version of X3D<br />
<br />
== Participants ==<br />
*Anita Havele<br />
*Don Brutzman<br />
*Gerard J. Kim<br />
*Gun Lee<br />
*Leonard Daly<br />
*Myeongwon Lee<br />
*Oliver Neubauer<br />
*Sabine Webel<br />
*Timo Engelke<br />
*Yvonne Jung<br />
<br />
== Working Group Meeting Routine ==<br />
Regular meetings are held monthly via teleconference.<br />
Participation is open to everyone via the Web3D teleconference line. <br />
<br />
Non-members can [mailto:anita.havele@web3d.org?subject=AR%20teleconference%20request request access information] for this call, or [http://web3d.org/membership/join Join Web3D] for full member priveleges!<br />
<br />
Meeting agenda and minutes are also distributed on the [mailto:x3d-public@web3d.org?subject=X3D%20AR%20Working%20Group x3d-public@web3d.org] mailing list and [http://www.web3d.org/membership/login/list_archives archived online].<br />
<br />
= Augmented Reality Roadmap for X3D =<br />
The [[Augmented Reality Roadmap for X3D]] is a description document charting shared strategies and our way forward.<br />
<br />
= History and Background Information =<br />
Web3D Consortium formed a special interest group on AR initiatives in July 2009 worked to help create the AR Technology Road Map.<br />
<br />
Several Web3D Consortium member projects showcase the feasibility of AR in X3D, particularly [http://www.x3dom.org X3DOM] open source produced by [http://www.igd.fhg.de/www/igd-a4 Fraunhofer IGD]. <br />
*X3DOM can serve as an out of the box, standards-based solution for AR developers.<br />
*X3D and X3DOM continue to improve scripting options for X3D with HTML5 and this has good promise for AR applications. <br />
*X3DOM is being considered for potential standardization in a Mobile and/or Augmented Reality (AR) profile for X3D.<br />
<br />
Our [http://www.web3d.kr Web3D Korea Chapter members] from ETRI are working on Mixed Reality visualization in X3D.<br />
*The Consortium as been working closely within W3C HTML5 WG to align our standards for 3D visualization on the Web. <br />
<br />
Additional details are available at: <br />
*[http://www.web3d.org/x3d/wiki/index.php/X3D_and_HTML5 X3D and HTML5]<br />
*[http://www.web3d.org/x3d/wiki/index.php/X3D_and_HTML_FAQ X3D and HTML FAQ]<br />
*[http://www.x3dom.org X3DOM]<br />
<br />
Web3D Consortium is also engaged with the [http://www.arstandards.org AR Standards Community], [http://opengeospacial.org OGC], [http://www.khronos.org Khronos] and [http://w3c.org W3C] organizations for applying and adapting X3D.<br />
*Web3D Consortium will be starting an AR working group in March 2011 to develop a roadmap for AR standardization. We encourage your participation. <br />
*The technology discussions and meetings will be conducted publicly so that the widest possible comment and reaction can be considered from the Community. Feedback from this community will help X3D quickly and stably adopt new technologies to provide ongoing archival value for all 3D graphics.<br />
*Meanwhile Web3D Members still retain important membership rights of proposing significant new technology and considering patented technologies within a "safe haven" prior to public release.<br />
<br />
[http://www.web3d.org/x3d/wiki/images/3/32/X3dProfilePossibilitiesMobileHtml5AR.2010June29.pdf X3D Mobile Profile slideset]<br />
from last summer's Mobile X3D ISO Workshop has also been<br />
linked: how Mobile, HTML5 and possibly Augmented Reality (AR) components<br />
can be aligned together.<br />
<br />
Many new Web3D capabilities are becoming available. There has been no better time to discuss X3D technologies and also join the [http://web3d.org/membersip/join Web3D Consortium]. Get involved in these early discussions to create the AR Technology Road Map. We look forward to your participation.<br />
<br />
[http://www.web3d.org/wiki/index.php/Summary_Of_Old_AR_Proposals Summary of Old AR Proposals]<br />
<br />
== Developing X3D AR Specification - Proposals ==<br />
The working group has reviewed the existing proposals and have summarized in [[Comparison of X3D AR Proposals]].<br />
Based on this comparison, the working group is now preparing [[Plans for Merging X3D AR Proposals]].<br />
The working group is looking for public feedback on the unified AR proposal that captures essential features for MR/AR visualization.<br />
The unified AR proposal for public review can be found [http://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review here].<br />
<br />
== X3D Earth Working Group ==<br />
The X3D Earth Working Group has expanded and refined a proposal by Dr. Myeong Won Lee for a new <br />
[http://www.web3d.org/membership/login/memberwiki/index.php/X3D_v3.3_Specification_Changes#GpsSensor_node GpsSensor node].<br />
Due to several overlapping technical issues, the group has asked to collaborate with the Augmented Reality group on final design for this node.<br />
<br />
= Interoperability with other AR and Web Standards =<br />
Several discussions at the Web3D Conference, SIGGRAPH and ISO-SC24 meeting about the recent AR standards developments continue to improve and refine our strategy on interoperability with other standards. <br />
<br />
Establishing common implementations and examples is important to demonstrate successful interoperable capabilities for the spec. Continued collaboration and reaching out to other AR standards groups is essential. Our common goal remains maximum interoperability with all Web standards. <br />
*W3C Augmented Reality Community Group <br />
*OGC ARML WG <br />
*AR Standards Community Group <br />
*ISO-SC24<br />
*KHRONOS <br />
<br />
The recent work in the Web3D ARC WG and the realization that the Current status of AR content models is not comprehensive, the ISO Standards Committee - SC24, which administers X3D review as an ISO standard has established a new Working Group for Augmented and Mixed Reality. <br />
<br />
This Group conducted a survey of the current state of the art in AR/MR standardization, Here is a summary of the main findings. <br />
*A need for making clear and precise definition of terms <br />
*A need for a reference architecture with the following feature <br />
*Separation of the content and browser/player/application <br />
*Extendible and general enough to accommodate new future technologies (e.g. display devices, tracking algorithms, sensors, etc.) <br />
*Defined at the right abstraction level to be platform/vendor independent <br />
*Clear interface definition among the subsystems <br />
*A proposal to develop a protocol between AR/MR engine and the object tracking/recognition subsystem independently from the algorithms used <br />
*Reusing of existing standards as much as possible (see below) <br />
*A content model based on the underlying reference architecture that is <br />
*Comprehensive (e.g. scene/world model, interaction, rich augmentation methods and styling options, representation of extensive types of physical real world objects) <br />
*A need for rich and sophisticated scene/world model <br />
*X3D-based approach seems promising for providing a sophisticated world model (scene graph structure) and many media objects for augmentation. A proposal to extend X3D standards <br />
*A need for representation of sensors and physical objects <br />
*A proposal for a merged abstraction of physical objects and separate sensors as “objects with virtual sensors” and extend virtual sensors of X3D <br />
*A need for sophisticated representation of “places of interests (POI)” <br />
*A proposal to use and extend OGC/KML standards <br />
*A need for extensive styling for 2D/3D information <br />
*A proposal to use and extend HTML 5 <br />
*A need to abstract AR/MR interaction behaviors <br />
*Complicated behaviors to be handled by scripts and DOM like approach <br />
*A proposal to extend X3DOM for this purpose <br />
*Needs other supporting functionalities <br />
*Inclusion and specification of real world capture camera/sensors <br />
*Moving texture/background functionality for video see through AR <br />
*Handling of depth data and occlusion effects <br />
*Specification of virtual/real light sources and rendering methods <br />
Based on these findings the group proposes to derive a AR content model as an extension of a virtual world with provisions for representing the physically-sensed objects. The provisions refer to ways to specify the physical augmentation “targets” without specific sensor information and ways to (intuitively) tie or associate them to their virtual counterparts. This will result in vendor independence, use convenience and support for extensibility. <br />
<br />
The ISO AR standardization proposal recommends <br />
*Merging HTML and X3D (X3dom (Declarative 3D) for abstract content components for 2D and 3D Augmentation. <br />
*The OGC and K-Mart for describing POIs and sensed physical objects. <br />
*The Scripting approach for non-standard complex content behaviors and the use of remote cloud services. <br />
Current technical work within the Web3D ARC WG includes harmonizing these proposals for best fitting AR capabilities into X3D scenes. This work will be formally written up as the Augmented Reality (AR) Component for the X3D Specification.<br />
<br />
= Participation and Liaisons =<br />
*Christine Perry's AR Standardization Community group<br />
*Other partnerships can also be considered as appropriate.<br />
<br />
Of interest is that Web3D Consortium Intellectual Property Rights (IPR) Policy insists on open, royalty free (RF) specifications. These coexist effectively with the Web Architecture and many different business models.</div>Endoverthttps://www.web3d.org/wiki/index.php?title=X3D_and_Augmented_Reality&diff=7744X3D and Augmented Reality2013-12-19T03:01:32Z<p>Endovert: </p>
<hr />
<div>= Calendar: Meetings and Events =<br />
<br />
== Meetings ==<br />
Our monthly teleconference meeting for X3D and Augmented Reality is usually:<br />
* 17:00-18:00 Pacific time on 3rd Wednesday, which is 10:00-11:00 Thursday in Korea and 02:00-03:00 Thursday in Europe.<br />
<br />
The schedule is subject to change based on the time zone for most expected attendees. The meeting is held on Web3D Consortium teleconference line.<br />
<br />
Our next teleconference meeting is:<br />
* Dec 19 Thu 2013 at 00:00 US Pacific / 09:00 Germany / 17:00 Korea<br />
<br />
== Public Review of AR Proposal ==<br />
We are now going under the process of collecting feedback on the AR proposal which the WG is working on. The public reviewing is held during March, 2013.<br />
Details can be found at [http://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review this link].<br />
<br />
== Events ==<br />
* Web 3D and ISO/IEC JTC1 SC24 WG9 meetings, Jan 20-22, 2014, Seoul, Korea<br />
* [http://www.perey.com/ARStandards/eighth-ar-standards-community-meeting AR Standards Community Meeting, March 1-2, 2013, Barcelona, Spain]<br />
* Web 3D and ISO/IEC JTC1 SC24 WG9 meetings, Jan 28-31, 2013, Seoul, Korea<br />
* AR Standards Community Meeting - November 8-9, 2012 - Atlanta, US.<br />
* ISO/IEC JTC1 SC24 Plenary and WG6, WG9 meetings, August 20-24, 2012, Brussels, Belgium <br />
* [http://s2012.siggraph.org/attendees/birds-feather Augmented and Mixed Reality Web3D BOF at SIGGRAPH 2012, August 8, 2012, Los Angeles, CA, USA]<br />
* [http://web3d2012.org Web3D Conference, August 4-5, 2012, Los Angeles, CA, USA]<br />
* OGC TP/PC Meeting - AR Working Group - Sept 19-23, 2011, Boulder, CO<br />
* W3C TPAC Meeting W3C AR Community Group - Oct 31- Nov 4, 2011, Santa Clara, CA<br />
* AR Standards Community Meeting - Oct 24, 25, 2011 - Basel, Switzerland<br />
* ISO JTC Meeting - Nov 7 - 10, 2011 - San Diego, CA<br />
* [http://www.perey.com/ARStandards/third-international-ar-standards-meeting/ Third International AR Standards Community Meeting, June 15-17, Taichung, Taiwan]<br />
* [http://web3d2011.org/ Augmented/Mixed Reality Workshop at Web3D Conference, 10:50AM-12:10PM Local Time, June 21, Paris, France]<br />
* [http://www.siggraph.org/s2011/for_attendees/birds-feather Augmented and Mixed Reality Web3D BOF at SIGGRAPH 2011, August 2011, Vancouver, Canada]<br />
* SC24 Augmented and Mixed Reality Study Group Meeting @ SC24 Plenary and Working Group Meetings, August 21 2011, Rapid City, South Dakota, USA<br />
<br />
<!-- [[Upcoming X3D events]] --><br />
<br />
= Charter =<br />
<br />
== Overview ==<br />
The Augmented Reality Continuum (ARC) Working Group focuses on utilizing and extending X3D capabilities to support augmented reality (AR) and mixed reality (MR) applications.<br />
<br />
''Discussion.'' These X3D AR discussions were initially held as part of a special interest group. Now that we have determined sufficient exists to modify the X3D Specification, this effort is moving forward by forming the X3D AR Working Group.<br />
<br />
== Goals ==<br />
Planned goals of ARC WG include:<br />
* Collect requirements and describe typical use cases for using X3D in AR/MR applications<br />
* Produce and propose X3D components for AR/MR scenes and applications<br />
* Produce sample AR/MR applications using X3D to demonstrate how this functionality can work correctly<br />
<br />
== Tasks ==<br />
* Investigate state-of-the-art technologies related to X3D and AR/MR to collect requirements and use cases<br />
** Archive and distribute collected requirements and use cases through ARC WG wiki page<br />
* Hold regular meetings and workshops to motivate discussions and editing of AR/MR related X3D specification proposals<br />
** Regular meetings will be held through teleconferencing and workshops will be planned through regular meetings<br />
* Promote X3D in AR/MR field by developing and distributing promotional materials of AR applications based on X3D<br />
** Promotional materials include sample applications, video clips, documents, images distributed on the web<br />
<br />
== Deliverables and Timeline ==<br />
[http://web3d.org/x3d/wiki/index.php/Plans_for_Merging_X3D_AR_Proposals Plans for Merging X3D AR Proposals] describes our detailed path forward through this challenging space.<br />
* August 2011: [http://www.web3d.org/x3d/wiki/index.php/X3D_AR_Requirements_and_Use_cases Use cases and requirements of X3D for AR/MR application] <br />
* March 2012: [http://web3d.org/x3d/wiki/index.php/Comparison_of_X3D_AR_Proposals Comparison of existing proposals] from Fraunhofer team, Dr. Gun Lee and Dr. Gerry Kim<br />
* August 2012: SIGGRAPH public progress review<br />
* February 2013: [http://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review Merged proposal for X3D AR Extensions] draft ready for member review<br />
* March 2013: Public expert review and comment period<br />
* March 2014: Proposal revision<br />
* May 2014: build example scenes and draft specification prose for new functionality and encodings, including XML validation<br />
* June 2014: Start discussion at X3D WG for inclusion in a future version of X3D<br />
<br />
== Participants ==<br />
* Anita Havele<br />
* Don Brutzman<br />
* Gerard J. Kim<br />
* Gun Lee<br />
* Leonard Daly<br />
* Myeongwon Lee<br />
* Oliver Neubauer<br />
* Sabine Webel<br />
* Timo Engelke<br />
* Yvonne Jung<br />
<br />
== Working Group Meeting Routine ==<br />
Regular meetings are held monthly via teleconference.<br />
Participation is open to everyone via the Web3D teleconference line. <br />
<br />
Non-members can [mailto:anita.havele@web3d.org?subject=AR%20teleconference%20request request access information] for this call, or [http://web3d.org/membership/join Join Web3D] for full member priveleges!<br />
<br />
Meeting agenda and minutes are also distributed on the [mailto:x3d-public@web3d.org?subject=X3D%20AR%20Working%20Group x3d-public@web3d.org] mailing list and [http://www.web3d.org/membership/login/list_archives archived online].<br />
<br />
= Augmented Reality Roadmap for X3D =<br />
The [[Augmented Reality Roadmap for X3D]] is a description document charting shared strategies and our way forward.<br />
<br />
= History and Background Information =<br />
Web3D Consortium formed a special interest group on AR initiatives in July 2009 worked to help create the AR Technology Road Map.<br />
<br />
Several Web3D Consortium member projects showcase the feasibility of AR in X3D, particularly [http://www.x3dom.org X3DOM] open source produced by [http://www.igd.fhg.de/www/igd-a4 Fraunhofer IGD]. <br />
* X3DOM can serve as an out of the box, standards-based solution for AR developers.<br />
* X3D and X3DOM continue to improve scripting options for X3D with HTML5 and this has good promise for AR applications. <br />
* X3DOM is being considered for potential standardization in a Mobile and/or Augmented Reality (AR) profile for X3D.<br />
<br />
Our [http://www.web3d.kr Web3D Korea Chapter members] from ETRI are working on Mixed Reality visualization in X3D.<br />
* The Consortium as been working closely within W3C HTML5 WG to align our standards for 3D visualization on the Web. <br />
<br />
Additional details are available at: <br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_and_HTML5 X3D and HTML5]<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_and_HTML_FAQ X3D and HTML FAQ]<br />
* [http://www.x3dom.org X3DOM]<br />
<br />
Web3D Consortium is also engaged with the [http://www.arstandards.org AR Standards Community], [http://opengeospacial.org OGC], [http://www.khronos.org Khronos] and [http://w3c.org W3C] organizations for applying and adapting X3D.<br />
* Web3D Consortium will be starting an AR working group in March 2011 to develop a roadmap for AR standardization. We encourage your participation. <br />
* The technology discussions and meetings will be conducted publicly so that the widest possible comment and reaction can be considered from the Community. Feedback from this community will help X3D quickly and stably adopt new technologies to provide ongoing archival value for all 3D graphics.<br />
* Meanwhile Web3D Members still retain important membership rights of proposing significant new technology and considering patented technologies within a "safe haven" prior to public release.<br />
<br />
[http://www.web3d.org/x3d/wiki/images/3/32/X3dProfilePossibilitiesMobileHtml5AR.2010June29.pdf X3D Mobile Profile slideset]<br />
from last summer's Mobile X3D ISO Workshop has also been<br />
linked: how Mobile, HTML5 and possibly Augmented Reality (AR) components<br />
can be aligned together.<br />
<br />
Many new Web3D capabilities are becoming available. There has been no better time to discuss X3D technologies and also join the [http://web3d.org/membersip/join Web3D Consortium]. Get involved in these early discussions to create the AR Technology Road Map. We look forward to your participation.<br />
<br />
[http://www.web3d.org/wiki/index.php/Summary_Of_Old_AR_Proposals Summary of Old AR Proposals]<br />
<br />
== Developing X3D AR Specification - Proposals ==<br />
The working group has reviewed the existing proposals and have summarized in [[Comparison of X3D AR Proposals]].<br />
Based on this comparison, the working group is now preparing [[Plans for Merging X3D AR Proposals]].<br />
The working group is looking for public feedback on the unified AR proposal that captures essential features for MR/AR visualization.<br />
The unified AR proposal for public review can be found [http://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review here].<br />
<br />
== X3D Earth Working Group ==<br />
The X3D Earth Working Group has expanded and refined a proposal by Dr. Myeong Won Lee for a new <br />
[http://www.web3d.org/membership/login/memberwiki/index.php/X3D_v3.3_Specification_Changes#GpsSensor_node GpsSensor node].<br />
Due to several overlapping technical issues, the group has asked to collaborate with the Augmented Reality group on final design for this node.<br />
<br />
= Interoperability with other AR and Web Standards =<br />
Several discussions at the Web3D Conference, SIGGRAPH and ISO-SC24 meeting about the recent AR standards developments continue to improve and refine our strategy on interoperability with other standards. <br />
<br />
Establishing common implementations and examples is important to demonstrate successful interoperable capabilities for the spec. Continued collaboration and reaching out to other AR standards groups is essential. Our common goal remains maximum interoperability with all Web standards. <br />
*W3C Augmented Reality Community Group <br />
*OGC ARML WG <br />
*AR Standards Community Group <br />
*ISO-SC24<br />
*KHRONOS <br />
<br />
The recent work in the Web3D ARC WG and the realization that the Current status of AR content models is not comprehensive, the ISO Standards Committee - SC24, which administers X3D review as an ISO standard has established a new Working Group for Augmented and Mixed Reality. <br />
<br />
This Group conducted a survey of the current state of the art in AR/MR standardization, Here is a summary of the main findings. <br />
*A need for making clear and precise definition of terms <br />
*A need for a reference architecture with the following feature <br />
*Separation of the content and browser/player/application <br />
*Extendible and general enough to accommodate new future technologies (e.g. display devices, tracking algorithms, sensors, etc.) <br />
*Defined at the right abstraction level to be platform/vendor independent <br />
*Clear interface definition among the subsystems <br />
*A proposal to develop a protocol between AR/MR engine and the object tracking/recognition subsystem independently from the algorithms used <br />
*Reusing of existing standards as much as possible (see below) <br />
*A content model based on the underlying reference architecture that is <br />
*Comprehensive (e.g. scene/world model, interaction, rich augmentation methods and styling options, representation of extensive types of physical real world objects) <br />
*A need for rich and sophisticated scene/world model <br />
*X3D-based approach seems promising for providing a sophisticated world model (scene graph structure) and many media objects for augmentation. A proposal to extend X3D standards <br />
*A need for representation of sensors and physical objects <br />
*A proposal for a merged abstraction of physical objects and separate sensors as “objects with virtual sensors” and extend virtual sensors of X3D <br />
*A need for sophisticated representation of “places of interests (POI)” <br />
*A proposal to use and extend OGC/KML standards <br />
*A need for extensive styling for 2D/3D information <br />
*A proposal to use and extend HTML 5 <br />
*A need to abstract AR/MR interaction behaviors <br />
*Complicated behaviors to be handled by scripts and DOM like approach <br />
*A proposal to extend X3DOM for this purpose <br />
*Needs other supporting functionalities <br />
*Inclusion and specification of real world capture camera/sensors <br />
*Moving texture/background functionality for video see through AR <br />
*Handling of depth data and occlusion effects <br />
*Specification of virtual/real light sources and rendering methods <br />
Based on these findings the group proposes to derive a AR content model as an extension of a virtual world with provisions for representing the physically-sensed objects. The provisions refer to ways to specify the physical augmentation “targets” without specific sensor information and ways to (intuitively) tie or associate them to their virtual counterparts. This will result in vendor independence, use convenience and support for extensibility. <br />
<br />
The ISO AR standardization proposal recommends <br />
*Merging HTML and X3D (X3dom (Declarative 3D) for abstract content components for 2D and 3D Augmentation. <br />
*The OGC and K-Mart for describing POIs and sensed physical objects. <br />
*The Scripting approach for non-standard complex content behaviors and the use of remote cloud services. <br />
Current technical work within the Web3D ARC WG includes harmonizing these proposals for best fitting AR capabilities into X3D scenes. This work will be formally written up as the Augmented Reality (AR) Component for the X3D Specification.<br />
<br />
= Participation and Liaisons =<br />
* Christine Perry's AR Standardization Community group<br />
* Other partnerships can also be considered as appropriate.<br />
<br />
Of interest is that Web3D Consortium Intellectual Property Rights (IPR) Policy insists on open, royalty free (RF) specifications. These coexist effectively with the Web Architecture and many different business models.</div>Endoverthttps://www.web3d.org/wiki/index.php?title=X3D_and_Augmented_Reality&diff=6683X3D and Augmented Reality2013-02-27T02:05:08Z<p>Endovert: </p>
<hr />
<div>= Calendar: Meetings and Events =<br />
<br />
== Meetings ==<br />
Our monthly teleconference meeting for X3D and Augmented Reality is usually:<br />
* 17:00-18:00 Pacific time on 3rd Wednesday, which is 10:00-11:00 Thursday in Korea and 02:00-03:00 Thursday in Europe.<br />
<br />
The schedule is subject to change based on the time zone for most expected attendees. The meeting is held on Web3D consortium teleconference line.<br />
<br />
Our next teleconference meeting is:<br />
* Mar 20 Wed 2013 at 17:00 (US Pacific) / Mar 21 Thu 10:00 (Korea) 2013<br />
<br />
<br />
== Public Review of AR Proposal ==<br />
We are now going under the process of collecting feedback on the AR proposal which the WG is working on. The public reviewing is held during March, 2013.<br />
Details can be found at [http://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review this link].<br />
<br />
<br />
== Events ==<br />
* [http://www.perey.com/ARStandards/eighth-ar-standards-community-meeting AR Standards Community Meeting, March 1-2, 2013, Barcelona, Spain]<br />
* Web 3D and ISO/IEC JTC1 SC24 WG9 meetings, Jan 28-31, 2013, Seoul, Korea<br />
* AR Standards Community Meeting - November 8-9, 2012 - Atlanta, US.<br />
* ISO/IEC JTC1 SC24 Plenary and WG6, WG9 meetings, August 20-24, 2012, Brussels, Belgium <br />
* [http://s2012.siggraph.org/attendees/birds-feather Augmented and Mixed Reality Web3D BOF at SIGGRAPH 2012, August 8, 2012, Los Angeles, CA, USA]<br />
* [http://web3d2012.org Web3D Conference, August 4-5, 2012, Los Angeles, CA, USA]<br />
* OGC TP/PC Meeting - AR Working Group - Sept 19-23, 2011, Boulder, CO<br />
* W3C TPAC Meeting W3C AR Community Group - Oct 31- Nov 4, Santa Clara, CA<br />
* AR Standards Community Meeting - Oct 24, 25, 2011 - Basel, Switzerland<br />
* ISO JTC Meeting - Nov 7 - 10, 2011 - San Diego, CA<br />
* [http://www.perey.com/ARStandards/third-international-ar-standards-meeting/ Third International AR Standards Community Meeting, June 15-17, Taichung, Taiwan]<br />
* [http://web3d2011.org/ Augmented/Mixed Reality Workshop at Web3D Conference, 10:50AM-12:10PM Local Time, June 21, Paris, France]<br />
* [http://www.siggraph.org/s2011/for_attendees/birds-feather Augmented and Mixed Reality Web3D BOF at SIGGRAPH 2011, August 2011, Vancouver, Canada]<br />
* SC24 Augmented and Mixed Reality Study Group Meeting @ SC24 Plenary and Working Group Meetings, August 21 2011, Rapid City, South Dakota, USA<br />
<br />
<!-- [[Upcoming X3D events]] --><br />
<br />
= Charter =<br />
<br />
== Overview ==<br />
The Augmented Reality Continuum (ARC) Working Group focuses on utilizing and extending X3D capabilities to support augmented reality (AR) and mixed reality (MR) applications.<br />
<br />
''Discussion.'' These X3D AR discussions were initially held as part of a special interest group. Now that we have determined sufficient exists to modify the X3D Specification, this effort is moving forward by forming the X3D AR Working Group.<br />
<br />
== Goals ==<br />
Planned goals of ARC WG include:<br />
* Collect requirements and describe typical use cases for using X3D in AR/MR applications<br />
* Produce and propose X3D components for AR/MR scenes and applications<br />
* Produce sample AR/MR applications using X3D to demonstrate how this functionality can work correctly<br />
<br />
== Tasks ==<br />
* Investigate state-of-the-art technologies related to X3D and AR/MR to collect requirements and use cases<br />
** Archive and distribute collected requirements and use cases through ARC WG wiki page<br />
* Hold regular meetings and workshops to motivate discussions and editing of AR/MR related X3D specification proposals<br />
** Regular meetings will be held through teleconferencing and workshops will be planned through regular meetings<br />
* Promote X3D in AR/MR field by developing and distributing promotional materials of AR applications based on X3D<br />
** Promotional materials include sample applications, video clips, documents, images distributed on the web<br />
<br />
== Deliverables and Timeline ==<br />
<br />
[http://web3d.org/x3d/wiki/index.php/Plans_for_Merging_X3D_AR_Proposals Plans for Merging X3D AR Proposals] describes our detailed path forward through this challenging space.<br />
<br />
* August 2011: [http://www.web3d.org/x3d/wiki/index.php/X3D_AR_Requirements_and_Use_cases Use cases and requirements of X3D for AR/MR application] <br />
* March 2012: [http://web3d.org/x3d/wiki/index.php/Comparison_of_X3D_AR_Proposals Comparison of existing proposals] from Fraunhofer team, Dr. Gun Lee and Dr. Gerry Kim<br />
* August 2012: SIGGRAPH public progress review<br />
* February 2013: [http://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review Merged proposal for X3D AR Extensions] draft ready for member review<br />
* March 2013: Public expert review and comment period<br />
* May 2013: build example scenes and draft specification prose for new functionality and encodings, including XML validation<br />
* June 2013: [http://www.web3d2013.org Web3D 2013 Conference] papers, review sample AR/MR applications with X3D<br />
* July 2013: SIGGRAPH public progress review, determine milestones for achieving X3D Mobile Profile<br />
* Ongoing: contributions and alignment with ISO SC24 Working Group 9, AR Continuum Abstract Model<br />
<br />
== Participants ==<br />
* Anita Havele<br />
* Don Brutzman<br />
* Gerard J. Kim<br />
* Gun Lee<br />
* Leonard Daly<br />
* Myeongwon Lee<br />
* Oliver Neubauer<br />
* Sabine Webel<br />
* Timo Engelke<br />
* Yvonne Jung<br />
<br />
== Working Group Meeting Routine ==<br />
Regular meetings are held monthly via teleconference.<br />
Participation is open to everyone via the Web3D teleconference line. <br />
<br />
Non-members can [mailto:anita.havele@web3d.org?subject=AR%20teleconference%20request request access information] for this call, or [http://web3d.org/membership/join Join Web3D] for full member priveleges!<br />
<br />
Meeting agenda and minutes are also distributed on the [mailto:x3d-public@web3d.org?subject=X3D%20AR%20Working%20Group x3d-public@web3d.org] mailing list and [http://www.web3d.org/membership/login/list_archives archived online].<br />
<br />
= Augmented Reality Roadmap for X3D =<br />
The [[Augmented Reality Roadmap for X3D]] is a description document charting shared strategies and our way forward.<br />
<br />
= History and Background Information =<br />
Web3D Consortium formed a special interest group on AR initiatives in July 2009 worked to help create the AR Technology Road Map.<br />
<br />
Several Web3D Consortium member projects showcase the feasibility of AR in X3D, particularly [http://www.x3dom.org X3DOM] open source produced by [http://www.igd.fhg.de/www/igd-a4 Fraunhofer IGD]. <br />
<br />
* X3DOM can serve as an out of the box, standards-based solution for AR developers.<br />
* X3D and X3DOM continue to improve scripting options for X3D with HTML5 and this has good promise for AR applications. <br />
* X3DOM is being considered for potential standardization in a Mobile and/or Augmented Reality (AR) profile for X3D.<br />
<br />
Our [http://www.web3d.kr Web3D Korea Chapter members] from ETRI are working on Mixed Reality visualization in X3D.<br />
<br />
* The Consortium as been working closely within W3C HTML5 WG to align our standards for 3D visualization on the Web. <br />
<br />
Additional details are available at: <br />
<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_and_HTML5 X3D and HTML5]<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_and_HTML_FAQ X3D and HTML FAQ]<br />
* [http://www.x3dom.org X3DOM]<br />
<br />
Web3D Consortium is also engaged with the [http://www.arstandards.org AR Standards Community], [http://opengeospacial.org OGC], [http://www.khronos.org Khronos] and [http://w3c.org W3C] organizations for applying and adapting X3D.<br />
<br />
* Web3D Consortium will be starting an AR working group in March 2011 to develop a roadmap for AR standardization. We encourage your participation. <br />
* The technology discussions and meetings will be conducted publicly so that the widest possible comment and reaction can be considered from the Community. Feedback from this community will help X3D quickly and stably adopt new technologies to provide ongoing archival value for all 3D graphics.<br />
* Meanwhile Web3D Members still retain important membership rights of proposing significant new technology and considering patented technologies within a "safe haven" prior to public release.<br />
<br />
[http://www.web3d.org/x3d/wiki/images/3/32/X3dProfilePossibilitiesMobileHtml5AR.2010June29.pdf X3D Mobile Profile slideset]<br />
from last summer's Mobile X3D ISO Workshop has also been<br />
linked: how Mobile, HTML5 and possibly Augmented Reality (AR) components<br />
can be aligned together.<br />
<br />
Many new Web3D capabilities are becoming available. There has been no better time to discuss X3D technologies and also join the [http://web3d.org/membersip/join Web3D Consortium]. Get involved in these early discussions to create the AR Technology Road Map. We look forward to your participation.<br />
<br />
[http://www.web3d.org/wiki/index.php/Summary_Of_Old_AR_Proposals Summary of Old AR Proposals]<br />
<br />
== Developing X3D AR Specification - Proposals ==<br />
The working group has reviewed the existing proposals and have summarized in [[Comparison of X3D AR Proposals]].<br />
Based on this comparison, the working group is now preparing [[Plans for Merging X3D AR Proposals]].<br />
The working group is looking for public feedback on the unified AR proposal that captures essential features for MR/AR visualization.<br />
The unified AR proposal for public review can be found [http://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review here].<br />
<br />
== X3D Earth Working Group ==<br />
The X3D Earth Working Group has expanded and refined a proposal by Dr. Myeong Won Lee for a new <br />
[http://www.web3d.org/membership/login/memberwiki/index.php/X3D_v3.3_Specification_Changes#GpsSensor_node GpsSensor node].<br />
Due to several overlapping technical issues, the group has asked to collaborate with the Augmented Reality group on final design for this node.<br />
<br />
= Interoperability with other AR and Web Standards =<br />
Several discussions at the Web3D Conference, SIGGRAPH and ISO-SC24 meeting about the recent AR standards developments continue to improve and refine our strategy on interoperability with other standards. <br />
<br />
Establishing common implementations and examples is important to demonstrate successful interoperable capabilities for the spec. Continued collaboration and reaching out to other AR standards groups is essential. Our common goal remains maximum interoperability with all Web standards. <br />
<br />
*W3C Augmented Reality Community Group <br />
*OGC ARML WG <br />
*AR Standards Community Group <br />
*ISO-SC24<br />
*KHRONOS <br />
<br />
The recent work in the Web3D ARC WG and the realization that the Current status of AR content models is not comprehensive, the ISO Standards Committee - SC24, which administers X3D review as an ISO standard has established a new Working Group for Augmented and Mixed Reality. <br />
<br />
This Group conducted a survey of the current state of the art in AR/MR standardization, Here is a summary of the main findings. <br />
<br />
*A need for clear and precise definitions of terms <br />
*A need for a reference architecture with the following features: <br />
**Separation of the content from the browser/player/application <br />
**Extensible and general enough to accommodate future technologies (e.g. display devices, tracking algorithms, sensors, etc.) <br />
**Defined at the right abstraction level to be platform/vendor independent <br />
**Clear interface definitions among the subsystems <br />
**A proposal to develop a protocol between the AR/MR engine and the object tracking/recognition subsystem, independent of the algorithms used <br />
**Reuse of existing standards as much as possible (see below) <br />
*A content model, based on the underlying reference architecture, that is comprehensive (e.g. scene/world model, interaction, rich augmentation methods and styling options, representation of extensive types of physical real-world objects) <br />
*A need for a rich and sophisticated scene/world model <br />
**An X3D-based approach seems promising for providing a sophisticated world model (scene graph structure) and many media objects for augmentation; a proposal to extend the X3D standards <br />
*A need for representation of sensors and physical objects <br />
**A proposal for a merged abstraction of physical objects and separate sensors as "objects with virtual sensors", extending the virtual sensors of X3D <br />
*A need for sophisticated representation of "places of interest" (POI) <br />
**A proposal to use and extend the OGC/KML standards <br />
*A need for extensive styling of 2D/3D information <br />
**A proposal to use and extend HTML5 <br />
*A need to abstract AR/MR interaction behaviors <br />
**Complicated behaviors to be handled by scripts and a DOM-like approach; a proposal to extend X3DOM for this purpose <br />
*A need for other supporting functionality: <br />
**Inclusion and specification of real-world capture cameras/sensors <br />
**Moving texture/background functionality for video see-through AR <br />
**Handling of depth data and occlusion effects <br />
**Specification of virtual/real light sources and rendering methods <br />
Based on these findings, the group proposes to derive an AR content model as an extension of a virtual world, with provisions for representing physically-sensed objects. The provisions refer to ways to specify the physical augmentation "targets" without specific sensor information, and ways to (intuitively) tie or associate them to their virtual counterparts. This will result in vendor independence, convenience of use, and support for extensibility. <br />
<br />
The ISO AR standardization proposal recommends: <br />
*Merging HTML and X3D (X3DOM, Declarative 3D) for abstract content components for 2D and 3D augmentation. <br />
*The OGC KML standard for describing POIs and sensed physical objects. <br />
*The scripting approach for non-standard complex content behaviors and the use of remote cloud services. <br />
Current technical work within the Web3D ARC WG includes harmonizing these proposals for best fitting AR capabilities into X3D scenes. This work will be formally written up as the Augmented Reality (AR) Component for the X3D Specification.<br />
<br />
= Participation and Liaisons =<br />
* Christine Perey's AR Standards Community group<br />
* Other partnerships can also be considered as appropriate.<br />
<br />
Of interest is that the Web3D Consortium Intellectual Property Rights (IPR) Policy insists on open, royalty-free (RF) specifications. These coexist effectively with the Web Architecture and many different business models.</div>Endoverthttps://www.web3d.org/wiki/index.php?title=X3D_and_Augmented_Reality&diff=6682X3D and Augmented Reality2013-02-27T02:03:24Z<p>Endovert: </p>
<hr />
<div>= Calendar: Meetings and Events =<br />
<br />
== Meetings ==<br />
Our monthly teleconference meeting for X3D and Augmented Reality is usually:<br />
* 17:00-18:00 Pacific time on 3rd Wednesday, which is 10:00-11:00 Thursday in Korea and 02:00-03:00 Thursday in Europe.<br />
<br />
The schedule is subject to change based on the time zone for most expected attendees. The meeting is held on the Web3D Consortium teleconference line.<br />
<br />
Our next teleconference meeting is:<br />
* Mar 20 Wed 2013 at 17:00 (US Pacific) / Mar 21 Thu 10:00 (Korea) 2013<br />
<br />
<br />
== Public Review of AR Proposal ==<br />
We are now in the process of collecting feedback on the AR proposal that the WG has been developing. The public review is held during March 2013.<br />
Details can be found at [http://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review this link].<br />
<br />
<br />
== Events ==<br />
* [http://www.perey.com/ARStandards/eighth-ar-standards-community-meeting AR Standards Community Meeting, March 1-2, 2013, Barcelona, Spain]<br />
* Web 3D and ISO/IEC JTC1 SC24 WG9 meetings, Jan 28-31, 2013, Seoul, Korea<br />
* AR Standards Community Meeting - November 8-9, 2012 - Atlanta, US.<br />
* ISO/IEC JTC1 SC24 Plenary and WG6, WG9 meetings, August 20-24, 2012, Brussels, Belgium <br />
* [http://s2012.siggraph.org/attendees/birds-feather Augmented and Mixed Reality Web3D BOF at SIGGRAPH 2012, August 8, 2012, Los Angeles, CA, USA]<br />
* [http://web3d2012.org Web3D Conference, August 4-5, 2012, Los Angeles, CA, USA]<br />
* OGC TP/PC Meeting - AR Working Group - Sept 19-23, 2011, Boulder, CO<br />
* W3C TPAC Meeting W3C AR Community Group - Oct 31- Nov 4, 2011, Santa Clara, CA<br />
* AR Standards Community Meeting - Oct 24, 25, 2011 - Basel, Switzerland<br />
* ISO JTC Meeting - Nov 7 - 10, 2011 - San Diego, CA<br />
* [http://www.perey.com/ARStandards/third-international-ar-standards-meeting/ Third International AR Standards Community Meeting, June 15-17, Taichung, Taiwan]<br />
* [http://web3d2011.org/ Augmented/Mixed Reality Workshop at Web3D Conference, 10:50AM-12:10PM Local Time, June 21, Paris, France]<br />
* [http://www.siggraph.org/s2011/for_attendees/birds-feather Augmented and Mixed Reality Web3D BOF at SIGGRAPH 2011, August 2011, Vancouver, Canada]<br />
* SC24 Augmented and Mixed Reality Study Group Meeting @ SC24 Plenary and Working Group Meetings, August 21 2011, Rapid City, South Dakota, USA<br />
<br />
<!-- [[Upcoming X3D events]] --><br />
<br />
= Charter =<br />
<br />
== Overview ==<br />
The Augmented Reality Continuum (ARC) Working Group focuses on utilizing and extending X3D capabilities to support augmented reality (AR) and mixed reality (MR) applications.<br />
<br />
''Discussion.'' These X3D AR discussions were initially held as part of a special interest group. Now that we have determined that sufficient interest exists to modify the X3D Specification, this effort is moving forward by forming the X3D AR Working Group.<br />
<br />
== Goals ==<br />
Planned goals of ARC WG include:<br />
* Collect requirements and describe typical use cases for using X3D in AR/MR applications<br />
* Produce and propose X3D components for AR/MR scenes and applications<br />
* Produce sample AR/MR applications using X3D to demonstrate how this functionality can work correctly<br />
<br />
== Tasks ==<br />
* Investigate state-of-the-art technologies related to X3D and AR/MR to collect requirements and use cases<br />
** Archive and distribute collected requirements and use cases through AR WG wiki page<br />
* Hold regular meetings and workshops to motivate discussions and editing of AR/MR related X3D specification proposals<br />
** Regular meetings will be held through teleconferencing and workshops will be planned through regular meetings<br />
* Promote X3D in AR/MR field by developing and distributing promotional materials of AR applications based on X3D<br />
** Promotional materials include sample applications, video clips, documents, images distributed on the web<br />
<br />
== Deliverables and Timeline ==<br />
<br />
[http://web3d.org/x3d/wiki/index.php/Plans_for_Merging_X3D_AR_Proposals Plans for Merging X3D AR Proposals] describes our detailed path forward through this challenging space.<br />
<br />
* August 2011: [http://www.web3d.org/x3d/wiki/index.php/X3D_AR_Requirements_and_Use_cases Use cases and requirements of X3D for AR/MR application] <br />
* March 2012: [http://web3d.org/x3d/wiki/index.php/Comparison_of_X3D_AR_Proposals Comparison of existing proposals] from Fraunhofer team, Dr. Gun Lee and Dr. Gerry Kim<br />
* August 2012: SIGGRAPH public progress review<br />
* February 2013: [http://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review Merged proposal for X3D AR Extensions] draft ready for member review<br />
* March 2013: Public expert review and comment period<br />
* May 2013: build example scenes and draft specification prose for new functionality and encodings, including XML validation<br />
* June 2013: [http://www.web3d2013.org Web3D 2013 Conference] papers, review sample AR/MR applications with X3D<br />
* July 2013: SIGGRAPH public progress review, determine milestones for achieving X3D Mobile Profile<br />
* Ongoing: contributions and alignment with ISO SC24 Working Group 9, AR Continuum Abstract Model<br />
<br />
== Participants ==<br />
* Anita Havele<br />
* Don Brutzman<br />
* Gerard J. Kim<br />
* Gun Lee<br />
* Leonard Daly<br />
* Myeongwon Lee<br />
* Oliver Neubauer<br />
* Sabine Webel<br />
* Timo Engelke<br />
* Yvonne Jung<br />
<br />
== Working Group Meeting Routine ==<br />
Regular meetings are held monthly via teleconference.<br />
Participation is open to everyone via the Web3D teleconference line. <br />
<br />
Non-members can [mailto:anita.havele@web3d.org?subject=AR%20teleconference%20request request access information] for this call, or [http://web3d.org/membership/join Join Web3D] for full member privileges!<br />
<br />
Meeting agenda and minutes are also distributed on the [mailto:x3d-public@web3d.org?subject=X3D%20AR%20Working%20Group x3d-public@web3d.org] mailing list and [http://www.web3d.org/membership/login/list_archives archived online].<br />
<br />
= Augmented Reality Roadmap for X3D =<br />
The [[Augmented Reality Roadmap for X3D]] is a description document charting shared strategies and our way forward.<br />
<br />
= History and Background Information =<br />
The Web3D Consortium formed a special interest group on AR initiatives in July 2009, which worked to help create the AR Technology Road Map.<br />
<br />
Several Web3D Consortium member projects showcase the feasibility of AR in X3D, particularly [http://www.x3dom.org X3DOM] open source produced by [http://www.igd.fhg.de/www/igd-a4 Fraunhofer IGD]. <br />
<br />
* X3DOM can serve as an out of the box, standards-based solution for AR developers.<br />
* X3D and X3DOM continue to improve scripting options for X3D with HTML5 and this has good promise for AR applications. <br />
* X3DOM is being considered for potential standardization in a Mobile and/or Augmented Reality (AR) profile for X3D.<br />
<br />
Our [http://www.web3d.kr Web3D Korea Chapter members] from ETRI are working on Mixed Reality visualization in X3D.<br />
<br />
* The Consortium has been working closely with the W3C HTML5 WG to align our standards for 3D visualization on the Web. <br />
<br />
Additional details are available at: <br />
<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_and_HTML5 X3D and HTML5]<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_and_HTML_FAQ X3D and HTML FAQ]<br />
* [http://www.x3dom.org X3DOM]<br />
<br />
The Web3D Consortium is also engaged with the [http://www.arstandards.org AR Standards Community], [http://opengeospatial.org OGC], [http://www.khronos.org Khronos] and [http://w3c.org W3C] organizations for applying and adapting X3D.<br />
<br />
* Web3D Consortium will be starting an AR working group in March 2011 to develop a roadmap for AR standardization. We encourage your participation. <br />
</div>Endoverthttps://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review&diff=6681AR Proposal Public Review2013-02-27T02:02:13Z<p>Endovert: </p>
<hr />
<div><br />
By [[X3D and Augmented Reality|Augmented Reality Continuum Working Group]], Web3D Consortium<br />
<br />
Feb 26, 2013<br />
<br />
The Augmented Reality Continuum Working Group (ARC WG) has been developing a proposal for extending X3D to support augmented and mixed reality visualization. As the proposal nears completion, the working group has decided to open it to the public and collect feedback from others, including our Web3D members, other working groups, and anyone interested in AR and Web3D technology. The ARC WG welcomes any feedback that helps consolidate the proposal and advance to the next stage of extending the X3D specification to support AR and MR visualization.<br />
<br />
* Review period: March 2013<br />
* How to give feedback:<br />
** Use the "discussion" tab on the top of this page to give feedback and start discussions.<br />
** If you prefer e-mail, please send your feedback to Gun Lee (ARC WG co-chair, endovert[at]postech.ac.kr)<br />
<br />
<br />
= Extending X3D for MR Visualization - Unified Proposal =<br />
<br />
== 1. Introduction ==<br />
<br />
This document describes an overview of the unified proposal for extending the X3D standard to support Mixed Reality (MR) visualization. Mixed Reality includes both Augmented Reality (AR) and Augmented Virtuality (AV). The extension of the X3D standard proposed in this document is based on a comparison of three proposals: two from the Web3D Korea Chapter (KC1 and KC2) and one from InstantReality (IR), Fraunhofer IGD. The details of the comparison can be found on the following public wiki page: http://web3d.org/x3d/wiki/index.php/Comparison_of_X3D_AR_Proposals.<br />
<br />
In this document we focus on the three main components necessary to achieve basic MR visualization: sensors, video stream rendering, and camera calibration. We try to minimize the changes to the current specification, while keeping the solution generic enough that it could be applied to various future applications besides MR visualization.<br />
<br />
In order to focus on consolidating the fundamental features, we leave out the following items/functions from the original proposals as future work:<br />
* High-level events for tracking, from proposal KC2<br />
* Support for color keying in textures, from proposal KC1<br />
* Support for correct occlusion between virtual and physical objects (Ghost object from proposal KC1; Color Mask + sortKey from IR)<br />
* Support for generic types of sensors, including those not directly related to AR/MR visualization (direct sensor nodes in IR)<br />
<br />
<br />
== 2. Sensors ==<br />
To achieve MR visualization, sensing the real environment is crucial. Two types of sensors are necessary to support MR visualization: those that acquire video stream images from a real camera, and those that acquire motion-tracking information about physical objects. While sensors could be generalized to acquire any type of information from the real world, in this proposal we focus on these two types, which are crucial for MR visualization.<br />
<br />
In this proposal, two new nodes, CalibratedCameraSensor and TrackingSensor, are proposed as interfaces to the sensors essential for MR visualization.<br />
<br />
Since hardware and software setups (including the X3D browser) vary between end users, it is not appropriate to specify particular devices or tracking technology within the scene. In fact, the author of the X3D scene may have no knowledge of what hardware or software setup is available on the user's side. Therefore, the X3D scene should only include a high-level description of the purpose for which the sensor is used, giving the browser and the user a hint for choosing appropriate hardware or software in the user's setup that meets the intended use. The "description" field is used to describe this intended use of the sensor. At run time, the browser shows the value of the "description" field to the user through a user interface (e.g. a dialog box), asking the user to choose an appropriate sensor from the list available in the local hardware/software setup. The user chooses the appropriate hardware for the sensor node, and in this way users can view the X3D scene with the best sensor hardware/software available in their environment. <br />
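<br />
For example, a scene might declare a tracking sensor whose description tells the user and browser what it is intended to track. The following is an illustrative sketch using the proposed node; the DEF name and description text are hypothetical:<br />
<br />
<pre><br />
<!-- The browser shows this description when asking the user to pick a tracking device --><br />
<TrackingSensor DEF='markerTracker'<br />
    description='Track the printed marker card relative to the camera view' /><br />
</pre><br />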
<br />
In addition, the browser can be configured to use specific sensor hardware for sensor nodes that carry certain predefined values in the "description" field. Each type of sensor node can have a different set of predefined values for the description field, and these values are used by the browser to automatically map to default sensors preconfigured by the user. Another way to determine the default sensors is to keep a history of the sensors chosen by the user: the browser can record the mapping between sensors and sensor nodes chosen by the user, and when the same X3D scene is loaded later, the browser can reuse the mapping saved from the previous session.<br />
<br />
Asking the user interactively at run time not only maps appropriate sensors, but also provides a way to validate the use of sensors on the user's device and avoid privacy issues. Therefore, the browser must always ask the user for confirmation, even if it is able to map the sensors automatically.<br />
<br />
=== 2.1 CalibratedCameraSensor node ===<br />
The CalibratedCameraSensor node provides an interface to a camera device. The main information provided to the X3D scene through this node is the image stream captured with the camera, delivered through the node's 'image' field. In addition to the image stream, the node should also provide the internal parameters of the camera, used to calibrate the Viewpoint parameters and achieve correct composition of the MR scene. Four fields (focalPoint, fieldOfView, fovMode, and aspectRatio) provide these parameters, corresponding to the fields of the same names used in the Viewpoint node. Detailed descriptions of each field are given in section 4, where the Viewpoint node is described.<br />
<br />
<pre><br />
CalibratedCameraSensor : X3DSensorNode {<br />
SFBool [in,out] enabled TRUE<br />
SFNode [in,out] metadata NULL [X3DMetadataObject]<br />
SFBool [out] isActive<br />
<br />
SFString [in,out] description ""<br />
SFImage [out] image <br />
SFVec2f [out] focalPoint<br />
SFFloat [out] fieldOfView<br />
SFString [out] fovMode<br />
SFFloat [out] aspectRatio<br />
}<br />
</pre><br />
<br />
The browser should ask the user to choose which camera to use for each CalibratedCameraSensor node through the user interface (e.g. a dialog box). The browser shows the value of the "description" field to the user, providing a hint about what type of camera is expected. Predefined values can be used in the description field to let the browser automatically map to default sensors preconfigured by the user. Table 1 shows the predefined values for the description field of CalibratedCameraSensor nodes.<br />
<br />
{| border='1' <br />
|+ Table 1. Predefined values for the description field of CalibratedCameraSensor nodes<br />
! Predefined Value !! Description<br />
|-<br />
| USER_FACING<br />
| Camera that is facing towards the user.<br />
|-<br />
| WORLD_FACING<br />
| Camera that is facing towards the user’s view direction.<br />
|}<br />
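<br />
For instance, a scene that expects the rear (world-facing) camera of a handheld device could declare the sensor as follows. This is an illustrative sketch; the DEF name is arbitrary:<br />
<br />
<pre><br />
<!-- A predefined description value lets the browser map a preconfigured default camera --><br />
<CalibratedCameraSensor DEF='worldCam' description='WORLD_FACING' enabled='true' /><br />
</pre><br />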
<br />
<br />
=== 2.2 TrackingSensor node ===<br />
The TrackingSensor node provides an interface to motion-tracking information. The main information provided by this node is the position and orientation of the tracked physical object, delivered through the 'position' and 'rotation' fields respectively. The 'isPositionAvailable' and 'isRotationAvailable' fields are TRUE while the tracking target is successfully tracked and the value of the 'position' or 'rotation' field, respectively, is valid.<br />
<br />
<pre><br />
TrackingSensor : X3DSensorNode {<br />
SFBool [in,out] enabled TRUE<br />
SFNode [in,out] metadata NULL [X3DMetadataObject]<br />
SFBool [out] isActive<br />
<br />
SFString [in,out] description ""<br />
SFVec3f [out] position<br />
SFRotation [out] rotation<br />
SFBool [out] isPositionAvailable FALSE<br />
SFBool [out] isRotationAvailable FALSE<br />
}<br />
</pre><br />
<br />
The "description" string field defines the intended use of the tracking sensor, and is shown to the user to help them choose which tracking hardware to use. The value should state what kind of object the tracking sensor is intended to track, and what reference frame its coordinate system uses. Table 2 shows predefined values for the description field that help the browser automatically map to default sensors preconfigured by the user.<br />
<br />
{| border='1' <br />
|+ Table 2. Predefined values for the description field of TrackingSensor nodes<br />
! Predefined Value !! Description<br />
|-<br />
| VIEWPOINT_FROM_WORLD<br />
| For tracking viewpoint relative to the world coordinate. (e.g., useful for immersive displays to track user's viewpoint.)<br />
|-<br />
| OBJECT_FROM_WORLD<br />
| For tracking an arbitrary physical object relative to the world coordinate.<br />
|-<br />
| OBJECT_FROM_VIEWPOINT<br />
| For tracking an arbitrary physical object relative to the viewpoint coordinate. (e.g. useful for computer vision based AR tracking systems.)<br />
|}<br />
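<br />
As an illustration, viewpoint tracking for an immersive display could be wired up as follows. This is a sketch under the proposed node design; the DEF names are hypothetical, and the routes target the standard 'position' and 'orientation' fields of the Viewpoint node:<br />
<br />
<pre><br />
<TrackingSensor DEF='headTracker' description='VIEWPOINT_FROM_WORLD' /><br />
<Viewpoint DEF='userView' /><br />
<!-- Drive the virtual camera from the tracked head pose --><br />
<ROUTE fromNode='headTracker' fromField='position' toNode='userView' toField='position'/><br />
<ROUTE fromNode='headTracker' fromField='rotation' toNode='userView' toField='orientation'/><br />
</pre><br />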
<br />
== 3. Rendering video stream from camera ==<br />
To visualize an MR scene, the video stream image acquired from a sensor node should be rendered in the X3D scene. For AR visualization, the video stream should be rendered as the background of the virtual environment, while for AV visualization, the video stream is used as a texture on a virtual object.<br />
<br />
<br />
=== 3.1 Using video stream as a Texture ===<br />
To use the video stream image as a texture, no extension of the standard is needed. We can use the PixelTexture node, which is already available in the current version of the X3D specification. The video stream image from the CalibratedCameraSensor node's "image" field can be routed to the corresponding field of the PixelTexture node. The following example shows how this routing works.<br />
<br />
<pre><br />
<CalibratedCameraSensor DEF='camera' /><br />
...<br />
<PixelTexture DEF='tex' /><br />
...<br />
<ROUTE fromNode='camera' fromField='image' toNode='tex' toField='image'/><br />
</pre><br />
<br />
<br />
=== 3.2 Using video stream as a Background ===<br />
The Background node in the current X3D specification covers only environmental backgrounds in 3D space. Both the Background and TextureBackground nodes describe the environment around the user's viewpoint, represented as a colored sphere or a textured cube around the user. In both cases the background of the virtual scene is updated depending on the user's viewing direction. However, for AR visualization, the background of the virtual scene should always show the video stream from the camera sensor.<br />
While the Background and TextureBackground nodes represent a three-dimensional environmental background around the user, the AR background should work as a two-dimensional backdrop of the viewport on which the 3D scene is rendered. For this purpose we need a new node type that represents this kind of background, working as a 2D backdrop of the scene. We propose two new nodes for this purpose: the BackdropBackground and ImageBackdropBackground nodes. The structure of these nodes is described as follows:<br />
<br />
<pre><br />
BackdropBackground: X3DBackgroundNode {<br />
SFColor [in,out] color<br />
MFString [in,out] url<br />
}<br />
<br />
ImageBackdropBackground: X3DBackgroundNode {<br />
SFColor [in,out] color<br />
SFImage [in,out] image<br />
}<br />
</pre><br />
<br />
While only ImageBackdropBackground is necessary for AR applications, we also define the BackdropBackground node as a companion that parallels the node structure of the Background node.<br />
Feeding the video stream image from the camera sensor to the ImageBackdropBackground node is achieved by routing the 'image' field of the CalibratedCameraSensor node to the 'image' field of the ImageBackdropBackground node.<br />
<br />
<pre><br />
<CalibratedCameraSensor DEF='camera' /><br />
...<br />
<ImageBackdropBackground DEF='bg' /><br />
...<br />
<ROUTE fromNode='camera' fromField='image' toNode='bg' toField='image'/><br />
</pre><br />
<br />
The ImageBackdropBackground node automatically scales the image, preserving its aspect ratio, to fit the width or height of the viewport. As a result, the background image fills the entire viewport with no blank regions left uncovered; for example, a 640x480 camera image shown in an 800x450 viewport is scaled to 800x600 and cropped vertically.<br />
<br />
<br />
== 4. Camera calibration ==<br />
To ensure that the virtual world appears correctly registered to the real world in the MR scene, the camera parameters of the virtual camera should be calibrated to match those of the real camera. There are two types of camera parameters: internal and external. External parameters are the position and orientation of the camera in the world reference frame, while the internal parameters describe the projection of the 3D scene onto a 2D plane to produce a rendered image. <br />
The external parameters of a real camera are measured with tracking sensors, while the internal parameters are determined by the optical characteristics of the real camera. The internal and external parameters of the real camera can be fed into the X3D scene through the CalibratedCameraSensor and TrackingSensor nodes defined in section 2.<br />
The Viewpoint node in the X3D specification represents a virtual camera in the virtual scene. While the fields of the Viewpoint node cover the full set of external parameters (position and orientation), they cover only limited aspects of the internal parameters. To meet the minimum requirements for MR visualization, we propose adding two new fields (shown at the bottom) to the Viewpoint node.<br />
<br />
<pre><br />
Viewpoint: X3DViewpointNode {<br />
SFVec3f [in,out] centerOfRotation<br />
SFFloat [in,out] fieldOfView<br />
SFRotation [in,out] orientation<br />
SFVec3f [in,out] position<br />
<br />
SFString [in,out] fovMode<br />
SFFloat [in,out] aspectRatio<br />
}<br />
</pre><br />
<br />
In the current X3D specification, the "fieldOfView" field represents the minimum field of view (either vertical or horizontal) that the virtual camera will have. This is insufficient for MR visualization, which needs precise calibration of the field of view (FOV) parameter. While the straightforward approach would be to have explicit horizontal and vertical FOV parameters as individual fields, this is not compatible with the current specification. <br />
In order to keep backward compatibility with the current specification, we propose a "fovMode" field that designates what the value of the "fieldOfView" field represents. The "fovMode" field can have one of the following values: MINIMUM, VERTICAL, HORIZONTAL, or DIAGONAL. MINIMUM is the default value, under which "fieldOfView" is interpreted as a minimum FOV (either vertical or horizontal), as in the current specification. When "fovMode" has the value VERTICAL, HORIZONTAL, or DIAGONAL, "fieldOfView" is interpreted as the specific FOV in the vertical, horizontal, or diagonal direction, respectively.<br />
In addition to the "fovMode" field, the aspect ratio of the FOV of a real camera does not necessarily follow the aspect ratio of the image it produces. To accommodate this, the "aspectRatio" field is introduced, representing the ratio of the vertical FOV to the horizontal FOV (vertical/horizontal).<br />
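<br />
For example, a real camera calibrated with a 60-degree (roughly 1.047 radian) horizontal FOV and a vertical-to-horizontal FOV ratio of 0.75 could be expressed with the proposed fields as follows. This is an illustrative sketch; the DEF name and numeric values are hypothetical:<br />
<br />
<pre><br />
<!-- fieldOfView is in radians; fovMode declares it as the horizontal FOV --><br />
<Viewpoint DEF='calibratedView' position='0 0 0'<br />
    fieldOfView='1.047' fovMode='HORIZONTAL' aspectRatio='0.75' /><br />
</pre><br />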
<br />
<br />
== 5. Use cases ==<br />
<br />
The following example X3D scene shows how a simple AR scene can be described using the proposed nodes.<br />
<pre><br />
...<br />
<!-- Camera sensor providing the live video stream and calibration parameters --><br />
<CalibratedCameraSensor DEF='camera' /><br />
<br />
<!-- Show the camera image as a 2D backdrop behind the virtual scene --><br />
<ImageBackdropBackground DEF='bg' /><br />
<ROUTE fromNode='camera' fromField='image' toNode='bg' toField='image'/><br />
<br />
<!-- Calibrate the virtual camera with the real camera's internal parameters --><br />
<Viewpoint DEF='arview' position='0 0 0' /><br />
<ROUTE fromNode='camera' fromField='fieldOfView' toNode='arview' toField='fieldOfView'/><br />
<ROUTE fromNode='camera' fromField='fovMode' toNode='arview' toField='fovMode'/><br />
<ROUTE fromNode='camera' fromField='aspectRatio' toNode='arview' toField='aspectRatio'/><br />
<br />
<!-- Track a physical object relative to the camera view --><br />
<TrackingSensor DEF='tracker1' description='OBJECT_FROM_VIEWPOINT' /><br />
<br />
<!-- Virtual content registered to the tracked physical object --><br />
<Transform DEF='tracked_object'> <br />
 <Shape><br />
 <Appearance><Material diffuseColor="1 0 0" /></Appearance> <br />
 <Box /> <br />
 </Shape> <br />
</Transform> <br />
<br />
<ROUTE fromNode='tracker1' fromField='position' toNode='tracked_object' toField='translation'/><br />
<ROUTE fromNode='tracker1' fromField='rotation' toNode='tracked_object' toField='rotation'/><br />
...<br />
</pre></div>Endovert
<hr />
<div><br />
By [[X3D and Augmented Reality|Augmented Reality Working Group]], Web3D Consortium<br />
<br />
Feb 26, 2013<br />
<br />
The Augmented Reality Continuum Working Group (ARC WG) has been developing a proposal for extending X3D to support augmented and mixed reality visualization. As the proposal is reaching the state of its completion, the working group has decided to open the proposal into public and collect feedbacks from others, including our Web3D members, other working groups, and generally from anyone who is interested in AR and Web3D technology. The ARC WG would like to welcome all kinds of feedback that would be helpful to consolidate the proposal and advance into next level of extending the X3D specification to support AR and MR visualization.<br />
<br />
* Reviewing period: March, 2013<br />
* How to give feedback:<br />
** Use the "discussion" tab on the top of this page to give feedback and start discussions.<br />
** If you prefer e-mails, please mail your feedback to Gun Lee (ARC WG co-chair, endovert[at]postech.ac.kr)<br />
<br />
<br />
= Extending X3D for MR Visualization - Unified Proposal =<br />
<br />
== 1. Introduction ==<br />
<br />
This document describes an overview of the unified proposal for extending the X3D standard to support Mixed Reality (MR) visualization. Mixed Reality includes both, Augmented Reality (AR) and Augmented Virtuality (AV). The extension of the X3D standard proposed in this document is based on the comparison of three proposals: two from Web3D Korea Chapter (KC1 and KC2) and one from InstantReality (IR), Fraunhofer IGD. The details of the comparison can be found in the following public wiki page: http://web3d.org/x3d/wiki/index.php/Comparison_of_X3D_AR_Proposals.<br />
<br />
In this document we focus on the three main components that are necessary to achieve basic MR visualization: sensors, video stream rendering, and camera calibration. We try to minimize the changes to the current specification, but also try to make the solution to be generic enough so that it could be applied to various future applications besides MR visualization.<br />
<br />
In order to focus on consolidating the fundamental features, we leave out the following items/functions from the original proposals as future work.<br />
* High-level events for tracking from proposal KC2<br />
* Supporting color keying in texture from proposal KC1<br />
* Supporting correct occlusion between virtual and physical objects (Ghost object from Proposal KC1 and Color Mask + sortKey from IR)<br />
* Supporting generic type of sensors including those are not directly related to AR/MR visualization (Direct sensor nodes in IR)<br />
<br />
<br />
== 2. Sensors ==<br />
To achieve MR visualization, sensing the real environment is crucial. Two types of sensor that are necessary to support MR visualization are those for acquiring video stream images from a real camera and motion tracking information of physical objects. While the sensors could be generalized to acquiring any type of information from the real world, in this proposal, we focus on these two sensors that are crucial for MR visualization.<br />
<br />
In this proposal, two new nodes, CalibratedCameraSensor and TrackingSensor nodes, are proposed for representing interfaces for sensors that are essential for MR visualization.<br />
<br />
Since hardware and software setup (including X3D browser) vary between end users, it is not appropriate to describe specific devices or tracking technology to use within the scene. In fact, the author of the X3D scene can have no knowledge of what kind of hardware or software setup is available on the user’s side. Therefore, the X3D scene should only include the high-level description of the purpose of the use of sensor, in order to give hint to the browser and the user to choose appropriate hardware or software on the user’s setup that could meet the intended use. The “description” field is used to describe such intention of using the sensor. At run-time, the browser will show the value of the "description" field to the user through user interface (e.g. a dialog box), asking to choose an appropriate one from the list of sensors available in the local hardware/software setup. The user chooses the appropriate hardware to use for the sensor node, and in this way, users can view the X3D scene with the best option of hardware/software sensors available in his/her environment. <br />
<br />
In addition, the browser can be configured to use specific sensor hardware for the sensor nodes with certain predefined values in the “description” field. Each type of sensor node can have different set of predefined values for the description field, and these values are used by the browser to automatically map the default sensors that are preconfigured by the user. Another way to determine default sensors to use is to keep the history of the sensors chosen by the user. The browser can record the mapping of the sensors and sensor nodes chosen by the user, and when the same X3D scene is loaded later, the browser can use the mapping saved from the previous instance.<br />
<br />
Asking the user interactively in run-time is not only for mapping appropriate sensors, but also provides a method for validating the use of sensors on the user's device to avoid privacy issues. Therefore, the browser must always ask the user for confirmation, even if the browser is able to automatically map the sensors.<br />
<br />
=== 2.1 CalibratedCameraSensor node ===<br />
The CalibratedCameraSensor node provides an interface to a camera device. The main information provided into the X3D scene through this node is an image stream captured with the camera. The ‘image’ field of the node provides the image stream captured with the camera device. In addition to the image stream, the node should also provide internal parameters of the camera for calibration of the Viewpoint parameters to achieve correct composition of the MR scene. Four fields (focalPoint, fieldOfView, fovMode, and aspectRatio) provide such parameters that correspond to those fields used in the Viewpoint node. Detailed descriptions of each field are in section 3 where the Viewpoint node is described.<br />
<br />
<pre><br />
CalibratedCameraSensor : X3DSensorNode {<br />
SFBool [in,out] enabled TRUE<br />
SFNode [in,out] metadata NULL [X3DMetadataObject]<br />
SFBool [out] isActive<br />
<br />
SFString [in,out] description ""<br />
SFImage [out] image <br />
SFVec2f [out] focalPoint<br />
SFFloat [out] fieldOfView<br />
SFString [out] fovMode<br />
SFFloat [out] aspectRatio<br />
}<br />
</pre><br />
<br />
The browser should ask the user to choose which camera to use for each CalibratedCameraSensor node through the user interface (e.g. a dialog box). The browser will show the value of the "description" field to the user, providing hint on what type of camera is expected for use. Predefined values can be used in the description field to let the browser to automatically map the default sensors preconfigured by the user. Table 1 shows the predefined values for the description field of CalibratedCameraSensor nodes.<br />
<br />
{| border='1' <br />
|+ Table 1. Predefined values for the description field of CalibratedCameraSensor nodes<br />
! Predefined Value !! Description<br />
|-<br />
| USER_FACING<br />
| Camera that is facing towards the user.<br />
|-<br />
| WORLD_FACING<br />
| Camera that is facing towards the user’s view direction.<br />
|}<br />
<br />
<br />
=== 2.2 TrackingSensor node ===<br />
The TrackingSensor node provides an interface for motion tracking information. The main information provided in this node is position and orientation of the tracked physical object. These values are provided through ‘position’ and ‘rotation’ fields respectively. The ‘isPositionAvailable’ and ‘isRotationAvailable’ fields are TRUE if the tracking target is successfully tracked and the values of the ‘position’ or ‘rotation’ field is valid.<br />
<br />
<pre><br />
TrackingSensor : X3DSensorNode {<br />
SFBool [in,out] enabled TRUE<br />
SFNode [in,out] metadata NULL [X3DMetadataObject]<br />
SFBool [out] isActive<br />
<br />
SFString [in,out] description ""<br />
SFVec3f [out] position<br />
SFRotation [out] rotation<br />
SFBool [out] isPositionAvailable FALSE<br />
SFBool [out] isRotationAvailable FALSE<br />
}<br />
</pre><br />
<br />
The “description” string field defines the intended use of the tracking sensor, which will be provided to the user to help choosing the tracking hardware to use. The value should include what kind of object the tracking sensor is intended to track, and what reference frame it is using for the coordinate system. Table 2 shows predefined values for the description field that can help the browser to automatically map default sensors that are preconfigured by the user.<br />
<br />
{| border='1' <br />
|+ Table 2. Predefined values for the description field of TrackingSensor nodes<br />
! Predefined Value !! Description<br />
|-<br />
| VIEWPOINT_FROM_WORLD<br />
| For tracking viewpoint relative to the world coordinate. (e.g., useful for immersive displays to track user's viewpoint.)<br />
|-<br />
| OBJECT_FROM_WORLD<br />
| For tracking an arbitrary physical object relative to the world coordinate.<br />
|-<br />
| OBJECT_FROM_VIEWPOINT<br />
| For tracking an arbitrary physical object relative to the viewpoint coordinate. (e.g. useful for computer vision based AR tracking systems.)<br />
|}<br />
<br />
== 3. Rendering video stream from camera ==<br />
To visualize a MR scene, the video stream image acquired from a sensor node should be rendered in the X3D scene. For AR visualization, the video stream should be rendered as a background of the virtual environment, while in the AV visualization, the video stream is used as a texture of a virtual object.<br />
<br />
<br />
=== 3.1 Using video stream as a Texture ===<br />
For using the video stream image as a texture, no extension of the standard is needed. We can use the PixelTexture node, which is already available in the current version of the X3D specification. The video stream image from the CalibratedCameraSensor node’s “image” field can be routed to the corresponding field of the PixelTexture node. The following example shows how this routing works.<br />
<br />
<pre><br />
<CalibratedCameraSensor DEF=”camera” /><br />
...<br />
<PixelTexture DEF=”tex” /><br />
...<br />
<ROUTE fromNode='camera' fromField='image' toNode='tex' toField='image'/><br />
</pre><br />
<br />
<br />
=== 3.2 Using video stream as a Background ===<br />
The Background node in the current X3D specification covers only environmental backgrounds in 3D space. Both Background and TextureBackground nodes describe environment around the user’s viewpoint, represented as a colored sphere or a textured cube around the user . In both cases the background of the virtual scene gets updated depending on the viewing direction of the user. However, for AR visualization, the background of the virtual scene should always show the video stream from the camera sensor.<br />
While the Background node and TextureBackground node represent a three dimensional environmental background around the user, the AR background should work as a two dimensional backdrop of the viewport where the 3D scene is rendered on. For this purpose we need a new node type that could represent these kinds of background that work as a 2D backdrop of the scene. We propose two new nodes for this purpose: BackdropBackground and ImageBackdropBackground nodes. The node structure of these nodes are described as the following:<br />
<br />
<pre><br />
BackdropBackground: X3DBackgroundNode {<br />
SFColor [in,out] color<br />
MFString [in,out] url<br />
}<br />
<br />
ImageBackdropBackground: X3DBackgroundNode {<br />
SFColor [in,out] color<br />
SFImage [in,out] image<br />
}<br />
</pre><br />
<br />
While only ImageBackdropBackground is necessary for AR application, we also define BackdropBackground node as a another node that corresponds to the node structure of the Background node.<br />
Feeding the video stream image from the camera sensor to the ImageBackdropBackround node can be achieved by routing the ‘image’ field of the CalibratedCameraSensor node to the ‘image’ field of the ImageBackdropBackground node.<br />
<br />
<pre><br />
<CalibratedCameraSensor DEF=”camera” /><br />
...<br />
<ImageBackdropBackground DEF=”bg” /><br />
...<br />
<ROUTE fromNode='camera' fromField='image' toNode=bg toField='image'/><br />
</pre><br />
<br />
The ImageBackdropBackround will automatically scale the image and fit the width or height of the image to that of the viewport while retaining the aspect ratio. As a result, the background image will fill the entire viewport so that there are no blank region left uncovered by the image background.<br />
<br />
<br />
== 4. Camera calibration ==<br />
To assure the virtual world appears correctly registered to the real world in the MR scene, the camera parameters of the virtual camera should be calibrated to match those of the real camera. There are two types of camera parameters: internal and external parameters. External parameters are position and orientation of the camera in the world reference frame, while the internal parameters represent the projection of the 3D scene onto a 2D plane to produce a rendered image of the 3D scene. <br />
The external parameters of a real camera is measured with tracking sensors, while the internal parameters are defined from the optical features of the real camera. The internal and external parameters of the real camera can be fed into the X3D scene through the CalibratedCameraSensor node and TrackingSensor node defined in section 2.<br />
The Viewpoint node in the X3D specification represents a virtual camera in the virtual scene. While the fields of the Viewpoint node cover the full set of external parameters (position and orientation), it only has fields that cover limited aspects of the internal parameters. To meet the minimum requirements for achieving MR visualization, we propose adding the two new fields (at bottom) to the Viewpoint node.<br />
<br />
<pre><br />
Viewpoint: X3DViewpointNode {<br />
SFVec3f [in,out] centerOfRotation<br />
SFFloat [in,out] fieldOfView<br />
SFRotation [in,out] orientation<br />
SFVec3f [in,out] position<br />
<br />
SFString [in,out] fovMode<br />
SFFloat [in,out] aspectRatio<br />
}<br />
</pre><br />
<br />
In the current X3D specification, the “fieldOfView” field represents minimum field of view (either vertical or horizontal) that the virtual camera will have. This is insufficient for MR visualization, which needs precise calibration of the field of view (FOV) parameter. While the straightforward way would be explicitly having both horizontal and vertical FOV parameters as individual fields, this is not compatible with the current specification. <br />
In order to keep backward compatibility with the current specification, we propose having a “fovMode” field which designates what does the value of the “fieldOfView” field represent. The “fovMode” field can have one of the following values: MINIMUM, VERTICAL, HORIZONTAL, or DIAGONAL. The value MINIMUM is the default value for the “fovMode” field which represents the value of the “fieldOfView” is considered as a minimum FOV (either vertical or horizontal), as it is in the current specification. When the “fovMode” field has the value of VERTICAL, HORIZONTAL, or DIAGONAL, the “fieldOfView” is considered as specific values of FOV in vertical, horizontal, or diagonal direction, respectively.<br />
In addition to the “fovMode” field, the aspect ratio of the FOV in real cameras might not necessarily follow the aspect ratio of the image size it produces. To accommodate this feature, the “aspectRatio” field is introduced which represents the ratio of vertical FOV to the horizontal FOV (vertical/horizontal).<br />
<br />
<br />
== 5. Use cases ==<br />
<br />
The following example X3D scene shows how a simple AR scene can be described using the proposed nodes.<br />
<pre><br />
...<br />
<CalibratedCameraSensor DEF=”camera” /><br />
<br />
<ImageBackdropBackground DEF=”bg” /><br />
<ROUTE fromNode=”camera” fromField=”value” toNode=”bg” toField=”image”/><br />
<br />
<Viewpoint DEF=”arview” position=”0 0 0” /><br />
<ROUTE fromNode=”camera” fromField=”fieldOfView” toNode=”arview” toField=”fieldOfView”/><br />
<ROUTE fromNode=”camera” fromField=”fovMode” toNode=”arview” toField=”fovMode”/><br />
<ROUTE fromNode=”camera” fromField=”aspectRatio” toNode=”arview” toField=”aspectRatio”/><br />
<br />
<br />
<TrackingSensor DEF=”tracker1” description=”OBJECT_FROM_VIEWPOINT” /><br />
<br />
<Transform DEF=”tracked_object”> <br />
<Shape><br />
<Appearance><Material diffuseColor="1 0 0" /></Appearance> <br />
<Box /> <br />
</Shape> <br />
</Transform> <br />
<br />
<ROUTE fromNode=”tracker1” fromField=”position” toNode=”tracked_object” toField=”position”/><br />
<ROUTE fromNode=”tracker1” fromField=”rotation” toNode=”tracked_object” toField=”rotation”/><br />
...<br />
</pre></div>Endoverthttps://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review&diff=6679AR Proposal Public Review2013-02-27T02:01:07Z<p>Endovert: </p>
<hr />
<div>-Working draft <br />
<br />
By [[X3D and Augmented Reality|Augmented Reality Working Group]], Web3D Consortium<br />
<br />
Feb 26, 2013<br />
<br />
The Augmented Reality Continuum Working Group (ARC WG) has been developing a proposal for extending X3D to support augmented and mixed reality visualization. As the proposal is reaching the state of its completion, the working group has decided to open the proposal into public and collect feedbacks from others, including our Web3D members, other working groups, and generally from anyone who is interested in AR and Web3D technology. The ARC WG would like to welcome all kinds of feedback that would be helpful to consolidate the proposal and advance into next level of extending the X3D specification to support AR and MR visualization.<br />
<br />
* Reviewing period: March, 2013<br />
* How to give feedback:<br />
** Use the "discussion" tab on the top of this page to give feedback and start discussions.<br />
** If you prefer e-mails, please mail your feedback to Gun Lee (ARC WG co-chair, endovert[at]postech.ac.kr)<br />
<br />
<br />
= Extending X3D for MR Visualization - Unified Proposal =<br />
<br />
== 1. Introduction ==<br />
<br />
This document describes an overview of the unified proposal for extending the X3D standard to support Mixed Reality (MR) visualization. Mixed Reality includes both, Augmented Reality (AR) and Augmented Virtuality (AV). The extension of the X3D standard proposed in this document is based on the comparison of three proposals: two from Web3D Korea Chapter (KC1 and KC2) and one from InstantReality (IR), Fraunhofer IGD. The details of the comparison can be found in the following public wiki page: http://web3d.org/x3d/wiki/index.php/Comparison_of_X3D_AR_Proposals.<br />
<br />
In this document we focus on the three main components that are necessary to achieve basic MR visualization: sensors, video stream rendering, and camera calibration. We try to minimize the changes to the current specification, but also try to make the solution to be generic enough so that it could be applied to various future applications besides MR visualization.<br />
<br />
In order to focus on consolidating the fundamental features, we leave out the following items/functions from the original proposals as future work.<br />
* High-level events for tracking from proposal KC2<br />
* Supporting color keying in texture from proposal KC1<br />
* Supporting correct occlusion between virtual and physical objects (Ghost object from Proposal KC1 and Color Mask + sortKey from IR)<br />
* Supporting generic type of sensors including those are not directly related to AR/MR visualization (Direct sensor nodes in IR)<br />
<br />
<br />
== 2. Sensors ==<br />
To achieve MR visualization, sensing the real environment is crucial. Two types of sensor that are necessary to support MR visualization are those for acquiring video stream images from a real camera and motion tracking information of physical objects. While the sensors could be generalized to acquiring any type of information from the real world, in this proposal, we focus on these two sensors that are crucial for MR visualization.<br />
<br />
In this proposal, two new nodes, CalibratedCameraSensor and TrackingSensor nodes, are proposed for representing interfaces for sensors that are essential for MR visualization.<br />
<br />
Since hardware and software setup (including X3D browser) vary between end users, it is not appropriate to describe specific devices or tracking technology to use within the scene. In fact, the author of the X3D scene can have no knowledge of what kind of hardware or software setup is available on the user’s side. Therefore, the X3D scene should only include the high-level description of the purpose of the use of sensor, in order to give hint to the browser and the user to choose appropriate hardware or software on the user’s setup that could meet the intended use. The “description” field is used to describe such intention of using the sensor. At run-time, the browser will show the value of the "description" field to the user through user interface (e.g. a dialog box), asking to choose an appropriate one from the list of sensors available in the local hardware/software setup. The user chooses the appropriate hardware to use for the sensor node, and in this way, users can view the X3D scene with the best option of hardware/software sensors available in his/her environment. <br />
<br />
In addition, the browser can be configured to use specific sensor hardware for sensor nodes whose "description" field holds certain predefined values. Each type of sensor node can have a different set of predefined values for the description field, and the browser uses these values to automatically map the default sensors preconfigured by the user. Another way to determine default sensors is to keep a history of the sensors chosen by the user: the browser can record the mapping between sensors and sensor nodes, and when the same X3D scene is loaded later, reuse the mapping saved from the previous session.<br />
<br />
Asking the user interactively at run time not only maps appropriate sensors, but also provides a way to validate the use of sensors on the user's device and avoid privacy issues. Therefore, the browser must always ask the user for confirmation, even when it is able to map the sensors automatically.<br />
<br />
=== 2.1 CalibratedCameraSensor node ===<br />
The CalibratedCameraSensor node provides an interface to a camera device. The main information provided to the X3D scene through this node is the image stream captured by the camera, exposed through the node's 'image' field. In addition to the image stream, the node should also provide the internal parameters of the camera, used to calibrate the Viewpoint so that the MR scene is composited correctly. Four fields (focalPoint, fieldOfView, fovMode, and aspectRatio) provide these parameters, corresponding to the fields of the same names used in the Viewpoint node. Detailed descriptions of each field are given in section 4, where the Viewpoint node is described.<br />
<br />
<pre><br />
CalibratedCameraSensor : X3DSensorNode {<br />
SFBool [in,out] enabled TRUE<br />
SFNode [in,out] metadata NULL [X3DMetadataObject]<br />
SFBool [out] isActive<br />
<br />
SFString [in,out] description ""<br />
SFImage [out] image <br />
SFVec2f [out] focalPoint<br />
SFFloat [out] fieldOfView<br />
SFString [out] fovMode<br />
SFFloat [out] aspectRatio<br />
}<br />
</pre><br />
<br />
The browser should ask the user to choose which camera to use for each CalibratedCameraSensor node through its user interface (e.g. a dialog box). The browser shows the value of the "description" field to the user, providing a hint about what type of camera is expected. Predefined values can be used in the description field to let the browser automatically map the default sensors preconfigured by the user. Table 1 shows the predefined values for the description field of CalibratedCameraSensor nodes; a usage sketch follows the table.<br />
<br />
{| border='1' <br />
|+ Table 1. Predefined values for the description field of CalibratedCameraSensor nodes<br />
! Predefined Value !! Description<br />
|-<br />
| USER_FACING<br />
| Camera that is facing towards the user.<br />
|-<br />
| WORLD_FACING<br />
| Camera that is facing towards the user’s view direction.<br />
|}<br />
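<br />
For example, a scene that expects the camera looking out into the real world could declare the sensor with the corresponding predefined value (a minimal sketch under the proposed semantics; the DEF name is hypothetical):<br />
<br />
<pre><br />
<CalibratedCameraSensor DEF='worldCam' description='WORLD_FACING' /><br />
</pre><br />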
<br />
<br />
=== 2.2 TrackingSensor node ===<br />
The TrackingSensor node provides an interface to motion tracking information. The main information provided by this node is the position and orientation of the tracked physical object, exposed through the 'position' and 'rotation' fields respectively. The 'isPositionAvailable' and 'isRotationAvailable' fields are TRUE if the tracking target is successfully tracked and the value of the 'position' or 'rotation' field, respectively, is valid.<br />
<br />
<pre><br />
TrackingSensor : X3DSensorNode {<br />
SFBool [in,out] enabled TRUE<br />
SFNode [in,out] metadata NULL [X3DMetadataObject]<br />
SFBool [out] isActive<br />
<br />
SFString [in,out] description ""<br />
SFVec3f [out] position<br />
SFRotation [out] rotation<br />
SFBool [out] isPositionAvailable FALSE<br />
SFBool [out] isRotationAvailable FALSE<br />
}<br />
</pre><br />
<br />
The "description" string field defines the intended use of the tracking sensor and is shown to the user to help choose the tracking hardware. The value should state what kind of object the tracking sensor is intended to track and what reference frame its coordinate system uses. Table 2 shows predefined values for the description field that help the browser automatically map default sensors preconfigured by the user; a usage sketch follows the table.<br />
<br />
{| border='1' <br />
|+ Table 2. Predefined values for the description field of TrackingSensor nodes<br />
! Predefined Value !! Description<br />
|-<br />
| VIEWPOINT_FROM_WORLD<br />
| For tracking the viewpoint relative to the world coordinate frame (e.g. useful for immersive displays that track the user's viewpoint).<br />
|-<br />
| OBJECT_FROM_WORLD<br />
| For tracking an arbitrary physical object relative to the world coordinate frame.<br />
|-<br />
| OBJECT_FROM_VIEWPOINT<br />
| For tracking an arbitrary physical object relative to the viewpoint coordinate frame (e.g. useful for computer-vision-based AR tracking systems).<br />
|}<br />
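<br />
For example, under the proposed semantics a VIEWPOINT_FROM_WORLD tracker could drive the virtual camera directly, since the Viewpoint node already exposes matching 'position' and 'orientation' input fields (a minimal sketch; the DEF names are hypothetical):<br />
<br />
<pre><br />
<TrackingSensor DEF='headTracker' description='VIEWPOINT_FROM_WORLD' /><br />
<Viewpoint DEF='view' /><br />
<ROUTE fromNode='headTracker' fromField='position' toNode='view' toField='position'/><br />
<ROUTE fromNode='headTracker' fromField='rotation' toNode='view' toField='orientation'/><br />
</pre><br />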
<br />
== 3. Rendering video stream from camera ==<br />
To visualize an MR scene, the video stream image acquired from a sensor node should be rendered in the X3D scene. For AR visualization, the video stream should be rendered as the background of the virtual environment, while for AV visualization, the video stream is used as a texture on a virtual object.<br />
<br />
<br />
=== 3.1 Using video stream as a Texture ===<br />
To use the video stream image as a texture, no extension of the standard is needed. We can use the PixelTexture node, which is already available in the current version of the X3D specification. The video stream image from the CalibratedCameraSensor node's 'image' field can be routed to the corresponding field of the PixelTexture node. The following example shows how this routing works.<br />
<br />
<pre><br />
<CalibratedCameraSensor DEF='camera' /><br />
...<br />
<PixelTexture DEF='tex' /><br />
...<br />
<ROUTE fromNode='camera' fromField='image' toNode='tex' toField='image'/><br />
</pre><br />
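<br />
For context, the routed PixelTexture is applied like any other texture node; a slightly fuller sketch (the Box geometry and DEF names are illustrative assumptions) could look like the following:<br />
<br />
<pre><br />
<CalibratedCameraSensor DEF='camera' /><br />
<Shape><br />
<Appearance><br />
<PixelTexture DEF='tex' /><br />
</Appearance><br />
<Box /><br />
</Shape><br />
<ROUTE fromNode='camera' fromField='image' toNode='tex' toField='image'/><br />
</pre><br />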
<br />
<br />
=== 3.2 Using video stream as a Background ===<br />
The Background node in the current X3D specification covers only environmental backgrounds in 3D space. Both the Background and TextureBackground nodes describe the environment around the user's viewpoint, represented as a colored sphere or a textured cube around the user. In both cases the background of the virtual scene is updated depending on the viewing direction of the user. However, for AR visualization, the background of the virtual scene should always show the video stream from the camera sensor.<br />
While the Background and TextureBackground nodes represent a three-dimensional environmental background around the user, the AR background should work as a two-dimensional backdrop of the viewport on which the 3D scene is rendered. For this purpose we need a new node type representing this kind of background, one that works as a 2D backdrop of the scene. We propose two new nodes for this purpose: BackdropBackground and ImageBackdropBackground. Their node structures are as follows:<br />
<br />
<pre><br />
BackdropBackground: X3DBackgroundNode {<br />
SFColor [in,out] color<br />
MFString [in,out] url<br />
}<br />
<br />
ImageBackdropBackground: X3DBackgroundNode {<br />
SFColor [in,out] color<br />
SFImage [in,out] image<br />
}<br />
</pre><br />
<br />
While only ImageBackdropBackground is necessary for AR applications, we also define the BackdropBackground node as a counterpart whose structure corresponds to that of the Background node.<br />
Feeding the video stream image from the camera sensor to the ImageBackdropBackground node is achieved by routing the 'image' field of the CalibratedCameraSensor node to the 'image' field of the ImageBackdropBackground node.<br />
<br />
<pre><br />
<CalibratedCameraSensor DEF='camera' /><br />
...<br />
<ImageBackdropBackground DEF='bg' /><br />
...<br />
<ROUTE fromNode='camera' fromField='image' toNode='bg' toField='image'/><br />
</pre><br />
<br />
The ImageBackdropBackground node automatically scales the image, fitting its width or height to that of the viewport while retaining the aspect ratio. As a result, the background image fills the entire viewport so that no blank regions are left uncovered.<br />
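<br />
For completeness, the file-based BackdropBackground variant would be declared in the same way; a minimal sketch (the image file name is hypothetical):<br />
<br />
<pre><br />
<BackdropBackground DEF='bg2' color='0 0 0' url='"backdrop.png"' /><br />
</pre><br />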
<br />
<br />
== 4. Camera calibration ==<br />
To ensure the virtual world appears correctly registered to the real world in the MR scene, the camera parameters of the virtual camera should be calibrated to match those of the real camera. There are two types of camera parameters: internal and external. External parameters are the position and orientation of the camera in the world reference frame, while internal parameters describe the projection of the 3D scene onto a 2D plane to produce the rendered image. <br />
The external parameters of a real camera are measured with tracking sensors, while the internal parameters are determined by the optical characteristics of the real camera. Both can be fed into the X3D scene through the CalibratedCameraSensor and TrackingSensor nodes defined in section 2.<br />
The Viewpoint node in the X3D specification represents the virtual camera in the virtual scene. While the fields of the Viewpoint node cover the full set of external parameters (position and orientation), they cover only limited aspects of the internal parameters. To meet the minimum requirements for MR visualization, we propose adding two new fields to the Viewpoint node, shown at the bottom of the signature below.<br />
<br />
<pre><br />
Viewpoint: X3DViewpointNode {<br />
SFVec3f [in,out] centerOfRotation<br />
SFFloat [in,out] fieldOfView<br />
SFRotation [in,out] orientation<br />
SFVec3f [in,out] position<br />
<br />
SFString [in,out] fovMode<br />
SFFloat [in,out] aspectRatio<br />
}<br />
</pre><br />
<br />
In the current X3D specification, the "fieldOfView" field represents the minimum field of view (either vertical or horizontal) that the virtual camera will have. This is insufficient for MR visualization, which needs precise calibration of the field of view (FOV). While the straightforward approach would be to have explicit horizontal and vertical FOV parameters as individual fields, this would not be compatible with the current specification. <br />
To keep backward compatibility with the current specification, we propose a "fovMode" field that designates what the value of the "fieldOfView" field represents. The "fovMode" field can have one of the following values: MINIMUM, VERTICAL, HORIZONTAL, or DIAGONAL. MINIMUM is the default and means the value of "fieldOfView" is treated as a minimum FOV (either vertical or horizontal), as in the current specification. When "fovMode" is VERTICAL, HORIZONTAL, or DIAGONAL, "fieldOfView" is treated as the exact FOV in the vertical, horizontal, or diagonal direction, respectively.<br />
In addition to the "fovMode" field, the aspect ratio of the FOV of a real camera does not necessarily match the aspect ratio of the images it produces. To accommodate this, the "aspectRatio" field is introduced, representing the ratio of the vertical FOV to the horizontal FOV (vertical/horizontal).<br />
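<br />
As a hypothetical worked example of these semantics, consider a camera reporting a 60-degree horizontal FOV and an FOV aspect ratio of 0.75:<br />
<br />
<pre><br />
fieldOfView = 1.047          (60 degrees, in radians)<br />
fovMode     = "HORIZONTAL"<br />
aspectRatio = 0.75           (vertical FOV / horizontal FOV)<br />
<br />
horizontal FOV = fieldOfView               = 1.047 (60 degrees)<br />
vertical FOV   = aspectRatio * fieldOfView = 0.785 (45 degrees)<br />
</pre><br />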
<br />
<br />
== 5. Use cases ==<br />
<br />
The following example X3D scene shows how a simple AR scene can be described using the proposed nodes.<br />
<pre><br />
...<br />
<CalibratedCameraSensor DEF='camera' /><br />
<br />
<ImageBackdropBackground DEF='bg' /><br />
<ROUTE fromNode='camera' fromField='image' toNode='bg' toField='image'/><br />
<br />
<Viewpoint DEF='arview' position='0 0 0' /><br />
<ROUTE fromNode='camera' fromField='fieldOfView' toNode='arview' toField='fieldOfView'/><br />
<ROUTE fromNode='camera' fromField='fovMode' toNode='arview' toField='fovMode'/><br />
<ROUTE fromNode='camera' fromField='aspectRatio' toNode='arview' toField='aspectRatio'/><br />
<br />
<br />
<TrackingSensor DEF='tracker1' description='OBJECT_FROM_VIEWPOINT' /><br />
<br />
<Transform DEF='tracked_object'><br />
<Shape><br />
<Appearance><Material diffuseColor='1 0 0' /></Appearance><br />
<Box /><br />
</Shape><br />
</Transform><br />
<br />
<ROUTE fromNode='tracker1' fromField='position' toNode='tracked_object' toField='translation'/><br />
<ROUTE fromNode='tracker1' fromField='rotation' toNode='tracked_object' toField='rotation'/><br />
...<br />
</pre></div>Endoverthttps://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review&diff=6676AR Proposal Public Review2013-02-27T01:49:08Z<p>Endovert: </p>
<hr />
<div>-Working draft <br />
<br />
By [[X3D and Augmented Reality|Augmented Reality Working Group]], Web3D Consortium<br />
<br />
Feb 25, 2013<br />
<br />
The Augmented Reality Continuum Working Group (ARC WG) has been developing a proposal for extending X3D to support augmented and mixed reality visualization. As the proposal nears completion, the working group has decided to open it to the public and collect feedback from others, including Web3D members, other working groups, and anyone interested in AR and Web3D technology. The ARC WG welcomes any feedback that helps consolidate the proposal and advance to the next stage of extending the X3D specification to support AR and MR visualization.<br />
<br />
* Reviewing period: March, 2013<br />
* How to give feedback:<br />
** Use the "discussion" tab on the top of this page to give feedback and start discussions.<br />
** If you prefer e-mails, please mail your feedback to Gun Lee (ARC WG co-chair, endovert[at]postech.ac.kr)<br />
<br />
<br />
</div>Endoverthttps://www.web3d.org/wiki/index.php?title=X3D_and_Augmented_Reality&diff=6675X3D and Augmented Reality2013-02-27T01:48:22Z<p>Endovert: </p>
<hr />
<div>= Calendar: Meetings and Events =<br />
<br />
== Meetings ==<br />
Our monthly teleconference meeting for X3D and Augmented Reality is usually:<br />
* 17:00-18:00 Pacific time on 3rd Wednesday, which is 10:00-11:00 Thursday in Korea and 02:00-03:00 Thursday in Europe.<br />
<br />
The schedule is subject to change based on the time zone of most expected attendees. The meeting is held on the Web3D Consortium teleconference line.<br />
<br />
Our next teleconference meeting is:<br />
* Mar 20 Wed 2013 at 17:00 (US Pacific) / Mar 21 Thu 10:00 (Korea) 2013<br />
<br />
<br />
== Public Review of AR Proposal ==<br />
We are now collecting feedback on the AR proposal the WG has been working on. The public review is held during March 2013.<br />
Details can be found at [http://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review this link].<br />
<br />
<br />
== Events ==<br />
* [http://www.perey.com/ARStandards/eighth-ar-standards-community-meeting AR Standards Community Meeting, March 1-2, 2013, Barcelona, Spain]<br />
* Web 3D and ISO/IEC JTC1 SC24 WG9 meetings, Jan 28-31, 2013, Seoul, Korea<br />
* AR Standards Community Meeting - November 8-9, 2012 - Atlanta, US.<br />
* ISO/IEC JTC1 SC24 Plenary and WG6, WG9 meetings, August 20-24, 2012, Brussels, Belgium <br />
* [http://s2012.siggraph.org/attendees/birds-feather Augmented and Mixed Reality Web3D BOF at SIGGRAPH 2012, August 8, 2012, Los Angeles, CA, USA]<br />
* [http://web3d2012.org Web3D Conference, August 4-5, 2012, Los Angeles, CA, USA]<br />
* OGC TP/PC Meeting - AR Working Group - Sept 19-23, 2011, Boulder, CO<br />
* W3C TPAC Meeting W3C AR Community Group - Oct 31 - Nov 4, 2011, Santa Clara, CA<br />
* AR Standards Community Meeting - Oct 24, 25, 2011 - Basel, Switzerland<br />
* ISO JTC Meeting - Nov 7 - 10, 2011 - San Diego, CA<br />
* [http://www.perey.com/ARStandards/third-international-ar-standards-meeting/ Third International AR Standards Community Meeting, June 15-17, Taichung, Taiwan]<br />
* [http://web3d2011.org/ Augmented/Mixed Reality Workshop at Web3D Conference, 10:50AM-12:10PM Local Time, June 21, Paris, France]<br />
* [http://www.siggraph.org/s2011/for_attendees/birds-feather Augmented and Mixed Reality Web3D BOF at SIGGRAPH 2011, August 2011, Vancouver, Canada]<br />
* SC24 Augmented and Mixed Reality Study Group Meeting @ SC24 Plenary and Working Group Meetings, August 21 2011, Rapid City, South Dakota, USA<br />
<br />
<!-- [[Upcoming X3D events]] --><br />
<br />
= Charter =<br />
<br />
== Overview ==<br />
The Augmented Reality (AR) Working Group focuses on utilizing and extending X3D capabilities to support augmented reality (AR) and mixed reality (MR) applications.<br />
<br />
''Discussion.'' These X3D AR discussions were initially held as part of a special interest group. Now that we have determined sufficient interest exists to modify the X3D Specification, this effort is moving forward by forming the X3D AR Working Group.<br />
<br />
== Goals ==<br />
Planned goals of AR WG include:<br />
* Collect requirements and describe typical use cases for using X3D in AR/MR applications<br />
* Produce and propose X3D components for AR/MR scenes and applications<br />
* Produce sample AR/MR applications using X3D to demonstrate how this functionality can work correctly<br />
<br />
== Tasks ==<br />
* Investigate state-of-the-art technologies related to X3D and AR/MR to collect requirements and use cases<br />
** Archive and distribute collected requirements and use cases through AR WG wiki page<br />
* Hold regular meetings and workshops to motivate discussions and editing of AR/MR related X3D specification proposals<br />
** Regular meetings will be held through teleconferencing and workshops will be planned through regular meetings<br />
* Promote X3D in AR/MR field by developing and distributing promotional materials of AR applications based on X3D<br />
** Promotional materials include sample applications, video clips, documents, images distributed on the web<br />
<br />
== Deliverables and Timeline ==<br />
<br />
[http://web3d.org/x3d/wiki/index.php/Plans_for_Merging_X3D_AR_Proposals Plans for Merging X3D AR Proposals] describes our detailed path forward through this challenging space.<br />
<br />
* August 2011: [http://www.web3d.org/x3d/wiki/index.php/X3D_AR_Requirements_and_Use_cases Use cases and requirements of X3D for AR/MR application] <br />
* March 2012: [http://web3d.org/x3d/wiki/index.php/Comparison_of_X3D_AR_Proposals Comparison of existing proposals] from Fraunhofer team, Dr. Gun Lee and Dr. Gerry Kim<br />
* August 2012: SIGGRAPH public progress review<br />
* February 2013: [http://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review Merged proposal for X3D AR Extensions] draft ready for member review<br />
* March 2013: Public expert review and comment period<br />
* May 2013: build example scenes and draft specification prose for new functionality and encodings, including XML validation<br />
* June 2013: [http://www.web3d2013.org Web3D 2013 Conference] papers, review sample AR/MR applications with X3D<br />
* July 2013: SIGGRAPH public progress review<br />
* Ongoing: contributions and alignment with ISO SC24 Working Group 9, AR Continuum Abstract Model<br />
<br />
== Participants ==<br />
* Anita Havele<br />
* Don Brutzman<br />
* Gerard J. Kim<br />
* Gun Lee<br />
* Leonard Daly<br />
* Myeongwon Lee<br />
* Oliver Neubauer<br />
* Sabine Webel<br />
* Timo Engelke<br />
* Yvonne Jung<br />
<br />
== Working Group Meeting Routine ==<br />
Regular meetings are held monthly via teleconference.<br />
Participation is open to everyone via the Web3D teleconference line. <br />
<br />
Non-members can [mailto:anita.havele@web3d.org?subject=AR%20teleconference%20request request access information] for this call, or [http://web3d.org/membership/join Join Web3D] for full member privileges!<br />
<br />
Meeting agenda and minutes are also distributed on the [mailto:x3d-public@web3d.org?subject=X3D%20AR%20Working%20Group x3d-public@web3d.org] mailing list and [http://www.web3d.org/membership/login/list_archives archived online].<br />
<br />
= Augmented Reality Roadmap for X3D =<br />
The [[Augmented Reality Roadmap for X3D]] is a description document charting shared strategies and our way forward.<br />
<br />
= History and Background Information =<br />
The Web3D Consortium formed a special interest group on AR initiatives in July 2009, which worked to help create the AR Technology Road Map.<br />
<br />
Several Web3D Consortium member projects showcase the feasibility of AR in X3D, particularly [http://www.x3dom.org X3DOM] open source produced by [http://www.igd.fhg.de/www/igd-a4 Fraunhofer IGD]. <br />
<br />
* X3DOM can serve as an out of the box, standards-based solution for AR developers.<br />
* X3D and X3DOM continue to improve scripting options for X3D with HTML5 and this has good promise for AR applications. <br />
* X3DOM is being considered for potential standardization in a Mobile and/or Augmented Reality (AR) profile for X3D.<br />
<br />
Our [http://www.web3d.kr Web3D Korea Chapter members] from ETRI are working on Mixed Reality visualization in X3D.<br />
<br />
* The Consortium has been working closely within the W3C HTML5 WG to align our standards for 3D visualization on the Web. <br />
<br />
Additional details are available at: <br />
<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_and_HTML5 X3D and HTML5]<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_and_HTML_FAQ X3D and HTML FAQ]<br />
* [http://www.x3dom.org X3DOM]<br />
<br />
Web3D Consortium is also engaged with the [http://www.arstandards.org AR Standards Community], [http://opengeospacial.org OGC], [http://www.khronos.org Khronos] and [http://w3c.org W3C] organizations for applying and adapting X3D.<br />
<br />
* Web3D Consortium will be starting an AR working group in March 2011 to develop a roadmap for AR standardization. We encourage your participation. <br />
* The technology discussions and meetings will be conducted publicly so that the widest possible comment and reaction can be considered from the Community. Feedback from this community will help X3D quickly and stably adopt new technologies to provide ongoing archival value for all 3D graphics.<br />
* Meanwhile Web3D Members still retain important membership rights of proposing significant new technology and considering patented technologies within a "safe haven" prior to public release.<br />
<br />
The [http://www.web3d.org/x3d/wiki/images/3/32/X3dProfilePossibilitiesMobileHtml5AR.2010June29.pdf X3D Mobile Profile slideset]<br />
from last summer's Mobile X3D ISO Workshop has also been linked, showing how Mobile, HTML5, and possibly Augmented Reality (AR) components<br />
can be aligned together.<br />
<br />
Many new Web3D capabilities are becoming available. There has been no better time to discuss X3D technologies and also join the [http://web3d.org/membersip/join Web3D Consortium]. Get involved in these early discussions to create the AR Technology Road Map. We look forward to your participation.<br />
<br />
[http://www.web3d.org/wiki/index.php/Summary_Of_Old_AR_Proposals Summary of Old AR Proposals]<br />
<br />
== Developing X3D AR Specification - Proposals ==<br />
The working group has reviewed the existing proposals and summarized them in [[Comparison of X3D AR Proposals]].<br />
Based on this comparison, the working group is now preparing [[Plans for Merging X3D AR Proposals]].<br />
The working group is looking for public feedback on the unified AR proposal that captures essential features for MR/AR visualization.<br />
The unified AR proposal for public review can be found [http://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review here].<br />
<br />
== X3D Earth Working Group ==<br />
The X3D Earth Working Group has expanded and refined a proposal by Dr. Myeong Won Lee for a new <br />
[http://www.web3d.org/membership/login/memberwiki/index.php/X3D_v3.3_Specification_Changes#GpsSensor_node GpsSensor node].<br />
Due to several overlapping technical issues, the group has asked to collaborate with the Augmented Reality group on final design for this node.<br />
<br />
= Interoperability with other AR and Web Standards =<br />
Several discussions at the Web3D Conference, SIGGRAPH and ISO-SC24 meeting about the recent AR standards developments continue to improve and refine our strategy on interoperability with other standards. <br />
<br />
Establishing common implementations and examples is important to demonstrate successful interoperable capabilities for the spec. Continued collaboration and reaching out to other AR standards groups is essential. Our common goal remains maximum interoperability with all Web standards. <br />
<br />
*W3C Augmented Reality Community Group <br />
*OGC ARML WG <br />
*AR Standards Community Group <br />
*ISO-SC24<br />
*KHRONOS <br />
<br />
Following recent work in the Web3D AR WG and the realization that the current status of AR content models is not comprehensive, the ISO Standards Committee SC24, which administers X3D as an ISO standard, has established a new Working Group for Augmented and Mixed Reality. <br />
<br />
This group conducted a survey of the current state of the art in AR/MR standardization. Here is a summary of the main findings: <br />
<br />
*A need for clear and precise definitions of terms <br />
*A need for a reference architecture with the following features: <br />
**Separation of the content and the browser/player/application <br />
**Extensible and general enough to accommodate future technologies (e.g. display devices, tracking algorithms, sensors, etc.) <br />
**Defined at the right abstraction level to be platform/vendor independent <br />
**Clear interface definitions among the subsystems <br />
**A proposal to develop a protocol between the AR/MR engine and the object tracking/recognition subsystem, independent of the algorithms used <br />
**Reuse of existing standards as much as possible (see below) <br />
*A content model, based on the underlying reference architecture, that is comprehensive (e.g. scene/world model, interaction, rich augmentation methods and styling options, representation of extensive types of physical real-world objects) <br />
*A need for a rich and sophisticated scene/world model <br />
**An X3D-based approach seems promising for providing a sophisticated world model (scene graph structure) and many media objects for augmentation; a proposal to extend the X3D standards <br />
*A need for representation of sensors and physical objects <br />
**A proposal for a merged abstraction of physical objects and separate sensors as “objects with virtual sensors”, extending the virtual sensors of X3D <br />
*A need for sophisticated representation of “places of interest (POI)” <br />
**A proposal to use and extend OGC/KML standards <br />
*A need for extensive styling of 2D/3D information <br />
**A proposal to use and extend HTML5 <br />
*A need to abstract AR/MR interaction behaviors <br />
**Complicated behaviors to be handled by scripts and a DOM-like approach <br />
**A proposal to extend X3DOM for this purpose <br />
*Needs for other supporting functionality: <br />
**Inclusion and specification of real-world capture cameras/sensors <br />
**Moving texture/background functionality for video see-through AR <br />
**Handling of depth data and occlusion effects <br />
**Specification of virtual/real light sources and rendering methods <br />
Based on these findings, the group proposes to derive an AR content model as an extension of a virtual world with provisions for representing physically-sensed objects. The provisions refer to ways to specify the physical augmentation “targets” without specific sensor information, and ways to (intuitively) tie or associate them with their virtual counterparts. This results in vendor independence, convenience of use, and support for extensibility. <br />
<br />
The ISO AR standardization proposal recommends: <br />
*Merging HTML and X3D (X3DOM / Declarative 3D) for abstract content components for 2D and 3D augmentation <br />
*OGC and KML for describing POIs and sensed physical objects <br />
*A scripting approach for non-standard complex content behaviors and the use of remote cloud services <br />
Current technical work within the Web3D AR WG includes harmonizing these proposals to best fit AR capabilities into X3D scenes. This work will be formally written up as the Augmented Reality (AR) Component of the X3D Specification.<br />
<br />
= Participation and Liaisons =<br />
* Christine Perey's AR Standards Community group<br />
* Other partnerships can also be considered as appropriate.<br />
<br />
Of interest is that Web3D Consortium Intellectual Property Rights (IPR) Policy insists on open, royalty free (RF) specifications. These coexist effectively with the Web Architecture and many different business models.</div>
<hr />
<div>= Calendar: Meetings and Events =<br />
<br />
== Meetings ==<br />
Our monthly teleconference meeting for X3D and Augmented Reality is usually:<br />
* 17:00-18:00 Pacific time on 3rd Wednesday, which is 10:00-11:00 Thursday in Korea and 02:00-03:00 Thursday in Europe.<br />
<br />
The schedule is subject to change based on the time zone for most expected attendees. The meeting is held on Web3D consortium teleconference line.<br />
<br />
Our next teleconference meeting is:<br />
* Mar 20 Wed 2013 at 17:00 (US Pacific) / Mar 21 Thu 10:00 (Korea) 2013<br />
<br />
<br />
== Public Review of AR Proposal ==<br />
We are now going under the process of collecting feedback on the AR proposal which the WG is working on. The public reviewing is scheduled to be held from Feb 27 until Apr 10, 2013.<br />
Details can be found at [http://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review this link].<br />
<br />
<br />
== Events ==<br />
* [http://www.perey.com/ARStandards/eighth-ar-standards-community-meeting AR Standards Community Meeting, March 1-2, 2013, Barcelona, Spain]<br />
* Web 3D and ISO/IEC JTC1 SC24 WG9 meetings, Jan 28-31, 2013, Seoul, Korea<br />
* AR Standards Community Meeting - November 8-9, 2012 - Atlanta, US.<br />
* ISO/IEC JTC1 SC24 Plenary and WG6, WG9 meetings, August 20-24, 2012, Brussels, Belgium <br />
* [http://s2012.siggraph.org/attendees/birds-feather Augmented and Mixed Reality Web3D BOF at SIGGRAPH 2012, August 8, 2012, Los Angeles, CA, USA]<br />
* [http://web3d2012.org Web3D Conference, August 4-5, 2012, Los Angeles, CA, USA]<br />
* OGC TP/PC Meeting - AR Working Group - Sept 19-23, 2011, Boulder, CO<br />
* W3C TPAC Meeting W3C AR Community Group - Oct 31- Nov 4, Santa Clara, CA<br />
* AR Standards Community Meeting - Oct 24, 25, 2011 - Basel, Switzerland<br />
* ISO JTC Meeting - Nov 7 - 10, 2011 - San Diego, CA<br />
* [http://www.perey.com/ARStandards/third-international-ar-standards-meeting/ Third International AR Standards Community Meeting, June 15-17, Taichung, Taiwan]<br />
* [http://web3d2011.org/ Augmented/Mixed Reality Workshop at Web3D Conference, 10:50AM-12:10PM Local Time, June 21, Paris, France]<br />
* [http://www.siggraph.org/s2011/for_attendees/birds-feather Augmented and Mixed Reality Web3D BOF at SIGGRAPH 2011, August 2011, Vancouver, Canada]<br />
* SC24 Augmented and Mixed Reality Study Group Meeting @ SC24 Plenary and Working Group Meetings, August 21 2011, Rapid City, South Dakota, USA<br />
<br />
<!-- [[Upcoming X3D events]] --><br />
<br />
= Charter =<br />
<br />
== Overview ==<br />
The Augmented Reality (AR) Working Group focuses on utilizing and extending X3D capabilities to support augmented reality (AR) and mixed reality (MR) applications.<br />
<br />
''Discussion.'' These X3D AR discussions were initially held as part of a special interest group. Now that we have determined sufficient exists to modify the X3D Specification, this effort is moving forward by forming the X3D AR Working Group.<br />
<br />
== Goals ==<br />
Planned goals of AR WG include:<br />
* Collect requirements and describe typical use cases for using X3D in AR/MR applications<br />
* Produce and propose X3D components for AR/MR scenes and applications<br />
* Produce sample AR/MR applications using X3D to demonstrate how this functionality can work correctly<br />
<br />
== Tasks ==<br />
* Investigate state-of-the-art technologies related to X3D and AR/MR to collect requirements and use cases<br />
** Archive and distribute collected requirements and use cases through AR WG wiki page<br />
* Hold regular meetings and workshops to motivate discussions and editing of AR/MR related X3D specification proposals<br />
** Regular meetings will be held through teleconferencing and workshops will be planned through regular meetings<br />
* Promote X3D in AR/MR field by developing and distributing promotional materials of AR applications based on X3D<br />
** Promotional materials include sample applications, video clips, documents, images distributed on the web<br />
<br />
== Deliverables and Timeline ==<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_AR_Requirements_and_Use_cases Use cases and requirements of X3D for AR/MR application] - August 2011<br />
* [http://web3d.org/x3d/wiki/index.php/Comparison_of_X3D_AR_Proposals Comparison of existing proposals] - March 2012<br />
* [http://web3d.org/x3d/wiki/index.php/Plans_for_Merging_X3D_AR_Proposals Merge X3D AR Proposals]<br />
* Proposed new/extended functions and nodes for X3D specification<br />
**[http://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review Public review: Feb 27 ~ Apr 10, 2013]<br />
* Define specification prose for new functionality and encodings<br />
* Sample AR/MR applications with X3D<br />
<br />
== Participants ==<br />
* Anita Havele<br />
* Don Brutzman<br />
* Gerard J. Kim<br />
* Gun Lee<br />
* Len Daly, Daly Realism<br />
* Myeongwon Lee<br />
* Oliver Neubauer<br />
* Sabine Webel<br />
* Timo Engelke<br />
* Yvonne Jung<br />
<br />
== Working Group Meeting Routine ==<br />
Regular meetings are held monthly via teleconference.<br />
Participation is open to everyone via the Web3D teleconference line. <br />
<br />
Non-members can [mailto:anita.havele@web3d.org?subject=AR%20teleconference%20request request access information] for this call, or [http://web3d.org/membership/join Join Web3D] for full member priveleges!<br />
<br />
Meeting agenda and minutes are also distributed on the [mailto:x3d-public@web3d.org?subject=X3D%20AR%20Working%20Group x3d-public@web3d.org] mailing list and [http://www.web3d.org/membership/login/list_archives archived online].<br />
<br />
= Augmented Reality Roadmap for X3D =<br />
The [[Augmented Reality Roadmap for X3D]] is a description document charting shared strategies and our way forward.<br />
<br />
<br />
= History and Background Information =<br />
Web3D Consortium formed a special interest group on AR initiatives in July 2009 worked to help create the AR Technology Road Map.<br />
<br />
Several Web3D Consortium member projects showcase the feasibility of AR in X3D, particularly [http://www.x3dom.org X3DOM] open source produced by [http://www.igd.fhg.de/www/igd-a4 Fraunhofer IGD]. <br />
<br />
* X3DOM can serve as an out of the box, standards-based solution for AR developers.<br />
* X3D and X3DOM continue to improve scripting options for X3D with HTML5 and this has good promise for AR applications. <br />
* X3DOM is being considered for potential standardization in a Mobile and/or Augmented Reality (AR) profile for X3D.<br />
<br />
Our [http://www.web3d.kr Web3D Korea Chapter members] from ETRI are working on Mixed Reality visualization in X3D.<br />
<br />
* The Consortium as been working closely within W3C HTML5 WG to align our standards for 3D visualization on the Web. <br />
<br />
Additional details are available at: <br />
<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_and_HTML5 X3D and HTML5]<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_and_HTML_FAQ X3D and HTML FAQ]<br />
* [http://www.x3dom.org X3DOM]<br />
<br />
Web3D Consortium is also engaged with the [http://www.arstandards.org AR Standards Community], [http://opengeospacial.org OGC], [http://www.khronos.org Khronos] and [http://w3c.org W3C] organizations for applying and adapting X3D.<br />
<br />
* Web3D Consortium will be starting an AR working group in March 2011 to develop a roadmap for AR standardization. We encourage your participation. <br />
* The technology discussions and meetings will be conducted publicly so that the widest possible comment and reaction can be considered from the Community. Feedback from this community will help X3D quickly and stably adopt new technologies to provide ongoing archival value for all 3D graphics.<br />
* Meanwhile Web3D Members still retain important membership rights of proposing significant new technology and considering patented technologies within a "safe haven" prior to public release.<br />
<br />
[http://www.web3d.org/x3d/wiki/images/3/32/X3dProfilePossibilitiesMobileHtml5AR.2010June29.pdf X3D Mobile Profile slideset]<br />
from last summer's Mobile X3D ISO Workshop has also been<br />
linked: how Mobile, HTML5 and possibly Augmented Reality (AR) components<br />
can be aligned together.<br />
<br />
Many new Web3D capabilities are becoming available. There has been no better time to discuss X3D technologies and also join the [http://web3d.org/membersip/join Web3D Consortium]. Get involved in these early discussions to create the AR Technology Road Map. We look forward to your participation.<br />
<br />
[http://www.web3d.org/wiki/index.php/Summary_Of_Old_AR_Proposals Summary of Old AR Proposals]<br />
<br />
== Developing X3D AR Specification - Proposals ==<br />
The working group has reviewed the existing proposals and have summarized in [[Comparison of X3D AR Proposals]].<br />
Based on this comparison, the working group is now preparing [[Plans for Merging X3D AR Proposals]].<br />
The working group is looking for public feedback on the unified AR proposal that captures essential features for MR/AR visualization.<br />
The unified AR proposal for public review can be found [http://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review here].<br />
<br />
== X3D Earth Working Group ==<br />
The X3D Earth Working Group has expanded and refined a proposal by Dr. Myeong Won Lee for a new <br />
[http://www.web3d.org/membership/login/memberwiki/index.php/X3D_v3.3_Specification_Changes#GpsSensor_node GpsSensor node].<br />
Due to several overlapping technical issues, the group has asked to collaborate with the Augmented Reality group on final design for this node.<br />
<br />
= Interoperability with other AR and Web Standards =<br />
Several discussions at the Web3D Conference, SIGGRAPH and ISO-SC24 meeting about the recent AR standards developments continue to improve and refine our strategy on interoperability with other standards. <br />
<br />
Establishing common implementations and examples is important to demonstrate successful interoperable capabilities for the spec. Continued collaboration and reaching out to other AR standards groups is essential. Our common goal remains maximum interoperability with all Web standards. <br />
<br />
*W3C Augmented Reality Community Group <br />
*OGC ARML WG <br />
*AR Standards Community Group <br />
*ISO-SC24<br />
*KHRONOS <br />
<br />
The recent work in the Web3D AR WG and the realization that the Current status of AR content models is not comprehensive, the ISO Standards Committee - SC24, which administers X3D review as an ISO standard has established a new Working Group for Augmented and Mixed Reality. <br />
<br />
This Group conducted a survey of the current state of the art in AR/MR standardization, Here is a summary of the main findings. <br />
<br />
*A need for clear and precise definitions of terms <br />
*A need for a reference architecture with the following features: <br />
**Separation of the content from the browser/player/application <br />
**Extensible and general enough to accommodate new future technologies (e.g. display devices, tracking algorithms, sensors) <br />
**Defined at the right abstraction level to be platform/vendor independent <br />
**Clear interface definitions among the subsystems <br />
*A proposal to develop a protocol between the AR/MR engine and the object tracking/recognition subsystem, independent of the algorithms used <br />
*Reuse of existing standards as much as possible (see below) <br />
*A content model, based on the underlying reference architecture, that is comprehensive (e.g. scene/world model, interaction, rich augmentation methods and styling options, representation of extensive types of physical real-world objects) <br />
*A need for a rich and sophisticated scene/world model <br />
**An X3D-based approach seems promising for providing a sophisticated world model (scene graph structure) and many media objects for augmentation; a proposal to extend the X3D standards <br />
*A need for representation of sensors and physical objects <br />
**A proposal to merge the abstractions of physical objects and separate sensors as “objects with virtual sensors” and to extend the virtual sensors of X3D <br />
*A need for sophisticated representation of “places of interest” (POI) <br />
**A proposal to use and extend OGC/KML standards <br />
*A need for extensive styling of 2D/3D information <br />
**A proposal to use and extend HTML5 <br />
*A need to abstract AR/MR interaction behaviors <br />
**Complicated behaviors to be handled by scripts and a DOM-like approach <br />
**A proposal to extend X3DOM for this purpose <br />
*A need for other supporting functionality: <br />
**Inclusion and specification of real-world capture cameras/sensors <br />
**Moving texture/background functionality for video see-through AR <br />
**Handling of depth data and occlusion effects <br />
**Specification of virtual/real light sources and rendering methods <br />
Based on these findings, the group proposes to derive an AR content model as an extension of a virtual world, with provisions for representing physically-sensed objects. The provisions refer to ways to specify the physical augmentation “targets” without sensor-specific information, and ways to (intuitively) tie or associate them to their virtual counterparts. This results in vendor independence, convenience of use, and support for extensibility. <br />
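<br />
As a concrete illustration of this content model, consider the following sketch in the X3D XML encoding. The PhysicalTarget node and its fields are hypothetical names invented here to show the idea; only the ROUTE pattern for associating a sensed pose with virtual content is standard X3D.<br />
<pre>
<!-- Hypothetical sketch: PhysicalTarget and its fields are invented
     for illustration and appear in no published specification. -->
<Scene>
  <!-- The real-world object is described abstractly; no tracker,
       camera, or algorithm is named, preserving vendor independence. -->
  <PhysicalTarget DEF='BOOK' description='hardcover book, A4 size'/>

  <!-- The virtual counterpart that augments the sensed object. -->
  <Transform DEF='LABEL'>
    <Shape>
      <Appearance><Material emissiveColor='0 1 0'/></Appearance>
      <Text string='"Book of interest"'/>
    </Shape>
  </Transform>

  <!-- Tie the sensed pose of the physical target to its virtual
       counterpart, following the usual X3D sensor routing pattern. -->
  <ROUTE fromNode='BOOK' fromField='position_changed'
         toNode='LABEL' toField='set_translation'/>
  <ROUTE fromNode='BOOK' fromField='orientation_changed'
         toNode='LABEL' toField='set_rotation'/>
</Scene>
</pre>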
<br />
The ISO AR standardization proposal recommends: <br />
*Merging HTML and X3D (X3DOM, Declarative 3D) for abstract content components for 2D and 3D augmentation <br />
*OGC KML for describing POIs and sensed physical objects <br />
*A scripting approach for non-standard complex content behaviors and for the use of remote cloud services <br />
Current technical work within the Web3D AR WG includes harmonizing these proposals to best fit AR capabilities into X3D scenes. This work will be formally written up as the Augmented Reality (AR) Component of the X3D Specification.<br />
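<br />
To make the recommended HTML/X3D merge concrete, here is a minimal X3DOM page. The script and stylesheet paths follow the layout commonly published at x3dom.org and may differ between releases; the scene itself is ordinary declarative markup living in the HTML DOM.<br />
<pre>
<!DOCTYPE html>
<html>
  <head>
    <!-- X3DOM runtime; these download paths are the commonly
         published ones and may vary by release. -->
    <script src='http://www.x3dom.org/download/x3dom.js'></script>
    <link rel='stylesheet' href='http://www.x3dom.org/download/x3dom.css'/>
  </head>
  <body>
    <!-- A declarative X3D scene in the page DOM, scriptable and
         styleable like any other HTML element. -->
    <x3d width='400px' height='300px'>
      <scene>
        <shape>
          <appearance><material diffuseColor='0 0.5 1'></material></appearance>
          <box></box>
        </shape>
      </scene>
    </x3d>
  </body>
</html>
</pre>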
<br />
= Participation and Liaisons =<br />
* Christine Perey's group on AR Standardization<br />
* Other partnerships can also be considered as appropriate.<br />
<br />
Of note, the Web3D Consortium Intellectual Property Rights (IPR) Policy insists on open, royalty-free (RF) specifications. These coexist effectively with the Web Architecture and with many different business models.</div>
<hr />
<div>= Calendar: Meetings and Events =<br />
<br />
== Meetings ==<br />
Our monthly teleconference meeting for X3D and Augmented Reality is usually:<br />
* 17:00-18:00 Pacific time on 3rd Wednesday, which is 10:00-11:00 Thursday in Korea and 02:00-03:00 Thursday in Europe.<br />
<br />
The schedule is subject to change based on the time zone for most expected attendees. The meeting is held on Web3D consortium teleconference line.<br />
<br />
Our next teleconference meeting is:<br />
* Mar 20 Wed 2013 at 17:00 (US Pacific) / Mar 21 Thu 10:00 (Korea) 2013<br />
<br />
<br />
== Public Review of AR Proposal ==<br />
We are now going under to process of collecting feedback on the AR proposal which the WG is working on. The public reviewing is scheduled to be held from Feb 27 until Apr 10, 2013.<br />
Details can be found on [http://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review this link].<br />
<br />
<br />
== Events ==<br />
* [http://www.perey.com/ARStandards/eighth-ar-standards-community-meeting AR Standards Community Meeting, March 1-2, 2013, Barcelona, Spain]<br />
* Web 3D and ISO/IEC JTC1 SC24 WG9 meetings, Jan 28-31, 2013, Seoul, Korea<br />
* AR Standards Community Meeting - November 8-9, 2012 - Atlanta, US.<br />
* ISO/IEC JTC1 SC24 Plenary and WG6, WG9 meetings, August 20-24, 2012, Brussels, Belgium <br />
* [http://s2012.siggraph.org/attendees/birds-feather Augmented and Mixed Reality Web3D BOF at SIGGRAPH 2012, August 8, 2012, Los Angeles, CA, USA]<br />
* [http://web3d2012.org Web3D Conference, August 4-5, 2012, Los Angeles, CA, USA]<br />
* OGC TP/PC Meeting - AR Working Group - Sept 19-23, 2011, Boulder, CO<br />
* W3C TPAC Meeting W3C AR Community Group - Oct 31- Nov 4, Santa Clara, CA<br />
* AR Standards Community Meeting - Oct 24, 25, 2011 - Basel, Switzerland<br />
* ISO JTC Meeting - Nov 7 - 10, 2011 - San Diego, CA<br />
* [http://www.perey.com/ARStandards/third-international-ar-standards-meeting/ Third International AR Standards Community Meeting, June 15-17, Taichung, Taiwan]<br />
* [http://web3d2011.org/ Augmented/Mixed Reality Workshop at Web3D Conference, 10:50AM-12:10PM Local Time, June 21, Paris, France]<br />
* [http://www.siggraph.org/s2011/for_attendees/birds-feather Augmented and Mixed Reality Web3D BOF at SIGGRAPH 2011, August 2011, Vancouver, Canada]<br />
* SC24 Augmented and Mixed Reality Study Group Meeting @ SC24 Plenary and Working Group Meetings, August 21 2011, Rapid City, South Dakota, USA<br />
<br />
<!-- [[Upcoming X3D events]] --><br />
<br />
= Charter =<br />
<br />
== Overview ==<br />
The Augmented Reality (AR) Working Group focuses on utilizing and extending X3D capabilities to support augmented reality (AR) and mixed reality (MR) applications.<br />
<br />
''Discussion.'' These X3D AR discussions were initially held as part of a special interest group. Now that we have determined sufficient exists to modify the X3D Specification, this effort is moving forward by forming the X3D AR Working Group.<br />
<br />
== Goals ==<br />
Planned goals of AR WG include:<br />
* Collect requirements and describe typical use cases for using X3D in AR/MR applications<br />
* Produce and propose X3D components for AR/MR scenes and applications<br />
* Produce sample AR/MR applications using X3D to demonstrate how this functionality can work correctly<br />
<br />
== Tasks ==<br />
* Investigate state-of-the-art technologies related to X3D and AR/MR to collect requirements and use cases<br />
** Archive and distribute collected requirements and use cases through AR WG wiki page<br />
* Hold regular meetings and workshops to motivate discussions and editing of AR/MR related X3D specification proposals<br />
** Regular meetings will be held through teleconferencing and workshops will be planned through regular meetings<br />
* Promote X3D in AR/MR field by developing and distributing promotional materials of AR applications based on X3D<br />
** Promotional materials include sample applications, video clips, documents, images distributed on the web<br />
<br />
== Deliverables and Timeline ==<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_AR_Requirements_and_Use_cases Use cases and requirements of X3D for AR/MR application] - August 2011<br />
* [http://web3d.org/x3d/wiki/index.php/Comparison_of_X3D_AR_Proposals Comparison of existing proposals] - March 2012<br />
* [http://web3d.org/x3d/wiki/index.php/Plans_for_Merging_X3D_AR_Proposals Merge X3D AR Proposals]<br />
* Proposed new/extended functions and nodes for X3D specification<br />
**[http://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review Public review: Feb 27 ~ Apr 10, 2013]<br />
* Define specification prose for new functionality and encodings<br />
* Sample AR/MR applications with X3D<br />
<br />
== Participants ==<br />
* Anita Havele<br />
* Don Brutzman<br />
* Gerard J. Kim<br />
* Gun Lee<br />
* Len Daly, Daly Realism<br />
* Myeongwon Lee<br />
* Oliver Neubauer<br />
* Sabine Webel<br />
* Timo Engelke<br />
* Yvonne Jung<br />
<br />
== Working Group Meeting Routine ==<br />
Regular meetings are held monthly via teleconference.<br />
Participation is open to everyone via the Web3D teleconference line. <br />
<br />
Non-members can [mailto:anita.havele@web3d.org?subject=AR%20teleconference%20request request access information] for this call, or [http://web3d.org/membership/join Join Web3D] for full member priveleges!<br />
<br />
Meeting agenda and minutes are also distributed on the [mailto:x3d-public@web3d.org?subject=X3D%20AR%20Working%20Group x3d-public@web3d.org] mailing list and [http://www.web3d.org/membership/login/list_archives archived online].<br />
<br />
= Augmented Reality Roadmap for X3D =<br />
The [[Augmented Reality Roadmap for X3D]] is a description document charting shared strategies and our way forward.<br />
<br />
<br />
= History and Background Information =<br />
Web3D Consortium formed a special interest group on AR initiatives in July 2009 worked to help create the AR Technology Road Map.<br />
<br />
Several Web3D Consortium member projects showcase the feasibility of AR in X3D, particularly [http://www.x3dom.org X3DOM] open source produced by [http://www.igd.fhg.de/www/igd-a4 Fraunhofer IGD]. <br />
<br />
* X3DOM can serve as an out of the box, standards-based solution for AR developers.<br />
* X3D and X3DOM continue to improve scripting options for X3D with HTML5 and this has good promise for AR applications. <br />
* X3DOM is being considered for potential standardization in a Mobile and/or Augmented Reality (AR) profile for X3D.<br />
<br />
Our [http://www.web3d.kr Web3D Korea Chapter members] from ETRI are working on Mixed Reality visualization in X3D.<br />
<br />
* The Consortium as been working closely within W3C HTML5 WG to align our standards for 3D visualization on the Web. <br />
<br />
Additional details are available at: <br />
<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_and_HTML5 X3D and HTML5]<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_and_HTML_FAQ X3D and HTML FAQ]<br />
* [http://www.x3dom.org X3DOM]<br />
<br />
Web3D Consortium is also engaged with the [http://www.arstandards.org AR Standards Community], [http://opengeospacial.org OGC], [http://www.khronos.org Khronos] and [http://w3c.org W3C] organizations for applying and adapting X3D.<br />
<br />
* Web3D Consortium will be starting an AR working group in March 2011 to develop a roadmap for AR standardization. We encourage your participation. <br />
* The technology discussions and meetings will be conducted publicly so that the widest possible comment and reaction can be considered from the Community. Feedback from this community will help X3D quickly and stably adopt new technologies to provide ongoing archival value for all 3D graphics.<br />
* Meanwhile Web3D Members still retain important membership rights of proposing significant new technology and considering patented technologies within a "safe haven" prior to public release.<br />
<br />
[http://www.web3d.org/x3d/wiki/images/3/32/X3dProfilePossibilitiesMobileHtml5AR.2010June29.pdf X3D Mobile Profile slideset]<br />
from last summer's Mobile X3D ISO Workshop has also been<br />
linked: how Mobile, HTML5 and possibly Augmented Reality (AR) components<br />
can be aligned together.<br />
<br />
Many new Web3D capabilities are becoming available. There has been no better time to discuss X3D technologies and also join the [http://web3d.org/membersip/join Web3D Consortium]. Get involved in these early discussions to create the AR Technology Road Map. We look forward to your participation.<br />
<br />
[http://www.web3d.org/wiki/index.php/Summary_Of_Old_AR_Proposals Summary of Old AR Proposals]<br />
<br />
== Developing X3D AR Specification - Proposals ==<br />
The working group has reviewed the existing proposals and have summarized in [[Comparison of X3D AR Proposals]].<br />
Based on this comparison, the working group is now preparing [[Plans for Merging X3D AR Proposals]].<br />
The working group is looking for public feedback on the unified AR proposal that captures essential features for MR/AR visualization.<br />
The unified AR proposal for public review can be found [http://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review here].<br />
<br />
== X3D Earth Working Group ==<br />
The X3D Earth Working Group has expanded and refined a proposal by Dr. Myeong Won Lee for a new <br />
[http://www.web3d.org/membership/login/memberwiki/index.php/X3D_v3.3_Specification_Changes#GpsSensor_node GpsSensor node].<br />
Due to several overlapping technical issues, the group has asked to collaborate with the Augmented Reality group on final design for this node.<br />
<br />
= Interoperability with other AR and Web Standards =<br />
Several discussions at the Web3D Conference, SIGGRAPH and ISO-SC24 meeting about the recent AR standards developments continue to improve and refine our strategy on interoperability with other standards. <br />
<br />
Establishing common implementations and examples is important to demonstrate successful interoperable capabilities for the spec. Continued collaboration and reaching out to other AR standards groups is essential. Our common goal remains maximum interoperability with all Web standards. <br />
<br />
*W3C Augmented Reality Community Group <br />
*OGC ARML WG <br />
*AR Standards Community Group <br />
*ISO-SC24<br />
*KHRONOS <br />
<br />
The recent work in the Web3D AR WG and the realization that the Current status of AR content models is not comprehensive, the ISO Standards Committee - SC24, which administers X3D review as an ISO standard has established a new Working Group for Augmented and Mixed Reality. <br />
<br />
This Group conducted a survey of the current state of the art in AR/MR standardization, Here is a summary of the main findings. <br />
<br />
*A need for making clear and precise definition of terms <br />
*A need for a reference architecture with the following feature <br />
*Separation of the content and browser/player/application <br />
*Extendible and general enough to accommodate new future technologies (e.g. display devices, tracking algorithms, sensors, etc.) <br />
*Defined at the right abstraction level to be platform/vendor independent <br />
*Clear interface definition among the subsystems <br />
*A proposal to develop a protocol between AR/MR engine and the object tracking/recognition subsystem independently from the algorithms used <br />
*Reusing of existing standards as much as possible (see below) <br />
*A content model based on the underlying reference architecture that is <br />
*Comprehensive (e.g. scene/world model, interaction, rich augmentation methods and styling options, representation of extensive types of physical real world objects) <br />
*A need for rich and sophisticated scene/world model <br />
*X3D-based approach seems promising for providing a sophisticated world model (scene graph structure) and many media objects for augmentation. A proposal to extend X3D standards <br />
*A need for representation of sensors and physical objects <br />
*A proposal for a merged abstraction of physical objects and separate sensors as “objects with virtual sensors” and extend virtual sensors of X3D <br />
*A need for sophisticated representation of “places of interests (POI)” <br />
*A proposal to use and extend OGC/KML standards <br />
*A need for extensive styling for 2D/3D information <br />
*A proposal to use and extend HTML 5 <br />
*A need to abstract AR/MR interaction behaviors <br />
*Complicated behaviors to be handled by scripts and DOM like approach <br />
*A proposal to extend X3DOM for this purpose <br />
*Needs other supporting functionalities <br />
*Inclusion and specification of real world capture camera/sensors <br />
*Moving texture/background functionality for video see through AR <br />
*Handling of depth data and occlusion effects <br />
*Specification of virtual/real light sources and rendering methods <br />
Based on these findings the group proposes to derive a AR content model as an extension of a virtual world with provisions for representing the physically-sensed objects. The provisions refer to ways to specify the physical augmentation “targets” without specific sensor information and ways to (intuitively) tie or associate them to their virtual counterparts. This will result in vendor independence, use convenience and support for extensibility. <br />
<br />
The ISO AR standardization proposal recommends <br />
*Merging HTML and X3D (X3dom (Declarative 3D) for abstract content components for 2D and 3D Augmentation. <br />
*The OGC and K-Mart for describing POIs and sensed physical objects. <br />
*The Scripting approach for non-standard complex content behaviors and the use of remote cloud services. <br />
Current technical work within the Web3D AR WG includes harmonizing these proposals for best fitting AR capabilities into X3D scenes. This work will be formally written up as the Augmented Reality (AR) Component for the X3D Specification.<br />
<br />
= Participation and Liaisons =<br />
* Christine Perry's group on AR Standardization<br />
* Other partnerships can also be considered as appropriate.<br />
<br />
Of interest is that Web3D Consortium Intellectual Property Rights (IPR) Policy insists on open, royalty free (RF) specifications. These coexist effectively with the Web Architecture and many different business models.</div>Endoverthttps://www.web3d.org/wiki/index.php?title=X3D_and_Augmented_Reality&diff=6659X3D and Augmented Reality2013-02-27T01:25:48Z<p>Endovert: </p>
<hr />
<div>= Calendar: Meetings and Events =<br />
<br />
== Meetings ==<br />
Our monthly teleconference meeting for X3D and Augmented Reality is usually:<br />
* 17:00-18:00 Pacific time on 3rd Wednesday, which is 10:00-11:00 Thursday in Korea and 02:00-03:00 Thursday in Europe.<br />
<br />
The schedule is subject to change based on the time zone for most expected attendees. The meeting is held on Web3D consortium teleconference line.<br />
<br />
Our next teleconference meeting is:<br />
* Mar 20 Wed 2013 at 17:00 (US Pacific) / Mar 21 Thu 10:00 (Korea) 2013<br />
<br />
<br />
== Public Review of AR Proposal ==<br />
We are now going under to process of collecting feedback on the AR proposal which the WG is working on. The public reviewing is scheduled to be held from Feb 27 until Apr 10, 2013.<br />
Details can be found on [http://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review this page].<br />
<br />
<br />
== Events ==<br />
* [http://www.perey.com/ARStandards/eighth-ar-standards-community-meeting AR Standards Community Meeting, March 1-2, 2013, Barcelona, Spain]<br />
* Web 3D and ISO/IEC JTC1 SC24 WG9 meetings, Jan 28-31, 2013, Seoul, Korea<br />
* AR Standards Community Meeting - November 8-9, 2012 - Atlanta, US.<br />
* ISO/IEC JTC1 SC24 Plenary and WG6, WG9 meetings, August 20-24, 2012, Brussels, Belgium <br />
* [http://s2012.siggraph.org/attendees/birds-feather Augmented and Mixed Reality Web3D BOF at SIGGRAPH 2012, August 8, 2012, Los Angeles, CA, USA]<br />
* [http://web3d2012.org Web3D Conference, August 4-5, 2012, Los Angeles, CA, USA]<br />
* OGC TP/PC Meeting - AR Working Group - Sept 19-23, 2011, Boulder, CO<br />
* W3C TPAC Meeting W3C AR Community Group - Oct 31- Nov 4, Santa Clara, CA<br />
* AR Standards Community Meeting - Oct 24, 25, 2011 - Basel, Switzerland<br />
* ISO JTC Meeting - Nov 7 - 10, 2011 - San Diego, CA<br />
* [http://www.perey.com/ARStandards/third-international-ar-standards-meeting/ Third International AR Standards Community Meeting, June 15-17, Taichung, Taiwan]<br />
* [http://web3d2011.org/ Augmented/Mixed Reality Workshop at Web3D Conference, 10:50AM-12:10PM Local Time, June 21, Paris, France]<br />
* [http://www.siggraph.org/s2011/for_attendees/birds-feather Augmented and Mixed Reality Web3D BOF at SIGGRAPH 2011, August 2011, Vancouver, Canada]<br />
* SC24 Augmented and Mixed Reality Study Group Meeting @ SC24 Plenary and Working Group Meetings, August 21 2011, Rapid City, South Dakota, USA<br />
<br />
<!-- [[Upcoming X3D events]] --><br />
<br />
= Charter =<br />
<br />
== Overview ==<br />
The Augmented Reality (AR) Working Group focuses on utilizing and extending X3D capabilities to support augmented reality (AR) and mixed reality (MR) applications.<br />
<br />
''Discussion.'' These X3D AR discussions were initially held as part of a special interest group. Now that we have determined sufficient exists to modify the X3D Specification, this effort is moving forward by forming the X3D AR Working Group.<br />
<br />
== Goals ==<br />
Planned goals of AR WG include:<br />
* Collect requirements and describe typical use cases for using X3D in AR/MR applications<br />
* Produce and propose X3D components for AR/MR scenes and applications<br />
* Produce sample AR/MR applications using X3D to demonstrate how this functionality can work correctly<br />
<br />
== Tasks ==<br />
* Investigate state-of-the-art technologies related to X3D and AR/MR to collect requirements and use cases<br />
** Archive and distribute collected requirements and use cases through AR WG wiki page<br />
* Hold regular meetings and workshops to motivate discussions and editing of AR/MR related X3D specification proposals<br />
** Regular meetings will be held through teleconferencing and workshops will be planned through regular meetings<br />
* Promote X3D in AR/MR field by developing and distributing promotional materials of AR applications based on X3D<br />
** Promotional materials include sample applications, video clips, documents, images distributed on the web<br />
<br />
== Deliverables and Timeline ==<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_AR_Requirements_and_Use_cases Use cases and requirements of X3D for AR/MR application] - August 2011<br />
* [http://web3d.org/x3d/wiki/index.php/Comparison_of_X3D_AR_Proposals Comparison of existing proposals] - March 2012<br />
* [http://web3d.org/x3d/wiki/index.php/Plans_for_Merging_X3D_AR_Proposals Merge X3D AR Proposals]<br />
* Proposed new/extended functions and nodes for X3D specification<br />
**[http://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review Public review: Feb 27 ~ Apr 10, 2013]<br />
* Define specification prose for new functionality and encodings<br />
* Sample AR/MR applications with X3D<br />
<br />
== Participants ==<br />
* Anita Havele<br />
* Don Brutzman<br />
* Gerard J. Kim<br />
* Gun Lee<br />
* Len Daly, Daly Realism<br />
* Myeongwon Lee<br />
* Oliver Neubauer<br />
* Sabine Webel<br />
* Timo Engelke<br />
* Yvonne Jung<br />
<br />
== Working Group Meeting Routine ==<br />
Regular meetings are held monthly via teleconference.<br />
Participation is open to everyone via the Web3D teleconference line. <br />
<br />
Non-members can [mailto:anita.havele@web3d.org?subject=AR%20teleconference%20request request access information] for this call, or [http://web3d.org/membership/join Join Web3D] for full member priveleges!<br />
<br />
Meeting agenda and minutes are also distributed on the [mailto:x3d-public@web3d.org?subject=X3D%20AR%20Working%20Group x3d-public@web3d.org] mailing list and [http://www.web3d.org/membership/login/list_archives archived online].<br />
<br />
= Augmented Reality Roadmap for X3D =<br />
The [[Augmented Reality Roadmap for X3D]] is a description document charting shared strategies and our way forward.<br />
<br />
<br />
= History and Background Information =<br />
Web3D Consortium formed a special interest group on AR initiatives in July 2009 worked to help create the AR Technology Road Map.<br />
<br />
Several Web3D Consortium member projects showcase the feasibility of AR in X3D, particularly [http://www.x3dom.org X3DOM] open source produced by [http://www.igd.fhg.de/www/igd-a4 Fraunhofer IGD]. <br />
<br />
* X3DOM can serve as an out of the box, standards-based solution for AR developers.<br />
* X3D and X3DOM continue to improve scripting options for X3D with HTML5 and this has good promise for AR applications. <br />
* X3DOM is being considered for potential standardization in a Mobile and/or Augmented Reality (AR) profile for X3D.<br />
<br />
Our [http://www.web3d.kr Web3D Korea Chapter members] from ETRI are working on Mixed Reality visualization in X3D.<br />
<br />
* The Consortium as been working closely within W3C HTML5 WG to align our standards for 3D visualization on the Web. <br />
<br />
Additional details are available at: <br />
<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_and_HTML5 X3D and HTML5]<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_and_HTML_FAQ X3D and HTML FAQ]<br />
* [http://www.x3dom.org X3DOM]<br />
<br />
Web3D Consortium is also engaged with the [http://www.arstandards.org AR Standards Community], [http://opengeospacial.org OGC], [http://www.khronos.org Khronos] and [http://w3c.org W3C] organizations for applying and adapting X3D.<br />
<br />
* Web3D Consortium will be starting an AR working group in March 2011 to develop a roadmap for AR standardization. We encourage your participation. <br />
* The technology discussions and meetings will be conducted publicly so that the widest possible comment and reaction can be considered from the Community. Feedback from this community will help X3D quickly and stably adopt new technologies to provide ongoing archival value for all 3D graphics.<br />
* Meanwhile Web3D Members still retain important membership rights of proposing significant new technology and considering patented technologies within a "safe haven" prior to public release.<br />
<br />
[http://www.web3d.org/x3d/wiki/images/3/32/X3dProfilePossibilitiesMobileHtml5AR.2010June29.pdf X3D Mobile Profile slideset]<br />
from last summer's Mobile X3D ISO Workshop has also been<br />
linked: how Mobile, HTML5 and possibly Augmented Reality (AR) components<br />
can be aligned together.<br />
<br />
Many new Web3D capabilities are becoming available. There has been no better time to discuss X3D technologies and also join the [http://web3d.org/membersip/join Web3D Consortium]. Get involved in these early discussions to create the AR Technology Road Map. We look forward to your participation.<br />
<br />
[http://www.web3d.org/wiki/index.php/Summary_Of_Old_AR_Proposals Summary of Old AR Proposals]<br />
<br />
== Developing X3D AR Specification - Proposals ==<br />
The working group has reviewed the existing proposals and have summarized in [[Comparison of X3D AR Proposals]].<br />
Based on this comparison, the working group is now preparing [[Plans for Merging X3D AR Proposals]].<br />
The working group is looking for public feedback on the unified AR proposal that captures essential features for MR/AR visualization.<br />
The unified AR proposal for public review can be found [http://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review here].<br />
<br />
== X3D Earth Working Group ==<br />
The X3D Earth Working Group has expanded and refined a proposal by Dr. Myeong Won Lee for a new <br />
[http://www.web3d.org/membership/login/memberwiki/index.php/X3D_v3.3_Specification_Changes#GpsSensor_node GpsSensor node].<br />
Due to several overlapping technical issues, the group has asked to collaborate with the Augmented Reality group on final design for this node.<br />
<br />
= Interoperability with other AR and Web Standards =<br />
Several discussions at the Web3D Conference, SIGGRAPH and ISO-SC24 meeting about the recent AR standards developments continue to improve and refine our strategy on interoperability with other standards. <br />
<br />
Establishing common implementations and examples is important to demonstrate successful interoperable capabilities for the spec. Continued collaboration and reaching out to other AR standards groups is essential. Our common goal remains maximum interoperability with all Web standards. <br />
<br />
*W3C Augmented Reality Community Group <br />
*OGC ARML WG <br />
*AR Standards Community Group <br />
*ISO-SC24<br />
*KHRONOS <br />
<br />
The recent work in the Web3D AR WG and the realization that the Current status of AR content models is not comprehensive, the ISO Standards Committee - SC24, which administers X3D review as an ISO standard has established a new Working Group for Augmented and Mixed Reality. <br />
<br />
This Group conducted a survey of the current state of the art in AR/MR standardization, Here is a summary of the main findings. <br />
<br />
*A need for making clear and precise definition of terms <br />
*A need for a reference architecture with the following feature <br />
*Separation of the content and browser/player/application <br />
*Extendible and general enough to accommodate new future technologies (e.g. display devices, tracking algorithms, sensors, etc.) <br />
*Defined at the right abstraction level to be platform/vendor independent <br />
*Clear interface definition among the subsystems <br />
*A proposal to develop a protocol between AR/MR engine and the object tracking/recognition subsystem independently from the algorithms used <br />
*Reusing of existing standards as much as possible (see below) <br />
*A content model based on the underlying reference architecture that is <br />
*Comprehensive (e.g. scene/world model, interaction, rich augmentation methods and styling options, representation of extensive types of physical real world objects) <br />
*A need for rich and sophisticated scene/world model <br />
*X3D-based approach seems promising for providing a sophisticated world model (scene graph structure) and many media objects for augmentation. A proposal to extend X3D standards <br />
*A need for representation of sensors and physical objects <br />
*A proposal for a merged abstraction of physical objects and separate sensors as “objects with virtual sensors” and extend virtual sensors of X3D <br />
*A need for sophisticated representation of “places of interests (POI)” <br />
*A proposal to use and extend OGC/KML standards <br />
*A need for extensive styling for 2D/3D information <br />
*A proposal to use and extend HTML 5 <br />
*A need to abstract AR/MR interaction behaviors <br />
*Complicated behaviors to be handled by scripts and DOM like approach <br />
*A proposal to extend X3DOM for this purpose <br />
*Needs other supporting functionalities <br />
*Inclusion and specification of real world capture camera/sensors <br />
*Moving texture/background functionality for video see through AR <br />
*Handling of depth data and occlusion effects <br />
*Specification of virtual/real light sources and rendering methods <br />
Based on these findings the group proposes to derive a AR content model as an extension of a virtual world with provisions for representing the physically-sensed objects. The provisions refer to ways to specify the physical augmentation “targets” without specific sensor information and ways to (intuitively) tie or associate them to their virtual counterparts. This will result in vendor independence, use convenience and support for extensibility. <br />
<br />
The ISO AR standardization proposal recommends <br />
*Merging HTML and X3D (X3dom (Declarative 3D) for abstract content components for 2D and 3D Augmentation. <br />
*The OGC and K-Mart for describing POIs and sensed physical objects. <br />
*The Scripting approach for non-standard complex content behaviors and the use of remote cloud services. <br />
Current technical work within the Web3D AR WG includes harmonizing these proposals for best fitting AR capabilities into X3D scenes. This work will be formally written up as the Augmented Reality (AR) Component for the X3D Specification.<br />
<br />
= Participation and Liaisons =<br />
* Christine Perry's group on AR Standardization<br />
* Other partnerships can also be considered as appropriate.<br />
<br />
Of interest is that Web3D Consortium Intellectual Property Rights (IPR) Policy insists on open, royalty free (RF) specifications. These coexist effectively with the Web Architecture and many different business models.</div>Endoverthttps://www.web3d.org/wiki/index.php?title=X3D_and_Augmented_Reality&diff=6658X3D and Augmented Reality2013-02-27T01:24:47Z<p>Endovert: </p>
<hr />
<div>= Calendar: Meetings and Events =<br />
<br />
== Meetings ==<br />
Our monthly teleconference meeting for X3D and Augmented Reality is usually:<br />
* 17:00-18:00 Pacific time on 3rd Wednesday, which is 10:00-11:00 Thursday in Korea and 02:00-03:00 Thursday in Europe.<br />
<br />
The schedule is subject to change based on the time zone for most expected attendees. The meeting is held on Web3D consortium teleconference line.<br />
<br />
Our next teleconference meeting is:<br />
* Mar 20 Wed 2013 at 17:00 (US Pacific) / Mar 21 Thu 10:00 (Korea) 2013<br />
<br />
<br />
== Public Review of AR Proposal ==<br />
We are now going under to process of collecting feedback on the AR proposal which the WG is working on. The public reviewing is schedule to be held from Feb 27 until Apr 10, 2013.<br />
Details can be found on [http://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review this page].<br />
<br />
<br />
== Events ==<br />
* [http://www.perey.com/ARStandards/eighth-ar-standards-community-meeting AR Standards Community Meeting, March 1-2, 2013, Barcelona, Spain]<br />
* Web 3D and ISO/IEC JTC1 SC24 WG9 meetings, Jan 28-31, 2013, Seoul, Korea<br />
* AR Standards Community Meeting - November 8-9, 2012 - Atlanta, US.<br />
* ISO/IEC JTC1 SC24 Plenary and WG6, WG9 meetings, August 20-24, 2012, Brussels, Belgium <br />
* [http://s2012.siggraph.org/attendees/birds-feather Augmented and Mixed Reality Web3D BOF at SIGGRAPH 2012, August 8, 2012, Los Angeles, CA, USA]<br />
* [http://web3d2012.org Web3D Conference, August 4-5, 2012, Los Angeles, CA, USA]<br />
* OGC TP/PC Meeting - AR Working Group - Sept 19-23, 2011, Boulder, CO<br />
* W3C TPAC Meeting W3C AR Community Group - Oct 31- Nov 4, Santa Clara, CA<br />
* AR Standards Community Meeting - Oct 24, 25, 2011 - Basel, Switzerland<br />
* ISO JTC Meeting - Nov 7 - 10, 2011 - San Diego, CA<br />
* [http://www.perey.com/ARStandards/third-international-ar-standards-meeting/ Third International AR Standards Community Meeting, June 15-17, Taichung, Taiwan]<br />
* [http://web3d2011.org/ Augmented/Mixed Reality Workshop at Web3D Conference, 10:50AM-12:10PM Local Time, June 21, Paris, France]<br />
* [http://www.siggraph.org/s2011/for_attendees/birds-feather Augmented and Mixed Reality Web3D BOF at SIGGRAPH 2011, August 2011, Vancouver, Canada]<br />
* SC24 Augmented and Mixed Reality Study Group Meeting @ SC24 Plenary and Working Group Meetings, August 21 2011, Rapid City, South Dakota, USA<br />
<br />
<!-- [[Upcoming X3D events]] --><br />
<br />
= Charter =<br />
<br />
== Overview ==<br />
The Augmented Reality (AR) Working Group focuses on utilizing and extending X3D capabilities to support augmented reality (AR) and mixed reality (MR) applications.<br />
<br />
''Discussion.'' These X3D AR discussions were initially held as part of a special interest group. Now that we have determined sufficient exists to modify the X3D Specification, this effort is moving forward by forming the X3D AR Working Group.<br />
<br />
== Goals ==<br />
Planned goals of AR WG include:<br />
* Collect requirements and describe typical use cases for using X3D in AR/MR applications<br />
* Produce and propose X3D components for AR/MR scenes and applications<br />
* Produce sample AR/MR applications using X3D to demonstrate how this functionality can work correctly<br />
<br />
== Tasks ==<br />
* Investigate state-of-the-art technologies related to X3D and AR/MR to collect requirements and use cases<br />
** Archive and distribute collected requirements and use cases through AR WG wiki page<br />
* Hold regular meetings and workshops to motivate discussions and editing of AR/MR related X3D specification proposals<br />
** Regular meetings will be held through teleconferencing and workshops will be planned through regular meetings<br />
* Promote X3D in AR/MR field by developing and distributing promotional materials of AR applications based on X3D<br />
** Promotional materials include sample applications, video clips, documents, images distributed on the web<br />
<br />
== Deliverables and Timeline ==<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_AR_Requirements_and_Use_cases Use cases and requirements of X3D for AR/MR application] - August 2011<br />
* [http://web3d.org/x3d/wiki/index.php/Comparison_of_X3D_AR_Proposals Comparison of existing proposals] - March 2012<br />
* [http://web3d.org/x3d/wiki/index.php/Plans_for_Merging_X3D_AR_Proposals Merge X3D AR Proposals]<br />
* Proposed new/extended functions and nodes for X3D specification<br />
**[http://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review Public review: Feb 27 ~ Apr 10, 2013]<br />
* Define specification prose for new functionality and encodings<br />
* Sample AR/MR applications with X3D<br />
<br />
== Participants ==<br />
* Anita Havele<br />
* Don Brutzman<br />
* Gerard J. Kim<br />
* Gun Lee<br />
* Len Daly, Daly Realism<br />
* Myeongwon Lee<br />
* Oliver Neubauer<br />
* Sabine Webel<br />
* Timo Engelke<br />
* Yvonne Jung<br />
<br />
== Working Group Meeting Routine ==<br />
Regular meetings are held monthly via teleconference.<br />
Participation is open to everyone via the Web3D teleconference line. <br />
<br />
Non-members can [mailto:anita.havele@web3d.org?subject=AR%20teleconference%20request request access information] for this call, or [http://web3d.org/membership/join Join Web3D] for full member priveleges!<br />
<br />
Meeting agenda and minutes are also distributed on the [mailto:x3d-public@web3d.org?subject=X3D%20AR%20Working%20Group x3d-public@web3d.org] mailing list and [http://www.web3d.org/membership/login/list_archives archived online].<br />
<br />
= Augmented Reality Roadmap for X3D =<br />
The [[Augmented Reality Roadmap for X3D]] is a description document charting shared strategies and our way forward.<br />
<br />
<br />
= History and Background Information =<br />
Web3D Consortium formed a special interest group on AR initiatives in July 2009 worked to help create the AR Technology Road Map.<br />
<br />
Several Web3D Consortium member projects showcase the feasibility of AR in X3D, particularly [http://www.x3dom.org X3DOM] open source produced by [http://www.igd.fhg.de/www/igd-a4 Fraunhofer IGD]. <br />
<br />
* X3DOM can serve as an out of the box, standards-based solution for AR developers.<br />
* X3D and X3DOM continue to improve scripting options for X3D with HTML5 and this has good promise for AR applications. <br />
* X3DOM is being considered for potential standardization in a Mobile and/or Augmented Reality (AR) profile for X3D.<br />
<br />
Our [http://www.web3d.kr Web3D Korea Chapter members] from ETRI are working on Mixed Reality visualization in X3D.<br />
<br />
* The Consortium as been working closely within W3C HTML5 WG to align our standards for 3D visualization on the Web. <br />
<br />
Additional details are available at: <br />
<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_and_HTML5 X3D and HTML5]<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_and_HTML_FAQ X3D and HTML FAQ]<br />
* [http://www.x3dom.org X3DOM]<br />
<br />
Web3D Consortium is also engaged with the [http://www.arstandards.org AR Standards Community], [http://opengeospacial.org OGC], [http://www.khronos.org Khronos] and [http://w3c.org W3C] organizations for applying and adapting X3D.<br />
<br />
* Web3D Consortium will be starting an AR working group in March 2011 to develop a roadmap for AR standardization. We encourage your participation. <br />
* The technology discussions and meetings will be conducted publicly so that the widest possible comment and reaction can be considered from the Community. Feedback from this community will help X3D quickly and stably adopt new technologies to provide ongoing archival value for all 3D graphics.<br />
* Meanwhile Web3D Members still retain important membership rights of proposing significant new technology and considering patented technologies within a "safe haven" prior to public release.<br />
<br />
[http://www.web3d.org/x3d/wiki/images/3/32/X3dProfilePossibilitiesMobileHtml5AR.2010June29.pdf X3D Mobile Profile slideset]<br />
from last summer's Mobile X3D ISO Workshop has also been<br />
linked: how Mobile, HTML5 and possibly Augmented Reality (AR) components<br />
can be aligned together.<br />
<br />
Many new Web3D capabilities are becoming available. There has been no better time to discuss X3D technologies and also join the [http://web3d.org/membersip/join Web3D Consortium]. Get involved in these early discussions to create the AR Technology Road Map. We look forward to your participation.<br />
<br />
[http://www.web3d.org/wiki/index.php/Summary_Of_Old_AR_Proposals Summary of Old AR Proposals]<br />
<br />
== Developing X3D AR Specification - Proposals ==<br />
The working group has reviewed the existing proposals and have summarized in [[Comparison of X3D AR Proposals]].<br />
Based on this comparison, the working group is now preparing [[Plans for Merging X3D AR Proposals]].<br />
The working group is looking for public feedback on the unified AR proposal that captures essential features for MR/AR visualization.<br />
The unified AR proposal for public review can be found [http://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review here].<br />
<br />
== X3D Earth Working Group ==<br />
The X3D Earth Working Group has expanded and refined a proposal by Dr. Myeong Won Lee for a new <br />
[http://www.web3d.org/membership/login/memberwiki/index.php/X3D_v3.3_Specification_Changes#GpsSensor_node GpsSensor node].<br />
Due to several overlapping technical issues, the group has asked to collaborate with the Augmented Reality group on final design for this node.<br />
<br />
= Interoperability with other AR and Web Standards =<br />
Several discussions at the Web3D Conference, SIGGRAPH and ISO SC24 meetings about recent AR standards developments continue to improve and refine our strategy on interoperability with other standards.<br />
<br />
Establishing common implementations and examples is important to demonstrate successful interoperable capabilities for the spec. Continued collaboration and reaching out to other AR standards groups is essential. Our common goal remains maximum interoperability with all Web standards. <br />
<br />
*W3C Augmented Reality Community Group <br />
*OGC ARML WG <br />
*AR Standards Community Group <br />
*ISO-SC24<br />
*KHRONOS <br />
<br />
Given the recent work in the Web3D AR WG and the realization that the current state of AR content models is not comprehensive, the ISO standards committee SC24, which administers X3D review as an ISO standard, has established a new Working Group for Augmented and Mixed Reality.<br />
<br />
This group conducted a survey of the current state of the art in AR/MR standardization. Here is a summary of the main findings:<br />
<br />
*A need for clear and precise definitions of terms <br />
*A need for a reference architecture with the following features: <br />
**Separation of the content from the browser/player/application <br />
**Extensible and general enough to accommodate future technologies (e.g. display devices, tracking algorithms, sensors) <br />
**Defined at the right abstraction level to be platform/vendor independent <br />
**Clear interface definitions among the subsystems <br />
**A proposal to develop a protocol between the AR/MR engine and the object tracking/recognition subsystem, independent of the algorithms used <br />
*Reuse of existing standards as much as possible (see below) <br />
*A content model, based on the underlying reference architecture, that is comprehensive (e.g. scene/world model, interaction, rich augmentation methods and styling options, representation of extensive types of physical real-world objects) <br />
*A need for a rich and sophisticated scene/world model <br />
**An X3D-based approach seems promising for providing a sophisticated world model (scene graph structure) and many media objects for augmentation; a proposal to extend the X3D standards <br />
*A need for representation of sensors and physical objects <br />
**A proposal for a merged abstraction of physical objects and separate sensors as “objects with virtual sensors”, extending the virtual sensors of X3D <br />
*A need for sophisticated representation of “places of interest” (POIs) <br />
**A proposal to use and extend the OGC KML standards <br />
*A need for extensive styling of 2D/3D information <br />
**A proposal to use and extend HTML5 <br />
*A need to abstract AR/MR interaction behaviors <br />
**Complicated behaviors to be handled by scripts and a DOM-like approach <br />
**A proposal to extend X3DOM for this purpose <br />
*A need for other supporting functionality: <br />
**Inclusion and specification of real-world capture cameras/sensors <br />
**Moving texture/background functionality for video see-through AR <br />
**Handling of depth data and occlusion effects <br />
**Specification of virtual/real light sources and rendering methods <br />
Based on these findings, the group proposes to derive an AR content model as an extension of a virtual world, with provisions for representing physically sensed objects. The provisions refer to ways to specify the physical augmentation “targets” without specific sensor information, and ways to (intuitively) tie or associate them to their virtual counterparts. This will result in vendor independence, convenience of use and support for extensibility. (A hypothetical sketch of such a content model follows.)<br />
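<br />
To make this concrete, here is a hypothetical, non-normative sketch of such a content model in X3D-style XML. All node and field names are invented for illustration and are not drawn from the proposals themselves:<br />
<br />
<pre><br />
<!-- Hypothetical sketch; node and field names are illustrative only. --><br />
<!-- A physical "target" declared without committing to any sensor or algorithm --><br />
<PhysicalTarget DEF='Marker1' description='poster on the wall'/><br />
<!-- The virtual counterpart, tied to the target rather than to a tracker --><br />
<Transform DEF='Annotation'><br />
  <Shape><br />
    <Appearance><Material diffuseColor='0 0.5 1'/></Appearance><br />
    <Box/><br />
  </Shape><br />
</Transform><br />
<!-- Associating the sensed pose with the virtual object --><br />
<ROUTE fromNode='Marker1' fromField='position_changed' toNode='Annotation' toField='set_translation'/><br />
<ROUTE fromNode='Marker1' fromField='orientation_changed' toNode='Annotation' toField='set_rotation'/><br />
</pre><br />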
<br />
The ISO AR standardization proposal recommends:<br />
*Merging HTML and X3D (X3DOM, i.e. Declarative 3D) for abstract content components for 2D and 3D augmentation. <br />
*OGC KML for describing POIs and sensed physical objects. <br />
*A scripting approach for non-standard complex content behaviors and the use of remote cloud services. <br />
Current technical work within the Web3D AR WG includes harmonizing these proposals to best fit AR capabilities into X3D scenes. This work will be formally written up as the Augmented Reality (AR) Component for the X3D Specification.<br />
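<br />
As a small illustration of the scripting and DOM-like approach recommended above, the following sketch drives a declarative X3DOM scene from ordinary JavaScript; the element id is a placeholder chosen for this example:<br />
<br />
<pre><br />
<!-- Assumes an X3DOM page containing <transform id='annotation'> in its scene --><br />
<script type='text/javascript'><br />
  // Standard DOM calls manipulate the 3D scene; no plugin-specific API is needed.<br />
  var node = document.getElementById('annotation');<br />
  node.setAttribute('translation', '0 1.5 0');  // move the augmentation<br />
  node.setAttribute('render', 'true');          // toggle visibility<br />
</script><br />
</pre><br />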
<br />
= Participation and Liaisons =<br />
* Christine Perey's group on AR Standardization<br />
* Other partnerships can also be considered as appropriate.<br />
<br />
Of note, the Web3D Consortium Intellectual Property Rights (IPR) Policy insists on open, royalty-free (RF) specifications. These coexist effectively with the Web Architecture and many different business models.</div>Endoverthttps://www.web3d.org/wiki/index.php?title=X3D_and_Augmented_Reality&diff=6656X3D and Augmented Reality2013-02-27T01:19:32Z<p>Endovert: </p>
<hr />
<div>= Calendar: Meetings and Events =<br />
<br />
== Meetings ==<br />
Our monthly teleconference meeting for X3D and Augmented Reality is usually:<br />
* 17:00-18:00 Pacific time on 3rd Wednesday, which is 10:00-11:00 Thursday in Korea and 02:00-03:00 Thursday in Europe.<br />
<br />
The schedule is subject to change based on the time zones of most expected attendees. The meeting is held on the Web3D Consortium teleconference line.<br />
<br />
Our next teleconference meeting is:<br />
* Mar 20 (Wed) 2013 at 17:00 US Pacific / Mar 21 (Thu) 10:00 Korea<br />
<br />
<br />
== Public Review of AR Proposal ==<br />
We are now in the process of collecting feedback on the AR proposal that the WG is working on.<br />
Details can be found on [http://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review this page].<br />
<br />
<br />
== Events ==<br />
* [http://www.perey.com/ARStandards/eighth-ar-standards-community-meeting AR Standards Community Meeting, March 1-2, 2013, Barcelona, Spain]<br />
* Web 3D and ISO/IEC JTC1 SC24 WG9 meetings, Jan 28-31, 2013, Seoul, Korea<br />
* AR Standards Community Meeting - November 8-9, 2012 - Atlanta, US.<br />
* ISO/IEC JTC1 SC24 Plenary and WG6, WG9 meetings, August 20-24, 2012, Brussels, Belgium <br />
* [http://s2012.siggraph.org/attendees/birds-feather Augmented and Mixed Reality Web3D BOF at SIGGRAPH 2012, August 8, 2012, Los Angeles, CA, USA]<br />
* [http://web3d2012.org Web3D Conference, August 4-5, 2012, Los Angeles, CA, USA]<br />
* OGC TP/PC Meeting - AR Working Group - Sept 19-23, 2011, Boulder, CO<br />
* W3C TPAC Meeting - W3C AR Community Group - Oct 31 - Nov 4, 2011, Santa Clara, CA<br />
* AR Standards Community Meeting - Oct 24, 25, 2011 - Basel, Switzerland<br />
* ISO JTC Meeting - Nov 7 - 10, 2011 - San Diego, CA<br />
* [http://www.perey.com/ARStandards/third-international-ar-standards-meeting/ Third International AR Standards Community Meeting, June 15-17, Taichung, Taiwan]<br />
* [http://web3d2011.org/ Augmented/Mixed Reality Workshop at Web3D Conference, 10:50AM-12:10PM Local Time, June 21, Paris, France]<br />
* [http://www.siggraph.org/s2011/for_attendees/birds-feather Augmented and Mixed Reality Web3D BOF at SIGGRAPH 2011, August 2011, Vancouver, Canada]<br />
* SC24 Augmented and Mixed Reality Study Group Meeting @ SC24 Plenary and Working Group Meetings, August 21 2011, Rapid City, South Dakota, USA<br />
<br />
<!-- [[Upcoming X3D events]] --><br />
<br />
= Charter =<br />
<br />
== Overview ==<br />
The Augmented Reality (AR) Working Group focuses on utilizing and extending X3D capabilities to support augmented reality (AR) and mixed reality (MR) applications.<br />
<br />
''Discussion.'' These X3D AR discussions were initially held as part of a special interest group. Now that we have determined sufficient interest exists to modify the X3D Specification, this effort is moving forward by forming the X3D AR Working Group.<br />
<br />
== Goals ==<br />
Planned goals of AR WG include:<br />
* Collect requirements and describe typical use cases for using X3D in AR/MR applications<br />
* Produce and propose X3D components for AR/MR scenes and applications<br />
* Produce sample AR/MR applications using X3D to demonstrate how this functionality can work correctly<br />
<br />
== Tasks ==<br />
* Investigate state-of-the-art technologies related to X3D and AR/MR to collect requirements and use cases<br />
** Archive and distribute collected requirements and use cases through AR WG wiki page<br />
* Hold regular meetings and workshops to motivate discussions and editing of AR/MR related X3D specification proposals<br />
** Regular meetings will be held through teleconferencing and workshops will be planned through regular meetings<br />
* Promote X3D in AR/MR field by developing and distributing promotional materials of AR applications based on X3D<br />
** Promotional materials include sample applications, video clips, documents, images distributed on the web<br />
<br />
== Deliverables and Timeline ==<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_AR_Requirements_and_Use_cases Use cases and requirements of X3D for AR/MR application] - August 2011<br />
* [http://web3d.org/x3d/wiki/index.php/Comparison_of_X3D_AR_Proposals Comparison of existing proposals] - March 2012<br />
* [http://web3d.org/x3d/wiki/index.php/Plans_for_Merging_X3D_AR_Proposals Merge X3D AR Proposals]<br />
* Proposed new/extended functions and nodes for X3D specification<br />
**[http://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review Public review: Feb 27 ~ Apr 10, 2013]<br />
* Define specification prose for new functionality and encodings<br />
* Sample AR/MR applications with X3D<br />
<br />
== Participants ==<br />
* Anita Havele<br />
* Don Brutzman<br />
* Gerard J. Kim<br />
* Gun Lee<br />
* Len Daly, Daly Realism<br />
* Myeongwon Lee<br />
* Oliver Neubauer<br />
* Sabine Webel<br />
* Timo Engelke<br />
* Yvonne Jung<br />
<br />
== Working Group Meeting Routine ==<br />
Regular meetings are held monthly via teleconference.<br />
Participation is open to everyone via the Web3D teleconference line. <br />
<br />
Non-members can [mailto:anita.havele@web3d.org?subject=AR%20teleconference%20request request access information] for this call, or [http://web3d.org/membership/join Join Web3D] for full member privileges!<br />
<br />
Meeting agenda and minutes are also distributed on the [mailto:x3d-public@web3d.org?subject=X3D%20AR%20Working%20Group x3d-public@web3d.org] mailing list and [http://www.web3d.org/membership/login/list_archives archived online].<br />
<br />
= Augmented Reality Roadmap for X3D =<br />
The [[Augmented Reality Roadmap for X3D]] is a description document charting shared strategies and our way forward.<br />
<br />
<br />
= History and Background Information =<br />
The Web3D Consortium formed a special interest group on AR initiatives in July 2009, which worked to help create the AR Technology Road Map.<br />
<br />
Several Web3D Consortium member projects showcase the feasibility of AR in X3D, particularly the open-source [http://www.x3dom.org X3DOM] framework produced by [http://www.igd.fhg.de/www/igd-a4 Fraunhofer IGD].<br />
<br />
* X3DOM can serve as an out-of-the-box, standards-based solution for AR developers.<br />
* X3D and X3DOM continue to improve scripting options for X3D with HTML5, which holds good promise for AR applications.<br />
* X3DOM is being considered for potential standardization in a Mobile and/or Augmented Reality (AR) profile for X3D.<br />
<br />
Our [http://www.web3d.kr Web3D Korea Chapter members] from ETRI are working on Mixed Reality visualization in X3D.<br />
<br />
* The Consortium has been working closely with the W3C HTML5 WG to align our standards for 3D visualization on the Web.<br />
<br />
Additional details are available at: <br />
<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_and_HTML5 X3D and HTML5]<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_and_HTML_FAQ X3D and HTML FAQ]<br />
* [http://www.x3dom.org X3DOM]<br />
<br />
Web3D Consortium is also engaged with the [http://www.arstandards.org AR Standards Community], [http://opengeospacial.org OGC], [http://www.khronos.org Khronos] and [http://w3c.org W3C] organizations for applying and adapting X3D.<br />
<br />
* Web3D Consortium will be starting an AR working group in March 2011 to develop a roadmap for AR standardization. We encourage your participation. <br />
* The technology discussions and meetings will be conducted publicly so that the widest possible comment and reaction can be considered from the Community. Feedback from this community will help X3D quickly and stably adopt new technologies to provide ongoing archival value for all 3D graphics.<br />
* Meanwhile Web3D Members still retain important membership rights of proposing significant new technology and considering patented technologies within a "safe haven" prior to public release.<br />
<br />
The [http://www.web3d.org/x3d/wiki/images/3/32/X3dProfilePossibilitiesMobileHtml5AR.2010June29.pdf X3D Mobile Profile slideset] from last summer's Mobile X3D ISO Workshop has also been linked; it outlines how Mobile, HTML5 and possibly Augmented Reality (AR) components can be aligned together.<br />
<br />
Many new Web3D capabilities are becoming available. There has never been a better time to discuss X3D technologies and to join the [http://web3d.org/membersip/join Web3D Consortium]. Get involved in these early discussions to create the AR Technology Road Map. We look forward to your participation.<br />
<br />
[http://www.web3d.org/wiki/index.php/Summary_Of_Old_AR_Proposals Summary of Old AR Proposals]<br />
<br />
== Developing X3D AR Specification - Proposals ==<br />
The working group has reviewed the existing proposals and summarized them in [[Comparison of X3D AR Proposals]].<br />
Based on this comparison, the working group is now preparing [[Plans for Merging X3D AR Proposals]].<br />
The working group is looking for public feedback on the unified AR proposal that captures essential features for MR/AR visualization.<br />
The unified AR proposal for public review can be found [http://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review here].<br />
<br />
== X3D Earth Working Group ==<br />
The X3D Earth Working Group has expanded and refined a proposal by Dr. Myeong Won Lee for a new <br />
[http://www.web3d.org/membership/login/memberwiki/index.php/X3D_v3.3_Specification_Changes#GpsSensor_node GpsSensor node].<br />
Due to several overlapping technical issues, the group has asked to collaborate with the Augmented Reality group on the final design of this node.<br />
<br />
= Interoperability with other AR and Web Standards =<br />
Several discussions at the Web3D Conference, SIGGRAPH and ISO SC24 meetings about recent AR standards developments continue to improve and refine our strategy on interoperability with other standards.<br />
<br />
Establishing common implementations and examples is important to demonstrate successful interoperable capabilities for the spec. Continued collaboration and reaching out to other AR standards groups is essential. Our common goal remains maximum interoperability with all Web standards. <br />
<br />
*W3C Augmented Reality Community Group <br />
*OGC ARML WG <br />
*AR Standards Community Group <br />
*ISO-SC24<br />
*KHRONOS <br />
<br />
Given the recent work in the Web3D AR WG and the realization that the current state of AR content models is not comprehensive, the ISO standards committee SC24, which administers X3D review as an ISO standard, has established a new Working Group for Augmented and Mixed Reality.<br />
<br />
This group conducted a survey of the current state of the art in AR/MR standardization. Here is a summary of the main findings:<br />
<br />
*A need for clear and precise definitions of terms <br />
*A need for a reference architecture with the following features: <br />
**Separation of the content from the browser/player/application <br />
**Extensible and general enough to accommodate future technologies (e.g. display devices, tracking algorithms, sensors) <br />
**Defined at the right abstraction level to be platform/vendor independent <br />
**Clear interface definitions among the subsystems <br />
**A proposal to develop a protocol between the AR/MR engine and the object tracking/recognition subsystem, independent of the algorithms used <br />
*Reuse of existing standards as much as possible (see below) <br />
*A content model, based on the underlying reference architecture, that is comprehensive (e.g. scene/world model, interaction, rich augmentation methods and styling options, representation of extensive types of physical real-world objects) <br />
*A need for a rich and sophisticated scene/world model <br />
**An X3D-based approach seems promising for providing a sophisticated world model (scene graph structure) and many media objects for augmentation; a proposal to extend the X3D standards <br />
*A need for representation of sensors and physical objects <br />
**A proposal for a merged abstraction of physical objects and separate sensors as “objects with virtual sensors”, extending the virtual sensors of X3D <br />
*A need for sophisticated representation of “places of interest” (POIs) <br />
**A proposal to use and extend the OGC KML standards <br />
*A need for extensive styling of 2D/3D information <br />
**A proposal to use and extend HTML5 <br />
*A need to abstract AR/MR interaction behaviors <br />
**Complicated behaviors to be handled by scripts and a DOM-like approach <br />
**A proposal to extend X3DOM for this purpose <br />
*A need for other supporting functionality: <br />
**Inclusion and specification of real-world capture cameras/sensors <br />
**Moving texture/background functionality for video see-through AR <br />
**Handling of depth data and occlusion effects <br />
**Specification of virtual/real light sources and rendering methods <br />
Based on these findings, the group proposes to derive an AR content model as an extension of a virtual world, with provisions for representing physically sensed objects. The provisions refer to ways to specify the physical augmentation “targets” without specific sensor information, and ways to (intuitively) tie or associate them to their virtual counterparts. This will result in vendor independence, convenience of use and support for extensibility.<br />
<br />
The ISO AR standardization proposal recommends:<br />
*Merging HTML and X3D (X3DOM, i.e. Declarative 3D) for abstract content components for 2D and 3D augmentation. <br />
*OGC KML for describing POIs and sensed physical objects. <br />
*A scripting approach for non-standard complex content behaviors and the use of remote cloud services. <br />
Current technical work within the Web3D AR WG includes harmonizing these proposals to best fit AR capabilities into X3D scenes. This work will be formally written up as the Augmented Reality (AR) Component for the X3D Specification.<br />
<br />
= Participation and Liaisons =<br />
* Christine Perey's group on AR Standardization<br />
* Other partnerships can also be considered as appropriate.<br />
<br />
Of note, the Web3D Consortium Intellectual Property Rights (IPR) Policy insists on open, royalty-free (RF) specifications. These coexist effectively with the Web Architecture and many different business models.</div>Endoverthttps://www.web3d.org/wiki/index.php?title=X3D_and_Augmented_Reality&diff=6656X3D and Augmented Reality2013-02-27T01:19:32Z<p>Endovert: </p>
<hr />
<div>= Calendar: Meetings and Events =<br />
<br />
== Meetings ==<br />
Our monthly teleconference meeting for X3D and Augmented Reality is usually:<br />
* 17:00-18:00 Pacific time on 3rd Wednesday, which is 10:00-11:00 Thursday in Korea and 02:00-03:00 Thursday in Europe.<br />
<br />
The schedule is subject to change based on the time zones of most expected attendees. The meeting is held on the Web3D Consortium teleconference line.<br />
<br />
Our next teleconference meeting is:<br />
* Mar 20 (Wed) 2013 at 17:00 US Pacific / Mar 21 (Thu) 10:00 Korea<br />
<br />
== Public Review of AR Proposal ==<br />
We are now in the process of collecting feedback on the AR proposal that the WG is working on.<br />
Details can be found on [http://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review this page].<br />
<br />
== Events ==<br />
* [http://www.perey.com/ARStandards/eighth-ar-standards-community-meeting AR Standards Community Meeting, March 1-2, 2013, Barcelona, Spain]<br />
* Web 3D and ISO/IEC JTC1 SC24 WG9 meetings, Jan 28-31, 2013, Seoul, Korea<br />
* AR Standards Community Meeting - November 8-9, 2012 - Atlanta, US.<br />
* ISO/IEC JTC1 SC24 Plenary and WG6, WG9 meetings, August 20-24, 2012, Brussels, Belgium <br />
* [http://s2012.siggraph.org/attendees/birds-feather Augmented and Mixed Reality Web3D BOF at SIGGRAPH 2012, August 8, 2012, Los Angeles, CA, USA]<br />
* [http://web3d2012.org Web3D Conference, August 4-5, 2012, Los Angeles, CA, USA]<br />
* OGC TP/PC Meeting - AR Working Group - Sept 19-23, 2011, Boulder, CO<br />
* W3C TPAC Meeting - W3C AR Community Group - Oct 31 - Nov 4, 2011, Santa Clara, CA<br />
* AR Standards Community Meeting - Oct 24, 25, 2011 - Basel, Switzerland<br />
* ISO JTC Meeting - Nov 7 - 10, 2011 - San Diego, CA<br />
* [http://www.perey.com/ARStandards/third-international-ar-standards-meeting/ Third International AR Standards Meeting, June 15-17, Taichung, Taiwan]<br />
* [http://web3d2011.org/ Augmented/Mixed Reality Workshop at Web3D Conference, 10:50AM-12:10PM Local Time, June 21, Paris, France]<br />
* [http://www.siggraph.org/s2011/for_attendees/birds-feather Augmented and Mixed Reality Web3D BOF at SIGGRAPH 2011, August 2011, Vancouver, Canada]<br />
* SC24 Augmented and Mixed Reality Study Group Meeting @ SC24 Plenary and Working Group Meetings, August 21 2011, Rapid City, South Dakota, USA<br />
<br />
<!-- [[Upcoming X3D events]] --><br />
<br />
= Charter =<br />
<br />
== Overview ==<br />
The Augmented Reality (AR) Working Group focuses on utilizing and extending X3D capabilities to support augmented reality (AR) and mixed reality (MR) applications.<br />
<br />
''Discussion.'' These X3D AR discussions were initially held as part of a special interest group. Now that we have determined sufficient interest exists to modify the X3D Specification, this effort is moving forward by forming the X3D AR Working Group.<br />
<br />
== Goals ==<br />
Planned goals of AR WG include:<br />
* Collect requirements and describe typical use cases for using X3D in AR/MR applications<br />
* Produce and propose X3D components for AR/MR scenes and applications<br />
* Produce sample AR/MR applications using X3D to demonstrate how this functionality can work correctly<br />
<br />
== Tasks ==<br />
* Investigate state-of-the-art technologies related to X3D and AR/MR to collect requirements and use cases<br />
** Archive and distribute collected requirements and use cases through AR WG wiki page<br />
* Hold regular meetings and workshops to motivate discussions and editing of AR/MR related X3D specification proposals<br />
** Regular meetings will be held through teleconferencing and workshops will be planned through regular meetings<br />
* Promote X3D in AR/MR field by developing and distributing promotional materials of AR applications based on X3D<br />
** Promotional materials include sample applications, video clips, documents, images distributed on the web<br />
<br />
== Deliverables and Timeline ==<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_AR_Requirements_and_Use_cases Use cases and requirements of X3D for AR/MR application] - August 2011<br />
* [http://web3d.org/x3d/wiki/index.php/Comparison_of_X3D_AR_Proposals Comparison of existing proposals] - March 2012<br />
* [http://web3d.org/x3d/wiki/index.php/Plans_for_Merging_X3D_AR_Proposals Merge X3D AR Proposals]<br />
* Proposed new/extended functions and nodes for X3D specification<br />
**[http://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review Public review: Feb 27 ~ Apr 10, 2013]<br />
* Define specification prose for new functionality and encodings<br />
* Sample AR/MR applications with X3D<br />
<br />
== Participants ==<br />
* Anita Havele<br />
* Don Brutzman<br />
* Gerard J. Kim<br />
* Gun Lee<br />
* Len Daly, Daly Realism<br />
* Myeongwon Lee<br />
* Oliver Neubauer<br />
* Sabine Webel<br />
* Timo Engelke<br />
* Yvonne Jung<br />
<br />
== Working Group Meeting Routine ==<br />
Regular meetings are held monthly via teleconference.<br />
Participation is open to everyone via the Web3D teleconference line. <br />
<br />
Non-members can [mailto:anita.havele@web3d.org?subject=AR%20teleconference%20request request access information] for this call, or [http://web3d.org/membership/join Join Web3D] for full member privileges!<br />
<br />
Meeting agenda and minutes are also distributed on the [mailto:x3d-public@web3d.org?subject=X3D%20AR%20Working%20Group x3d-public@web3d.org] mailing list and [http://www.web3d.org/membership/login/list_archives archived online].<br />
<br />
= Augmented Reality Roadmap for X3D =<br />
The [[Augmented Reality Roadmap for X3D]] is a description document charting shared strategies and our way forward.<br />
<br />
<br />
= History and Background Information =<br />
The Web3D Consortium formed a special interest group on AR initiatives in July 2009, which worked to help create the AR Technology Road Map.<br />
<br />
Several Web3D Consortium member projects showcase the feasibility of AR in X3D, particularly the open-source [http://www.x3dom.org X3DOM] framework produced by [http://www.igd.fhg.de/www/igd-a4 Fraunhofer IGD].<br />
<br />
* X3DOM can serve as an out-of-the-box, standards-based solution for AR developers.<br />
* X3D and X3DOM continue to improve scripting options for X3D with HTML5, which holds good promise for AR applications.<br />
* X3DOM is being considered for potential standardization in a Mobile and/or Augmented Reality (AR) profile for X3D.<br />
<br />
Our [http://www.web3d.kr Web3D Korea Chapter members] from ETRI are working on Mixed Reality visualization in X3D.<br />
<br />
* The Consortium has been working closely with the W3C HTML5 WG to align our standards for 3D visualization on the Web.<br />
<br />
Additional details are available at: <br />
<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_and_HTML5 X3D and HTML5]<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_and_HTML_FAQ X3D and HTML FAQ]<br />
* [http://www.x3dom.org X3DOM]<br />
<br />
Web3D Consortium is also engaged with the [http://www.arstandards.org AR Standards Community], [http://opengeospacial.org OGC], [http://www.khronos.org Khronos] and [http://w3c.org W3C] organizations for applying and adapting X3D.<br />
<br />
* Web3D Consortium will be starting an AR working group in March 2011 to develop a roadmap for AR standardization. We encourage your participation. <br />
* The technology discussions and meetings will be conducted publicly so that the widest possible comment and reaction can be considered from the Community. Feedback from this community will help X3D quickly and stably adopt new technologies to provide ongoing archival value for all 3D graphics.<br />
* Meanwhile Web3D Members still retain important membership rights of proposing significant new technology and considering patented technologies within a "safe haven" prior to public release.<br />
<br />
The [http://www.web3d.org/x3d/wiki/images/3/32/X3dProfilePossibilitiesMobileHtml5AR.2010June29.pdf X3D Mobile Profile slideset] from last summer's Mobile X3D ISO Workshop has also been linked; it outlines how Mobile, HTML5 and possibly Augmented Reality (AR) components can be aligned together.<br />
<br />
Many new Web3D capabilities are becoming available. There has never been a better time to discuss X3D technologies and to join the [http://web3d.org/membersip/join Web3D Consortium]. Get involved in these early discussions to create the AR Technology Road Map. We look forward to your participation.<br />
<br />
[http://www.web3d.org/wiki/index.php/Summary_Of_Old_AR_Proposals Summary of Old AR Proposals]<br />
<br />
== Developing X3D AR Specification - Proposals ==<br />
The working group has reviewed the existing proposals and summarized them in [[Comparison of X3D AR Proposals]].<br />
Based on this comparison, the working group is now preparing [[Plans for Merging X3D AR Proposals]].<br />
The working group is looking for public feedback on the unified AR proposal that captures essential features for MR/AR visualization.<br />
The unified AR proposal for public review can be found [http://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review here].<br />
<br />
== X3D Earth Working Group ==<br />
The X3D Earth Working Group has expanded and refined a proposal by Dr. Myeong Won Lee for a new <br />
[http://www.web3d.org/membership/login/memberwiki/index.php/X3D_v3.3_Specification_Changes#GpsSensor_node GpsSensor node].<br />
Due to several overlapping technical issues, the group has asked to collaborate with the Augmented Reality group on the final design of this node.<br />
<br />
= Interoperability with other AR and Web Standards =<br />
Several discussions at the Web3D Conference, SIGGRAPH and ISO SC24 meetings about recent AR standards developments continue to improve and refine our strategy on interoperability with other standards.<br />
<br />
Establishing common implementations and examples is important to demonstrate successful interoperable capabilities for the spec. Continued collaboration and reaching out to other AR standards groups is essential. Our common goal remains maximum interoperability with all Web standards. <br />
<br />
*W3C Augmented Reality Community Group <br />
*OGC ARML WG <br />
*AR Standards Group <br />
*ISO-SC24<br />
*KHRONOS <br />
<br />
Given the recent work in the Web3D AR WG and the realization that the current state of AR content models is not comprehensive, the ISO standards committee SC24, which administers X3D review as an ISO standard, has established a new Working Group for Augmented and Mixed Reality.<br />
<br />
This group conducted a survey of the current state of the art in AR/MR standardization. Here is a summary of the main findings:<br />
<br />
*A need for clear and precise definitions of terms <br />
*A need for a reference architecture with the following features: <br />
**Separation of the content from the browser/player/application <br />
**Extensible and general enough to accommodate future technologies (e.g. display devices, tracking algorithms, sensors) <br />
**Defined at the right abstraction level to be platform/vendor independent <br />
**Clear interface definitions among the subsystems <br />
**A proposal to develop a protocol between the AR/MR engine and the object tracking/recognition subsystem, independent of the algorithms used <br />
*Reuse of existing standards as much as possible (see below) <br />
*A content model, based on the underlying reference architecture, that is comprehensive (e.g. scene/world model, interaction, rich augmentation methods and styling options, representation of extensive types of physical real-world objects) <br />
*A need for a rich and sophisticated scene/world model <br />
**An X3D-based approach seems promising for providing a sophisticated world model (scene graph structure) and many media objects for augmentation; a proposal to extend the X3D standards <br />
*A need for representation of sensors and physical objects <br />
**A proposal for a merged abstraction of physical objects and separate sensors as “objects with virtual sensors”, extending the virtual sensors of X3D <br />
*A need for sophisticated representation of “places of interest” (POIs) <br />
**A proposal to use and extend the OGC KML standards <br />
*A need for extensive styling of 2D/3D information <br />
**A proposal to use and extend HTML5 <br />
*A need to abstract AR/MR interaction behaviors <br />
**Complicated behaviors to be handled by scripts and a DOM-like approach <br />
**A proposal to extend X3DOM for this purpose <br />
*A need for other supporting functionality: <br />
**Inclusion and specification of real-world capture cameras/sensors <br />
**Moving texture/background functionality for video see-through AR <br />
**Handling of depth data and occlusion effects <br />
**Specification of virtual/real light sources and rendering methods <br />
Based on these findings, the group proposes to derive an AR content model as an extension of a virtual world, with provisions for representing physically sensed objects. The provisions refer to ways to specify the physical augmentation “targets” without specific sensor information, and ways to (intuitively) tie or associate them to their virtual counterparts. This will result in vendor independence, convenience of use and support for extensibility.<br />
<br />
The ISO AR standardization proposal recommends:<br />
*Merging HTML and X3D (X3DOM, i.e. Declarative 3D) for abstract content components for 2D and 3D augmentation. <br />
*OGC KML for describing POIs and sensed physical objects. <br />
*A scripting approach for non-standard complex content behaviors and the use of remote cloud services. <br />
Current technical work within the Web3D AR WG includes harmonizing these proposals to best fit AR capabilities into X3D scenes. This work will be formally written up as the Augmented Reality (AR) Component for the X3D Specification.<br />
<br />
= Participation and Liaisons =<br />
* Christine Perey's group on AR Standardization<br />
* Other partnerships can also be considered as appropriate.<br />
<br />
Of note, the Web3D Consortium Intellectual Property Rights (IPR) Policy insists on open, royalty-free (RF) specifications. These coexist effectively with the Web Architecture and many different business models.</div>Endoverthttps://www.web3d.org/wiki/index.php?title=X3D_and_Augmented_Reality&diff=6653X3D and Augmented Reality2013-02-27T01:14:23Z<p>Endovert: </p>
<hr />
<div>= Calendar: Meetings and Events =<br />
<br />
== Meetings ==<br />
Our monthly teleconference meeting for X3D and Augmented Reality is usually:<br />
* 17:00-18:00 Pacific time on 3rd Wednesday, which is 10:00-11:00 Thursday in Korea and 02:00-03:00 Thursday in Europe.<br />
<br />
The schedule is subject to change based on the time zones of most expected attendees. The meeting is held on the Web3D Consortium teleconference line.<br />
<br />
Our next teleconference meeting is:<br />
* Mar 20 (Wed) 2013 at 17:00 US Pacific / Mar 21 (Thu) 10:00 Korea<br />
<br />
== Events ==<br />
* [http://www.perey.com/ARStandards/eighth-ar-standards-community-meeting AR Standards Community Meeting, March 1-2, 2013, Barcelona, Spain]<br />
* Web 3D and ISO/IEC JTC1 SC24 WG9 meetings, Jan 28-31, 2013, Seoul, Korea<br />
* AR Standards Community Meeting - November 8-9, 2012 - Atlanta, US.<br />
* ISO/IEC JTC1 SC24 Plenary and WG6, WG9 meetings, August 20-24, 2012, Brussels, Belgium <br />
* [http://s2012.siggraph.org/attendees/birds-feather Augmented and Mixed Reality Web3D BOF at SIGGRAPH 2012, August 8, 2012, Los Angeles, CA, USA]<br />
* [http://web3d2012.org Web3D Conference, August 4-5, 2012, Los Angeles, CA, USA]<br />
* OGC TP/PC Meeting - AR Working Group - Sept 19-23, 2011, Boulder, CO<br />
* W3C TPAC Meeting - W3C AR Community Group - Oct 31 - Nov 4, 2011, Santa Clara, CA<br />
* AR Standards Community Meeting - Oct 24, 25, 2011 - Basel, Switzerland<br />
* ISO JTC Meeting - Nov 7 - 10, 2011 - San Diego, CA<br />
* [http://www.perey.com/ARStandards/third-international-ar-standards-meeting/ Third International AR Standards Meeting, June 15-17, Taichung, Taiwan]<br />
* [http://web3d2011.org/ Augmented/Mixed Reality Workshop at Web3D Conference, 10:50AM-12:10PM Local Time, June 21, Paris, France]<br />
* [http://www.siggraph.org/s2011/for_attendees/birds-feather Augmented and Mixed Reality Web3D BOF at SIGGRAPH 2011, August 2011, Vancouver, Canada]<br />
* SC24 Augmented and Mixed Reality Study Group Meeting @ SC24 Plenary and Working Group Meetings, August 21 2011, Rapid City, South Dakota, USA<br />
<br />
<!-- [[Upcoming X3D events]] --><br />
<br />
= Charter =<br />
<br />
== Overview ==<br />
The Augmented Reality (AR) Working Group focuses on utilizing and extending X3D capabilities to support augmented reality (AR) and mixed reality (MR) applications.<br />
<br />
''Discussion.'' These X3D AR discussions were initially held as part of a special interest group. Now that we have determined sufficient interest exists to modify the X3D Specification, this effort is moving forward by forming the X3D AR Working Group.<br />
<br />
== Goals ==<br />
Planned goals of AR WG include:<br />
* Collect requirements and describe typical use cases for using X3D in AR/MR applications<br />
* Produce and propose X3D components for AR/MR scenes and applications<br />
* Produce sample AR/MR applications using X3D to demonstrate how this functionality can work correctly<br />
<br />
== Tasks ==<br />
* Investigate state-of-the-art technologies related to X3D and AR/MR to collect requirements and use cases<br />
** Archive and distribute collected requirements and use cases through AR WG wiki page<br />
* Hold regular meetings and workshops to motivate discussions and editing of AR/MR related X3D specification proposals<br />
** Regular meetings will be held through teleconferencing and workshops will be planned through regular meetings<br />
* Promote X3D in AR/MR field by developing and distributing promotional materials of AR applications based on X3D<br />
** Promotional materials include sample applications, video clips, documents, images distributed on the web<br />
<br />
== Deliverables and Timeline ==<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_AR_Requirements_and_Use_cases Use cases and requirements of X3D for AR/MR application] - August 2011<br />
* [http://web3d.org/x3d/wiki/index.php/Comparison_of_X3D_AR_Proposals Comparison of existing proposals] - March 2012<br />
* [http://web3d.org/x3d/wiki/index.php/Plans_for_Merging_X3D_AR_Proposals Merge X3D AR Proposals]<br />
* Proposed new/extended functions and nodes for X3D specification<br />
**[http://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review Public review: Feb 27 ~ Apr 10, 2013]<br />
* Define specification prose for new functionality and encodings<br />
* Sample AR/MR applications with X3D<br />
<br />
== Participants ==<br />
* Anita Havele<br />
* Don Brutzman<br />
* Gerard J. Kim<br />
* Gun Lee<br />
* Len Daly, Daly Realism<br />
* Myeongwon Lee<br />
* Oliver Neubauer<br />
* Sabine Webel<br />
* Timo Engelke<br />
* Yvonne Jung<br />
<br />
== Working Group Meeting Routine ==<br />
Regular meetings are held monthly through teleconference.<br />
Participation is open to everyone via the Web3D teleconference line. <br />
Non-members can [mailto:anita.havele@web3d.org?subject=AR%20teleconference%20request request access information] for this call, or [http://web3d.org/membership/join Join Web3D]!<br />
Meeting agenda and minutes are announced through the X3D WG mailing list.<br />
<br />
Meeting minutes are also distributed on the X3D mailing list and [http://www.web3d.org/membership/login/list_archives/ archived online].<br />
<br />
= Augmented Reality Roadmap for X3D =<br />
The [[Augmented Reality Roadmap for X3D]] is a description document charting shared strategies and our way forward; it is currently under discussion.<br />
<br />
<br />
= History and Background Information =<br />
The Web3D Consortium formed a special interest group on AR initiatives in July 2009, which worked to help create the AR Technology Road Map.<br />
<br />
Several Web3D Consortium member projects showcase the feasibility of AR in X3D, particularly the open-source [http://www.x3dom.org X3DOM] framework produced by [http://www.igd.fhg.de/www/igd-a4 Fraunhofer IGD].<br />
<br />
* X3DOM can serve as an out-of-the-box, standards-based solution for AR developers.<br />
* X3D and X3DOM continue to improve scripting options for X3D with HTML5, which holds good promise for AR applications.<br />
* X3DOM is being considered for potential standardization in a Mobile and/or Augmented Reality (AR) profile for X3D.<br />
<br />
Our [http://www.web3d.kr Web3D Korea Chapter members] from ETRI are working on Mixed Reality visualization in X3D.<br />
<br />
* The Consortium has been working closely with the W3C HTML5 WG to align our standards for 3D visualization on the Web.<br />
<br />
Additional details are available at: <br />
<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_and_HTML5 X3D and HTML5]<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_and_HTML_FAQ X3D and HTML FAQ]<br />
* [http://www.x3dom.org X3DOM]<br />
<br />
Web3D Consortium is also engaged with the [http://www.arstandards.org AR Standards Community], [http://opengeospacial.org OGC], [http://www.khronos.org Khronos] and [http://w3c.org W3C] organizations for applying and adapting X3D.<br />
<br />
* Web3D Consortium will be starting an AR working group in March 2011 to develop a roadmap for AR standardization. We encourage your participation. <br />
* The technology discussions and meetings will be conducted publicly so that the widest possible comment and reaction can be considered from the Community. Feedback from this community will help X3D quickly and stably adopt new technologies to provide ongoing archival value for all 3D graphics.<br />
* Meanwhile Web3D Members still retain important membership rights of proposing significant new technology and considering patented technologies within a "safe haven" prior to public release.<br />
<br />
The [http://www.web3d.org/x3d/wiki/images/3/32/X3dProfilePossibilitiesMobileHtml5AR.2010June29.pdf X3D Mobile Profile slideset] from last summer's Mobile X3D ISO Workshop has also been linked; it outlines how Mobile, HTML5 and possibly Augmented Reality (AR) components can be aligned together.<br />
<br />
Many new Web3D capabilities are becoming available. There has never been a better time to discuss X3D technologies and to join the [http://web3d.org/membersip/join Web3D Consortium]. Get involved in these early discussions to create the AR Technology Road Map. We look forward to your participation.<br />
<br />
[http://www.web3d.org/wiki/index.php/Summary_Of_Old_AR_Proposals Summary of Old AR Proposals]<br />
<br />
== Developing X3D AR Specification - Proposals ==<br />
The working group has reviewed the existing proposals and summarized them in [[Comparison of X3D AR Proposals]].<br />
Based on this comparison, the working group is now preparing [[Plans for Merging X3D AR Proposals]].<br />
The working group is looking for public feedback on the unified AR proposal that captures essential features for MR/AR visualization.<br />
The unified AR proposal for public review can be found [http://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review here].<br />
<br />
== X3D Earth Working Group ==<br />
The X3D Earth Working Group has expanded and refined a proposal by Dr. Myeong Won Lee for a new <br />
[http://www.web3d.org/membership/login/memberwiki/index.php/X3D_v3.3_Specification_Changes#GpsSensor_node GpsSensor node].<br />
Due to several overlapping technical issues, the group has asked to collaborate with the Augmented Reality group on the final design of this node.<br />
<br />
= Interoperability with other AR and Web Standards =<br />
Several discussions at the Web3D Conference, SIGGRAPH and ISO SC24 meetings about recent AR standards developments continue to improve and refine our strategy on interoperability with other standards.<br />
<br />
Establishing common implementations and examples is important to demonstrate successful interoperable capabilities for the spec. Continued collaboration and reaching out to other AR standards groups is essential. Our common goal remains maximum interoperability with all Web standards. <br />
<br />
*W3C Augmented Reality Community Group <br />
*OGC ARML WG <br />
*AR Standards Group <br />
*ISO-SC24<br />
*KHRONOS <br />
<br />
Given the recent work in the Web3D AR WG and the realization that the current state of AR content models is not comprehensive, the ISO standards committee SC24, which administers X3D review as an ISO standard, has established a new Working Group for Augmented and Mixed Reality.<br />
<br />
This group conducted a survey of the current state of the art in AR/MR standardization. Here is a summary of the main findings:<br />
<br />
*A need for clear and precise definitions of terms <br />
*A need for a reference architecture with the following features: <br />
**Separation of the content from the browser/player/application <br />
**Extensible and general enough to accommodate future technologies (e.g. display devices, tracking algorithms, sensors) <br />
**Defined at the right abstraction level to be platform/vendor independent <br />
**Clear interface definitions among the subsystems <br />
**A proposal to develop a protocol between the AR/MR engine and the object tracking/recognition subsystem, independent of the algorithms used <br />
*Reuse of existing standards as much as possible (see below) <br />
*A content model, based on the underlying reference architecture, that is comprehensive (e.g. scene/world model, interaction, rich augmentation methods and styling options, representation of extensive types of physical real-world objects) <br />
*A need for a rich and sophisticated scene/world model <br />
**An X3D-based approach seems promising for providing a sophisticated world model (scene graph structure) and many media objects for augmentation; a proposal to extend the X3D standards <br />
*A need for representation of sensors and physical objects <br />
**A proposal for a merged abstraction of physical objects and separate sensors as “objects with virtual sensors”, extending the virtual sensors of X3D <br />
*A need for sophisticated representation of “places of interest” (POIs) <br />
**A proposal to use and extend the OGC KML standards <br />
*A need for extensive styling of 2D/3D information <br />
**A proposal to use and extend HTML5 <br />
*A need to abstract AR/MR interaction behaviors <br />
**Complicated behaviors to be handled by scripts and a DOM-like approach <br />
**A proposal to extend X3DOM for this purpose <br />
*A need for other supporting functionality: <br />
**Inclusion and specification of real-world capture cameras/sensors <br />
**Moving texture/background functionality for video see-through AR <br />
**Handling of depth data and occlusion effects <br />
**Specification of virtual/real light sources and rendering methods <br />
Based on these findings, the group proposes to derive an AR content model as an extension of a virtual world, with provisions for representing physically sensed objects. The provisions refer to ways to specify the physical augmentation “targets” without specific sensor information, and ways to (intuitively) tie or associate them to their virtual counterparts. This will result in vendor independence, convenience of use and support for extensibility.<br />
<br />
The ISO AR standardization proposal recommends:<br />
*Merging HTML and X3D (X3DOM, i.e. Declarative 3D) for abstract content components for 2D and 3D augmentation. <br />
*OGC KML for describing POIs and sensed physical objects. <br />
*A scripting approach for non-standard complex content behaviors and the use of remote cloud services. <br />
Current technical work within the Web3D AR WG includes harmonizing these proposals to best fit AR capabilities into X3D scenes. This work will be formally written up as the Augmented Reality (AR) Component for the X3D Specification.<br />
<br />
= Participation and Liaisons =<br />
* Christine Perey's group on AR Standardization<br />
* Other partnerships can also be considered as appropriate.<br />
<br />
Of note, the Web3D Consortium Intellectual Property Rights (IPR) Policy insists on open, royalty-free (RF) specifications. These coexist effectively with the Web Architecture and many different business models.</div>Endoverthttps://www.web3d.org/wiki/index.php?title=X3D_and_Augmented_Reality&diff=6652X3D and Augmented Reality2013-02-27T01:13:31Z<p>Endovert: </p>
<hr />
<div>= Calendar: Meetings and Events =<br />
<br />
== Meetings ==<br />
Our monthly teleconference meeting for X3D and Augmented Reality is usually:<br />
* 17:00-18:00 Pacific time on 3rd Wednesday, which is 10:00-11:00 Thursday in Korea and 02:00-03:00 Thursday in Europe.<br />
<br />
The schedule is subject to change based on the time zones of most expected attendees. The meeting is held on the Web3D Consortium teleconference line.<br />
<br />
Our next teleconference meeting is:<br />
* Mar 20 (Wed) 2013 at 17:00 US Pacific / Mar 21 (Thu) 10:00 Korea<br />
<br />
== Events ==<br />
* [http://www.perey.com/ARStandards/eighth-ar-standards-community-meeting AR Standards Community Meeting, March 1-2, 2013, Barcelona, Spain]<br />
* Web 3D and ISO/IEC JTC1 SC24 WG9 meetings, Jan 28-31, 2013, Seoul, Korea<br />
* AR Standards Community Meeting - November 8-9, 2012 - Atlanta, US.<br />
* ISO/IEC JTC1 SC24 Plenary and WG6, WG9 meetings, August 20-24, 2012, Brussels, Belgium <br />
* [http://s2012.siggraph.org/attendees/birds-feather Augmented and Mixed Reality Web3D BOF at SIGGRAPH 2012, August 8, 2012, Los Angeles, CA, USA]<br />
* [http://web3d2012.org Web3D Conference, August 4-5, 2012, Los Angeles, CA, USA]<br />
* OGC TP/PC Meeting - AR Working Group - Sept 19-23, 2011, Boulder, CO<br />
* W3C TPAC Meeting - W3C AR Community Group - Oct 31 - Nov 4, 2011, Santa Clara, CA<br />
* AR Standards Community Meeting - Oct 24, 25, 2011 - Basel, Switzerland<br />
* ISO JTC Meeting - Nov 7 - 10, 2011 - San Diego, CA<br />
* [http://www.perey.com/ARStandards/third-international-ar-standards-meeting/ Third International AR Standards Meeting, June 15-17, Taichung, Taiwan]<br />
* [http://web3d2011.org/ Augmented/Mixed Reality Workshop at Web3D Conference, 10:50AM-12:10PM Local Time, June 21, Paris, France]<br />
* [http://www.siggraph.org/s2011/for_attendees/birds-feather Augmented and Mixed Reality Web3D BOF at SIGGRAPH 2011, August 2011, Vancouver, Canada]<br />
* SC24 Augmented and Mixed Reality Study Group Meeting @ SC24 Plenary and Working Group Meetings, August 21 2011, Rapid City, South Dakota, USA<br />
<br />
<!-- [[Upcoming X3D events]] --><br />
<br />
= Charter =<br />
<br />
== Overview ==<br />
The Augmented Reality (AR) Working Group focuses on utilizing and extending X3D capabilities to support augmented reality (AR) and mixed reality (MR) applications.<br />
<br />
''Discussion.'' These X3D AR discussions were initially held as part of a special interest group. Now that we have determined sufficient interest exists to modify the X3D Specification, this effort is moving forward by forming the X3D AR Working Group.<br />
<br />
== Goals ==<br />
Planned goals of AR WG include:<br />
* Collect requirements and describe typical use cases for using X3D in AR/MR applications<br />
* Produce and propose X3D components for AR/MR scenes and applications<br />
* Produce sample AR/MR applications using X3D to demonstrate how this functionality can work correctly<br />
<br />
== Tasks ==<br />
* Investigate state-of-the-art technologies related to X3D and AR/MR to collect requirements and use cases<br />
** Archive and distribute collected requirements and use cases through AR WG wiki page<br />
* Hold regular meetings and workshops to motivate discussions and editing of AR/MR related X3D specification proposals<br />
** Regular meetings will be held through teleconferencing and workshops will be planned through regular meetings<br />
* Promote X3D in AR/MR field by developing and distributing promotional materials of AR applications based on X3D<br />
** Promotional materials include sample applications, video clips, documents, images distributed on the web<br />
<br />
== Deliverables and Timeline ==<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_AR_Requirements_and_Use_cases Use cases and requirements of X3D for AR/MR application] - August 2011<br />
* [http://web3d.org/x3d/wiki/index.php/Comparison_of_X3D_AR_Proposals Comparison of existing proposals] - March 2012<br />
* [http://web3d.org/x3d/wiki/index.php/Plans_for_Merging_X3D_AR_Proposals Merge X3D AR Proposals]<br />
* Proposed new/extended functions and nodes for X3D specification<br />
**[http://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review Public review: Feb 27 ~ Apr 10, 2013]<br />
* Define specification prose for new functionality and encodings<br />
* Sample AR/MR applications with X3D<br />
<br />
== Participants ==<br />
* Anita Havele<br />
* Don Brutzman<br />
* Gerard J. Kim<br />
* Gun Lee<br />
* Len Daly, Daly Realism<br />
* Myeongwon Lee<br />
* Oliver Neubauer<br />
* Sabine Webel<br />
* Timo Engelke<br />
* Yvonne Jung<br />
<br />
== Working Group Meeting Routine ==<br />
Regular meetings are held monthly through teleconference.<br />
Participation is open to everyone via the Web3D teleconference line. <br />
Non-members can [mailto:anita.havele@web3d.org?subject=AR%20teleconference%20request request access information] for this call, or [http://web3d.org/membership/join Join Web3D]!<br />
Meeting agenda and minutes are announced through the X3D WG mailing list.<br />
<br />
Meeting minutes are also distributed on the X3D mailing list and [http://www.web3d.org/membership/login/list_archives/ archived online].<br />
<br />
= Augmented Reality Roadmap for X3D =<br />
The [[Augmented Reality Roadmap for X3D]] is a description document charting shared strategies and our way forward; it is currently under discussion.<br />
<br />
<br />
= History and Background Information =<br />
The Web3D Consortium formed a special interest group on AR initiatives in July 2009, which worked to help create the AR Technology Road Map.<br />
<br />
Several Web3D Consortium member projects showcase the feasibility of AR in X3D, particularly the open-source [http://www.x3dom.org X3DOM] framework produced by [http://www.igd.fhg.de/www/igd-a4 Fraunhofer IGD].<br />
<br />
* X3DOM can serve as an out-of-the-box, standards-based solution for AR developers.<br />
* X3D and X3DOM continue to improve scripting options for X3D with HTML5, which holds good promise for AR applications.<br />
* X3DOM is being considered for potential standardization in a Mobile and/or Augmented Reality (AR) profile for X3D.<br />
<br />
Our [http://www.web3d.kr Web3D Korea Chapter members] from ETRI are working on Mixed Reality visualization in X3D.<br />
<br />
* The Consortium has been working closely with the W3C HTML5 WG to align our standards for 3D visualization on the Web.<br />
<br />
Additional details are available at: <br />
<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_and_HTML5 X3D and HTML5]<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_and_HTML_FAQ X3D and HTML FAQ]<br />
* [http://www.x3dom.org X3DOM]<br />
<br />
Web3D Consortium is also engaged with the [http://www.arstandards.org AR Standards Community], [http://opengeospacial.org OGC], [http://www.khronos.org Khronos] and [http://w3c.org W3C] organizations for applying and adapting X3D.<br />
<br />
* Web3D Consortium will be starting an AR working group in March 2011 to develop a roadmap for AR standardization. We encourage your participation. <br />
* The technology discussions and meetings will be conducted publicly so that the widest possible comment and reaction can be considered from the Community. Feedback from this community will help X3D quickly and stably adopt new technologies to provide ongoing archival value for all 3D graphics.<br />
* Meanwhile Web3D Members still retain important membership rights of proposing significant new technology and considering patented technologies within a "safe haven" prior to public release.<br />
<br />
The [http://www.web3d.org/x3d/wiki/images/3/32/X3dProfilePossibilitiesMobileHtml5AR.2010June29.pdf X3D Mobile Profile slideset] from last summer's Mobile X3D ISO Workshop has also been linked; it outlines how Mobile, HTML5 and possibly Augmented Reality (AR) components can be aligned together.<br />
<br />
Many new Web3D capabilities are becoming available. There has never been a better time to discuss X3D technologies and to join the [http://web3d.org/membersip/join Web3D Consortium]. Get involved in these early discussions to create the AR Technology Road Map. We look forward to your participation.<br />
<br />
[http://www.web3d.org/wiki/index.php/Summary_Of_Old_AR_Proposals Summary of Old AR Proposals]<br />
<br />
== Developing X3D AR Specification - Proposals ==<br />
The working group has reviewed the existing proposals and summarized them in [[Comparison of X3D AR Proposals]].<br />
Based on this comparison, the working group is now preparing [[Plans for Merging X3D AR Proposals]].<br />
The working group is looking for public feedback on the unified AR proposal that captures essential features for MR/AR visualization.<br />
<br />
== X3D Earth Working Group ==<br />
The X3D Earth Working Group has expanded and refined a proposal by Dr. Myeong Won Lee for a new <br />
[http://www.web3d.org/membership/login/memberwiki/index.php/X3D_v3.3_Specification_Changes#GpsSensor_node GpsSensor node].<br />
Due to several overlapping technical issues, the group has asked to collaborate with the Augmented Reality group on final design for this node.<br />
<br />
= Interoperability with other AR and Web Standards =<br />
Discussions at the Web3D Conference, SIGGRAPH and ISO SC24 meetings about recent AR standards developments continue to improve and refine our strategy on interoperability with other standards. <br />
<br />
Establishing common implementations and examples is important to demonstrate successful interoperable capabilities for the spec. Continued collaboration and reaching out to other AR standards groups is essential. Our common goal remains maximum interoperability with all Web standards. <br />
<br />
*W3C Augmented Reality Community Group <br />
*OGC ARML WG <br />
*AR Standards Group <br />
*ISO-SC24<br />
*KHRONOS <br />
<br />
Following recent work in the Web3D AR WG, and the realization that the current state of AR content models is not comprehensive, the ISO standards committee SC24, which administers X3D review as an ISO standard, has established a new Working Group for Augmented and Mixed Reality. <br />
<br />
This group conducted a survey of the current state of the art in AR/MR standardization. Here is a summary of the main findings. <br />
<br />
*A need for clear and precise definitions of terms <br />
*A need for a reference architecture with the following features: <br />
**Separation of the content and the browser/player/application <br />
**Extensible and general enough to accommodate new future technologies (e.g. display devices, tracking algorithms, sensors) <br />
**Defined at the right abstraction level to be platform/vendor independent <br />
**Clear interface definitions among the subsystems <br />
*A proposal to develop a protocol between the AR/MR engine and the object tracking/recognition subsystem, independent of the algorithms used <br />
*Reuse of existing standards as much as possible (see below) <br />
*A content model based on the underlying reference architecture that is: <br />
**Comprehensive (e.g. scene/world model, interaction, rich augmentation methods and styling options, representation of extensive types of physical real-world objects) <br />
*A need for a rich and sophisticated scene/world model <br />
**An X3D-based approach seems promising for providing a sophisticated world model (scene graph structure) and many media objects for augmentation; a proposal to extend the X3D standards <br />
*A need for representation of sensors and physical objects <br />
**A proposal for a merged abstraction of physical objects and separate sensors as “objects with virtual sensors”, extending the virtual sensors of X3D <br />
*A need for sophisticated representation of “places of interest” (POI) <br />
**A proposal to use and extend the OGC/KML standards <br />
*A need for extensive styling of 2D/3D information <br />
**A proposal to use and extend HTML5 <br />
*A need to abstract AR/MR interaction behaviors <br />
**Complicated behaviors to be handled by scripts and a DOM-like approach <br />
**A proposal to extend X3DOM for this purpose <br />
*A need for other supporting functionalities: <br />
**Inclusion and specification of real-world capture cameras/sensors <br />
**Moving texture/background functionality for video see-through AR <br />
**Handling of depth data and occlusion effects <br />
**Specification of virtual/real light sources and rendering methods <br />
Based on these findings, the group proposes to derive an AR content model as an extension of a virtual world, with provisions for representing physically-sensed objects. The provisions refer to ways to specify the physical augmentation “targets” without specific sensor information and ways to (intuitively) tie or associate them to their virtual counterparts. This will result in vendor independence, convenience of use and support for extensibility. <br />
<br />
The ISO AR standardization proposal recommends: <br />
*Merging HTML and X3D (X3DOM / Declarative 3D) for abstract content components for 2D and 3D augmentation. <br />
*The OGC and KML standards for describing POIs and sensed physical objects. <br />
*A scripting approach for non-standard complex content behaviors and the use of remote cloud services. <br />
Current technical work within the Web3D AR WG includes harmonizing these proposals for best fitting AR capabilities into X3D scenes. This work will be formally written up as the Augmented Reality (AR) Component for the X3D Specification.<br />
<br />
= Participation and Liaisons =<br />
* Christine Perey's group on AR Standardization<br />
* Other partnerships can also be considered as appropriate.<br />
<br />
Of interest is that Web3D Consortium Intellectual Property Rights (IPR) Policy insists on open, royalty free (RF) specifications. These coexist effectively with the Web Architecture and many different business models.</div>Endoverthttps://www.web3d.org/wiki/index.php?title=X3D_and_Augmented_Reality&diff=6646X3D and Augmented Reality2013-02-27T01:06:16Z<p>Endovert: </p>
<hr />
<div>= Calendar: Meetings and Events =<br />
<br />
== Meetings ==<br />
Our monthly teleconference meeting for X3D and Augmented Reality is usually:<br />
* 17:00-18:00 Pacific time on 3rd Wednesday, which is 10:00-11:00 Thursday in Korea and 02:00-03:00 Thursday in Europe.<br />
<br />
The schedule is subject to change based on the time zone for most expected attendees. The meeting is held on the Web3D Consortium teleconference line.<br />
<br />
Our next teleconference meeting is:<br />
* Mar 20 Wed 2013 at 17:00 (US Pacific) / Mar 21 Thu 10:00 (Korea)<br />
<br />
== Events ==<br />
* [http://www.perey.com/ARStandards/eighth-ar-standards-community-meeting AR Standards Community Meeting, March 1-2, 2013, Barcelona, Spain]<br />
* Web 3D and ISO/IEC JTC1 SC24 WG9 meetings, Jan 28-31, 2013, Seoul, Korea<br />
* ISO/IEC JTC1 SC24 Plenary and WG6, WG9 meetings, August 20-24, 2012, Brussels, Belgium <br />
* AR Standards Community Meeting - Nov 8-9, 2012 - Atlanta, US.<br />
* [http://s2012.siggraph.org/attendees/birds-feather Augmented and Mixed Reality Web3D BOF at SIGGRAPH 2012, August 8, 2012, Los Angeles, CA, USA]<br />
* [http://web3d2012.org Web3D Conference, August 4-5, 2012, Los Angeles, CA, USA]<br />
* OGC TP/PC Meeting - AR Working Group - Sept 19-23, 2011, Boulder, CO<br />
* W3C TPAC Meeting, W3C AR Community Group - Oct 31 - Nov 4, 2011, Santa Clara, CA<br />
* AR Standards Community Meeting - Oct 24, 25, 2011 - Basel, Switzerland<br />
* ISO JTC Meeting - Nov 7 - 10, 2011 - San Diego, CA<br />
* [http://www.perey.com/ARStandards/third-international-ar-standards-meeting/ Third International AR Standards Meeting, June 15-17, Taichung, Taiwan]<br />
* [http://web3d2011.org/ Augmented/Mixed Reality Workshop at Web3D Conference, 10:50AM-12:10PM Local Time, June 21, Paris, France]<br />
* [http://www.siggraph.org/s2011/for_attendees/birds-feather Augmented and Mixed Reality Web3D BOF at SIGGRAPH 2011, August 2011, Vancouver, Canada]<br />
* SC24 Augmented and Mixed Reality Study Group Meeting @ SC24 Plenary and Working Group Meetings, August 21 2011, Rapid City, South Dakota, USA<br />
<br />
[[Upcoming X3D events]]<br />
<br />
= Charter =<br />
<br />
== Overview ==<br />
The Augmented Reality (AR) Working Group focuses on utilizing and extending X3D capabilities to support augmented reality (AR) and mixed reality (MR) applications.<br />
<br />
''Discussion.'' These X3D AR discussions were initially held as part of a special interest group. Now that we have determined that sufficient interest exists to modify the X3D Specification, this effort is moving forward by forming the X3D AR Working Group.<br />
<br />
== Goals ==<br />
Planned goals of AR WG include:<br />
* Collect requirements and describe typical use cases for using X3D in AR/MR applications<br />
* Produce and propose X3D components for AR/MR scenes and applications<br />
* Produce sample AR/MR applications using X3D to demonstrate how this functionality can work correctly<br />
<br />
== Tasks ==<br />
* Investigate state-of-the-art technologies related to X3D and AR/MR to collect requirements and use cases<br />
** Archive and distribute collected requirements and use cases through AR WG wiki page<br />
* Hold regular meetings and workshops to motivate discussions and editing of AR/MR related X3D specification proposals<br />
** Regular meetings will be held through teleconferencing and workshops will be planned through regular meetings<br />
* Promote X3D in AR/MR field by developing and distributing promotional materials of AR applications based on X3D<br />
** Promotional materials include sample applications, video clips, documents, images distributed on the web<br />
<br />
== Deliverables and Timeline ==<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_AR_Requirements_and_Use_cases Use cases and requirements of X3D for AR/MR application] - August 2011<br />
* [http://web3d.org/x3d/wiki/index.php/Comparison_of_X3D_AR_Proposals Comparison of existing proposals] - March 2012<br />
* [http://web3d.org/x3d/wiki/index.php/Plans_for_Merging_X3D_AR_Proposals Merge X3D AR Proposals]<br />
* Proposed new/extended functions and nodes for X3D specification<br />
** Public review: Feb 27 ~ Apr 10, 2013 (TBC)<br />
* Define specification prose for new functionality and encodings<br />
* Sample AR/MR applications with X3D<br />
<br />
== Participants ==<br />
* Anita Havele<br />
* Damon Hernandez<br />
* Don Brutzman<br />
* Gerard J. Kim<br />
* Gun Lee<br />
* Len Daly, Daly Realism<br />
* Myeongwon Lee<br />
* Oliver Neubauer<br />
* Sabine Webel<br />
* Timo Engelke<br />
* Yvonne Jung<br />
<br />
== Meetings ==<br />
Regular meetings are held monthly through teleconference.<br />
Participation is open to everyone via the Web3D teleconference line. <br />
Non-members can [mailto:anita.havele@web3d.org?subject=AR%20teleconference%20request request access information] for this call, or [http://web3d.org/membership/join Join Web3D]!<br />
Meeting agenda and minutes are announced through the X3D WG mailing list.<br />
<br />
Meeting minutes are also distributed on the X3D mailing list and [http://www.web3d.org/membership/login/list_archives/ archived online].<br />
<br />
<br />
= Augmented Reality Roadmap for X3D =<br />
The [[Augmented Reality Roadmap for X3D]] is a description document charting shared strategies and our way forward. Currently under discussion.<br />
<br />
<br />
= History and Background Information =<br />
Web3D Consortium formed a special interest group on AR initiatives in July 2009, which worked to help create the AR Technology Road Map.<br />
<br />
Several Web3D Consortium member projects showcase the feasibility of AR in X3D, particularly the open-source [http://www.x3dom.org X3DOM] framework produced by [http://www.igd.fhg.de/www/igd-a4 Fraunhofer IGD]. <br />
<br />
* X3DOM can serve as an out-of-the-box, standards-based solution for AR developers.<br />
* X3D and X3DOM continue to improve scripting options for X3D with HTML5, which holds good promise for AR applications. <br />
* X3DOM is being considered for potential standardization in a Mobile and/or Augmented Reality (AR) profile for X3D.<br />
<br />
Our [http://www.web3d.kr Web3D Korea Chapter members] from ETRI are working on Mixed Reality visualization in X3D.<br />
<br />
* The Consortium has been working closely with the W3C HTML5 WG to align our standards for 3D visualization on the Web. <br />
<br />
Additional details are available at: <br />
<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_and_HTML5 X3D and HTML5]<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_and_HTML_FAQ X3D and HTML FAQ]<br />
* [http://www.x3dom.org X3DOM]<br />
<br />
Web3D Consortium is also engaged with the [http://www.arstandards.org AR Standards], [http://opengeospacial.org OGC], [http://www.khronos.org Khronos] and [http://w3c.org W3C] organizations on applying and adapting X3D.<br />
<br />
* Web3D Consortium will be starting an AR working group in March 2011 to develop a roadmap for AR standardization. We encourage your participation. <br />
* The technology discussions and meetings will be conducted publicly so that the widest possible comment and reaction can be considered from the Community. Feedback from this community will help X3D quickly and stably adopt new technologies to provide ongoing archival value for all 3D graphics.<br />
* Meanwhile Web3D Members still retain important membership rights of proposing significant new technology and considering patented technologies within a "safe haven" prior to public release.<br />
<br />
The [http://www.web3d.org/x3d/wiki/images/3/32/X3dProfilePossibilitiesMobileHtml5AR.2010June29.pdf X3D Mobile Profile slideset] from last summer's Mobile X3D ISO Workshop, also linked here, shows how Mobile, HTML5 and possibly Augmented Reality (AR) components can be aligned together.<br />
<br />
Many new Web3D capabilities are becoming available. There has never been a better time to discuss X3D technologies and to join the [http://web3d.org/membersip/join Web3D Consortium]. Get involved in these early discussions to create the AR Technology Road Map. We look forward to your participation.<br />
<br />
[http://www.web3d.org/wiki/index.php/Summary_Of_Old_AR_Proposals Summary of Old AR Proposals]<br />
<br />
== Developing X3D AR Specification - Proposals ==<br />
The working group has reviewed the existing proposals and summarized them in [[Comparison of X3D AR Proposals]].<br />
Based on this comparison, the working group is now preparing [[Plans for Merging X3D AR Proposals]].<br />
The working group is seeking public feedback on the unified AR proposal, which captures the essential features for MR/AR visualization.<br />
<br />
== X3D Earth Working Group ==<br />
The X3D Earth Working Group has expanded and refined a proposal by Dr. Myeong Won Lee for a new <br />
[http://www.web3d.org/membership/login/memberwiki/index.php/X3D_v3.3_Specification_Changes#GpsSensor_node GpsSensor node].<br />
Due to several overlapping technical issues, the group has asked to collaborate with the Augmented Reality group on final design for this node.<br />
<br />
= Interoperability with other AR and Web Standards =<br />
Discussions at the Web3D Conference, SIGGRAPH and ISO SC24 meetings about recent AR standards developments continue to improve and refine our strategy on interoperability with other standards. <br />
<br />
Establishing common implementations and examples is important to demonstrate successful interoperable capabilities for the spec. Continued collaboration and reaching out to other AR standards groups is essential. Our common goal remains maximum interoperability with all Web standards. <br />
<br />
*W3C Augmented Reality Community Group <br />
*OGC ARML WG <br />
*AR Standards Group <br />
*ISO-SC24<br />
*KHRONOS <br />
<br />
Following recent work in the Web3D AR WG, and the realization that the current state of AR content models is not comprehensive, the ISO standards committee SC24, which administers X3D review as an ISO standard, has established a new Working Group for Augmented and Mixed Reality. <br />
<br />
This group conducted a survey of the current state of the art in AR/MR standardization. Here is a summary of the main findings. <br />
<br />
*A need for clear and precise definitions of terms <br />
*A need for a reference architecture with the following features: <br />
**Separation of the content and the browser/player/application <br />
**Extensible and general enough to accommodate new future technologies (e.g. display devices, tracking algorithms, sensors) <br />
**Defined at the right abstraction level to be platform/vendor independent <br />
**Clear interface definitions among the subsystems <br />
*A proposal to develop a protocol between the AR/MR engine and the object tracking/recognition subsystem, independent of the algorithms used <br />
*Reuse of existing standards as much as possible (see below) <br />
*A content model based on the underlying reference architecture that is: <br />
**Comprehensive (e.g. scene/world model, interaction, rich augmentation methods and styling options, representation of extensive types of physical real-world objects) <br />
*A need for a rich and sophisticated scene/world model <br />
**An X3D-based approach seems promising for providing a sophisticated world model (scene graph structure) and many media objects for augmentation; a proposal to extend the X3D standards <br />
*A need for representation of sensors and physical objects <br />
**A proposal for a merged abstraction of physical objects and separate sensors as “objects with virtual sensors”, extending the virtual sensors of X3D <br />
*A need for sophisticated representation of “places of interest” (POI) <br />
**A proposal to use and extend the OGC/KML standards <br />
*A need for extensive styling of 2D/3D information <br />
**A proposal to use and extend HTML5 <br />
*A need to abstract AR/MR interaction behaviors <br />
**Complicated behaviors to be handled by scripts and a DOM-like approach <br />
**A proposal to extend X3DOM for this purpose <br />
*A need for other supporting functionalities: <br />
**Inclusion and specification of real-world capture cameras/sensors <br />
**Moving texture/background functionality for video see-through AR <br />
**Handling of depth data and occlusion effects <br />
**Specification of virtual/real light sources and rendering methods <br />
Based on these findings, the group proposes to derive an AR content model as an extension of a virtual world, with provisions for representing physically-sensed objects. The provisions refer to ways to specify the physical augmentation “targets” without specific sensor information and ways to (intuitively) tie or associate them to their virtual counterparts. This will result in vendor independence, convenience of use and support for extensibility. <br />
<br />
The ISO AR standardization proposal recommends: <br />
*Merging HTML and X3D (X3DOM / Declarative 3D) for abstract content components for 2D and 3D augmentation. <br />
*The OGC and KML standards for describing POIs and sensed physical objects. <br />
*A scripting approach for non-standard complex content behaviors and the use of remote cloud services. <br />
Current technical work within the Web3D AR WG includes harmonizing these proposals for best fitting AR capabilities into X3D scenes. This work will be formally written up as the Augmented Reality (AR) Component for the X3D Specification.<br />
<br />
= Participation and Liaisons =<br />
* Christine Perey's group on AR Standardization<br />
* Other partnerships can also be considered as appropriate.<br />
<br />
Of interest is that Web3D Consortium Intellectual Property Rights (IPR) Policy insists on open, royalty free (RF) specifications. These coexist effectively with the Web Architecture and many different business models.</div>Endoverthttps://www.web3d.org/wiki/index.php?title=X3D_and_Augmented_Reality&diff=6645X3D and Augmented Reality2013-02-27T01:04:37Z<p>Endovert: </p>
<hr />
<div>= Calendar: Meetings and Events =<br />
<br />
== Meetings ==<br />
Our monthly teleconference meeting for X3D and Augmented Reality is usually:<br />
* 17:00-18:00 Pacific time on 3rd Wednesday, which is 10:00-11:00 Thursday in Korea and 02:00-03:00 Thursday in Europe.<br />
<br />
The schedule is subject to change based on the time zone for most expected attendees. The meeting is held on the Web3D Consortium teleconference line.<br />
<br />
Our next teleconference meeting is:<br />
* Mar 20 Wed 2013 at 17:00 (US Pacific) / Mar 21 Thu 10:00 (Korea)<br />
<br />
== Events ==<br />
* [http://www.perey.com/ARStandards/eighth-ar-standards-community-meeting AR Standards Community Meeting, March 1-2, 2013, Barcelona, Spain]<br />
* Web 3D and ISO/IEC JTC1 SC24 WG9 meetings, Jan 28-31, 2013, Seoul, Korea<br />
* ISO/IEC JTC1 SC24 Plenary and WG6, WG9 meetings, August 20-24, 2012, Brussels, Belgium <br />
* AR Standards Community Meeting - Nov 8-9, 2012 - Atlanta, US.<br />
* [http://s2012.siggraph.org/attendees/birds-feather Augmented and Mixed Reality Web3D BOF at SIGGRAPH 2012, August 8, 2012, Los Angeles, CA, USA]<br />
* [http://web3d2012.org Web3D Conference, August 4-5, 2012, Los Angeles, CA, USA]<br />
* OGC TP/PC Meeting - AR Working Group - Sept 19-23, 2011, Boulder, CO<br />
* W3C TPAC Meeting, W3C AR Community Group - Oct 31 - Nov 4, 2011, Santa Clara, CA<br />
* AR Standards Community Meeting - Oct 24, 25, 2011 - Basel, Switzerland<br />
* ISO JTC Meeting - Nov 7 - 10, 2011 - San Diego, CA<br />
* [http://www.perey.com/ARStandards/third-international-ar-standards-meeting/ Third International AR Standards Meeting, June 15-17, Taichung, Taiwan]<br />
* [http://web3d2011.org/ Augmented/Mixed Reality Workshop at Web3D Conference, 10:50AM-12:10PM Local Time, June 21, Paris, France]<br />
* [http://www.siggraph.org/s2011/for_attendees/birds-feather Augmented and Mixed Reality Web3D BOF at SIGGRAPH 2011, August 2011, Vancouver, Canada]<br />
* SC24 Augmented and Mixed Reality Study Group Meeting @ SC24 Plenary and Working Group Meetings, August 21 2011, Rapid City, South Dakota, USA<br />
<br />
[[Upcoming X3D events]]<br />
<br />
= Charter =<br />
<br />
== Overview ==<br />
The Augmented Reality (AR) Working Group focuses on utilizing and extending X3D capabilities to support augmented reality (AR) and mixed reality (MR) applications.<br />
<br />
''Discussion.'' These X3D AR discussions were initially held as part of a special interest group. Now that we have determined that sufficient interest exists to modify the X3D Specification, this effort is moving forward by forming the X3D AR Working Group.<br />
<br />
== Goals ==<br />
Planned goals of AR WG include:<br />
* Collect requirements and describe typical use cases for using X3D in AR/MR applications<br />
* Produce and propose X3D components for AR/MR scenes and applications<br />
* Produce sample AR/MR applications using X3D to demonstrate how this functionality can work correctly<br />
<br />
== Tasks ==<br />
* Investigate state-of-the-art technologies related to X3D and AR/MR to collect requirements and use cases<br />
** Archive and distribute collected requirements and use cases through AR WG wiki page<br />
* Hold regular meetings and workshops to motivate discussions and editing of AR/MR related X3D specification proposals<br />
** Regular meetings will be held through teleconferencing and workshops will be planned through regular meetings<br />
* Promote X3D in AR/MR field by developing and distributing promotional materials of AR applications based on X3D<br />
** Promotional materials include sample applications, video clips, documents, images distributed on the web<br />
<br />
== Deliverables and Timeline ==<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_AR_Requirements_and_Use_cases Use cases and requirements of X3D for AR/MR application] - August 2011<br />
* [http://web3d.org/x3d/wiki/index.php/Comparison_of_X3D_AR_Proposals Comparison of existing proposals] - March 2012<br />
* [http://web3d.org/x3d/wiki/index.php/Plans_for_Merging_X3D_AR_Proposals Merge X3D AR Proposals]<br />
* Proposed new/extended functions and nodes for X3D specification<br />
** Public review: Feb 27 ~ Apr 10, 2013 (TBC)<br />
* Define specification prose for new functionality and encodings<br />
* Sample AR/MR applications with X3D<br />
<br />
== Participants ==<br />
* Anita Havele<br />
* Damon Hernandez<br />
* Don Brutzman<br />
* Gerard J. Kim<br />
* Gun Lee<br />
* Len Daly, Daly Realism<br />
* Myeongwon Lee<br />
* Oliver Neubauer<br />
* Sabine Webel<br />
* Timo Engelke<br />
* Yvonne Jung<br />
<br />
== Meetings ==<br />
Regular meetings are held monthly through teleconference.<br />
Participation is open to everyone via the Web3D teleconference line. <br />
Non-members can [mailto:anita.havele@web3d.org?subject=AR%20teleconference%20request request access information] for this call, or [http://web3d.org/membership/join Join Web3D]!<br />
Meeting agenda and minutes are announced through the X3D WG mailing list.<br />
<br />
Meeting minutes are also distributed on the X3D mailing list and [http://www.web3d.org/membership/login/list_archives/ archived online].<br />
<br />
<br />
= Augmented Reality Roadmap for X3D =<br />
The [[Augmented Reality Roadmap for X3D]] is a description document charting shared strategies and our way forward. Currently under discussion.<br />
<br />
<br />
= History and Background Information =<br />
Web3D Consortium formed a special interest group on AR initiatives in July 2009, which worked to help create the AR Technology Road Map.<br />
<br />
Several Web3D Consortium member projects showcase the feasibility of AR in X3D, particularly the open-source [http://www.x3dom.org X3DOM] framework produced by [http://www.igd.fhg.de/www/igd-a4 Fraunhofer IGD]. <br />
<br />
* X3DOM can serve as an out-of-the-box, standards-based solution for AR developers.<br />
* X3D and X3DOM continue to improve scripting options for X3D with HTML5, which holds good promise for AR applications. <br />
* X3DOM is being considered for potential standardization in a Mobile and/or Augmented Reality (AR) profile for X3D.<br />
<br />
Our [http://www.web3d.kr Web3D Korea Chapter members] from ETRI are working on Mixed Reality visualization in X3D.<br />
<br />
* The Consortium has been working closely with the W3C HTML5 WG to align our standards for 3D visualization on the Web. <br />
<br />
Additional details are available at: <br />
<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_and_HTML5 X3D and HTML5]<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_and_HTML_FAQ X3D and HTML FAQ]<br />
* [http://www.x3dom.org X3DOM]<br />
<br />
Web3D Consortium is also engaged with the [http://www.arstandards.org AR Standards], [http://opengeospacial.org OGC], [http://www.khronos.org Khronos] and [http://w3c.org W3C] organizations on applying and adapting X3D.<br />
<br />
* Web3D Consortium will be starting an AR working group in March 2011 to develop a roadmap for AR standardization. We encourage your participation. <br />
* The technology discussions and meetings will be conducted publicly so that the widest possible comment and reaction can be considered from the Community. Feedback from this community will help X3D quickly and stably adopt new technologies to provide ongoing archival value for all 3D graphics.<br />
* Meanwhile Web3D Members still retain important membership rights of proposing significant new technology and considering patented technologies within a "safe haven" prior to public release.<br />
<br />
The [http://www.web3d.org/x3d/wiki/images/3/32/X3dProfilePossibilitiesMobileHtml5AR.2010June29.pdf X3D Mobile Profile slideset] from last summer's Mobile X3D ISO Workshop, also linked here, shows how Mobile, HTML5 and possibly Augmented Reality (AR) components can be aligned together.<br />
<br />
Many new Web3D capabilities are becoming available. There has never been a better time to discuss X3D technologies and to join the [http://web3d.org/membersip/join Web3D Consortium]. Get involved in these early discussions to create the AR Technology Road Map. We look forward to your participation.<br />
<br />
[http://www.web3d.org/wiki/index.php/Summary_Of_Old_AR_Proposals Summary of Old AR Proposals]<br />
<br />
== Developing X3D AR Specification ==<br />
The working group has reviewed the existing proposals and summarized them in [[Comparison of X3D AR Proposals]].<br />
Based on this comparison, the working group is now preparing [[Plans for Merging X3D AR Proposals]].<br />
The working group is seeking public feedback on the unified AR proposal, which captures the essential features for MR/AR visualization.<br />
<br />
== X3D Earth Working Group ==<br />
The X3D Earth Working Group has expanded and refined a proposal by Dr. Myeong Won Lee for a new <br />
[http://www.web3d.org/membership/login/memberwiki/index.php/X3D_v3.3_Specification_Changes#GpsSensor_node GpsSensor node].<br />
Due to several overlapping technical issues, the group has asked to collaborate with the Augmented Reality group on final design for this node.<br />
<br />
= Interoperability with other AR and Web Standards =<br />
Discussions at the Web3D Conference, SIGGRAPH and ISO SC24 meetings about recent AR standards developments continue to improve and refine our strategy on interoperability with other standards. <br />
<br />
Establishing common implementations and examples is important to demonstrate successful interoperable capabilities for the spec. Continued collaboration and reaching out to other AR standards groups is essential. Our common goal remains maximum interoperability with all Web standards. <br />
<br />
*W3C Augmented Reality Community Group <br />
*OGC ARML WG <br />
*AR Standards Group <br />
*ISO-SC24<br />
*KHRONOS <br />
<br />
Following recent work in the Web3D AR WG, and the realization that the current state of AR content models is not comprehensive, the ISO standards committee SC24, which administers X3D review as an ISO standard, has established a new Working Group for Augmented and Mixed Reality. <br />
<br />
This group conducted a survey of the current state of the art in AR/MR standardization. Here is a summary of the main findings. <br />
<br />
*A need for clear and precise definitions of terms <br />
*A need for a reference architecture with the following features: <br />
**Separation of the content and the browser/player/application <br />
**Extensible and general enough to accommodate new future technologies (e.g. display devices, tracking algorithms, sensors) <br />
**Defined at the right abstraction level to be platform/vendor independent <br />
**Clear interface definitions among the subsystems <br />
*A proposal to develop a protocol between the AR/MR engine and the object tracking/recognition subsystem, independent of the algorithms used <br />
*Reuse of existing standards as much as possible (see below) <br />
*A content model based on the underlying reference architecture that is: <br />
**Comprehensive (e.g. scene/world model, interaction, rich augmentation methods and styling options, representation of extensive types of physical real-world objects) <br />
*A need for a rich and sophisticated scene/world model <br />
**An X3D-based approach seems promising for providing a sophisticated world model (scene graph structure) and many media objects for augmentation; a proposal to extend the X3D standards <br />
*A need for representation of sensors and physical objects <br />
**A proposal for a merged abstraction of physical objects and separate sensors as “objects with virtual sensors”, extending the virtual sensors of X3D <br />
*A need for sophisticated representation of “places of interest” (POI) <br />
**A proposal to use and extend the OGC/KML standards <br />
*A need for extensive styling of 2D/3D information <br />
**A proposal to use and extend HTML5 <br />
*A need to abstract AR/MR interaction behaviors <br />
**Complicated behaviors to be handled by scripts and a DOM-like approach <br />
**A proposal to extend X3DOM for this purpose <br />
*A need for other supporting functionalities: <br />
**Inclusion and specification of real-world capture cameras/sensors <br />
**Moving texture/background functionality for video see-through AR <br />
**Handling of depth data and occlusion effects <br />
**Specification of virtual/real light sources and rendering methods <br />
Based on these findings, the group proposes to derive an AR content model as an extension of a virtual world, with provisions for representing physically-sensed objects. The provisions refer to ways to specify the physical augmentation “targets” without specific sensor information and ways to (intuitively) tie or associate them to their virtual counterparts. This will result in vendor independence, convenience of use and support for extensibility. <br />
<br />
The ISO AR standardization proposal recommends: <br />
*Merging HTML and X3D (X3DOM / Declarative 3D) for abstract content components for 2D and 3D augmentation. <br />
*The OGC and KML standards for describing POIs and sensed physical objects. <br />
*A scripting approach for non-standard complex content behaviors and the use of remote cloud services. <br />
Current technical work within the Web3D AR WG includes harmonizing these proposals for best fitting AR capabilities into X3D scenes. This work will be formally written up as the Augmented Reality (AR) Component for the X3D Specification.<br />
<br />
= Participation and Liaisons =<br />
* Christine Perey's group on AR Standardization<br />
* Other partnerships can also be considered as appropriate.<br />
<br />
Of interest is that Web3D Consortium Intellectual Property Rights (IPR) Policy insists on open, royalty free (RF) specifications. These coexist effectively with the Web Architecture and many different business models.</div>Endoverthttps://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review&diff=6643AR Proposal Public Review2013-02-26T22:37:57Z<p>Endovert: </p>
<hr />
<div>-Working draft <br />
<br />
By [[X3D and Augmented Reality|Augmented Reality Working Group]], Web3D Consortium<br />
<br />
Feb 25, 2013<br />
<br />
The Augmented Reality Continuum Working Group (ARC WG) has been developing a proposal for extending X3D to support augmented and mixed reality visualization. As the proposal nears completion, the working group has decided to open it to the public and collect feedback from others, including our Web3D members, other working groups, and anyone who is interested in AR and Web3D technology. The ARC WG welcomes all feedback that helps consolidate the proposal and advance to the next stage of extending the X3D specification to support AR and MR visualization.<br />
<br />
* Reviewing period: Feb 27 ~ Apr 10, 2013<br />
* How to give feedback:<br />
** Use the "discussion" tab on the top of this page to give feedback and start discussions.<br />
** If you prefer e-mails, please mail your feedback to Gun Lee (ARC WG co-chair, endovert[at]postech.ac.kr)<br />
<br />
<br />
= Extending X3D for MR Visualization - Unified Proposal =<br />
<br />
== 1. Introduction ==<br />
<br />
This document provides an overview of the unified proposal for extending the X3D standard to support Mixed Reality (MR) visualization. Mixed Reality includes both Augmented Reality (AR) and Augmented Virtuality (AV). The extension of the X3D standard proposed in this document is based on a comparison of three proposals: two from the Web3D Korea Chapter (KC1 and KC2) and one from InstantReality (IR), Fraunhofer IGD. The details of the comparison can be found on the following public wiki page: http://web3d.org/x3d/wiki/index.php/Comparison_of_X3D_AR_Proposals.<br />
<br />
In this document we focus on the three main components that are necessary to achieve basic MR visualization: sensors, video stream rendering, and camera calibration. We try to minimize the changes to the current specification, but also to make the solution generic enough that it can be applied to various future applications beyond MR visualization.<br />
<br />
In order to focus on consolidating the fundamental features, we leave out the following items/functions from the original proposals as future work.<br />
* High-level events for tracking from proposal KC2<br />
* Supporting color keying in texture from proposal KC1<br />
* Supporting correct occlusion between virtual and physical objects (Ghost object from Proposal KC1 and Color Mask + sortKey from IR)<br />
* Supporting generic types of sensors, including those not directly related to AR/MR visualization (direct sensor nodes in IR)<br />
<br />
<br />
== 2. Sensors ==<br />
To achieve MR visualization, sensing the real environment is crucial. Two types of sensors are necessary to support MR visualization: those that acquire video stream images from a real camera, and those that acquire motion tracking information about physical objects. While sensors could be generalized to acquire any type of information from the real world, in this proposal we focus on these two sensor types that are crucial for MR visualization.<br />
<br />
In this proposal, two new nodes, CalibratedCameraSensor and TrackingSensor, are proposed as interfaces to the sensors that are essential for MR visualization.<br />
<br />
Since hardware and software setups (including the X3D browser) vary between end users, it is not appropriate to prescribe specific devices or tracking technology within the scene. In fact, the author of the X3D scene can have no knowledge of what kind of hardware or software setup is available on the user’s side. Therefore, the X3D scene should only include a high-level description of the purpose of the sensor, giving the browser and the user a hint for choosing appropriate hardware or software in the user’s setup that meets the intended use. The “description” field describes this intended use of the sensor. At run time, the browser shows the value of the "description" field to the user through its user interface (e.g. a dialog box), asking the user to choose an appropriate sensor from the list available in the local hardware/software setup. The user chooses the appropriate hardware for the sensor node, and in this way users can view the X3D scene with the best hardware/software sensors available in their environment. <br />
<br />
In addition, the browser can be configured to use specific sensor hardware for sensor nodes with certain predefined values in the “description” field. Each type of sensor node can have a different set of predefined values for the description field, and these values are used by the browser to automatically map the default sensors preconfigured by the user. Another way to determine the default sensors is to keep a history of the sensors chosen by the user: the browser can record the mapping between sensors and sensor nodes chosen by the user, and when the same X3D scene is loaded later, reuse the mapping saved from the previous instance.<br />
<br />
Asking the user interactively at run time not only maps appropriate sensors but also provides a way to validate the use of sensors on the user's device and avoid privacy issues. Therefore, the browser must always ask the user for confirmation, even if it is able to map the sensors automatically.<br />
<br />
=== 2.1 CalibratedCameraSensor node ===<br />
The CalibratedCameraSensor node provides an interface to a camera device. The main information provided to the X3D scene through this node is the image stream captured with the camera device, exposed through the ‘image’ field. In addition to the image stream, the node also provides the internal parameters of the camera, used to calibrate the Viewpoint parameters and achieve correct composition of the MR scene. Four fields (focalPoint, fieldOfView, fovMode, and aspectRatio) provide these parameters, which correspond to the fields of the same names used in the Viewpoint node. Detailed descriptions of each field are given in section 4, where the Viewpoint node is described.<br />
<br />
<pre><br />
CalibratedCameraSensor : X3DSensorNode {<br />
SFBool [in,out] enabled TRUE<br />
SFNode [in,out] metadata NULL [X3DMetadataObject]<br />
SFBool [out] isActive<br />
<br />
SFString [in,out] description ""<br />
SFImage [out] image <br />
SFVec2f [out] focalPoint<br />
SFFloat [out] fieldOfView<br />
SFString [out] fovMode<br />
SFFloat [out] aspectRatio<br />
}<br />
</pre><br />
<br />
The browser should ask the user to choose which camera to use for each CalibratedCameraSensor node through its user interface (e.g. a dialog box). The browser shows the value of the "description" field to the user, providing a hint on what type of camera is expected. Predefined values can be used in the description field to let the browser automatically map the default sensors preconfigured by the user. Table 1 shows the predefined values for the description field of CalibratedCameraSensor nodes.<br />
<br />
{| border='1' <br />
|+ Table 1. Predefined values for the description field of CalibratedCameraSensor nodes<br />
! Predefined Value !! Description<br />
|-<br />
| USER_FACING<br />
| Camera that is facing towards the user.<br />
|-<br />
| WORLD_FACING<br />
| Camera that is facing towards the user’s view direction.<br />
|}<br />
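<br />
As a brief illustration (this sketch is not part of the proposal itself, and the DEF name is ours), a scene author could request a camera facing the user's view direction with a predefined description value, which the browser may then map to a preconfigured default camera after user confirmation:<br />
<br />
<pre><br />
<CalibratedCameraSensor DEF='worldCamera' description='WORLD_FACING' enabled='true' /><br />
</pre><br />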
<br />
<br />
=== 2.2 TrackingSensor node ===<br />
The TrackingSensor node provides an interface for motion tracking information. The main information provided by this node is the position and orientation of the tracked physical object, provided through the ‘position’ and ‘rotation’ fields respectively. The ‘isPositionAvailable’ and ‘isRotationAvailable’ fields are TRUE if the tracking target is successfully tracked and the values of the ‘position’ and ‘rotation’ fields are valid.<br />
<br />
<pre><br />
TrackingSensor : X3DSensorNode {<br />
SFBool [in,out] enabled TRUE<br />
SFNode [in,out] metadata NULL [X3DMetadataObject]<br />
SFBool [out] isActive<br />
<br />
SFString [in,out] description ""<br />
SFVec3f [out] position<br />
SFRotation [out] rotation<br />
SFBool [out] isPositionAvailable FALSE<br />
SFBool [out] isRotationAvailable FALSE<br />
}<br />
</pre><br />
<br />
The “description” string field defines the intended use of the tracking sensor, which is shown to the user to help choose the tracking hardware to use. The value should indicate what kind of object the tracking sensor is intended to track and what reference frame it uses for its coordinate system. Table 2 shows predefined values for the description field that help the browser automatically map default sensors preconfigured by the user.<br />
<br />
{| border='1' <br />
|+ Table 2. Predefined values for the description field of TrackingSensor nodes<br />
! Predefined Value !! Description<br />
|-<br />
| VIEWPOINT_FROM_WORLD<br />
| For tracking viewpoint relative to the world coordinate. (e.g., useful for immersive displays to track user's viewpoint.)<br />
|-<br />
| OBJECT_FROM_WORLD<br />
| For tracking an arbitrary physical object relative to the world coordinate.<br />
|-<br />
| OBJECT_FROM_VIEWPOINT<br />
| For tracking an arbitrary physical object relative to the viewpoint coordinate. (e.g. useful for computer vision based AR tracking systems.)<br />
|}<br />
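<br />
As an illustrative sketch (the DEF names are ours, not part of the proposal), tracking data described as VIEWPOINT_FROM_WORLD could drive the user's view in an immersive display by routing the sensor outputs into a Viewpoint node:<br />
<br />
<pre><br />
<TrackingSensor DEF='headTracker' description='VIEWPOINT_FROM_WORLD' /><br />
<Viewpoint DEF='userView' /><br />
<ROUTE fromNode='headTracker' fromField='position' toNode='userView' toField='position'/><br />
<ROUTE fromNode='headTracker' fromField='rotation' toNode='userView' toField='orientation'/><br />
</pre><br />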
<br />
== 3. Rendering video stream from camera ==<br />
To visualize an MR scene, the video stream image acquired from a sensor node should be rendered in the X3D scene. For AR visualization, the video stream should be rendered as the background of the virtual environment, while in AV visualization the video stream is used as a texture on a virtual object.<br />
<br />
<br />
=== 3.1 Using video stream as a Texture ===<br />
For using the video stream image as a texture, no extension of the standard is needed. We can use the PixelTexture node, which is already available in the current version of the X3D specification. The video stream image from the CalibratedCameraSensor node’s “image” field can be routed to the corresponding field of the PixelTexture node. The following example shows how this routing works.<br />
<br />
<pre><br />
<CalibratedCameraSensor DEF='camera' /><br />
...<br />
<PixelTexture DEF='tex' /><br />
...<br />
<ROUTE fromNode='camera' fromField='image' toNode='tex' toField='image'/><br />
</pre><br />
<br />
<br />
=== 3.2 Using video stream as a Background ===<br />
The Background node in the current X3D specification covers only environmental backgrounds in 3D space. Both the Background and TextureBackground nodes describe the environment around the user’s viewpoint, represented as a colored sphere or a textured cube around the user. In both cases the background of the virtual scene is updated depending on the viewing direction of the user. However, for AR visualization, the background of the virtual scene should always show the video stream from the camera sensor.<br />
While the Background and TextureBackground nodes represent a three-dimensional environmental background around the user, the AR background should work as a two-dimensional backdrop of the viewport on which the 3D scene is rendered. For this purpose we need a new node type that can represent this kind of background, working as a 2D backdrop of the scene. We propose two new nodes for this purpose: the BackdropBackground and ImageBackdropBackground nodes. The structure of these nodes is described as follows:<br />
<br />
<pre><br />
BackdropBackground: X3DBackgroundNode {<br />
SFColor [in,out] color<br />
MFString [in,out] url<br />
}<br />
<br />
ImageBackdropBackground: X3DBackgroundNode {<br />
SFColor [in,out] color<br />
SFImage [in,out] image<br />
}<br />
</pre><br />
<br />
While only ImageBackdropBackground is necessary for AR applications, we also define the BackdropBackground node as another node that corresponds to the node structure of the Background node.<br />
Feeding the video stream image from the camera sensor to the ImageBackdropBackground node is achieved by routing the ‘image’ field of the CalibratedCameraSensor node to the ‘image’ field of the ImageBackdropBackground node.<br />
<br />
<pre><br />
<CalibratedCameraSensor DEF='camera' /><br />
...<br />
<ImageBackdropBackground DEF='bg' /><br />
...<br />
<ROUTE fromNode='camera' fromField='image' toNode='bg' toField='image'/><br />
</pre><br />
<br />
The ImageBackdropBackground node automatically scales the image, fitting the width or height of the image to that of the viewport while retaining the aspect ratio. As a result, the background image fills the entire viewport so that no blank region is left uncovered by the image background.<br />
<br />
<br />
== 4. Camera calibration ==<br />
To ensure the virtual world appears correctly registered to the real world in the MR scene, the camera parameters of the virtual camera should be calibrated to match those of the real camera. There are two types of camera parameters: internal and external. External parameters are the position and orientation of the camera in the world reference frame, while internal parameters describe the projection of the 3D scene onto a 2D plane to produce a rendered image of the 3D scene. <br />
The external parameters of a real camera are measured with tracking sensors, while the internal parameters are determined by the optical properties of the real camera. The internal and external parameters of the real camera can be fed into the X3D scene through the CalibratedCameraSensor and TrackingSensor nodes defined in section 2.<br />
The Viewpoint node in the X3D specification represents a virtual camera in the virtual scene. While the fields of the Viewpoint node cover the full set of external parameters (position and orientation), they cover only limited aspects of the internal parameters. To meet the minimum requirements for MR visualization, we propose adding the two new fields shown at the bottom of the following definition to the Viewpoint node.<br />
<br />
<pre><br />
Viewpoint: X3DViewpointNode {<br />
SFVec3f [in,out] centerOfRotation<br />
SFFloat [in,out] fieldOfView<br />
SFRotation [in,out] orientation<br />
SFVec3f [in,out] position<br />
<br />
SFString [in,out] fovMode<br />
SFFloat [in,out] aspectRatio<br />
}<br />
</pre><br />
<br />
In the current X3D specification, the “fieldOfView” field represents the minimum field of view (either vertical or horizontal) that the virtual camera will have. This is insufficient for MR visualization, which needs precise calibration of the field of view (FOV) parameter. While the straightforward solution would be to have both horizontal and vertical FOV parameters as explicit individual fields, this is not compatible with the current specification. <br />
To keep backward compatibility with the current specification, we propose a “fovMode” field that designates what the value of the “fieldOfView” field represents. The “fovMode” field can have one of the following values: MINIMUM, VERTICAL, HORIZONTAL, or DIAGONAL. MINIMUM is the default value, indicating that the value of “fieldOfView” is interpreted as a minimum FOV (either vertical or horizontal), as in the current specification. When the “fovMode” field has the value VERTICAL, HORIZONTAL, or DIAGONAL, the “fieldOfView” is interpreted as the specific FOV in the vertical, horizontal, or diagonal direction, respectively.<br />
In addition to the “fovMode” field, the aspect ratio of the FOV in real cameras does not necessarily follow the aspect ratio of the image it produces. To accommodate this, the “aspectRatio” field is introduced, representing the ratio of the vertical FOV to the horizontal FOV (vertical/horizontal).<br />
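<br />
As a worked illustration (the values and DEF name are ours): a real camera calibrated with a 45-degree (0.7854 rad) vertical FOV and a 60-degree horizontal FOV would be matched by setting fovMode to VERTICAL and aspectRatio to 0.75 (= 45/60):<br />
<br />
<pre><br />
<Viewpoint DEF='calibratedView' position='0 0 0' fieldOfView='0.7854' fovMode='VERTICAL' aspectRatio='0.75' /><br />
</pre><br />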
<br />
<br />
== 5. Use cases ==<br />
<br />
The following example X3D scene shows how a simple AR scene can be described using the proposed nodes.<br />
<pre><br />
...<br />
<CalibratedCameraSensor DEF='camera' /><br />
<br />
<ImageBackdropBackground DEF='bg' /><br />
<ROUTE fromNode='camera' fromField='image' toNode='bg' toField='image'/><br />
<br />
<Viewpoint DEF='arview' position='0 0 0' /><br />
<ROUTE fromNode='camera' fromField='fieldOfView' toNode='arview' toField='fieldOfView'/><br />
<ROUTE fromNode='camera' fromField='fovMode' toNode='arview' toField='fovMode'/><br />
<ROUTE fromNode='camera' fromField='aspectRatio' toNode='arview' toField='aspectRatio'/><br />
<br />
<br />
<TrackingSensor DEF='tracker1' description='OBJECT_FROM_VIEWPOINT' /><br />
<br />
<Transform DEF='tracked_object'> <br />
<Shape><br />
<Appearance><Material diffuseColor="1 0 0" /></Appearance> <br />
<Box /> <br />
</Shape> <br />
</Transform> <br />
<br />
<ROUTE fromNode='tracker1' fromField='position' toNode='tracked_object' toField='translation'/><br />
<ROUTE fromNode='tracker1' fromField='rotation' toNode='tracked_object' toField='rotation'/><br />
...<br />
</pre></div>Endoverthttps://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review&diff=6642AR Proposal Public Review2013-02-26T22:12:21Z<p>Endovert: </p>
<hr />
<div>-Working draft <br />
<br />
By [[X3D and Augmented Reality|Augmented Reality Working Group]], Web3D Consortium<br />
<br />
Feb 25, 2013<br />
<br />
The Augmented Reality Continuum Working Group (ARC WG) has been developing a proposal for extending X3D to support augmented and mixed reality visualization. As the proposal nears completion, the working group has decided to open it to the public and collect feedback from others, including our Web3D members, other working groups, and anyone who is interested in AR and Web3D technology. The ARC WG welcomes all feedback that helps consolidate the proposal and advance to the next stage of extending the X3D specification to support AR and MR visualization.<br />
<br />
* Reviewing period: Feb 27 ~ Apr 10, 2013<br />
* How to give feedback:<br />
** Use the "discussion" tab on the top of this page to give feedback and start discussions.<br />
** If you prefer e-mails, please mail your feedback to Gun Lee (ARC WG co-chair, endovert[at]postech.ac.kr)<br />
<br />
<br />
= Extending X3D for MR Visualization - Unified Proposal =<br />
<br />
== 1. Introduction ==<br />
<br />
This document provides an overview of the unified proposal for extending the X3D standard to support Mixed Reality (MR) visualization. Mixed Reality includes both Augmented Reality (AR) and Augmented Virtuality (AV). The extension of the X3D standard proposed in this document is based on a comparison of three proposals: two from the Web3D Korea Chapter (KC1 and KC2) and one from InstantReality (IR), Fraunhofer IGD. The details of the comparison can be found on the following public wiki page: http://web3d.org/x3d/wiki/index.php/Comparison_of_X3D_AR_Proposals.<br />
<br />
In this document we focus on the three main components that are necessary to achieve basic MR visualization: sensors, video stream rendering, and camera calibration. We try to minimize the changes to the current specification, but also to make the solution generic enough that it can be applied to various future applications beyond MR visualization.<br />
<br />
In order to focus on consolidating the fundamental features, we leave out the following items/functions from the original proposals as future work.<br />
* High-level events for tracking from proposal KC2<br />
* Supporting color keying in texture from proposal KC1<br />
* Supporting correct occlusion between virtual and physical objects (Ghost object from Proposal KC1 and Color Mask + sortKey from IR)<br />
* Supporting generic types of sensors, including those not directly related to AR/MR visualization (direct sensor nodes in IR)<br />
<br />
<br />
== 2. Sensors ==<br />
To achieve MR visualization, sensing the real environment is crucial. Two types of sensors are necessary to support MR visualization: those that acquire video stream images from a real camera, and those that acquire motion tracking information about physical objects. While sensors could be generalized to acquire any type of information from the real world, in this proposal we focus on these two sensor types that are crucial for MR visualization.<br />
<br />
In this proposal, two new nodes, CalibratedCameraSensor and TrackingSensor, are proposed as interfaces to the sensors that are essential for MR visualization.<br />
<br />
Since hardware and software setups (including the X3D browser) vary between end users, it is not appropriate to prescribe specific devices or tracking technology within the scene. In fact, the author of the X3D scene can have no knowledge of what kind of hardware or software setup is available on the user’s side. Therefore, the X3D scene should only include a high-level description of the purpose of the sensor, giving the browser and the user a hint for choosing appropriate hardware or software in the user’s setup that meets the intended use. The “description” field describes this intended use of the sensor. At run time, the browser shows the value of the "description" field to the user through its user interface (e.g. a dialog box), asking the user to choose an appropriate sensor from the list available in the local hardware/software setup. The user chooses the appropriate hardware for the sensor node, and in this way users can view the X3D scene with the best hardware/software sensors available in their environment. <br />
<br />
In addition, the browser can be configured to use specific sensor hardware for sensor nodes with certain predefined values in the “description” field. Each type of sensor node can have a different set of predefined values for the description field, and these values are used by the browser to automatically map the default sensors preconfigured by the user. Another way to determine the default sensors is to keep a history of the sensors chosen by the user: the browser can record the mapping between sensors and sensor nodes chosen by the user, and when the same X3D scene is loaded later, reuse the mapping saved from the previous instance.<br />
<br />
Asking the user interactively at run time not only maps appropriate sensors but also provides a way to validate the use of sensors on the user's device and avoid privacy issues. Therefore, the browser must always ask the user for confirmation, even if it is able to map the sensors automatically.<br />
<br />
=== 2.1 CalibratedCameraSensor node ===<br />
The CalibratedCameraSensor node provides an interface to a camera device. The main information provided to the X3D scene through this node is the image stream captured with the camera device, exposed through the ‘image’ field. In addition to the image stream, the node also provides the internal parameters of the camera, used to calibrate the Viewpoint parameters and achieve correct composition of the MR scene. Four fields (focalPoint, fieldOfView, fovMode, and aspectRatio) provide these parameters, which correspond to the fields of the same names used in the Viewpoint node. Detailed descriptions of each field are given in section 4, where the Viewpoint node is described.<br />
<br />
<pre><br />
CalibratedCameraSensor : X3DSensorNode {<br />
SFBool [in,out] enabled TRUE<br />
SFNode [in,out] metadata NULL [X3DMetadataObject]<br />
SFBool [out] isActive<br />
<br />
SFString [in,out] description ""<br />
SFImage [out] image <br />
SFVec2f [out] focalPoint<br />
SFFloat [out] fieldOfView<br />
SFString [out] fovMode<br />
SFFloat [out] aspectRatio<br />
}<br />
</pre><br />
<br />
The browser should ask the user to choose which camera to use for each CalibratedCameraSensor node through the user interface (e.g. a dialog box). The browser shows the value of the "description" field to the user as a hint on what type of camera is expected. Predefined values can be used in the description field to let the browser automatically map the default sensors preconfigured by the user. Table 1 shows the predefined values for the description field of CalibratedCameraSensor nodes.<br />
<br />
{| border='1' <br />
|+ Table 1. Predefined values for the description field of CalibratedCameraSensor nodes<br />
! Predefined Value !! Description<br />
|-<br />
| USER_FACING<br />
| Camera that is facing towards the user.<br />
|-<br />
| WORLD_FACING<br />
| Camera that is facing towards the user’s view direction.<br />
|}<br />
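<br />
For example, a scene intended for hand-held AR could request the rear camera with the WORLD_FACING value. This is a minimal sketch; the DEF name is illustrative:<br />
<br />
<pre><br />
<CalibratedCameraSensor DEF='arCamera' description='WORLD_FACING' /><br />
</pre><br />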
<br />
<br />
=== 2.2 TrackingSensor node ===<br />
The TrackingSensor node provides an interface to motion tracking information. The main information provided by this node is the position and orientation of the tracked physical object, delivered through the 'position' and 'rotation' fields respectively. The 'isPositionAvailable' and 'isRotationAvailable' fields are TRUE when the tracking target is successfully tracked and the values of the 'position' and 'rotation' fields are valid.<br />
<br />
<pre><br />
TrackingSensor : X3DSensorNode {<br />
SFBool [in,out] enabled TRUE<br />
SFNode [in,out] metadata NULL [X3DMetadataObject]<br />
SFBool [out] isActive<br />
<br />
SFString [in,out] description ""<br />
SFVec3f [out] position<br />
SFRotation [out] rotation<br />
SFBool [out] isPositionAvailable FALSE<br />
SFBool [out] isRotationAvailable FALSE<br />
}<br />
</pre><br />
<br />
The "description" string field defines the intended use of the tracking sensor and is shown to the user to help choose the tracking hardware to use. The value should state what kind of object the tracking sensor is intended to track and what reference frame its coordinate system uses. Table 2 shows predefined values for the description field that help the browser automatically map default sensors preconfigured by the user.<br />
<br />
{| border='1' <br />
|+ Table 2. Predefined values for the description field of TrackingSensor nodes<br />
! Predefined Value !! Description<br />
|-<br />
| VIEWPOINT_FROM_WORLD<br />
| For tracking viewpoint relative to the world coordinate. (e.g., useful for immersive displays to track user's viewpoint.)<br />
|-<br />
| OBJECT_FROM_WORLD<br />
| For tracking an arbitrary physical object relative to the world coordinate.<br />
|-<br />
| OBJECT_FROM_VIEWPOINT<br />
| For tracking an arbitrary physical object relative to the viewpoint coordinate. (e.g. useful for computer vision based AR tracking systems.)<br />
|}<br />
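<br />
For instance, in an immersive display setup, a head tracker declared with VIEWPOINT_FROM_WORLD could drive the viewpoint directly. The following sketch assumes the nodes proposed above; the DEF names are illustrative:<br />
<br />
<pre><br />
<TrackingSensor DEF='headTracker' description='VIEWPOINT_FROM_WORLD' /><br />
<Viewpoint DEF='userView' /><br />
<ROUTE fromNode='headTracker' fromField='position' toNode='userView' toField='position'/><br />
<ROUTE fromNode='headTracker' fromField='rotation' toNode='userView' toField='orientation'/><br />
</pre><br />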
<br />
== 3. Rendering video stream from camera ==<br />
To visualize an MR scene, the video stream image acquired from a sensor node should be rendered in the X3D scene. For AR visualization, the video stream is rendered as the background of the virtual environment, while for AV visualization, it is used as a texture on a virtual object.<br />
<br />
<br />
=== 3.1 Using video stream as a Texture ===<br />
To use the video stream image as a texture, no extension of the standard is needed: we can use the PixelTexture node, which is already available in the current version of the X3D specification. The video stream image from the CalibratedCameraSensor node's "image" field can be routed to the corresponding field of the PixelTexture node. The following example shows how this routing works.<br />
<br />
<pre><br />
<CalibratedCameraSensor DEF='camera' /><br />
...<br />
<PixelTexture DEF='tex' /><br />
...<br />
<ROUTE fromNode='camera' fromField='image' toNode='tex' toField='image'/><br />
</pre><br />
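<br />
In an AV scene, the PixelTexture receiving the camera stream would typically be applied to some geometry, such as a virtual screen. A minimal sketch continuing the routing above; the Box geometry and its size are illustrative:<br />
<br />
<pre><br />
<Shape><br />
  <Appearance><br />
    <PixelTexture DEF='tex' /><br />
  </Appearance><br />
  <Box size='4 3 0.1' /><br />
</Shape><br />
</pre><br />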
<br />
<br />
=== 3.2 Using video stream as a Background ===<br />
The Background node in the current X3D specification covers only environmental backgrounds in 3D space. Both the Background and TextureBackground nodes describe the environment around the user's viewpoint, represented as a colored sphere or a textured cube around the user. In both cases the background of the virtual scene is updated depending on the viewing direction of the user. However, for AR visualization, the background of the virtual scene should always show the video stream from the camera sensor.<br />
While the Background and TextureBackground nodes represent a three-dimensional environmental background around the user, the AR background should work as a two-dimensional backdrop of the viewport in which the 3D scene is rendered. For this purpose we need a new node type that represents this kind of background, working as a 2D backdrop of the scene. We propose two new nodes for this purpose: the BackdropBackground and ImageBackdropBackground nodes. Their structure is described as follows:<br />
<br />
<pre><br />
BackdropBackground: X3DBackgroundNode {<br />
SFColor [in,out] color<br />
MFString [in,out] url<br />
}<br />
<br />
ImageBackdropBackground: X3DBackgroundNode {<br />
SFColor [in,out] color<br />
SFImage [in,out] image<br />
}<br />
</pre><br />
<br />
While only ImageBackdropBackground is necessary for AR applications, we also define the BackdropBackground node as another node that corresponds to the structure of the Background node.<br />
Feeding the video stream image from the camera sensor to the ImageBackdropBackground node is achieved by routing the 'image' field of the CalibratedCameraSensor node to the 'image' field of the ImageBackdropBackground node.<br />
<br />
<pre><br />
<CalibratedCameraSensor DEF='camera' /><br />
...<br />
<ImageBackdropBackground DEF='bg' /><br />
...<br />
<ROUTE fromNode='camera' fromField='image' toNode='bg' toField='image'/><br />
</pre><br />
<br />
The ImageBackdropBackground node automatically scales the image, fitting the width or height of the image to that of the viewport while retaining the aspect ratio. As a result, the background image fills the entire viewport, leaving no blank regions uncovered (for example, a 640x480 camera image shown in a 1280x720 viewport would be scaled by a factor of 2 to 1280x960, with the vertical overflow cropped).<br />
<br />
<br />
== 4. Camera calibration ==<br />
To ensure that the virtual world appears correctly registered to the real world in the MR scene, the camera parameters of the virtual camera should be calibrated to match those of the real camera. There are two types of camera parameters: internal and external. External parameters are the position and orientation of the camera in the world reference frame, while the internal parameters describe the projection of the 3D scene onto a 2D plane to produce the rendered image. <br />
The external parameters of a real camera are measured with tracking sensors, while the internal parameters are determined by the optical characteristics of the real camera. Both sets of parameters can be fed into the X3D scene through the CalibratedCameraSensor and TrackingSensor nodes defined in section 2.<br />
The Viewpoint node in the X3D specification represents a virtual camera in the virtual scene. While the fields of the Viewpoint node cover the full set of external parameters (position and orientation), they cover only limited aspects of the internal parameters. To meet the minimum requirements for MR visualization, we propose adding two new fields (the last two shown below) to the Viewpoint node.<br />
<br />
<pre><br />
Viewpoint: X3DViewpointNode {<br />
SFVec3f [in,out] centerOfRotation<br />
SFFloat [in,out] fieldOfView<br />
SFRotation [in,out] orientation<br />
SFVec3f [in,out] position<br />
<br />
SFString [in,out] fovMode<br />
SFFloat [in,out] aspectRatio<br />
}<br />
</pre><br />
<br />
In the current X3D specification, the "fieldOfView" field represents the minimum field of view (either vertical or horizontal) of the virtual camera. This is insufficient for MR visualization, which needs precise calibration of the field of view (FOV). While the straightforward approach would be to have explicit horizontal and vertical FOV parameters as individual fields, this would not be compatible with the current specification. <br />
To keep backward compatibility with the current specification, we propose a "fovMode" field that designates what the value of the "fieldOfView" field represents. The "fovMode" field can have one of the following values: MINIMUM, VERTICAL, HORIZONTAL, or DIAGONAL. MINIMUM is the default value and means that the "fieldOfView" value is treated as a minimum FOV (either vertical or horizontal), as in the current specification. When "fovMode" is VERTICAL, HORIZONTAL, or DIAGONAL, the "fieldOfView" value specifies the FOV in the vertical, horizontal, or diagonal direction, respectively.<br />
In addition to the "fovMode" field, the aspect ratio of the FOV of a real camera does not necessarily follow the aspect ratio of the images it produces. To accommodate this, the "aspectRatio" field is introduced, representing the ratio of the vertical FOV to the horizontal FOV (vertical/horizontal).<br />
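<br />
As a concrete illustration (the values are hypothetical), a camera calibrated with a horizontal FOV of 60 degrees (about 1.047 rad) and an aspectRatio of 0.75 would, by the definition above, have a vertical FOV of 0.75 x 60 = 45 degrees, and could be declared as:<br />
<br />
<pre><br />
<Viewpoint position='0 0 0' fieldOfView='1.047198' fovMode='HORIZONTAL' aspectRatio='0.75' /><br />
</pre><br />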
<br />
<br />
== 5. Use cases ==<br />
<br />
The following example X3D scene shows how a simple AR scene can be described using the proposed nodes.<br />
<pre><br />
...<br />
<CalibratedCameraSensor DEF='camera' /><br />
<br />
<ImageBackdropBackground DEF='bg' /><br />
<ROUTE fromNode='camera' fromField='image' toNode='bg' toField='image'/><br />
<br />
<Viewpoint DEF='arview' position='0 0 0' /><br />
<ROUTE fromNode='camera' fromField='fieldOfView' toNode='arview' toField='fieldOfView'/><br />
<ROUTE fromNode='camera' fromField='fovMode' toNode='arview' toField='fovMode'/><br />
<ROUTE fromNode='camera' fromField='aspectRatio' toNode='arview' toField='aspectRatio'/><br />
<br />
<br />
<TrackingSensor DEF='tracker1' description='OBJECT_FROM_VIEWPOINT' /><br />
<br />
<Transform DEF='tracked_object'><br />
<Shape><br />
<Appearance><Material diffuseColor="1 0 0" /></Appearance><br />
<Box /><br />
</Shape><br />
</Transform><br />
<br />
<ROUTE fromNode='tracker1' fromField='position' toNode='tracked_object' toField='translation'/><br />
<ROUTE fromNode='tracker1' fromField='rotation' toNode='tracked_object' toField='rotation'/><br />
...<br />
</pre></div>Endoverthttps://www.web3d.org/wiki/index.php?title=X3D_and_Augmented_Reality&diff=6641X3D and Augmented Reality2013-02-25T18:46:04Z<p>Endovert: </p>
<hr />
<div>= Calendar: Meetings and Events =<br />
<br />
== Meetings ==<br />
Our monthly teleconference meeting for X3D and Augmented Reality is usually:<br />
* 17:00-18:00 Pacific time on 3rd Wednesday, which is 10:00-11:00 Thursday in Korea and 02:00-03:00 Thursday in Europe.<br />
<br />
The schedule is subject to change based on the time zone for most expected attendees. The meeting is held on the Web3D Consortium teleconference line.<br />
<br />
Our next teleconference meeting is:<br />
* Mar 20 Wed 2013 at 17:00 (US Pacific) / Mar 21 Thu 10:00 (Korea)<br />
<br />
== Events ==<br />
* [http://www.perey.com/ARStandards/eighth-ar-standards-community-meeting AR Standards Community Meeting, March 1-2, 2013, Barcelona, Spain]<br />
* Web 3D and ISO/IEC JTC1 SC24 WG9 meetings, Jan 28-31, 2013, Seoul, Korea<br />
* ISO/IEC JTC1 SC24 Plenary and WG6, WG9 meetings, August 20-24, 2012, Brussels, Belgium <br />
* [http://s2012.siggraph.org/attendees/birds-feather Augmented and Mixed Reality Web3D BOF at SIGGRAPH 2012, August 8, 2012, Los Angeles, CA, USA]<br />
* [http://web3d2012.org Web3D Conference, August 4-5, 2012, Los Angeles, CA, USA]<br />
* OGC TP/PC Meeting - AR Working Group - Sept 19-23, 2011, Boulder, CO<br />
* AR Standards Community Meeting - Oct 24, 25, 2011 - Basel, Switzerland<br />
* W3C TPAC Meeting W3C AR Community Group - Oct 31- Nov 4, 2011, Santa Clara, CA<br />
* ISO JTC Meeting - Nov 7 - 10, 2011 - San Diego, CA<br />
* [http://www.perey.com/ARStandards/third-international-ar-standards-meeting/ Third International AR Standards Meeting, June 15-17, Taichung, Taiwan]<br />
* [http://web3d2011.org/ Augmented/Mixed Reality Workshop at Web3D Conference, 10:50AM-12:10PM Local Time, June 21, Paris, France]<br />
* [http://www.siggraph.org/s2011/for_attendees/birds-feather Augmented and Mixed Reality Web3D BOF at SIGGRAPH 2011, August 2011, Vancouver, Canada]<br />
* SC24 Augmented and Mixed Reality Study Group Meeting @ SC24 Plenary and Working Group Meetings, August 21 2011, Rapid City, South Dakota, USA<br />
* Christine Perey AR Workshop, 23-25 October 2011, Basel Switzerland<br />
<br />
[[Upcoming X3D events]]<br />
<br />
= Charter =<br />
<br />
== Overview ==<br />
The Augmented Reality (AR) Working Group focuses on utilizing and extending X3D capabilities to support augmented reality (AR) and mixed reality (MR) applications.<br />
<br />
''Discussion.'' These X3D AR discussions were initially held as part of a special interest group. Now that we have determined that sufficient need exists to modify the X3D Specification, this effort is moving forward by forming the X3D AR Working Group.<br />
<br />
== Goals ==<br />
Planned goals of AR WG include:<br />
* Collect requirements and describe typical use cases for using X3D in AR/MR applications<br />
* Produce and propose X3D components for AR/MR scenes and applications<br />
* Produce sample AR/MR applications using X3D to demonstrate how this functionality can work correctly<br />
<br />
== Tasks ==<br />
* Investigate state-of-the-art technologies related to X3D and AR/MR to collect requirements and use cases<br />
** Archive and distribute collected requirements and use cases through AR WG wiki page<br />
* Hold regular meetings and workshops to motivate discussions and editing of AR/MR related X3D specification proposals<br />
** Regular meetings will be held through teleconferencing and workshops will be planned through regular meetings<br />
* Promote X3D in AR/MR field by developing and distributing promotional materials of AR applications based on X3D<br />
** Promotional materials include sample applications, video clips, documents, images distributed on the web<br />
<br />
== Deliverables and Timeline ==<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_AR_Requirements_and_Use_cases Use cases and requirements of X3D for AR/MR application] - August 2011<br />
* [http://web3d.org/x3d/wiki/index.php/Comparison_of_X3D_AR_Proposals Comparison of existing proposals] - March 2012<br />
* [http://web3d.org/x3d/wiki/index.php/Plans_for_Merging_X3D_AR_Proposals Merge X3D AR Proposals]<br />
* Proposed new/extended functions and nodes for X3D specification<br />
** Public review: Feb 27 ~ Apr 10, 2013 (TBC)<br />
* Define specification prose for new functionality and encodings<br />
* Sample AR/MR applications with X3D<br />
<br />
== Participants ==<br />
* Anita Havele<br />
* Damon Hernandez<br />
* Don Brutzman<br />
* Gerard J. Kim<br />
* Gun Lee<br />
* Len Daly, Daly Realism<br />
* Myeongwon Lee<br />
* Oliver Neubauer<br />
* Sabine Webel<br />
* Timo Engelke<br />
* Yvonne Jung<br />
<br />
== Meetings ==<br />
Regular meetings are held monthly through teleconference.<br />
Participation is open to everyone via the Web3D teleconference line. <br />
Non-members can [mailto:anita.havele@web3d.org?subject=AR%20teleconference%20request request access information] for this call, or [http://web3d.org/membership/join Join Web3D]!<br />
Meeting agenda and minutes are announced through the X3D WG mailing list.<br />
<br />
Meeting minutes are also distributed on the X3D mailing list and [http://www.web3d.org/membership/login/list_archives/ archived online].<br />
<br />
<br />
= History and Background Information =<br />
The Web3D Consortium formed a special interest group on AR initiatives in July 2009, which worked to help create the AR Technology Road Map.<br />
<br />
Several Web3D Consortium member projects showcase the feasibility of AR in X3D, particularly the open-source [http://www.x3dom.org X3DOM] framework produced by [http://www.igd.fhg.de/www/igd-a4 Fraunhofer IGD]. <br />
<br />
* X3DOM can serve as an out of the box, standards-based solution for AR developers.<br />
* X3D and X3DOM continue to improve scripting options for X3D with HTML5 and this has good promise for AR applications. <br />
* X3DOM is being considered for potential standardization in a Mobile and/or Augmented Reality (AR) profile for X3D.<br />
<br />
Our [http://www.web3d.kr Web3D Korea Chapter members] from ETRI are working on Mixed Reality visualization in X3D.<br />
<br />
* The Consortium has been working closely with the W3C HTML5 WG to align our standards for 3D visualization on the Web. <br />
<br />
Additional details are available at: <br />
<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_and_HTML5 X3D and HTML5]<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_and_HTML_FAQ X3D and HTML FAQ]<br />
* [http://www.x3dom.org X3DOM]<br />
<br />
Web3D Consortium is also engaged with the [http://www.arstandards.org AR standards], [http://opengeospacial.org OGC], [http://www.khronos.org Khronos] and [http://w3c.org W3C] organizations for applying and adapting X3D.<br />
<br />
* Web3D Consortium will be starting an AR working group in March 2011 to develop a roadmap for AR standardization. We encourage your participation. <br />
* The technology discussions and meetings will be conducted publicly so that the widest possible comment and reaction can be considered from the Community. Feedback from this community will help X3D quickly and stably adopt new technologies to provide ongoing archival value for all 3D graphics.<br />
* Meanwhile Web3D Members still retain important membership rights of proposing significant new technology and considering patented technologies within a "safe haven" prior to public release.<br />
<br />
The [http://www.web3d.org/x3d/wiki/images/3/32/X3dProfilePossibilitiesMobileHtml5AR.2010June29.pdf X3D Mobile Profile slideset]<br />
from last summer's Mobile X3D ISO Workshop has also been<br />
linked; it shows how Mobile, HTML5 and possibly Augmented Reality (AR) components<br />
can be aligned together.<br />
<br />
Many new Web3D capabilities are becoming available. There has been no better time to discuss X3D technologies and also join the [http://web3d.org/membership/join Web3D Consortium]. Get involved in these early discussions to create the AR Technology Road Map. We look forward to your participation.<br />
<br />
[http://www.web3d.org/wiki/index.php/Summary_Of_Old_AR_Proposals Summary of Old AR Proposals]<br />
<br />
== Developing X3D AR Specification ==<br />
The working group has reviewed the existing proposals and summarized them in [[Comparison of X3D AR Proposals]].<br />
Based on this comparison, the working group is now preparing [[Plans for Merging X3D AR Proposals]].<br />
The working group is looking for public feedback on the unified AR proposal that captures essential features for MR/AR visualization.<br />
<br />
== X3D Earth Working Group ==<br />
The X3D Earth Working Group has expanded and refined a proposal by Dr. Myeong Won Lee for a new <br />
[http://www.web3d.org/membership/login/memberwiki/index.php/X3D_v3.3_Specification_Changes#GpsSensor_node GpsSensor node].<br />
Due to several overlapping technical issues, the group has asked to collaborate with the Augmented Reality group on the final design of this node.<br />
<br />
= Interoperability with other AR and Web Standards =<br />
Several discussions at the Web3D Conference, SIGGRAPH and ISO-SC24 meeting about the recent AR Standards developments continue to improve and refine our strategy on interoperability with other standards. <br />
<br />
Establishing common implementations and examples is important to demonstrate successful interoperable capabilities for the spec. Continued collaboration and reaching out to other AR standards groups is essential. Our common goal remains maximum interoperability with all Web standards. <br />
<br />
*W3C Augmented Reality Community Group <br />
*OGC ARML WG <br />
*AR Standards Group <br />
*ISO-SC24<br />
*KHRONOS <br />
<br />
Following recent work in the Web3D AR WG, and the realization that the current state of AR content models is not comprehensive, the ISO standards committee SC24, which administers the review of X3D as an ISO standard, has established a new Working Group for Augmented and Mixed Reality. <br />
<br />
This group conducted a survey of the current state of the art in AR/MR standardization. Here is a summary of the main findings. <br />
<br />
*A need for clear and precise definitions of terms <br />
*A need for a reference architecture with the following features: <br />
**Separation of the content and the browser/player/application <br />
**Extensible and general enough to accommodate new future technologies (e.g. display devices, tracking algorithms, sensors, etc.) <br />
**Defined at the right abstraction level to be platform/vendor independent <br />
**Clear interface definitions among the subsystems <br />
*A proposal to develop a protocol between the AR/MR engine and the object tracking/recognition subsystem, independent of the algorithms used <br />
*Reuse of existing standards as much as possible (see below) <br />
*A content model, based on the underlying reference architecture, that is comprehensive (e.g. scene/world model, interaction, rich augmentation methods and styling options, representation of extensive types of physical real-world objects) <br />
*A need for a rich and sophisticated scene/world model <br />
**An X3D-based approach seems promising for providing a sophisticated world model (scene graph structure) and many media objects for augmentation; a proposal to extend the X3D standards <br />
*A need for representation of sensors and physical objects <br />
**A proposal for a merged abstraction of physical objects and separate sensors as "objects with virtual sensors", extending the virtual sensors of X3D <br />
*A need for sophisticated representation of "places of interest" (POI) <br />
**A proposal to use and extend the OGC/KML standards <br />
*A need for extensive styling of 2D/3D information <br />
**A proposal to use and extend HTML5 <br />
*A need to abstract AR/MR interaction behaviors <br />
**Complicated behaviors to be handled by scripts and a DOM-like approach <br />
**A proposal to extend X3DOM for this purpose <br />
*A need for other supporting functionality: <br />
**Inclusion and specification of real-world capture cameras/sensors <br />
**Moving texture/background functionality for video see-through AR <br />
**Handling of depth data and occlusion effects <br />
**Specification of virtual/real light sources and rendering methods <br />
Based on these findings, the group proposes to derive an AR content model as an extension of a virtual world, with provisions for representing physically-sensed objects. The provisions refer to ways to specify the physical augmentation "targets" without specific sensor information, and ways to (intuitively) tie or associate them to their virtual counterparts. This results in vendor independence, ease of use, and support for extensibility. <br />
<br />
The ISO AR standardization proposal recommends: <br />
*Merging HTML and X3D (X3DOM / Declarative 3D) for abstract content components for 2D and 3D augmentation. <br />
*The OGC KML standards for describing POIs and sensed physical objects. <br />
*A scripting approach for non-standard complex content behaviors and the use of remote cloud services. <br />
Current technical work within the Web3D AR WG includes harmonizing these proposals for best fitting AR capabilities into X3D scenes. This work will be formally written up as the Augmented Reality (AR) Component for the X3D Specification.<br />
<br />
= Participation and Liaisons =<br />
* Christine Perey's group on AR Standardization<br />
* Other partnerships can also be considered as appropriate.<br />
<br />
Of interest is that Web3D Consortium Intellectual Property Rights (IPR) Policy insists on open, royalty free (RF) specifications. These coexist effectively with the Web Architecture and many different business models.<br />
<br />
= Augmented Reality Roadmap for X3D =<br />
The [[Augmented Reality Roadmap for X3D]] is a description document charting shared strategies and our way forward. Currently under discussion.</div>Endoverthttps://www.web3d.org/wiki/index.php?title=Summary_Of_Old_AR_Proposals&diff=6640Summary Of Old AR Proposals2013-02-25T18:42:43Z<p>Endovert: Created page with "= Existing Proposals = == Instant Reality == Instant Reality is a Mixed Reality framework developed and maintained by Fraunhofer IGD, which uses X3D as application descriptio..."</p>
<hr />
<div>= Existing Proposals =<br />
<br />
== Instant Reality ==<br />
Instant Reality is a Mixed Reality framework developed and maintained by Fraunhofer IGD. It uses X3D as its application description language and thus also provides corresponding nodes and concepts for developing AR/MR applications. The full node documentation can be found at [http://doc.instantreality.org/documentation/ IR-Docs].<br />
<br />
Several tutorials on vision-based tracking with Instant Reality also exist. They describe, for example, specific nodes like the IOSensor node for retrieving camera streams and the tracking results of the vision subsystem, the new PolygonBackground node for displaying the camera images behind the virtual objects, and some useful camera extensions to the X3D Viewpoint node: [http://doc.instantreality.org/tutorial/ Tracking-Tutorial].<br />
<br />
In addition, several papers on AR and MR visualization have already been published at the Web3D conferences, discussing, for example, occlusions, shadows and lighting in MR scenes in the context of X3D:<br />
* [http://www.web3d.org/x3d/wiki/images/6/63/07X3DforMR.pdf MR-Paper-2007]<br />
* [http://www.web3d.org/x3d/wiki/images/0/0d/07X3DforMR_slides.pdf MR-Paper-Slides]<br />
* [http://www.web3d.org/x3d/wiki/images/0/0a/08PRTforX3D.pdf MR-Paper-2008]<br />
Further ideas on realistic MR rendering in X3D are outlined in chapter 6 (p. 163 ff.), and especially in section 6.4, of the following PhD thesis (by Y. Jung): [http://tuprints.ulb.tu-darmstadt.de/2489/ PDF].<br />
<br />
The screenshots below show several issues in MR visualization.<br />
From top left to bottom right: (a) real image of a room; (b) real scene augmented with virtual character (note that the character appears to be before the table); (c) augmentation with additional occlusion handling (note that the character still seems to float on the floor); (d) augmentation with occlusion and shadows (applied via differential rendering).<br />
<br />
[[image:Kaiser140.png|600px|MR visualization]]<br />
<br />
In the following, an example of achieving occlusion effects between real and virtual objects in AR/MR scenes is shown, given that the (real) 3D object for which occlusions should be handled already exists as a 3D model (given as a Shape in this example). Here, the invisible ghosting objects (denoting real scene geometry) are simply created by rendering them ''before'' the virtual objects (by setting the Appearance node's "sortKey" field to '-1') without writing any color values to the framebuffer (via the ColorMaskMode node), so that they initially fill only the depth buffer.<br />
<br />
<Shape><br />
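<!-- ghost object: rendered before the virtual objects (sortKey -1), writing depth but no color, so virtual geometry behind it is occluded --><br />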
<Appearance sortKey='-1'><br />
<ColorMaskMode maskR='false' maskG='false' maskB='false' maskA='false'/><br />
</Appearance><br />
...<br />
</Shape><br />
<br />
To set the camera's image as the background, we use the aforementioned PolygonBackground node. By setting its "fixedImageSize" field, the aspect ratio of the image can be defined. Depending on how you want the background image to fit into the window, you need to set the mode field to 'VERTICAL' or 'HORIZONTAL'.<br />
<br />
<PolygonBackground fixedImageSize='640,480' mode='VERTICAL'><br />
<Appearance><br />
<PixelTexture2D DEF='tex'/><br />
</Appearance><br />
</PolygonBackground> <br />
<br />
As mentioned above, more on that can be found in the corresponding tutorials, e.g. [http://doc.instantreality.org/tutorial/marker-tracking/ here].<br />
<br />
== Korean Chapter ==<br />
The Korea chapter has been keenly interested in the standardization of augmented reality in many aspects, including AR-based content. This is especially due to the recent worldwide appearance of mobile AR services and the realization (in both academia and industry) of a definite need for exchanging service content across different platforms. <br />
<br />
Three main proposals have been made within the Korean chapter, by: (1) Gerard Kim from Korea University (also representing KIST), (2) Gun A. Lee (formerly with ETRI, now with HITLabNZ), and (3) Woontack Woo of Gwangju Inst. of Science and Tech. <br />
* [http://dxp.korea.ac.kr/AR_standards/AR_standards.zip A zipped file containing various Korean proposals].<br />
* [http://dxp.korea.ac.kr/AR_standards/workshop-2011.pdf Gerry Kim's survey presented at the AR Standards Meeting in Taiwan (2011 Jun)].<br />
<br />
Here, we briefly describe each proposal and provide links to documents with more detailed descriptions. These short summaries also try to highlight their distinctions with regard to the other proposals, not in a critical sense, but as a way to suggest alternatives.<br />
<br />
(1) Gerry Kim's proposal can be highlighted by the following features:<br />
<br />
- Extension of existing X3D "sensors" and formalisms to represent physical objects serving as proxies for virtual objects<br />
<br />
- The physical objects and virtual objects are tied using the "routes" (e.g. virtual objects' parent coordinate system being set to that of the corresponding physical object).<br />
<br />
- Below is an example construct, a simple extension of the "VisibilitySensor" attached to a marker. The rough semantics would be to attach a sphere to a marker when it is visible. The visibility would be determined by the browser using a particular tracker. In this simple case, a simple marker description is given through the "Marker" node.<br />
<br />
<Scene><br />
<Group><br />
<Marker DEF='HIRO' enabled='TRUE' filename='C:\hiro.patt'/><br />
<VisibilitySensor DEF='Visibility' description='activate if seen' enabled='TRUE'/><br />
<Transform DEF='BALL'><br />
<Shape><br />
<Appearance><br />
<Material/><br />
</Appearance><br />
<Sphere/><br />
</Shape><br />
</Transform><br />
</Group><br />
<ROUTE fromNode='Visibility' fromField='visible' toNode='BALL' toField='visible' /> <br />
</Scene><br />
<br />
- Different types of sensors can be newly defined, or old ones extended, to describe various AR content. These include proximity sensors, range sensors, etc.<br />
<br />
- Different physical object descriptions will be needed at the right level of abstraction (such as the "Marker" node in the above example). These include descriptions for image patches, 3D objects, GPS locations, natural features (e.g. points, lines), etc.<br />
<br />
(2) Gun Lee's proposal<br />
<br />
- Extension of the TextureBackground and MovieTexture nodes to handle the video background for a video see-through AR implementation.<br />
<br />
- Introduction of a node called "LiveCam", representing the video capture or vision-based sensing in a video see-through AR implementation.<br />
<br />
- The video background would be routed from the "LiveCam" node, which supplies the video image and/or camera parameters. A minimal sketch of this idea is shown below.<br />
<br />
- Extension of the virtual viewpoint to accommodate more detailed camera parameters, set according to the parameters of the "LiveCam". <br />
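<br />
The following sketch illustrates the routing idea; since the proposal does not fix the field names, the 'image' field and the DEF names here are hypothetical:<br />
<br />
<LiveCam DEF='liveCam'/><br />
<MovieTexture DEF='videoTex'/><br />
<ROUTE fromNode='liveCam' fromField='image' toNode='videoTex' toField='image'/><br />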
<br />
[http://web3d.org/x3d/wiki/images/7/7f/20101216-MR-Web3D-SiggraphAsia-TeckTalk-GunLee.pdf Slides from Web3D Tech Talk at SIGGRAPH Asia 2010]<br />
<br />
(3) Woo's proposal<br />
<br />
- Woo proposes to use XML, as meta descriptors, together with existing standards (e.g. X3D, Collada, etc.) for describing the augmentation information itself.<br />
<br />
- As for the context (condition) for augmentation, a clear specification following a "5W" approach is proposed: namely who, when, where, what and how.<br />
<br />
- "who" part specifies the owner/author of the contents.<br />
<br />
- "when" part specifies content creation time.<br />
<br />
- "where" part specifies the location of the physical object to which an augmentation is attached.<br />
<br />
- "what" part specifies the what is to be augmented content (augmentation information).<br />
<br />
- "how" part specifies dynamic part (behavior) of the content.</div>Endoverthttps://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review&diff=6639AR Proposal Public Review2013-02-25T17:57:31Z<p>Endovert: </p>
<hr />
<div>-Working draft <br />
<br />
By [[X3D and Augmented Reality|Augmented Reality Working Group]], Web3D Consortium<br />
<br />
Feb 25, 2013<br />
<br />
The Augmented Reality Continuum Working Group (ARC WG) has been developing a proposal for extending X3D to support augmented and mixed reality visualization. As the proposal nears completion, the working group has decided to open it to the public and collect feedback from others, including Web3D members, other working groups, and anyone interested in AR and Web3D technology. The ARC WG welcomes all feedback that helps consolidate the proposal and advance to the next stage of extending the X3D specification to support AR and MR visualization.<br />
<br />
* Reviewing period: Feb 27 ~ Apr 10, 2013<br />
* How to give feedback:<br />
** Use the "discussion" tab on the top of this page to give feedback and start discussions.<br />
** If you prefer e-mails, please mail your feedback to Gun Lee (ARC WG co-chair, endovert[at]postech.ac.kr)<br />
<br />
<br />
= Extending X3D for MR Visualization - Unified Proposal =<br />
<br />
== 1. Introduction ==<br />
<br />
This document gives an overview of the unified proposal for extending the X3D standard to support Mixed Reality (MR) visualization. Mixed Reality includes both Augmented Reality (AR) and Augmented Virtuality (AV). The extension of the X3D standard proposed in this document is based on a comparison of three proposals: two from the Web3D Korea Chapter (KC1 and KC2) and one from InstantReality (IR), Fraunhofer IGD. The details of the comparison can be found on the following public wiki page: http://web3d.org/x3d/wiki/index.php/Comparison_of_X3D_AR_Proposals.<br />
<br />
In this document we focus on the three main components necessary to achieve basic MR visualization: sensors, video stream rendering, and camera calibration. We try to minimize the changes to the current specification, while keeping the solution generic enough to be applied to various future applications besides MR visualization.<br />
<br />
In order to focus on consolidating the fundamental features, we leave the following items/functions from the original proposals as future work.<br />
* High-level events for tracking from proposal KC2<br />
* Supporting color keying in textures from proposal KC1<br />
* Supporting correct occlusion between virtual and physical objects (Ghost object from proposal KC1, and Color Mask + sortKey from IR)<br />
* Supporting generic types of sensors, including those not directly related to AR/MR visualization (direct sensor nodes in IR)<br />
<br />
<br />
== 2. Sensors ==<br />
To achieve MR visualization, sensing the real environment is crucial. Two types of sensors are necessary to support MR visualization: those for acquiring video stream images from a real camera, and those for acquiring motion tracking information of physical objects. While sensors could be generalized to acquire any type of information from the real world, this proposal focuses on these two types, which are crucial for MR visualization.<br />
<br />
This proposal introduces two new nodes, CalibratedCameraSensor and TrackingSensor, which represent interfaces to the sensors essential for MR visualization.<br />
<br />
Since hardware and software setups (including the X3D browser) vary between end users, it is not appropriate to name specific devices or tracking technologies within the scene. In fact, the author of an X3D scene can have no knowledge of what hardware or software is available on the user's side. Therefore, the X3D scene should only include a high-level description of the purpose of the sensor, as a hint to the browser and the user for choosing hardware or software in the user's setup that meets the intended use. The "description" field is used to express this intention. At run time, the browser shows the value of the "description" field to the user through the user interface (e.g. a dialog box), asking the user to choose an appropriate sensor from the list available in the local hardware/software setup. The user chooses the hardware to use for the sensor node, and in this way views the X3D scene with the best sensor option available in his or her environment.<br />
<br />
In addition, the browser can be configured to use specific sensor hardware for sensor nodes that carry certain predefined values in the "description" field. Each type of sensor node can have a different set of predefined values for the description field, and the browser uses these values to automatically map the default sensors preconfigured by the user. Another way to determine default sensors is to keep a history of the sensors chosen by the user: the browser records the mapping between sensors and sensor nodes, and when the same X3D scene is loaded later, it reuses the mapping saved from the previous session.<br />
<br />
Asking the user interactively at run time not only maps appropriate sensors, but also provides a way to validate the use of sensors on the user's device and so avoid privacy issues. Therefore, the browser must always ask the user for confirmation, even if it is able to map the sensors automatically.<br />
<br />
=== 2.1 CalibratedCameraSensor node ===<br />
The CalibratedCameraSensor node provides an interface to a camera device. The main information provided to the X3D scene through this node is the image stream captured by the camera, delivered through the 'image' field. In addition to the image stream, the node also provides the internal parameters of the camera for calibrating the Viewpoint parameters, to achieve correct composition of the MR scene. Four fields (focalPoint, fieldOfView, fovMode, and aspectRatio) provide these parameters, corresponding to those used in the Viewpoint node. Detailed descriptions of each field are given in section 4, where the Viewpoint node is described.<br />
<br />
<pre><br />
CalibratedCameraSensor : X3DSensorNode {<br />
SFBool [in,out] enabled TRUE<br />
SFNode [in,out] metadata NULL [X3DMetadataObject]<br />
SFBool [out] isActive<br />
<br />
SFString [in,out] description ""<br />
SFImage [out] image <br />
SFVec2f [out] focalPoint<br />
SFFloat [out] fieldOfView<br />
SFString [out] fovMode<br />
SFFloat [out] aspectRatio<br />
}<br />
</pre><br />
<br />
The browser should ask the user to choose which camera to use for each CalibratedCameraSensor node through the user interface (e.g. a dialog box). The browser shows the value of the "description" field to the user as a hint on what type of camera is expected. Predefined values can be used in the description field to let the browser automatically map the default sensors preconfigured by the user. Table 1 shows the predefined values for the description field of CalibratedCameraSensor nodes.<br />
<br />
{| border='1' <br />
|+ Table 1. Predefined values for the description field of CalibratedCameraSensor nodes<br />
! Predefined Value !! Description<br />
|-<br />
| USER_FACING<br />
| Camera that is facing towards the user.<br />
|-<br />
| WORLD_FACING<br />
| Camera that is facing towards the user’s view direction.<br />
|}<br />
<br />
<br />
=== 2.2 TrackingSensor node ===<br />
The TrackingSensor node provides an interface to motion tracking information. The main information provided by this node is the position and orientation of the tracked physical object, delivered through the 'position' and 'rotation' fields respectively. The 'isPositionAvailable' and 'isRotationAvailable' fields are TRUE when the tracking target is successfully tracked and the values of the 'position' and 'rotation' fields are valid.<br />
<br />
<pre><br />
TrackingSensor : X3DSensorNode {<br />
SFBool [in,out] enabled TRUE<br />
SFNode [in,out] metadata NULL [X3DMetadataObject]<br />
SFBool [out] isActive<br />
<br />
SFString [in,out] description ""<br />
SFVec3f [out] position<br />
SFRotation [out] rotation<br />
SFBool [out] isPositionAvailable FALSE<br />
SFBool [out] isRotationAvailable FALSE<br />
}<br />
</pre><br />
<br />
The "description" string field defines the intended use of the tracking sensor and is shown to the user to help choose the tracking hardware to use. The value should state what kind of object the tracking sensor is intended to track and what reference frame its coordinate system uses. Table 2 shows predefined values for the description field that help the browser automatically map default sensors preconfigured by the user.<br />
<br />
{| border='1' <br />
|+ Table 2. Predefined values for the description field of TrackingSensor nodes<br />
! Predefined Value !! Description<br />
|-<br />
| VIEWPOINT_FROM_WORLD<br />
| For tracking viewpoint relative to the world coordinate. (e.g., useful for immersive displays to track user's viewpoint.)<br />
|-<br />
| OBJECT_FROM_WORLD<br />
| For tracking an arbitrary physical object relative to the world coordinate.<br />
|-<br />
| OBJECT_FROM_VIEWPOINT<br />
| For tracking an arbitrary physical object relative to the viewpoint coordinate. (e.g. useful for computer vision based AR tracking systems.)<br />
|}<br />
<br />
== 3. Rendering video stream from camera ==<br />
To visualize an MR scene, the video stream image acquired from a sensor node should be rendered in the X3D scene. For AR visualization, the video stream is rendered as the background of the virtual environment, while for AV visualization, it is used as a texture on a virtual object.<br />
<br />
<br />
=== 3.1 Using video stream as a Texture ===<br />
To use the video stream image as a texture, no extension of the standard is needed: we can use the PixelTexture node, which is already available in the current version of the X3D specification. The video stream image from the CalibratedCameraSensor node's "image" field can be routed to the corresponding field of the PixelTexture node. The following example shows how this routing works.<br />
<br />
<pre><br />
<CalibratedCameraSensor DEF='camera' /><br />
...<br />
<PixelTexture DEF='tex' /><br />
...<br />
<ROUTE fromNode='camera' fromField='image' toNode='tex' toField='image'/><br />
</pre><br />
<br />
<br />
=== 3.2 Using video stream as a Background ===<br />
The Background node in the current X3D specification covers only environmental backgrounds in 3D space. Both the Background and TextureBackground nodes describe the environment around the user's viewpoint, represented as a colored sphere or a textured cube around the user. In both cases the background of the virtual scene is updated depending on the viewing direction of the user. However, for AR visualization, the background of the virtual scene should always show the video stream from the camera sensor.<br />
While the Background and TextureBackground nodes represent a three-dimensional environmental background around the user, the AR background should work as a two-dimensional backdrop of the viewport in which the 3D scene is rendered. For this purpose we need a new node type that represents this kind of background, working as a 2D backdrop of the scene. We propose two new nodes for this purpose: the BackdropBackground and ImageBackdropBackground nodes. Their structure is described as follows:<br />
<br />
<pre><br />
BackdropBackground: X3DBackgroundNode {<br />
SFColor [in,out] color<br />
MFString [in,out] url<br />
}<br />
<br />
ImageBackdropBackground: X3DBackgroundNode {<br />
SFColor [in,out] color<br />
SFImage [in,out] image<br />
}<br />
</pre><br />
<br />
While only ImageBackdropBackground is necessary for AR applications, we also define the BackdropBackground node as another node that corresponds to the structure of the Background node.<br />
Feeding the video stream image from the camera sensor to the ImageBackdropBackground node is achieved by routing the 'image' field of the CalibratedCameraSensor node to the 'image' field of the ImageBackdropBackground node.<br />
<br />
<pre><br />
<CalibratedCameraSensor DEF='camera' /><br />
...<br />
<ImageBackdropBackground DEF='bg' /><br />
...<br />
<ROUTE fromNode='camera' fromField='image' toNode='bg' toField='image'/><br />
</pre><br />
<br />
The ImageBackdropBackground node automatically scales the image, fitting the width or height of the image to that of the viewport while retaining the aspect ratio. As a result, the background image fills the entire viewport, leaving no blank regions uncovered (for example, a 640x480 camera image shown in a 1280x720 viewport would be scaled by a factor of 2 to 1280x960, with the vertical overflow cropped).<br />
<br />
<br />
== 4. Camera calibration ==<br />
To ensure that the virtual world appears correctly registered to the real world in the MR scene, the camera parameters of the virtual camera should be calibrated to match those of the real camera. There are two types of camera parameters: internal and external. External parameters are the position and orientation of the camera in the world reference frame, while the internal parameters describe the projection of the 3D scene onto a 2D plane to produce the rendered image. <br />
The external parameters of a real camera are measured with tracking sensors, while the internal parameters are determined by the optical characteristics of the real camera. Both sets of parameters can be fed into the X3D scene through the CalibratedCameraSensor and TrackingSensor nodes defined in section 2.<br />
The Viewpoint node in the X3D specification represents a virtual camera in the virtual scene. While the fields of the Viewpoint node cover the full set of external parameters (position and orientation), they cover only limited aspects of the internal parameters. To meet the minimum requirements for MR visualization, we propose adding two new fields (the last two shown below) to the Viewpoint node.<br />
<br />
<pre><br />
Viewpoint: X3DViewpointNode {<br />
SFVec3f [in,out] centerOfRotation<br />
SFFloat [in,out] fieldOfView<br />
SFRotation [in,out] orientation<br />
SFVec3f [in,out] position<br />
<br />
SFString [in,out] fovMode<br />
SFFloat [in,out] aspectRatio<br />
}<br />
</pre><br />
<br />
In the current X3D specification, the "fieldOfView" field represents the minimum field of view (either vertical or horizontal) of the virtual camera. This is insufficient for MR visualization, which needs precise calibration of the field of view (FOV). While the straightforward approach would be to have explicit horizontal and vertical FOV parameters as individual fields, this would not be compatible with the current specification. <br />
To keep backward compatibility with the current specification, we propose a "fovMode" field that designates what the value of the "fieldOfView" field represents. The "fovMode" field can have one of the following values: MINIMUM, VERTICAL, HORIZONTAL, or DIAGONAL. MINIMUM is the default value and means that the "fieldOfView" value is treated as a minimum FOV (either vertical or horizontal), as in the current specification. When "fovMode" is VERTICAL, HORIZONTAL, or DIAGONAL, the "fieldOfView" value specifies the FOV in the vertical, horizontal, or diagonal direction, respectively.<br />
In addition to the "fovMode" field, the aspect ratio of the FOV of a real camera does not necessarily follow the aspect ratio of the images it produces. To accommodate this, the "aspectRatio" field is introduced, representing the ratio of the vertical FOV to the horizontal FOV (vertical/horizontal).<br />
<br />
<br />
== 5. Use cases ==<br />
<br />
The following example X3D scene shows how a simple AR scene can be described using the proposed nodes.<br />
<pre><br />
...<br />
<CalibratedCameraSensor DEF='camera' /><br />
<br />
<ImageBackdropBackground DEF='bg' /><br />
<ROUTE fromNode='camera' fromField='image' toNode='bg' toField='image'/><br />
<br />
<Viewpoint DEF='arview' position='0 0 0' /><br />
<ROUTE fromNode='camera' fromField='fieldOfView' toNode='arview' toField='fieldOfView'/><br />
<ROUTE fromNode='camera' fromField='fovMode' toNode='arview' toField='fovMode'/><br />
<ROUTE fromNode='camera' fromField='aspectRatio' toNode='arview' toField='aspectRatio'/><br />
<br />
<br />
<TrackingSensor DEF='tracker1' description='OBJECT_FROM_VIEWPOINT' /><br />
<br />
<Transform DEF='tracked_object'><br />
<Shape><br />
<Appearance><Material diffuseColor="1 0 0" /></Appearance><br />
<Box /><br />
</Shape><br />
</Transform><br />
<br />
<ROUTE fromNode='tracker1' fromField='position' toNode='tracked_object' toField='translation'/><br />
<ROUTE fromNode='tracker1' fromField='rotation' toNode='tracked_object' toField='rotation'/><br />
...<br />
</pre></div>Endoverthttps://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review&diff=6638AR Proposal Public Review2013-02-25T17:51:27Z<p>Endovert: </p>
<hr />
<div>-Working draft <br />
<br />
By [[X3D and Augmented Reality|Augmented Reality Working Group]], Web3D Consortium<br />
<br />
Feb 25, 2013<br />
<br />
The Augmented Reality Continuum Working Group (ARC WG) has been developing a proposal for extending X3D to support augmented and mixed reality visualization. As the proposal nears completion, the working group has decided to open it to the public and collect feedback from others, including Web3D members, other working groups, and anyone interested in AR and Web3D technology. The ARC WG welcomes all feedback that helps consolidate the proposal and advance to the next stage of extending the X3D specification to support AR and MR visualization.<br />
<br />
* Reviewing period: Feb 27 ~ Apr 10, 2013<br />
* How to give feedback:<br />
** Use the "discussion" tab on the top of this page to give feedback and start discussions.<br />
** If you prefer e-mails, please mail your feedback to Gun Lee (ARC WG co-chair, endovert[at]postech.ac.kr)<br />
<br />
<br />
= Extending X3D for MR Visualization - Unified Proposal =<br />
<br />
== 1. Introduction ==<br />
<br />
This document gives an overview of the unified proposal for extending the X3D standard to support Mixed Reality (MR) visualization. Mixed Reality includes both Augmented Reality (AR) and Augmented Virtuality (AV). The extension of the X3D standard proposed in this document is based on a comparison of three proposals: two from the Web3D Korea Chapter (KC1 and KC2) and one from InstantReality (IR), Fraunhofer IGD. The details of the comparison can be found on the following public wiki page: http://web3d.org/x3d/wiki/index.php/Comparison_of_X3D_AR_Proposals.<br />
<br />
In this document we focus on the three main components necessary to achieve basic MR visualization: sensors, video stream rendering, and camera calibration. We try to minimize the changes to the current specification, while keeping the solution generic enough to be applied to various future applications besides MR visualization.<br />
<br />
In order to focus on consolidating the fundamental features, we leave the following items/functions from the original proposals as future work.<br />
* High-level events for tracking from proposal KC2<br />
* Supporting color keying in textures from proposal KC1<br />
* Supporting correct occlusion between virtual and physical objects (Ghost object from proposal KC1, and Color Mask + sortKey from IR)<br />
* Supporting generic types of sensors, including those not directly related to AR/MR visualization (direct sensor nodes in IR)<br />
<br />
<br />
== 2. Sensors ==<br />
To achieve MR visualization, sensing the real environment is crucial. Two types of sensors are necessary to support MR visualization: those for acquiring video stream images from a real camera, and those for acquiring motion tracking information of physical objects. While sensors could be generalized to acquire any type of information from the real world, this proposal focuses on these two types, which are crucial for MR visualization.<br />
<br />
This proposal introduces two new nodes, CalibratedCameraSensor and TrackingSensor, which represent interfaces to the sensors essential for MR visualization.<br />
<br />
Since hardware and software setups (including the X3D browser) vary between end users, it is not appropriate to name specific devices or tracking technologies within the scene. In fact, the author of an X3D scene can have no knowledge of what hardware or software is available on the user's side. Therefore, the X3D scene should only include a high-level description of the purpose of the sensor, as a hint to the browser and the user for choosing hardware or software in the user's setup that meets the intended use. The "description" field is used to express this intention. At run time, the browser shows the value of the "description" field to the user through the user interface (e.g. a dialog box), asking the user to choose an appropriate sensor from the list available in the local hardware/software setup. The user chooses the hardware to use for the sensor node, and in this way views the X3D scene with the best sensor option available in his or her environment.<br />
<br />
In addition, the browser can be configured to use specific sensor hardware for sensor nodes that carry certain predefined values in the "description" field. Each type of sensor node can have a different set of predefined values for the description field, and the browser uses these values to automatically map the default sensors preconfigured by the user. Another way to determine default sensors is to keep a history of the sensors chosen by the user: the browser records the mapping between sensors and sensor nodes, and when the same X3D scene is loaded later, it reuses the mapping saved from the previous session.<br />
<br />
Asking the user interactively at run time not only maps appropriate sensors, but also provides a way to validate the use of sensors on the user's device and so avoid privacy issues. Therefore, the browser must always ask the user for confirmation, even if it is able to map the sensors automatically.<br />
<br />
=== 2.1 CalibratedCameraSensor node ===<br />
The CalibratedCameraSensor node provides an interface to a camera device. The main information provided to the X3D scene through this node is the image stream captured by the camera, delivered through the 'image' field. In addition to the image stream, the node also provides the internal parameters of the camera for calibrating the Viewpoint parameters, to achieve correct composition of the MR scene. Four fields (focalPoint, fieldOfView, fovMode, and aspectRatio) provide these parameters, corresponding to those used in the Viewpoint node. Detailed descriptions of each field are given in section 4, where the Viewpoint node is described.<br />
<br />
<pre><br />
CalibratedCameraSensor : X3DSensorNode {<br />
  SFBool   [in,out] enabled     TRUE<br />
  SFNode   [in,out] metadata    NULL [X3DMetadataObject]<br />
  SFBool   [out]    isActive<br />
<br />
  SFString [in,out] description ""<br />
  SFImage  [out]    image<br />
  SFVec2f  [out]    focalPoint<br />
  SFFloat  [out]    fieldOfView<br />
  SFString [out]    fovMode<br />
  SFFloat  [out]    aspectRatio<br />
}<br />
</pre><br />
<br />
The browser should ask the user, through its user interface (e.g. a dialog box), which camera to use for each CalibratedCameraSensor node. The browser shows the value of the "description" field to the user as a hint about what type of camera is expected. Predefined values can be used in the description field to let the browser automatically map the default sensors preconfigured by the user. Table 1 shows the predefined values for the description field of CalibratedCameraSensor nodes; an example follows the table.<br />
<br />
{| border='1' <br />
|+ Table 1. Predefined values for the description field of CalibratedCameraSensor nodes<br />
! Predefined Value !! Description<br />
|-<br />
| USER_FACING<br />
| Camera that is facing towards the user.<br />
|-<br />
| WORLD_FACING<br />
| Camera that is facing towards the user’s view direction.<br />
|}<br />
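<br />
As a sketch of how a predefined value might be used, the following hypothetical scene requests the camera facing in the user's view direction, as suited to video see-through AR on a handheld device:<br />
<br />
<pre><br />
<CalibratedCameraSensor DEF='arCamera' description='WORLD_FACING' /><br />
</pre><br />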
<br />
<br />
=== 2.2 TrackingSensor node ===<br />
The TrackingSensor node provides an interface to motion tracking information. The main information provided by this node is the position and orientation of the tracked physical object, delivered through the 'position' and 'rotation' fields respectively. The 'isPositionAvailable' and 'isRotationAvailable' fields are TRUE when the tracking target is successfully tracked and the value of the 'position' or 'rotation' field, respectively, is valid.<br />
<br />
<pre><br />
TrackingSensor : X3DSensorNode {<br />
  SFBool     [in,out] enabled     TRUE<br />
  SFNode     [in,out] metadata    NULL [X3DMetadataObject]<br />
  SFBool     [out]    isActive<br />
<br />
  SFString   [in,out] description ""<br />
  SFVec3f    [out]    position<br />
  SFRotation [out]    rotation<br />
  SFBool     [out]    isPositionAvailable FALSE<br />
  SFBool     [out]    isRotationAvailable FALSE<br />
}<br />
</pre><br />
<br />
The "description" string field defines the intended use of the tracking sensor and is shown to the user to help them choose the tracking hardware to use. The value should state what kind of object the sensor is intended to track and which reference frame its coordinate system uses. Table 2 shows predefined values for the description field that help the browser automatically map the default sensors preconfigured by the user; an example follows the table.<br />
<br />
{| border='1' <br />
|+ Table 2. Predefined values for the description field of TrackingSensor nodes<br />
! Predefined Value !! Description<br />
|-<br />
| VIEWPOINT_FROM_WORLD<br />
| For tracking the viewpoint relative to the world coordinate frame. (Useful for immersive displays.)<br />
|-<br />
| OBJECT_FROM_WORLD<br />
| For tracking an arbitrary physical object relative to the world coordinate frame. (Useful for immersive displays.)<br />
|-<br />
| OBJECT_FROM_VIEWPOINT<br />
| For tracking an arbitrary physical object relative to the viewpoint coordinate frame. (Useful for computer-vision-based AR tracking systems.)<br />
|}<br />
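<br />
As an example, a computer-vision-based AR setup could request an object tracker relative to the viewpoint, and an immersive display could request a head tracker relative to the world. Both sketches below use the proposed node with hypothetical DEF names:<br />
<br />
<pre><br />
<TrackingSensor DEF='objectTracker' description='OBJECT_FROM_VIEWPOINT' /><br />
<TrackingSensor DEF='headTracker' description='VIEWPOINT_FROM_WORLD' /><br />
</pre><br />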
<br />
== 3. Rendering video stream from camera ==<br />
To visualize an MR scene, the video stream image acquired from a sensor node should be rendered in the X3D scene. For AR visualization, the video stream is rendered as the background of the virtual environment, while for AV visualization it is used as a texture on a virtual object.<br />
<br />
<br />
=== 3.1 Using video stream as a Texture ===<br />
Using the video stream image as a texture requires no extension of the standard. We can use the PixelTexture node, which is already available in the current version of the X3D specification. The video stream image from the CalibratedCameraSensor node's "image" field can be routed to the corresponding field of the PixelTexture node. The following example shows this routing.<br />
<br />
<pre><br />
<CalibratedCameraSensor DEF='camera' /><br />
...<br />
<PixelTexture DEF='tex' /><br />
...<br />
<ROUTE fromNode='camera' fromField='image' toNode='tex' toField='image'/><br />
</pre><br />
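<br />
In a complete scene, the PixelTexture sits inside an Appearance node. The following sketch (with hypothetical DEF names and geometry) maps the live camera image onto a box acting as a virtual screen, as in AV visualization:<br />
<br />
<pre><br />
<CalibratedCameraSensor DEF='camera' description='WORLD_FACING' /><br />
<Shape><br />
  <Appearance><br />
    <PixelTexture DEF='tex' /><br />
  </Appearance><br />
  <Box size='1.6 0.9 0.02' /><br />
</Shape><br />
<ROUTE fromNode='camera' fromField='image' toNode='tex' toField='image'/><br />
</pre><br />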
<br />
<br />
=== 3.2 Using video stream as a Background ===<br />
The Background node in the current X3D specification covers only environmental backgrounds in 3D space. Both the Background and TextureBackground nodes describe the environment around the user's viewpoint, represented as a colored sphere or a textured cube around the user. In both cases the background of the virtual scene is updated according to the user's viewing direction. For AR visualization, however, the background of the virtual scene should always show the video stream from the camera sensor.<br />
While the Background and TextureBackground nodes represent a three-dimensional environmental background around the user, an AR background should act as a two-dimensional backdrop of the viewport on which the 3D scene is rendered. For this purpose we need a new node type representing backgrounds that work as a 2D backdrop of the scene. We propose two new nodes: BackdropBackground and ImageBackdropBackground. Their structures are as follows:<br />
<br />
<pre><br />
BackdropBackground: X3DBackgroundNode {<br />
SFColor [in,out] color<br />
MFString [in,out] url<br />
}<br />
<br />
ImageBackdropBackground: X3DBackgroundNode {<br />
SFColor [in,out] color<br />
SFImage [in,out] image<br />
}<br />
</pre><br />
<br />
While only ImageBackdropBackground is strictly necessary for AR applications, we also define the BackdropBackground node, whose structure parallels that of the Background node by taking its backdrop image from a URL.<br />
Feeding the video stream image from the camera sensor to the ImageBackdropBackground node is achieved by routing the 'image' field of the CalibratedCameraSensor node to the 'image' field of the ImageBackdropBackground node.<br />
<br />
<pre><br />
<CalibratedCameraSensor DEF='camera' /><br />
...<br />
<ImageBackdropBackground DEF='bg' /><br />
...<br />
<ROUTE fromNode='camera' fromField='image' toNode='bg' toField='image'/><br />
</pre><br />
<br />
The ImageBackdropBackground node automatically scales the image, fitting the image's width or height to that of the viewport while retaining the image's aspect ratio. The dimension is chosen so that the background image fills the entire viewport, leaving no blank regions uncovered (any overflow in the other dimension is cropped).<br />
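<br />
The BackdropBackground node can be used in the same position for a static 2D backdrop loaded from a file; a minimal sketch, with a hypothetical file name:<br />
<br />
<pre><br />
<BackdropBackground url='"backdrop.png"' /><br />
</pre><br />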
<br />
<br />
== 4. Camera calibration ==<br />
To ensure that the virtual world appears correctly registered to the real world in the MR scene, the camera parameters of the virtual camera should be calibrated to match those of the real camera. There are two types of camera parameters: internal and external. External parameters are the position and orientation of the camera in the world reference frame, while internal parameters describe the projection of the 3D scene onto a 2D plane to produce the rendered image.<br />
The external parameters of a real camera are measured with tracking sensors, while the internal parameters are determined by the optical characteristics of the real camera. Both can be fed into the X3D scene through the CalibratedCameraSensor and TrackingSensor nodes defined in section 2.<br />
The Viewpoint node in the X3D specification represents a virtual camera in the virtual scene. While the fields of the Viewpoint node cover the full set of external parameters (position and orientation), they cover only limited aspects of the internal parameters. To meet the minimum requirements for MR visualization, we propose adding two new fields (shown at the bottom of the node signature below) to the Viewpoint node.<br />
<br />
<pre><br />
Viewpoint: X3DViewpointNode {<br />
SFVec3f [in,out] centerOfRotation<br />
SFFloat [in,out] fieldOfView<br />
SFRotation [in,out] orientation<br />
SFVec3f [in,out] position<br />
<br />
SFString [in,out] fovMode<br />
SFFloat [in,out] aspectRatio<br />
}<br />
</pre><br />
<br />
In the current X3D specification, the "fieldOfView" field represents the minimum field of view (either vertical or horizontal) that the virtual camera will have. This is insufficient for MR visualization, which needs precise calibration of the field of view (FOV). While the straightforward approach would be to add explicit horizontal and vertical FOV parameters as individual fields, this would break compatibility with the current specification.<br />
To keep backward compatibility, we propose a "fovMode" field that designates what the value of the "fieldOfView" field represents. The "fovMode" field can take one of the following values: MINIMUM, VERTICAL, HORIZONTAL, or DIAGONAL. With the default value MINIMUM, the "fieldOfView" value is interpreted as a minimum FOV (either vertical or horizontal), as in the current specification. When "fovMode" is VERTICAL, HORIZONTAL, or DIAGONAL, "fieldOfView" specifies the exact FOV in the vertical, horizontal, or diagonal direction, respectively.<br />
In addition, the aspect ratio of a real camera's FOV does not necessarily follow the aspect ratio of the image it produces. To accommodate this, the "aspectRatio" field is introduced, representing the ratio of the vertical FOV to the horizontal FOV (vertical/horizontal).<br />
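<br />
For instance, a virtual camera calibrated to a real camera with a horizontal FOV of 0.9 radians and a vertical FOV of 0.675 radians (0.675 / 0.9 = 0.75) could be declared as follows. This is a sketch using the proposed fields; the values are hypothetical:<br />
<br />
<pre><br />
<Viewpoint DEF='calibratedView' position='0 0 0'<br />
    fieldOfView='0.9' fovMode='HORIZONTAL' aspectRatio='0.75' /><br />
</pre><br />
In practice these fields would be routed from a CalibratedCameraSensor node rather than authored by hand, as the use case in section 5 shows.<br />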
<br />
<br />
== 5. Use cases ==<br />
<br />
The following example X3D scene shows how to describe a simple AR scene using the proposed nodes.<br />
<pre><br />
...<br />
<CalibratedCameraSensor DEF='camera' /><br />
<br />
<ImageBackdropBackground DEF='bg' /><br />
<ROUTE fromNode='camera' fromField='image' toNode='bg' toField='image'/><br />
<br />
<Viewpoint DEF='arview' position='0 0 0' /><br />
<ROUTE fromNode='camera' fromField='fieldOfView' toNode='arview' toField='fieldOfView'/><br />
<ROUTE fromNode='camera' fromField='fovMode' toNode='arview' toField='fovMode'/><br />
<ROUTE fromNode='camera' fromField='aspectRatio' toNode='arview' toField='aspectRatio'/><br />
<br />
<TrackingSensor DEF='tracker1' description='OBJECT_FROM_VIEWPOINT' /><br />
<br />
<Transform DEF='tracked_object'><br />
  <Shape><br />
    <Appearance><Material diffuseColor='1 0 0' /></Appearance><br />
    <Box /><br />
  </Shape><br />
</Transform><br />
<br />
<ROUTE fromNode='tracker1' fromField='position' toNode='tracked_object' toField='translation'/><br />
<ROUTE fromNode='tracker1' fromField='rotation' toNode='tracked_object' toField='rotation'/><br />
...<br />
</pre></div>Endoverthttps://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review&diff=6637AR Proposal Public Review2013-02-25T17:49:06Z<p>Endovert: </p>
<hr />
<div>-Working draft <br />
<br />
By [[X3D and Augmented Reality|Augmented Reality Working Group]], Web3D Consortium<br />
<br />
Feb 25, 2013<br />
<br />
The Augmented Reality Continuum Working Group (ARC WG) has been developing a proposal for extending X3D to support augmented and mixed reality visualization. As the proposal nears completion, the working group has decided to open it to the public and collect feedback from others, including Web3D members, other working groups, and anyone interested in AR and Web3D technology. The ARC WG welcomes any feedback that helps consolidate the proposal and advance to the next stage of extending the X3D specification to support AR and MR visualization.<br />
<br />
* Reviewing period: Feb 27 ~ Apr 10, 2013<br />
* How to give feedback:<br />
** Use the "discussion" tab on the top of this page to give feedback and start discussions.<br />
** If you prefer e-mails, please mail your feedback to Gun Lee (ARC WG co-chair, endovert[at]postech.ac.kr)<br />
<br />
<br />
= Extending X3D for MR Visualization - Unified Proposal =<br />
<br />
== 1. Introduction ==<br />
<br />
This document gives an overview of the unified proposal for extending the X3D standard to support Mixed Reality (MR) visualization. Mixed Reality includes both Augmented Reality (AR) and Augmented Virtuality (AV). The extension proposed in this document is based on a comparison of three earlier proposals: two from the Web3D Korea Chapter (KC1 and KC2) and one from InstantReality (IR), Fraunhofer IGD. The details of the comparison can be found on the following public wiki page: http://web3d.org/x3d/wiki/index.php/Comparison_of_X3D_AR_Proposals.<br />
<br />
In this document we focus on the three main components necessary to achieve basic MR visualization: sensors, video stream rendering, and camera calibration. We try to minimize changes to the current specification while keeping the solution generic enough to apply to various future applications beyond MR visualization.<br />
<br />
In order to focus on consolidating the fundamental features, we defer the following items from the original proposals to future work:<br />
* High-level tracking events (proposal KC2)<br />
* Color keying in textures (proposal KC1)<br />
* Correct occlusion between virtual and physical objects (Ghost object in proposal KC1; Color Mask + sortKey in IR)<br />
* Generic sensor types, including those not directly related to AR/MR visualization (direct sensor nodes in IR)<br />
<br />
<br />
== 2. Sensors ==<br />
To achieve MR visualization, sensing the real environment is crucial. Two types of sensor that are necessary to support MR visualization are those for acquiring video stream images from a real camera and motion tracking information of physical objects. While the sensors could be generalized to acquiring any type of information from the real world, in this proposal, we focus on these two sensors that are crucial for MR visualization.<br />
<br />
In this proposal, two new nodes, CalibratedCameraSensor and TrackingSensor nodes, are proposed for representing interfaces for sensors that are essential for MR visualization.<br />
<br />
Since different end users view the X3D scene on different browsers and have different setups of hardware and software, it is not appropriate to describe specific devices or tracking technology to use within the scene. In fact, the author of the X3D scene can have no knowledge of what kind of hardware or software setup is available on the user’s side. Therefore, the X3D scene should only include the high-level description of the purpose of the use of sensor, in order to give hint to the browser and the user to choose appropriate hardware or software on the user’s setup that could meet the intended use. The “description” field is used to describe such intention of using the sensor. At run-time, the browser will show the value of the "description" field to the user through user interface (e.g. a dialog box), asking to choose an appropriate one from the list of sensors available in the local hardware/software setup. The user chooses the appropriate hardware to use for the sensor node, and in this way, users can view the X3D scene with the best option of hardware/software sensors available in his/her environment. <br />
<br />
In addition, the browser can be configured to use specific sensor hardware for the sensor nodes with certain predefined values in the “description” field. Each type of sensor node can have different set of predefined values for the description field, and these values are used by the browser to automatically map the default sensors that are preconfigured by the user. Another way to determine default sensors to use is to keep the history of the sensors chosen by the user. The browser can record the mapping of the sensors and sensor nodes chosen by the user, and when the same X3D scene is loaded later, the browser can use the mapping saved from the previous instance.<br />
<br />
Asking the user interactively in run-time is not only for mapping appropriate sensors, but also provides a method for validating the use of sensors on the user's device to avoid privacy issues. Therefore, the browser must always ask the user for confirmation, even if the browser is able to automatically map the sensors.<br />
<br />
=== 2.1 CalibratedCameraSensor node ===<br />
The CalibratedCameraSensor node provides an interface to a camera device. The main information provided into the X3D scene through this node is an image stream captured with the camera. The ‘image’ field of the node provides the image stream captured with the camera device. In addition to the image stream, the node should also provide internal parameters of the camera for calibration of the Viewpoint parameters to achieve correct composition of the MR scene. Four fields (focalPoint, fieldOfView, fovMode, and aspectRatio) provides such parameters that corresponds to those fields used in the Viewpoint node. The detailed description of each field is in section 3 where the Viewpoint node is described.<br />
<br />
<pre><br />
CalibratedCameraSensor : X3DSensorNode {<br />
SFBool [in,out] enabled TRUE<br />
SFNode [in,out] metadata NULL [X3DMetadataObject]<br />
SFBool [out] isActive<br />
<br />
SFString [in,out] description ""<br />
SFImage [out] image <br />
SFVec2f [out] focalPoint<br />
SFFloat [out] fieldOfView<br />
SFString [out] fovMode<br />
SFFloat [out] aspectRatio<br />
SFBool [in, out] isActive FALSE<br />
}<br />
</pre><br />
<br />
The browser should ask the user to choose which camera to use for each CalibratedCameraSensor node through the user interface (e.g. a dialog box). The browser will show the value of the "description" field to the user, providing hint on what type of camera is expected for use. Predefined values can be used in the description field to let the browser to automatically map the default sensors preconfigured by the user. Table 1 shows the predefined values for the description field of CalibratedCameraSensor nodes.<br />
<br />
{| border='1' <br />
|+ Table 1. Predefined values for the description field of CalibratedCameraSensor nodes<br />
! Predefined Value !! Description<br />
|-<br />
| USER_FACING<br />
| Camera that is facing towards the user.<br />
|-<br />
| WORLD_FACING<br />
| Camera that is facing towards the user’s view direction.<br />
|}<br />
<br />
<br />
=== 2.2 TrackingSensor node ===<br />
The TrackingSensor node provides an interface for motion tracking information. The main information provided in this node is position and orientation of the tracked physical object. These values are provided through ‘position’ and ‘rotation’ fields respectively. The ‘isPositionAvailable’ and ‘isRotationAvailable’ fields are TRUE if the tracking target is successfully tracked and the values of the ‘position’ or ‘rotation’ field is valid.<br />
<br />
<pre><br />
TrackingSensor : X3DSensorNode {<br />
SFBool [in,out] enabled TRUE<br />
SFNode [in,out] metadata NULL [X3DMetadataObject]<br />
SFBool [out] isActive<br />
<br />
SFString [in,out] description ""<br />
SFVec3f [out] position<br />
SFRotation [out] rotation<br />
SFBool [out] isPositionAvailable FALSE<br />
SFBool [out] isRotationAvailable FALSE<br />
SFBool [in, out] isActive FALSE<br />
}<br />
</pre><br />
<br />
The “description” string field defines the intended use of the tracking sensor, which will be provided to the user to help choosing the tracking hardware to use. The value should include what kind of object the tracking sensor is intended to track, and what reference frame it is using for the coordinate system. Table 2 shows predefined values for the description field that can help the browser to automatically map default sensors that are preconfigured by the user.<br />
<br />
{| border='1' <br />
|+ Table 2. Predefined values for the description field of TrackingSensor nodes<br />
! Predefined Value !! Description<br />
|-<br />
| VIEWPOINT_FROM_WORLD<br />
| For tracking viewpoint relative to the world coordinate. (Useful in immersive displays.)<br />
|-<br />
| OBJECT_FROM_WORLD<br />
| For tracking an arbitrary physical object relative to the world coordinate. (Useful in immersive displays.)<br />
|-<br />
| OBJECT_FROM_VIEWPOINT<br />
| For tracking an arbitrary physical object relative to the viewpoint coordinate. (Useful in computer vision based AR tracking system.)<br />
|}<br />
<br />
== 3. Rendering video stream from camera ==<br />
To visualize a MR scene, the video stream image acquired from a sensor node should be rendered in the X3D scene. For AR visualization, the video stream should be rendered as a background of the virtual environment, while in the AV visualization, the video stream is used as a texture of a virtual object.<br />
<br />
<br />
=== 3.1 Using video stream as a Texture ===<br />
For using the video stream image as a texture, no extension of the standard is needed. We can use the PixelTexture node, which is already available in the current version of the X3D specification. The video stream image from the CalibratedCameraSensor node’s “image” field can be routed to the corresponding field of the PixelTexture node. The following example shows how this routing works.<br />
<br />
<pre><br />
<CalibratedCameraSensor DEF=”camera” /><br />
...<br />
<PixelTexture DEF=”tex” /><br />
...<br />
<ROUTE fromNode='camera' fromField='image' toNode='tex' toField='image'/><br />
</pre><br />
<br />
<br />
=== 3.2 Using video stream as a Background ===<br />
The Background node in the current X3D specification covers only environmental backgrounds in 3D space. Both Background and TextureBackground nodes describe environment around the user’s viewpoint, represented as a colored sphere or a textured cube around the user . In both cases the background of the virtual scene gets updated depending on the viewing direction of the user. However, for AR visualization, the background of the virtual scene should always show the video stream from the camera sensor.<br />
While the Background node and TextureBackground node represent a three dimensional environmental background around the user, the AR background should work as a two dimensional backdrop of the viewport where the 3D scene is rendered on. For this purpose we need a new node type that could represent these kinds of background that work as a 2D backdrop of the scene. We propose two new nodes for this purpose: BackdropBackground and ImageBackdropBackground nodes. The node structure of these nodes are described as the following:<br />
<br />
<pre><br />
BackdropBackground: X3DBackgroundNode {<br />
SFColor [in,out] color<br />
MFString [in,out] url<br />
}<br />
<br />
ImageBackdropBackground: X3DBackgroundNode {<br />
SFColor [in,out] color<br />
SFImage [in,out] image<br />
}<br />
</pre><br />
<br />
While only ImageBackdropBackground is necessary for AR application, we also define BackdropBackground node as a another node that corresponds to the node structure of the Background node.<br />
Feeding the video stream image from the camera sensor to the ImageBackdropBackround node can be achieved by routing the ‘image’ field of the CalibratedCameraSensor node to the ‘image’ field of the ImageBackdropBackground node.<br />
<br />
<pre><br />
<CalibratedCameraSensor DEF=”camera” /><br />
...<br />
<ImageBackdropBackground DEF=”bg” /><br />
...<br />
<ROUTE fromNode='camera' fromField='image' toNode=bg toField='image'/><br />
</pre><br />
<br />
The ImageBackdropBackround will automatically scale the image and fit the width or height of the image to that of the viewport while retaining the aspect ratio. As a result, the background image will fill the entire viewport so that there are no blank region left uncovered by the image background.<br />
<br />
<br />
== 4. Camera calibration ==<br />
To assure the virtual world appears correctly registered to the real world in the MR scene, the camera parameters of the virtual camera should be calibrated to match those of the real camera. There are two types of camera parameters: internal and external parameters. External parameters are position and orientation of the camera in the world reference frame, while the internal parameters represent the projection of the 3D scene onto a 2D plane to produce a rendered image of the 3D scene. <br />
The external parameters of a real camera is measured with tracking sensors, while the internal parameters are defined from the optical features of the real camera. The internal and external parameters of the real camera can be fed into the X3D scene through the CalibratedCameraSensor node and TrackingSensor node defined in section 2.<br />
The Viewpoint node in the X3D specification represents a virtual camera in the virtual scene. While the fields of the Viewpoint node cover the full set of external parameters (position and orientation), it only has fields that cover limited aspects of the internal parameters. To meet the minimum requirements for achieving MR visualization, we propose adding the two new fields (at bottom) to the Viewpoint node.<br />
<br />
<pre><br />
Viewpoint: X3DViewpointNode {<br />
SFVec3f [in,out] centerOfRotation<br />
SFFloat [in,out] fieldOfView<br />
SFRotation [in,out] orientation<br />
SFVec3f [in,out] position<br />
<br />
SFString [in,out] fovMode<br />
SFFloat [in,out] aspectRatio<br />
}<br />
</pre><br />
<br />
In the current X3D specification, the “fieldOfView” field represents minimum field of view (either vertical or horizontal) that the virtual camera will have. This is insufficient for MR visualization, which needs precise calibration of the field of view (FOV) parameter. While the straightforward way would be explicitly having both horizontal and vertical FOV parameters as individual fields, this is not compatible with the current specification. <br />
In order to keep backward compatibility with the current specification, we propose having a “fovMode” field which designates what does the value of the “fieldOfView” field represent. The “fovMode” field can have one of the following values: MINIMUM, VERTICAL, HORIZONTAL, or DIAGONAL. The value MINIMUM is the default value for the “fovMode” field which represents the value of the “fieldOfView” is considered as a minimum FOV (either vertical or horizontal), as it is in the current specification. When the “fovMode” field has the value of VERTICAL, HORIZONTAL, or DIAGONAL, the “fieldOfView” is considered as specific values of FOV in vertical, horizontal, or diagonal direction, respectively.<br />
In addition to the “fovMode” field, the aspect ratio of the FOV in real cameras might not necessarily follow the aspect ratio of the image size it produces. To accommodate this feature, the “aspectRatio” field is introduced which represents the ratio of vertical FOV to the horizontal FOV (vertical/horizontal).<br />
<br />
<br />
== 5. Use cases ==<br />
<br />
The following example X3D scene shows how to describe a simple AR scene using the proposed nodes.<br />
<pre><br />
...<br />
<CalibratedCameraSensor DEF=”camera” /><br />
<br />
<ImageBackdropBackground DEF=”bg” /><br />
<ROUTE fromNode=”camera” fromField=”value” toNode=”bg” toField=”image”/><br />
<br />
<Viewpoint DEF=”arview” position=”0 0 0” /><br />
<ROUTE fromNode=”camera” fromField=”fieldOfView” toNode=”arview” toField=”fieldOfView”/><br />
<ROUTE fromNode=”camera” fromField=”fovMode” toNode=”arview” toField=”fovMode”/><br />
<ROUTE fromNode=”camera” fromField=”aspectRatio” toNode=”arview” toField=”aspectRatio”/><br />
<br />
<br />
<TrackingSensor DEF=”tracker1” purpose=”urn:web3d:tracking_sensor_purpose:object_to_viewpoint” /><br />
<br />
<Transform DEF=”tracked_object”> <br />
<Shape><br />
<Appearance><Material diffuseColor="1 0 0" /></Appearance> <br />
<Box /> <br />
</Shape> <br />
</Transform> <br />
<br />
<ROUTE fromNode=”tracker1” fromField=”position” toNode=”tracked_object” toField=”position”/><br />
<ROUTE fromNode=”tracker1” fromField=”rotation” toNode=”tracked_object” toField=”rotation”/><br />
...<br />
</pre></div>Endoverthttps://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review&diff=6636AR Proposal Public Review2013-02-25T17:47:54Z<p>Endovert: </p>
<hr />
<div>-Working draft <br />
<br />
By [[X3D and Augmented Reality|Augmented Reality Working Group]], Web3D Consortium<br />
<br />
Feb 25, 2013<br />
<br />
The Augmented Reality Continuum Working Group (ARC WG) has been developing a proposal for extending X3D to support augmented and mixed reality visualization. As the proposal is reaching the state of its completion, the working group has decided to open the proposal into public and collect feedbacks from others, including our Web3D members, other working groups, and generally from anyone who is interested in AR and Web3D technology. The ARC WG would like to welcome all kinds of feedback that would be helpful to consolidate the proposal and advance into next level of extending the X3D specification to support AR and MR visualization.<br />
<br />
* Reviewing period: Feb 27 ~ Apr 10, 2013<br />
* How to give feedback:<br />
** Use the "discussion" tab on the top of this page to give feedback and start discussions.<br />
** If you prefer e-mails, please mail your feedback to Gun Lee (ARC WG co-chair, endovert[at]postech.ac.kr)<br />
<br />
<br />
= Extending X3D for MR Visualization - Unified Proposal =<br />
<br />
== 1. Introduction ==<br />
<br />
This document describes an overview of the unified proposal for extending the X3D standard to support Mixed Reality (MR) visualization. Mixed Reality includes both, Augmented Reality (AR) and Augmented Virtuality (AV). The extension of the X3D standard proposed in this document is based on the comparison of three proposals: two from Web3D Korea Chapter (KC1 and KC2) and one from InstantReality (IR), Fraunhofer IGD. The details of the comparison can be found in the following public wiki page: http://web3d.org/x3d/wiki/index.php/Comparison_of_X3D_AR_Proposals.<br />
<br />
In this document we focus on the three main components that are necessary to achieve basic MR visualization: sensors, video stream rendering, and camera calibration. We try to minimize the changes to the current specification, but also try to make the solution to be generic enough so that it could be applied to various future applications besides MR visualization.<br />
<br />
In order to focus on consolidating the fundamental features, we leave out the following items/functions from the original proposals as future work.<br />
* High-level events for tracking from proposal KC2<br />
* Supporting color keying in texture from proposal KC1<br />
* Supporting correct occlusion between virtual and physical objects (Ghost object from Proposal KC1 and Color Mask + sortKey from IR)<br />
* Supporting generic type of sensors including those are not directly related to AR/MR visualization (Direct sensor nodes in IR)<br />
<br />
<br />
== 2. Sensors ==<br />
To achieve MR visualization, sensing the real environment is crucial. Two types of sensor that are necessary to support MR visualization are those for acquiring video stream images from a real camera and motion tracking information of physical objects. While the sensors could be generalized to acquiring any type of information from the real world, in this proposal, we focus on these two sensors that are crucial for MR visualization.<br />
<br />
Two new nodes, CalibratedCameraSensor and TrackingSensor nodes, are proposed for representing interfaces for these types of sensors.<br />
<br />
Since different end users view the X3D scene on different browsers and have different setups of hardware and software, it is not appropriate to describe specific devices or tracking technology to use within the scene. In fact, the author of the X3D scene can have no knowledge of what kind of hardware or software setup is available on the user’s side. Therefore, the X3D scene should only include the high-level description of the purpose of the use of sensor, in order to give hint to the browser and the user to choose appropriate hardware or software on the user’s setup that could meet the intended use. The “description” field is used to describe such intention of using the sensor. At run-time, the browser will show the value of the "description" field to the user through user interface (e.g. a dialog box), asking to choose an appropriate one from the list of sensors available in the local hardware/software setup. The user chooses the appropriate hardware to use for the sensor node, and in this way, users can view the X3D scene with the best option of hardware/software sensors available in his/her environment. <br />
<br />
In addition, the browser can be configured to use specific sensor hardware for the sensor nodes with certain predefined values in the “description” field. Each type of sensor node can have different set of predefined values for the description field, and these values are used by the browser to automatically map the default sensors that are preconfigured by the user. Another way to determine default sensors to use is to keep the history of the sensors chosen by the user. The browser can record the mapping of the sensors and sensor nodes chosen by the user, and when the same X3D scene is loaded later, the browser can use the mapping saved from the previous instance.<br />
<br />
Asking the user interactively in run-time is not only for mapping appropriate sensors, but also provides a method for validating the use of sensors on the user's device to avoid privacy issues. Therefore, the browser must always ask the user for confirmation, even if the browser is able to automatically map the sensors.<br />
<br />
=== 2.1 CalibratedCameraSensor node ===<br />
The CalibratedCameraSensor node provides an interface to a camera device. The main information provided into the X3D scene through this node is an image stream captured with the camera. The ‘image’ field of the node provides the image stream captured with the camera device. In addition to the image stream, the node should also provide internal parameters of the camera for calibration of the Viewpoint parameters to achieve correct composition of the MR scene. Four fields (focalPoint, fieldOfView, fovMode, and aspectRatio) provides such parameters that corresponds to those fields used in the Viewpoint node. The detailed description of each field is in section 3 where the Viewpoint node is described.<br />
<br />
<pre><br />
CalibratedCameraSensor : X3DSensorNode {<br />
SFBool [in,out] enabled TRUE<br />
SFNode [in,out] metadata NULL [X3DMetadataObject]<br />
SFBool [out] isActive<br />
<br />
SFString [in,out] description ""<br />
SFImage [out] image <br />
SFVec2f [out] focalPoint<br />
SFFloat [out] fieldOfView<br />
SFString [out] fovMode<br />
SFFloat [out] aspectRatio<br />
SFBool [in, out] isActive FALSE<br />
}<br />
</pre><br />
<br />
The browser should ask the user to choose which camera to use for each CalibratedCameraSensor node through the user interface (e.g. a dialog box). The browser will show the value of the "description" field to the user, providing hint on what type of camera is expected for use. Predefined values can be used in the description field to let the browser to automatically map the default sensors preconfigured by the user. Table 1 shows the predefined values for the description field of CalibratedCameraSensor nodes.<br />
<br />
{| border='1' <br />
|+ Table 1. Predefined values for the description field of CalibratedCameraSensor nodes<br />
! Predefined Value !! Description<br />
|-<br />
| USER_FACING<br />
| Camera that is facing towards the user.<br />
|-<br />
| WORLD_FACING<br />
| Camera that is facing towards the user’s view direction.<br />
|}<br />
<br />
<br />
=== 2.2 TrackingSensor node ===<br />
The TrackingSensor node provides an interface for motion tracking information. The main information provided in this node is position and orientation of the tracked physical object. These values are provided through ‘position’ and ‘rotation’ fields respectively. The ‘isPositionAvailable’ and ‘isRotationAvailable’ fields are TRUE if the tracking target is successfully tracked and the values of the ‘position’ or ‘rotation’ field is valid.<br />
<br />
<pre><br />
TrackingSensor : X3DSensorNode {<br />
SFBool [in,out] enabled TRUE<br />
SFNode [in,out] metadata NULL [X3DMetadataObject]<br />
SFBool [out] isActive<br />
<br />
SFString [in,out] description ""<br />
SFVec3f [out] position<br />
SFRotation [out] rotation<br />
SFBool [out] isPositionAvailable FALSE<br />
SFBool [out] isRotationAvailable FALSE<br />
SFBool [in, out] isActive FALSE<br />
}<br />
</pre><br />
<br />
The “description” string field defines the intended use of the tracking sensor, which will be provided to the user to help choosing the tracking hardware to use. The value should include what kind of object the tracking sensor is intended to track, and what reference frame it is using for the coordinate system. Table 2 shows predefined values for the description field that can help the browser to automatically map default sensors that are preconfigured by the user.<br />
<br />
{| border='1' <br />
|+ Table 2. Predefined values for the description field of TrackingSensor nodes<br />
! Predefined Value !! Description<br />
|-<br />
| VIEWPOINT_FROM_WORLD<br />
| For tracking viewpoint relative to the world coordinate. (Useful in immersive displays.)<br />
|-<br />
| OBJECT_FROM_WORLD<br />
| For tracking an arbitrary physical object relative to the world coordinate. (Useful in immersive displays.)<br />
|-<br />
| OBJECT_FROM_VIEWPOINT<br />
| For tracking an arbitrary physical object relative to the viewpoint coordinate. (Useful in computer vision based AR tracking system.)<br />
|}<br />
<br />
== 3. Rendering video stream from camera ==<br />
To visualize a MR scene, the video stream image acquired from a sensor node should be rendered in the X3D scene. For AR visualization, the video stream should be rendered as a background of the virtual environment, while in the AV visualization, the video stream is used as a texture of a virtual object.<br />
<br />
<br />
=== 3.1 Using video stream as a Texture ===<br />
For using the video stream image as a texture, no extension of the standard is needed. We can use the PixelTexture node, which is already available in the current version of the X3D specification. The video stream image from the CalibratedCameraSensor node’s “image” field can be routed to the corresponding field of the PixelTexture node. The following example shows how this routing works.<br />
<br />
<pre><br />
<CalibratedCameraSensor DEF=”camera” /><br />
...<br />
<PixelTexture DEF=”tex” /><br />
...<br />
<ROUTE fromNode='camera' fromField='image' toNode='tex' toField='image'/><br />
</pre><br />
<br />
<br />
=== 3.2 Using video stream as a Background ===<br />
The Background node in the current X3D specification covers only environmental backgrounds in 3D space. Both Background and TextureBackground nodes describe environment around the user’s viewpoint, represented as a colored sphere or a textured cube around the user . In both cases the background of the virtual scene gets updated depending on the viewing direction of the user. However, for AR visualization, the background of the virtual scene should always show the video stream from the camera sensor.<br />
While the Background node and TextureBackground node represent a three dimensional environmental background around the user, the AR background should work as a two dimensional backdrop of the viewport where the 3D scene is rendered on. For this purpose we need a new node type that could represent these kinds of background that work as a 2D backdrop of the scene. We propose two new nodes for this purpose: BackdropBackground and ImageBackdropBackground nodes. The node structure of these nodes are described as the following:<br />
<br />
<pre><br />
BackdropBackground: X3DBackgroundNode {<br />
SFColor [in,out] color<br />
MFString [in,out] url<br />
}<br />
<br />
ImageBackdropBackground: X3DBackgroundNode {<br />
SFColor [in,out] color<br />
SFImage [in,out] image<br />
}<br />
</pre><br />
<br />
While only ImageBackdropBackground is necessary for AR application, we also define BackdropBackground node as a another node that corresponds to the node structure of the Background node.<br />
Feeding the video stream image from the camera sensor to the ImageBackdropBackround node can be achieved by routing the ‘image’ field of the CalibratedCameraSensor node to the ‘image’ field of the ImageBackdropBackground node.<br />
<br />
<pre><br />
<CalibratedCameraSensor DEF=”camera” /><br />
...<br />
<ImageBackdropBackground DEF=”bg” /><br />
...<br />
<ROUTE fromNode='camera' fromField='image' toNode=bg toField='image'/><br />
</pre><br />
<br />
The ImageBackdropBackround will automatically scale the image and fit the width or height of the image to that of the viewport while retaining the aspect ratio. As a result, the background image will fill the entire viewport so that there are no blank region left uncovered by the image background.<br />
<br />
<br />
== 4. Camera calibration ==<br />
To assure the virtual world appears correctly registered to the real world in the MR scene, the camera parameters of the virtual camera should be calibrated to match those of the real camera. There are two types of camera parameters: internal and external parameters. External parameters are position and orientation of the camera in the world reference frame, while the internal parameters represent the projection of the 3D scene onto a 2D plane to produce a rendered image of the 3D scene. <br />
The external parameters of a real camera is measured with tracking sensors, while the internal parameters are defined from the optical features of the real camera. The internal and external parameters of the real camera can be fed into the X3D scene through the CalibratedCameraSensor node and TrackingSensor node defined in section 2.<br />
The Viewpoint node in the X3D specification represents a virtual camera in the virtual scene. While the fields of the Viewpoint node cover the full set of external parameters (position and orientation), it only has fields that cover limited aspects of the internal parameters. To meet the minimum requirements for achieving MR visualization, we propose adding the two new fields (at bottom) to the Viewpoint node.<br />
<br />
<pre><br />
Viewpoint: X3DViewpointNode {<br />
SFVec3f [in,out] centerOfRotation<br />
SFFloat [in,out] fieldOfView<br />
SFRotation [in,out] orientation<br />
SFVec3f [in,out] position<br />
<br />
SFString [in,out] fovMode<br />
SFFloat [in,out] aspectRatio<br />
}<br />
</pre><br />
<br />
In the current X3D specification, the “fieldOfView” field represents minimum field of view (either vertical or horizontal) that the virtual camera will have. This is insufficient for MR visualization, which needs precise calibration of the field of view (FOV) parameter. While the straightforward way would be explicitly having both horizontal and vertical FOV parameters as individual fields, this is not compatible with the current specification. <br />
In order to keep backward compatibility with the current specification, we propose having a “fovMode” field which designates what does the value of the “fieldOfView” field represent. The “fovMode” field can have one of the following values: MINIMUM, VERTICAL, HORIZONTAL, or DIAGONAL. The value MINIMUM is the default value for the “fovMode” field which represents the value of the “fieldOfView” is considered as a minimum FOV (either vertical or horizontal), as it is in the current specification. When the “fovMode” field has the value of VERTICAL, HORIZONTAL, or DIAGONAL, the “fieldOfView” is considered as specific values of FOV in vertical, horizontal, or diagonal direction, respectively.<br />
In addition to the “fovMode” field, the aspect ratio of the FOV in real cameras might not necessarily follow the aspect ratio of the image size it produces. To accommodate this feature, the “aspectRatio” field is introduced which represents the ratio of vertical FOV to the horizontal FOV (vertical/horizontal).<br />
<br />
<br />
== 5. Use cases ==<br />
<br />
The following example X3D scene shows how to describe a simple AR scene using the proposed nodes.<br />
<pre><br />
...<br />
<CalibratedCameraSensor DEF=”camera” /><br />
<br />
<ImageBackdropBackground DEF=”bg” /><br />
<ROUTE fromNode=”camera” fromField=”value” toNode=”bg” toField=”image”/><br />
<br />
<Viewpoint DEF=”arview” position=”0 0 0” /><br />
<ROUTE fromNode=”camera” fromField=”fieldOfView” toNode=”arview” toField=”fieldOfView”/><br />
<ROUTE fromNode=”camera” fromField=”fovMode” toNode=”arview” toField=”fovMode”/><br />
<ROUTE fromNode=”camera” fromField=”aspectRatio” toNode=”arview” toField=”aspectRatio”/><br />
<br />
<br />
<TrackingSensor DEF=”tracker1” purpose=”urn:web3d:tracking_sensor_purpose:object_to_viewpoint” /><br />
<br />
<Transform DEF=”tracked_object”> <br />
<Shape><br />
<Appearance><Material diffuseColor="1 0 0" /></Appearance> <br />
<Box /> <br />
</Shape> <br />
</Transform> <br />
<br />
<ROUTE fromNode=”tracker1” fromField=”position” toNode=”tracked_object” toField=”position”/><br />
<ROUTE fromNode=”tracker1” fromField=”rotation” toNode=”tracked_object” toField=”rotation”/><br />
...<br />
</pre></div>Endoverthttps://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review&diff=6635AR Proposal Public Review2013-02-25T17:46:56Z<p>Endovert: </p>
<hr />
<div>-Working draft <br />
<br />
By [[X3D and Augmented Reality|Augmented Reality Working Group]], Web3D Consortium<br />
<br />
Feb 25, 2013<br />
<br />
The Augmented Reality Continuum Working Group (ARC WG) has been developing a proposal for extending X3D to support augmented and mixed reality visualization. As the proposal is reaching the state of its completion, the working group has decided to open the proposal into public and collect feedbacks from others, including our Web3D members, other working groups, and generally from anyone who is interested in AR and Web3D technology. The ARC WG would like to welcome all kinds of feedback that would be helpful to consolidate the proposal and advance into next level of extending the X3D specification to support AR and MR visualization.<br />
<br />
* Reviewing period: Feb 27 ~ Apr 10, 2013<br />
* How to give feedback:<br />
** Use the "discussion" tab on the top of this page to give feedback and start discussions.<br />
** If you prefer e-mails, please mail your feedback to Gun Lee (ARC WG co-chair, endovert[at]postech.ac.kr)<br />
<br />
<br />
= Extending X3D for MR Visualization - Unified Proposal =<br />
<br />
== 1. Introduction ==<br />
<br />
This document describes an overview of the unified proposal for extending the X3D standard to support Mixed Reality (MR) visualization. Mixed Reality includes both, Augmented Reality (AR) and Augmented Virtuality (AV). The extension of the X3D standard proposed in this document is based on the comparison of three proposals: two from Web3D Korea Chapter (KC1 and KC2) and one from InstantReality (IR), Fraunhofer IGD. The details of the comparison can be found in the following public wiki page: http://web3d.org/x3d/wiki/index.php/Comparison_of_X3D_AR_Proposals.<br />
<br />
In this document we focus on the three main components that are necessary to achieve basic MR visualization: sensors, video stream rendering, and camera calibration. We try to minimize the changes to the current specification, but also try to make the solution to be generic enough so that it could be applied to various future applications besides MR visualization.<br />
<br />
In order to focus on consolidating the fundamental features, we leave out the following items/functions from the original proposals as future work.<br />
* High-level events for tracking from proposal KC2<br />
* Supporting color keying in texture from proposal KC1<br />
* Supporting correct occlusion between virtual and physical objects (Ghost object from Proposal KC1 and Color Mask + sortKey from IR)<br />
* Supporting generic type of sensors including those are not directly related to AR/MR visualization (Direct sensor nodes in IR)<br />
<br />
<br />
== 2. Sensors ==<br />
To achieve MR visualization, sensing the real environment is crucial. Two types of sensor that are necessary to support MR visualization are those for acquiring video stream images from a real camera and motion tracking information of physical objects. While the sensors could be generalized to acquiring any type of information from the real world, in this proposal, we focus on these two sensors that are crucial for MR visualization.<br />
<br />
Two new nodes, CalibratedCameraSensor and TrackingSensor nodes, are proposed for representing interfaces for these types of sensors.<br />
<br />
Since different end users view the X3D scene on different browsers and have different setups of hardware and software, it is not appropriate to describe specific devices or tracking technology to use within the scene. In fact, the author of the X3D scene can have no knowledge of what kind of hardware or software setup is available on the user’s side. Therefore, the X3D scene should only include the high-level description of the purpose of the use of sensor, in order to give hint to the browser and the user to choose appropriate hardware or software on the user’s setup that could meet the intended use. The “description” field is used to describe such intention of using the sensor. At run-time, the browser will show the value of the "description" field to the user through user interface (e.g. a dialog box), asking to choose an appropriate one from the list of sensors available in the local hardware/software setup. The user chooses the appropriate hardware to use for the sensor node, and in this way, users can view the X3D scene with the best option of hardware/software sensors available in his/her environment. <br />
In addition, the browser can be configured to use specific sensor hardware for the sensor nodes with certain predefined values in the “description” field. Each type of sensor node can have different set of predefined values for the description field, and these values are used by the browser to automatically map the default sensors that are preconfigured by the user. Another way to determine default sensors to use is to keep the history of the sensors chosen by the user. The browser can record the mapping of the sensors and sensor nodes chosen by the user, and when the same X3D scene is loaded later, the browser can use the mapping saved from the previous instance.<br />
Asking the user interactively in run-time is not only for mapping appropriate sensors, but also provides a method for validating the use of sensors on the user's device to avoid privacy issues. Therefore, the browser must always ask the user for confirmation, even if the browser is able to automatically map the sensors.<br />
<br />
=== 2.1 CalibratedCameraSensor node ===<br />
The CalibratedCameraSensor node provides an interface to a camera device. The main information provided into the X3D scene through this node is an image stream captured with the camera. The ‘image’ field of the node provides the image stream captured with the camera device. In addition to the image stream, the node should also provide internal parameters of the camera for calibration of the Viewpoint parameters to achieve correct composition of the MR scene. Four fields (focalPoint, fieldOfView, fovMode, and aspectRatio) provides such parameters that corresponds to those fields used in the Viewpoint node. The detailed description of each field is in section 3 where the Viewpoint node is described.<br />
<br />
<pre><br />
CalibratedCameraSensor : X3DSensorNode {<br />
SFBool [in,out] enabled TRUE<br />
SFNode [in,out] metadata NULL [X3DMetadataObject]<br />
SFBool [out] isActive<br />
<br />
SFString [in,out] description ""<br />
SFImage [out] image <br />
SFVec2f [out] focalPoint<br />
SFFloat [out] fieldOfView<br />
SFString [out] fovMode<br />
SFFloat [out] aspectRatio<br />
SFBool [in, out] isActive FALSE<br />
}<br />
</pre><br />
<br />
The browser should ask the user to choose which camera to use for each CalibratedCameraSensor node through the user interface (e.g. a dialog box). The browser will show the value of the "description" field to the user, providing hint on what type of camera is expected for use. Predefined values can be used in the description field to let the browser to automatically map the default sensors preconfigured by the user. Table 1 shows the predefined values for the description field of CalibratedCameraSensor nodes.<br />
<br />
{| border='1' <br />
|+ Table 1. Predefined values for the description field of CalibratedCameraSensor nodes<br />
! Predefined Value !! Description<br />
|-<br />
| USER_FACING<br />
| Camera that is facing towards the user.<br />
|-<br />
| WORLD_FACING<br />
| Camera that is facing towards the user’s view direction.<br />
|}<br />
<br />
<br />
=== 2.2 TrackingSensor node ===<br />
The TrackingSensor node provides an interface for motion tracking information. The main information provided in this node is position and orientation of the tracked physical object. These values are provided through ‘position’ and ‘rotation’ fields respectively. The ‘isPositionAvailable’ and ‘isRotationAvailable’ fields are TRUE if the tracking target is successfully tracked and the values of the ‘position’ or ‘rotation’ field is valid.<br />
<br />
<pre><br />
TrackingSensor : X3DSensorNode {<br />
SFBool [in,out] enabled TRUE<br />
SFNode [in,out] metadata NULL [X3DMetadataObject]<br />
SFBool [out] isActive<br />
<br />
SFString [in,out] description ""<br />
SFVec3f [out] position<br />
SFRotation [out] rotation<br />
SFBool [out] isPositionAvailable FALSE<br />
SFBool [out] isRotationAvailable FALSE<br />
SFBool [in, out] isActive FALSE<br />
}<br />
</pre><br />
<br />
The “description” string field defines the intended use of the tracking sensor, which will be provided to the user to help choosing the tracking hardware to use. The value should include what kind of object the tracking sensor is intended to track, and what reference frame it is using for the coordinate system. Table 2 shows predefined values for the description field that can help the browser to automatically map default sensors that are preconfigured by the user.<br />
<br />
{| border='1' <br />
|+ Table 2. Predefined values for the description field of TrackingSensor nodes<br />
! Predefined Value !! Description<br />
|-<br />
| VIEWPOINT_FROM_WORLD<br />
| For tracking viewpoint relative to the world coordinate. (Useful in immersive displays.)<br />
|-<br />
| OBJECT_FROM_WORLD<br />
| For tracking an arbitrary physical object relative to the world coordinate. (Useful in immersive displays.)<br />
|-<br />
| OBJECT_FROM_VIEWPOINT<br />
| For tracking an arbitrary physical object relative to the viewpoint coordinate. (Useful in computer vision based AR tracking system.)<br />
|}<br />
<br />
== 3. Rendering video stream from camera ==<br />
To visualize a MR scene, the video stream image acquired from a sensor node should be rendered in the X3D scene. For AR visualization, the video stream should be rendered as a background of the virtual environment, while in the AV visualization, the video stream is used as a texture of a virtual object.<br />
<br />
<br />
=== 3.1 Using video stream as a Texture ===<br />
For using the video stream image as a texture, no extension of the standard is needed. We can use the PixelTexture node, which is already available in the current version of the X3D specification. The video stream image from the CalibratedCameraSensor node’s “image” field can be routed to the corresponding field of the PixelTexture node. The following example shows how this routing works.<br />
<br />
<pre><br />
<CalibratedCameraSensor DEF=”camera” /><br />
...<br />
<PixelTexture DEF=”tex” /><br />
...<br />
<ROUTE fromNode='camera' fromField='image' toNode='tex' toField='image'/><br />
</pre><br />
<br />
<br />
=== 3.2 Using video stream as a Background ===<br />
The Background node in the current X3D specification covers only environmental backgrounds in 3D space. Both the Background and TextureBackground nodes describe the environment around the user’s viewpoint, represented as a colored sphere or a textured cube around the user. In both cases the background of the virtual scene is updated depending on the viewing direction of the user. However, for AR visualization, the background of the virtual scene should always show the video stream from the camera sensor.<br />
While the Background and TextureBackground nodes represent a three-dimensional environmental background around the user, the AR background should work as a two-dimensional backdrop of the viewport on which the 3D scene is rendered. For this purpose we need a new node type that represents this kind of background, acting as a 2D backdrop of the scene. We propose two new nodes for this purpose: the BackdropBackground and ImageBackdropBackground nodes. Their node structures are as follows:<br />
<br />
<pre><br />
BackdropBackground: X3DBackgroundNode {<br />
SFColor [in,out] color<br />
MFString [in,out] url<br />
}<br />
<br />
ImageBackdropBackground: X3DBackgroundNode {<br />
SFColor [in,out] color<br />
SFImage [in,out] image<br />
}<br />
</pre><br />
<br />
While only ImageBackdropBackground is necessary for AR applications, we also define the BackdropBackground node as another node that mirrors the node structure of the Background node.<br />
Feeding the video stream image from the camera sensor into the ImageBackdropBackground node is achieved by routing the ‘image’ field of the CalibratedCameraSensor node to the ‘image’ field of the ImageBackdropBackground node.<br />
<br />
<pre><br />
<CalibratedCameraSensor DEF='camera' /><br />
...<br />
<ImageBackdropBackground DEF='bg' /><br />
...<br />
<ROUTE fromNode='camera' fromField='image' toNode='bg' toField='image'/><br />
</pre><br />
<br />
The ImageBackdropBackground node automatically scales the image, fitting the width or height of the image to that of the viewport while retaining the aspect ratio. As a result, the background image fills the entire viewport so that no blank region is left uncovered by the image background.<br />
<br />
<br />
== 4. Camera calibration ==<br />
To ensure that the virtual world appears correctly registered to the real world in the MR scene, the camera parameters of the virtual camera should be calibrated to match those of the real camera. There are two types of camera parameters: internal and external. External parameters are the position and orientation of the camera in the world reference frame, while internal parameters describe the projection of the 3D scene onto a 2D plane to produce the rendered image.<br />
The external parameters of a real camera are measured with tracking sensors, while the internal parameters are determined by the optical characteristics of the real camera. Both sets of parameters can be fed into the X3D scene through the CalibratedCameraSensor and TrackingSensor nodes defined in section 2.<br />
The Viewpoint node in the X3D specification represents a virtual camera in the virtual scene. While the fields of the Viewpoint node cover the full set of external parameters (position and orientation), they cover only limited aspects of the internal parameters. To meet the minimum requirements for MR visualization, we propose adding two new fields (shown at the bottom) to the Viewpoint node.<br />
<br />
<pre><br />
Viewpoint: X3DViewpointNode {<br />
SFVec3f [in,out] centerOfRotation<br />
SFFloat [in,out] fieldOfView<br />
SFRotation [in,out] orientation<br />
SFVec3f [in,out] position<br />
<br />
SFString [in,out] fovMode<br />
SFFloat [in,out] aspectRatio<br />
}<br />
</pre><br />
<br />
In the current X3D specification, the “fieldOfView” field represents the minimum field of view (either vertical or horizontal) that the virtual camera will have. This is insufficient for MR visualization, which needs precise calibration of the field-of-view (FOV) parameter. While the straightforward approach would be to introduce explicit horizontal and vertical FOV parameters as individual fields, this is not compatible with the current specification.<br />
In order to keep backward compatibility with the current specification, we propose a “fovMode” field that designates what the value of the “fieldOfView” field represents. The “fovMode” field can have one of the following values: MINIMUM, VERTICAL, HORIZONTAL, or DIAGONAL. The default value, MINIMUM, indicates that the value of “fieldOfView” is interpreted as a minimum FOV (either vertical or horizontal), as in the current specification. When the “fovMode” field has the value VERTICAL, HORIZONTAL, or DIAGONAL, the “fieldOfView” value is interpreted as the specific FOV in the vertical, horizontal, or diagonal direction, respectively.<br />
In addition to the “fovMode” field, the aspect ratio of a real camera’s FOV does not necessarily match the aspect ratio of the images it produces. To accommodate this, the “aspectRatio” field is introduced, representing the ratio of the vertical FOV to the horizontal FOV (vertical/horizontal).<br />
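<br />
As an illustration, a virtual camera calibrated to a real camera with a 60-degree horizontal FOV and a vertical/horizontal FOV ratio of 0.75 could be declared as in the following sketch (the values are hypothetical; fieldOfView is given in radians, and the vertical FOV follows as horizontal FOV times aspectRatio, here 45 degrees):<br />
<br />
<pre><br />
<!-- Hypothetical calibration: 60-degree horizontal FOV (about 1.047 rad), aspect ratio 0.75 --><br />
<Viewpoint DEF='calibratedView' position='0 0 0'<br />
           fieldOfView='1.047' fovMode='HORIZONTAL' aspectRatio='0.75' /><br />
</pre><br />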
<br />
<br />
== 5. Use cases ==<br />
<br />
The following example X3D scene shows how to describe a simple AR scene using the proposed nodes.<br />
<pre><br />
...<br />
<!-- Camera sensor providing the video stream and the calibration parameters --><br />
<CalibratedCameraSensor DEF='camera' /><br />
<br />
<!-- Render the camera image as a 2D backdrop of the viewport --><br />
<ImageBackdropBackground DEF='bg' /><br />
<ROUTE fromNode='camera' fromField='image' toNode='bg' toField='image'/><br />
<br />
<!-- Calibrate the virtual camera with the internal parameters of the real camera --><br />
<Viewpoint DEF='arview' position='0 0 0' /><br />
<ROUTE fromNode='camera' fromField='fieldOfView' toNode='arview' toField='fieldOfView'/><br />
<ROUTE fromNode='camera' fromField='fovMode' toNode='arview' toField='fovMode'/><br />
<ROUTE fromNode='camera' fromField='aspectRatio' toNode='arview' toField='aspectRatio'/><br />
<br />
<br />
<!-- Tracker reporting the pose of a physical object relative to the viewpoint (see Table 2) --><br />
<TrackingSensor DEF='tracker1' description='OBJECT_FROM_VIEWPOINT' /><br />
<br />
<Transform DEF='tracked_object'><br />
<Shape><br />
<Appearance><Material diffuseColor="1 0 0" /></Appearance> <br />
<Box /> <br />
</Shape> <br />
</Transform> <br />
<br />
<!-- Drive the virtual object with the tracked pose (Transform uses 'translation', not 'position') --><br />
<ROUTE fromNode='tracker1' fromField='position' toNode='tracked_object' toField='translation'/><br />
<ROUTE fromNode='tracker1' fromField='rotation' toNode='tracked_object' toField='rotation'/><br />
...<br />
</pre></div>Endoverthttps://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review&diff=6634AR Proposal Public Review2013-02-25T17:46:38Z<p>Endovert: </p>
<hr />
<div>-Working draft <br />
<br />
By [[X3D and Augmented Reality|Augmented Reality Working Group]], Web3D Consortium<br />
<br />
Feb 22, 2013<br />
<br />
The Augmented Reality Continuum Working Group (ARC WG) has been developing a proposal for extending X3D to support augmented and mixed reality visualization. As the proposal nears completion, the working group has decided to open it to the public and collect feedback from others, including Web3D members, other working groups, and anyone interested in AR and Web3D technology. The ARC WG welcomes any feedback that helps consolidate the proposal and advance to the next stage of extending the X3D specification to support AR and MR visualization.<br />
<br />
* Review period: Feb 27 – Apr 10, 2013<br />
* How to give feedback:<br />
** Use the "discussion" tab on the top of this page to give feedback and start discussions.<br />
** If you prefer e-mails, please mail your feedback to Gun Lee (ARC WG co-chair, endovert[at]postech.ac.kr)<br />
<br />
<br />
= Extending X3D for MR Visualization - Unified Proposal =<br />
<br />
== 1. Introduction ==<br />
<br />
This document gives an overview of the unified proposal for extending the X3D standard to support Mixed Reality (MR) visualization. Mixed Reality includes both Augmented Reality (AR) and Augmented Virtuality (AV). The extension of the X3D standard proposed in this document is based on a comparison of three proposals: two from the Web3D Korea Chapter (KC1 and KC2) and one from InstantReality (IR), Fraunhofer IGD. The details of the comparison can be found on the following public wiki page: http://web3d.org/x3d/wiki/index.php/Comparison_of_X3D_AR_Proposals.<br />
<br />
In this document we focus on the three main components necessary to achieve basic MR visualization: sensors, video stream rendering, and camera calibration. We try to minimize the changes to the current specification while keeping the solution generic enough to be applied to various future applications beyond MR visualization.<br />
<br />
In order to focus on consolidating the fundamental features, we leave out the following items/functions from the original proposals as future work.<br />
* High-level events for tracking from proposal KC2<br />
* Supporting color keying in texture from proposal KC1<br />
* Supporting correct occlusion between virtual and physical objects (Ghost object from Proposal KC1 and Color Mask + sortKey from IR)<br />
* Supporting generic types of sensors, including those not directly related to AR/MR visualization (direct sensor nodes in IR)<br />
<br />
<br />
== 2. Sensors ==<br />
To achieve MR visualization, sensing the real environment is crucial. Two types of sensors are necessary to support MR visualization: sensors acquiring video stream images from a real camera, and sensors acquiring motion tracking information of physical objects. While such sensors could be generalized to acquire any type of information from the real world, in this proposal we focus on these two types, which are crucial for MR visualization.<br />
<br />
Two new nodes, the CalibratedCameraSensor and TrackingSensor nodes, are proposed to represent interfaces to these types of sensors.<br />
<br />
Since different end users view the X3D scene in different browsers and with different hardware and software setups, it is not appropriate to prescribe specific devices or tracking technologies within the scene. In fact, the author of the X3D scene can have no knowledge of what kind of hardware or software setup is available on the user’s side. Therefore, the X3D scene should only include a high-level description of the purpose of the sensor, as a hint for the browser and the user to choose appropriate hardware or software in the user’s setup that meets the intended use. The “description” field is used to describe this intended use of the sensor. At run time, the browser shows the value of the "description" field to the user through the user interface (e.g. a dialog box), asking the user to choose an appropriate sensor from the list of sensors available in the local hardware/software setup. The user chooses the appropriate hardware for the sensor node, and in this way users can view the X3D scene with the best hardware/software sensors available in their environment.<br />
In addition, the browser can be configured to use specific sensor hardware for sensor nodes carrying certain predefined values in the “description” field. Each type of sensor node can have a different set of predefined values for the description field, and these values are used by the browser to automatically map the default sensors preconfigured by the user. Another way to determine the default sensors is to keep a history of the sensors chosen by the user: the browser can record the mapping between sensors and sensor nodes chosen by the user, and when the same X3D scene is loaded later, reuse the mapping saved from the previous session.<br />
Asking the user interactively at run time is not only for mapping appropriate sensors; it also provides a way of validating the use of sensors on the user’s device to avoid privacy issues. Therefore, the browser must always ask the user for confirmation, even if it is able to map the sensors automatically.<br />
<br />
=== 2.1 CalibratedCameraSensor node ===<br />
The CalibratedCameraSensor node provides an interface to a camera device. The main information provided to the X3D scene through this node is the image stream captured by the camera, delivered through the node’s ‘image’ field. In addition to the image stream, the node also provides the internal parameters of the camera for calibrating the Viewpoint parameters, to achieve correct composition of the MR scene. Four fields (focalPoint, fieldOfView, fovMode, and aspectRatio) provide these parameters, corresponding to the fields used in the Viewpoint node. The detailed description of each field is in section 4, where the Viewpoint node is described.<br />
<br />
<pre><br />
CalibratedCameraSensor : X3DSensorNode {<br />
SFBool [in,out] enabled TRUE<br />
SFNode [in,out] metadata NULL [X3DMetadataObject]<br />
SFBool [out] isActive<br />
<br />
SFString [in,out] description ""<br />
SFImage [out] image <br />
SFVec2f [out] focalPoint<br />
SFFloat [out] fieldOfView<br />
SFString [out] fovMode<br />
SFFloat [out] aspectRatio<br />
}<br />
</pre><br />
<br />
The browser should ask the user to choose which camera to use for each CalibratedCameraSensor node through the user interface (e.g. a dialog box). The browser shows the value of the "description" field to the user, providing a hint about what type of camera is expected. Predefined values can be used in the description field to let the browser automatically map the default sensors preconfigured by the user. Table 1 shows the predefined values for the description field of CalibratedCameraSensor nodes; a brief usage sketch follows the table.<br />
<br />
{| border='1' <br />
|+ Table 1. Predefined values for the description field of CalibratedCameraSensor nodes<br />
! Predefined Value !! Description<br />
|-<br />
| USER_FACING<br />
| Camera that is facing towards the user.<br />
|-<br />
| WORLD_FACING<br />
| Camera facing in the user’s view direction (away from the user).<br />
|}<br />
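<br />
For example, a scene that expects a camera looking out into the user's surroundings could declare the sensor as in the following minimal sketch (the DEF name is illustrative, not part of the proposal):<br />
<br />
<pre><br />
<!-- Hypothetical usage: request a camera facing the user's view direction --><br />
<CalibratedCameraSensor DEF='worldCam' description='WORLD_FACING' enabled='true' /><br />
</pre><br />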
<br />
<br />
=== 2.2 TrackingSensor node ===<br />
The TrackingSensor node provides an interface for motion tracking information. The main information provided by this node is the position and orientation of the tracked physical object. These values are provided through the ‘position’ and ‘rotation’ fields, respectively. The ‘isPositionAvailable’ and ‘isRotationAvailable’ fields are TRUE if the tracking target is successfully tracked and the value of the corresponding ‘position’ or ‘rotation’ field is valid.<br />
<br />
<pre><br />
TrackingSensor : X3DSensorNode {<br />
SFBool [in,out] enabled TRUE<br />
SFNode [in,out] metadata NULL [X3DMetadataObject]<br />
SFBool [out] isActive<br />
<br />
SFString [in,out] description ""<br />
SFVec3f [out] position<br />
SFRotation [out] rotation<br />
SFBool [out] isPositionAvailable FALSE<br />
SFBool [out] isRotationAvailable FALSE<br />
}<br />
</pre><br />
<br />
The “description” string field defines the intended use of the tracking sensor and is shown to the user to help them choose the tracking hardware to use. The value should state what kind of object the tracking sensor is intended to track and which reference frame its coordinate system is relative to. Table 2 shows predefined values for the description field that help the browser automatically map the default sensors preconfigured by the user; a brief usage sketch follows the table.<br />
<br />
{| border='1' <br />
|+ Table 2. Predefined values for the description field of TrackingSensor nodes<br />
! Predefined Value !! Description<br />
|-<br />
| VIEWPOINT_FROM_WORLD<br />
| For tracking the viewpoint relative to the world coordinate frame. (Useful in immersive displays.)<br />
|-<br />
| OBJECT_FROM_WORLD<br />
| For tracking an arbitrary physical object relative to the world coordinate frame. (Useful in immersive displays.)<br />
|-<br />
| OBJECT_FROM_VIEWPOINT<br />
| For tracking an arbitrary physical object relative to the viewpoint coordinate frame. (Useful in computer-vision-based AR tracking systems.)<br />
|}<br />
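<br />
For example, a vision-based AR scene that needs the pose of a physical object relative to the camera could declare the tracker as in the following minimal sketch (the DEF name is illustrative):<br />
<br />
<pre><br />
<!-- Hypothetical usage: request a tracker reporting an object pose relative to the viewpoint --><br />
<TrackingSensor DEF='objectTracker' description='OBJECT_FROM_VIEWPOINT' /><br />
</pre><br />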
<br />
== 3. Rendering video stream from camera ==<br />
To visualize an MR scene, the video stream image acquired from a sensor node should be rendered in the X3D scene. For AR visualization, the video stream should be rendered as the background of the virtual environment, while for AV visualization, the video stream is used as a texture on a virtual object.<br />
<br />
<br />
=== 3.1 Using video stream as a Texture ===<br />
No extension of the standard is needed to use the video stream image as a texture: the PixelTexture node, already available in the current version of the X3D specification, can be used. The video stream from the CalibratedCameraSensor node’s “image” field can be routed to the corresponding field of the PixelTexture node. The following example shows how this routing works.<br />
<br />
<pre><br />
<CalibratedCameraSensor DEF='camera' /><br />
...<br />
<PixelTexture DEF='tex' /><br />
...<br />
<ROUTE fromNode='camera' fromField='image' toNode='tex' toField='image'/><br />
</pre><br />
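<br />
In a complete scene, the PixelTexture sits inside an Appearance node. The following sketch (node names are illustrative) shows the video stream textured onto a Box, as in an AV-style composition:<br />
<br />
<pre><br />
<CalibratedCameraSensor DEF='camera' /><br />
<Shape><br />
  <Appearance><br />
    <PixelTexture DEF='tex' /><br />
  </Appearance><br />
  <Box /><br />
</Shape><br />
<ROUTE fromNode='camera' fromField='image' toNode='tex' toField='image'/><br />
</pre><br />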
<br />
<br />
=== 3.2 Using video stream as a Background ===<br />
The Background node in the current X3D specification covers only environmental backgrounds in 3D space. Both the Background and TextureBackground nodes describe the environment around the user’s viewpoint, represented as a colored sphere or a textured cube around the user. In both cases the background of the virtual scene is updated depending on the viewing direction of the user. However, for AR visualization, the background of the virtual scene should always show the video stream from the camera sensor.<br />
While the Background and TextureBackground nodes represent a three-dimensional environmental background around the user, the AR background should work as a two-dimensional backdrop of the viewport on which the 3D scene is rendered. For this purpose we need a new node type that represents this kind of background, acting as a 2D backdrop of the scene. We propose two new nodes for this purpose: the BackdropBackground and ImageBackdropBackground nodes. Their node structures are as follows:<br />
<br />
<pre><br />
BackdropBackground: X3DBackgroundNode {<br />
SFColor [in,out] color<br />
MFString [in,out] url<br />
}<br />
<br />
ImageBackdropBackground: X3DBackgroundNode {<br />
SFColor [in,out] color<br />
SFImage [in,out] image<br />
}<br />
</pre><br />
<br />
While only ImageBackdropBackground is necessary for AR applications, we also define the BackdropBackground node as another node that mirrors the node structure of the Background node.<br />
Feeding the video stream image from the camera sensor into the ImageBackdropBackground node is achieved by routing the ‘image’ field of the CalibratedCameraSensor node to the ‘image’ field of the ImageBackdropBackground node.<br />
<br />
<pre><br />
<CalibratedCameraSensor DEF='camera' /><br />
...<br />
<ImageBackdropBackground DEF='bg' /><br />
...<br />
<ROUTE fromNode='camera' fromField='image' toNode='bg' toField='image'/><br />
</pre><br />
<br />
The ImageBackdropBackground node automatically scales the image, fitting the width or height of the image to that of the viewport while retaining the aspect ratio. As a result, the background image fills the entire viewport so that no blank region is left uncovered by the image background.<br />
<br />
<br />
== 4. Camera calibration ==<br />
To ensure that the virtual world appears correctly registered to the real world in the MR scene, the camera parameters of the virtual camera should be calibrated to match those of the real camera. There are two types of camera parameters: internal and external. External parameters are the position and orientation of the camera in the world reference frame, while internal parameters describe the projection of the 3D scene onto a 2D plane to produce the rendered image.<br />
The external parameters of a real camera are measured with tracking sensors, while the internal parameters are determined by the optical characteristics of the real camera. Both sets of parameters can be fed into the X3D scene through the CalibratedCameraSensor and TrackingSensor nodes defined in section 2.<br />
The Viewpoint node in the X3D specification represents a virtual camera in the virtual scene. While the fields of the Viewpoint node cover the full set of external parameters (position and orientation), they cover only limited aspects of the internal parameters. To meet the minimum requirements for MR visualization, we propose adding two new fields (shown at the bottom) to the Viewpoint node.<br />
<br />
<pre><br />
Viewpoint: X3DViewpointNode {<br />
SFVec3f [in,out] centerOfRotation<br />
SFFloat [in,out] fieldOfView<br />
SFRotation [in,out] orientation<br />
SFVec3f [in,out] position<br />
<br />
SFString [in,out] fovMode<br />
SFFloat [in,out] aspectRatio<br />
}<br />
</pre><br />
<br />
In the current X3D specification, the “fieldOfView” field represents the minimum field of view (either vertical or horizontal) that the virtual camera will have. This is insufficient for MR visualization, which needs precise calibration of the field-of-view (FOV) parameter. While the straightforward approach would be to introduce explicit horizontal and vertical FOV parameters as individual fields, this is not compatible with the current specification.<br />
In order to keep backward compatibility with the current specification, we propose a “fovMode” field that designates what the value of the “fieldOfView” field represents. The “fovMode” field can have one of the following values: MINIMUM, VERTICAL, HORIZONTAL, or DIAGONAL. The default value, MINIMUM, indicates that the value of “fieldOfView” is interpreted as a minimum FOV (either vertical or horizontal), as in the current specification. When the “fovMode” field has the value VERTICAL, HORIZONTAL, or DIAGONAL, the “fieldOfView” value is interpreted as the specific FOV in the vertical, horizontal, or diagonal direction, respectively.<br />
In addition to the “fovMode” field, the aspect ratio of a real camera’s FOV does not necessarily match the aspect ratio of the images it produces. To accommodate this, the “aspectRatio” field is introduced, representing the ratio of the vertical FOV to the horizontal FOV (vertical/horizontal).<br />
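<br />
As an illustration, a virtual camera calibrated to a real camera with a 60-degree horizontal FOV and a vertical/horizontal FOV ratio of 0.75 could be declared as in the following sketch (the values are hypothetical; fieldOfView is given in radians, and the vertical FOV follows as horizontal FOV times aspectRatio, here 45 degrees):<br />
<br />
<pre><br />
<!-- Hypothetical calibration: 60-degree horizontal FOV (about 1.047 rad), aspect ratio 0.75 --><br />
<Viewpoint DEF='calibratedView' position='0 0 0'<br />
           fieldOfView='1.047' fovMode='HORIZONTAL' aspectRatio='0.75' /><br />
</pre><br />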
<br />
<br />
== 5. Use cases ==<br />
<br />
The following example X3D scene shows how to describe a simple AR scene using the proposed nodes.<br />
<pre><br />
...<br />
<!-- Camera sensor providing the video stream and the calibration parameters --><br />
<CalibratedCameraSensor DEF='camera' /><br />
<br />
<!-- Render the camera image as a 2D backdrop of the viewport --><br />
<ImageBackdropBackground DEF='bg' /><br />
<ROUTE fromNode='camera' fromField='image' toNode='bg' toField='image'/><br />
<br />
<!-- Calibrate the virtual camera with the internal parameters of the real camera --><br />
<Viewpoint DEF='arview' position='0 0 0' /><br />
<ROUTE fromNode='camera' fromField='fieldOfView' toNode='arview' toField='fieldOfView'/><br />
<ROUTE fromNode='camera' fromField='fovMode' toNode='arview' toField='fovMode'/><br />
<ROUTE fromNode='camera' fromField='aspectRatio' toNode='arview' toField='aspectRatio'/><br />
<br />
<br />
<!-- Tracker reporting the pose of a physical object relative to the viewpoint (see Table 2) --><br />
<TrackingSensor DEF='tracker1' description='OBJECT_FROM_VIEWPOINT' /><br />
<br />
<Transform DEF='tracked_object'><br />
<Shape><br />
<Appearance><Material diffuseColor="1 0 0" /></Appearance> <br />
<Box /> <br />
</Shape> <br />
</Transform> <br />
<br />
<!-- Drive the virtual object with the tracked pose (Transform uses 'translation', not 'position') --><br />
<ROUTE fromNode='tracker1' fromField='position' toNode='tracked_object' toField='translation'/><br />
<ROUTE fromNode='tracker1' fromField='rotation' toNode='tracked_object' toField='rotation'/><br />
...<br />
</pre></div>
<hr />
<div>-Working draft <br />
<br />
By [[X3D and Augmented Reality|Augmented Reality Working Group]], Web3D Consortium<br />
<br />
Feb 22, 2013<br />
<br />
The Augmented Reality Continuum Working Group (ARC WG) has been developing a proposal for extending X3D to support augmented and mixed reality visualization. As the proposal is reaching the state of its completion, the working group has decided to open the proposal into public and collect feedbacks from others, including our Web3D members, other working groups, and generally from anyone who is interested in AR and Web3D technology. The ARC WG would like to welcome all kinds of feedback that would be helpful to consolidate the proposal and advance into next level of extending the X3D specification to support AR and MR visualization.<br />
<br />
* Reviewing period: Feb 27 ~ Apr 10, 2013<br />
* How to give feedback:<br />
** Use the "discussion" tab on the top of this page to give feedback and start discussions.<br />
** If you prefer e-mails, please mail your feedback to Gun Lee (ARC WG co-chair, endovert[at]postech.ac.kr)<br />
<br />
<br />
= Extending X3D for MR Visualization - Unified Proposal =<br />
<br />
== 1. Introduction ==<br />
<br />
This document describes an overview of the unified proposal for extending the X3D standard to support Mixed Reality (MR) visualization. Mixed Reality includes both, Augmented Reality (AR) and Augmented Virtuality (AV). The extension of the X3D standard proposed in this document is based on the comparison of three proposals: two from Web3D Korea Chapter (KC1 and KC2) and one from InstantReality (IR), Fraunhofer IGD. The details of the comparison can be found in the following public wiki page: http://web3d.org/x3d/wiki/index.php/Comparison_of_X3D_AR_Proposals.<br />
<br />
In this document we focus on the three main components that are necessary to achieve basic MR visualization: sensors, video stream rendering, and camera calibration. We try to minimize the changes to the current specification, but also try to make the solution to be generic enough so that it could be applied to various future applications besides MR visualization.<br />
<br />
In order to focus on consolidating the fundamental features, we leave out the following items/functions from the original proposals as future work.<br />
* High-level events for tracking from proposal KC2<br />
* Supporting color keying in texture from proposal KC1<br />
* Supporting correct occlusion between virtual and physical objects (Ghost object from Proposal KC1 and Color Mask + sortKey from IR)<br />
* Supporting generic type of sensors including those are not directly related to AR/MR visualization (Direct sensor nodes in IR)<br />
<br />
<br />
== 2. Sensors ==<br />
To achieve MR visualization, sensing the real environment is crucial. Two types of sensor that are necessary to support MR visualization are those for acquiring video stream images from a real camera and motion tracking information of physical objects. While the sensors could be generalized to acquiring any type of information from the real world, in this proposal, we focus on these two sensors that are crucial for MR visualization.<br />
<br />
Two new nodes, CalibratedCameraSensor and TrackingSensor nodes, are proposed for representing interfaces for these types of sensors.<br />
<br />
Since different end users view the X3D scene on different browsers and have different setups of hardware and software, it is not appropriate to describe specific devices or tracking technology to use within the scene. In fact, the author of the X3D scene can have no knowledge of what kind of hardware or software setup is available on the user’s side. Therefore, the X3D scene should only include the high-level purpose of the sensor to give hint to the browser (or the user) to choose appropriate hardware or software on the user’s setup that could meet the intended use. The “description” field is used to describe such intention of using the sensor. At run-time, the browser will show the value of the "description" field to the user in a dialog box, asking to choose an appropriate one from the list of sensors available in the local hardware/software setup. The user chooses the appropriate hardware to use for the sensor node, and in this way, users can view the X3D scene with the best option of hardware/software sensors available in his/her environment. Asking the user to choose the sensor also provides a method for validating the use of sensors on the user's device to overcome privacy issues. In addition, browsers can have options configured to use the sensors that was chosen by the user in previous instance of running the scene.<br />
<br />
<br />
=== 2.1 CalibratedCameraSensor node ===<br />
The CalibratedCameraSensor node provides an interface to a camera device. The main information provided into the X3D scene through this node is an image stream captured with the camera. The ‘image’ field of the node provides the image stream captured with the camera device. In addition to the image stream, the node should also provide internal parameters of the camera for calibration of the Viewpoint parameters to achieve correct composition of the MR scene. Four fields (focalPoint, fieldOfView, fovMode, and aspectRatio) provides such parameters that corresponds to those fields used in the Viewpoint node. The detailed description of each field is in section 3 where the Viewpoint node is described.<br />
When there are more than one camera device available, the browser should ask the user to choose which camera to use for which node, interactively through the user interface (e.g. a dialog box). The browser will show the value of the "description" field to the user, providing hint on which camera to use.<br />
<br />
<pre><br />
CalibratedCameraSensor : X3DSensorNode {<br />
SFBool [in,out] enabled TRUE<br />
SFNode [in,out] metadata NULL [X3DMetadataObject]<br />
SFBool [out] isActive<br />
<br />
SFString [in,out] description ""<br />
SFImage [out] image <br />
SFVec2f [out] focalPoint<br />
SFFloat [out] fieldOfView<br />
SFString [out] fovMode<br />
SFFloat [out] aspectRatio<br />
SFBool [in, out] isActive FALSE<br />
}<br />
</pre><br />
<br />
=== 2.2 TrackingSensor node ===<br />
The TrackingSensor node provides an interface for motion tracking information. The main information provided in this node is position and orientation of the tracked physical object. These values are provided through ‘position’ and ‘rotation’ fields respectively. The ‘isPositionAvailable’ and ‘isRotationAvailable’ fields are TRUE if the tracking target is successfully tracked and the values of the ‘position’ or ‘rotation’ field is valid. The “description” string field defines the intended use of the tracking sensor, which will be provided to the user to help choosing the tracking hardware to use.<br />
<br />
<pre><br />
TrackingSensor : X3DSensorNode {<br />
SFBool [in,out] enabled TRUE<br />
SFNode [in,out] metadata NULL [X3DMetadataObject]<br />
SFBool [out] isActive<br />
<br />
SFString [in,out] description ""<br />
SFVec3f [out] position<br />
SFRotation [out] rotation<br />
SFBool [out] isPositionAvailable FALSE<br />
SFBool [out] isRotationAvailable FALSE<br />
SFBool [in, out] isActive FALSE<br />
}<br />
</pre><br />
<br />
<br />
== 3. Rendering video stream from camera ==<br />
To visualize a MR scene, the video stream image acquired from a sensor node should be rendered in the X3D scene. For AR visualization, the video stream should be rendered as a background of the virtual environment, while in the AV visualization, the video stream is used as a texture of a virtual object.<br />
<br />
<br />
=== 3.1 Using video stream as a Texture ===<br />
For using the video stream image as a texture, no extension of the standard is needed. We can use the PixelTexture node, which is already available in the current version of the X3D specification. The video stream image from the CalibratedCameraSensor node’s “image” field can be routed to the corresponding field of the PixelTexture node. The following example shows how this routing works.<br />
<br />
<pre><br />
<CalibratedCameraSensor DEF=”camera” /><br />
...<br />
<PixelTexture DEF=”tex” /><br />
...<br />
<ROUTE fromNode='camera' fromField='image' toNode='tex' toField='image'/><br />
</pre><br />
<br />
<br />
=== 3.2 Using video stream as a Background ===<br />
The Background node in the current X3D specification covers only environmental backgrounds in 3D space. Both Background and TextureBackground nodes describe environment around the user’s viewpoint, represented as a colored sphere or a textured cube around the user . In both cases the background of the virtual scene gets updated depending on the viewing direction of the user. However, for AR visualization, the background of the virtual scene should always show the video stream from the camera sensor.<br />
While the Background node and TextureBackground node represent a three dimensional environmental background around the user, the AR background should work as a two dimensional backdrop of the viewport where the 3D scene is rendered on. For this purpose we need a new node type that could represent these kinds of background that work as a 2D backdrop of the scene. We propose two new nodes for this purpose: BackdropBackground and ImageBackdropBackground nodes. The node structure of these nodes are described as the following:<br />
<br />
<pre><br />
BackdropBackground: X3DBackgroundNode {<br />
SFColor [in,out] color<br />
MFString [in,out] url<br />
}<br />
<br />
ImageBackdropBackground: X3DBackgroundNode {<br />
SFColor [in,out] color<br />
SFImage [in,out] image<br />
}<br />
</pre><br />
<br />
While only ImageBackdropBackground is necessary for AR application, we also define BackdropBackground node as a another node that corresponds to the node structure of the Background node.<br />
Feeding the video stream image from the camera sensor to the ImageBackdropBackround node can be achieved by routing the ‘image’ field of the CalibratedCameraSensor node to the ‘image’ field of the ImageBackdropBackground node.<br />
<br />
<pre><br />
<CalibratedCameraSensor DEF=”camera” /><br />
...<br />
<ImageBackdropBackground DEF=”bg” /><br />
...<br />
<ROUTE fromNode='camera' fromField='image' toNode=bg toField='image'/><br />
</pre><br />
<br />
The ImageBackdropBackround will automatically scale the image and fit the width or height of the image to that of the viewport while retaining the aspect ratio. As a result, the background image will fill the entire viewport so that there are no blank region left uncovered by the image background.<br />
<br />
<br />
== 4. Camera calibration ==<br />
To assure the virtual world appears correctly registered to the real world in the MR scene, the camera parameters of the virtual camera should be calibrated to match those of the real camera. There are two types of camera parameters: internal and external parameters. External parameters are position and orientation of the camera in the world reference frame, while the internal parameters represent the projection of the 3D scene onto a 2D plane to produce a rendered image of the 3D scene. <br />
The external parameters of a real camera is measured with tracking sensors, while the internal parameters are defined from the optical features of the real camera. The internal and external parameters of the real camera can be fed into the X3D scene through the CalibratedCameraSensor node and TrackingSensor node defined in section 2.<br />
The Viewpoint node in the X3D specification represents a virtual camera in the virtual scene. While the fields of the Viewpoint node cover the full set of external parameters (position and orientation), it only has fields that cover limited aspects of the internal parameters. To meet the minimum requirements for achieving MR visualization, we propose adding the two new fields (at bottom) to the Viewpoint node.<br />
<br />
<pre><br />
Viewpoint: X3DViewpointNode {<br />
SFVec3f [in,out] centerOfRotation<br />
SFFloat [in,out] fieldOfView<br />
SFRotation [in,out] orientation<br />
SFVec3f [in,out] position<br />
<br />
SFString [in,out] fovMode<br />
SFFloat [in,out] aspectRatio<br />
}<br />
</pre><br />
<br />
In the current X3D specification, the “fieldOfView” field represents minimum field of view (either vertical or horizontal) that the virtual camera will have. This is insufficient for MR visualization, which needs precise calibration of the field of view (FOV) parameter. While the straightforward way would be explicitly having both horizontal and vertical FOV parameters as individual fields, this is not compatible with the current specification. <br />
In order to keep backward compatibility with the current specification, we propose having a “fovMode” field which designates what does the value of the “fieldOfView” field represent. The “fovMode” field can have one of the following values: MINIMUM, VERTICAL, HORIZONTAL, or DIAGONAL. The value MINIMUM is the default value for the “fovMode” field which represents the value of the “fieldOfView” is considered as a minimum FOV (either vertical or horizontal), as it is in the current specification. When the “fovMode” field has the value of VERTICAL, HORIZONTAL, or DIAGONAL, the “fieldOfView” is considered as specific values of FOV in vertical, horizontal, or diagonal direction, respectively.<br />
In addition to the “fovMode” field, the aspect ratio of the FOV in real cameras might not necessarily follow the aspect ratio of the image size it produces. To accommodate this feature, the “aspectRatio” field is introduced which represents the ratio of vertical FOV to the horizontal FOV (vertical/horizontal).<br />
<br />
<br />
== 5. Use cases ==<br />
<br />
The following example X3D scene shows how to describe a simple AR scene using the proposed nodes.<br />
<pre><br />
...<br />
<CalibratedCameraSensor DEF=”camera” /><br />
<br />
<ImageBackdropBackground DEF=”bg” /><br />
<ROUTE fromNode=”camera” fromField=”value” toNode=”bg” toField=”image”/><br />
<br />
<Viewpoint DEF=”arview” position=”0 0 0” /><br />
<ROUTE fromNode=”camera” fromField=”fieldOfView” toNode=”arview” toField=”fieldOfView”/><br />
<ROUTE fromNode=”camera” fromField=”fovMode” toNode=”arview” toField=”fovMode”/><br />
<ROUTE fromNode=”camera” fromField=”aspectRatio” toNode=”arview” toField=”aspectRatio”/><br />
<br />
<br />
<TrackingSensor DEF=”tracker1” purpose=”urn:web3d:tracking_sensor_purpose:object_to_viewpoint” /><br />
<br />
<Transform DEF=”tracked_object”> <br />
<Shape><br />
<Appearance><Material diffuseColor="1 0 0" /></Appearance> <br />
<Box /> <br />
</Shape> <br />
</Transform> <br />
<br />
<ROUTE fromNode=”tracker1” fromField=”position” toNode=”tracked_object” toField=”position”/><br />
<ROUTE fromNode=”tracker1” fromField=”rotation” toNode=”tracked_object” toField=”rotation”/><br />
...<br />
</pre></div>Endoverthttps://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review&diff=6632AR Proposal Public Review2013-02-25T16:47:44Z<p>Endovert: </p>
<hr />
<div>-Working draft <br />
<br />
By [[X3D and Augmented Reality|Augmented Reality Working Group]], Web3D Consortium<br />
<br />
Feb 22, 2013<br />
<br />
The Augmented Reality Continuum Working Group (ARC WG) has been developing a proposal for extending X3D to support augmented and mixed reality visualization. As the proposal is reaching the state of its completion, the working group has decided to open the proposal into public and collect feedbacks from others, including our Web3D members, other working groups, and generally from anyone who is interested in AR and Web3D technology. The ARC WG would like to welcome all kinds of feedback that would be helpful to consolidate the proposal and advance into next level of extending the X3D specification to support AR and MR visualization.<br />
<br />
* Reviewing period: Feb 27 ~ Apr 10, 2013<br />
* How to give feedback:<br />
** Use the "discussion" tab on the top of this page to give feedback and start discussions.<br />
** If you prefer e-mails, please mail your feedback to Gun Lee (ARC WG co-chair, endovert[at]postech.ac.kr)<br />
<br />
<br />
= Extending X3D for MR Visualization - Unified Proposal =<br />
<br />
== 1. Introduction ==<br />
<br />
This document describes an overview of the unified proposal for extending the X3D standard to support Mixed Reality (MR) visualization. Mixed Reality includes both, Augmented Reality (AR) and Augmented Virtuality (AV). The extension of the X3D standard proposed in this document is based on the comparison of three proposals: two from Web3D Korea Chapter (KC1 and KC2) and one from InstantReality (IR), Fraunhofer IGD. The details of the comparison can be found in the following public wiki page: http://web3d.org/x3d/wiki/index.php/Comparison_of_X3D_AR_Proposals.<br />
<br />
In this document we focus on the three main components that are necessary to achieve basic MR visualization: sensors, video stream rendering, and camera calibration. We try to minimize the changes to the current specification, but also try to make the solution to be generic enough so that it could be applied to various future applications besides MR visualization.<br />
<br />
In order to focus on consolidating the fundamental features, we leave out the following items/functions from the original proposals as future work.<br />
* High-level events for tracking from proposal KC2<br />
* Supporting color keying in texture from proposal KC1<br />
* Supporting correct occlusion between virtual and physical objects (Ghost object from Proposal KC1 and Color Mask + sortKey from IR)<br />
* Supporting generic type of sensors including those are not directly related to AR/MR visualization (Direct sensor nodes in IR)<br />
<br />
<br />
== 2. Sensors ==<br />
To achieve MR visualization, sensing the real environment is crucial. Two types of sensor that are necessary to support MR visualization are those for acquiring video stream images from a real camera and motion tracking information of physical objects. While the sensors could be generalized to acquiring any type of information from the real world, in this proposal, we focus on these two sensors that are crucial for MR visualization.<br />
<br />
Two new nodes, CalibratedCameraSensor and TrackingSensor nodes, are proposed for representing interfaces for these types of sensors.<br />
<br />
Since different end users view the X3D scene on different browsers and have different setups of hardware and software, it is not appropriate to describe specific devices or tracking technology to use within the scene. In fact, the author of the X3D scene can have no knowledge of what kind of hardware or software setup is available on the user’s side. Therefore, the X3D scene should only include the high-level purpose of the sensor to give hint to the browser (or the user) to choose appropriate hardware or software on the user’s setup that could meet the intended use. The “description” field is used to describe such intention of using the sensor. At run-time, the browser will show the value of the "description" field to the user in a dialog box, asking to choose an appropriate one from the list of sensors available in the local hardware/software setup. The user chooses the appropriate hardware to use for the sensor node, and in this way, users can view the X3D scene with the best option of hardware/software sensors available in his/her environment. Asking the user to choose the sensor also provides a method for validating the use of sensors on the user's device to overcome privacy issues. In addition, browsers can have options configured to use the sensors that was chosen by the user in previous instance of running the scene.<br />
<br />
<br />
=== 2.1 CalibratedCameraSensor node ===<br />
The CalibratedCameraSensor node provides an interface to a camera device. The main information provided into the X3D scene through this node is an image stream captured with the camera. The ‘image’ field of the node provides the image stream captured with the camera device. In addition to the image stream, the node should also provide internal parameters of the camera for calibration of the Viewpoint parameters to achieve correct composition of the MR scene. Four fields (focalPoint, fieldOfView, fovMode, and aspectRatio) provides such parameters that corresponds to those fields used in the Viewpoint node. The detailed description of each field is in section 3 where the Viewpoint node is described.<br />
When there are more than one camera device available, the browser should ask the user to choose which camera to use for which node, interactively through the user interface (e.g. a dialog box). The browser will show the value of the "description" field to the user, providing hint on which camera to use.<br />
<br />
<pre><br />
CalibratedCameraSensor : X3DSensorNode {<br />
SFBool [in,out] enabled TRUE<br />
SFNode [in,out] metadata NULL [X3DMetadataObject]<br />
SFBool [out] isActive<br />
<br />
SFString [in,out] description ""<br />
SFImage [out] image <br />
SFVec2f [out] focalPoint<br />
SFFloat [out] fieldOfView<br />
SFString [out] fovMode<br />
SFFloat [out] aspectRatio<br />
SFBool [in, out] isActive FALSE<br />
}<br />
</pre><br />
<br />
=== 2.2 TrackingSensor node ===<br />
The TrackingSensor node provides an interface for motion tracking information. The main information provided in this node is position and orientation of the tracked physical object. These values are provided through ‘position’ and ‘rotation’ fields respectively. The ‘isPositionAvailable’ and ‘isRotationAvailable’ fields are TRUE if the tracking target is successfully tracked and the values of the ‘position’ or ‘rotation’ field is valid. The “description” string field defines the intended use of the tracking sensor, which will be provided to the user to help choosing the tracking hardware to use.<br />
<br />
<pre><br />
TrackingSensor : X3DSensorNode {<br />
SFBool [in,out] enabled TRUE<br />
SFNode [in,out] metadata NULL [X3DMetadataObject]<br />
SFBool [out] isActive<br />
<br />
SFString [in,out] description ""<br />
SFVec3f [out] position<br />
SFRotation [out] rotation<br />
SFBool [out] isPositionAvailable FALSE<br />
SFBool [out] isRotationAvailable FALSE<br />
SFBool [in, out] isActive FALSE<br />
}<br />
</pre><br />
<br />
<br />
== 3. Rendering video stream from camera ==<br />
To visualize a MR scene, the video stream image acquired from a sensor node should be rendered in the X3D scene. For AR visualization, the video stream should be rendered as a background of the virtual environment, while in the AV visualization, the video stream is used as a texture of a virtual object.<br />
<br />
<br />
=== 3.1 Using video stream as a Texture ===<br />
For using the video stream image as a texture, no extension of the standard is needed. We can use the PixelTexture node, which is already available in the current version of the X3D specification. The video stream image from the CalibratedCameraSensor node’s “image” field can be routed to the corresponding field of the PixelTexture node. The following example shows how this routing works.<br />
<br />
<pre><br />
<CalibratedCameraSensor DEF=”camera” /><br />
...<br />
<PixelTexture DEF=”tex” /><br />
...<br />
<ROUTE fromNode='camera' fromField='image' toNode='tex' toField='image'/><br />
</pre><br />
<br />
<br />
=== 3.2 Using video stream as a Background ===<br />
The Background node in the current X3D specification covers only environmental backgrounds in 3D space. Both Background and TextureBackground nodes describe environment around the user’s viewpoint, represented as a colored sphere or a textured cube around the user . In both cases the background of the virtual scene gets updated depending on the viewing direction of the user. However, for AR visualization, the background of the virtual scene should always show the video stream from the camera sensor.<br />
While the Background node and TextureBackground node represent a three dimensional environmental background around the user, the AR background should work as a two dimensional backdrop of the viewport where the 3D scene is rendered on. For this purpose we need a new node type that could represent these kinds of background that work as a 2D backdrop of the scene. We propose two new nodes for this purpose: BackdropBackground and ImageBackdropBackground nodes. The node structure of these nodes are described as the following:<br />
<br />
<pre><br />
BackdropBackground: X3DBackgroundNode {<br />
SFColor [in,out] color<br />
MFString [in,out] url<br />
}<br />
<br />
ImageBackdropBackground: X3DBackgroundNode {<br />
SFColor [in,out] color<br />
SFImage [in,out] image<br />
}<br />
</pre><br />
<br />
While only ImageBackdropBackground is necessary for AR application, we also define BackdropBackground node as a another node that corresponds to the node structure of the Background node.<br />
Feeding the video stream image from the camera sensor to the ImageBackdropBackround node can be achieved by routing the ‘image’ field of the CalibratedCameraSensor node to the ‘image’ field of the ImageBackdropBackground node.<br />
<br />
<pre><br />
<CalibratedCameraSensor DEF=”camera” /><br />
...<br />
<ImageBackdropBackground DEF=”bg” /><br />
...<br />
<ROUTE fromNode='camera' fromField='image' toNode=bg toField='image'/><br />
</pre><br />
<br />
The ImageBackdropBackround will automatically scale the image and fit the width or height of the image to that of the viewport while retaining the aspect ratio. As a result, the background image will fill the entire viewport so that there are no blank region left uncovered by the image background.<br />
<br />
<br />
== 4. Camera calibration ==<br />
To assure the virtual world appears correctly registered to the real world in the MR scene, the camera parameters of the virtual camera should be calibrated to match those of the real camera. There are two types of camera parameters: internal and external parameters. External parameters are position and orientation of the camera in the world reference frame, while the internal parameters represent the projection of the 3D scene onto a 2D plane to produce a rendered image of the 3D scene. <br />
The external parameters of a real camera is measured with tracking sensors, while the internal parameters are defined from the optical features of the real camera. The internal and external parameters of the real camera can be fed into the X3D scene through the CalibratedCameraSensor node and TrackingSensor node defined in section 2.<br />
The Viewpoint node in the X3D specification represents a virtual camera in the virtual scene. While the fields of the Viewpoint node cover the full set of external parameters (position and orientation), it only has fields that cover limited aspects of the internal parameters. To meet the minimum requirements for achieving MR visualization, we propose adding the two new fields (at bottom) to the Viewpoint node.<br />
<br />
<pre><br />
Viewpoint: X3DViewpointNode {<br />
SFVec3f [in,out] centerOfRotation<br />
SFFloat [in,out] fieldOfView<br />
SFRotation [in,out] orientation<br />
SFVec3f [in,out] position<br />
<br />
SFString [in,out] fovMode<br />
SFFloat [in,out] aspectRatio<br />
}<br />
</pre><br />
<br />
In the current X3D specification, the “fieldOfView” field represents minimum field of view (either vertical or horizontal) that the virtual camera will have. This is insufficient for MR visualization, which needs precise calibration of the field of view (FOV) parameter. While the straightforward way would be explicitly having both horizontal and vertical FOV parameters as individual fields, this is not compatible with the current specification. <br />
In order to keep backward compatibility with the current specification, we propose having a “fovMode” field which designates what does the value of the “fieldOfView” field represent. The “fovMode” field can have one of the following values: MINIMUM, VERTICAL, HORIZONTAL, or DIAGONAL. The value MINIMUM is the default value for the “fovMode” field which represents the value of the “fieldOfView” is considered as a minimum FOV (either vertical or horizontal), as it is in the current specification. When the “fovMode” field has the value of VERTICAL, HORIZONTAL, or DIAGONAL, the “fieldOfView” is considered as specific values of FOV in vertical, horizontal, or diagonal direction, respectively.<br />
In addition to the “fovMode” field, the aspect ratio of the FOV in real cameras might not necessarily follow the aspect ratio of the image size it produces. To accommodate this feature, the “aspectRatio” field is introduced which represents the ratio of vertical FOV to the horizontal FOV (vertical/horizontal).<br />
<br />
<br />
== 5. Use cases ==<br />
<br />
The following example X3D scene shows how to describe a simple AR scene using the proposed nodes.<br />
<pre><br />
...<br />
<CalibratedCameraSensor DEF=”camera” /><br />
<br />
<ImageBackdropBackground DEF=”bg” /><br />
<ROUTE fromNode=”camera” fromField=”value” toNode=”bg” toField=”image”/><br />
<br />
<Viewpoint DEF=”arview” position=”0 0 0” /><br />
<ROUTE fromNode=”camera” fromField=”fieldOfView” toNode=”arview” toField=”fieldOfView”/><br />
<ROUTE fromNode=”camera” fromField=”fovMode” toNode=”arview” toField=”fovMode”/><br />
<ROUTE fromNode=”camera” fromField=”aspectRatio” toNode=”arview” toField=”aspectRatio”/><br />
<br />
<br />
<TrackingSensor DEF=”tracker1” purpose=”urn:web3d:tracking_sensor_purpose:object_to_viewpoint” /><br />
<br />
<Transform DEF=”tracked_object”> <br />
<Shape><br />
<Appearance><Material diffuseColor="1 0 0" /></Appearance> <br />
<Box /> <br />
</Shape> <br />
</Transform> <br />
<br />
<ROUTE fromNode=”tracker1” fromField=”position” toNode=”tracked_object” toField=”position”/><br />
<ROUTE fromNode=”tracker1” fromField=”rotation” toNode=”tracked_object” toField=”rotation”/><br />
…<br />
</pre></div>Endoverthttps://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review&diff=6631AR Proposal Public Review2013-02-25T05:16:11Z<p>Endovert: </p>
<hr />
<div>-Working draft <br />
<br />
By [[X3D and Augmented Reality|Augmented Reality Working Group]], Web3D Consortium<br />
<br />
Feb 22, 2013<br />
<br />
The Augmented Reality Continuum Working Group (ARC WG) has been developing a proposal for extending X3D to support augmented and mixed reality visualization. As the proposal is reaching the state of its completion, the working group has decided to open the proposal into public and collect feedbacks from others, including our Web3D members, other working groups, and generally from anyone who is interested in AR and Web3D technology. The ARC WG would like to welcome all kinds of feedback that would be helpful to consolidate the proposal and advance into next level of extending the X3D specification to support AR and MR visualization.<br />
<br />
* Reviewing period: Feb 27 ~ Apr 10, 2013<br />
* How to give feedback:<br />
** Use the "discussion" tab on the top of this page to give feedback and start discussions.<br />
** If you prefer e-mails, please mail your feedback to Gun Lee (ARC WG co-chair, endovert[at]postech.ac.kr)<br />
<br />
<br />
= Extending X3D for MR Visualization - Unified Proposal =<br />
<br />
== 1. Introduction ==<br />
<br />
This document describes an overview of the unified proposal for extending the X3D standard to support Mixed Reality (MR) visualization. Mixed Reality includes both, Augmented Reality (AR) and Augmented Virtuality (AV). The extension of the X3D standard proposed in this document is based on the comparison of three proposals: two from Web3D Korea Chapter (KC1 and KC2) and one from InstantReality (IR), Fraunhofer IGD. The details of the comparison can be found in the following public wiki page: http://web3d.org/x3d/wiki/index.php/Comparison_of_X3D_AR_Proposals.<br />
<br />
In this document we focus on the three main components that are necessary to achieve basic MR visualization: sensors, video stream rendering, and camera calibration. We try to minimize the changes to the current specification, but also try to make the solution to be generic enough so that it could be applied to various future applications besides MR visualization.<br />
<br />
In order to focus on consolidating the fundamental features, we leave out the following items/functions from the original proposals as future work.<br />
* High-level events for tracking from proposal KC2<br />
* Supporting color keying in texture from proposal KC1<br />
* Supporting correct occlusion between virtual and physical objects (Ghost object from Proposal KC1 and Color Mask + sortKey from IR)<br />
*Supporting generic types of sensors, including those that are not directly related to AR/MR visualization (Direct sensor nodes in IR)&lt;br /&gt;
<br />
== 2. Sensors ==<br />
To achieve MR visualization, sensing the real environment is crucial. Two types of sensors are necessary to support MR visualization: those that acquire video stream images from a real camera, and those that provide motion tracking information for physical objects. While such sensors could be generalized to acquire any type of information from the real world, in this proposal we focus on these two types, which are crucial for MR visualization.&lt;br /&gt;
<br />
Two new nodes, CalibratedCameraSensor and TrackingSensor, are proposed as interfaces to these types of sensors.&lt;br /&gt;
<br />
Since different end users view the X3D scene on different browsers and with different hardware and software setups, it is not appropriate to specify particular devices or tracking technologies within the scene. In fact, the author of the X3D scene cannot know what kind of hardware or software setup is available on the user’s side. Therefore, the X3D scene should only state the high-level purpose of the sensor, as a hint for the browser (or the user) to choose hardware or software on the user’s setup that meets the intended use. The “description” field is used to express this intention. At run-time, the browser shows the value of the “description” field to the user in a dialog box, asking the user to choose an appropriate sensor from the list available in the local hardware/software setup. In this way, users can view the X3D scene with the best hardware/software sensors available in their environment. Asking the user to choose the sensor also provides a way to validate the use of sensors on the user's device, addressing privacy concerns. In addition, browsers can be configured to reuse the sensors that the user chose in a previous run of the scene.&lt;br /&gt;
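&lt;br /&gt;
For example, a scene might declare a camera sensor with a human-readable description that the browser can show in its sensor-selection dialog (a minimal sketch; the description text is illustrative):&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;&lt;br /&gt;
&lt;CalibratedCameraSensor DEF="camera" description="Camera viewing the scene to be augmented" /&gt;&lt;br /&gt;
&lt;/pre&gt;&lt;br /&gt;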
<br />
<br />
=== 2.1 CalibratedCameraSensor node ===<br />
The CalibratedCameraSensor node provides an interface to a camera device. The main information provided to the X3D scene through this node is the image stream captured by the camera, delivered through the node’s ‘image’ field. In addition to the image stream, the node should also provide the internal parameters of the camera, for calibrating the Viewpoint parameters to achieve correct composition of the MR scene. Four fields (focalPoint, fieldOfView, fovMode, and aspectRatio) provide these parameters, corresponding to fields used in the Viewpoint node. The detailed description of each field is in section 4, where the Viewpoint node is described.&lt;br /&gt;
<br />
<pre><br />
CalibratedCameraSensor : X3DSensorNode {&lt;br /&gt;
SFBool [in,out] enabled TRUE&lt;br /&gt;
SFNode [in,out] metadata NULL [X3DMetadataObject]&lt;br /&gt;
SFBool [out] isActive&lt;br /&gt;
&lt;br /&gt;
SFString [in,out] description ""&lt;br /&gt;
SFImage [out] image&lt;br /&gt;
SFVec2f [out] focalPoint&lt;br /&gt;
SFFloat [out] fieldOfView&lt;br /&gt;
SFString [out] fovMode&lt;br /&gt;
SFFloat [out] aspectRatio&lt;br /&gt;
}&lt;br /&gt;
</pre><br />
<br />
When more than one camera device is available, the browser should ask the user to choose which camera to use for which node, interactively through the user interface (e.g. a dialog box).&lt;br /&gt;
<br />
<br />
=== 2.2 TrackingSensor node ===<br />
The TrackingSensor node provides an interface to motion tracking information. The main information provided by this node is the position and orientation of the tracked physical object, delivered through the ‘position’ and ‘rotation’ fields respectively. The ‘isPositionAvailable’ and ‘isRotationAvailable’ fields are TRUE if the tracking target is successfully tracked and the values of the ‘position’ and ‘rotation’ fields are valid.&lt;br /&gt;
<br />
<pre><br />
TrackingSensor : X3DSensorNode {<br />
SFVec3f [out] position<br />
SFRotation [out] rotation<br />
SFBool [out] isPositionAvailable FALSE<br />
SFBool [out] isRotationAvailable FALSE<br />
SFBool [out] isActive FALSE&lt;br /&gt;
MFString [in,out] purpose<br />
}<br />
</pre><br />
<br />
The “purpose” string field defines the intended use of the tracking sensor.<br />
<br />
TABLE HERE<br />
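&lt;br /&gt;
Since tracking can be lost intermittently, a scene may want to ignore ‘position’ and ‘rotation’ events while the availability fields above are FALSE. One way to do this is to gate the routed values through a small Script node. The sketch below is illustrative only (the ‘guard’ Script, its field names, and the tracked_object target are not part of the proposal):&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;&lt;br /&gt;
&lt;TrackingSensor DEF="tracker" purpose="urn:web3d:tracking_sensor_purpose:object_to_viewpoint" /&gt;&lt;br /&gt;
&lt;Script DEF="guard"&gt;&lt;br /&gt;
  &lt;field name="available" type="SFBool" accessType="inputOnly" /&gt;&lt;br /&gt;
  &lt;field name="set_position" type="SFVec3f" accessType="inputOnly" /&gt;&lt;br /&gt;
  &lt;field name="position_changed" type="SFVec3f" accessType="outputOnly" /&gt;&lt;br /&gt;
  &lt;![CDATA[ecmascript:&lt;br /&gt;
    var ok = false;&lt;br /&gt;
    // remember the latest availability flag&lt;br /&gt;
    function available(value) { ok = value; }&lt;br /&gt;
    // forward position events only while tracking is valid&lt;br /&gt;
    function set_position(value) { if (ok) position_changed = value; }&lt;br /&gt;
  ]]&gt;&lt;br /&gt;
&lt;/Script&gt;&lt;br /&gt;
&lt;ROUTE fromNode="tracker" fromField="isPositionAvailable" toNode="guard" toField="available"/&gt;&lt;br /&gt;
&lt;ROUTE fromNode="tracker" fromField="position" toNode="guard" toField="set_position"/&gt;&lt;br /&gt;
&lt;ROUTE fromNode="guard" fromField="position_changed" toNode="tracked_object" toField="translation"/&gt;&lt;br /&gt;
&lt;/pre&gt;&lt;br /&gt;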
<br />
== 3. Rendering video stream from camera ==<br />
To visualize an MR scene, the video stream image acquired from a sensor node should be rendered in the virtual scene. For AR visualization, the video stream is rendered as the background of the virtual environment, while for AV visualization, it is used as a texture on a virtual object.&lt;br /&gt;
<br />
<br />
=== 3.1 Using video stream as a Texture ===<br />
For using the video stream image as a texture, no extension of the standard is needed. We can use the PixelTexture node, which is already available in the current version of the X3D specification. The video stream from the CalibratedCameraSensor node’s “image” field can be routed to the “image” field of the PixelTexture node. The following example shows how this routing works.&lt;br /&gt;
<br />
<pre><br />
&lt;CalibratedCameraSensor DEF="camera" /&gt;&lt;br /&gt;
...&lt;br /&gt;
&lt;PixelTexture DEF="tex" /&gt;&lt;br /&gt;
...<br />
<ROUTE fromNode='camera' fromField='image' toNode='tex' toField='image'/><br />
</pre><br />
<br />
<br />
=== 3.2 Using video stream as a Background ===<br />
The Background nodes in the current X3D specification cover only environmental backgrounds in 3D space. Both Background and TextureBackground describe an environment around the user’s viewpoint, represented as a colored sphere or a textured cube around the user. In both cases the background of the virtual scene is updated according to the viewing direction of the user. However, for AR visualization, the background of the virtual scene should always show the video stream from the camera sensor.&lt;br /&gt;
While the Background and TextureBackground nodes represent a three-dimensional environmental background around the user, the AR background should work as a two-dimensional backdrop of the viewport in which the 3D scene is rendered. For this purpose we need a new node type that represents this kind of background, working as a 2D backdrop of the scene. We propose two new nodes for this purpose: BackdropBackground and ImageBackdropBackground. The structures of these nodes are as follows:&lt;br /&gt;
<br />
<pre><br />
BackdropBackground: X3DBackgroundNode {<br />
SFColor [in,out] color<br />
MFString [in,out] url<br />
}<br />
<br />
ImageBackdropBackground: X3DBackgroundNode {<br />
SFColor [in,out] color<br />
SFImage [in,out] image<br />
}<br />
</pre><br />
<br />
While only ImageBackdropBackground is necessary for AR applications, we also define the BackdropBackground node as a higher-level node that corresponds to the Background node.&lt;br /&gt;
Again, feeding the video stream from the camera sensor to the ImageBackdropBackground node is achieved by routing the ‘image’ field of the CalibratedCameraSensor node to the ‘image’ field of the ImageBackdropBackground node.&lt;br /&gt;
<br />
<pre><br />
&lt;CalibratedCameraSensor DEF="camera" /&gt;&lt;br /&gt;
...&lt;br /&gt;
&lt;ImageBackdropBackground DEF="bg" /&gt;&lt;br /&gt;
...&lt;br /&gt;
&lt;ROUTE fromNode='camera' fromField='image' toNode='bg' toField='image'/&gt;&lt;br /&gt;
</pre><br />
<br />
The ImageBackdropBackground node automatically scales the image, fitting the image's width or height to that of the viewport while retaining its aspect ratio. As a result, the background image fills the entire viewport, leaving no blank region uncovered by the image background.&lt;br /&gt;
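&lt;br /&gt;
By contrast, the BackdropBackground node takes its backdrop from a URL rather than a routed image. A minimal sketch (the file name is illustrative) might be:&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;&lt;br /&gt;
&lt;BackdropBackground color="0 0 0" url='"backdrop.png"' /&gt;&lt;br /&gt;
&lt;/pre&gt;&lt;br /&gt;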
<br />
== 4. Camera calibration ==<br />
To ensure that the virtual world appears correctly registered with the real world in the MR scene, the camera parameters of the virtual camera should be calibrated to match those of the real camera. There are two types of camera parameters: internal and external. External parameters are the position and orientation of the camera in the world reference frame, while internal parameters describe the projection of the 3D scene onto a 2D plane to produce the rendered image. &lt;br /&gt;
The external parameters of a real camera are measured with tracking sensors, while the internal parameters are determined by the optical characteristics of the real camera. Both external and internal parameters of the real camera can be fed into the X3D scene through the CalibratedCameraSensor node defined in section 2. &lt;br /&gt;
The Viewpoint node in the X3D specification represents a virtual camera in the virtual scene. While the fields of the Viewpoint node cover the full set of external parameters (position and orientation), they cover only limited aspects of the internal parameters. To meet the minimum requirements for MR visualization, we propose adding the following fields (fovMode and aspectRatio) to the Viewpoint node.&lt;br /&gt;
<br />
<pre><br />
Viewpoint: X3DViewpointNode {<br />
SFVec3f [in,out] centerOfRotation<br />
SFFloat [in,out] fieldOfView<br />
SFRotation [in,out] orientation<br />
SFVec3f [in,out] position<br />
SFString [in,out] fovMode<br />
SFFloat [in,out] aspectRatio<br />
}<br />
</pre><br />
<br />
In the current X3D specification, the “fieldOfView” field specifies the minimum field of view (either vertical or horizontal) that the virtual camera will have. This is insufficient for MR visualization, which needs precise calibration of the field of view (FOV). While the straightforward approach would be to have both horizontal and vertical FOV as individual fields, this is not compatible with the current specification. &lt;br /&gt;
To keep backward compatibility with the current specification, we propose a “fovMode” field, which designates what the value of the “fieldOfView” field represents. The “fovMode” field can have one of the following values: MINIMUM, VERTICAL, HORIZONTAL, or DIAGONAL. MINIMUM is the default; it indicates that “fieldOfView” is interpreted as a minimum FOV (either vertical or horizontal), as in the current specification. When “fovMode” has the value VERTICAL, HORIZONTAL, or DIAGONAL, “fieldOfView” is interpreted as the exact FOV in the vertical, horizontal, or diagonal direction, respectively.&lt;br /&gt;
In addition to the “fovMode” field, the aspect ratio of the FOV of a real camera does not necessarily match the aspect ratio of the image it produces. To accommodate this, we introduce the “aspectRatio” field, which represents the ratio of the vertical FOV to the horizontal FOV (vertical/horizontal).&lt;br /&gt;
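&lt;br /&gt;
As a worked example (the numbers are illustrative, not from the proposal): a real camera with a vertical FOV of 0.6 rad and a horizontal FOV of 0.8 rad has aspectRatio = 0.6 / 0.8 = 0.75, and the calibrated virtual camera could be declared as follows. A browser honoring these fields can then recover the horizontal FOV as fieldOfView / aspectRatio = 0.8 rad.&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;&lt;br /&gt;
&lt;Viewpoint fieldOfView="0.6" fovMode="VERTICAL" aspectRatio="0.75" /&gt;&lt;br /&gt;
&lt;/pre&gt;&lt;br /&gt;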
<br />
== 5. Use cases ==<br />
<br />
The following example X3D scene shows how to describe a simple AR scene using the proposed nodes.<br />
<pre><br />
...<br />
&lt;CalibratedCameraSensor DEF="camera" /&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;ImageBackdropBackground DEF="bg" /&gt;&lt;br /&gt;
&lt;ROUTE fromNode="camera" fromField="image" toNode="bg" toField="image"/&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;Viewpoint DEF="arview" position="0 0 0" /&gt;&lt;br /&gt;
&lt;ROUTE fromNode="camera" fromField="fieldOfView" toNode="arview" toField="fieldOfView"/&gt;&lt;br /&gt;
&lt;ROUTE fromNode="camera" fromField="fovMode" toNode="arview" toField="fovMode"/&gt;&lt;br /&gt;
&lt;ROUTE fromNode="camera" fromField="aspectRatio" toNode="arview" toField="aspectRatio"/&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;TrackingSensor DEF="tracker1" purpose="urn:web3d:tracking_sensor_purpose:object_to_viewpoint" /&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;Transform DEF="tracked_object"&gt;&lt;br /&gt;
  &lt;Shape&gt;&lt;br /&gt;
    &lt;Appearance&gt;&lt;Material diffuseColor="1 0 0" /&gt;&lt;/Appearance&gt;&lt;br /&gt;
    &lt;Box /&gt;&lt;br /&gt;
  &lt;/Shape&gt;&lt;br /&gt;
&lt;/Transform&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;ROUTE fromNode="tracker1" fromField="position" toNode="tracked_object" toField="translation"/&gt;&lt;br /&gt;
&lt;ROUTE fromNode="tracker1" fromField="rotation" toNode="tracked_object" toField="rotation"/&gt;&lt;br /&gt;
…<br />
</pre></div>Endoverthttps://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review&diff=6630AR Proposal Public Review2013-02-25T04:47:24Z<p>Endovert: </p>
<hr />
<div>- Working Draft<br />
<br />
By [[X3D and Augmented Reality|Augmented Reality Working Group]], Web3D Consortium<br />
<br />
Feb 22, 2013<br />
<br />
The Augmented Reality Continuum Working Group (ARC WG) has been developing a proposal for extending X3D to support augmented and mixed reality visualization. As the proposal is nearing completion, the working group has decided to open it to the public and collect feedback from others, including Web3D members, other working groups, and anyone interested in AR and Web3D technology. The ARC WG welcomes any feedback that helps consolidate the proposal and advance it to the next stage of extending the X3D specification to support AR and MR visualization.&lt;br /&gt;
<br />
* Review period: Feb 27 – Apr 10, 2013&lt;br /&gt;
* How to give feedback:&lt;br /&gt;
** Use the "discussion" tab at the top of this page to give feedback and start discussions.&lt;br /&gt;
** If you prefer e-mail, please mail your feedback to Gun Lee (ARC WG co-chair, endovert[at]postech.ac.kr)&lt;br /&gt;
<br />
<br />
= Extending X3D for MR Visualization - Unified Proposal =<br />
<br />
== 1. Introduction ==<br />
<br />
This document describes an overview of the unified proposal for extending the X3D standard to support Mixed Reality (MR) visualization. Mixed Reality includes both Augmented Reality (AR) and Augmented Virtuality (AV). The extension of the X3D standard proposed in this document is based on a comparison of three proposals from the Web3D Korea Chapter and Fraunhofer IGD. The details of the comparison can be found on the following public wiki page: http://web3d.org/x3d/wiki/index.php/Comparison_of_X3D_AR_Proposals.&lt;br /&gt;
<br />
In this document we focus on the three main components that are necessary to achieve basic MR visualization: sensors, video stream rendering, and camera calibration. We try to minimize the changes to the current specification, while also aiming to make the solution generic enough that it can be applied to various future applications beyond MR visualization.&lt;br /&gt;
<br />
In order to focus on consolidating the fundamental features, we defer the following items/functions from the original proposals to future work.&lt;br /&gt;
* High-level events for tracking from proposal KC2<br />
* Supporting color keying in texture from proposal KC1<br />
* Supporting correct occlusion between virtual and physical objects (Ghost object from Proposal KC1 and Color Mask + sortKey from IR)<br />
<br />
== 2. Sensors ==<br />
To achieve MR visualization, sensing the real environment is crucial. Two types of sensors are necessary to support MR visualization: those that acquire video stream images from a real camera, and those that provide motion tracking information for physical objects. While such sensors could be generalized to acquire any type of information from the real world, in this proposal we focus on these two types, which are crucial for MR visualization.&lt;br /&gt;
<br />
Two new nodes, CalibratedCameraSensor and TrackingSensor, are proposed as interfaces to these types of sensors.&lt;br /&gt;
<br />
Since different end users view the X3D scene on different browsers and with different hardware and software setups, it is not appropriate to specify particular devices or tracking technologies within the scene. In fact, the author of the X3D scene cannot know what kind of hardware or software setup is available on the user’s side. Therefore, the X3D scene should only state the high-level purpose of the sensor, as a hint for the browser (or the user) to choose hardware or software on the user’s setup that meets the intended use. The “purpose” field is used to express this intention. The browser can be preconfigured to use specific local sensors for sensor nodes with a certain “purpose”. If no sensor is assigned for the purpose, the browser can ask the user, at run-time, to choose an appropriate one from the list of sensors available in the local hardware/software setup. In this way, users can view the X3D scene with the best hardware/software sensors available in their environment.&lt;br /&gt;
<br />
<br />
=== 2.1 CalibratedCameraSensor node ===<br />
The CalibratedCameraSensor node provides an interface to a camera device. The main information provided to the X3D scene through this node is the image stream captured by the camera, delivered through the node’s ‘image’ field. In addition to the image stream, the node should also provide the internal parameters of the camera, for calibrating the Viewpoint parameters to achieve correct composition of the MR scene. Four fields (focalPoint, fieldOfView, fovMode, and aspectRatio) provide these parameters, corresponding to fields used in the Viewpoint node. The detailed description of each field is in section 4, where the Viewpoint node is described.&lt;br /&gt;
<br />
<pre><br />
CalibratedCameraSensor : X3DSensorNode {<br />
SFImage [out] image <br />
SFVec2f [out] focalPoint<br />
SFFloat [out] fieldOfView<br />
SFString [out] fovMode<br />
SFFloat [out] aspectRatio<br />
SFBool [out] isActive FALSE&lt;br /&gt;
MFString [in,out] purpose USER_FACING (WORLD_FACING)<br />
}<br />
</pre><br />
<br />
When more than one camera device is available, the browser should decide which device to map to the CalibratedCameraSensor node. The ‘purpose’ field provides a hint to the browser in such cases. The purpose field can have one of the following values:&lt;br /&gt;
<br />
TABLE HERE<br />
<br />
If the purpose field is empty, the browser should choose the camera device based on user preference, either set in advance or interactively through the user interface (e.g. a dialog box).&lt;br /&gt;
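&lt;br /&gt;
For example, a scene intended to augment the environment in front of the user might request a world-facing camera (a minimal sketch using one of the purpose values above):&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;&lt;br /&gt;
&lt;CalibratedCameraSensor DEF="camera" purpose="WORLD_FACING" /&gt;&lt;br /&gt;
&lt;/pre&gt;&lt;br /&gt;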
<br />
=== 2.2 TrackingSensor node ===<br />
The TrackingSensor node provides an interface to motion tracking information. The main information provided by this node is the position and orientation of the tracked physical object, delivered through the ‘position’ and ‘rotation’ fields respectively. The ‘isPositionAvailable’ and ‘isRotationAvailable’ fields are TRUE if the tracking target is successfully tracked and the values of the ‘position’ and ‘rotation’ fields are valid.&lt;br /&gt;
<br />
<pre><br />
TrackingSensor : X3DSensorNode {<br />
SFVec3f [out] position<br />
SFRotation [out] rotation<br />
SFBool [out] isPositionAvailable FALSE<br />
SFBool [out] isRotationAvailable FALSE<br />
SFBool [out] isActive FALSE&lt;br /&gt;
MFString [in,out] purpose<br />
}<br />
</pre><br />
<br />
The “purpose” string field defines the intended use of the tracking sensor.<br />
<br />
TABLE HERE<br />
<br />
== 3. Rendering video stream from camera ==<br />
To visualize an MR scene, the video stream image acquired from a sensor node should be rendered in the virtual scene. For AR visualization, the video stream is rendered as the background of the virtual environment, while for AV visualization, it is used as a texture on a virtual object.&lt;br /&gt;
<br />
<br />
=== 3.1 Using video stream as a Texture ===<br />
For using the video stream image as a texture, no extension of the standard is needed. We can use the PixelTexture node, which is already available in the current version of the X3D specification. The video stream from the CalibratedCameraSensor node’s “image” field can be routed to the “image” field of the PixelTexture node. The following example shows how this routing works.&lt;br /&gt;
<br />
<pre><br />
&lt;CalibratedCameraSensor DEF="camera" /&gt;&lt;br /&gt;
...&lt;br /&gt;
&lt;PixelTexture DEF="tex" /&gt;&lt;br /&gt;
...<br />
<ROUTE fromNode='camera' fromField='image' toNode='tex' toField='image'/><br />
</pre><br />
<br />
<br />
=== 3.2 Using video stream as a Background ===<br />
The Background nodes in the current X3D specification cover only environmental backgrounds in 3D space. Both Background and TextureBackground describe an environment around the user’s viewpoint, represented as a colored sphere or a textured cube around the user. In both cases the background of the virtual scene is updated according to the viewing direction of the user. However, for AR visualization, the background of the virtual scene should always show the video stream from the camera sensor.&lt;br /&gt;
While the Background and TextureBackground nodes represent a three-dimensional environmental background around the user, the AR background should work as a two-dimensional backdrop of the viewport in which the 3D scene is rendered. For this purpose we need a new node type that represents this kind of background, working as a 2D backdrop of the scene. We propose two new nodes for this purpose: BackdropBackground and ImageBackdropBackground. The structures of these nodes are as follows:&lt;br /&gt;
<br />
<pre><br />
BackdropBackground: X3DBackgroundNode {<br />
SFColor [in,out] color<br />
MFString [in,out] url<br />
}<br />
<br />
ImageBackdropBackground: X3DBackgroundNode {<br />
SFColor [in,out] color<br />
SFImage [in,out] image<br />
}<br />
</pre><br />
<br />
While only ImageBackdropBackground is necessary for AR applications, we also define the BackdropBackground node as a higher-level node that corresponds to the Background node.&lt;br /&gt;
Again, feeding the video stream from the camera sensor to the ImageBackdropBackground node is achieved by routing the ‘image’ field of the CalibratedCameraSensor node to the ‘image’ field of the ImageBackdropBackground node.&lt;br /&gt;
<br />
<pre><br />
&lt;CalibratedCameraSensor DEF="camera" /&gt;&lt;br /&gt;
...&lt;br /&gt;
&lt;ImageBackdropBackground DEF="bg" /&gt;&lt;br /&gt;
...&lt;br /&gt;
&lt;ROUTE fromNode='camera' fromField='image' toNode='bg' toField='image'/&gt;&lt;br /&gt;
</pre><br />
<br />
The ImageBackdropBackground node automatically scales the image, fitting the image's width or height to that of the viewport while retaining its aspect ratio. As a result, the background image fills the entire viewport, leaving no blank region uncovered by the image background.&lt;br /&gt;
<br />
== 4. Camera calibration ==<br />
To ensure that the virtual world appears correctly registered with the real world in the MR scene, the camera parameters of the virtual camera should be calibrated to match those of the real camera. There are two types of camera parameters: internal and external. External parameters are the position and orientation of the camera in the world reference frame, while internal parameters describe the projection of the 3D scene onto a 2D plane to produce the rendered image. &lt;br /&gt;
The external parameters of a real camera are measured with tracking sensors, while the internal parameters are determined by the optical characteristics of the real camera. Both external and internal parameters of the real camera can be fed into the X3D scene through the CalibratedCameraSensor node defined in section 2. &lt;br /&gt;
The Viewpoint node in the X3D specification represents a virtual camera in the virtual scene. While the fields of the Viewpoint node cover the full set of external parameters (position and orientation), they cover only limited aspects of the internal parameters. To meet the minimum requirements for MR visualization, we propose adding the following fields (fovMode and aspectRatio) to the Viewpoint node.&lt;br /&gt;
<br />
<pre><br />
Viewpoint: X3DViewpointNode {<br />
SFVec3f [in,out] centerOfRotation<br />
SFFloat [in,out] fieldOfView<br />
SFRotation [in,out] orientation<br />
SFVec3f [in,out] position<br />
SFString [in,out] fovMode<br />
SFFloat [in,out] aspectRatio<br />
}<br />
</pre><br />
<br />
In the current X3D specification, the “fieldOfView” field specifies the minimum field of view (either vertical or horizontal) that the virtual camera will have. This is insufficient for MR visualization, which needs precise calibration of the field of view (FOV). While the straightforward approach would be to have both horizontal and vertical FOV as individual fields, this is not compatible with the current specification. &lt;br /&gt;
To keep backward compatibility with the current specification, we propose a “fovMode” field, which designates what the value of the “fieldOfView” field represents. The “fovMode” field can have one of the following values: MINIMUM, VERTICAL, HORIZONTAL, or DIAGONAL. MINIMUM is the default; it indicates that “fieldOfView” is interpreted as a minimum FOV (either vertical or horizontal), as in the current specification. When “fovMode” has the value VERTICAL, HORIZONTAL, or DIAGONAL, “fieldOfView” is interpreted as the exact FOV in the vertical, horizontal, or diagonal direction, respectively.&lt;br /&gt;
In addition to the “fovMode” field, the aspect ratio of the FOV of a real camera does not necessarily match the aspect ratio of the image it produces. To accommodate this, we introduce the “aspectRatio” field, which represents the ratio of the vertical FOV to the horizontal FOV (vertical/horizontal).&lt;br /&gt;
<br />
== 5. Use cases ==<br />
<br />
The following example X3D scene shows how to describe a simple AR scene using the proposed nodes.<br />
<pre><br />
...<br />
&lt;CalibratedCameraSensor DEF="camera" /&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;ImageBackdropBackground DEF="bg" /&gt;&lt;br /&gt;
&lt;ROUTE fromNode="camera" fromField="image" toNode="bg" toField="image"/&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;Viewpoint DEF="arview" position="0 0 0" /&gt;&lt;br /&gt;
&lt;ROUTE fromNode="camera" fromField="fieldOfView" toNode="arview" toField="fieldOfView"/&gt;&lt;br /&gt;
&lt;ROUTE fromNode="camera" fromField="fovMode" toNode="arview" toField="fovMode"/&gt;&lt;br /&gt;
&lt;ROUTE fromNode="camera" fromField="aspectRatio" toNode="arview" toField="aspectRatio"/&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;TrackingSensor DEF="tracker1" purpose="urn:web3d:tracking_sensor_purpose:object_to_viewpoint" /&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;Transform DEF="tracked_object"&gt;&lt;br /&gt;
  &lt;Shape&gt;&lt;br /&gt;
    &lt;Appearance&gt;&lt;Material diffuseColor="1 0 0" /&gt;&lt;/Appearance&gt;&lt;br /&gt;
    &lt;Box /&gt;&lt;br /&gt;
  &lt;/Shape&gt;&lt;br /&gt;
&lt;/Transform&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;ROUTE fromNode="tracker1" fromField="position" toNode="tracked_object" toField="translation"/&gt;&lt;br /&gt;
&lt;ROUTE fromNode="tracker1" fromField="rotation" toNode="tracked_object" toField="rotation"/&gt;&lt;br /&gt;
…<br />
</pre></div>Endoverthttps://www.web3d.org/wiki/index.php?title=Talk:AR_Proposal_Public_Review&diff=6629Talk:AR Proposal Public Review2013-02-25T04:44:42Z<p>Endovert: </p>
<hr />
<div>== Read me: To start discussion on new topics ==<br />
Please add a new section to start discussion on a new topic. [[User:Endovert|Endovert]] ([[User talk:Endovert|talk]]) 23:43, 24 February 2013 (EST)<br />
:And reply to the comments using indentation. [[User:Endovert|Endovert]] ([[User talk:Endovert|talk]]) 23:43, 24 February 2013 (EST)<br />
::And try this link to learn how to use this talk page. http://en.wikipedia.org/wiki/Help:Using_talk_pages [[User:Endovert|Endovert]] ([[User talk:Endovert|talk]]) 23:43, 24 February 2013 (EST)</div>Endoverthttps://www.web3d.org/wiki/index.php?title=Talk:AR_Proposal_Public_Review&diff=6628Talk:AR Proposal Public Review2013-02-25T04:43:47Z<p>Endovert: Created page with "== New topics for discussion == Please add a new section to start discussion on a new topic. ~~~~ :And reply to the comments using indentation. ~~~~ ::And try this link to lea..."</p>
<hr />
<div>== New topics for discussion ==<br />
Please add a new section to start discussion on a new topic. [[User:Endovert|Endovert]] ([[User talk:Endovert|talk]]) 23:43, 24 February 2013 (EST)<br />
:And reply to the comments using indentation. [[User:Endovert|Endovert]] ([[User talk:Endovert|talk]]) 23:43, 24 February 2013 (EST)<br />
::And try this link to learn how to use this talk page. http://en.wikipedia.org/wiki/Help:Using_talk_pages [[User:Endovert|Endovert]] ([[User talk:Endovert|talk]]) 23:43, 24 February 2013 (EST)</div>Endoverthttps://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review&diff=6622AR Proposal Public Review2013-02-22T00:34:52Z<p>Endovert: </p>
<hr />
<div>- Working Draft<br />
<br />
By [[X3D and Augmented Reality|Augmented Reality Working Group]], Web3D Consortium<br />
<br />
Feb 22, 2013<br />
<br />
The Augmented Reality Continuum Working Group (ARC WG) has been developing a proposal for extending X3D to support augmented and mixed reality visualization. As the proposal is reaching the state of its completion, the working group has decided to open the proposal into public and collect feedbacks from others, including our Web3D members, other working groups, and generally from anyone who is interested in AR and Web3D technology. The ARC WG would like to welcome all kinds of feedback that would be helpful to consolidate the proposal and advance into next level of extending the X3D specification to support AR and MR visualization.<br />
<br />
* Reviewing period: Feb 27 ~ Apr 10, 2013<br />
* Where to send your comments: TBA<br />
<br />
<br />
<br />
= Extending X3D for MR Visualization - Unified Proposal =<br />
<br />
== 1. Introduction ==<br />
<br />
This document describes an overview of the unified proposal for extending the X3D standard to support Mixed Reality (MR) visualization. Mixed Reality includes both, Augmented Reality (AR) and Augmented Virtuality (AV). The extension of the X3D standard proposed in this document is based on the comparison of three proposals from Web3D Korea Chapter and Fraunhofer IGD. The details of the comparison can be found in the following public wiki page: http://web3d.org/x3d/wiki/index.php/Comparison_of_X3D_AR_Proposals.<br />
<br />
In this document we focus on the three main components that are necessary to achieve basic MR visualization: sensors, video stream rendering, and camera calibration. We try to minimize the changes to the current specification, but also try to make the solution to be generic enough so that it could be applied to various future applications besides MR visualization.<br />
<br />
In order to focus on consolidating the fundamental features, we leave out the following items/functions from the original proposals as future work.<br />
* High-level events for tracking from proposal KC2<br />
* Supporting color keying in texture from proposal KC1<br />
* Supporting correct occlusion between virtual and physical objects (Ghost object from Proposal KC1 and Color Mask + sortKey from IR)<br />
<br />
== 2. Sensors ==<br />
To achieve MR visualization, sensing the real environment is crucial. Two types of sensor that are necessary to support MR visualization are those for acquiring video stream images from a real camera and motion tracking information of physical objects. While the sensors could be generalized to acquiring any type of information from the real world, in this proposal, we focus on these two sensors that are crucial for MR visualization.<br />
<br />
Two new nodes, CalibratedCameraSensor and TrackingSensor nodes, are proposed for representing interfaces for these types of sensors.<br />
<br />
Since different end users view the X3D scene on different browsers and have different setups of hardware and software, it is not appropriate to describe specific devices or tracking technology to use within the scene. In fact, the author of the X3D scene can have no knowledge of what kind of hardware or software setup is available on the user’s side. Therefore, the X3D scene should only include the high-level purpose of the sensor to give hint to the browser (or the user) to choose appropriate hardware or software on the user’s setup that could meet the intended use. The “purpose” field is used to describe such an intention of using the sensor. The browser can be preconfigured to use specific sensors available on the local setup for sensor nodes with a certain “purpose”. If there is no sensor assigned for the purpose, the browser can ask the user, at run-time, to choose an appropriate one from the list of sensors available in the local hardware/software setup. In this way, users can view the X3D scene with the best option of hardware/software sensors available in his/her environment.<br />
<br />
<br />
=== 2.1 CalibratedCameraSensor node ===<br />
The CalibratedCameraSensor node provides an interface to a camera device. The main information provided into the X3D scene through this node is an image stream captured with the camera. The ‘image’ field of the node provides the image stream captured with the camera device. In addition to the image stream, the node should also provide internal parameters of the camera for calibration of the Viewpoint parameters to achieve correct composition of the MR scene. Four fields (focalPoint, fieldOfView, fovMode, and aspectRatio) provides such parameters that corresponds to those fields used in the Viewpoint node. The detailed description of each field is in section 3 where the Viewpoint node is described.<br />
<br />
<pre><br />
CalibratedCameraSensor : X3DSensorNode {<br />
SFImage [out] image <br />
SFVec2f [out] focalPoint<br />
SFFloat [out] fieldOfView<br />
SFString [out] fovMode<br />
SFFloat [out] aspectRatio<br />
SFBool [in, out] isActive FALSE<br />
MFString [in,out] purpose USER_FACING (WORLD_FACING)<br />
}<br />
</pre><br />
<br />
When there are more than one camera device available, the browser should decide which device to map to the CalibratedCameraSensor node. The ‘purpose’ field provides hint to the browser in such case. The purpose field can have one of the following values <br />
<br />
TABLE HERE<br />
<br />
If the purpose field is empty, the browser should arbitrarily choose the camera device based on user preference either set in advance or interactively through the user interface (e.g. a dialog box).<br />
<br />
=== 2.2 TrackingSensor node ===<br />
The TrackingSensor node provides an interface for motion tracking information. The main information provided in this node is position and orientation of the tracked physical object. These values are provided through ‘position’ and ‘rotation’ fields respectively. The ‘isPositionAvailable’ and ‘isRotationAvailable’ fields are TRUE if the tracking target is successfully tracked and the values of the ‘position’ or ‘rotation’ field is valid.<br />
<br />
<pre><br />
TrackingSensor : X3DSensorNode {<br />
SFVec3f [out] position<br />
SFRotation [out] rotation<br />
SFBool [out] isPositionAvailable FALSE<br />
SFBool [out] isRotationAvailable FALSE<br />
SFBool [in, out] isActive FALSE<br />
MFString [in,out] purpose<br />
}<br />
</pre><br />
<br />
The “purpose” string field defines the intended use of the tracking sensor.<br />
<br />
TABLE HERE<br />
<br />
== 3. Rendering video stream from camera ==<br />
To visualize a MR scene, the video stream image acquired from a sensor node should be rendered in the virtual scene. For AR visualization, the video stream should be rendered as a background of the virtual environment, while in the AV visualization, the video stream is used as a texture of a virtual object.<br />
<br />
<br />
=== 3.1 Using video stream as a Texture ===<br />
For using the video stream image as a texture, no extension of the standard is needed. We can use the PixelTexture node, which is already available in the current version of the X3D specification. The video stream image from the SFImageSensor node’s “value” field can be routed to the “image” field of the PixelTexture node. The following example shows how this routing works.<br />
<br />
<pre><br />
<CalibratedCameraSensor DEF=”camera” /><br />
...<br />
<PixelTexture DEF=”tex” /><br />
...<br />
<ROUTE fromNode='camera' fromField='image' toNode='tex' toField='image'/><br />
</pre><br />
<br />
<br />
=== 3.2 Using video stream as a Background ===<br />
The Background nodes in the current X3D specification cover only environmental backgrounds in 3D space. Both Background and TextureBackground describes an environment around the user’s viewpoint, represented as a colored sphere around the user or a textured cube. In both cases the background of the virtual scene gets updated depending on the viewing direction of the user. However, for AR visualization, the background of the virtual scene should always show the video stream from the camera sensor.<br />
While the Background node and TextureBackground node represent a three dimensional environmental background around the user, the AR background should work as a two dimensional backdrop of the viewport where the 3D scene is rendered on. For this purpose we need a new node type that could represent these kinds of background that work as a 2D backdrop of the scene. We proposed two new nodes for this purpose: BackdropBackground and ImageBackdropBackground nodes. The node structure of these nodes are described as the following:<br />
<br />
<pre><br />
BackdropBackground: X3DBackgroundNode {<br />
SFColor [in,out] color<br />
MFString [in,out] url<br />
}<br />
<br />
ImageBackdropBackground: X3DBackgroundNode {<br />
SFColor [in,out] color<br />
SFImage [in,out] image<br />
}<br />
</pre><br />
<br />
While only ImageBackdropBackground is necessary for AR application, we also define BackdropBackground node as a higher level node that corresponds to the Background node.<br />
Again, feeding the video stream image from the camera sensor to the ImageBackdropBackround node can be achieved by routing the ‘image’ field of the CalibratedCameraSensor node to the ‘image’ field of the ImageBackdropBackground node.<br />
<br />
<pre><br />
<CalibratedCameraSensor DEF=”camera” /><br />
...<br />
<ImageBackdropBackground DEF=”bg” /><br />
...<br />
<ROUTE fromNode='camera' fromField='image' toNode=bg toField='image'/><br />
</pre><br />
<br />
The ImageBackdropBackround will automatically scale the image and fit the width or height of the image to that of the viewport while retaining the aspect ratio. As a result, the background image will fill the entire viewport so that there are no blank region left uncovered by the image background.<br />
<br />
== 4. Camera calibration ==<br />
To assure the virtual world appears correctly registered to the real world in the MR scene, the camera parameters of the virtual camera should be calibrated to match those of the real camera. There are two types of camera parameters: internal and external parameters. External parameters are position and orientation of the camera in the world reference frame, while the internal parameters represent the projection of the 3D scene onto a 2D plane to produce a rendered image of the 3D scene. <br />
The external parameters of a real camera is measured with tracking sensors, while the internal parameters are defined from the optical features of the real camera. Both external and internal parameters of the real camera can be fed into the X3D scene through the CalibratedCameraSensor node defined in section 1. <br />
The Viewpoint node in the X3D specification represents a virtual camera in the virtual scene. While the fields (or properties) of the Viewpoint node cover the full set of external parameters (position and orientation), it only has fields that cover limited aspects of the internal parameters. To meet the minimum requirements for achieving MR visualization, we propose adding the following fields (in bold font) to the Viewpoint node.<br />
<br />
<pre><br />
Viewpoint: X3DViewpointNode {<br />
SFVec3f [in,out] centerOfRotation<br />
SFFloat [in,out] fieldOfView<br />
SFRotation [in,out] orientation<br />
SFVec3f [in,out] position<br />
SFString [in,out] fovMode<br />
SFFloat [in,out] aspectRatio<br />
}<br />
</pre><br />
<br />
In the current X3D specification, the “fieldOfView” field represents minimum field of view (either vertical or horizontal) that the virtual camera will have. This is insufficient for MR visualization, which needs precise calibration of the field of view (FOV) parameter. While the straightforward way would be explicitly having both horizontal and vertical FOV parameters as individual fields, this is not compatible with the current specification. <br />
In order to keep backward compatibility with the current specification, we propose having a “fovMode” field which designates what does the value of the “fieldOfView” field represent. The “fovMode” field can have one of the following values: MINIMUM, VERTICAL, HORIZONTAL, or DIAGONAL. The value MINIMUM is the default value for the “fovMode” field which represents the value of the “fieldOfView” is considered as a minimum FOV (either vertical or horizontal), as it is in the current specification. When the “fovMode” field has the value of VERTICAL, HORIZONTAL, or DIAGONAL, the “fieldOfView” is considered as specific values of FOV in vertical, horizontal, or diagonal direction, respectively.<br />
In addition to the “fovMode” field, the aspect ratio of the FOV in real cameras might not necessarily follow the aspect ratio of the image size it produces. To accommodate this feature, the “aspectRatio” field is introduced which represents the ratio of vertical FOV to the horizontal FOV (vertical/horizontal).<br />
<br />
== 5. Use cases ==<br />
<br />
The following example X3D scene shows how to describe a simple AR scene using the proposed nodes.<br />
<pre><br />
...<br />
<CalibratedCameraSensor DEF=”camera” /><br />
<br />
<ImageBackdropBackground DEF=”bg” /><br />
<ROUTE fromNode=”camera” fromField=”value” toNode=”bg” toField=”image”/><br />
<br />
<Viewpoint DEF=”arview” position=”0 0 0” /><br />
<ROUTE fromNode=”camera” fromField=”fieldOfView” toNode=”arview” toField=”fieldOfView”/><br />
<ROUTE fromNode=”camera” fromField=”fovMode” toNode=”arview” toField=”fovMode”/><br />
<ROUTE fromNode=”camera” fromField=”aspectRatio” toNode=”arview” toField=”aspectRatio”/><br />
<br />
<br />
<TrackingSensor DEF=”tracker1” purpose=”urn:web3d:tracking_sensor_purpose:object_to_viewpoint” /><br />
<br />
<Transform DEF=”tracked_object”> <br />
<Shape><br />
<Appearance><Material diffuseColor="1 0 0" /></Appearance> <br />
<Box /> <br />
</Shape> <br />
</Transform> <br />
<br />
<ROUTE fromNode=”tracker1” fromField=”position” toNode=”tracked_object” toField=”position”/><br />
<ROUTE fromNode=”tracker1” fromField=”rotation” toNode=”tracked_object” toField=”rotation”/><br />
…<br />
</pre></div>Endoverthttps://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review&diff=6621AR Proposal Public Review2013-02-22T00:33:41Z<p>Endovert: </p>
<hr />
<div>- Working Draft<br />
<br />
By [[X3D and Augmented Reality|Augmented Reality Working Group]], Web3D Consortium<br />
<br />
Feb 22, 2013<br />
<br />
The Augmented Reality Continuum Working Group (ARC WG) has been developing a proposal for extending X3D to support augmented and mixed reality visualization. As the proposal is reaching the state of its completion, the working group has decided to open the proposal into public and collect feedbacks from others, including our Web3D members, other working groups, and generally from anyone who is interested in AR and Web3D technology. The ARC WG would like to welcome all kinds of feedback that would be helpful to consolidate the proposal and advance into next level of extending the X3D specification to support AR and MR visualization.<br />
<br />
- Review Duration: Feb 27 ~ Apr 10, 2013<br />
- Where to send your comments: TBA<br />
<br />
<br />
<br />
= Extending X3D for MR Visualization - Unified Proposal =<br />
<br />
== 1. Introduction ==<br />
<br />
This document describes an overview of the unified proposal for extending the X3D standard to support Mixed Reality (MR) visualization. Mixed Reality includes both, Augmented Reality (AR) and Augmented Virtuality (AV). The extension of the X3D standard proposed in this document is based on the comparison of three proposals from Web3D Korea Chapter and Fraunhofer IGD. The details of the comparison can be found in the following public wiki page: http://web3d.org/x3d/wiki/index.php/Comparison_of_X3D_AR_Proposals.<br />
<br />
In this document we focus on the three main components that are necessary to achieve basic MR visualization: sensors, video stream rendering, and camera calibration. We try to minimize the changes to the current specification, but also try to make the solution to be generic enough so that it could be applied to various future applications besides MR visualization.<br />
<br />
In order to focus on consolidating the fundamental features, we leave out the following items/functions from the original proposals as future work.<br />
- High-level events for tracking from proposal KC2<br />
- Supporting color keying in texture from proposal KC1<br />
- Supporting correct occlusion between virtual and physical objects (Ghost object from Proposal KC1 and Color Mask + sortKey from IR)<br />
<br />
== 2. Sensors ==<br />
To achieve MR visualization, sensing the real environment is crucial. Two types of sensor that are necessary to support MR visualization are those for acquiring video stream images from a real camera and motion tracking information of physical objects. While the sensors could be generalized to acquiring any type of information from the real world, in this proposal, we focus on these two sensors that are crucial for MR visualization.<br />
<br />
Two new nodes, CalibratedCameraSensor and TrackingSensor nodes, are proposed for representing interfaces for these types of sensors.<br />
<br />
Since different end users view the X3D scene on different browsers and have different setups of hardware and software, it is not appropriate to describe specific devices or tracking technology to use within the scene. In fact, the author of the X3D scene can have no knowledge of what kind of hardware or software setup is available on the user’s side. Therefore, the X3D scene should only include the high-level purpose of the sensor to give hint to the browser (or the user) to choose appropriate hardware or software on the user’s setup that could meet the intended use. The “purpose” field is used to describe such an intention of using the sensor. The browser can be preconfigured to use specific sensors available on the local setup for sensor nodes with a certain “purpose”. If there is no sensor assigned for the purpose, the browser can ask the user, at run-time, to choose an appropriate one from the list of sensors available in the local hardware/software setup. In this way, users can view the X3D scene with the best option of hardware/software sensors available in his/her environment.<br />
<br />
<br />
=== 2.1 CalibratedCameraSensor node ===<br />
The CalibratedCameraSensor node provides an interface to a camera device. The main information provided into the X3D scene through this node is an image stream captured with the camera. The ‘image’ field of the node provides the image stream captured with the camera device. In addition to the image stream, the node should also provide internal parameters of the camera for calibration of the Viewpoint parameters to achieve correct composition of the MR scene. Four fields (focalPoint, fieldOfView, fovMode, and aspectRatio) provides such parameters that corresponds to those fields used in the Viewpoint node. The detailed description of each field is in section 3 where the Viewpoint node is described.<br />
<br />
<pre><br />
CalibratedCameraSensor : X3DSensorNode {<br />
SFImage [out] image <br />
SFVec2f [out] focalPoint<br />
SFFloat [out] fieldOfView<br />
SFString [out] fovMode<br />
SFFloat [out] aspectRatio<br />
SFBool [in, out] isActive FALSE<br />
MFString [in,out] purpose USER_FACING (WORLD_FACING)<br />
}<br />
</pre><br />
<br />
When there are more than one camera device available, the browser should decide which device to map to the CalibratedCameraSensor node. The ‘purpose’ field provides hint to the browser in such case. The purpose field can have one of the following values <br />
<br />
TABLE HERE<br />
<br />
If the purpose field is empty, the browser should arbitrarily choose the camera device based on user preference either set in advance or interactively through the user interface (e.g. a dialog box).<br />
<br />
=== 2.2 TrackingSensor node ===<br />
The TrackingSensor node provides an interface for motion tracking information. The main information provided in this node is position and orientation of the tracked physical object. These values are provided through ‘position’ and ‘rotation’ fields respectively. The ‘isPositionAvailable’ and ‘isRotationAvailable’ fields are TRUE if the tracking target is successfully tracked and the values of the ‘position’ or ‘rotation’ field is valid.<br />
<br />
<pre><br />
TrackingSensor : X3DSensorNode {<br />
SFVec3f [out] position<br />
SFRotation [out] rotation<br />
SFBool [out] isPositionAvailable FALSE<br />
SFBool [out] isRotationAvailable FALSE<br />
SFBool [in, out] isActive FALSE<br />
MFString [in,out] purpose<br />
}<br />
</pre><br />
<br />
The “purpose” string field defines the intended use of the tracking sensor.<br />
<br />
TABLE HERE<br />
<br />
== 3. Rendering video stream from camera ==<br />
To visualize a MR scene, the video stream image acquired from a sensor node should be rendered in the virtual scene. For AR visualization, the video stream should be rendered as a background of the virtual environment, while in the AV visualization, the video stream is used as a texture of a virtual object.<br />
<br />
<br />
=== 3.1 Using video stream as a Texture ===<br />
For using the video stream image as a texture, no extension of the standard is needed. We can use the PixelTexture node, which is already available in the current version of the X3D specification. The video stream image from the SFImageSensor node’s “value” field can be routed to the “image” field of the PixelTexture node. The following example shows how this routing works.<br />
<br />
<pre><br />
<CalibratedCameraSensor DEF=”camera” /><br />
...<br />
<PixelTexture DEF=”tex” /><br />
...<br />
<ROUTE fromNode='camera' fromField='image' toNode='tex' toField='image'/><br />
</pre><br />
<br />
<br />
=== 3.2 Using video stream as a Background ===<br />
The Background nodes in the current X3D specification cover only environmental backgrounds in 3D space. Both Background and TextureBackground describes an environment around the user’s viewpoint, represented as a colored sphere around the user or a textured cube. In both cases the background of the virtual scene gets updated depending on the viewing direction of the user. However, for AR visualization, the background of the virtual scene should always show the video stream from the camera sensor.<br />
While the Background node and TextureBackground node represent a three dimensional environmental background around the user, the AR background should work as a two dimensional backdrop of the viewport where the 3D scene is rendered on. For this purpose we need a new node type that could represent these kinds of background that work as a 2D backdrop of the scene. We proposed two new nodes for this purpose: BackdropBackground and ImageBackdropBackground nodes. The node structure of these nodes are described as the following:<br />
<br />
<pre><br />
BackdropBackground: X3DBackgroundNode {<br />
SFColor [in,out] color<br />
MFString [in,out] url<br />
}<br />
<br />
ImageBackdropBackground: X3DBackgroundNode {<br />
SFColor [in,out] color<br />
SFImage [in,out] image<br />
}<br />
</pre><br />
<br />
While only ImageBackdropBackground is necessary for AR application, we also define BackdropBackground node as a higher level node that corresponds to the Background node.<br />
Again, feeding the video stream image from the camera sensor to the ImageBackdropBackround node can be achieved by routing the ‘image’ field of the CalibratedCameraSensor node to the ‘image’ field of the ImageBackdropBackground node.<br />
<br />
<pre><br />
<CalibratedCameraSensor DEF=”camera” /><br />
...<br />
<ImageBackdropBackground DEF=”bg” /><br />
...<br />
<ROUTE fromNode='camera' fromField='image' toNode=bg toField='image'/><br />
</pre><br />
<br />
The ImageBackdropBackround will automatically scale the image and fit the width or height of the image to that of the viewport while retaining the aspect ratio. As a result, the background image will fill the entire viewport so that there are no blank region left uncovered by the image background.<br />
<br />
== 4. Camera calibration ==<br />
To assure the virtual world appears correctly registered to the real world in the MR scene, the camera parameters of the virtual camera should be calibrated to match those of the real camera. There are two types of camera parameters: internal and external parameters. External parameters are position and orientation of the camera in the world reference frame, while the internal parameters represent the projection of the 3D scene onto a 2D plane to produce a rendered image of the 3D scene. <br />
The external parameters of a real camera is measured with tracking sensors, while the internal parameters are defined from the optical features of the real camera. Both external and internal parameters of the real camera can be fed into the X3D scene through the CalibratedCameraSensor node defined in section 1. <br />
The Viewpoint node in the X3D specification represents a virtual camera in the virtual scene. While the fields (or properties) of the Viewpoint node cover the full set of external parameters (position and orientation), it only has fields that cover limited aspects of the internal parameters. To meet the minimum requirements for achieving MR visualization, we propose adding the following fields (in bold font) to the Viewpoint node.<br />
<br />
<pre><br />
Viewpoint: X3DViewpointNode {<br />
SFVec3f [in,out] centerOfRotation<br />
SFFloat [in,out] fieldOfView<br />
SFRotation [in,out] orientation<br />
SFVec3f [in,out] position<br />
SFString [in,out] fovMode<br />
SFFloat [in,out] aspectRatio<br />
}<br />
</pre><br />
<br />
In the current X3D specification, the “fieldOfView” field represents minimum field of view (either vertical or horizontal) that the virtual camera will have. This is insufficient for MR visualization, which needs precise calibration of the field of view (FOV) parameter. While the straightforward way would be explicitly having both horizontal and vertical FOV parameters as individual fields, this is not compatible with the current specification. <br />
In order to keep backward compatibility with the current specification, we propose having a “fovMode” field which designates what does the value of the “fieldOfView” field represent. The “fovMode” field can have one of the following values: MINIMUM, VERTICAL, HORIZONTAL, or DIAGONAL. The value MINIMUM is the default value for the “fovMode” field which represents the value of the “fieldOfView” is considered as a minimum FOV (either vertical or horizontal), as it is in the current specification. When the “fovMode” field has the value of VERTICAL, HORIZONTAL, or DIAGONAL, the “fieldOfView” is considered as specific values of FOV in vertical, horizontal, or diagonal direction, respectively.<br />
In addition to the “fovMode” field, the aspect ratio of the FOV in real cameras might not necessarily follow the aspect ratio of the image size it produces. To accommodate this feature, the “aspectRatio” field is introduced which represents the ratio of vertical FOV to the horizontal FOV (vertical/horizontal).<br />
<br />
== 5. Use cases ==<br />
<br />
The following example X3D scene shows how to describe a simple AR scene using the proposed nodes.<br />
<pre><br />
...<br />
<CalibratedCameraSensor DEF=”camera” /><br />
<br />
<ImageBackdropBackground DEF=”bg” /><br />
<ROUTE fromNode=”camera” fromField=”value” toNode=”bg” toField=”image”/><br />
<br />
<Viewpoint DEF=”arview” position=”0 0 0” /><br />
<ROUTE fromNode=”camera” fromField=”fieldOfView” toNode=”arview” toField=”fieldOfView”/><br />
<ROUTE fromNode=”camera” fromField=”fovMode” toNode=”arview” toField=”fovMode”/><br />
<ROUTE fromNode=”camera” fromField=”aspectRatio” toNode=”arview” toField=”aspectRatio”/><br />
<br />
<br />
<TrackingSensor DEF=”tracker1” purpose=”urn:web3d:tracking_sensor_purpose:object_to_viewpoint” /><br />
<br />
<Transform DEF=”tracked_object”> <br />
<Shape><br />
<Appearance><Material diffuseColor="1 0 0" /></Appearance> <br />
<Box /> <br />
</Shape> <br />
</Transform> <br />
<br />
<ROUTE fromNode='tracker1' fromField='position' toNode='tracked_object' toField='translation'/><br />
<ROUTE fromNode='tracker1' fromField='rotation' toNode='tracked_object' toField='rotation'/><br />
…<br />
</pre></div>Endoverthttps://www.web3d.org/wiki/index.php?title=AR_Proposal_Public_Review&diff=6620AR Proposal Public Review2013-02-22T00:22:32Z<p>Endovert: Created page with "- Working Draft By Augmented Reality Working Group, Web3D Consortium Feb 22, 2013 The Augmented Reality Continuum Working Group has been devel..."</p>
<hr />
<div>- Working Draft<br />
<br />
By [[X3D and Augmented Reality|Augmented Reality Working Group]], Web3D Consortium<br />
<br />
Feb 22, 2013<br />
<br />
The Augmented Reality Continuum Working Group has been developing a proposal for extending X3D to support augmented and mixed reality visualization.<br />
<br />
<br />
= Extending X3D for MR Visualization - Unified Proposal =<br />
<br />
== 1. Introduction ==<br />
<br />
This document describes an overview of the unified proposal for extending the X3D standard to support Mixed Reality (MR) visualization. Mixed Reality includes both Augmented Reality (AR) and Augmented Virtuality (AV). The extension of the X3D standard proposed in this document is based on a comparison of three proposals from the Web3D Korea Chapter and Fraunhofer IGD. The details of the comparison can be found on the following public wiki page: http://web3d.org/x3d/wiki/index.php/Comparison_of_X3D_AR_Proposals.<br />
<br />
In this document we focus on the three main components that are necessary to achieve basic MR visualization: sensors, video stream rendering, and camera calibration. We try to minimize the changes to the current specification, but also try to make the solution generic enough that it can be applied to various future applications beyond MR visualization.<br />
<br />
In order to focus on consolidating the fundamental features, we leave out the following items/functions from the original proposals as future work.<br />
- High-level events for tracking from proposal KC2<br />
- Supporting color keying in texture from proposal KC1<br />
- Supporting correct occlusion between virtual and physical objects (Ghost object from Proposal KC1 and Color Mask + sortKey from IR)<br />
<br />
== 2. Sensors ==<br />
To achieve MR visualization, sensing the real environment is crucial. Two types of sensors are necessary to support MR visualization: those for acquiring video stream images from a real camera, and those for acquiring motion tracking information of physical objects. While sensors could be generalized to acquire any type of information from the real world, in this proposal we focus on these two types, which are crucial for MR visualization.<br />
<br />
Two new nodes, CalibratedCameraSensor and TrackingSensor, are proposed as interfaces to these types of sensors.<br />
<br />
Since different end users view the X3D scene in different browsers and have different hardware and software setups, it is not appropriate to specify within the scene which devices or tracking technology to use. In fact, the author of the X3D scene can have no knowledge of what kind of hardware or software setup is available on the user’s side. Therefore, the X3D scene should only include the high-level purpose of the sensor, giving the browser (or the user) a hint for choosing appropriate hardware or software in the user’s setup that meets the intended use. The “purpose” field is used to describe such an intention. The browser can be preconfigured to use specific sensors available in the local setup for sensor nodes with a certain “purpose”. If no sensor is assigned for the purpose, the browser can ask the user, at run time, to choose an appropriate one from the list of sensors available in the local hardware/software setup. In this way, users can view the X3D scene with the best hardware/software sensors available in their environment.<br />
<br />
<br />
=== 2.1 CalibratedCameraSensor node ===<br />
The CalibratedCameraSensor node provides an interface to a camera device. The main information provided to the X3D scene through this node is the image stream captured by the camera; the ‘image’ field carries this stream. In addition to the image stream, the node should also provide the internal parameters of the camera for calibrating the Viewpoint parameters, to achieve correct composition of the MR scene. Four fields (focalPoint, fieldOfView, fovMode, and aspectRatio) provide these parameters, corresponding to the fields used in the Viewpoint node. The detailed description of each field is in section 4, where the Viewpoint node is described.<br />
<br />
<pre><br />
CalibratedCameraSensor : X3DSensorNode {<br />
SFImage [out] image <br />
SFVec2f [out] focalPoint<br />
SFFloat [out] fieldOfView<br />
SFString [out] fovMode<br />
SFFloat [out] aspectRatio<br />
SFBool [in, out] isActive FALSE<br />
MFString [in,out] purpose USER_FACING (WORLD_FACING)<br />
}<br />
</pre><br />
<br />
When more than one camera device is available, the browser should decide which device to map to the CalibratedCameraSensor node. The ‘purpose’ field provides a hint to the browser in such cases. The ‘purpose’ field can have one of the following values: <br />
<br />
TABLE HERE<br />
<br />
If the purpose field is empty, the browser should choose the camera device based on user preference, either set in advance or expressed interactively through the user interface (e.g. a dialog box).<br />
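For example (a minimal sketch, assuming the WORLD_FACING value listed in the field declaration above), a scene that wants imagery of the user’s surroundings, e.g. from the rear camera of a handheld device, might declare:<br />
<br />
<pre><br />
<CalibratedCameraSensor DEF='worldCam' purpose='"WORLD_FACING"' /><br />
</pre><br />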
<br />
=== 2.2 TrackingSensor node ===<br />
The TrackingSensor node provides an interface for motion tracking information. The main information provided by this node is the position and orientation of the tracked physical object, delivered through the ‘position’ and ‘rotation’ fields respectively. The ‘isPositionAvailable’ and ‘isRotationAvailable’ fields are TRUE if the tracking target is successfully tracked and the values of the ‘position’ and ‘rotation’ fields are valid.<br />
<br />
<pre><br />
TrackingSensor : X3DSensorNode {<br />
SFVec3f [out] position<br />
SFRotation [out] rotation<br />
SFBool [out] isPositionAvailable FALSE<br />
SFBool [out] isRotationAvailable FALSE<br />
SFBool [in, out] isActive FALSE<br />
MFString [in,out] purpose<br />
}<br />
</pre><br />
<br />
The “purpose” string field defines the intended use of the tracking sensor.<br />
<br />
TABLE HERE<br />
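As an example (a sketch reusing the purpose URN that appears in the use case of section 5, since the table of values is still pending), a sensor tracking a physical object relative to the viewpoint might be declared as:<br />
<br />
<pre><br />
<TrackingSensor DEF='objectTracker' purpose='"urn:web3d:tracking_sensor_purpose:object_to_viewpoint"' /><br />
</pre><br />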
<br />
== 3. Rendering video stream from camera ==<br />
To visualize an MR scene, the video stream image acquired from a sensor node should be rendered in the virtual scene. For AR visualization, the video stream should be rendered as the background of the virtual environment, while for AV visualization, the video stream is used as a texture on a virtual object.<br />
<br />
<br />
=== 3.1 Using video stream as a Texture ===<br />
For using the video stream image as a texture, no extension of the standard is needed. We can use the PixelTexture node, which is already available in the current version of the X3D specification. The video stream image from the CalibratedCameraSensor node’s “image” field can be routed to the “image” field of the PixelTexture node. The following example shows how this routing works.<br />
<br />
<pre><br />
<CalibratedCameraSensor DEF='camera' /><br />
...<br />
<PixelTexture DEF='tex' /><br />
...<br />
<ROUTE fromNode='camera' fromField='image' toNode='tex' toField='image'/><br />
</pre><br />
<br />
<br />
=== 3.2 Using video stream as a Background ===<br />
The Background nodes in the current X3D specification cover only environmental backgrounds in 3D space. Both Background and TextureBackground describe an environment around the user’s viewpoint, represented as a colored sphere or a textured cube around the user. In both cases the background of the virtual scene gets updated depending on the viewing direction of the user. However, for AR visualization, the background of the virtual scene should always show the video stream from the camera sensor.<br />
While the Background and TextureBackground nodes represent a three-dimensional environmental background around the user, the AR background should work as a two-dimensional backdrop of the viewport on which the 3D scene is rendered. For this purpose we need a new node type that represents this kind of background. We propose two new nodes: the BackdropBackground and ImageBackdropBackground nodes. Their node structures are described as follows:<br />
<br />
<pre><br />
BackdropBackground: X3DBackgroundNode {<br />
SFColor [in,out] color<br />
MFString [in,out] url<br />
}<br />
<br />
ImageBackdropBackground: X3DBackgroundNode {<br />
SFColor [in,out] color<br />
SFImage [in,out] image<br />
}<br />
</pre><br />
<br />
While only the ImageBackdropBackground node is necessary for AR applications, we also define the BackdropBackground node as a higher-level node that corresponds to the Background node.<br />
Again, feeding the video stream image from the camera sensor to the ImageBackdropBackground node is achieved by routing the ‘image’ field of the CalibratedCameraSensor node to the ‘image’ field of the ImageBackdropBackground node.<br />
<br />
<pre><br />
<CalibratedCameraSensor DEF='camera' /><br />
...<br />
<ImageBackdropBackground DEF='bg' /><br />
...<br />
<ROUTE fromNode='camera' fromField='image' toNode='bg' toField='image'/><br />
</pre><br />
<br />
The ImageBackdropBackground node will automatically scale the image, fitting the width or height of the image to that of the viewport while retaining the aspect ratio. As a result, the background image fills the entire viewport, leaving no blank region uncovered by the image background.<br />
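For example, a 640×480 camera image shown in an 800×480 viewport would be scaled by a factor of max(800/640, 480/480) = 1.25 to 800×600 pixels, filling the viewport width; the 120 overflowing pixel rows would be cropped (this cropping is our reading of the scale-to-fill rule above, not normative text).<br />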
<br />
== 4. Camera calibration ==<br />
To ensure that the virtual world appears correctly registered to the real world in the MR scene, the camera parameters of the virtual camera should be calibrated to match those of the real camera. There are two types of camera parameters: internal and external. The external parameters are the position and orientation of the camera in the world reference frame, while the internal parameters describe the projection of the 3D scene onto a 2D plane to produce a rendered image of the 3D scene. <br />
The external parameters of a real camera are measured with tracking sensors, while the internal parameters are determined by the optical characteristics of the real camera. Both external and internal parameters of the real camera can be fed into the X3D scene through the CalibratedCameraSensor node defined in section 2. <br />
The Viewpoint node in the X3D specification represents a virtual camera in the virtual scene. While the fields (or properties) of the Viewpoint node cover the full set of external parameters (position and orientation), they cover only limited aspects of the internal parameters. To meet the minimum requirements for achieving MR visualization, we propose adding the following two fields (fovMode and aspectRatio) to the Viewpoint node.<br />
<br />
<pre><br />
Viewpoint: X3DViewpointNode {<br />
SFVec3f [in,out] centerOfRotation<br />
SFFloat [in,out] fieldOfView<br />
SFRotation [in,out] orientation<br />
SFVec3f [in,out] position<br />
SFString [in,out] fovMode<br />
SFFloat [in,out] aspectRatio<br />
}<br />
</pre><br />
<br />
In the current X3D specification, the “fieldOfView” field represents the minimum field of view (either vertical or horizontal) that the virtual camera will have. This is insufficient for MR visualization, which requires precise calibration of the field of view (FOV). While the straightforward approach would be to have explicit horizontal and vertical FOV parameters as individual fields, this would not be compatible with the current specification. <br />
In order to keep backward compatibility with the current specification, we propose a “fovMode” field which designates what the value of the “fieldOfView” field represents. The “fovMode” field can have one of the following values: MINIMUM, VERTICAL, HORIZONTAL, or DIAGONAL. MINIMUM is the default value, under which the value of “fieldOfView” is interpreted as a minimum FOV (either vertical or horizontal), as in the current specification. When “fovMode” has the value VERTICAL, HORIZONTAL, or DIAGONAL, “fieldOfView” is interpreted as the specific FOV in the vertical, horizontal, or diagonal direction, respectively.<br />
In addition to the “fovMode” field, we introduce an “aspectRatio” field, because the aspect ratio of the FOV of a real camera does not necessarily follow the aspect ratio of the image it produces. The “aspectRatio” field represents the ratio of the vertical FOV to the horizontal FOV (vertical/horizontal).<br />
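As an illustrative sketch of the proposed fields (the numeric values here are arbitrary, not normative), a calibrated viewpoint might be written as:<br />
<br />
<pre><br />
<Viewpoint DEF='calibratedView' position='0 0 0' fieldOfView='0.7' fovMode='VERTICAL' aspectRatio='0.75' /><br />
</pre><br />
<br />
Here the “fieldOfView” value (0.7 rad) is interpreted as the vertical FOV, and the vertical/horizontal FOV ratio is 0.75.<br />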
<br />
== 5. Use cases ==<br />
<br />
The following example X3D scene shows how to describe a simple AR scene using the proposed nodes.<br />
<pre><br />
...<br />
<CalibratedCameraSensor DEF='camera' /><br />
<br />
<ImageBackdropBackground DEF='bg' /><br />
<ROUTE fromNode='camera' fromField='image' toNode='bg' toField='image'/><br />
<br />
<Viewpoint DEF='arview' position='0 0 0' /><br />
<ROUTE fromNode='camera' fromField='fieldOfView' toNode='arview' toField='fieldOfView'/><br />
<ROUTE fromNode='camera' fromField='fovMode' toNode='arview' toField='fovMode'/><br />
<ROUTE fromNode='camera' fromField='aspectRatio' toNode='arview' toField='aspectRatio'/><br />
<br />
<br />
<TrackingSensor DEF='tracker1' purpose='"urn:web3d:tracking_sensor_purpose:object_to_viewpoint"' /><br />
<br />
<Transform DEF=”tracked_object”> <br />
<Shape><br />
<Appearance><Material diffuseColor="1 0 0" /></Appearance> <br />
<Box /> <br />
</Shape> <br />
</Transform> <br />
<br />
<ROUTE fromNode='tracker1' fromField='position' toNode='tracked_object' toField='translation'/><br />
<ROUTE fromNode='tracker1' fromField='rotation' toNode='tracked_object' toField='rotation'/><br />
…<br />
</pre></div>Endoverthttps://www.web3d.org/wiki/index.php?title=Plans_for_Merging_X3D_AR_Proposals&diff=6521Plans for Merging X3D AR Proposals2013-01-28T00:10:03Z<p>Endovert: </p>
<hr />
<div>This page is for discussing plans for merging X3D AR proposals, compared in [[Comparison of X3D AR Proposals]].<br />
<br />
These are the steps we will take as a process of merging the X3D AR proposals:<br />
<br />
1. Discuss general strategy/policy/guidelines<br />
* Are there general design guidelines at the Web3D Consortium level? → Not in explicit form, but there are some in the concept sections of node specifications.<br />
* Notes from Don and Dick:<br />
** When a new field is added to a node, carefully choose a default value for backward compatibility.<br />
** For consistency, mixing multiple functions into a single node is not recommended.<br />
** Device independence is taken for granted.<br />
** For identifying devices, URNs could be a proper way to describe them.<br />
** Metric for names - We have the right name if nobody asks about it anymore.<br />
** Start with designing abstract functionality first and then move on to the node specification.<br />
** Build examples/use cases and use them as a test for integrity.<br />
* Rule of thumb<br />
** From the scene writer's point of view, preferably less code for commonly/frequently used features, with detailed control available for special cases.<br />
** From the user's (viewer's) point of view, the scene should be adapted to the hardware/software environment (tracker, camera device, browser, etc.) the given user has. In other words, the scene writer should not specify the hardware/software environment in the scene, since that is on the user's side.<br />
<br />
2. Produce a merged proposal for each functional component<br />
* Investigate each functional feature stepwise: [[Discussions for Merging X3D AR Proposals]]<br />
** Camera video stream image into the scene (texture and background)<br />
** Tracking (including support for general tracking devices)<br />
** Camera calibration (viewpoints)<br />
** Others (color-keying, depth occlusion)<br />
<br />
[https://docs.google.com/document/d/1BXkeaj0RVdoLpj-vBzuVRlpgF7UbQDNtpTCUF5ocQqw/edit Merged proposal - Working Draft]<br />
<br />
3. Check Integrity of the merged proposal<br />
* Check and resolve conflicts between individual functional components<br />
* Merge overlapping features between individual functional components<br />
<br />
4. Write specification<br />
<br />
5. Review</div>Endoverthttps://www.web3d.org/wiki/index.php?title=Discussions_for_Merging_X3D_AR_Proposals&diff=6446Discussions for Merging X3D AR Proposals2012-11-22T00:55:43Z<p>Endovert: </p>
<hr />
<div>As described in [[Plans for Merging X3D AR Proposals]], here we discuss and produce a merged proposal for each functional component by investigating each functional feature stepwise.<br />
<br />
= 1. Camera video stream image into the scene (texture and background) =<br />
== Node structure ==<br />
There are three options to choose from when designing the new node structure for supporting camera video streams in an X3D scene.<br />
<br />
Option 1. Describe sensors explicitly<br />
* Define a node that represents the camera/image sensor, then route its output to other nodes (e.g. the PixelTexture node, or a new Background node such as ImageBackground or MovieBackground)<br />
All three proposals KC1, KC2 and IR support this model with slightly different details.<br />
<br />
* Pros.<br />
** Open for using it in other purposes in the future (more extensible)<br />
<br />
* Cons.<br />
** Relatively more complicated to write scenes and implement browsers<br />
<br />
<br />
Option 2. Describe sensors implicitly<br />
* Define a node that represents "background" or "texture" that is dedicated to showing user media (either from a camera device or a user selected file.)<br />
KC1 proposes this option as an alternative with simpler structure for browser implementation and scene writing.<br />
<br />
* Pros.<br />
** Simpler from the content creator's perspective<br />
** Easier to implement and test, since there is less interaction with other nodes<br />
<br />
* Cons.<br />
** Single-purpose node, which might not be usable for other purposes<br />
<br />
<br />
<br />
Option 3. Allowing both<br />
* Pros.<br />
** Lets the user choose the option that meets their needs<br />
<br />
* Cons.<br />
** Cost for browser developers to implement both<br />
<br />
<br />
<br />
== Selecting video source ==<br />
* Reference: Adobe Flash and HTML5 getUserMedia() API<br />
The scene writer doesn't know about the hardware setup on the scene viewer's side, and accessing the camera on the user's device could be a privacy issue.<br />
Both Adobe Flash and HTML5 deal with this by asking the user to allow the browser to use the camera input.<br />
In addition, they also ask which camera or video file to use.<br />
<br />
<br />
= 2. Tracking (including support for general tracking devices) =<br />
Similar to selecting a video source, the tracking device configuration is unknown to the scene writer, hence it should be taken care of by the browser on the user's side.<br />
In that sense, X3D nodes should just provide an interface to receive tracking results, which are basically transform information.<br />
<br />
Accordingly, a special transform node could be defined; when a browser detects this node, it should automatically map it to an available tracker or ask the user to choose which one to use.<br />
<br />
<TrackedTransform type="PositionAndOrientation" target="hand" /><br />
<br />
URN classes could be developed to categorize the tracking targets (e.g. hand, head, viewpoint, etc.), making it easier for users to identify which tracking device to choose.<br />
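As an illustrative sketch (the TrackedTransform node and this URN value are discussion material only, not an agreed proposal), such a categorized target could look like:<br />
<br />
<TrackedTransform type="PositionAndOrientation" target="urn:web3d:tracking_target:hand" /><br />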
<br />
<br />
= 3. Camera calibration (viewpoints) =<br />
= 4. Others (color-keying, depth occlusion) =</div>Endoverthttps://www.web3d.org/wiki/index.php?title=X3D_AR_Requirements_and_Use_cases&diff=6390X3D AR Requirements and Use cases2012-10-23T21:46:46Z<p>Endovert: </p>
<hr />
<div>= Requirements and use cases of X3D functions to support AR and MR visualization =<br />
<br />
By [http://www.web3d.org/x3d/wiki/index.php/X3D_and_Augmented_Reality Augmented Reality Working Group], Web3D Consortium<br />
<br />
August 17, 2011<br />
<br />
Last update: June 20, 2012<br />
<br />
== 1. Requirements ==<br />
=== 1.1 Functional Requirements ===<br />
The new set of X3D specifications for supporting AR and MR visualization must include the following functions and features:<br />
<br />
* Use live video stream as a texture in the X3D scene.<br />
<br />
* Use live video stream as a background of the X3D scene.<br />
<br />
* Retrieve tracking information of the position and orientation of physical objects (such as the camera device and markers).<br />
<br />
* Use tracking information to change the position and orientation of arbitrary nodes in the X3D scene.<br />
<br />
* Synchronization between video image and tracking information.<br />
<br />
* Retrieve calibration information of the camera device providing the video stream.<br />
<br />
* Use calibration information to set properties of (virtual) camera nodes.<br />
<br />
* Specify a key color for chroma keying the live video stream texture, making pixels of that color appear transparent.<br />
<br />
* Specify a group of nodes as representatives of physical objects, and render those nodes into the depth buffer but not into the color buffer. As a result, the background video is revealed in the parts where physical objects are rendered, showing correct occlusion between physical and virtual objects.<br />
<br />
=== 1.2 Non-functional Requirements ===<br />
The new set of X3D specifications for supporting AR and MR visualization must meet the following guidelines:<br />
<br />
* Try to reuse/extend existing nodes as much as possible<br />
<br />
In order to guarantee backward compatibility, specify the default value/behavior for each new field/feature.<br />
For consistency, mixing multiple functions into a single node should be avoided.<br />
<br />
* Device independence must be kept<br />
<br />
The scene description should be independent of the hardware/software environment (type of tracker, camera device, browser, etc.).<br />
The detailed hardware configuration should be adapted to, or reconfigured for, the user’s hardware/software environment.<br />
The scene description should only specify the generic type/role of an interface (e.g. position tracker, orientation tracker, video source).<br />
Devices should be identified by high-level features (usage or generic setup, e.g. main camera, front-facing camera, back-facing camera), not by low-level features (e.g. UUID, device number, port).<br />
<br />
* Balance between simplicity and detailed control<br />
Specify default values/behaviors to provide simplicity while keeping detailed control available.<br />
Follow the naming conventions of the current specification.<br />
<br />
* New features must include examples/use cases that validate their compatibility with other features.<br />
<br />
== 2. Use cases ==<br />
The functions and features could be used in the following use cases:<br />
<br />
- Augmented Reality applications, where live video stream is shown on the background and the 3D scene is shown as registered in the physical space of the live video stream. (Correct occlusion between virtual and physical objects can be achieved by preparing 3D models of physical objects and specifying them as a representative of physical objects.)<br />
<br />
- Augmented Virtuality (or virtual studio) applications, where live video stream of physical objects can be placed within the 3D scene. (Only the foreground objects can be shown in the live video stream, if the scene in the video is prepared with color matte on the background.)<br />
<br />
[http://web3d.org/x3d/wiki/images/6/62/ARWG-Requirements_and_Usecases.pdf pdf version]</div>Endoverthttps://www.web3d.org/wiki/index.php?title=Discussions_for_Merging_X3D_AR_Proposals&diff=6386Discussions for Merging X3D AR Proposals2012-10-18T23:06:58Z<p>Endovert: </p>
<hr />
<div>As described in [[Plans for Merging X3D AR Proposals]], here we discuss and produce a merged proposal for each functional component by investigating each functional feature stepwise.<br />
<br />
1. Camera video stream image into the scene (texture and background)<br />
* New node structure for supporting live camera video stream as a background or texture.<br />
<br />
Option 1. Explicit<br />
Defining a node that represents the camera/image sensor, then routing it to other nodes (e.g. Pixel Texture node or a new Background node such as ImageBackground or MovieBackground)<br />
<br />
Pros.<br />
- Open for using it in other purposes in the future (more extensible)<br />
<br />
Cons.<br />
- Relatively more complicated to write scenes and implement browsers<br />
<br />
<br />
Option 2. Implicit <br />
Defining a node that represents "background" or "texture" with user media (either from <br />
<br />
Pros.<br />
- Simpler from the content creator's perspective<br />
- Easier to implement and test, since there is less interaction with other nodes<br />
<br />
Cons.<br />
- Single-purpose node, which might not be usable for other purposes<br />
<br />
<br />
<br />
Option 3. Allowing both<br />
Pros.<br />
- Lets the user choose the option that meets their needs<br />
<br />
Cons.<br />
- Cost for browser developers to implement both<br />
<br />
<br />
<br />
<br />
* Selecting a device<br />
Reference: HTML5 getUserMedia() API<br />
<br />
2. Tracking (including support for general tracking devices)<br />
3. Camera calibration (viewpoints)<br />
4. Others (color-keying, depth occlusion)</div>Endoverthttps://www.web3d.org/wiki/index.php?title=X3D_and_Augmented_Reality&diff=5967X3D and Augmented Reality2012-07-08T19:13:54Z<p>Endovert: </p>
<hr />
<div>= Calendar: Meetings and Events =<br />
== Meetings ==<br />
Our twice-monthly teleconference for X3D and Augmented Reality is usually<br />
* Main meeting (AR WG only): 10:30-11:30 (Central European time) / 17:30-18:30 (Korea time) / 01:30-02:30 (Pacific time) on 3rd Wednesday - exact time subject to change.<br />
* Follow-up meeting (together with Korea Chapter): 17:00-18:00 (Pacific time)/20:00-21:00 (Eastern time) on 1st Wednesday, which is 09:00-10:00 (Korea time) on 1st Thursday.<br />
<br />
Our next public teleconference meeting is <br />
* Jul 18th (Wed) 17:00-18:00 (Pacific time)/20:00-21:00 (Eastern time)<br />
<br />
== Events ==<br />
* ISO/IEC JTC1 SC24 Plenary and WG6, WG9 meetings, August 20-24, 2012, Brussels, Belgium <br />
* [http://s2012.siggraph.org/attendees/birds-feather Augmented and Mixed Reality Web3D BOF at SIGGRAPH 2012, August 8, 2012, Los Angeles, CA, USA]<br />
* [http://web3d2012.org Web3D Conference, August 4-5, 2012, Los Angeles, CA, USA]<br />
* OGC TP/PC Meeting - AR Working Group - Sept 19-23, 2011, Boulder, CO<br />
* AR Standards Community Meeting - Oct 24, 25, 2011 - Basel, Switzerland<br />
* W3C TPAC Meeting W3C AR Community Group - Oct 31- Nov 4, Santa Clara, CA<br />
* ISO JTC Meeting - Nov 7 - 10, 2011 - San Diego, CA<br />
* [http://www.perey.com/ARStandards/third-international-ar-standards-meeting/ Third International AR Standards Meeting, June 15-17, Taichung, Taiwan]<br />
* [http://web3d2011.org/ Augmented/Mixed Reality Workshop at Web3D Conference, 10:50AM-12:10PM Local Time, June 21, Paris, France]<br />
* [http://www.siggraph.org/s2011/for_attendees/birds-feather Augmented and Mixed Reality Web3D BOF at SIGGRAPH 2011, August 2011, Vancouver, Canada]<br />
* SC24 Augmented and Mixed Reality Study Group Meeting @ SC24 Plenary and Working Group Meetings, August 21 2011, Rapid City, South Dakota, USA<br />
* Christine Perey AR Workshop, 23-25 October 2011, Basel Switzerland<br />
<br />
<br />
= Charter =<br />
<br />
== Overview ==<br />
The Augmented Reality (AR) Working Group focuses on utilizing and extending X3D capabilities to support augmented reality and mixed reality (MR) applications.<br />
<br />
''Discussion.'' These X3D AR discussions were initially held as part of a special interest group. Now that we have determined that sufficient interest exists to modify the X3D Specification, this effort is moving forward by forming the X3D AR Working Group.<br />
<br />
== Goals ==<br />
Planned goals of AR WG include:<br />
* Collect requirements and describe typical use cases for using X3D in AR/MR applications<br />
* Produce and propose X3D components for AR/MR scenes and applications<br />
* Produce sample AR/MR applications using X3D to demonstrate how this functionality can work correctly<br />
<br />
== Tasks ==<br />
* Investigate state-of-the-art technologies related to X3D and AR/MR to collect requirements and use cases<br />
** Archive and distribute collected requirements and use cases through AR WG wiki page<br />
* Hold regular meetings and workshops to motivate discussions and editing of AR/MR related X3D specification proposals<br />
** Regular meetings will be held through teleconferencing and workshops will be planned through regular meetings<br />
* Promote X3D in AR/MR field by developing and distributing promotional materials of AR applications based on X3D<br />
** Promotional materials include sample applications, video clips, documents, images distributed on the web<br />
<br />
== Deliverables and Timeline ==<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_AR_Requirements_and_Use_cases Use cases and requirements of X3D for AR/MR application] - August 2011<br />
* [http://web3d.org/x3d/wiki/index.php/Comparison_of_X3D_AR_Proposals Comparison of existing proposals] - March 2012<br />
* [http://web3d.org/x3d/wiki/index.php/Plans_for_Merging_X3D_AR_Proposals Merge X3D AR Proposals]<br />
* Proposed new/extended functions and nodes for X3D specification<br />
** once use cases and requirements are stated, we can compare existing and new proposals for X3D functionality<br />
* Define specification prose for new functionality and encodings<br />
* Sample AR/MR applications with X3D<br />
** these will be produced in support of each proposal<br />
<br />
== Participants ==<br />
* Anita Havele<br />
* Damon Hernandez<br />
* Don Brutzman<br />
* Gerard J. Kim<br />
* Gun Lee<br />
* Len Daly, Daly Realism<br />
* Myeongwon Lee<br />
* Oliver Neubauer<br />
* Sabine Webel<br />
* Timo Engelke<br />
* Yvonne Jung<br />
<br />
== Meetings ==<br />
Regular meetings are held twice-monthly through teleconference.<br />
Participation is open to everyone via the Web3D teleconference line. <br />
Non-members can [mailto:anita.havele@web3d.org?subject=AR%20teleconference%20request request access information] for this call, or [http://web3d.org/membership/join Join Web3D]!<br />
Meeting agenda and minutes are announced through the X3D WG mailing list.<br />
<br />
Discussions occur on the [mailto:x3d-public@web3d.org x3d-public@web3d.org mailing list]. If the email traffic becomes very busy then we can create a separate email list.<br />
Meeting minutes are also distributed on the X3D-Public mailing list and [http://web3d.org/pipermail/x3d-public_web3d.org/ archived online].<br />
<br />
<br />
[[Upcoming Meetings|#Meetings]]<br />
<br />
= History and Background Information =<br />
The Web3D Consortium formed a special interest group on AR initiatives in July 2009, which worked to help create the AR Technology Road Map.<br />
<br />
Several Web3D Consortium member projects showcase the feasibility of AR in X3D, particularly the open-source [http://www.x3dom.org X3DOM] framework produced by [http://www.igd.fhg.de/www/igd-a4 Fraunhofer IGD]. <br />
<br />
* X3DOM can serve as an out of the box, standards-based solution for AR developers.<br />
* X3D and X3DOM continue to improve scripting options for X3D with HTML5 and this has good promise for AR applications. <br />
* X3DOM is being considered for potential standardization in a Mobile and/or Augmented Reality (AR) profile for X3D.<br />
<br />
Our [http://www.web3d.kr Web3D Korea Chapter members] from ETRI are working on Mixed Reality visualization in X3D.<br />
<br />
* The Consortium has been working closely with the W3C HTML5 WG to align our standards for 3D visualization on the Web. <br />
<br />
Additional details are available at: <br />
<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_and_HTML5 X3D and HTML5]<br />
* [http://www.web3d.org/x3d/wiki/index.php/X3D_and_HTML_FAQ X3D and HTML FAQ]<br />
* [http://www.x3dom.org X3DOM]<br />
<br />
Web3D Consortium is also engaged with the [http://www.arstandards.org AR standards], [http://opengeospacial.org OGC], [http://www.khronos.org Khronos] and [http://w3c.org W3C] organizations for applying and adapting X3D.<br />
<br />
* Web3D Consortium will be starting an AR working group in March 2011 to develop a roadmap for AR standardization. We encourage your participation. <br />
<br />
* The technology discussions and meetings will be conducted publicly so that the widest possible comment and reaction can be considered from the Community. Feedback from this community will help X3D quickly and stably adopt new technologies to provide ongoing archival value for all 3D graphics.<br />
<br />
* Meanwhile Web3D Members still retain important membership rights of proposing significant new technology and considering patented technologies within a "safe haven" prior to public release.<br />
<br />
The [http://www.web3d.org/x3d/wiki/images/3/32/X3dProfilePossibilitiesMobileHtml5AR.2010June29.pdf X3D Mobile Profile slideset]<br />
from last summer's Mobile X3D ISO Workshop has also been linked; it shows how Mobile, HTML5 and possibly Augmented Reality (AR) components can be aligned together.<br />
<br />
Many new Web3D capabilities are becoming available. There has been no better time to discuss X3D technologies and also join the [http://web3d.org/membership/join Web3D Consortium]. Get involved in these early discussions to create the AR Technology Road Map. We look forward to your participation.<br />
<br />
= Existing Proposals =<br />
<br />
== Instant Reality ==<br />
<br />
Instant Reality is a Mixed Reality framework developed and maintained by Fraunhofer IGD. It uses X3D as its application description language and thus also provides corresponding nodes and concepts for developing AR/MR applications. The full node documentation can be found on [http://doc.instantreality.org/documentation/ IR-Docs].<br />
<br />
There also exist several tutorials on vision-based tracking with Instant Reality, which e.g. describe specific nodes like the IOSensor node for retrieving the camera streams and the tracking results of the vision subsystem, or which discuss the new PolygonBackground node for displaying the camera images behind the virtual objects as well as some useful camera extensions to the X3D Viewpoint node, etc.: [http://doc.instantreality.org/tutorial/ Tracking-Tutorial].<br />
<br />
In addition, some papers on AR and MR visualization were already published at the Web3D conferences. Here, e.g., occlusions, shadows and lighting in MR scenes were discussed in the context of X3D:<br />
* [http://www.web3d.org/x3d/wiki/images/6/63/07X3DforMR.pdf MR-Paper-2007]<br />
* [http://www.web3d.org/x3d/wiki/images/0/0d/07X3DforMR_slides.pdf MR-Paper-Slides]<br />
* [http://www.web3d.org/x3d/wiki/images/0/0a/08PRTforX3D.pdf MR-Paper-2008]<br />
Moreover, some further ideas on realistic MR rendering in X3D are outlined in chapter 6 (p. 163 ff.), especially section 6.4, of the following PhD thesis (by Y. Jung): [http://tuprints.ulb.tu-darmstadt.de/2489/ PDF].<br />
<br />
The screenshots below show several issues in MR visualization.<br />
From top left to bottom right: (a) real image of a room; (b) real scene augmented with virtual character (note that the character appears to be before the table); (c) augmentation with additional occlusion handling (note that the character still seems to float on the floor); (d) augmentation with occlusion and shadows (applied via differential rendering).<br />
<br />
[[image:Kaiser140.png|600px|MR visualization]]<br />
<br />
In the following, an example of achieving occlusion effects between real and virtual objects in AR/MR scenes is shown, given that the (real) 3D object for which occlusions should be handled already exists as a 3D model (given as a Shape in this example). Here, the invisible ghosting objects (denoting real scene geometry) are simply created by rendering them ''before'' the virtual objects (by setting the Appearance node's "sortKey" field to '-1') without writing any color values to the framebuffer (via the ColorMaskMode node), to initially stamp out the depth buffer.<br />
<br />
<Shape><br />
<Appearance sortKey='-1'><br />
<ColorMaskMode maskR='false' maskG='false' maskB='false' maskA='false'/><br />
</Appearance><br />
...<br />
</Shape><br />
<br />
To set the camera's image as the background, we use the aforementioned PolygonBackground node. By setting its "fixedImageSize" field, the aspect ratio of the image can be defined. Depending on how you want the background image to fit into the window, you need to set the mode field to 'VERTICAL' or 'HORIZONTAL'.<br />
<br />
<PolygonBackground fixedImageSize='640,480' mode='VERTICAL'><br />
<Appearance><br />
<PixelTexture2D DEF='tex'/><br />
</Appearance><br />
</PolygonBackground> <br />
<br />
As mentioned above, more on that can be found in the corresponding tutorials, e.g. [http://doc.instantreality.org/tutorial/marker-tracking/ here].<br />
<br />
== Korean Chapter ==<br />
<br />
The Korea Chapter has been keenly interested in the standardization of augmented reality in many aspects, including AR-based content. This is especially due to the recent worldwide appearance of mobile AR services and the realization (in both academia and industry) of a definite need for exchanging service content across different platforms. <br />
<br />
Three main proposals have been made within the Korean chapter, by: (1) Gerard Kim from Korea University (also representing KIST), (2) Gun A. Lee (formerly with ETRI, now with HITLabNZ), and (3) Woontack Woo of Gwangju Inst. of Science and Tech. <br />
* [http://dxp.korea.ac.kr/AR_standards/AR_standards.zip A zipped file containing various Korean proposals].<br />
* [http://dxp.korea.ac.kr/AR_standards/workshop-2011.pdf Gerry Kim's survey presented at the AR Standards Meeting in Taiwan (2011 Jun)].<br />
<br />
Here, we briefly describe each proposal and provide links to documents with more detailed descriptions. These short summaries also try to highlight their distinctions with regard to the other proposals, not in a critical sense, but as a way to suggest alternatives.<br />
<br />
(1) Gerry Kim's proposal can be highlighted by the following features:<br />
<br />
- Extension of existing X3D "sensors" and formalisms to represent physical objects serving as proxies for virtual objects<br />
<br />
- The physical objects and virtual objects are tied using the "routes" (e.g. virtual objects' parent coordinate system being set to that of the corresponding physical object).<br />
<br />
- Below is an example construct, a simple extension of the "VisibilitySensor" attached to a marker. The rough semantics would be to attach a sphere to a marker when it is visible. The visibility would be determined by the browser using a particular tracker. In this simple case, a simple marker description is given through the "Marker" node.<br />
<br />
<Scene><br />
<Group><br />
<Marker DEF='HIRO' enabled='TRUE' filename='C:\hiro.patt'/><br />
<VisibilitySensor DEF='Visibility' description='activate if seen' enabled='TRUE'/><br />
<Transform DEF='BALL'><br />
<Shape><br />
<Appearance><br />
<Material/><br />
</Appearance><br />
<Sphere/><br />
</Shape><br />
</Transform><br />
</Group><br />
<ROUTE fromNode='Visibility' fromField='visible' toNode='BALL' toField='visible' /> <br />
</Scene><br />
<br />
- Different types of sensors can be newly defined or old ones extended to describe various AR contents. These include proximity sensors, range sensors, etc.<br />
<br />
- Different physical object descriptions will be needed at the right level of abstraction (such as the "Marker" node in the above example). These include descriptions for image patches, 3D objects, GPS locations, natural features (e.g. points, lines), etc.<br />
<br />
(2) Gun Lee's proposal<br />
<br />
- Extension of the TextureBackground and MovieTexture nodes to handle a video background for video see-through AR implementations.<br />
<br />
- Introduction of a node called "LiveCam" representing the video capture or vision-based sensing in a video see-through AR implementation.<br />
<br />
- The video background would be routed from the "LiveCam" node and be supplied with the video image and/or camera parameters.<br />
<br />
- Extension of the virtual viewpoint to accommodate more detailed camera parameters, to be set according to the parameters of the "LiveCam". <br />
<br />
[http://web3d.org/x3d/wiki/images/7/7f/20101216-MR-Web3D-SiggraphAsia-TeckTalk-GunLee.pdf Slides from Web3D Tech Talk at SIGGRAPH Asia 2010]<br />
<br />
(3) Woo's proposal<br />
<br />
- Woo proposes to use XML, as meta descriptors, together with existing standards (e.g. X3D, Collada, etc.) for describing the augmentation information itself.<br />
<br />
- As for the context (condition) for augmentation, a clear specification of "5W" approach is proposed: namely who, when, where, what and how.<br />
<br />
- "who" part specifies the owner/author of the contents.<br />
<br />
- "when" part specifies content creation time.<br />
<br />
- "where" part specifies the location of the physical object to which an augmentation is attached.<br />
<br />
- "what" part specifies the what is to be augmented content (augmentation information).<br />
<br />
- "how" part specifies dynamic part (behavior) of the content. <br />
<br />
== Developing X3D AR Specification ==<br />
<br />
The working group has reviewed the existing proposals and have summarized in [[Comparison of X3D AR Proposals]].<br />
<br />
Based on this comparison, the working group is now preparing [[Plans for Merging X3D AR Proposals]].<br />
<br />
== X3D Earth Working Group ==<br />
<br />
The X3D Earth Working Group has expanded and refined a proposal by Dr. Myeong Won Lee for a new <br />
[http://www.web3d.org/membership/login/memberwiki/index.php/X3D_v3.3_Specification_Changes#GpsSensor_node GpsSensor node].<br />
Due to several overlapping technical issues, the group has asked to collaborate with the Augmented Reality group on the final design for this node.<br />
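<br />
For context, a GPS-driven sensor might be used along the following lines. This is a speculative sketch only; the actual GpsSensor field names are defined in the linked specification-change proposal and may differ.<br />
<br />
<Scene><br />
  <!-- Speculative usage: the GPS position drives a geolocated annotation --><br />
  <GpsSensor DEF='GPS' enabled='true'/><br />
  <Transform DEF='ANNOTATION'><br />
    <!-- annotation geometry omitted --><br />
  </Transform><br />
  <ROUTE fromNode='GPS' fromField='position_changed' toNode='ANNOTATION' toField='translation'/><br />
</Scene><br />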
<br />
= Interoperability with other AR and Web Standards =<br />
Several discussions at the Web3D Conference, SIGGRAPH, and ISO SC24 meetings about recent AR standards developments continue to improve and refine our strategy on interoperability with other standards.<br />
<br />
Establishing common implementations and examples is important to demonstrate successful interoperable capabilities for the specification. Continued collaboration with, and outreach to, other AR standards groups is essential. Our common goal remains maximum interoperability with all Web standards.<br />
<br />
*W3C Augmented Reality Community Group <br />
*OGC ARML WG <br />
*AR Standards Group <br />
*ISO-SC24<br />
*KHRONOS <br />
<br />
Following recent work in the Web3D AR WG, and the realization that the current state of AR content models is not comprehensive, the ISO standards committee SC24, which administers the review of X3D as an ISO standard, has established a new Working Group for Augmented and Mixed Reality.<br />
<br />
This group conducted a survey of the current state of the art in AR/MR standardization. Here is a summary of the main findings:<br />
<br />
*A need for clear and precise definitions of terms<br />
*A need for a reference architecture with the following features:<br />
**Separation of the content from the browser/player/application<br />
**Extensible and general enough to accommodate future technologies (e.g. display devices, tracking algorithms, sensors)<br />
**Defined at the right abstraction level to be platform/vendor independent<br />
**Clear interface definitions among the subsystems<br />
*A proposal to develop a protocol between the AR/MR engine and the object tracking/recognition subsystem, independent of the algorithms used<br />
**Reuse of existing standards as much as possible (see below)<br />
*A content model, based on the underlying reference architecture, that is comprehensive (e.g. scene/world model, interaction, rich augmentation methods and styling options, representation of extensive types of physical real-world objects)<br />
*A need for a rich and sophisticated scene/world model<br />
**An X3D-based approach seems promising for providing a sophisticated world model (scene-graph structure) and many media objects for augmentation; a proposal to extend the X3D standard<br />
*A need for representation of sensors and physical objects<br />
**A proposal to merge the abstraction of physical objects and separate sensors into "objects with virtual sensors" and to extend the virtual sensors of X3D<br />
*A need for sophisticated representation of "places of interest" (POI)<br />
**A proposal to use and extend OGC/KML standards<br />
*A need for extensive styling of 2D/3D information<br />
**A proposal to use and extend HTML5<br />
*A need to abstract AR/MR interaction behaviors<br />
**Complicated behaviors to be handled by scripts and a DOM-like approach<br />
**A proposal to extend X3DOM for this purpose<br />
*A need for other supporting functionality:<br />
**Inclusion and specification of real-world capture cameras/sensors<br />
**Moving texture/background functionality for video see-through AR<br />
**Handling of depth data and occlusion effects<br />
**Specification of virtual/real light sources and rendering methods<br />
Based on these findings, the group proposes to derive an AR content model as an extension of a virtual world, with provisions for representing physically sensed objects. The provisions refer to ways to specify the physical augmentation "targets" without sensor-specific information, and ways to (intuitively) tie or associate them with their virtual counterparts. This will result in vendor independence, ease of use, and support for extensibility.<br />
<br />
The ISO AR standardization proposal recommends:<br />
<br />
*Merging HTML and X3D (X3DOM, declarative 3D) for abstract content components for 2D and 3D augmentation.<br />
*Using OGC KML for describing POIs and sensed physical objects.<br />
*A scripting approach for non-standard complex content behaviors and for the use of remote cloud services.<br />
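<br />
As a small illustration of the HTML-plus-X3D direction, the following minimal X3DOM page embeds a declarative X3D scene directly in the HTML DOM. This is a generic X3DOM example, not AR-specific; the script and stylesheet URLs are the commonly used x3dom.org distribution paths.<br />
<br />
<html><br />
  <head><br />
    <script src='https://www.x3dom.org/download/x3dom.js'></script><br />
    <link rel='stylesheet' href='https://www.x3dom.org/download/x3dom.css'/><br />
  </head><br />
  <body><br />
    <!-- Declarative X3D scene embedded directly in the HTML page --><br />
    <x3d width='400px' height='300px'><br />
      <scene><br />
        <shape><br />
          <appearance><material diffuseColor='1 0 0'></material></appearance><br />
          <sphere></sphere><br />
        </shape><br />
      </scene><br />
    </x3d><br />
  </body><br />
</html><br />
<br />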
Current technical work within the Web3D AR WG includes harmonizing these proposals to best fit AR capabilities into X3D scenes. This work will be formally written up as the Augmented Reality (AR) Component for the X3D Specification.<br />
<br />
<br />
= Participation and Liaisons =<br />
<br />
* Christine Perey's group on AR Standardization<br />
* Other partnerships can also be considered as appropriate.<br />
<br />
Of interest is that the Web3D Consortium Intellectual Property Rights (IPR) Policy insists on open, royalty-free (RF) specifications. These coexist effectively with the Web Architecture and many different business models.<br />
<br />
= Augmented Reality Roadmap for X3D =<br />
<br />
The [[Augmented Reality Roadmap for X3D]] is a description document charting shared strategies and our way forward. It is currently under discussion.</div>