Data Property: 'X3DSensorNode field description is implemented by multiple nodes.'
Annotations (11)
- rdfs:label "Author-provided prose that describes intended purpose of the node."
- rdfs:label "Author-provided prose that describes intended purpose of the url asset."
- rdfs:label "Author-provided prose that describes intended purpose of this node."
- rdfs:label "Text description or navigation hint to identify this ViewpointGroup."
- rdfs:label "X3DScriptNode field description is implemented by multiple nodes."
- rdfs:label "X3DSensorNode field description is implemented by multiple nodes."
- rdfs:label "X3DSoundNode field description is implemented by multiple nodes."
- rdfs:label "X3DTextureNode field description is implemented by multiple nodes."
- rdfs:label "X3DTextureProjectorNode field description is implemented by multiple nodes."
- rdfs:label "X3DTimeDependentNode field description is implemented by multiple nodes."
- rdfs:label "X3DViewpointNode field description is implemented by multiple nodes."
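The multiple rdfs:label annotations above suggest a property declaration along the following lines. This is a minimal RDF/XML sketch only: the property IRI, namespace, and the particular domains shown are assumptions for illustration, not the actual X3D Ontology serialization.

```xml
<!-- Hypothetical sketch: property IRI, namespace, and domains are assumed -->
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
         xmlns:owl="http://www.w3.org/2002/07/owl#">
  <owl:DatatypeProperty rdf:about="#description">
    <!-- Each implementing node family contributes its own label -->
    <rdfs:label>Text description or navigation hint to identify this ViewpointGroup.</rdfs:label>
    <rdfs:label>X3DSensorNode field description is implemented by multiple nodes.</rdfs:label>
    <!-- One rdfs:domain per concrete node class, hence the long Domains list -->
    <rdfs:domain rdf:resource="#TouchSensor"/>
    <rdfs:domain rdf:resource="#Viewpoint"/>
  </owl:DatatypeProperty>
</rdf:RDF>
```

A pattern like this explains the listing shape: one data property, many annotation labels, and one domain entry per concrete node class that implements the field.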
Domains (93)
- 'AcousticProperties specifies the interaction of sound waves with characteristics of geometric objects in the scene.'
- 'An HAnimMotion node supports discrete frame-by-frame playback for H-Anim motion data animation.'
- 'An HAnimSite node serves three purposes: (a) define an end effector location which can be used by an inverse kinematics system, (b) define an attachment point for accessories such as jewelry and clothing, and (c) define a location for a Viewpoint virtual camera in the reference frame of an HAnimSegment (such as a view through the eyes of the humanoid for use in multi-user worlds).'
- 'Analyser provides real-time frequency and time-domain analysis information, without any change to the input.'
- 'AudioClip provides audio data used by parent Sound nodes.'
- 'AudioDestination node represents the final audio destination and is what the user ultimately hears, typically from the speakers of the user's device.'
- 'Base type for all drag-style pointing device sensors.'
- 'Base type for all node types that specify texture projector nodes, which provide a form of lighting.'
- 'Base type for all nodes that specify 3D sources for texture images.'
- 'Base type for all nodes that specify cubic environment map sources for texture images.'
- 'Base type for all nodes which specify 2D sources for texture images.'
- 'Base type for all nodes which specify sources for texture images.'
- 'Base type for all pointing device sensors.'
- 'Base type for all sensor node types that operate using key devices.'
- 'Base type for all sensors.'
- 'Base type for all sound destination nodes, which represent the final destination of an audio signal and are what the user can ultimately hear.'
- 'Base type for all sound destination nodes, which represent the final destination of an audio signal and are what the user can ultimately hear.'
- 'Base type for all sound nodes.'
- 'Base type for all sound processing nodes, which are used to enhance audio with filtering, delaying, changing gain, etc.'
- 'Base type for all texture node types that define a single texture. A single texture can be used to influence a parameter of various material nodes in the Shape component, and it can be a child of MultiTexture.'
- 'Base type for all touch-style pointing device sensors.'
- 'Base type for scripting nodes (but not shader nodes).'
- 'Base type for the environmental sensor nodes ProximitySensor, TransformSensor and VisibilitySensor.'
- 'Base type from which all time-dependent nodes are derived.'
- 'Base type for all sensors that generate events based on network activity.'
- 'BiquadFilter node is an AudioNode processor implementing common low-order filters.'
- 'BufferAudioSource node represents a memory-resident audio asset that can contain one or more channels.'
- 'ChannelMerger unites different input channels into a single output channel.'
- 'ChannelSelector selects a single channel output from all input channels.'
- 'ChannelSplitter separates the different channels of a single audio source into a set of monophonic output channels.'
- 'Collision detects camera-to-object contact using current view and NavigationInfo avatarSize.'
- 'CollisionCollection holds a collection of objects that can be managed as a single entity for resolution of inter-object collisions.'
- 'CollisionSensor generates collision-detection events.'
- 'ComposedCubeMapTexture is a texture node that defines a cubic environment map source as an explicit set of images drawn from individual 2D texture nodes.'
- 'ComposedTexture3D defines a 3D image-based texture map as a collection of 2D texture sources at various depths.'
- 'Convolver performs a linear convolution on a given AudioBuffer, often used to achieve a reverberation effect.'
- 'CylinderSensor converts pointer motion (for example, a mouse or wand) into rotation values using an invisible cylinder aligned with local Y-axis.'
- 'Delay causes a time delay between the arrival of input data and subsequent propagation to the output.'
- 'DynamicsCompressor node implements a dynamics compression effect, lowering the volume of the loudest parts of the signal and raising the volume of the softest parts.'
- 'EspduTransform is a networked Transform node that can contain most nodes.'
- 'GeneratedCubeMapTexture is a texture node that defines a cubic environment map that sources its data from internally generated images.'
- 'GeoProximitySensor generates events when the viewer enters, exits and moves within a region of space (defined by a box).'
- 'GeoTouchSensor returns geographic coordinates for the object being selected.'
- 'GeoViewpoint specifies viewpoints using geographic coordinates.'
- 'HAnimDisplacer nodes alter the shape of coordinate-based geometry within parent HAnimJoint or HAnimSegment nodes.'
- 'HAnimJoint node can represent each joint in a body.'
- 'HAnimSegment node contains Shape geometry for each body segment, providing a visual representation of the skeleton segment.'
- 'If a non-uniform scale is applied to the pick sensor, correct results may require level 3 support.'
- 'ImageCubeMapTexture is a texture node that defines a cubic environment map source as a single file format that contains multiple images, one for each side.'
- 'ImageTexture maps a 2D-image file onto a geometric shape.'
- 'ImageTexture3D defines a 3D image-based texture map by specifying a single image file that contains complete 3D data.'
- 'KeySensor generates events as the user presses keys on the keyboard.'
- 'LinePickSensor uses one or more pickingGeometry line segments to compute intersections with pickTarget shapes.'
- 'ListenerPointSource node represents the position and orientation of a person listening to virtual sound in the audio scene, and provides single or multiple sound channels as output.'
- 'LoadSensor generates events as watchList child nodes are either loaded or fail to load.'
- 'MicrophoneSource captures input from a physical microphone in the real world.'
- 'MovieTexture applies a 2D movie image to surface geometry, or provides audio for a Sound node.'
- 'MultiTexture applies several individual textures to a single geometry node, enabling a variety of visual effects that include light mapping and environment mapping.'
- 'Node type X3DViewpointNode defines a specific location in the local coordinate system from which the user may view the scene, and also defines a viewpoint binding stack.'
- 'Nodes implementing X3DSoundSourceNode provide signal inputs to the audio graph.'
- 'OrthoViewpoint provides an orthographic perspective-free view of a scene from a specific location and direction.'
- 'OscillatorSource node represents an audio source generating a periodic waveform, providing a constant tone.'
- 'PeriodicWave defines a periodic waveform that can be used to shape the output of an Oscillator.'
- 'PickableGroup is a Grouping node that can contain most nodes.'
- 'PixelTexture creates a 2D-image texture map using a numeric array of pixel values.'
- 'PixelTexture3D defines a 3D image-based texture map as an explicit array of pixel values (image field).'
- 'PlaneSensor converts pointing device motion into 2D translation parallel to the local Z=0 plane.'
- 'PointPickSensor tests one or more pickingGeometry points in space as lying inside the provided pickTarget geometry.'
- 'ProximitySensor generates events when the viewer enters, exits and moves within a region of space (defined by a box).'
- 'ReceiverPdu is a networked Protocol Data Unit (PDU) information node that transmits the state of radio frequency (RF) receivers modeled in a simulation.'
- 'Script contains author-programmed event behaviors for a scene.'
- 'SignalPdu is a networked Protocol Data Unit (PDU) information node that communicates the transmission of voice, audio or other data modeled in a simulation.'
- 'SphereSensor converts pointing device motion into a spherical rotation about the origin of the local coordinate system.'
- 'StreamAudioDestination node represents the final audio destination via a media stream.'
- 'StreamAudioSource operates as an audio source whose media is received from a MediaStream obtained using the WebRTC or Media Capture and Streams APIs.'
- 'StringSensor generates events as the user presses keys on the keyboard.'
- 'TextureProjector is similar to a light that projects a texture into the scene, illuminating geometry that intersects the perspective projection volume.'
- 'TextureProjectorParallel is similar to a light that projects a texture into the scene, illuminating geometry that intersects the parallel projection volume.'
- 'The Gain node amplifies or attenuates the input signal.'
- 'The HAnimHumanoid node is used to: (a) store references to the joints, segments, sites, skeleton, optional skin, and fixed viewpoints, (b) serve as a container for the entire humanoid, (c) provide a convenient way of moving the humanoid through its environment, and (d) store human-readable metadata such as name, version, author, copyright, age, gender and other information.'
- 'The Sound node controls the 3D spatialization of sound playback by a child AudioClip or MovieTexture node.'
- 'The SpatialSound node controls the 3D spatialization of sound playback by a child AudioClip or MovieTexture node.'
- 'The X3DPickSensorNode abstract node type is the base node type that represents the lowest common denominator of picking capabilities.'
- 'TimeSensor continuously generates events as time passes.'
- 'TouchSensor tracks location and state of the pointing device, detecting when a user points at or selects (activates) geometry.'
- 'TransformSensor generates output events when its targetObject enters, exits, and moves within a region in space (defined by a box).'
- 'TransmitterPdu is a networked Protocol Data Unit (PDU) information node that provides detailed information about a radio transmitter modeled in a simulation.'
- 'Viewpoint provides a specific location and direction where the user may view the scene.'
- 'ViewpointGroup can contain Viewpoint, OrthoViewpoint, GeoViewpoint and other ViewpointGroup nodes for better user-navigation support with a shared description on the viewpoint list.'
- 'VisibilitySensor detects when user can see a specific object or region as they navigate the world.'
- 'VolumePickSensor tests picking intersections using the pickingGeometry against the pickTarget geometry volume.'
- 'WaveShaper node represents a nonlinear distorter that applies a wave-shaping distortion curve to the signal.'
- 'X3DUrlObject indicates that a node has content loaded from a Uniform Resource Locator (URL) and can be tracked via a LoadSensor. Such child nodes have containerField="children" to indicate their relationship to the parent LoadSensor node.'
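For context, the field documented above appears as the description attribute on the corresponding nodes in an X3D scene. A minimal sketch (node and field names follow the X3D specification; the scene content itself is illustrative only):

```xml
<X3D version="4.0" profile="Immersive">
  <Scene>
    <!-- description identifies this viewpoint on the browser's viewpoint list -->
    <Viewpoint description="Entry view" position="0 1.6 10"/>
    <Shape>
      <Sphere/>
    </Shape>
    <!-- description documents the author's intended purpose for this sensor -->
    <TouchSensor description="Select the sphere to activate it"/>
  </Scene>
</X3D>
```

The same single attribute name thus serves viewpoints, sensors, sound nodes, textures, and the other domain classes listed, which is why the ontology models it as one data property with many domains.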