[x3d-public] Text-to-mesh

John Carlson yottzumm at gmail.com
Fri Oct 7 17:42:30 PDT 2022


Joe,

Thinking a bit about Interactive Self-Help now (remember Interactive
Fiction like Inform and Zork?).  We have many primitives now.  HAnim covers
the body parts, but what are the mind parts, spiritual parts, and soul
parts?  How does one “nourish” parts, or, as the Buddhists say, “water the
seeds you want to grow”?

So the idea is that experts could sit down and design the “plugins” for
self-help.  I don’t really like to bring up Mindvalley, but their system
seems to be working.

Ultimately, with GPT-3, one could type in a sentence, and a self-help
chapter would be generated by the system.

John

On Fri, Oct 7, 2022 at 7:01 PM Joseph D Williams <joedwil at earthlink.net>
wrote:

>
>    - Graph Convolution Networks
>
>
>
> Hi John,
>
> Well, looking at ways of organizing paths and data in a live repository
> when dealing with data, discovery, knowledge, and perhaps wisdom,
>
> along with descriptions of data and knowledge,
>
> using representations of data, finding and showing relationships between
> elements and collections of data,
>
> having dynamic, orderly, findable, predictable interactions with the
> collection,
>
> with obvious, appropriate, planned, and whimsical navigation within and
> outside of the current environment without getting lost,
>
> all while controlling a versatile, extensible graphical structure that is
> live on the inside and awake to the outside,
>
> then what better than the X3D realtime, anytime scene graph and the
> internal/external SAI?
>
>
>
> The key is that you could do all basic functionality with the X3D scene
> graph using nothing but metadata.
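>
> A minimal sketch of that idea, using the standard X3D metadata nodes
> (a MetadataSet holding MetadataString children, attached to a node
> through containerField="metadata"); the field names and values below are
> made up purely for illustration:
>
> ```xml
> <X3D profile="Interchange" version="4.0">
>   <Scene>
>     <Shape>
>       <!-- illustrative names only; any application vocabulary works -->
>       <MetadataSet containerField="metadata" name="repositoryEntry">
>         <MetadataString name="topic" value='"graph navigation"'/>
>         <MetadataString name="relatesTo" value='"conceptNode42"'/>
>       </MetadataSet>
>       <Box/>
>     </Shape>
>   </Scene>
> </X3D>
> ```
>
> A browser or SAI script could then walk the scene graph and read these
> fields directly, with no external database at all.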
>
> Active or passive multisensory multimedia, in a way, are just add-in
> inhabitants for our beloved DAG.
>
> How do you improve this with AI?
>
> Teach the AI to help you input, output, add, archive, structure,
> document, navigate, and interact with your current dynamic knowledge
> repository and other networks and graphs.
>
>
>
> It really is part of the fun with x3d,
>
> Joe
>
>
>
>
>
>
>
> *From: *John Carlson <yottzumm at gmail.com>
> *Sent: *Saturday, October 1, 2022 9:29 PM
> *To: *Joe D Williams <joedwil at earthlink.net>; X3D Graphics public mailing
> list <x3d-public at web3d.org>
> *Subject: *Re: Text-to-mesh
>
>
>
> Ok.
>
>
>
> I imagine some perhaps-future tool where a prompt or description is given
> and a game/Unreal/Unity world/universe/metaverse is produced.  Can we do
> this with VRML or XML?  I understand that XR may not initially be supported.
>
>
>
> Originally, in 1986, I envisioned a description of games rules being
> converted to a game, with something like Cyc.
>
>
>
> Eventually, I even wrote a crazy8s game using OpenCyc.  The product was
> way too slow.
>
>
>
> I’ve also envisioned using version space algebra, EGGG, or Ludii to do
> this.  I even wrote a card tabletop to record plays.  I tried to find
> commonalities between moves, but no supercomputer was available.
>
>
>
> I guess I want to have an AI cache everything needed for the game, so
> interaction can be really fast.
>
>
>
> Are the Web3D Consortium Standards/Metaverse Standards up to the task?
>
>
>
> How do game mechanics link up with rendering?  Michalis?
>
>
>
> John
>
>
>
> On Sat, Oct 1, 2022 at 10:34 PM John Carlson <yottzumm at gmail.com> wrote:
>
> Even semantic metadata to image or video would be a start.
> I’m not sure what’s possible with Graph Convolution Networks (GCNs).
>
>
>
> John
>
>
>
> On Sat, Oct 1, 2022 at 9:17 PM John Carlson <yottzumm at gmail.com> wrote:
>
> I realize image-to-3D and video-to-3D may be possible already.
>
>
>
> John
>
>
>
> On Sat, Oct 1, 2022 at 9:05 PM John Carlson <yottzumm at gmail.com> wrote:
>
> Now, with prompt engineering, we can do text-to-image and text-to-video.
> How about text(description)-to-mesh, text(description)-to-shape,
> text(description)-to-scene, or even text(description)-to-world?
>
>
>
> Can semantics help this?
>
>
>
> John
>
>
>