[x3d-public] [AI] On SAI Issues for MCCF
cbullard at hiwaay.net
Sun Apr 19 10:07:33 PDT 2026
Thank you, Joe and John. We just tested John's fix and that made the
SAI work. Fantastic.
Joe, I will pass on your suggestions to Claude. We have a backlog of
about 30 items to fix before we have 2.2 working. The GitHub is updated
for John's fix and some changes to the systems manual, users guide and
mathematical foundations per items provided in the code review.
We probably won't get to H-Anim RSN. But the AI surprises me with how
rapidly we can adapt. The blog will revert to bloggy topics and the
GitHub becomes the dashboard.
The LLMs are like very smart industrious children. They tease me and
make me laugh. That's a bonus. Affective layers are affective. :)
len
On 2026-04-17 11:59 pm, Joe D Williams wrote:
>> Add animation behaviors to that, even skills, and you will get a nice
>> cheap simulation model of humans in action under pressure. That is
>> where creative X3D lives. I hope to use H-Anim.
>
> Hope will be enough, plus some insights, since HAnim is guaranteed
> extensible, so bring it on.
> Intelligence and really any implementations of pressures and skills can
> be applied from
> external, at the Humanoid container, the skeleton, skin, sensors, and
> accessories,
> plus metadata structures anywhere. Import/Export to the max.
> The You can just be an ID (optional) and a viewpoint, or a complex
> Human<=>Humanoid
> relationship as complete as you can afford.
> Up to and including the Humanoid Digital Twin stimulus-response
> networks producing the
> augmented physical environment including the connected author.
>
> Legacy has created basic World environments and interactors, then
> added events to monitor and make the scene accessible and responsive.
> Metaverse presents real and virtual event graphs as base for
> creating and interacting with environments and related assets.
>
> Structures of physical and virtual services, your metaversal You,
> create, intercept, and transmit events between the
> host environment and the reality of the physical author
> and audience.
> The limit is set by available bandwidth, quality and quantity
> of sensor interfaces, server and environment compute,
> local data and food stores, power, lights, sanitation
> and best personal safety as budget will allow.
>
> My thinking is that the base You can consist of credentials,
> metadata, data, user code, plus other interfaces and
> sets of interfaces, some specially tuned for AI visit or habitation.
>
> So, in the abstract, it is presenting a set or sets of nodes and
> fields that can be implemented using existing Web3D X3D HAnim Humanoid
> structures, or compatible new structures.
>
> To get standards-track physics into HAnim seems like mostly the same
> idea: get some nodes extended or new ones added. It seems natural to
> relate action under pressure, with reactions based on skills, to HAnim.
> There is a standards-track taxonomy developing for some specific poses,
> postures, and skeletal actions, a vocabulary for facial expressions is
> (still) in work, and advancements in accessibility are needed.
>
> I mention a special HAnim interface for AI, because
> maybe you don't want any AI working in/with your You
> to have access to your entire hanim You database.
> Thanks,
> Joe
>
> -----Original Message-----
> From: John Carlson via AI <ai at web3d.org>
> Sent: Apr 17, 2026 6:56 PM
> To: Interaction of AI with Web3D Technologies <ai at web3d.org>
> Cc: John Carlson <yottzumm at gmail.com>, <cbullard at hiwaay.net>
> Subject: Re: [AI] On SAI Issues for MCCF
>
> Hi Len, Just a couple of notes: I tried facing off Claude vs. Gemini on
> a PowerShell script to reveal which hosts were blocked on my system,
> and I got 9 steps in, with each of them revealing problems in the
> other’s code, and I ran out of tokens. I hope you have a supply
> of $$ for tokens. Local LLMs are looking attractive, and building or
> using agent code like pi is attractive, as long as I don’t give an
> LLM a real shell (apparently there are sandboxed shells and machines).
> John
> On Fri, Apr 17, 2026 at 5:53 PM lee bullard via AI <ai at web3d.org> wrote:
> Hi John:
>
> I'll have a look at the video. Being text bots, they don't watch videos
> but I will.
>
> A little long here, but I think I should be clear.
>
> At this point I can't say what to include without driving them down
> rabbit trails. Hallucinations are simply semantic attractor dynamics in
> action (Yes, SAD. It fits). Keep in mind that high temperature and
> cross-domain prompts (and adjectives and adverbs) create wild,
> untraveled trajectories that return manifolds which are ... improbable.
> That can be a good thing when being original or creative. Be specific
> and cool to
> return specific results. Be noisy and hot to return novelty. Using
> XML for structured prompts is a brilliant way to get specific results
> and to enable LLMs to work together. I tested that first. All
> documented on the blog. The same dynamics as working groups apply.
> Discuss and noodle, vote and argue, pick one. Implement. Test. Rinse.
> Repeat. Publish.
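A minimal sketch of the XML-structured-prompt idea, using only Python's standard library. The tag names and example task text here are illustrative assumptions, not MCCF's actual schema; the point is that a machine-parseable envelope keeps the prompt specific and lets one LLM hand structured work to another:

```python
# Sketch: compose an XML-structured prompt so an LLM receives explicit,
# machine-parseable instructions. Tag names are illustrative only.
import xml.etree.ElementTree as ET

def build_prompt(task: str, constraints: list[str], output_format: str) -> str:
    """Return an XML string with the task, its constraints, and the
    expected output format as separate, unambiguous elements."""
    root = ET.Element("prompt")
    ET.SubElement(root, "task").text = task
    cons = ET.SubElement(root, "constraints")
    for c in constraints:
        ET.SubElement(cons, "constraint").text = c
    ET.SubElement(root, "output_format").text = output_format
    return ET.tostring(root, encoding="unicode")

prompt = build_prompt(
    "Summarize the X3D SAI changes",
    ["cite file names", "no speculation"],
    "bulleted list",
)
print(prompt)
```

Because the result is well-formed XML, a second LLM (or a plain script) can parse the same structure back out instead of guessing at free-text intent.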
>
> To get to MCCF, there was a long period of sessions with ChatGPT I
> documented on my blog. Some different ideas resonated well enough to
> try them, particularly from quantum field theory and Boltzmann
> distributions. I wanted to explore the idea that semantic convergence
> is a form of wave collapse. A good metaphor just as "information
> ecosystem" was when I wrote that paper in the late 90s. The metaphor
> works. If you want to consider routes as entanglement, it works. But
> Dirac equations, if implemented completely, don't. Boltzmann works.
> Although we did eventually need a continuous function. So we have a
> Hamiltonian in there for the emotional vector: essentially, an
> emotional agent evolving under constraints. Add animation behaviors to
> that, even
> skills, and you will get a nice cheap simulation model of humans in
> action under pressure. That is where creative X3D lives. I hope to use
> H-Anim.
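The Boltzmann framing above has a standard form; this is the generic distribution over states, not necessarily MCCF's exact formulation. For a semantic state $s$ with energy $E(s)$ at temperature $T$:

```latex
p(s) = \frac{e^{-E(s)/T}}{\sum_{s'} e^{-E(s')/T}}
```

High $T$ flattens $p(s)$ toward uniform, so improbable, novel trajectories gain mass; low $T$ concentrates mass on low-energy states, giving specific, predictable results. That is the same "cool for specifics, hot for novelty" rule of thumb stated earlier in the thread.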
>
> Just as HumanML was originally an idea for X3D, I decided to come back
> to the barn for the visualization. Should work if the runtime API
> issues can be worked. Thanks ever so much for the pointers. Claude
> liked those and we'll test them now that V2.1 is on GitHub. I don't
> know if the Castle engine is better; if open standards still hold, it
> should work. Someone can test that. I want to hang with X3D as
> long as possible. The problems of long-life-cycle content costs have
> only gotten worse. The death of the old VRML browsers was tragic, and
> what is happening to Meta reinforces that. Parisi has some Medium posts
> worth reading. Will Unity survive? I don't know. I am still a
> metaverse skeptic. 3D is a media type. Full stop.
>
> Originally I used my blog as the blackboard and passed URLs among them
> to share state. Clunky and old school but it works. Now I will use the
> GitHub repo. Live and learn.
>
> I picked up on the LLMs because image and video generation were
> directly
> useful for making song videos for my YouTube channel. Then I read some
> articles about LLMs being flirty. I thought, hmm, someone did create
> emotional systems similar to HumanML and decided to have a look at
> ChatGPT.
>
> I had it create programmed instruction (Skinner) courses for affective
> layering and went from there. ChatGPT asked about the HumanML
> references, looked them up and was amazed. It was the right idea and I
> was asked to revisit it. I said no, water under the bridge. I suspect
> someone will revisit schemas for emotional systems. MCCF can contribute
> to that. The protos can contribute to the X3D libraries. If so, que
> bueno. If not, I had fun.
>
> I was working on a novel and ChatGPT was helpful with reviews. It wants
> to write it, but just as I don't use generators for my music, I don't
> let it write my fiction. That takes the fun of the experience away, and
> that is a looming issue for AI art in general. I LOVED my time in the
> studio with my mates and if one is to become room hard (able to perform
> whatever happens), one needs that experience. FWIW.
>
> I suspect programmers are having a hard time with AI generated code for
> similar reasons. I don't. I'm a good analyst and my markup chops were
> superior but I am a lousy programmer as anyone from the VRML days can
> tell you who had to coach me endlessly. I should have been shot.
> Instead I took what I knew about stories and we created IrishSpace on
> top of prototype code and hit one out of the park. Again, wonderful
> experience.
>
> But running code is running code regardless of how it was made. The
> same for music. It's a matter of how you want to do things and what you
> enjoy. So perhaps your fear is misplaced, perhaps not, but the only
> way to know is execution. There is no way today to get around the need
> to organize, direct, implement, and test. LLMs remove bottlenecks but
> not design discipline. Part of my investigation was to test whether the
> LLMs converge on solutions and whether their different personae create
> conflicts.
> Yes and a little. But the results so far are awesome. And I don't find
> myself sitting at a desk managing life among the mammals. That got very
> old.
>
> I'm old, I'm ill and this is how I have fun. Like my songs that I put
> on YouTube, there is no fame or money in it and that's fine. I never
> wanted fame. It's a box that gets smaller as the rooms get bigger. It's
> toxic and surprisingly dangerous. Money? We're not rich but we aren't
> hurting.
>
> MCCF may be a solution to certain challenges in evaluation and
> alignment of LLM personae. If so, great. If not, I will get a
> simulation theater, which was the original purpose for HumanML. Then my
> novel and my music have a new media container just as I did with
> IrishSpace, The River of Life and other odds and ends. For me, it is
> not about ground truth or the secrets of the universe. It is art. Full
> stop. A long long trek from the KateWorlds but worth it. And I don't
> ever have to work for the MIC again. THAT became ghastly.
>
> Test the code Claude provides. If it works for you, use it. If not,
> toss it. Treat the LLMs like staff; apply your systems engineering
> skills. Have them do code reviews. Just remember the fog of history is
> a real problem. They are good with domains that have densely populated
> data. When the data is scarce, they improvise. X3D is not dead but
> there is very little outside the W3DC written about it so your mileage
> will vary.
>
> AI should remove bottlenecks, but test, test, test. Always remember Apollo
> 6. The first flight does not shake out all the bugs.
>
> cheers,
> len
>
>> One thing you may want to try for your panel is to look at universe
>> simulation based on the idea of exacting cardinals and ultra exacting
>> cardinals.
>
>> https://youtu.be/pzF23qGA4Pw?si=LYbxGYy3L4C7d77K
>
>
>> I’ve got Python from claude.ai, but I’m afraid to
>> run it.
>
>> John
>
> --
> AI mailing list
> AI at web3d.org
> http://web3d.org/mailman/listinfo/ai_web3d.org
More information about the x3d-public
mailing list