I originally posted this message in response to a question about morph targets that I received on the chatbots.org forum, and thought I'd include it here to provide more technical details. The engine is about 90% complete for Windows, and we have a working prototype. This is a description of what the engine already does in its current state:
Our system uses morph targets, and one of the things that makes it special is the degree of control we give over how the face moves between those targets. This lets us tune specifically for an inhuman character like an Orc, a regular human, or even something like Terrance and Philip from South Park (quick, virtually non-animated transitions).
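To make the idea of controllable transitions concrete, here is a minimal sketch of blending between morph targets with a tunable transition curve. All names here (`transition_weight`, the style labels, and so on) are illustrative assumptions, not the engine's actual API:

```python
# Illustrative sketch: blending a base mesh toward a morph target, with the
# transition curve controlling how "snappy" or "fleshy" the motion feels.
# None of these names come from the real engine.

def lerp(a, b, t):
    """Linearly interpolate between two 3D vertex positions."""
    return tuple(a[i] + (b[i] - a[i]) * t for i in range(3))

def blend_vertices(base, target, weight):
    """Blend every vertex of `base` toward `target` by `weight` in [0, 1]."""
    return [lerp(v0, v1, weight) for v0, v1 in zip(base, target)]

def transition_weight(t, style="human"):
    """Map normalized time t in [0, 1] to a blend weight.

    'human'      -> smooth ease-in/ease-out,
    'south_park' -> near-instant snap with virtually no in-between frames.
    """
    if style == "south_park":
        return 0.0 if t < 0.5 else 1.0
    # smoothstep easing for a natural-looking transition
    return t * t * (3.0 - 2.0 * t)
```

For example, halfway through a "human" transition the face is exactly half-blended, while a "south_park" character has already snapped to the target expression.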
Another thing about our system is that we also mix in emotions as morph targets, so the character can display complex combinations of emotional states, all while talking. We also have a special kind of "morph animation sequence", which allows an artist to layer in subtle additional animation. For example, picture a woman speaking, then singing the same words she just spoke. We can create that difference by applying one of our special animation types.
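The emotion-mixing idea can be sketched roughly as summing weighted morph-target deltas on top of the neutral face. The data layout and function below are assumptions for illustration only, not a description of the engine's internals:

```python
# Illustrative sketch of layering emotion morph targets on top of a speech
# viseme. Assumed layout: each morph target is stored as per-vertex offsets
# (deltas) from the neutral face, so independent targets can be accumulated.

def apply_morphs(neutral, layers):
    """Accumulate weighted morph-target deltas onto the neutral mesh.

    `layers` is a list of (deltas, weight) pairs, e.g. the viseme for the
    current phoneme plus one or more emotion targets.
    """
    out = [list(v) for v in neutral]
    for deltas, weight in layers:
        for vert, d in zip(out, deltas):
            for axis in range(3):
                vert[axis] += d[axis] * weight
    return [tuple(v) for v in out]

# A face mouthing "ah" while showing 70% "smile" mixed with 30% "worry":
#   apply_morphs(neutral, [(viseme_ah, 1.0), (smile, 0.7), (worry, 0.3)])
```

Because each layer is just a weighted delta, the speech animation and the emotional state can be animated independently and still combine on every frame.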
In addition to that, we also have skeletal animation support, meaning our characters would fit right into a game like Fallout… they can walk, run, shoot guns, etc., all while making full use of our system. As an example, a soldier might have certain levels of fear, courage, rage, or doubt (whatever), and we can apply them to his face, so whatever he says comes out of a face that accurately represents his state of mind, instead of the cardboard cutouts we see in games today.
So, to put it more simply:
- We use morph targets as one of the fundamental elements of our system
- Among our best features is how we blend between those morph targets, layer them, and how much control we give the programmer over blending. It took a lot of work to build what we have, so people shouldn't assume that being able to set a morph target means they can emulate our system
- We also have our own special type of morph animation technique which allows us to create very expressive characters, and it’s not hard for people to use
- We also support skeletal animation for characters (though that’s more for bodies)
- We also support morph animation (though it's something we've never needed to use and probably never will)
The engine code is very modular, and even though we utilize Ogre3D, it is not married to Ogre. It can be set up so that other game/graphics engines can use it as well. To put it another way, if you'd rather use Unity than Ogre3D for the backend, that is possible if you're willing to put in the work to make a Unity plugin.
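The usual way to keep an animation core backend-agnostic is to route all rendering calls through a thin adapter interface. The sketch below shows that pattern; the class and method names are hypothetical and do not come from the engine:

```python
# Illustrative sketch of a backend adapter: the animation core calls this
# interface instead of Ogre directly, so a Unity (or other) plugin only has
# to implement the same few methods. Names here are assumptions.
from abc import ABC, abstractmethod

class RenderBackend(ABC):
    """Thin adapter between the animation core and a graphics engine."""

    @abstractmethod
    def set_morph_weight(self, mesh_id: str, target: str, weight: float):
        """Push one morph-target weight to the renderer."""

class LoggingBackend(RenderBackend):
    """Stand-in backend (useful for tests); a Unity plugin would implement
    the same interface against Unity's blend-shape API instead."""

    def __init__(self):
        self.calls = []

    def set_morph_weight(self, mesh_id, target, weight):
        self.calls.append((mesh_id, target, weight))
```

With this split, swapping Ogre3D for another engine means writing one new `RenderBackend` implementation rather than touching the animation core.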
I wish I could post an executable demo showing the current state of the engine but one of the main problems we have is that the characters we have created are owned by Ford Motor Company. This engine was originally created while Zabaware was working on a contract for Ford. Part of the $20,000 we are requesting is for an artist to create new characters under an open license.
But if anyone wants to see our engine running the Ford characters, and the level of control it gives the programmer, I'd be happy to send you an executable if you're willing to sign a non-disclosure agreement. I just want to prevent the characters from being publicly released.