So now let's dive under the hood ...
My design basically has two classes: an Agent, and a Task. An Agent represents any entity in the gameworld that can run AI, and the Tasks are the things you can get an Agent to do. So a task might be low-level ("Moove", "Shoot"), high-level ("Fight", "PathFinding"), a state in a state machine ("Flee", "Attack"), a goal-based planner (e.g. a STRIPS system generating new subtasks), a sensor ("DamageSensor", "LineOfSight", "AudioSensor"), etc, etc. Any Agent can run zero or more Tasks (i.e. it can run Tasks in parallel) and new Tasks assigned to the Agent can either augment, interrupt or replace existing Tasks.
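To make that concrete, here's a minimal sketch of the shape of those two classes. All the names here (`add_task`, `replace`, `update`) are my own illustration of the idea, not the actual API:

```python
class Task:
    """Base class for anything an Agent can be asked to do."""
    def update(self, agent, dt):
        pass  # each concrete Task does its work here, once per frame


class Agent:
    """A gameworld entity that runs zero or more Tasks in parallel."""
    def __init__(self):
        self.tasks = []

    def add_task(self, task, replace=False):
        # A new Task can augment existing ones (run alongside them)
        # or replace them outright.
        if replace:
            self.tasks.clear()
        self.tasks.append(task)

    def update(self, dt):
        # Tick every active Task each frame.
        for task in list(self.tasks):
            task.update(self, dt)
```

Interrupting (rather than replacing) would just mean parking the current Tasks somewhere instead of clearing them, and restoring them when the interrupting Task finishes.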
As well as managing and running the Tasks, the Agent provides an abstract interface to the game characters (or to use AI-speak, the Agent is the actuator of the AI). So rather than having the AI know how to attach a LookAt constraint to a game skeleton, the AI just asks the Agent to "LookAt" something, and it's the Agent implementation which knows how to make this type of game character look (e.g. using a Skeleton constraint, vs playing a blended animation, vs rotating the character, etc). This allows me to keep the AI code focused on AI, without knowing too much about the way the game world and characters are put together.
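The actuator idea boils down to a per-character-type implementation behind one interface. A toy sketch, with hypothetical class names and return values standing in for real animation calls:

```python
class Agent:
    """Abstract actuator: the AI calls this, never the game internals."""
    def look_at(self, target):
        raise NotImplementedError


class SkeletonCharacter(Agent):
    def look_at(self, target):
        # This kind of character looks by attaching a skeleton constraint.
        return ("skeleton-constraint", target)


class SimpleCharacter(Agent):
    def look_at(self, target):
        # A simpler character just rotates to face the target.
        return ("rotate", target)
```

The AI only ever says `agent.look_at(thing)`; which of those bodies runs depends on what kind of character the Agent is wrapping.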
Now the nice thing about building the AI as Tasks on top of an Agent, is it allows me to totally mix and match control schemes. So I can have an enemy using some fancy-pants AI planning engine, but then have a game script which assigns him some simple scripted Tasks (e.g. go here, press this button, then say this) - and the Task framework will handle putting the high-level fancy AI to sleep while it carries out the scripted sequence. And this can work the other way too: I can give a game character some simple scripted motion, and then "call" some higher level combat AI task when it sees the player.
And most satisfyingly of all, the player input fits right in. There's a "ControlScheme" task which implements the game's control scheme (i.e. reading the joystick/keyboard input and turning it into Agent calls). Not only does this allow me to re-use all the same Agent functions for controlling the main character - but it means you can script the player's character by simply assigning him some new tasks. Then when they end, the ControlScheme task takes over again and you get control back. But you could also do cool things like attach the ControlScheme task to other AI Agents in the game to let the player temporarily control other game characters or vehicles (as Vehicles are just another type of Agent that implement the Agent interface as a vehicle model rather than a character skeleton).
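The handover between script and player can be sketched in a few lines. The `scripted_tasks` list and `move` call below are invented stand-ins, just to show the shape of the idea:

```python
class Agent:
    """Tiny stub Agent: just enough to show the handover."""
    def __init__(self):
        self.scripted_tasks = []   # non-empty while a script has control
        self.moves = []
    def move(self, direction):
        self.moves.append(direction)


class ControlSchemeTask:
    """Reads player input and turns it into Agent calls."""
    def __init__(self, read_input):
        self.read_input = read_input   # e.g. polls the joystick/keyboard

    def update(self, agent, dt):
        # While scripted Tasks are running, the player input is ignored;
        # when they end, control silently comes back.
        if agent.scripted_tasks:
            return
        agent.move(self.read_input())
```

Attaching that same `ControlSchemeTask` to a vehicle Agent is what gets you the "player drives the vehicle" trick for free.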
So that's my AI plan for world domination.
So far, I've refactored all my existing character and control classes to sit on top of the new Agent/Task architecture. I can control my cow by having him run the ControlScheme task, and I can assign a simple "Moove" task to the Elephants to make them run around in a circle. Strangely, just seeing the little Elephants run around crazily in a circle is very satisfying. If nothing else, just having a really clean decoupling between the control code and the character/agent makes me very happy (as there used to be a big mess of auto-push code mixed through the generic character code which always upset me).
The next goal is to add a basic PhysicsSensor so I can have the Elephants change direction when they bump into something. Unfortunately, this requires me to start working out how Task management is specified in the AI graph - something I don't have a nice solution to yet.
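The sensor itself is the easy half; something like this is probably all it needs (the `bumped` flag and single-axis `heading` are simplifying assumptions of mine, standing in for whatever the physics layer actually reports):

```python
class Elephant:
    """Stub agent: a heading plus a flag the physics layer would set."""
    def __init__(self):
        self.heading = 1.0      # +1 / -1 along one axis, for simplicity
        self.bumped = False     # set when we run into something


class PhysicsSensor:
    """A sensor Task: reverses the heading whenever a bump is reported."""
    def update(self, agent, dt):
        if agent.bumped:
            agent.heading = -agent.heading
            agent.bumped = False
```

The hard half is the part the post mentions: deciding, in the AI graph, which Tasks a firing sensor should wake, interrupt or replace.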
And just to wrap up, here's an in-development outtake where I forgot to lock some of the physics axes ...
Cheers!