
AI of NPCs in a strategy game

Started by Arkon; 3 comments, last by Arkon 23 years, 8 months ago
Hello, I would like to hear your thoughts/ideas on how to program the AI of the enemies. Since it's the MOST complicated subject to do in a strategy game, I need your help! (I'm talking about the DESIGN ASPECT, of course.) Thanks, Arkon
Each enemy acts individually, yet in all likelihood is acting on behalf of a team at the same time. Therefore, each agent has two possibly conflicting goals: satisfy the individual's needs and fulfill the team's objectives. The keyword here is goal.
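To make that concrete, here is a rough C++ sketch of an agent arbitrating between an individual goal and a team goal each update. The names, weights, and numbers are placeholders I made up to illustrate the idea, not a definitive implementation.

#include <iostream>
#include <string>
#include <vector>

struct Goal {
    std::string name;
    float priority;   // how urgent this goal is right now
    bool  teamGoal;   // true if it serves the team's objective
};

// Pick the goal to pursue this update. teamWeight biases the agent toward the
// team's objectives: 0 = purely selfish, 1 = purely team-driven.
const Goal* selectGoal(const std::vector<Goal>& goals, float teamWeight)
{
    const Goal* best = nullptr;
    float bestScore = -1.0f;
    for (const Goal& g : goals) {
        float score = g.priority * (g.teamGoal ? teamWeight : 1.0f - teamWeight);
        if (score > bestScore) { bestScore = score; best = &g; }
    }
    return best;
}

int main()
{
    std::vector<Goal> goals = {
        { "RepairSelf",     0.8f, false },  // individual need
        { "CaptureOutpost", 0.6f, true  },  // team objective
    };
    const Goal* chosen = selectGoal(goals, 0.7f);  // mostly a team player
    std::cout << "Pursuing: " << chosen->name << "\n";
}

A teamWeight near 1 gives you a loyal soldier; near 0, a self-preserving mercenary.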

Each enemy unit has goals, and should be in a planning state attempting to fulfill these goals. Each high-level goal is composed of subgoals. There may be more than one set of subgoals which can effectively satisfy the top-level goal. Knowledge is required to plan and execute goals. The keyword here is knowledge.
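As a sketch of that goal/subgoal structure (again C++; the goal names and cost figures are invented, and the costs are assumed to come from the agent's knowledge):

#include <iostream>
#include <string>
#include <vector>

struct Plan {
    std::vector<std::string> subgoals;  // ordered steps that would satisfy the parent goal
    float estimatedCost;                // estimate drawn from the agent's knowledge
};

struct Goal {
    std::string name;
    std::vector<Plan> alternatives;     // more than one set of subgoals may work
};

// Choose the cheapest plan the agent currently knows about; a real agent would
// re-plan whenever its knowledge changes.
const Plan* cheapestPlan(const Goal& goal)
{
    const Plan* best = nullptr;
    for (const Plan& p : goal.alternatives)
        if (!best || p.estimatedCost < best->estimatedCost) best = &p;
    return best;
}

int main()
{
    Goal destroyOutpost {
        "DestroyOutpost",
        {
            { { "MassArmy", "FrontalAssault" },         20.0f },
            { { "CutSupplyLines", "WaitForSurrender" }, 14.0f },
        }
    };
    const Plan* plan = cheapestPlan(destroyOutpost);
    for (const std::string& step : plan->subgoals) std::cout << step << "\n";
}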

Each enemy has knowledge of how goals and subgoals are achieved. Some goals are actually the acquisition of additional knowledge to determine the feasibility of one solution over another. How does an agent acquire new knowledge? By using its existing knowledge of knowledge acquisition methods. For example, in satisfying the goal of gaining entry into a new basin, the agent has possibly proposed two solutions: enter through the canyon, or enter from over the ridge. To determine the best route, the agent needs additional knowledge. Is there an ambush in the canyon? Will we be seen coming over the ridge? To acquire this new knowledge, the agent can employ its knowledge of knowledge acquisition techniques to answer these questions. Those techniques could include several methods: ask another agent, send a scout in to check out the situation, or go in and look for yourself.
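The "knowledge about how to acquire knowledge" part might look something like this in code; the method names and the cost reasoning are purely illustrative.

#include <iostream>

enum class AcquisitionMethod { AskAlly, SendScout, GoLook };

// Pick a way to answer an open question based on what is cheapest right now;
// in a real game the conditions would come from the agent's situation
// (is an ally in contact, is a scout idle, how risky is the terrain, ...).
AcquisitionMethod chooseMethod(bool allyNearby, bool scoutAvailable)
{
    if (allyNearby)     return AcquisitionMethod::AskAlly;   // cheapest: just ask
    if (scoutAvailable) return AcquisitionMethod::SendScout; // risk a scout, not the unit
    return AcquisitionMethod::GoLook;                        // last resort: look yourself
}

int main()
{
    // Open question: "Is there an ambush in the canyon?"
    AcquisitionMethod m = chooseMethod(/*allyNearby=*/false, /*scoutAvailable=*/true);
    std::cout << "Chosen method: " << static_cast<int>(m) << " (1 = send a scout)\n";
}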

In addition, in a teamwork environment, there must be axioms for the sole purpose of coordinating the achievement of teamwork goals. All agents must initially believe in the objective of the team's goals. If any agent learns that it is pointless to continue with the achievement of the team's goals, it should be that agent's goal to communicate to its fellow agents the pointlessness of continuing.
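A minimal sketch of that coordination axiom; the message plumbing here is invented for illustration only.

#include <iostream>
#include <string>
#include <vector>

struct Agent {
    std::string name;
    bool believesGoalAchievable;

    void receive(const std::string& msg) {
        if (msg == "ABANDON_OBJECTIVE") believesGoalAchievable = false;
    }
};

// When one agent learns the objective is pointless, telling the rest of the
// team becomes its own goal.
void abandonObjective(Agent& sender, std::vector<Agent>& team)
{
    sender.believesGoalAchievable = false;
    for (Agent& other : team)
        if (&other != &sender) other.receive("ABANDON_OBJECTIVE");
}

int main()
{
    std::vector<Agent> team = { {"Alpha", true}, {"Bravo", true}, {"Charlie", true} };
    abandonObjective(team[0], team);   // Alpha discovers the objective is hopeless
    for (const Agent& a : team)
        std::cout << a.name << " still believes: " << a.believesGoalAchievable << "\n";
}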

You wanted a simple answer, didn't you? In order to create a game with lasting play value, you must remove the brittleness factor of the game. By brittleness, I mean exactly that. If you hit the game hard, does it shatter, revealing its shallow internals and the superficial facade that it really was?

It all boils down to depth, and how deep you are willing to program the structure of your game.

_______________________________
"To understand the horse you'll find that you're going to be working on yourself. The horse will give you the answers and he will question you to see if you are sure or not."
- Ray Hunt, in Think Harmony With Horses
ALU - SHRDLU - WORDNET - CYC - SWALE - AM - CD - J.M. - K.S. | CAA - BCHA - AQHA - APHA - R.H. - T.D. | 395 - SPS - GORDIE - SCMA - R.M. - G.R. - V.C. - C.F.
thanks man
i get the idea
real thanks
You know.. you really need to think about HOW you want the AI to work.. first. I mean, it's a computer.. so things must have values. It has to compare value to value. What you have to do is figure out a way to make the values have a grey area. That's the hard part.. making a non-perfect AI. If you want, email me and we'll get into a discussion about it.
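For what it's worth, one way to read the "grey area" idea in code: turn a hard comparison into a probability, so the AI is deliberately imperfect. The shaping function and constants below are arbitrary choices, only a sketch.

#include <cmath>
#include <cstdlib>
#include <iostream>

// Returns true with a probability that rises smoothly as our strength exceeds
// theirs, instead of a hard "attack if stronger" rule.
bool decideAttack(float ourStrength, float theirStrength)
{
    float edge = ourStrength - theirStrength;                   // positive = we look stronger
    float p = 0.5f + edge / (2.0f * (std::fabs(edge) + 5.0f));  // soft step, stays in (0,1)
    float roll = static_cast<float>(std::rand()) / RAND_MAX;
    return roll < p;
}

int main()
{
    std::srand(42);
    int attacks = 0;
    for (int i = 0; i < 1000; ++i)
        if (decideAttack(12.0f, 10.0f)) ++attacks;  // slight edge: attack often, but not always
    std::cout << "Attacked " << attacks << " of 1000 times\n";
}

With only a slight edge the agent attacks most of the time but not every time, which is the imperfection being asked for.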

J
quote: Original post by Niphty

You know.. you really need to think about HOW you want the AI to work.. first. I mean, it's a computer.. so things must have values. It has to compare value to value. What you have to do is figure out a way to make the values have a grey area. That's the hard part.. making a non-perfect AI.


We'll always be making non-perfect AIs, and not intentionally. Fuzzy value processing is important, but it is not a be-all/end-all solution. As soon as game AIs start modeling the processing of knowledge more effectively, game AI will leap forward.

Knowledge processing is a chain of reasoning that does not directly translate to visible behavior, yet does ultimately affect an agent's behavior. If an agent knows A->B and B->C and C->D and D->E, and later learns that A is true, the agent then knows that E is true. If the agent also knows that (E & X)->F is true and the agent later learns X is true, then the agent knows F is true. By knowing F is true, the agent might then have a solution to a goal.

Note: When I say an agent knows A->B is true, I am not saying the agent thinks A is true or B is true. What I am saying is the agent knows that "if A were to be true, then the agent knows that B would also have to be true".
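For illustration, here is a minimal forward-chaining sketch of that A->B ... ->E chain in C++; the rule representation is mine and deliberately naive.

#include <iostream>
#include <set>
#include <string>
#include <vector>

struct Rule {
    std::vector<std::string> premises;  // all must be known true
    std::string conclusion;             // then this becomes known true
};

// Repeatedly fire any rule whose premises are all known, until nothing new is derived.
void forwardChain(std::set<std::string>& known, const std::vector<Rule>& rules)
{
    bool changed = true;
    while (changed) {
        changed = false;
        for (const Rule& r : rules) {
            if (known.count(r.conclusion)) continue;
            bool allHold = true;
            for (const std::string& p : r.premises)
                if (!known.count(p)) { allHold = false; break; }
            if (allHold) { known.insert(r.conclusion); changed = true; }
        }
    }
}

int main()
{
    std::vector<Rule> rules = {
        {{"A"}, "B"}, {{"B"}, "C"}, {{"C"}, "D"}, {{"D"}, "E"},
        {{"E", "X"}, "F"},          // (E & X) -> F
    };
    std::set<std::string> known = {"A"};   // the agent later learns A
    forwardChain(known, rules);
    std::cout << "Knows E: " << known.count("E") << "\n";  // 1
    std::cout << "Knows F: " << known.count("F") << "\n";  // 0 until X is learned
    known.insert("X");
    forwardChain(known, rules);
    std::cout << "Knows F: " << known.count("F") << "\n";  // 1
}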



Edited by - bishop_pass on November 11, 2000 12:50:56 AM
_______________________________
"To understand the horse you'll find that you're going to be working on yourself. The horse will give you the answers and he will question you to see if you are sure or not."
- Ray Hunt, in Think Harmony With Horses
ALU - SHRDLU - WORDNET - CYC - SWALE - AM - CD - J.M. - K.S. | CAA - BCHA - AQHA - APHA - R.H. - T.D. | 395 - SPS - GORDIE - SCMA - R.M. - G.R. - V.C. - C.F.

