Comparison of the efficiency of two algorithms which solve the shortest path problem with an emotional agent

Authors

  • Silvana Petruseva, Mathematics Department, Faculty of Civil Engineering, 'Ss. Cyril and Methodius' University, Skopje, Macedonia

DOI:

https://doi.org/10.2298/YJOR0602211P

Keywords:

emotional agent, complexity, polynomial time, exponential time, adjacency matrix, shortest path

Abstract

This paper compares the efficiency of two algorithms by estimating their complexity. The agent architecture used is the Neural Network Crossbar Adaptive Array (NN-CAA), which implements a model of emotion. The problem considered is finding the shortest path in an environment with n states, one of which is the starting state, one the goal state, while some states are undesirable and must be avoided. Both algorithms find one path (one solution) efficiently, i.e. in polynomial time; one algorithm is faster than the other only by a multiplicative constant, which marks a step toward the optimality of the learning process. However, two theorems assert that finding the optimal solution (the shortest path) requires exponential time with both algorithms. It may be concluded that the concept of a subgoal is a step toward the optimality of the agent's learning process; further research is needed, however, to obtain an efficient, polynomial algorithm.
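The NN-CAA agent architecture itself is not reproduced here; purely as a minimal illustration of the problem domain described above (an environment with n states encoded as a hypothetical adjacency matrix, with a start state, a goal state, and undesirable states to avoid), a breadth-first search finds such a shortest path in polynomial time:

```python
from collections import deque

def shortest_path(adj, start, goal, undesirable):
    """BFS over an adjacency matrix, skipping undesirable states.

    adj[i][j] == 1 means a transition from state i to state j exists.
    Returns the shortest path as a list of states, or None if no path exists.
    Runs in O(n^2) time for n states.
    """
    n = len(adj)
    blocked = set(undesirable)
    if start in blocked or goal in blocked:
        return None
    prev = {start: None}          # also serves as the visited set
    queue = deque([start])
    while queue:
        s = queue.popleft()
        if s == goal:             # reconstruct the path by walking back
            path = []
            while s is not None:
                path.append(s)
                s = prev[s]
            return path[::-1]
        for t in range(n):
            if adj[s][t] and t not in prev and t not in blocked:
                prev[t] = s
                queue.append(t)
    return None

# 5-state example: state 2 is undesirable, so the path detours through 3.
adj = [
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0],
]
print(shortest_path(adj, 0, 4, {2}))  # [0, 3, 4]
```

This sketch only illustrates the domain; the paper's algorithms learn the path through agent-environment interaction rather than with full knowledge of the adjacency matrix.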

References

Bang-Jensen, J., and Gutin, G., Digraphs, Theory, Algorithms and Applications, Springer-Verlag London Limited, 2001.

Blondel, V., and Tsitsiklis, J., “A survey of computational complexity results in systems and control”, Automatica, 36, Elsevier, Oxford, 2000.

Bozinovski, S., Consequence Driven Systems: Teaching, Learning and Self-Learning Agents, Gocmar Press, Amherst, 1995.

Goldsmith, J., and Mundhenk, M., “Complexity issues in Markov decision processes”, Proceedings of IEEE Conference on Computational Complexity, 1998, 272-280.

Mansour, Y., and Singh, S., “On the complexity of policy iteration”, Proceedings of the 15th Conference on Uncertainty in Artificial Intelligence, Stockholm, Sweden, 1999.

Mundhenk, M., Goldsmith, J., and Allender, E., “The complexity of the policy existence problem for finite horizon partially-observable Markov decision processes”, Proceedings of the 25th Mathematical Foundations of Computer Science, Springer-Verlag, 1997, 129-138.

Orponen, P., “Computational complexity of neural networks: a survey”, Nordic Journal of Computing, 2 (1), Helsinki, Finland, 1995, 77-93.

Papadimitriou, C.H., Computational Complexity, Addison-Wesley Publishing Company, San Diego, California, 1994.

Petruseva, S., and Bozinovski, S., “Consequence programming: the algorithm ‘at subgoal go back’”, Mathematical Bulletin, book 24 (L), 2000.

Whitehead, S.D., “Reinforcement learning for the adaptive control of perception and action”, PhD thesis, University of Rochester, 1992.

Wilf, H.S., Algorithms and Complexity, Prentice-Hall, New Jersey, 1986.

Wooldridge, M., “The computational complexity of agent design problems”, Proceedings of the Fourth International Conference on Multi-Agent Systems (ICMAS), IEEE Press, 2000.

Published

2006-09-01

Section

Research Articles