Keeping just one node in memory might seem to be an extreme reaction to the problem of memory limitations. The local beam search algorithm keeps track of K states rather than just one. It begins with K randomly generated states. At each step, all the successors of all K states are generated; the algorithm then selects the K best successors from the complete pool and repeats.
At first sight, a local beam search with K states might seem to be nothing more than running K random restarts in parallel instead of in sequence. In fact, the two algorithms are quite different. In a random-restart search, each search process runs independently of the others. In a local beam search, useful information is passed among the K parallel search threads.
Example: if one state generates several good successors and the other K − 1 states all generate bad successors, then the effect is that the first state says to the others, "come over here, the grass is greener!" The algorithm quickly abandons unfruitful searches and moves its resources to where the most progress is being made.
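The behaviour described above can be sketched as a short routine. This is an illustrative implementation, not a definitive one: the callbacks `successors(state)` and `value(state)` are hypothetical functions the caller supplies (higher value is better), and the stopping rule used here (stop when no successor beats the worst state in the beam) is a simplification.

```python
def local_beam_search(initial_states, successors, value, max_steps=1000):
    """Local beam search: keep the K best states at each step.

    `successors(state)` returns the successor states of `state`;
    `value(state)` scores a state (higher is better). Both are
    caller-supplied (hypothetical here).
    """
    beam = list(initial_states)  # K randomly generated starting states
    k = len(beam)
    for _ in range(max_steps):
        # Pool together the successors of ALL K states -- this shared
        # pool is how information passes between the parallel searches.
        pool = [s for state in beam for s in successors(state)]
        if not pool:
            break
        # Simplified stopping rule: quit when even the best successor
        # is no better than the worst state currently in the beam.
        if value(max(pool, key=value)) <= min(value(s) for s in beam):
            break
        # Keep only the K best successors from the combined pool.
        beam = sorted(pool, key=value, reverse=True)[:k]
    return max(beam, key=value)
```

For instance, on a toy one-dimensional problem where successors of `x` are `x - 1` and `x + 1` and the value is `-(x - 10)**2`, a beam started at `[0, 3, 7]` converges to `x = 10`.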
In its simplest form, local beam search can suffer from a lack of diversity among the K states: they can quickly become concentrated in a small region of the state space, making the search little more than an expensive version of hill climbing. A variant called stochastic beam search, analogous to stochastic hill climbing, helps to alleviate this problem. Instead of choosing the best K from the pool of candidate successors, stochastic beam search chooses K successors at random, with the probability of choosing a given successor being an increasing function of its value.
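A minimal sketch of the stochastic variant follows. The text only requires that selection probability be an increasing function of value; the softmax weighting used here is one illustrative choice (the `temperature` parameter is my addition), and `successors`/`value` are again hypothetical caller-supplied functions.

```python
import math
import random

def stochastic_beam_search(initial_states, successors, value,
                           steps=50, temperature=1.0):
    """Stochastic beam search: sample K successors at random, with
    probability increasing in their value (softmax weighting -- an
    assumed choice; any increasing function of value would do)."""
    beam = list(initial_states)
    k = len(beam)
    for _ in range(steps):
        pool = [s for state in beam for s in successors(state)]
        if not pool:
            break
        # Softmax weights: high-value successors are favoured, but
        # weaker ones can still survive, which preserves diversity.
        best = max(value(s) for s in pool)
        weights = [math.exp((value(s) - best) / temperature) for s in pool]
        beam = random.choices(pool, weights=weights, k=k)
    return max(beam, key=value)
```

Because selection is randomized, successive runs explore somewhat different regions; on the same toy problem as above the beam hovers near the optimum rather than locking onto it exactly.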
Stochastic beam search bears some resemblance to the process of natural selection, whereby the "successors" (offspring) of a "state" (organism) populate the next generation according to their "value" (fitness).