Searching Techniques

Every AI program has to carry out a process of searching when the solution steps are not explicit. Searching is needed because the solution steps are not known beforehand and have to be found. Basically, a search process requires the following:
1. The initial state description of the problem.
2. A set of legal operators that changes the state.
3. The final or goal state.

          The searching process in AI can be broadly classified into two major parts.
1. Brute force searching techniques (Or) Uninformed searching techniques.
2. Heuristic searching techniques (Or) Informed searching techniques.

Brute force searching techniques:

          In these techniques, no preference is given to the order in which successor nodes are generated and selected. The selected path is followed blindly or mechanically; no information is used to prefer one child over another. These are commonly used search procedures that explore all the alternatives during the search process. They have no domain-specific knowledge; all they need are the initial state, the final state, and the set of legal operators. The most important brute force searching techniques are
1.     Depth First Search
2.     Breadth First Search

Depth First Search: This is a very simple brute force searching technique. The search begins by expanding the initial node, i.e., by using an operator to generate all successors of the initial node and testing them.
This procedure determines whether the goal can be reached or not, but not the path that has to be followed. DFS performs its search by diving downward into the tree as quickly as possible.
Algorithm:
Step 1:        Put the initial node on a list START.
Step 2:        If START is empty or START = GOAL, terminate the search.
Step 3:        Remove the first node from START. Call this node "a".
Step 4:        If (a = GOAL), terminate the search with success.
Step 5:        Else, if node "a" has successors, generate all of them and add them at the beginning of START.
Step 6:        Go to Step 2.
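The steps above can be sketched in Python. The successor table and node names below are illustrative assumptions, and a visited set is added so the sketch cannot loop forever on graphs with cycles (the bare algorithm has no such guard):

```python
# A minimal sketch of the DFS algorithm above, on an assumed successor table.
def dfs(start, goal, successors):
    START = [start]                      # Step 1: put the initial node on START
    visited = set()
    while START:                         # Step 2: stop when START is empty
        a = START.pop(0)                 # Step 3: remove the first node, call it a
        if a == goal:                    # Step 4: success
            return True
        if a in visited:
            continue
        visited.add(a)
        # Step 5: add successors at the *beginning* of START (stack behaviour)
        START = list(successors.get(a, [])) + START
    return False                         # START exhausted without reaching GOAL

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"]}
print(dfs("A", "F", tree))  # True
```

Prepending the children to START is exactly what makes this depth first: the most recently generated node is always expanded next.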

           

The major drawback of DFS is determining the depth up to which the search has to proceed; this depth is called the cutoff depth.
          The value of the cutoff depth is essential, because otherwise the search will go on and on. If the cutoff depth is too small, a solution may not be found; if it is too large, the time complexity will be greater.
Advantages: DFS requires less memory, since only the nodes on the current path are stored. By chance, DFS may find a solution without examining much of the search space at all.
Breadth First Search:  This is also a brute force search procedure like DFS. Here the search progresses level by level, unlike DFS, which goes deep into the tree. An operator is employed to generate all possible children of a node. Being a brute force search, BFS generates all the nodes to identify the goal. The amount of time taken to generate these nodes is proportional to the depth "d" and branching factor "b", and is given by O(b^d).
Algorithm:
Step 1:        Put the initial node on a list START.
Step 2:        If START is empty or START = GOAL, terminate the search.
Step 3:        Remove the first node from START. Call this node "a".
Step 4:        If (a = GOAL), terminate the search with success.
Step 5:        Else, if node "a" has successors, generate all of them and add them at the tail of START.
Step 6:        Go to Step 2.
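The only change from the DFS sketch is that successors are appended at the tail of START, which makes the search proceed level by level. The example tree is an illustrative assumption:

```python
from collections import deque

# A minimal sketch of the BFS algorithm above, on an assumed successor table.
def bfs(start, goal, successors):
    START = deque([start])               # Step 1
    visited = {start}
    while START:                         # Step 2
        a = START.popleft()              # Step 3: remove the first node, call it a
        if a == goal:                    # Step 4: success
            return True
        for s in successors.get(a, []):  # Step 5: add successors at the tail
            if s not in visited:
                visited.add(s)
                START.append(s)
    return False

tree = {"A": ["B", "C"], "B": ["D"], "C": ["E"]}
print(bfs("A", "E", tree))  # True
```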
Advantages:
1.                 BFS will not get trapped exploring a blind alley.
2.                 If there is a solution, BFS is guaranteed to find it.
Disadvantages:
1.                 The amount of time needed to generate all the nodes is considerable because of the time complexity.
2.                 Memory constraint is also a major problem because of the space complexity.
3.                 The searching process remembers all the nodes, including those of no practical use to the search.

Heuristic Search Techniques:
           In informed or directed search, some information about the problem space is used to compute a preference among the children for exploration and expansion.
The process of searching can be drastically reduced by the use of heuristics. A heuristic is a technique that improves the efficiency of the search process. Heuristics are approximations used to minimize the search.
Generally, heuristics are used for two categories of problems:
1.                 Problems for which no exact algorithms are known and one needs to find an approximate, satisfying solution, for example computer vision and speech recognition.
2.                 Problems for which exact solutions are known, like Rubik's Cube and chess.

The following algorithms make use of heuristic evaluation:
1.                 Generate & test
2.                 Hill climbing
3.                 Best first search
4.                 A* Algorithm
5.                 AO* Algorithm
6.                 Constraint satisfaction
7.                 Means- ends analysis.

1. Generate and test: The generate-and-test strategy is the simplest of all the approaches. The generate-and-test algorithm is a depth-first search procedure, since complete solutions must be generated before they can be tested. In its most systematic form, it is simply an exhaustive search of the problem space. It is also known as the British Museum algorithm, a reference to a method for finding an object in the British Museum by wandering randomly.
Algorithm:
Step 1: Generate a possible solution. For some problems this means generating a particular point in the problem space. For others it means generating a path from a start state.
Step 2: Test to see if this is actually a solution by comparing the chosen point, or the endpoint of the chosen path, to the set of acceptable goal states.
Step 3: If a solution has been found, quit; otherwise, return to Step 1.
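The three steps above can be sketched as a simple loop. The "secret code" below is a made-up example, not from the text:

```python
import itertools

# A toy generate-and-test sketch: candidates are generated exhaustively and
# each one is tested against a goal predicate.
def generate_and_test(is_solution, candidates):
    for candidate in candidates:         # Step 1: generate a possible solution
        if is_solution(candidate):       # Step 2: test it against the goal
            return candidate             # Step 3: quit on success
    return None                          # space exhausted without success

secret = (4, 2, 7)
found = generate_and_test(lambda c: c == secret,
                          itertools.product(range(10), repeat=3))
print(found)  # (4, 2, 7)
```

Note how blind this is: nothing steers the generator toward the goal, which is exactly the weakness hill climbing addresses next.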

2. Hill Climbing: This is a variant of generate-and-test in which feedback from the test procedure is used to help the generator decide which direction to move in the search space. In a pure generate-and-test procedure the test function responds with only a yes or no. But the test function can be augmented with a heuristic function that provides an estimate of how close a given state is to a goal state. Hill climbing is often used when a good heuristic function is available for evaluating states, but no other useful knowledge is available.
Suppose you are in an unfamiliar city without a map and you want to get downtown. You simply aim for the tall buildings. The heuristic function is just the distance between the current location and the location of the tall buildings, and the desirable states are those in which this distance is minimized.






This is also a discrete optimization algorithm that uses a simple heuristic function. There is practically no difference between hill climbing and DFS, except that the children of the node that has been expanded are sorted by their remaining distance from the goal.
Algorithm:
Step 1:        Put the initial node on a list START.
Step 2:        If (START is empty) or (START = GOAL), terminate the search.
Step 3:        Remove the first node from START. Call this node "a".
Step 4:        If (a = GOAL), terminate the search with success.
Step 5:        Else, if node "a" has successors, generate all of them, find out how far they are from the goal node, sort them by their remaining distance from the goal, and add them to the beginning of START.
Step 6:        Go to Step 2.
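The algorithm above is the DFS sketch with one extra line: newly generated children are sorted by their estimated remaining distance before being placed at the front of START. The graph and the heuristic table h below are illustrative assumptions:

```python
# A sketch of the hill-climbing algorithm above, on assumed example data.
def hill_climbing(start, goal, successors, h):
    START = [start]
    visited = set()
    while START:
        a = START.pop(0)
        if a == goal:
            return True
        if a in visited:
            continue
        visited.add(a)
        # sort children so the one that looks closest to the goal comes first
        children = sorted(successors.get(a, []), key=lambda n: h[n])
        START = children + START
    return False

succ = {"S": ["A", "B"], "A": ["G"], "B": []}
h = {"S": 3, "A": 1, "B": 2, "G": 0}
print(hill_climbing("S", "G", succ, h))  # True
```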

Problems of hill climbing:
Local maximum: A state that is better than all its neighbors, but not better than some other states farther away.


 
[Figure: Local maximum. Here all moves make things worse, because of foothills.]
Plateau: A flat area of the search space in which all neighbors have the same value.

[Figure: Plateau]

Ridge: An area of the search space that is higher than the surrounding areas, described as a long, narrow stretch of elevated ground, or a narrow elevation or raised part running along or across a surface.

[Figure: Ridge]

            In order to overcome these problems, adopt one of the following methods, or a combination of them.

1. Backtracking for local maxima. Backtracking helps in undoing what has been done so far and permits trying a different path to attain the global peak.

2. A big jump is the solution to escape from a plateau. A huge jump is recommended because in a plateau all neighboring points have the same value.

3. Trying different paths at the same time is the solution for dealing with ridges.

Best First Search: This is a way of combining the advantages of both depth-first search and breadth-first search into a single method.
DFS is good because it allows a solution to be found without all competing branches having to be expanded. BFS is good because it does not get trapped on dead-end paths. One way of combining the two is to follow a single path at a time, but switch paths whenever some competing path looks more promising than the current one.
In this procedure, the heuristic function used, here called an evaluation function, is an indicator of how far the node is from the goal node. Goal nodes have an evaluation function value of zero.
There is only a minor variation between hill climbing and best-first search. In the former, we sorted only the children of the node just expanded; here we sort the entire list to identify the next node to be expanded. The paths found by best-first search are likely to give solutions faster because it expands the node that seems closest to the goal. However, there is no guarantee of this.
Algorithm:
Step 1: Put the initial node on a list START.
Step 2: If (START is empty) or (START = GOAL), terminate the search.
Step 3: Remove the first node from START. Call this node "a".
Step 4: If (a = GOAL), terminate the search with success.
Step 5: Else, if node "a" has successors, generate all of them. Find out how far they are from the goal node, and sort all the children generated so far by their remaining distance from the goal.
Step 6: Name this list START1.
Step 7: Replace START with START1.
Step 8: Go to Step 2.
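Keeping the whole list sorted, as Steps 5 to 7 require, is what a priority queue does. In this sketch the frontier is ordered by the evaluation function h(n), so the node that seems closest to the goal is always expanded next; the graph and h values are illustrative assumptions:

```python
import heapq

# A sketch of the best-first search algorithm above, on assumed example data.
def best_first(start, goal, successors, h):
    START = [(h[start], start)]          # frontier kept ordered by h via a heap
    visited = set()
    while START:
        _, a = heapq.heappop(START)      # node with the smallest h value
        if a == goal:
            return True
        if a in visited:
            continue
        visited.add(a)
        for s in successors.get(a, []):
            heapq.heappush(START, (h[s], s))
    return False

succ = {"S": ["A", "B"], "A": ["C"], "B": ["G"], "C": []}
h = {"S": 4, "A": 2, "B": 3, "C": 1, "G": 0}
print(best_first("S", "G", succ, h))  # True
```

Notice that the search first follows A and C (they look closer) before switching to the competing path through B, exactly the switching behaviour described above.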

A* algorithm: The best-first search algorithm just presented is a simplification of an algorithm called the A* algorithm, first presented by Hart, Nilsson, and Raphael.
Apart from the evaluation function values, one can also bring in cost functions, which indicate how much of resources like time, energy, and money has been spent in reaching a particular node from the start. While evaluation functions deal with the future, cost functions deal with the past. Since the cost function values have actually been expended, they are more concrete than evaluation function values. If it is possible to obtain both, the A* algorithm can be used. The basic principle is to take the sum of the cost and evaluation function values for a state as its measure of goodness, and use this as the yardstick instead of the evaluation function value alone, as in best-first search. The sum of the evaluation function value and the cost along the path leading to that state is called the fitness number. While best-first search uses the evaluation function value for expanding the best node, A* uses the fitness number for its computations.
Algorithm:
Step 1: Put the initial node on a list START.
Step 2: If (START is empty) or (START = GOAL), terminate the search.
Step 3: Remove the first node from START. Call this node "a".
Step 4: If (a = GOAL), terminate the search with success.
Step 5: Else, if node "a" has successors, generate all of them. Estimate the fitness number of each successor by totaling the evaluation function value and the cost function value, and sort the list by fitness number.
Step 6: Name the new list START1.
Step 7: Replace START with START1.
Step 8: Go to Step 2.
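A hedged sketch of the algorithm above: each node carries a fitness number f = g + h, where g is the cost spent so far and h is the evaluation function estimate. The graph, arc costs, and h values are assumed example data:

```python
import heapq

# A sketch of A* search; successors maps a node to (child, arc_cost) pairs.
def a_star(start, goal, successors, h):
    open_list = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}                           # cheapest known g per node
    while open_list:
        f, g, a, path = heapq.heappop(open_list)  # smallest fitness number
        if a == goal:
            return path, g
        for s, cost in successors.get(a, []):
            g2 = g + cost
            if g2 < best_g.get(s, float("inf")):  # found a cheaper route to s
                best_g[s] = g2
                heapq.heappush(open_list, (g2 + h[s], g2, s, path + [s]))
    return None

succ = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)]}
h = {"S": 2, "A": 4, "B": 1, "G": 0}
print(a_star("S", "G", succ, h))  # (['S', 'B', 'G'], 5)
```

Here pure best-first search, looking at h alone, would commit to A first; the fitness number g + h lets A* recognize that the route through B is cheaper overall.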
AND-OR Graph (AO*): In this method, a complex problem is broken down or decomposed into a set of primitive sub-problems. Solutions for these primitive sub-problems are easily obtained. The solutions for all the sub-problems collectively give the solution for the complex problem. This is also called problem reduction.
          Between the complex problem and the sub-problems there exist two kinds of relationships, i.e., the AND relationship and the OR relationship.
          In an AND relationship, the solution for the problem is obtained by solving all the sub-problems (remember the AND gate truth table condition).
          In an OR relationship, the solution for the problem is obtained by solving any of the sub-problems (remember the OR gate truth table condition).
          This is why the structure is called an AND-OR graph.
Problem reduction is used on problems such as theorem proving, symbolic integration, and analysis of industrial schedules.
            To describe an algorithm for searching an AND-OR graph, we need to use a value called FUTILITY. If the estimated cost of a solution becomes greater than the value of FUTILITY, then we give up the search. FUTILITY should be chosen to correspond to a threshold such that any solution with a cost above it is too expensive to be practical, even if it could ever be found.


[Figure: An AND-OR graph. Node A can be solved either through B alone (estimated cost 5) or through both C (estimated cost 3) and D (estimated cost 4), which are joined by an AND arc; with unit arc costs the branch through B costs 6, while the C-D branch costs 9.]

AO* Algorithm:

          The problem-reduction algorithm we just described is a simplification of an algorithm presented by Martelli and Montanari and by Nilsson. Nilsson calls it the AO* algorithm.
1.                 Place the start node S on OPEN.
2.                 Using the search tree constructed thus far, compute the most promising solution tree T.
3.                 Select a node "n" that is both on OPEN and a part of T. Remove n from OPEN and place it on CLOSED.
4.                 If "n" is a terminal goal node, label "n" as SOLVED. If the solution of "n" results in any of n's ancestors being solved, label all those ancestors as SOLVED. If the start node S is solved, exit with success, where T is the solution tree. Remove from OPEN all nodes with a solved ancestor.
5.                 If n is not a solvable node (operators cannot be applied), label n as UNSOLVABLE. If the start node is labeled UNSOLVABLE, exit with failure. If any of n's ancestors become unsolvable because n is, label them UNSOLVABLE as well. Remove from OPEN all nodes with unsolvable ancestors.
6.                 Otherwise, expand node n, generating all of its successors. For each successor node that contains more than one sub-problem, generate successors for the individual sub-problems. Attach to each newly generated node a back pointer to its predecessor. Compute the cost estimate h* for each newly generated node and place all such nodes that do not yet have descendants on OPEN. Next, recompute the values of h* at n and at each ancestor of n.
7.                 Return to step 2.

It can be shown that AO* will always find a minimum-cost solution tree if one exists, provided only that h*(n) <= h(n) and all arc costs are positive. Like A*, its efficiency depends on how closely h* approximates h.
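As a toy illustration of the cost-revision step, the estimated cost of a node in an AND-OR graph can be computed recursively: an OR choice keeps the cheapest connector, while an AND connector sums the costs of all its sub-problems. The graph below mirrors the A, B, C, D example above; the representation is an assumption and this is only the cost computation, not Nilsson's full AO* with SOLVED labels:

```python
# Each graph entry maps a node to its connectors: an OR choice is a separate
# connector, an AND arc is one connector holding several children together.
def cost(node, graph, h, arc_cost=1):
    """Cheapest estimated solution cost of the sub-graph rooted at node."""
    if node not in graph:                         # primitive sub-problem
        return h.get(node, 0)
    best = float("inf")
    for children in graph[node]:                  # one connector = one AND group
        total = sum(arc_cost + cost(c, graph, h) for c in children)
        best = min(best, total)                   # OR: keep the cheapest group
    return best

# A is solvable through B alone, or through C AND D together
graph = {"A": [["B"], ["C", "D"]]}
h = {"B": 5, "C": 3, "D": 4}
print(cost("A", graph, h))  # 6  (via B: 1+5 = 6; via C and D: (1+3)+(1+4) = 9)
```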

Constraint Satisfaction:

Many problems in AI can be viewed as problems of constraint satisfaction, in which the goal is to discover some problem state that satisfies a given set of constraints. Constraint satisfaction is a search procedure that operates in a space of constraint sets. The initial state contains the constraints that are originally given in the problem description. A goal state is any state that has been constrained "enough", where "enough" must be defined for each problem. Constraint satisfaction is a two-step process. First, constraints are discovered and propagated as far as possible throughout the system. Then, if there is still not a solution, search begins: a guess is made and added as a new constraint.
Algorithm:
1.     Propagate available constraints. To do this, first set OPEN to the set of all objects that must have values assigned to them in a complete solution. Then do until an inconsistency is detected or until OPEN is empty:
(a)  Select an object OB from OPEN. Strengthen as much as possible the set of constraints that apply to OB.
(b)  If this set is different from the set that was assigned the last time OB was examined, or if this is the first time OB has been examined, then add to OPEN all objects that share any constraints with OB.
(c)  Remove OB from OPEN.

2.     If the union of the constraints discovered above defines a solution, then quit and report the solution.

3.     If the union of the constraints discovered above defines a contradiction, then return failure.

4.     If neither of the above occurs, then it is necessary to make a guess at something in order to proceed. To do this, loop until a solution is found or all possible solutions have been eliminated:
(a)  Select an object whose value is not yet determined and select a way of strengthening the constraints on that object.
(b)  Recursively invoke constraint satisfaction with the current set of constraints augmented by the strengthening constraint just selected.


Applying this algorithm in a particular problem domain requires the use of two kinds of rules: rules that define the way constraints may validly be propagated, and rules that suggest guesses when guesses are necessary.
The solution process proceeds in cycles; at each cycle two significant things are done:
1.     Constraints are propagated by using rules that correspond to the properties of arithmetic.
2.     A value is guessed for some letter whose value is not yet determined.
Problem 1:
                      S E N D
                   + M O R E
                   =========
                   M O N E Y

Let M = 1.
C3 + S + 1 > 9, and C3 can be 0 or 1, so S = 9 or 8.
C3 + S + M can be 9, 10 or 11. If it is 9, there is no carry; if the sum is 11, then O = 1. But M is already assigned 1, so O = 0. S = 9 or 8.
Let C3 = 0 and S = 9.
C2 + E + O = N. If C2 = 0, then E = N, which is wrong.
So C2 = 1, and 1 + E = N.
Let E = 2; then N = 3.
                      9 2 3 D
                      1 0 R 2
                   =========
                   1 0 3 2 Y

R = 9 with C2 = 0 is wrong; R = 8 with C2 = 1 is correct.

                      9 2 3 D
                      1 0 8 2
                   =========
                   1 0 3 2 Y

To get a carry, D > 8, so D = 8 or 9, both of which clash. Similarly, E = 3 and E = 4 lead to clashes.
Now for E = 5, N = 6:

                      9 5 6 D
                      1 0 R 5
                   =========
                   1 0 6 5 Y

C1 + 6 + R = 15. C1 = 0 gives R = 9, which is wrong; C1 = 1 gives R = 8.

                      9 5 6 D
                      1 0 8 5
                   =========
                   1 0 6 5 Y

Now D + 5 > 9, so D > 4.
D = 6 gives Y = 1, which is wrong.
D = 7 gives Y = 2, which is correct.
D = 8 or 9 is wrong.



Result:
                      9 5 6 7
                      1 0 8 5
                   =========
                   1 0 6 5 2

Values: S = 9, E = 5, N = 6, D = 7, M = 1, O = 0, R = 8, Y = 2.
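The hand derivation above can be cross-checked with a small brute-force solver, a sketch of the guessing step taken to its extreme. M is fixed to 1 (as derived from the leftmost carry), and the solver then tries every assignment of the remaining distinct digits to S, E, N, D, O, R, Y:

```python
import itertools

# Brute-force check of SEND + MORE = MONEY with M fixed to 1.
def solve():
    M = 1
    digits = [d for d in range(10) if d != M]
    for S, E, N, D, O, R, Y in itertools.permutations(digits, 7):
        if S == 0:                      # leading digit cannot be zero
            continue
        send  = 1000 * S + 100 * E + 10 * N + D
        more  = 1000 * M + 100 * O + 10 * R + E
        money = 10000 * M + 1000 * O + 100 * N + 10 * E + Y
        if send + more == money:
            return send, more, money
    return None

print(solve())  # (9567, 1085, 10652)
```

This confirms 9567 + 1085 = 10652 and that the assignment found by constraint propagation is the only one.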

Problem 2:

            D O N A L D
            G E R A L D
            =========
            R O B E R T

Column equations (C1 to C5 are the carries):
D + D = C1 T
C1 + L + L = C2 R
C2 + A + A = C3 E
C3 + N + R = C4 B
C4 + O + E = C5 O
C5 + D + G = R

Let us assume that there are no carries. Then O + E = O, so E = 0, and D + G <= 9.
Let D = 1; then T = 2.
Let L = 3; then R = 6.
Let A = 4; then E = 8, contradicting E = 0.
Now N and R would have to take values from 5, 7, 9, but this is not possible: a carry appears, and the O + E = O condition fails.
Consider E = 9: in the column C4 + O + E = C5 O we need C4 + E = 10, so C4 = 1 and E = 9.
Let D = 1 => T = 2; L = 3 => R = 6; A = 4 => E = 8, a contradiction.
Let D = 5 => T = 0, with C1 = 1.
1 + L + L = R. Let L = 8; then 1 + 8 + 8 = 17, so R = 7 with a carry.
Since E = 9 and 1 + A + A = E, A = 4.
In the column N + R = B, i.e., N + 7 = 10 + B (with the carry C4 = 1), B must be 1, 2 or 3.
If B = 1, then N = 4, but A = 4.
If B = 2, then N = 5, but D = 5.
So B = 3 and N = 6.
D + G + carry = R: 5 + G + 1 = 7, so G = 1.
Result:
            D O N A L D             5 2 6 4 8 5
            G E R A L D             1 9 7 4 8 5
            =========              =========
            R O B E R T             7 2 3 9 7 0

Values: D = 5, T = 0, L = 8, R = 7, A = 4, N = 6, B = 3, O = 2, E = 9, G = 1.
Problem 3:
            C R O S S               9 6 2 3 3
            R O A D S               6 2 5 1 3
            =========             ==========
         D A N G E R             1 5 8 7 4 6

Values: C = 9, R = 6, O = 2, S = 3, A = 5, D = 1, E = 4, N = 8, G = 7.

Means-Ends Analysis:
            We have presented a collection of search strategies that can reason either forward or backward, but for a given problem, one direction or the other must be chosen.  Generally a mixture of the two directions is appropriate.  Such a mixed strategy would make it possible to solve the major parts of a problem first and then go back and solve the small problems that arise in “gluing” the big pieces together.  A technique known as means-ends analysis allows us to do that.
          The means-ends analysis process centers on the detection of differences between the current state and the goal state. Once such a difference is isolated, an operator that can reduce the difference must be found. But perhaps that operator cannot be applied to the current state, so we set up a sub-problem of getting to a state in which it can be applied. This kind of backward chaining, in which operators are selected and then sub-goals are set up to establish the preconditions of the operators, is called operator subgoaling.
          Just like the other problem-solving techniques we have discussed, means-ends analysis relies on a set of rules that can transform one problem state into another. These rules are usually represented with a left side that describes the conditions that must be met for the rule to be applicable (these conditions are called the rule's preconditions) and a right side that describes those aspects of the problem state that will be changed by the application of the rule.
Algorithm:
1. Compare CURRENT to GOAL. If there are no differences between them, then return.
2. Otherwise, select the most important difference and reduce it by doing the following until success or failure is signaled:
(a)  Select an as yet untried operator O that is applicable to the current difference. If there are no such operators, then signal failure.
(b)  Attempt to apply O to CURRENT. Generate descriptions of two states: O-START, a state in which O's preconditions are satisfied, and O-RESULT, the state that would result if O were applied in O-START.
(c)  If
          (FIRST-PART <- MEA(CURRENT, O-START))
     and
          (LAST-PART <- MEA(O-RESULT, GOAL))
     are successful, then signal success and return the result of concatenating FIRST-PART, O, and LAST-PART.
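The recursive structure above can be sketched on a made-up "room" domain. States are sets of facts; each operator has preconditions, an add list, and a delete list. All the names, the domain itself, and the difference-selection rule are illustrative assumptions, not GPS or any standard library:

```python
# Toy means-ends analysis: operator name -> (preconditions, add set, delete set)
OPS = {
    "walk-to-door": (set(),                    {"at-door"},   set()),
    "open-door":    ({"at-door"},              {"door-open"}, set()),
    "go-through":   ({"at-door", "door-open"}, {"inside"},    {"at-door"}),
}

def apply_op(state, name):
    pre, add, dele = OPS[name]
    assert pre <= state, f"preconditions of {name} not satisfied"
    return (state - dele) | add

def mea(current, goal, depth=5):
    """Return a list of operator names turning current into goal, or None."""
    if goal <= current:                          # 1. no difference: done
        return []
    if depth == 0:                               # guard against deep recursion
        return None
    diff = next(iter(goal - current))            # 2. pick one difference
    for name, (pre, add, dele) in OPS.items():
        if diff not in add:                      # operator must reduce diff
            continue
        first = mea(current, pre, depth - 1)     # reach O-START (subgoaling)
        if first is None:
            continue
        state = current
        for step in first:
            state = apply_op(state, step)
        state = apply_op(state, name)            # O-RESULT
        rest = mea(state, goal, depth - 1)       # finish from O-RESULT
        if rest is not None:
            return first + [name] + rest         # FIRST-PART, O, LAST-PART
    return None

print(mea(set(), {"inside"}))  # ['walk-to-door', 'open-door', 'go-through']
```

The "go-through" operator is chosen first because it reduces the main difference, and its unmet preconditions become sub-goals solved by the other two operators, which is exactly the operator subgoaling described above.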

