# Comprehensive Learning Guide: Search Algorithms & Logic
## Detailed Execution Examples with Step-by-Step Tables
---
## Table of Contents
1. [Search Algorithm Execution Tables](#search-algorithm-execution-tables)
- [Sample Graph](#sample-graph-for-all-examples)
- [Breadth-First Search (BFS)](#1-breadth-first-search-bfs)
- [Depth-First Search (DFS)](#2-depth-first-search-dfs)
- [Best-First Search](#3-best-first-search-greedy)
- [Hill-Climbing Search](#4-hill-climbing-search)
- [A* Search](#5-a-search)
- [Branch and Bound](#6-branch-and-bound)
- [Comparison Results](#search-algorithm-comparison)
2. [Alpha-Beta Pruning](#alpha-beta-pruning-4-level-depth)
3. [Statement Verification with Inference Rules](#statement-verification-with-inference-rules)
4. [Resolution Proof](#resolution-comprehensive-example)
---
## Search Algorithm Execution Tables
### Sample Graph for All Examples
```
                S (Start, h=7)
               /             \
          A (h=6)          B (h=4)
          /    \          /     \
     C (h=5)   D (h=3)          E (h=2)
          \     |                  |
           \    |                  |
            F (h=1)                |
                 \                 |
                  \                |
                   G (Goal, h=0) --+
Edge Costs:
S→A: 1, S→B: 2
A→C: 2, A→D: 3
B→D: 1, B→E: 4
C→F: 2, D→F: 1
E→G: 2, F→G: 1
```
**Graph Notation:**
- Letters = Nodes
- h(n) = Heuristic estimate to goal
- Edge labels = Actual costs
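
If you want to experiment with these algorithms in code, the graph can be written down as a plain Python adjacency dictionary. This is only a sketch for the exercises below; the names `GRAPH` and `HEURISTIC` are ours, not part of the guide's notation.

```python
# Sample graph as an adjacency dict: GRAPH[node] maps each neighbor to the edge cost.
GRAPH = {
    'S': {'A': 1, 'B': 2},
    'A': {'C': 2, 'D': 3},
    'B': {'D': 1, 'E': 4},
    'C': {'F': 2},
    'D': {'F': 1},
    'E': {'G': 2},
    'F': {'G': 1},
    'G': {},
}

# Heuristic estimates h(n) to the goal G, copied from the figure.
HEURISTIC = {'S': 7, 'A': 6, 'B': 4, 'C': 5, 'D': 3, 'E': 2, 'F': 1, 'G': 0}
```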
---
## 1. Breadth-First Search (BFS)
**Strategy:** Explores graph level-by-level using a Queue (FIFO)
### BFS Execution Table
| Step | Current Node | Neighbors Found | Queue (FIFO) | Visited | Path to Current |
|------|--------------|-----------------|--------------|---------|-----------------|
| 0 | - | - | [S] | {} | - |
| 1 | S | A, B | [A, B] | {S} | S |
| 2 | A | C, D | [B, C, D] | {S, A} | S→A |
| 3 | B | D(skip), E | [C, D, E] | {S, A, B} | S→B |
| 4 | C | F | [D, E, F] | {S, A, B, C} | S→A→C |
| 5 | D | F(skip) | [E, F] | {S, A, B, C, D} | S→A→D |
| 6 | E | G | [F, G] | {S, A, B, C, D, E} | S→B→E |
| 7 | F | G(skip) | [G] | {S, A, B, C, D, E, F} | S→A→C→F |
| 8 | **G** | - | [] | {S, A, B, C, D, E, F, G} | **S→B→E→G** |
**Result:**
- Path found: S→B→E→G
- Total cost: 2 + 4 + 2 = **8**
- Nodes expanded: **8**
- Complete: ✓ Yes
- Optimal: ✗ No (BFS minimizes the number of edges, not the path cost)
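
A minimal Python sketch of this strategy, using the `GRAPH` dictionary defined in the Sample Graph section. As in the table, the goal test is applied when a node is dequeued.

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: expands nodes level by level with a FIFO queue."""
    queue = deque([[start]])          # queue of partial paths
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path               # first goal dequeued = fewest edges
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None

# bfs(GRAPH, 'S', 'G') -> ['S', 'B', 'E', 'G'] on the sample graph
```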
---
## 2. Depth-First Search (DFS)
**Strategy:** Explores as deep as possible using a Stack (LIFO)
### DFS Execution Table
| Step | Current Node | Neighbors Found | Stack (LIFO) | Visited | Path to Current |
|------|--------------|-----------------|--------------|---------|-----------------|
| 0 | - | - | [S] | {} | - |
| 1 | S | A, B | [A, B] | {S} | S |
| 2 | B | D, E | [A, D, E] | {S, B} | S→B |
| 3 | E | G | [A, D, G] | {S, B, E} | S→B→E |
| 4 | **G** | - | - | {S, B, E, G} | **S→B→E→G** |
**Result:**
- Path found: S→B→E→G
- Total cost: 2 + 4 + 2 = **8**
- Nodes expanded: **4**
- Complete: ✗ No (can get stuck in cycles)
- Optimal: ✗ No
**Note:** DFS happened to find the same path as BFS here, but in general it returns whichever path it reaches first, not necessarily the shortest or cheapest one.
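
A matching DFS sketch on the same `GRAPH` encoding; the only structural change from the BFS sketch is replacing the FIFO queue with a LIFO stack.

```python
def dfs(graph, start, goal):
    """Depth-first search: expands the most recently discovered node (LIFO stack)."""
    stack = [[start]]                 # stack of partial paths
    visited = set()
    while stack:
        path = stack.pop()
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor in graph[node]:
            if neighbor not in visited:
                stack.append(path + [neighbor])
    return None

# dfs(GRAPH, 'S', 'G') -> ['S', 'B', 'E', 'G'] on the sample graph
```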
---
## 3. Best-First Search (Greedy)
**Strategy:** Always expands node with lowest heuristic h(n) using Priority Queue
### Best-First Execution Table
| Step | Current Node | h(n) | Neighbors Found | Priority Queue (min h) | Visited | Path to Current |
|------|--------------|------|-----------------|------------------------|---------|-----------------|
| 0 | - | - | - | [S:7] | {} | - |
| 1 | S | 7 | A:6, B:4 | [B:4, A:6] | {S} | S |
| 2 | B | 4 | D:3, E:2 | [E:2, D:3, A:6] | {S, B} | S→B |
| 3 | E | 2 | G:0 | [G:0, D:3, A:6] | {S, B, E} | S→B→E |
| 4 | **G** | 0 | - | - | {S, B, E, G} | **S→B→E→G** |
**Result:**
- Path found: S→B→E→G
- Total cost: **8**
- Nodes expanded: **4**
- Complete: ✗ No
- Optimal: ✗ No
**Note:** Greedy search got lucky here: following the heuristic led straight to a goal, but there is no guarantee of optimality.
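
A greedy best-first sketch using a priority queue keyed on `HEURISTIC[n]` alone (both dictionaries come from the Sample Graph sketch).

```python
import heapq

def greedy_best_first(graph, heuristic, start, goal):
    """Expands the frontier node with the lowest heuristic h(n)."""
    frontier = [(heuristic[start], [start])]   # priority queue ordered by h(n)
    visited = set()
    while frontier:
        h, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor in graph[node]:
            if neighbor not in visited:
                heapq.heappush(frontier, (heuristic[neighbor], path + [neighbor]))
    return None

# greedy_best_first(GRAPH, HEURISTIC, 'S', 'G') -> ['S', 'B', 'E', 'G']
```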
---
## 4. Hill-Climbing Search
**Strategy:** Local search that only keeps current state, always moves to best neighbor
### Hill-Climbing Execution Table
| Step | Current State | h(Current) | Neighbors | h(Neighbors) | Best Neighbor | Decision |
|------|---------------|------------|-----------|--------------|---------------|----------|
| 0 | S | 7 | - | - | - | Start |
| 1 | S | 7 | A, B | 6, 4 | B (h=4) | Move to B |
| 2 | B | 4 | D, E | 3, 2 | E (h=2) | Move to E |
| 3 | E | 2 | G | 0 | G (h=0) | Move to G |
| 4 | **G** | 0 | - | - | - | **Goal!** |
**Result:**
- Path found: S→B→E→G
- Total cost: **8**
- States explored: **4**
- Memory: **O(1)** - only stores current state
- Complete: ✗ No (can get stuck at local maxima)
- Optimal: ✗ No
**Advantages:** Minimal memory usage
**Disadvantages:** Can get stuck at local maxima, plateaus, or ridges
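
A hill-climbing sketch on the same encoding. Note that here we descend the heuristic toward h = 0; classical hill-climbing maximizes an objective, which is the mirror-image formulation.

```python
def hill_climb(graph, heuristic, start):
    """Keeps only the current state; moves to the best-valued neighbor each step."""
    current = start
    path = [current]
    while True:
        neighbors = list(graph[current])
        if not neighbors:
            return path                       # dead end, or the goal (h = 0)
        best = min(neighbors, key=lambda n: heuristic[n])
        if heuristic[best] >= heuristic[current]:
            return path                       # no improving neighbor: stuck
        current = best
        path.append(current)

# hill_climb(GRAPH, HEURISTIC, 'S') -> ['S', 'B', 'E', 'G']
```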
---
## 5. A* Search
**Strategy:** Uses f(n) = g(n) + h(n) where g(n) = cost from start, h(n) = heuristic to goal
### A* Execution Table
| Step | Current | g(n) | h(n) | f(n)=g+h | Neighbors | Open List (f, node) | Closed | Path |
|------|---------|------|------|----------|-----------|---------------------|--------|------|
| 0 | - | - | - | - | - | [S:7] | {} | - |
| 1 | S | 0 | 7 | 7 | A, B | [B:6, A:7] | {S} | S |
| 2 | B | 2 | 4 | 6 | D, E | [D:6, A:7, E:8] | {S,B} | S→B |
| 3 | D | 3 | 3 | 6 | F | [F:5, A:7, E:8] | {S,B,D} | S→B→D |
| 4 | F | 4 | 1 | 5 | G | [G:5, A:7, E:8] | {S,B,D,F} | S→B→D→F |
| 5 | **G** | 5 | 0 | 5 | - | - | {S,B,D,F,G} | **S→B→D→F→G** |
**Detailed f-value calculations:**
- **S:** g=0, h=7, f=7
- **A (via S→A):** g=1, h=6, f=7
- **B (via S→B):** g=2, h=4, f=6 ✓ (expanded first)
- **D (via S→B→D):** g=3, h=3, f=6 ✓
- **E (via S→B→E):** g=6, h=2, f=8
- **F (via S→B→D→F):** g=4, h=1, f=5 ✓
- **G (via S→B→D→F→G):** g=5, h=0, f=5 ✓
**Result:**
- Path found: S→B→D→F→G
- Total cost: **5** (**OPTIMAL**)
- Nodes expanded: **5**
- Complete: ✓ Yes (if h is admissible)
- Optimal: ✓ Yes (if h is admissible)
**Key:** A* found the optimal path by balancing actual cost and heuristic estimate!
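
An A* sketch on the same encoding: the open list is keyed on f(n) = g(n) + h(n), and a `best_g` map avoids re-expanding nodes along worse paths.

```python
import heapq

def a_star(graph, heuristic, start, goal):
    """A*: orders the open list by f(n) = g(n) + h(n)."""
    open_list = [(heuristic[start], 0, [start])]   # (f, g, path)
    best_g = {start: 0}
    while open_list:
        f, g, path = heapq.heappop(open_list)
        node = path[-1]
        if node == goal:
            return path, g
        for neighbor, cost in graph[node].items():
            new_g = g + cost
            if new_g < best_g.get(neighbor, float('inf')):
                best_g[neighbor] = new_g
                new_f = new_g + heuristic[neighbor]
                heapq.heappush(open_list, (new_f, new_g, path + [neighbor]))
    return None, float('inf')

# a_star(GRAPH, HEURISTIC, 'S', 'G') -> (['S', 'B', 'D', 'F', 'G'], 5)
```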
---
## 6. Branch and Bound
**Strategy:** Systematically explores search space while pruning branches that exceed best bound
### Branch and Bound Execution Table
| Step | Current | Cost g(n) | Bound | Neighbors | Queue (cost order) | Best Solution | Action |
|------|---------|-----------|-------|-----------|-------------------|---------------|--------|
| 0 | - | - | ∞ | - | [S:0] | None | - |
| 1 | S | 0 | ∞ | A:1, B:2 | [A:1, B:2] | None | Expand |
| 2 | A | 1 | ∞ | C:3, D:4 | [B:2, C:3, D:4] | None | Expand |
| 3 | B | 2 | ∞ | D:3, E:6 | [C:3, D:3, D:4, E:6] | None | Expand |
| 4 | C | 3 | ∞ | F:5 | [D:3, D:4, F:5, E:6] | None | Expand |
| 5 | D | 3 | ∞ | F:4 | [D:4, F:4, F:5, E:6] | None | Update F |
| 6 | D | 4 | ∞ | F:5 | [F:4, F:5, E:6] | None | Prune F:5 (F already reached at cost 4) |
| 7 | F | 4 | ∞ | G:5 | [F:5, G:5, E:6] | **G:5** | Found! Bound=5 |
| 8 | F | 5 | 5 | - | [G:5, E:6] | G:5 | Prune (5≥5), not expanded |
| 9 | G | 5 | 5 | - | [E:6] | **G:5** | Optimal |
| 10 | E | 6 | 5 | - | [] | **G:5** | Prune (6>5), Done |
**Result:**
- Optimal path: S→B→D→F→G
- Total cost: **5** (**OPTIMAL**)
- Nodes expanded: **10**
- Complete: ✓ Yes
- Optimal: ✓ Yes
**Note:** Branch and Bound guarantees optimal solution by systematically pruning suboptimal branches
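
A branch-and-bound sketch on the same `GRAPH` encoding: cheapest-first expansion with pruning against the best solution cost found so far (no heuristic is used, matching the table).

```python
import heapq

def branch_and_bound(graph, start, goal):
    """Cheapest-first search, pruning any partial path that meets or exceeds the bound."""
    queue = [(0, [start])]                 # (path cost g, path)
    best_cost, best_path = float('inf'), None
    while queue:
        g, path = heapq.heappop(queue)
        if g >= best_cost:
            continue                       # prune: cannot beat current bound
        node = path[-1]
        if node == goal:
            best_cost, best_path = g, path # new (tighter) bound
            continue
        for neighbor, cost in graph[node].items():
            if neighbor not in path:       # avoid cycling on the same path
                heapq.heappush(queue, (g + cost, path + [neighbor]))
    return best_path, best_cost

# branch_and_bound(GRAPH, 'S', 'G') -> (['S', 'B', 'D', 'F', 'G'], 5)
```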
---
## Search Algorithm Comparison
### Results on Same Graph
| Algorithm | Path Found | Total Cost | Nodes Expanded | Optimal? | Memory |
|-----------|-----------|------------|----------------|----------|---------|
| BFS | S→B→E→G | 8 | 8 | ✗ | High |
| DFS | S→B→E→G | 8 | 4 | ✗ | Low |
| Best-First | S→B→E→G | 8 | 4 | ✗ | High |
| Hill-Climbing | S→B→E→G | 8 | 4 | ✗ | Minimal |
| A* | S→B→D→F→G | **5** | 5 | ✓ | High |
| Branch & Bound | S→B→D→F→G | **5** | 10 | ✓ | High |
**Key Insights:**
- **A*** and **Branch & Bound** found the optimal path (cost = 5)
- **The other strategies** (BFS, DFS, Best-First, Hill-Climbing) found a suboptimal path (cost = 8)
- **A*** is most efficient among optimal algorithms (5 nodes vs 10 nodes)
- **Hill-Climbing** uses minimal memory but no guarantee of finding solution
---
## Alpha-Beta Pruning (4-Level Depth)
### Game Tree Scenario
Two-player game:
- **MAX** player wants to maximize score
- **MIN** player wants to minimize score
- Numbers at leaf nodes = utility values for MAX player
### Complete Game Tree
```
                         MAX (Root - A)
              _________________|_________________
             /                                   \
         MIN (B)                              MIN (F)
      ______|______                        ______|______
     /      |      \                      /      |      \
 MAX(C)  MAX(D)  MAX(E)               MAX(G)  MAX(H)  MAX(I)
 / | \   / | \   / | \                / | \   / | \   / | \
3  5  2 8  1  4 6  7  9              2  3  5 1  8  6 4  3  7

Level:  0 = A,  1 = B and F,  2 = C through I,  3 = leaf utilities
```
---
### Alpha-Beta Execution Table
| Step | Node | Type | Depth | α | β | Children Explored | Value | Backed-up | Pruned? | Explanation |
|------|------|------|-------|---|---|-------------------|-------|-----------|---------|-------------|
| 1 | A | MAX | 0 | -∞ | +∞ | - | - | - | - | Start at root |
| 2 | B | MIN | 1 | -∞ | +∞ | - | - | - | - | Explore left child |
| 3 | C | MAX | 2 | -∞ | +∞ | [3,5,2] | 5 | 5 | - | Returns max(3,5,2)=5 |
| 4 | B | MIN | 1 | -∞ | 5 | [5] | 5 | - | - | Update β=5 |
| 5 | D | MAX | 2 | -∞ | 5 | [8] | 8 | 8 | **✓ Leaves 1,4 pruned** | 8 ≥ β(5), β-cutoff at D |
| 6 | B | MIN | 1 | -∞ | 5 | [5,8] | 5 | - | - | min(5,8)=5, β stays 5 |
| 7 | E | MAX | 2 | -∞ | 5 | [6] | 6 | 6 | **✓ Leaves 7,9 pruned** | 6 ≥ β(5), β-cutoff at E |
| 8 | B | MIN | 1 | -∞ | 5 | [5,8,6] | 5 | 5 | - | Returns min(5,8,6)=5 to A |
| 9 | A | MAX | 0 | 5 | +∞ | [5] | 5 | - | - | Update α=5 |
| 10 | F | MIN | 1 | 5 | +∞ | - | - | - | - | Explore right child |
| 11 | G | MAX | 2 | 5 | +∞ | [2,3,5] | 5 | 5 | - | Returns max(2,3,5)=5 |
| 12 | F | MIN | 1 | 5 | 5 | [5] | 5 | 5 | **✓ H, I pruned** | 5 ≤ α(5), α-cutoff at F |
| 13 | A | MAX | 0 | 5 | +∞ | [5,5] | 5 | **5** | - | **Final: root value = 5** |
---
### Visual Step-by-Step Execution
#### Phase 1: Left Subtree (Node B)
```
Steps 1-3: Evaluate Node C

          A (α=-∞, β=+∞)
          |
          B (α=-∞, β=+∞)
          |
          C
         /|\
        3 5 2

Result: C returns 5 (maximum)
        B updates β = 5
```
```
Steps 4-8: Evaluate Nodes D and E (first leaf each)

          A (α=-∞, β=+∞)
          |
          B (α=-∞, β=5)
         /    |    \
        5     D     E
            / | \  / | \
           8  ?  ? 6 ?  ?

Result: D's first leaf is 8; since 8 ≥ β(5), MIN will never prefer D,
        so D's remaining leaves (1, 4) are PRUNED (β-cutoff at D)
        E's first leaf is 6; since 6 ≥ β(5), E's remaining leaves (7, 9)
        are PRUNED (β-cutoff at E)
        B keeps min(5, 8, 6) = 5
```
```
After Phase 1 (Step 9):

          A (α=5, β=+∞)
         / \
        5   ?

B returns 5 to A
A updates α = 5
```
---
#### Phase 2: Right Subtree (Node F)
```
Steps 10-11: Evaluate Node G

          A (α=5, β=+∞)
           \
            F (α=5, β=+∞)
            |
            G
           /|\
          2 3 5

Result: G returns 5 (maximum)
        F updates β = 5 (now α = β = 5)
```
```
Step 12: α-cutoff at F

          A (α=5, β=+∞)
           \
            F (α=5, β=5)
           /|\
          5 H I

Result: F's value is already 5, and 5 ≤ α(5)
        MAX is already guaranteed 5 from the left branch, so exploring
        F further can never change the decision at the root
        Nodes H and I are PRUNED entirely (α-cutoff at F)
```
```
Final Result (Step 13):

          A = 5
         / \
        5   5

MAX's guaranteed value is 5; it plays toward the left branch (B)
```
---
### Pruning Summary Table
| Node | Leaves | Visited | Pruned | Pruning Type | Reason |
|------|--------|---------|--------|--------------|--------|
| C | 3 | 3 | 0 | - | All evaluated |
| D | 3 | 1 | 2 | β-cutoff | First leaf 8 ≥ β(5) |
| E | 3 | 1 | 2 | β-cutoff | First leaf 6 ≥ β(5) |
| G | 3 | 3 | 0 | - | All evaluated |
| H | 3 | 0 | 3 | α-cutoff | F's value 5 ≤ α(5) |
| I | 3 | 0 | 3 | α-cutoff | F's value 5 ≤ α(5) |
**Efficiency:**
- **Total leaf nodes:** 18
- **Leaves visited:** 8
- **Leaves pruned:** 10
- **Reduction:** ~56% fewer leaves explored
---
### Alpha-Beta Pruning Rules
| Situation | Condition | Action | Meaning |
|-----------|-----------|--------|---------|
| **α-cutoff** | At a MIN node, if value ≤ α | Stop exploring remaining children | MAX already has a better option elsewhere |
| **β-cutoff** | At a MAX node, if value ≥ β | Stop exploring remaining children | MIN already has a better option elsewhere |
**Key Insight:**
- **α** = Best value MAX can guarantee (lower bound for MAX)
- **β** = Best value MIN can guarantee (upper bound for MIN)
- When α ≥ β, remaining branches can't affect final decision
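
A compact sketch of minimax with alpha-beta pruning, with the example game tree encoded as nested lists of leaf utilities (the names `GAME_TREE` and `alphabeta` are ours). With left-to-right child ordering it returns the root value 5 and evaluates the same 8 leaves counted in the pruning summary above.

```python
import math

# The 4-level tree from the example, as nested lists of leaf utilities.
GAME_TREE = [[[3, 5, 2], [8, 1, 4], [6, 7, 9]],    # B: C, D, E
             [[2, 3, 5], [1, 8, 6], [4, 3, 7]]]    # F: G, H, I

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    """Minimax with alpha-beta pruning; leaves are plain numbers."""
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if value >= beta:          # beta-cutoff: the MIN ancestor will never allow this
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if value <= alpha:         # alpha-cutoff: the MAX ancestor already has better
                break
        return value

# alphabeta(GAME_TREE, maximizing=True) -> 5
```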
---
## Statement Verification with Inference Rules
### Inference Rules Reference
| Rule Name | Form | Notation | Example |
|-----------|------|----------|---------|
| **Modus Ponens (MP)** | P, P→Q ⊢ Q | If P and "P implies Q", then Q | Rain, Rain→Wet ⊢ Wet |
| **Modus Tollens (MT)** | ¬Q, P→Q ⊢ ¬P | If not Q and "P implies Q", then not P | ¬Wet, Rain→Wet ⊢ ¬Rain |
| **Hypothetical Syllogism (HS)** | P→Q, Q→R ⊢ P→R | Chain implications | A→B, B→C ⊢ A→C |
| **Disjunctive Syllogism (DS)** | P∨Q, ¬P ⊢ Q | If P or Q, and not P, then Q | Hot∨Cold, ¬Hot ⊢ Cold |
| **Conjunction (Conj)** | P, Q ⊢ P∧Q | Combine statements | Rain, Cold ⊢ Rain∧Cold |
| **Simplification (Simp)** | P∧Q ⊢ P | Extract from conjunction | Rain∧Cold ⊢ Rain |
| **Addition (Add)** | P ⊢ P∨Q | Add disjunction | Rain ⊢ Rain∨Snow |
| **Resolution (Res)** | P∨Q, ¬P∨R ⊢ Q∨R | Resolve complementary literals | A∨B, ¬A∨C ⊢ B∨C |
---
### Example 1: Criminal Investigation
**Knowledge Base (Clean Notation):**
**Propositional Variables:**
- P = "It rained last night"
- Q = "Ground is wet"
- R = "Footprints are visible"
- S = "Suspect was at scene"
- T = "Suspect is guilty"
**Rules:**
1. P → Q (If it rained, then ground is wet)
2. Q → R (If ground is wet, then footprints are visible)
3. R ∧ S → T (If footprints visible AND suspect at scene, then guilty)
4. ¬T → (¬S ∨ ¬R) (If not guilty, then suspect wasn't there OR no footprints)
**Initial Facts:** P (It rained), S (Suspect was at scene)
**Query:** Is T (Suspect guilty) true?
---
### Forward Chaining with Inference Rules
| Step | Known Facts | Inference Rule | Premises | Conclusion | New KB State | Explanation |
|------|-------------|----------------|----------|------------|--------------|-------------|
| 0 | {P, S} | - | - | - | Initial | Starting facts |
| 1 | {P, S, **Q**} | **Modus Ponens** | P, P→Q | Q | Ground is wet | Applied rule 1 |
| 2 | {P, S, Q, **R**} | **Modus Ponens** | Q, Q→R | R | Footprints visible | Applied rule 2 |
| 3 | {P, S, Q, R, **R∧S**} | **Conjunction** | R, S | R∧S | Both conditions met | Combined facts |
| 4 | {P, S, Q, R, R∧S, **T**} | **Modus Ponens** | R∧S, R∧S→T | T | **Suspect is guilty!** ✓ | Applied rule 3 |
**Proof Chain:**
```
P →[MP via rule 1]→ Q →[MP via rule 2]→ R
R + S →[Conjunction]→ R∧S →[MP via rule 3]→ T
```
**Answer:** T is **PROVEN TRUE** - Suspect is guilty!
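
A small forward-chaining sketch for the three definite-clause rules of this example; the premise-set/conclusion encoding in `RULES` and the function name are ours.

```python
# Rules are (premises, conclusion) pairs; variable names follow the example.
RULES = [({'P'}, 'Q'),            # Rule 1: P -> Q
         ({'Q'}, 'R'),            # Rule 2: Q -> R
         ({'R', 'S'}, 'T')]       # Rule 3: R AND S -> T

def forward_chain(facts, rules):
    """Repeatedly applies Modus Ponens until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)      # fire the rule
                changed = True
    return facts

# forward_chain({'P', 'S'}, RULES) -> {'P', 'S', 'Q', 'R', 'T'}  (T is proven)
```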
---
### Backward Chaining with Inference Rules
| Step | Goal | Rule to Use | Sub-goals Generated | Inference Rule | Status | Explanation |
|------|------|-------------|---------------------|----------------|---------|-------------|
| 1 | T | R∧S → T | R, S | Modus Ponens | Need both | To prove T, need R and S |
| 2 | S | - | - | - | ✓ Given fact | S is in KB |
| 3 | R | Q → R | Q | Modus Ponens | Need Q | To prove R, need Q |
| 4 | Q | P → Q | P | Modus Ponens | Need P | To prove Q, need P |
| 5 | P | - | - | - | ✓ Given fact | P is in KB |
| 6 | Q | P → Q | P (proven) | **Modus Ponens** | ✓ Proven | P true, so Q true |
| 7 | R | Q → R | Q (proven) | **Modus Ponens** | ✓ Proven | Q true, so R true |
| 8 | R∧S | - | R, S (both proven) | **Conjunction** | ✓ Proven | Both R and S true |
| 9 | **T** | R∧S → T | R∧S (proven) | **Modus Ponens** | **✓ PROVEN** | Goal achieved! |
**Proof Chain (backward):**
```
T ←[needs R∧S]─┬─ R ←[needs Q]← Q ←[needs P]← P ✓ (fact)
               └─ S ✓ (fact)
```
**Answer:** T is **PROVEN TRUE** via backward chaining!
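
A corresponding backward-chaining sketch (same hypothetical `RULES` encoding as the forward-chaining sketch): to prove a goal, find a rule that concludes it and recursively prove its premises.

```python
RULES = [({'P'}, 'Q'),            # Rule 1: P -> Q
         ({'Q'}, 'R'),            # Rule 2: Q -> R
         ({'R', 'S'}, 'T')]       # Rule 3: R AND S -> T

def backward_chain(goal, facts, rules, seen=None):
    """Tries to prove `goal` by recursively proving the premises of a matching rule."""
    seen = set() if seen is None else seen
    if goal in facts:
        return True                        # goal is a known fact
    if goal in seen:
        return False                       # avoid looping on the same subgoal
    seen = seen | {goal}
    for premises, conclusion in rules:
        if conclusion == goal and all(
                backward_chain(p, facts, rules, seen) for p in premises):
            return True
    return False

# backward_chain('T', {'P', 'S'}, RULES) -> True
```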
---
### Example 2: Extended with Multiple Inference Rules
**Extended Knowledge Base:**
**Additional Variables:**
- U = "Alarm rings"
- V = "Security called"
- W = "Police arrive"
**Additional Rules:**
5. (T ∨ U) → V (If guilty OR alarm rings, then security called)
6. V → W (If security called, then police arrive)
7. ¬W (Police did NOT arrive) - **New fact**
**Query:** What can we conclude? Check for contradictions.
---
### Forward Chaining - Extended
| Step | Known Facts | Inference Rule | Premises | Conclusion | Rule Used | Explanation |
|------|-------------|----------------|----------|------------|-----------|-------------|
| 0 | {P, S, ¬W} | - | - | - | - | Initial + new fact |
| 1 | {P, S, ¬W, Q} | **Modus Ponens** | P, P→Q | Q | Rule 1 | From rain |
| 2 | {P, S, ¬W, Q, R} | **Modus Ponens** | Q, Q→R | R | Rule 2 | From wet ground |
| 3 | {..., R∧S} | **Conjunction** | R, S | R∧S | - | Combine facts |
| 4 | {..., T} | **Modus Ponens** | R∧S, R∧S→T | T | Rule 3 | Guilty proven |
| 5 | {..., T∨U} | **Addition** | T | T∨U | - | Add disjunction |
| 6 | {..., V} | **Modus Ponens** | T∨U, (T∨U)→V | V | Rule 5 | Security called |
| 7 | {..., W} | **Modus Ponens** | V, V→W | W | Rule 6 | Police arrive |
| 8 | **CONTRADICTION** | - | W, ¬W | **⊥** | - | **Inconsistent KB!** |
**Result:** The knowledge base is **INCONSISTENT**. We proved both W and ¬W.
**Conclusion:** Either the fact ¬W is incorrect, or one of the rules is flawed.
---
### Example 3: Using Modus Tollens
**Scenario:** Working backward from ¬W (police didn't arrive)
| Step | Known Facts | Inference Rule | Premises | Conclusion | Explanation |
|------|-------------|----------------|----------|------------|-------------|
| 1 | {¬W} | - | - | - | Police didn't arrive (fact) |
| 2 | {¬W, ¬V} | **Modus Tollens** | ¬W, V→W | ¬V | Security NOT called |
| 3 | {¬W, ¬V, ¬(T∨U)} | **Modus Tollens** | ¬V, (T∨U)→V | ¬(T∨U) | Neither guilty nor alarm |
| 4 | {¬W, ¬V, ¬T∧¬U} | **De Morgan's Law** | ¬(T∨U) | ¬T∧¬U | Convert to conjunction |
| 5 | {¬W, ¬V, ¬T, ¬U} | **Simplification** | ¬T∧¬U | ¬T, ¬U | Extract both parts |
**Result:** If police didn't arrive, then:
- Security was NOT called (¬V)
- Suspect is NOT guilty (¬T)
- Alarm did NOT ring (¬U)
**Note:** This contradicts our earlier proof that T is true!
---
### Example 4: Disjunctive Syllogism
**Scenario:**
- **Fact 1:** T ∨ U (Either guilty OR alarm rang)
- **Fact 2:** ¬T (NOT guilty)
- **Query:** What can we conclude about U?
| Step | Known Facts | Inference Rule | Premises | Conclusion | Explanation |
|------|-------------|----------------|----------|------------|-------------|
| 1 | {T∨U, ¬T} | **Disjunctive Syllogism** | T∨U, ¬T | U | Alarm must have rung |
**Result:** U is true (Alarm rang)
---
### Example 5: Hypothetical Syllogism (Chain Rules)
**Chain of Implications:**
1. P → Q (Rain → Wet ground)
2. Q → R (Wet ground → Footprints)
3. R → S (Footprints → Evidence)
| Step | Rules Available | Inference Rule | New Rule Created | Explanation |
|------|----------------|----------------|------------------|-------------|
| 1 | P→Q, Q→R | **Hypothetical Syllogism** | P→R | Rain directly implies footprints |
| 2 | P→R, R→S | **Hypothetical Syllogism** | P→S | Rain directly implies evidence |
**Result:** P → S (If it rained, then there is evidence)
**Application:**
- **Given:** P (It rained)
- **Apply Modus Ponens:** P, P→S ⊢ S
- **Conclusion:** S (There is evidence)
---
### Summary: Inference Rules Usage
| Inference Rule | Forward Chaining | Backward Chaining | Most Common Use |
|----------------|------------------|-------------------|-----------------|
| **Modus Ponens** | ✓ Primary | ✓ Primary | Deriving new facts |
| **Modus Tollens** | ✓ Sometimes | ✓ Common | Backward reasoning |
| **Conjunction** | ✓ Combine facts | ✓ Combine subgoals | Building compound statements |
| **Simplification** | ✓ Extract info | ✓ Break down goals | Separating conjunctions |
| **Disjunctive Syllogism** | ✓ Eliminate options | ✗ Rare | Choosing between alternatives |
| **Hypothetical Syllogism** | ✓ Build chains | ✓ Link goals | Creating rule shortcuts |
| **Addition** | ✓ Add options | ✗ Rare | Expanding possibilities |
| **Resolution** | ✗ Not used | ✗ Not used | Proof by contradiction only |
---
## Resolution (Comprehensive Example)
### Problem: Medical Diagnosis
**Domain Statements:**
1. If patient has fever, then patient has infection or allergy
2. If patient has infection, then patient needs antibiotics
3. If patient needs antibiotics, then patient visits doctor
4. Patient has fever
5. Patient does not have allergy
**Query:** Does patient visit doctor?
---
### Step 1: Define Propositional Variables
| Variable | Meaning |
|----------|---------|
| F | Patient has fever |
| I | Patient has infection |
| A | Patient has allergy |
| B | Patient needs antibiotics |
| D | Patient visits doctor |
---
### Step 2: Convert to Conjunctive Normal Form (CNF)
| # | Statement | Logic Form | CNF Form | Clause |
|---|-----------|-----------|----------|--------|
| 1 | Fever → Infection ∨ Allergy | F → (I ∨ A) | ¬F ∨ I ∨ A | **C1** |
| 2 | Infection → Antibiotics | I → B | ¬I ∨ B | **C2** |
| 3 | Antibiotics → Doctor | B → D | ¬B ∨ D | **C3** |
| 4 | Has fever | F | F | **C4** |
| 5 | No allergy | ¬A | ¬A | **C5** |
| 6 | **Query (negated)** | ¬D | ¬D | **C6** |
**Note:** For resolution proof by contradiction, we **negate the query** and try to derive a contradiction (empty clause ⊥)
---
### Step 3: Resolution Proof Table
| Step | Clause 1 | Clause 2 | Unification | Resolve On | Resolvent | Inference | New Clause |
|------|----------|----------|-------------|------------|-----------|-----------|------------|
| 1 | C1: ¬F ∨ I ∨ A | C4: F | - | F | I ∨ A | **Resolution** | **C7** |
| 2 | C7: I ∨ A | C5: ¬A | - | A | I | **Resolution** | **C8** |
| 3 | C8: I | C2: ¬I ∨ B | - | I | B | **Resolution** | **C9** |
| 4 | C9: B | C3: ¬B ∨ D | - | B | D | **Resolution** | **C10** |
| 5 | C10: D | C6: ¬D | - | D | **⊥** | **Resolution** | **PROOF** |
---
### Step 4: Detailed Resolution Explanation
#### Resolution Step 1: C1 + C4 → C7
```
C1: ¬F ∨ I ∨ A   (NOT Fever OR Infection OR Allergy)
C4: F            (Fever is TRUE)
───────────────────────────────────────────────────────
C7: I ∨ A        (Infection OR Allergy)

Explanation: Since F is true, ¬F is false, so we eliminate ¬F
```
#### Resolution Step 2: C7 + C5 → C8
```
C7: I ∨ A        (Infection OR Allergy)
C5: ¬A           (NOT Allergy)
───────────────────────────────────────────────────────
C8: I            (Infection)

Explanation: Since ¬A is true, A is false, so we eliminate A
```
#### Resolution Step 3: C8 + C2 → C9
```
C8: I            (Infection is TRUE)
C2: ¬I ∨ B       (NOT Infection OR Antibiotics)
───────────────────────────────────────────────────────
C9: B            (Antibiotics)

Explanation: Since I is true, ¬I is false, so we eliminate ¬I
```
#### Resolution Step 4: C9 + C3 → C10
```
C9: B            (Antibiotics is TRUE)
C3: ¬B ∨ D       (NOT Antibiotics OR Doctor visit)
───────────────────────────────────────────────────────
C10: D           (Doctor visit)

Explanation: Since B is true, ¬B is false, so we eliminate ¬B
```
#### Resolution Step 5: C10 + C6 → ⊥
```
C10: D           (Doctor visit is TRUE)
C6: ¬D           (Doctor visit is FALSE - negated query)
───────────────────────────────────────────────────────
⊥                (Empty clause - CONTRADICTION!)

Explanation: We have both D and ¬D, which is impossible
```
---
### Step 5: Resolution Derivation Tree
```
                     ⊥  (Empty Clause - Contradiction!)
                    /  \
                   D    ¬D
                [C10]   [C6: Negated Query]
                 /  \
                B    ¬B ∨ D
              [C9]   [C3]
               /  \
              I    ¬I ∨ B
            [C8]   [C2]
             /  \
        I ∨ A    ¬A
         [C7]    [C5]
           /  \
 ¬F ∨ I ∨ A    F
       [C1]    [C4]
```
---
### Step 6: Logical Reasoning Chain
**Forward reasoning (without negated query):**
1. **Given:** F (Fever) and ¬A (No allergy)
2. **From F and Rule 1:** F → (I ∨ A), therefore I ∨ A
3. **From (I ∨ A) and ¬A:** Using DS, therefore I
4. **From I and Rule 2:** I → B, therefore B
5. **From B and Rule 3:** B → D, therefore D
**Conclusion:** D is TRUE (Patient visits doctor)
---
### Resolution Summary
| Clause | Content | Type | Derived From | Step |
|--------|---------|------|--------------|------|
| C1 | ¬F ∨ I ∨ A | Rule (given) | Statement 1 | - |
| C2 | ¬I ∨ B | Rule (given) | Statement 2 | - |
| C3 | ¬B ∨ D | Rule (given) | Statement 3 | - |
| C4 | F | Fact (given) | Statement 4 | - |
| C5 | ¬A | Fact (given) | Statement 5 | - |
| C6 | ¬D | Negated Query | Query negation | - |
| C7 | I ∨ A | Intermediate | C1 + C4 | 1 |
| C8 | I | Intermediate | C7 + C5 | 2 |
| C9 | B | Intermediate | C8 + C2 | 3 |
| C10 | D | Intermediate | C9 + C3 | 4 |
| ⊥ | Empty | **Proof** | C10 + C6 | 5 |
---
### Final Answer
**Question:** Does patient visit doctor (D)?
**Answer:** **YES**
**Proof Method:** Resolution refutation
- Assumed ¬D (patient does NOT visit doctor)
- Derived contradiction (empty clause ⊥)
- Therefore, D must be TRUE
**Reasoning Path:**
```
Fever → Infection (since no allergy)
Infection → Antibiotics
Antibiotics → Doctor visit
∴ Patient visits doctor
```
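
A propositional resolution-refutation sketch for the six clauses above. Each clause is a set of literal strings with `~` marking negation; the names (`CLAUSES`, `resolve`, `resolution_refutation`) are ours, and the loop simply saturates until the empty clause appears or nothing new can be derived.

```python
from itertools import combinations

# Clauses from Step 2, as sets of literals; "~X" means NOT X.
CLAUSES = [{'~F', 'I', 'A'},   # C1
           {'~I', 'B'},        # C2
           {'~B', 'D'},        # C3
           {'F'},              # C4
           {'~A'},             # C5
           {'~D'}]             # C6: negated query

def negate(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolve(c1, c2):
    """Yields every resolvent of the two clauses."""
    for lit in c1:
        if negate(lit) in c2:
            yield frozenset((c1 - {lit}) | (c2 - {negate(lit)}))

def resolution_refutation(clauses):
    """Returns True if the empty clause can be derived (query proven)."""
    clauses = {frozenset(c) for c in clauses}
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for resolvent in resolve(c1, c2):
                if not resolvent:
                    return True            # empty clause: contradiction found
                new.add(resolvent)
        if new <= clauses:
            return False                   # nothing new derivable: query not proven
        clauses |= new

# resolution_refutation(CLAUSES) -> True, so D (doctor visit) is entailed
```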
---
## Key Concepts Summary
### Search Algorithms
| Algorithm | Best For | Guarantee |
|-----------|---------|-----------|
| **BFS** | Shortest path (unweighted) | Complete, Optimal (unweighted) |
| **DFS** | Memory-limited, any solution | Not complete, Not optimal |
| **Best-First** | Quick approximation | Not complete, Not optimal |
| **Hill-Climbing** | Local optimization, minimal memory | Not complete, Not optimal |
| **A*** | Optimal with good heuristic | Complete*, Optimal* |
| **Branch & Bound** | Guaranteed optimal, systematic | Complete, Optimal |
*If heuristic is admissible (never overestimates)
---
### Alpha-Beta Pruning
- **Purpose:** Optimize minimax search in game trees
- **α (alpha):** Best value MAX can guarantee (lower bound)
- **β (beta):** Best value MIN can guarantee (upper bound)
- **Pruning:** When α ≥ β, remaining branches won't affect decision
- **Efficiency:** Can reduce search from O(b^d) to O(b^(d/2)) with perfect ordering
---
### Inference Rules
**Key Rules for Proofs:**
1. **Modus Ponens:** Most common in forward reasoning
2. **Modus Tollens:** Most common in backward reasoning
3. **Resolution:** Used in proof by contradiction (CNF required)
4. **Others:** Support rules for building and manipulating logic
---
### Resolution Method
**Steps:**
1. Convert all statements to CNF
2. Negate the query
3. Apply resolution until:
- Empty clause (⊥) found → Query is TRUE
- No more resolutions possible → Query unproven
**Key:** Resolution is **refutation-complete**: if the query follows from the KB, resolving the KB together with the negated query will eventually derive the empty clause.
---
## Practice Recommendations
### For Search Algorithms:
1. Implement each algorithm on different graphs
2. Compare performance on same problem
3. Experiment with different heuristics for A*
4. Try graphs with cycles, infinite paths
### For Alpha-Beta:
1. Work through trees manually
2. Try different node orderings
3. Count pruned nodes
4. Implement in code for games (Tic-Tac-Toe, Chess)
### For Logic:
1. Convert complex statements to CNF
2. Practice both forward and backward chaining
3. Do resolution proofs by hand
4. Identify which inference rule applies when
---
## End of Guide
This guide provides detailed step-by-step examples for:
- ✓ 6 Search algorithms with execution tables
- ✓ Alpha-Beta pruning with 4-level tree
- ✓ Statement verification with 8 inference rules
- ✓ Resolution proof with complete CNF conversion
**Next Steps:**
1. Work through examples with different inputs
2. Create your own problems
3. Implement algorithms in code
4. Apply to real-world scenarios
---
**Document Version:** 1.0
**Last Updated:** 2024