Chapter 2: Concept Learning and the General-to-Specific Ordering
Candidate Elimination Algorithm Issues
- Will it converge to the correct hypothesis? Yes, provided that (1)
the training examples contain no errors and (2) the target concept
can be represented as a conjunction of attribute constraints.
- If the learner can request a specific training example, which
one should it select?
- How can a partially learned concept be used?
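The algorithm behind these questions can be sketched as follows. This is a minimal illustration, not a full implementation: hypotheses are tuples in which '?' matches any value and '0' matches nothing, S is kept as a single most-specific hypothesis (which suffices for conjunctive hypothesis spaces), and the attribute domains and EnjoySport-style examples at the bottom are illustrative.

```python
def matches(h, x):
    """True if hypothesis h covers instance x ('?' matches anything)."""
    return all(a == '?' or a == b for a, b in zip(h, x))

def more_general_or_equal(h, s):
    """True if h covers everything s covers ('0' in s covers nothing)."""
    return all(a == '?' or b == '0' or a == b for a, b in zip(h, s))

def candidate_elimination(examples, domains):
    n = len(domains)
    s = tuple('0' for _ in range(n))    # most specific hypothesis
    G = [tuple('?' for _ in range(n))]  # most general boundary
    for x, positive in examples:
        if positive:
            # Drop general hypotheses that reject x; minimally
            # generalize s so that it covers x.
            G = [g for g in G if matches(g, x)]
            s = tuple(xv if sv == '0' else (sv if sv == xv else '?')
                      for sv, xv in zip(s, x))
        else:
            # Replace each general hypothesis that covers x with its
            # minimal specializations that exclude x and remain at
            # least as general as s.
            new_G = []
            for g in G:
                if not matches(g, x):
                    new_G.append(g)
                    continue
                for i, gv in enumerate(g):
                    if gv == '?':
                        for v in domains[i]:
                            if v != x[i]:
                                h = g[:i] + (v,) + g[i + 1:]
                                if more_general_or_equal(h, s):
                                    new_G.append(h)
            G = new_G
    return s, G

# Illustrative run on the EnjoySport training data.
domains = [('Sunny', 'Cloudy', 'Rainy'), ('Warm', 'Cold'),
           ('Normal', 'High'), ('Strong', 'Weak'),
           ('Warm', 'Cool'), ('Same', 'Change')]
examples = [
    (('Sunny', 'Warm', 'Normal', 'Strong', 'Warm', 'Same'), True),
    (('Sunny', 'Warm', 'High', 'Strong', 'Warm', 'Same'), True),
    (('Rainy', 'Cold', 'High', 'Strong', 'Warm', 'Change'), False),
    (('Sunny', 'Warm', 'High', 'Strong', 'Cool', 'Change'), True),
]
s, G = candidate_elimination(examples, domains)
```

Any hypothesis between s and a member of G is consistent with the data; a partially learned concept can classify a new instance only when all such hypotheses agree on it.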
Inductive Bias
- Definition: Consider a concept learning algorithm L for the set of
instances X. Let c be an arbitrary concept defined over X, and let
Dc = {<x, c(x)>} be an arbitrary set of training examples of c. Let
L(xi, Dc) denote the classification assigned to the instance xi by L
after training on the data Dc. The inductive bias of L is any minimal
set of assertions B such that for any target concept c and
corresponding training examples Dc:

    (∀ xi ∈ X) [ L(xi, Dc) follows deductively from (B ∧ Dc ∧ xi) ]
- Thus, one advantage of an inductive bias is that it gives the
learner a rational basis for classifying unseen instances.
- What is another advantage of bias?
- What is one disadvantage of bias?
- What is the inductive bias of the candidate elimination algorithm?
Answer: the assumption that the target concept c is contained in the
hypothesis space, i.e., that c can be represented as a conjunction of
attribute constraints.
- What is meant by a weak bias versus a strong bias?
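The advantage of a bias can be made concrete by contrasting an unbiased rote learner with a learner that assumes the conjunctive bias above. The attribute values below are made up for illustration; the point is that the rote learner can say nothing about an unseen instance, while the biased learner classifies it.

```python
def rote_classify(memory, x):
    # No bias: only instances seen verbatim in training are classified.
    return memory.get(x)  # None means "cannot classify"

def conjunctive_fit(positives):
    # Conjunctive bias: keep the most specific conjunction of
    # attribute values consistent with all positive examples.
    s = positives[0]
    for x in positives[1:]:
        s = tuple(a if a == b else '?' for a, b in zip(s, x))
    return s

def conjunctive_classify(s, x):
    return all(a == '?' or a == b for a, b in zip(s, x))

# Illustrative data: two positive examples and one unseen instance.
positives = [('Sunny', 'Warm', 'Normal'), ('Sunny', 'Warm', 'High')]
memory = {x: True for x in positives}
unseen = ('Sunny', 'Warm', 'Low')

rote = rote_classify(memory, unseen)            # None: no basis
s = conjunctive_fit(positives)                  # ('Sunny', 'Warm', '?')
biased = conjunctive_classify(s, unseen)        # True: bias generalizes
```

In these terms, a weaker bias (closer to the rote learner) commits to less and so generalizes less; a stronger bias commits to more and classifies more unseen instances, at the risk of being wrong when the assumption fails.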
Sample Exercise
Work exercise 2.4 on page 48.