What is interesting is that if you are learning only from positive examples, you need to start with a pretty strong bias in order to avoid over-generalizing.
Consider, for example, that your most general grammar hypothesis allows both languages that constrain word order (e.g. English) and those that don't (e.g. Hindi, Spanish -- where you can, in essence, say "Tom Mary hit" -- while in English you have to say "Tom hit Mary" or "Mary hit Tom"). If I only give you positive examples of usage in English, you would not know that English doesn't allow sentences like "Tom Mary hit". In order to learn that, you will need to bias your learner by saying that word-order dependence and word-order independence are mutually exclusive (so you won't over-generalize).
Of course, this is not just idle speculation: our current understanding is that children come into this world with a universal grammar that has these kinds of constraints embedded, which is why they are able to learn grammar from mostly positive examples.
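The word-order example can be sketched as a toy version-space check. This is a hypothetical illustration, not anything from the thread: each "grammar" is just the set of three-word sentences it allows, and the bias amounts to preferring the most specific hypothesis consistent with the positive data.

```python
from itertools import permutations

# Toy hypothesis space (hypothetical, for illustration only):
# a "grammar" is the set of sentences it allows over three words.
words = ("Tom", "hit", "Mary")

fixed_order = {("Tom", "hit", "Mary"), ("Mary", "hit", "Tom")}  # English-like
free_order = set(permutations(words))                           # Hindi-like: any order

# Positive-only English data:
positives = [("Tom", "hit", "Mary")]

# Both hypotheses are consistent with the positive data, so without a
# bias the learner cannot rule out the free-order grammar, which
# over-generalizes and accepts "Tom Mary hit".
consistent = [h for h in (fixed_order, free_order)
              if all(s in h for s in positives)]
print(("Tom", "Mary", "hit") in free_order)      # True: over-generalization

# The mutual-exclusivity bias amounts to picking the most specific
# consistent hypothesis, which correctly rejects "Tom Mary hit".
biased_choice = min(consistent, key=len)
print(("Tom", "Mary", "hit") in biased_choice)   # False
```

With only positive data, every superset of the examples stays consistent; the bias is what licenses ruling the larger language out.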
On Mon, Dec 15, 2008 at 4:13 PM, Shruti Gaur <email@example.com> wrote:
From the version space idea you explained, it seems the more positive examples we have, the more we can generalize (whereas negative examples break the current hypothesis into more specific ones, which we can then evaluate and prune the false ones), which I guess is true in human learning as well.
This made the idea of generating grammar from positive examples clearer, as grammar is also a kind of generalization of all the syntactically valid sentences we can make; e.g. (subject verb object) is the grammar, or generalization, where each constituent can take different values.
Please correct me if I am wrong
Thanks & Regards
Dept of Computer Science & Engineering
Arizona State University