Drifting the plan a little further:
So a knowledge base (KB) of the Web and Wikipedia (as a very good information source) would be maintained on a central server. Users could browse the KB as tree-like, self-expanding texts, with the sources of each piece of information offered for navigation. The KB would be presented in a readable way, summing up all the knowledge collected so far, and when a user wants to visit a source site, she/he can navigate to it. Like a Google search, but more intelligent.
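To make the tree idea a bit more concrete, here is a minimal sketch (in TypeScript) of what one KB entry could look like. The field names and the /kb/node endpoint are invented for illustration, not an existing API:

```typescript
// One node in the self-expanding KB tree: a readable summary plus
// links back to the sources it was collected from.
interface KbNode {
  id: string;        // stable identifier, e.g. a hash of the topic
  title: string;     // heading shown in the tree view
  summary: string;   // digest of the knowledge collected so far
  sources: string[]; // URLs the summary was extracted from
  children: string[]; // ids of sub-topics, loaded lazily on expand
}

// Expanding a node fetches its children from the central server
// (hypothetical endpoint).
async function expandNode(id: string): Promise<KbNode[]> {
  const res = await fetch(`/kb/node/${encodeURIComponent(id)}/children`);
  if (!res.ok) throw new Error(`KB server returned ${res.status}`);
  return res.json() as Promise<KbNode[]>;
}
```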
So, how to obtain the KB? While users are browsing the web, the KB is updated with new information from the sites they visit. This could be done with an iframe inside the HTML page where sites would be shown. Around the iframe there would be navigation buttons, bookmark tabs, and KB extracts for navigating the web. The primary source would be Wikipedia with its outgoing links. If a user can't find what she/he is searching for, she/he can open any standard search engine inside the iframe, and the same NLP parsing and indexing of visited sites kicks in again.
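A rough sketch of that browsing shell, again with an invented /kb/index endpoint: the page is loaded into the iframe, and its URL is reported to the server so the parsing and indexing can happen there.

```typescript
// The visited page lives in an iframe; every navigation is reported
// to the server for NLP parsing/indexing (hypothetical endpoint).
const frame = document.getElementById("browser") as HTMLIFrameElement;

function visit(url: string): void {
  frame.src = url; // show the site inside the iframe
  void fetch("/kb/index", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ url, visitedAt: Date.now() }),
  });
}

// Navigation buttons and KB extracts around the iframe all funnel
// through visit(), so every browsed page feeds the knowledge base.
document.querySelectorAll<HTMLAnchorElement>("a.kb-link").forEach((a) =>
  a.addEventListener("click", (e) => {
    e.preventDefault();
    visit(a.href);
  })
);
```

Two caveats with this approach: the same-origin policy prevents scripts from reading the content of a cross-origin page inside the iframe (which is why only the URL is posted here, leaving the text extraction to the server), and some sites refuse to be framed at all via X-Frame-Options.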
And now comes the interesting part: users would choose where to invest their processor time through web workers (the multithreading mechanism of HTML scripting). For example, a user says: I want to dedicate one thread of my processor to the discovery of new drugs for curing cancer. The machine then induces, abduces, and deduces previously unknown knowledge from the facts already stored in the KB. When it finds an interesting discovery, it alerts the user, and the new knowledge extension becomes part of the shared KB. I know where I want to invest my processor time: finding artificial food (I guess that's a kind of chemistry).
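Here is roughly how the web worker part could look. Everything named below (reasoner.js, the /kb/facts and /kb/contribute endpoints, the triple format) is made up for illustration, and deduce() is only the simplest possible stand-in for real induction/abduction/deduction:

```typescript
// main.ts — runs on the page: dedicate one background thread to a topic.
const worker = new Worker("reasoner.js");
worker.postMessage({ topic: "cancer drug discovery" });
worker.onmessage = (e: MessageEvent<{ statement: string }>) => {
  alert(`Possible discovery: ${e.data.statement}`); // alert the user
  void fetch("/kb/contribute", {                    // share it in the KB
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(e.data),
  });
};
```

```typescript
/// <reference lib="webworker" />
// reasoner.ts (compiled to reasoner.js) — runs off the main thread.
// Facts are assumed to be subject-relation-object triples. As a tiny
// placeholder for real reasoning, this forward-chains transitive
// relations: from (A r B) and (B r C) it derives (A r C).
type Fact = { s: string; r: string; o: string };

function deduce(facts: Fact[]): Fact[] {
  const known = new Set(facts.map((f) => `${f.s}|${f.r}|${f.o}`));
  const derived: Fact[] = [];
  for (const a of facts) {
    for (const b of facts) {
      if (a.r === b.r && a.o === b.s) {
        const key = `${a.s}|${a.r}|${b.o}`;
        if (!known.has(key)) {
          known.add(key);
          derived.push({ s: a.s, r: a.r, o: b.o });
        }
      }
    }
  }
  return derived;
}

self.onmessage = async (e: MessageEvent<{ topic: string }>) => {
  // Pull the facts already collected for the chosen topic (assumed endpoint).
  const res = await fetch(`/kb/facts?topic=${encodeURIComponent(e.data.topic)}`);
  const facts: Fact[] = await res.json();
  for (const f of deduce(facts)) {
    self.postMessage({ statement: `${f.s} ${f.r} ${f.o}` });
  }
};
```

Since the worker runs on its own thread, the heavy reasoning never blocks the browsing UI around the iframe.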
Yeah, that could be a life: while you are playing around on the web, your computer thinks and improves the world of science. If this works out, I wouldn't need a behavior algorithm for human-like AI, because all I need is a subset of AI for advancing science. Anyway, I'm kind of afraid of autonomous machines; it would be a great responsibility to build one.
A lot of work to do in the future, and I like the current plan.