SEO modifications at Allan's suggestion take2, a=chris

Chris Pollett [2011-01-03]
Filename: en-US/pages/home.thtml
diff --git a/en-US/pages/home.thtml b/en-US/pages/home.thtml
new file mode 100755
index 0000000..d798529
--- /dev/null
+++ b/en-US/pages/home.thtml
@@ -0,0 +1,33 @@
+<h1>Open Source Search Engine Software!</h1>
+<p>SeekQuarry is the parent site for <a href="http://www.yioop.com/">Yioop!</a>.
+Yioop! is a <a href="http://gplv3.fsf.org/">GPLv3</a>, open source, PHP search
+engine. Yioop! can be configured either as a general-purpose search
+engine for the whole web or to provide search results for a set of
+URLs or domains.
+</p>
+<h2>Goals</h2>
+<p>Yioop! was designed with the following goals in mind:</p>
+<ul>
+<li><b>To lower the barrier of entry for people wanting to obtain personal
+crawls of the web.</b> At present, Yioop! requires only a web server such as
+Apache and command-line access to a default build of PHP 5.3 or better.
+Configuration can be done through a GUI.</li>
+<li><b>To allow for distributed crawling of the web.</b> To get a snapshot of
+many web pages quickly, it is useful to have more than one machine crawling
+the web. If you have several machines at home, simply install the software on
+each machine you would like to use in a web crawl. In the configuration
+interface, give the URL of the machine that will serve search results. Then
+start the queue server on that machine and a fetcher on each of the other
+machines (a sketch of these commands appears after the diff).</li>
+<li><b>To be reasonably fast and online.</b> The Yioop engine is "online" in
+the sense that it builds its word index and document ranking as it crawls,
+rather than ranking in a separate step. The point is to keep the processing
+load on each machine low, so you can still use your machines for what you
+bought them for. Nevertheless, it is reasonably fast: four Lenovo Q100
+fetchers and a 2006 Mac mini queue server can crawl and index a million pages
+every couple of days.</li>
+<li><b>To make it easy to archive crawls.</b> Crawls are stored in timestamped
+folders, which can be moved around, zipped, and so on (see the second sketch
+after the diff). Through the admin interface, you can select which of the
+crawls in a crawl folder to serve search results from.</li>
+</ul>
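
The distributed setup in the second goal above can be sketched concretely. This is a minimal sketch only: the script names bin/queue_server.php and bin/fetcher.php are taken from the Yioop! documentation and may differ in your version, and /path/to/yioop stands for wherever you installed the software.

    # On the machine chosen in the configuration interface to serve
    # search results, start the queue server in terminal mode:
    cd /path/to/yioop
    php bin/queue_server.php terminal

    # On each of the other machines taking part in the crawl,
    # start a fetcher:
    cd /path/to/yioop
    php bin/fetcher.php terminal

Once the queue server and fetchers are running, a crawl started from the admin interface is spread across all of the fetcher machines.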
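
Archiving a crawl, as described in the last goal, amounts to moving its timestamped folder. The folder name and location below are assumptions for illustration: crawl data is described as living under a cache folder in the work directory, in a folder named after the crawl's Unix timestamp, but the exact layout depends on your Yioop! version.

    # Zip a finished crawl's timestamped folder for archiving
    # (IndexData1294000000 is a made-up example name):
    cd /path/to/yioop_work_directory/cache
    zip -r IndexData1294000000.zip IndexData1294000000

    # To restore it, unzip back into a cache folder; the admin
    # interface can then select this crawl to serve results from.
    unzip IndexData1294000000.zip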