I've been looking into how to have an AJAX/DHTML website, whether built with a toolkit like Prototype or an interface infrastructure like Google Web Toolkit, and still be indexable by search engines. It boils down to this: if you want to let the web crawlers in, you have to give them something non-DHTML/AJAX to consume. There seem to be a few ways to do this, including intercepting page requests and dispatching to different handlers based on who's asking, or maintaining parallel sites, one AJAX and the other plain old HTML. Another option is to limit the AJAX to page elements that make things more convenient and functional for the user, such as hide/show login areas, while leaving the core content as plain HTML. Either way, the point is to have a strategy for letting the web crawlers in.
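The request-interception approach could be sketched roughly like this: inspect the User-Agent header and dispatch crawlers to a plain-HTML handler while browsers get the AJAX version. This is only an illustrative sketch, not a production crawler detector; the bot patterns and handler names below are hypothetical and far from exhaustive.

```javascript
// Hypothetical sketch of user-agent-based dispatch.
// The patterns here cover a few well-known crawlers only; a real
// deployment would need a maintained list (and should be careful
// not to serve crawlers different *content* than users, which
// search engines treat as cloaking).
const CRAWLER_PATTERNS = [/Googlebot/i, /Slurp/i, /msnbot/i];

function isCrawler(userAgent) {
  return CRAWLER_PATTERNS.some((p) => p.test(userAgent || ""));
}

function chooseHandler(userAgent) {
  // Crawlers get the plain-HTML view; everyone else gets the AJAX app.
  return isCrawler(userAgent) ? "static-html" : "ajax-app";
}

console.log(chooseHandler("Mozilla/5.0 (compatible; Googlebot/2.1)"));
console.log(chooseHandler("Mozilla/5.0 (Windows; U) Firefox/1.5"));
```

The same decision could just as well live in a servlet filter or an Apache rewrite rule; the key design point is that both handlers serve the same underlying content, just with and without the script-driven interface.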