I've been looking into how to have an AJAX/DHTML website built with a toolkit like Prototype or an interface infrastructure like the Google Web Toolkit while still staying visible to search engines. It boils down to this: if you want to let the web crawlers in, you have to give them something non-DHTML/AJAX to consume. There seem to be a few ways to do this, including intercepting page requests and directing them to different handlers based on who's asking, or maintaining a parallel site, one AJAX and the other plain old HTML. Another option is to limit the AJAX to page elements that simply make things more convenient and functional for the user, things like hide/show login areas and so on. Either way, the point is to have a deliberate strategy for letting the web crawlers in.
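To make the request-interception idea a bit more concrete, here's a rough sketch of how it might look as a servlet filter, assuming a Java server behind the site. The class name, the crawler check, and the `/plain` path are all just illustrative placeholders, not a recommendation of any particular detection scheme.

```java
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;

// Hypothetical filter: routes known crawlers to a plain-HTML view of the same content.
public class CrawlerRoutingFilter implements Filter {

    public void init(FilterConfig config) throws ServletException { }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        String userAgent = request.getHeader("User-Agent");

        // Very naive crawler check -- real detection would need to be more careful.
        boolean isCrawler = userAgent != null
                && (userAgent.contains("Googlebot") || userAgent.contains("Slurp"));

        if (isCrawler) {
            // Send crawlers to a parallel, plain-HTML rendering of the same page.
            request.getRequestDispatcher("/plain" + request.getServletPath())
                   .forward(req, res);
        } else {
            // Everyone else gets the normal AJAX/DHTML application.
            chain.doFilter(req, res);
        }
    }

    public void destroy() { }
}
```

Whatever the exact mechanism, the important constraint is that the crawler-facing version renders the same core content as the AJAX version.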
I'm thinking about all of this in the context of setting up a site that needs to be indexed properly by Google in order for AdSense to work properly. I'm leaning toward having one entry point for web crawlers and another for the application, with exactly the same core content visible on both. The crawler-facing content would be optimized to provide maximum metadata and minimum extraneous bulk. I'm leaning toward this solution because I'm intrigued by the Google Web Toolkit and its potential for building a content-rich site. Of course, I'd have to get a half-decent development machine to do this as well, since GWT uses a model whereby the application is a Java one in development and a JavaScript one at deploy time. The Java/Eclipse/etc. part is pretty resource intensive, methinks.
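For the curious, here's a minimal sketch of what that Java-in-development side can look like before the GWT compiler turns it into JavaScript. The class name, the "loginSlot" element id, and the hide/show login widget are purely illustrative; the real point is that you write and debug plain Java like this, and the host HTML page it attaches to can carry the crawler-visible content.

```java
import com.google.gwt.core.client.EntryPoint;
import com.google.gwt.user.client.Window;
import com.google.gwt.user.client.ui.Button;
import com.google.gwt.user.client.ui.ClickListener;
import com.google.gwt.user.client.ui.RootPanel;
import com.google.gwt.user.client.ui.Widget;

// Hypothetical GWT module entry point: written and debugged as Java,
// then compiled to JavaScript at deploy time.
public class SiteEntryPoint implements EntryPoint {

    public void onModuleLoad() {
        // Attach a widget to a placeholder div ("loginSlot") in the host HTML page.
        // The host page itself can hold the plain-HTML core content for crawlers.
        Button loginToggle = new Button("Show login");
        loginToggle.addClickListener(new ClickListener() {
            public void onClick(Widget sender) {
                Window.alert("Login area would be revealed here.");
            }
        });
        RootPanel.get("loginSlot").add(loginToggle);
    }
}
```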