That was a few months ago. Today, out of sheer curiosity, I went and looked at my former client's new site. It was efficient, professional, and displayed consistently across different browsers. But then I looked at the source code.
There were literally ten pages of code before any content appeared in the source. One school of thought holds that search engine spiders simply ignore code when crawling a page. Another holds that spiders won't crawl past a certain point and will stop after a certain number of characters. The literature isn't particularly clear on this; on the one hand, I can see how spiders' algorithms are complex and could be written to skip over the reams and reams of JavaScript, applets, Flash, et cetera embedded in the source. On the other hand, I've always thought it better to be safe than sorry. Besides, it's not that hard to write a line of code referencing external JavaScript, applet, Flash, and CSS files so they load off the page. The effect is the same, and if the spiders don't like code, well, then it doesn't matter, does it?
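To make that concrete, here is a minimal sketch of what "loading them off the page" looks like in practice: the styles and scripts live in separate files referenced from the head, so the markup reaches the actual content almost immediately. The file names (`site.css`, `site.js`) are my own illustrative placeholders, not anything from the client's site.

```html
<!-- Before: pages of inline <style> and <script> blocks pushed the
     content far down the source. After: one short reference each. -->
<head>
  <title>Example Page</title>
  <!-- Stylesheet kept in an external file instead of an inline block -->
  <link rel="stylesheet" href="site.css">
  <!-- Script kept external; "defer" runs it after the document parses -->
  <script src="site.js" defer></script>
</head>
<body>
  <!-- The keyword-rich content now starts within a few lines of source -->
  <h1>Main heading with the primary keywords</h1>
</body>
```

Whether or not spiders actually choke on inline code, the page behaves identically either way, and the external files can be cached by the browser as a side benefit.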
The second problem was that the new designer had buried the main keywords under a hierarchical menu, so the content containing those keywords appeared only two or three clicks away from the main page. This meant that his main keywords, cultivated over literally years of working with the site, never appeared on any of the main "hub" pages.
I suppose my advice at this point would be to keep an eye on the site, make sure it gets indexed properly by search engine spiders, and alter the content on the main pages accordingly. Since I am no longer in charge of that site, that would be left up to the new designer or the site owner. There are, of course, dozens of other aspects of a site's SEO that would have to be considered as well, but in the end, what's the point of having a flashy, professional-looking site that nobody visits?