SEO Blueprints: Preplan Sound Site Architecture

Don’t cut corners on your site’s foundation! Information architecture is an important part of a site’s performance and should be addressed from the very start of development. Many existing websites could benefit greatly if only their content were properly organized, labeled and prioritized.

This SES Toronto session, Information Architecture, Site Performance, Tuning and SEO, offered attendees classic methods for greatly improving site performance while making navigation easier for users and search engines alike.

Anne Kennedy (Beyond Ink) was session moderator. Speaking on the panel were Shari Thurow (Omni Marketing Interactive), Jill Sampey (Blast Radius), Jodi Showers (HomeStars) & Naoise Osborne (NVI).

First up to speak was Shari Thurow. If you ask web professionals what site architecture is, you will more than likely get several different answers. There is one definition she finds to be definitive: the combination of organization, labeling, search retrieval, and navigation systems within web sites. Nowhere in that definition do you see PageRank, crawlability or indexation; these are all variables that affect site architecture.

Before you create a page layout, you should develop an information architecture; the URL structure should then be developed from that information architecture.

Taxonomy. Nobody should skip this step. This is the hierarchical layout of categories and pages (1st level, 2nd, 3rd, 4th, and so on). After creating a taxonomy, run a usability test where you ask users questions to determine how easy the layout is to navigate. Can they tell very easily what page they’re on? On average it takes half a second for people to orient themselves to a page; the longer it takes them to orient themselves, the longer it takes them to find what they’re looking for.
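
As a quick illustration (the category names here are purely hypothetical), a retail taxonomy might be sketched like this:

    Home (1st level)
        Footwear (2nd level)
            Running Shoes (3rd level)
                Trail Running Shoes (4th level)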

Work to use the language of your users when developing the architecture, not only for the benefit of users but for data retrieval and spiders as well. The goal of a controlled vocabulary is to make products easy to find both by browsing and by retrieval (searching).

Taxonomists understand business goals, IT and user goals, but they often conduct usability tests to ensure people can orient themselves quickly.

After the taxonomy is complete and has been tested, begin developing a page interlinking structure. Every site needs both horizontal and vertical linking. Interlinking pages is a big priority: it lets you not only pass authority but also give further insight into what a page is about. Use related and supporting links to build up pages within the site.
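
As a rough sketch of the two kinds of links on a single page (the URLs and anchor text below are invented), vertical links point up and down the hierarchy while horizontal links point to related pages:

    <!-- Vertical links: up to the parent category, down to a deeper page -->
    <a href="/footwear/">Footwear</a>
    <a href="/footwear/running-shoes/trail/">Trail Running Shoes</a>

    <!-- Horizontal links: related and supporting pages -->
    <a href="/footwear/walking-shoes/">Walking Shoes</a>
    <a href="/guides/choosing-running-shoes/">How to Choose Running Shoes</a>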

URL structure is part of the interface, not the architecture. Many think that URL structure can make or break rankings; that’s not exactly true. URL structure should be thoughtful and useful for users. Generally speaking, people like to see clean URLs.
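
To illustrate the difference (both example URLs are made up), compare a parameter-heavy URL with a clean one that mirrors the taxonomy:

    http://www.example.com/products.php?cat=12&id=3487&sessionid=a1b2c3
    http://www.example.com/footwear/running-shoes/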

Shari closed with some words of wisdom.
– Be sure to check for broken links using Dreamweaver, Yahoo! Site Explorer or whichever platform you have an affinity for.
– Whoever is in charge of developing a taxonomy needs to be very objective.
– As always, keywords should be a part of a site’s information architecture.
– Don’t assume your architecture will work, test it!
– Implement both vertical and horizontal page linking.
– Prioritize your navigation.

Next up was Jill Sampey, speaking about the marketing opportunity that a rapidly changing online landscape and new competitors bring.

How do you know a site is performing at its best?
– Use on-site analytics & search trends

EA Sports Games: The website was built for a gaming audience. Users knew how to look for games and navigate the site, and title sites were built for popular games. These users were pre-qualified, searching on branded terms. Over the past couple of years, offline market targets have moved online to look for game options. The site didn’t easily allow this segment to access the information they were looking for from the main page, so navigation and content hierarchy had to be changed to address the new users.

What are we looking for when deciding if a site should be changed?
– Steady or low traffic levels
– New competitors (are they getting traffic from the same keywords or different?)
– Search query changes

SEO and Design: When Designing a Site
1. Always have your users’ needs in mind
2. Look at how they’re searching
3. Don’t forget about the spiders: they’re always looking for URLs to index, links to follow & text to identify

Some of the key points Jill stressed were the importance of having on-site analytics to monitor performance and weaknesses. Always have your eyes open for additional traffic opportunities. Lastly, remember that rich media and optimization can happily co-exist.

Next to the podium was Naoise Osborne. He began by telling of his first job out of college as an in-house web developer at a casino. He really wasn’t aware of information architecture until he had to deal with clients; after all, you only really have to deal with problems when you have to deal with clients. Since then he has realized that site architecture needs to be the first step.

Early in the architecture development stage, the marketing team, editorial staff, webmasters and SEOs need to meet. There needs to be an understanding and agreement across all departments on the big picture and overall objective; it’s a holistic process. Editorial needs to know about keyword importance, internal linking and anchor text, and so on. The only way they can incorporate this into copy is if they know the importance of site architecture and understand it.

Now to Get Technical
What does everybody need to get?
– Search engines crawl sites by following HTML and other links
– Quality and quantity of links pointing to a page = link popularity
– Pages need link popularity to get indexed and rank
– You can control which pages on your site get indexed and which ones get link juice.
– A page being indexed by search engines is separate from that page’s ability to accumulate or pass on link juice

robots.txt (addresses indexation): It doesn’t control the flow and accumulation of link juice, but it does let you control duplicate content. You can block what you don’t want indexed: session IDs, duplicate URLs, entire directories. Note that if another site links to a URL disallowed in robots.txt, that URL can still be listed in the SERPs.
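
A minimal robots.txt sketch, assuming hypothetical directory names, might block a duplicate print directory and an old archive from being crawled:

    # Applies to all crawlers; the paths below are examples only
    User-agent: *
    Disallow: /print/
    Disallow: /archive/old-catalog/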

On-page meta noindex: This stops the page from appearing in search engine indexes entirely, but the page can still accumulate and pass link juice. You may want to use meta noindex if you can’t alter your robots.txt, if the robots.txt standard is not flexible enough, or if you don’t want URL-only listings. It is often used in conjunction with nofollow when you also don’t want the page to pass link juice.
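
The tag itself is a single line in the page’s <head>; the combined form is the one to reach for when link juice should not be passed either:

    <meta name="robots" content="noindex">
    <!-- or, to block link juice as well -->
    <meta name="robots" content="noindex, nofollow">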

Nofollow works at both the page level (as a meta robots value) and the link level. Used without noindex, rel="nofollow" stops spiders from following a specific link; they don’t crawl or discover pages through nofollowed links. This is not a duplicate content tool; rather, it stops link juice from flowing through a specific link.
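
For example (the URL is invented), a link-level nofollow and a page-level nofollow (in the <head>) look like this:

    <a href="http://www.example.com/login" rel="nofollow">Log in</a>
    <meta name="robots" content="nofollow">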

301 redirect: Spiders follow the redirect and discover new pages. If you are restructuring URLs, you want to transfer all the link juice from the old pages to the new ones, and a 301 tells search engines to do exactly that. If any URLs need to change, 301s are the best way to shift link juice from old to new. They are also the only way to transfer link juice from one domain to another; canonicalization tags work within a domain but not across domains.
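
As a rough sketch, assuming an Apache server with mod_alias and an .htaccess file (the paths are hypothetical), a single rule issues the 301:

    Redirect 301 /old-category/old-page.html http://www.example.com/new-category/new-page/

Other servers and most CMSs have their own way of returning a 301 status code; the important part is that it is a permanent (301) redirect rather than a temporary (302) one.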

Canonicalization: Spiders go to the referenced page much like a 301 redirect, and search engines will transfer link juice from the variation pages to the real page. It’s a new approach to both indexation control and link juice control and is supported by the big three (Google, Yahoo! & Bing). Canonicalization may be cheaper than redoing your entire site from scratch, but it may become a maintenance nightmare; it can very easily turn out to be as complex as, or more complex than, doing it right from the start.
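
The tag goes in the <head> of each variation page and points at the preferred URL (the URL below is made up):

    <link rel="canonical" href="http://www.example.com/footwear/running-shoes/">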

JavaScript links: Google tries to crawl and index these if the URL is easy to access, so don’t rely on JavaScript links to block crawls.

Spider your own site to learn how PageRank is flowing and to identify what you need to change.

Last up to the podium was Jodi Showers. He spoke about performance issues and how they impact indexation, revealing a direct correlation between a website’s page load time and the number of pages indexed from that site. The truth of the matter is that the slower a website is, the fewer pages are crawled during a search engine’s scheduled visit. And if it ain’t indexed, it ain’t ranked.

Use Webmaster Tools to compare the number of pages crawled per day against the average time it takes to download a page from the site.

Last summer the HomeStars website had an average page download time of 3,000 ms (3 seconds) per page. With 2 million pages, the number actually being crawled was far too low. Once they began solving performance problems, page load time went down and the number of pages crawled jumped. Faster page downloads = more indexing love.

Jodi closed by reinforcing the need to establish metrics (pages indexed vs. page load time) and to monitor them daily.
