Sunday 17 June 2012

"How to Design Site to be Search Friendly?" by Dipen (Part 5)”


This article covers the following topics:


  • Quality Guidelines—Basic Principles
  • Quality Guidelines—Specific Recommendations 
  • Other Important Design Factors
  • Frames 
  • Robots.txt, Meta-Robots Tag 
  • Clean Code Is King 
  • Navigation Techniques

Quality Guidelines—Basic Principles

  • Make pages for users, not for search engines. Don’t deceive your users, or present different content to search engines than you display to users.
  • Avoid tricks intended to improve search engine rankings. A good rule of thumb is whether you’d feel comfortable explaining what you’ve done to a Web site that competes with you. Another useful test is to ask, “Does this help my users? Would I do this if search engines didn’t exist?”
  • Don’t participate in link schemes designed to increase your site’s ranking or PageRank. In particular, avoid links to Web spammers or “bad neighborhoods” on the Web, as your own ranking may be affected adversely by those links.
  • Don’t use unauthorized computer programs to submit pages, check rankings, etc. Such programs consume computing resources and violate Google’s terms of service. Google does not recommend the use of products such as WebPosition Gold that send automatic or programmatic queries or submissions to Google. WebPosition is a great program for monitoring your positioning and has great tools for tweaking your search engine optimization; just don’t use these types of tools for submission. Both Yahoo! and Google have implemented dynamic characters that must be replicated in the submission form. These dynamic characters are embedded in a graphic, and software programs such as WebPosition are unable to read the text and input the code.
Quality Guidelines—Specific Recommendations
  • Avoid hidden text or hidden links.
  • Don’t employ cloaking or sneaky redirects.
  • Don’t send automated queries to Google.
  • Don’t load pages with irrelevant words.
  • Don’t create multiple pages, subdomains, or domains with substantially duplicate content.
  • Avoid “doorway” pages created just for search engines, or other “cookie cutter” approaches such as affiliate programs with little or no original content.
        These quality guidelines cover the most common forms of deceptive or manipulative behavior, but Google may respond negatively to other misleading practices not listed here (for example, tricking users by registering misspellings of well-known Web sites). It’s not safe to assume that just because a specific deceptive technique isn’t included on this page, Google approves of it. Webmasters who spend their energies upholding the spirit of the basic principles listed above will provide a much better user experience and subsequently enjoy better ranking than those who spend their time looking for loopholes they can exploit.
        If your Web site is mistakenly penalized for spam, your best course of action is to contact the search engine and discuss remedies. If you are applying a technique that is considered spam, get rid of it. Know what is considered search engine spam and avoid it before it ever becomes a problem for you.

Other Important Design Factors
It is not always possible to have a Web site that meets all the requirements of a search engine and your target market. Perhaps you are coming in on the tail end of a Web development project, or you simply want to make your Web site as search engine friendly as possible without having to do a significant redesign. Here are some common issues and how to deal with them to improve the search engine friendliness of your Web site, whether you are building a new site or improving your current one:
  • Frames
  • Robots.txt, meta-robots tag
  • Clean code is king
  • Navigation techniques
  • Revisit meta-tag
  • Cascading style sheets
  • Dynamic pages and special characters
  • Splash pages and the use of rich media
  • Use of tables
  • Custom error pages
  • Image maps
  • Optimization for search localization
Frames
          From a marketing perspective, you should avoid building your Web site entirely on frames. Frames are probably the most recognized hurdle when it comes to search engine optimization.
          Frames may result in some search engines being unable to index pages within your site, or they can result in improper pages being indexed. Also, many people simply prefer sites that do not use frames. Frames also cause problems when someone wants to bookmark or add to their favorites a particular page within a framed site. Usually only the home page address is shown.
         What I mean by “improper pages being indexed” is that content pages will be indexed, and when the search engines direct users to these content pages, those users likely will not be able to navigate your site because the navigation frame will not be visible. To prevent this from happening, you can insert a robots meta-tag in the header section of your HTML that does not allow bots to proceed beyond your home page. As a result, you can really submit only your home page, which means you have less of a chance of receiving the high rankings you need on the major search engines. Alternatively, you should include textual links to all major sections within your site to accommodate those users who enter your site on a page other than the home page, and to assist the search engines with indexing your site.
         Some search engines can read only information between the <NOFRAMES> tags within your master frame. The master frame identifies the other frames. All too often the individuals who apply frames ignore the <NOFRAMES> tags, which is a big no-no. If you do not have any text between the <NOFRAMES> tags, then the search engines that reference your site for information have nothing to look at. This results in your site being listed with little or no information in the indexes, or you are listed so far down in the rankings that no one will ever find you anyway. To remedy this situation, insert textual information containing your most important descriptive keywords between the <NOFRAMES> tags. This gives the search engines something they can see, and it also helps those users whose browsers are not frame-compatible.
            Now that the search engines have found you, you still have a problem: they can’t go anywhere. Create a link within your <NOFRAMES> tags to allow search engines and users with browsers that aren’t frame-compatible to get into your site. Frames are a headache when you are designing your site to be search engine friendly. To make your life easier, and from a marketing perspective, it’s better to avoid them altogether.
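            As a minimal sketch of how this fits together (the page names, title, and descriptive text are placeholders, not from the article), a master frame page with a populated <NOFRAMES> section might look like this:

    <HTML>
    <HEAD>
    <TITLE>Acme Widgets, discount widgets and widget accessories</TITLE>
    </HEAD>
    <FRAMESET COLS="20%,80%">
      <FRAME SRC="navigation.html" NAME="nav">
      <FRAME SRC="main.html" NAME="content">
      <NOFRAMES>
      <BODY>
      <!-- Keyword-rich text so search engines and browsers that are
           not frame-compatible have something to look at -->
      <P>Acme Widgets sells discount widgets and widget accessories online.</P>
      <!-- A link so spiders and no-frames visitors can get into the site -->
      <P><A HREF="main.html">Enter the Acme Widgets site</A></P>
      </BODY>
      </NOFRAMES>
    </FRAMESET>
    </HTML>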

Robots.txt, Meta-Robots Tag
         <META-NAME=“robots” CONTENT=“ ”> tells certain bots to follow or not follow hypertext links. The W3 Consortium white paper on spidering (spiders are defined below) offers the following definition and discussion:
  • • <META-NAME=“ROBOTS” CONTENT=“ALL | NONE | NOINDEX| NOFOLLOW”>
  • • default = empty = “ALL” “NONE” = “NOINDEX, NOFOLLOW”
  • • The filler is a comma-separated list of terms:– ALL, NONE, INDEX, NOINDEX, FOLLOW, NOFOLLOW.
Note: This tag is for users who cannot control the robots.txt file at their sites. It provides a last chance to keep their content out of search services. It was decided not to add syntax to allow robot-specific permissions within the meta-tag. INDEX means that robots are welcome to include this page in search services.
            FOLLOW means that robots are welcome to follow links from this page to find other pages. A value of NOFOLLOW allows the page to be indexed, but no links from the page are explored. (This may be useful if the page is a free entry point into pay-per-view content, for example.) A value of NONE tells the robot to ignore the page.
            The values of INDEX and FOLLOW should be added to every page unless there is a specific reason that you do not want your page to be indexed. This may be the case if the page is only temporary.
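            As a quick sketch of both options (the directory name in the robots.txt fragment is a placeholder), the meta-tag goes in the header section of each page, while robots.txt sits at the site root for those who can edit it:

    <!-- In the <HEAD> of a normal page: welcome to index it and follow its links -->
    <META NAME="robots" CONTENT="INDEX, FOLLOW">

    <!-- In the <HEAD> of a temporary page you want kept out of the indexes -->
    <META NAME="robots" CONTENT="NOINDEX, NOFOLLOW">

    # A robots.txt equivalent at the site root (hypothetical /temporary/ directory)
    User-agent: *
    Disallow: /temporary/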

Clean Code Is King
            Clean code is essential to search engine success. You want to ensure that you do not have stray tags, HTML errors, or bloated code. Problematic code is bad for the user experience and bad for search engine placement.
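            As a small illustration (the snippet is made up for this example), here is a fragment with an unclosed tag and a stray closing tag, followed by its cleaned-up equivalent:

    <!-- Problematic: <B> is never closed and </FONT> was never opened -->
    <P><B>Discount widgets on sale</P></FONT>

    <!-- Clean: every tag is opened, properly nested, and closed -->
    <P><B>Discount widgets on sale</B></P>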

Navigation Techniques
           JavaScript embedded in anchor tags, drop-down menus, and pull-down menus can cause many headaches for a Web site looking to be indexed by the major search engines. The rollover effect on navigation links is quite common and can add visual appeal to a Web site. A problem arises when JavaScript is encased within the anchor tag, which many search engine spiders cannot follow.
             The rollovers look good, so odds are that if your site is using them, you are not going to want to get rid of them. A quick and simple solution to ensure that your site is indexed is to include text-based navigation along the bottom of your Web page as supportive navigation. This approach also gives you the opportunity to get in your keywords twice: once in the Alt text for your main navigation and a second time in the anchor text of the supportive text links. In addition, it is to your benefit to place all your JavaScript in external files to keep the Web site code as clean as possible.
           Drop-down menus (DHTML, for example) and pull-down menus pose similar concerns because of the coding script necessary for them to execute. If you choose to use them, be sure to have an alternative means of navigation available.
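           As a hedged sketch of the pattern described above (the swap() rollover function, file names, and links are all placeholders), the script lives in an external file, the graphical navigation keeps its rollover effect and keyword-rich Alt text, and plain text links repeat the keywords along the bottom of the page:

    <!-- Rollover script kept out of the page in an external file -->
    <SCRIPT TYPE="text/javascript" SRC="rollover.js"></SCRIPT>

    <!-- Main graphical navigation: JavaScript encased in the anchor tag,
         with keywords in the Alt text -->
    <A HREF="widgets.html" onMouseOver="swap('nav1','on')" onMouseOut="swap('nav1','off')">
    <IMG NAME="nav1" SRC="nav-widgets.gif" ALT="Discount widgets"></A>

    <!-- Supportive text-based navigation along the bottom of the page,
         with keywords repeated in the anchor text -->
    <P>
    <A HREF="widgets.html">Discount widgets</A> |
    <A HREF="accessories.html">Widget accessories</A> |
    <A HREF="contact.html">Contact us</A>
    </P>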
