Deadeye Keyword Research: Advanced Tips For Choosing The Right KWs from #SMX

Posted in Keyword Research, SMX West

Keyword research is one of the most crucial foundations of our industry, a pillar of so much of the work we do as online marketers. Armed with the fruits of that research, we advise aggressive SEO, craft blow-away PPC campaigns, compose eloquent and relevant content, and oh, oodles more. The keywords we hunt down and go after, then, must be the very best, and only the very best kind of research will reveal them to us. Shallow skimming of the KW universe simply isn’t good enough, for that often only reveals the obvious KWs: the most popular ones, the most expensive ones, the ones our competitors already yoinked. If we are to truly rock at our jobs, we must be clever and cunning enough to find the perfect keywords.

Moderator and speaker Christine Churchill, President, KeyRelevance, was joined by Mat Siltala, Founder, Avalaunch Media, and aimClear’s own Founder & Evangelist, Marty Weintraub, for a rousing morning session here on the final day of #SMX West 2013. The three presented their own insider tips, tactics, and top-shelf techniques for badass keyword research.

[Photo: Christine Churchill, Mat Siltala, Marty Weintraub]

Ain’t they a beautiful bunch of marketers? :) aimClear live-tweeted this session via @beebow. Read on for the savory takeaways.

Christine took the stage, welcomed the audience, and introduced the panel. As moderator, she also introduced the first speaker… herself! Her presentation focused on one of the biggest problems SEOs have been lamenting for a year and a half… the dreaded “not provided” keyword data.

Work Arounds For The “Not Provided” Keyword Data Trap
Once upon a time, when visitors landed on your site from a SERP, Google would report all the referring KWs in Analytics. But after 10/18/11, Google stopped providing this data… for… “privacy reasons…”


Some Facts on “Not Provided”

  • Applies to Google search
  • Applies to Organic search
  • Applies to all https:// searches
  • It’s the default in Firefox
  • Applies when browsing while “logged in”
  • Does NOT apply to all PPC search clicks
  • The % of traffic affected varies widely from site to site

Christine’s Approaches For Capturing Lost KW Insight

1) Ignore the not provided line. This is the easiest approach. Best for sites with small Data Not Provided %s (10% or less).

Here’s what you do:

  • Divide not provided # by Google visits to get the %
  • Don’t use the total search visits
  • Filter for Google visits
  • If you focus on trends in reporting and the data not provided % remains consistent, then this is okay
  • But be careful – that number will likely start to creep up
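
The math in those bullets is simple enough to sanity-check in a few lines; the visit counts here are hypothetical:

```python
# Hypothetical visit counts -- substitute your own Analytics numbers.
google_visits = 12_500        # visits filtered to Google only (NOT total search visits)
not_provided_visits = 1_100   # visits reported as "(not provided)"

# Per Christine's tip: divide the not provided # by Google visits to get the %.
not_provided_pct = not_provided_visits / google_visits * 100

# Under ~10%, the "ignore it" approach is reasonable.
if not_provided_pct <= 10:
    print(f"(not provided) is {not_provided_pct:.1f}% -- safe to ignore for now")
else:
    print(f"(not provided) is {not_provided_pct:.1f}% -- consider another method")
```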

2) Report as a block. Use the Not Provided line as a gauge of coverage for the other KWs only.

3) Use Bing data. Look at Bing KWs in analytics. Bing’s market share was 16.5% in Jan ’13, compared to Google’s 67%. Bing provides all KW information. Their slogan: “Your data, always fully provided.”

  • Christine has noted different behaviors from people who come in organic via Bing vs. Google, so that’s something to keep in mind.

4) Use PPC. PPC KW data is not affected by the “not provided” change, so use performance data from there. Assume KWs that work well in PPC are good words to target organically.

5) The roll-it-in method. Roll not provided data INTO other KW data. Assume traffic from not provided searches is exactly like the provided data. This isn’t a perfect method, because you’re making assumptions. Regardless, inflate breakdown data proportionally, such that total visits = sum of data breakdown.
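
The roll-it-in scaling can be sketched like so (the keyword counts are made up for illustration):

```python
# Hypothetical KW report: visits per provided keyword, plus a
# "(not provided)" bucket to redistribute proportionally.
provided = {"red widgets": 600, "blue widgets": 300, "widget repair": 100}
not_provided = 250

total_provided = sum(provided.values())   # 1000 provided visits

# Inflate each KW's visits proportionally so the breakdown sums to ALL visits.
factor = (total_provided + not_provided) / total_provided   # 1.25
rolled_in = {kw: visits * factor for kw, visits in provided.items()}

# total visits = sum of data breakdown, as the method requires
assert sum(rolled_in.values()) == total_provided + not_provided
```

The assumption baked into `factor` is exactly the one called out above: that not provided traffic breaks down the same way the provided traffic does.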

6) Report on “Not Provided” by LP in new report. Look at secondary dimension in analytics to report on LP in separate report. Advantage: no special reporting setup needed. Disadvantage: data is in, yep, a separate report.

7) Report on NPs by LP within the KW report. Same as above, but roll it in with the KW report. Leverage profile filters to do this. Again, this method assumes a lot, so it’s not perfect, but it’s better than nothing.

8) Use Google Webmaster Tools (GWT) KW data. This shows all KWs, impressions, and clicks. Impressions without clicks might indicate a title / meta desc / snippet issue. Disadvantages: no conversion info, and you cannot break down by other segments. In other words, the data isn’t highly actionable. Also, mapping KW to LP is a tedious one-at-a-time process, so again, it’s not useful for large-scale KW analysis. But it’s worth noting that GWT provides up to 90 days of query data and up to 2,000 terms. (Aaaagain, not the best for large-scale sites.)

Here’s what you do:

  • Sort by clicks to see the terms that actually bring most traffic to site. Christine finds this very useful.
  • If you click on the query, it also shows which pages appear for the term.
  • Use filters to separate brand / non-brand and core terms, plus sources from web, image, mobile, and video!   @keyrelevance

SUPER important tip!

  • GWT query data defaults to WEB data.
  • To use corresponding SEO query data in GA, you MUST FILTER for the Google Property Web!
  • It’s a couple extra steps to set this up, but it makes for much more meaningful data.

Christine’s Conclusions…
There’s no perfect way to recapture this data. Marketers lost an important source of KW performance info with the not provided change :(  .

Next up, Mat. He was set to talk about using competitors for KW research. *Rubs hands together maniacally*

SEO is dead… right?
Don’t get me started. It’s not dead, but as Matt McGee smartly put it…

[Embedded tweet from Matt McGee]


Guess what? That plan all starts with keyword research. “You can’t skip over KW research,” Mat (Siltala) stressed. “You can’t get annoyed with it. You HAVE to do it.”

Buying vs. Browsing
One of the most important things to understand is keyword intent: keywords that suggest people are ready to buy vs. keywords that suggest people are still in the browsing phase (or not in the buying cycle at all).

  • “mattress” might seem like the best KW to go after, but it does NOT express intent
  • “sealy posturepedic proback gold elegance ii” is a much more specific KW that implies the research phase is over, and buying intent is there
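
That split lends itself to a rough first-pass filter. To be clear, this is a toy heuristic of ours, not something from the session: the signal words and the four-word threshold are assumptions you would tune for your own vertical.

```python
# Toy heuristic: long, specific queries (or ones carrying purchase
# modifiers) hint that the research phase is over.
BUY_SIGNALS = {"buy", "price", "cheap", "discount", "order"}

def looks_like_buying_intent(keyword: str) -> bool:
    words = keyword.lower().split()
    long_tail = len(words) >= 4                     # very specific query
    has_signal = any(w in BUY_SIGNALS for w in words)
    return long_tail or has_signal

print(looks_like_buying_intent("mattress"))   # False -- head term, browsing
print(looks_like_buying_intent("sealy posturepedic proback gold elegance ii"))   # True
```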

Screaming Frog for Keyword Research!
This is, like, the 4th time during this conference that speakers have strongly endorsed Screaming Frog – so yeah, it’s worth checking out. Specifically in this session, Mat recommends we all use it for KW research, namely to crawl a competitor’s site and analyze the basics: titles, headings, URLs, meta keywords, descriptions, the works.

One rule before competitor analysis: make sure the site you’re spending time analyzing is strong!

Now, onto tips for crawling each SEO element.

  • Titles – Use the “page titles” report to crawl the titles of a competitor’s site. Consume the data. Get granular. See what your competitors are focusing on. Get KW ideas from what they’re already doing.
  • Headlines – To crawl headlines, use the H1 report. Crawl and consume. Harvest great ideas for content :) Plug headlines into Google and check out what it suggests – see what people are looking for in terms of those KWs.
  • URLs – To crawl URLs, use the internal report. If KWs are in competitors’ URLs, you know those KWs are important to them.
  • Meta Keywords – No huge impact on SEO anymore, but they’re still worth a look. It’s also fun to see if your competitors are doing something stupid :)
  • Meta Description – This element is super important! It represents the meat and potatoes of the page you’re crawling. Consume the data. Get reeeeally granular. Glean all you can! Home in on those buying vs. browsing words. Pick out the KW nuggets. Mat emphasizes this SEO element as one of the best areas to crawl.
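
For a feel of what the crawl is actually pulling out, here’s a minimal stand-in using Python’s stdlib parser. Screaming Frog does this at scale across a whole site; the sample HTML below is invented for illustration.

```python
from html.parser import HTMLParser

class KWElementParser(HTMLParser):
    """Collects the KW-bearing elements: title, H1s, meta description."""
    def __init__(self):
        super().__init__()
        self.title, self.h1s, self.meta_desc = "", [], ""
        self._in = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag in ("title", "h1"):
            self._in = tag
        elif tag == "meta" and a.get("name", "").lower() == "description":
            self.meta_desc = a.get("content", "")

    def handle_endtag(self, tag):
        if tag == self._in:
            self._in = None

    def handle_data(self, data):
        if self._in == "title":
            self.title += data
        elif self._in == "h1":
            self.h1s.append(data)

p = KWElementParser()
p.feed('<html><head><title>Buy Sealy Mattresses</title>'
       '<meta name="description" content="Discount Sealy mattresses">'
       '</head><body><h1>Sealy ProBack Gold</h1></body></html>')
print(p.title, p.h1s, p.meta_desc)
```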

Getting Social With Screaming Frog
You can also find virals or social campaigns with Screaming Frog. Filter the report on common locations: blogs, infographics, videos, top X type posts, and so on – then, dig in. Take note of viral and successful content – what were the target KWs? What was the creator going after? What KWs was the creator linking, and where?

Speaking of Social…
As you go through all your gorgeous crawled data, think about what content would work well on social.

  • For Pinterest, Mat likes to use PinAlerts – it alerts you every time something you post gets pinned and also tells you what the pinner used in the description of the pin. This lets you see what people think is the important part of your post, which can help inform the way you create and optimize future content.

Interesting! They say 20-25% of all queries are brand new, meaning people are literally coming up with new ways to search for things based on the content they consume. Incredible. In part because of this, KW research truly never ends!

  • For Twitter, Mat recommends TweetReach. Find the “most retweeted” tweets, and work to understand why. Use that insight to inform future content and the way you frame it.

Open Site Explorer for Back Links
Mat also recommended running competitor sites through a backlink analysis tool like Open Site Explorer from our friends at SEOmoz. He’s particularly fond of the anchor text report. Export and analyze the data. Mat suggested focusing on total links, and getting granular with filters.

Spyfu!
Spyfu is another great tool for competitive intelligence. Mat digs their ad history report. After you compile all your KW ideas, run them through AdWords, then prioritize based on relevance and search volume.

Then… go forth, and optimize!!!

Oh wait! Some last minute goodies:

  • http://leve.rs/ is a new tool that can help you pick KWs based on ROI (fascinating)
  • PlacesScout – a groovy tool with tons of data specifically for KWs that express local intent

Alright! Mat turned the mic over to Marty, who was ready to take us on a crazy live-demo tour of some freakin’ awesome tools for advanced KW research.

Beyond Google’s Keyword Tool…
“Google has tried for years and years to make transparency with KWs go away,” Marty began. So he was set to share with us some cool tools to find KWs *other than* Google’s KW tool. Keep in mind, it’s worth running the massive lists of KWs you create from other tools back through the Google KW tool to get Google’s take on volume, etc. The point of Marty’s preso is you don’t need Google’s KW tool to build those lists in the first place.

That said… leveraging Google in other ways can be helpful… (if you were lucky enough to attend this session, you saw one of the MOST DOPE and totally juiciest hacks to create stunning KW lists… but we weren’t allowed to cover that gem… sorry, folks :) ).

Hint: What is the most highly optimized page on the internet on any given day…? Answer: A Google SERP.

[Photo: Marty Weintraub]
photo credit: Jerad Hill

Moving right along… saddle up for some fabulous tools and tactics for killer KW research you might not have thought of before!

Wikipedia Pages! Zoom in on a Wiki page that speaks to your basic target KW concept. Oh yeah. Go ahead and run that baby through the Google KW tool and consume the goods.

Mozenda. This is a sexy little scraper. More specifically, Mozenda is an object-based data extraction unit that dumps things to a database on a schedule you define. Works on Windows only.

Here’s what you do:

  • Get an account, start an agent, enter a list of things to search in Google.
  • Mozenda scrapes the related searches Google provides at the bottom of Page 1.
  • This is a beautifully scalable way to mine the related searches for a giant batch of KWs.
  • Cool news! Mozenda works for ANY site that has a loop in its programming, that is to say, a site where you enter a KW and it provides related KWs for your original entry, e.g. Wikipedia, Yandex, Bing, YouTube, Google, etc.
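
Here’s that loop in miniature, with a stubbed fetcher standing in for Mozenda’s scraping step. The `RELATED` table is fake illustrative data, not real search results:

```python
# Fake "related searches" data standing in for what a scraper would
# pull from the bottom of a Google results page.
RELATED = {
    "mattress": ["memory foam mattress", "mattress sale"],
    "mattress sale": ["labor day mattress sale", "cheap mattress"],
}

def related_searches(term):
    # A real agent would scrape the live suggestion/related-search box here.
    return RELATED.get(term, [])

def mine(seeds):
    """One pass: collect the related searches for a whole batch of seed KWs."""
    found = set(seeds)
    for seed in seeds:
        found.update(related_searches(seed))
    return sorted(found)

print(mine(["mattress"]))   # ['mattress', 'mattress sale', 'memory foam mattress']
```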

Scrapebox! Also runs on Windows. Scrapebox is a wacky tool run by a mysterious mystery man / woman in Australia. Marty thinks Google would kill him/her if it could…

  • Costs $100 to buy, and then you own it forever.
  • Capable of black-hat things, nefarious things, but also… lovely things.
  • In Scrapebox’s KW scraper, there’s a source box on the left and a results box on the right.
  • One of the most beautiful things about Scrapebox: as you pull lists, run them, and populate the box on the right, you can move that right box right back over to the left and run it again! (Over and over!)
  • Lists pull from all kinds of sources: Google (US & foreign), YouTube, Amazon, Yahoo Shopping, etc.
  • Once you have your massive giant list, change your scraping source – add even MORE searches to it!
  • The goal is to find what people SEARCH in relation to the products and KWs you’re hunting after (we’re talking different kinds of searches, too – ones pulled from eCommerce sites, ones pulled from video sites – wide range here!)
  • The end result: a spreadsheet populated with all your gorgeous, glorious KWs and all the stems, with columns for each time the list is run through a new source.
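
That right-box-back-to-the-left trick is essentially an iterative expansion loop. A sketch under obvious assumptions: the suggestion source is a stub, where Scrapebox would query Google, YouTube, Amazon, and the rest for real.

```python
# Fake suggestion data; each real source (Google, YouTube, Amazon...)
# would return its own flavor of related searches.
SUGGESTIONS = {
    "mattress": ["mattress topper", "mattress reviews"],
    "mattress topper": ["memory foam mattress topper"],
}

def suggest(term):
    return SUGGESTIONS.get(term, [])

def expand(seeds, passes=2):
    """Feed each pass's results back in as the next pass's seeds."""
    keywords = set(seeds)
    frontier = list(seeds)
    for _ in range(passes):
        frontier = [s for term in frontier for s in suggest(term)
                    if s not in keywords]
        keywords.update(frontier)
    return sorted(keywords)

print(expand(["mattress"]))
```

Two passes on the single seed “mattress” already surface a second-hop stem (“memory foam mattress topper”) the seed alone never mentions, which is the whole point of re-running the results.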

Needless to say, Marty can take the giant lists of KWs he builds through both of these tools and run them through Google’s KW tool to get Google’s take on things.

PHEW. To wrap up, some final precious gems from Marty:

  • Suggestion boxes are everywhere! They herd you like little kittycats toward the KWs engines want to sell. Basically, suggest boxes exist to consolidate KWs, drive up the price of certain KWs, and drive out the long tail.
  • Your goal is to find the nuances of contextual relationships for content – if you don’t sell them the first time, get them with retargeting.
  • Take what you learn as social marketers, add in semantic insight, and drive people to relevant content through good keyword research.
  • Another cool tip: if you’re after normal intent stuff, create filters to pull out shopping KWs.
  • Add those filters back in – this is completely the best way to find your high-intent KWs.

“High-intent shopping keywords are like lake front property – they’re just not making much more of it …” -Marty Weintraub

  • It’s ALL inventory, because it ALL came from something someone typed in, or the first page of Google related searches!
  • If you can’t find anything to scrape, crawl, and get insight on… write content, stick it in an HTML doc, upload it, and run that URL through Google :) .

Hot DANG that was a jam-packed session. Some of the tastiest takeaways from #SMX West 2013, in this live-blogger’s humble opinion. A great big thanks to the speakers for their wonderful insight and keyword research brainchow. Stick around aimClear Blog for more coverage straight from the convention center, and of course, follow along with yours truly @beebow for continued live tweets comin’ atcha.

  • Tim Carpenter

    First time checking out this blog, will certainly be back. This blog is packed full of some great insights. The parts concerning “Not Provided” are mostly things I have come across before, but it is truly unfortunate that we have to resort to such nonsensical methods to add value to the “Not Provided” data.

    Unfortunately my site has around 40-50% “Not Provided” on any given day, and as such I just treat my actual keyword data as half the total. This tends to work in my instance for most but the really long tail keywords.

    The true gem of this article is the part concerning keyword research, which is why I clicked over to this article. Some really great info that I will work into my methodology.

    Thanks for sharing all of this information!

    PS: Marty seems like such a cool guy!

  • Paul Yokota

    This was definitely an epic session, probably the best of the conference. Tons of great, actionable tactics from all the presenters, and despite the technical difficulties, it was a lot of fun to see all of Marty’s manic antics up close.

  • Allie E. Williams

    Agree with Paul – this was an epic session! It’s the one I keep thinking about almost a week after the show. Thanks for writing it up!

    I’m looking forward to playing with these processes. And finally using Safari for something ;^)

    Thanks!
