Monthly Archives: January 2005

Local Search

For the Search Lounge I have been writing reviews of specific engines, but I wanted to try a different tack. I want to focus on types of search, or to put it another way, user missions. I recently made a foray into this by writing about shopping searches on Google and Yahoo, and in the future I plan to cover blog search and other types of search. I hope writing from the perspective of the type of information being sought will be useful.

Intro
For years people have predicted that local search will be one of “the next big things” in the search industry. I don’t dispute that. Local has always been a gaping hole on search engines. Recently, the major engines have started pushing their local search, so I thought it would be a good idea to check them out and see how they stack up against each other. Just to be clear on definitions, local search is search that is targeted to a US city or region.

I will look at each of the big four: Ask Jeeves, Google, MSN and Yahoo. A lot of the data these engines use comes from other sources, but I will focus on the user experience coming through each engine. For most users, it matters little where the backend data comes from, even though Ask Jeeves and MSN both use Citysearch.

A major comment I have about all four engines is that their search is focused on local businesses. That is definitely valuable, but I hope they will expand it and allow for other kinds of local searching. Maybe I want to search for general web sites about my city, or for bloggers who live near me, or for local government information. Of course I can use general web search for this type of thing, but eventually I hope it will be integrated into the local products.

Conclusions
(I’m putting the conclusions earlier in this article to save people the pain of reading all the gory details. But for those of you who want the details, they are included below.)

Google and Yahoo are my preferred choices, with Google being the slight winner. Because I live in San Francisco and have so many other options for local information, MSN's portal features, about which I will say more later, are not particularly compelling. For other users, or for people from other cities, it very well may be different.

To provide more context, MSN is the only one of the four that is a full local portal. The other three are more search-based. So, depending on what you're looking for, you'll want to use different ones (gee, what a surprise). I think Google offers the best search, but MSN's browsing options could be useful. Yahoo stands out because they control their own data, and in the long run that will set them apart. Ask didn't really stand out to me in any significant way.

It’s interesting to think about the aforementioned strengths of each local engine because they accurately reflect each company as a whole. MSN is a destination company, Google is a search company, Yahoo is a destination/search/media company, and Ask is hanging with the others, but needs a little more oomph.

All four engines default to my saved search location, but Ask and Yahoo also keep a list of other recently searched locations for easy access. I find that feature useful because although I live in San Francisco, I also often search for information about Santa Cruz and San Diego. All four engines have useful help pages dedicated specifically to local search.

Breakdown by Engine

Ask Jeeves Local
Strengths: Saves recent locations and searches.
Areas for Improvement: Customizing Citysearch's data; improving local news.

Ask Jeeves Local, which is tagged as still being a beta release, is focused on business listings. In fact, when you are on their Local page, the highlighted search tab defaults to one simply called Business Listings. They also have tabs for Maps, Directions, Local News and Weather. Maps and Directions could probably be consolidated into one tab, but that's not a big deal.
When you search on Ask, you go to a Citysearch results page, though it is branded to look like Jeeves until you actually click on a listing, at which point you go right to Citysearch. The order of results is different on Ask than on Citysearch, so I guess Ask is overlaying its own algorithm onto Citysearch's data. Another difference is that on Citysearch you can sort by Best Of, Distance, Alphabetical, or Top Results, but on Ask you can only sort by Distance or Ratings. Since they are using the same listings data, it seems they should be able to offer the other sort options, even if they rank the results differently.
The results are sorted by distance as the default, but distance from what exactly? I couldn’t tell. With each listing there is a user rating, the address and phone number, and links for maps, directions, and a website, if one exists.

Query Examples
vegetarian restaurants – location: San Francisco.
Three of the first ten listings are indeed vegetarian restaurants, but the other seven are not. There's a sushi restaurant, a taqueria, an Italian restaurant, and so forth. I looked at all of them and most of them have vegetarian listed as one of the cuisines, but not all. For instance, Max's does not have the word vegetarian anywhere on the page, so why was it returned at all? I don't know the answer to that; I even looked at the HTML code and couldn't find vegetarian anywhere. And although the other restaurants do have vegetarian as one of the cuisine options, that is not what I was searching for. I was specifically searching for vegetarian-only restaurants. The engine might be forgiven for not realizing my specific user mission, but in this day and age just about every restaurant has at least one veggie option, even if it's just pasta or grilled cheese, so showing me non-vegetarian restaurants that have vegetarian options is not quite good enough. This problem is really Citysearch's, since Citysearch is the one entering the data, so Ask might be forgiven for doing what it should: searching Citysearch's data. But then again, as a user I don't care; all I care about is finding what I'm looking for.
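To make the distinction concrete, here is a minimal sketch of the difference between "lists vegetarian as one of its cuisines" and "is vegetarian-only." The listing records and field names are my own invention for illustration, not Citysearch's actual schema.

```python
# Hypothetical listing records -- invented for illustration, not Citysearch's schema.
listings = [
    {"name": "Millennium", "cuisines": ["vegetarian"]},
    {"name": "Sushi Place", "cuisines": ["japanese", "vegetarian"]},
    {"name": "Max's", "cuisines": ["american", "deli"]},
]

# What the engines appear to do: return any listing that mentions vegetarian at all.
has_veg_option = [r["name"] for r in listings if "vegetarian" in r["cuisines"]]

# What my query actually meant: listings whose only cuisine is vegetarian.
vegetarian_only = [r["name"] for r in listings if r["cuisines"] == ["vegetarian"]]

print(has_veg_option)   # ['Millennium', 'Sushi Place']
print(vegetarian_only)  # ['Millennium']
```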

Jeeves has a tab for local news, but it is a bit confusing because the search box prompts for: City, State or Zip. OK, so I enter San Francisco. The results consist of 10 local news items, most of which are relevant. Some crime articles, some events, that kind of thing. But if I just wanted to see local headlines I can go to a local newspaper site. What I want is to be able to take advantage of Ask’s search technology to search articles from multiple sources. The second article is about a major drug bust that happened in the city, so I should get relevant results if I search for drug bust. However, I don’t. What I get instead is news from around the world about this topic. Since I am still in the local tab, I will give them the benefit of the doubt and say the search is being run against local sources, but not MY local sources. There is an article from Boise and one from Palm Beach. The article from San Francisco is there too. If I refine my query and try San Francisco, CA drug bust I get no results at all. The local news tab is not offering me much.

Google Local
Strengths: Relevant and extensive results from their web search that match the local listings; results are shown on a map.
Areas for Improvement: All searches first return only business listings, so a search for something like reviews of pinball bars – location: San Francisco doesn't return general review sites, just specific listings. I only call this an area for improvement because I know they have the data and it could be incorporated.

Google Local, another beta release, is also focused on businesses. It says right there on the page: Find local businesses and services on the web. They are also pushing Keyhole. When I first heard of Google's acquisition of Keyhole, I thought they were off on a tangent. Then I heard Chris Sherman's keynote speech at Internet Librarian in November, and he explained how Keyhole will integrate with Google's search so users will be able to actually see locations. I'm not 100% behind it yet, but I can certainly see the potential.

Query Examples
vegetarian restaurants – location: San Francisco. Right off the bat I am impressed with what I see. Nine out of the first ten listings are vegetarian-only restaurants. Yea! They are really targeting vegetarian restaurants, rather than those restaurants that have some vegetarian dishes. I really like the interface. Along with the expected info, like phone number, address, and web site, there is also a nice map displayed right next to the listings. The only drawback I see is that although I targeted San Francisco, half of the restaurants are actually across the bay in the East Bay. Fortunately I have the option to search within 1 mile, 5 miles, 15 miles, or the default 45 miles. It’s a small thing, but I think the default should be 5 or 15 miles.

At the top of the page, Google shows three sponsored links. In this case, they were fairly relevant. One was for a vegan store, another for Citysearch, and the third for Green’s vegetarian restaurant.

But let me get to the best part, which is clicking on one of the restaurant links. Google takes you to a search-results-style page with references to the restaurant. They seem to be sending the name of the restaurant to their general web index and returning matching results. It's really a great feature because it provides not only the restaurant's homepage, but also many reviews. All the results for Millennium Restaurant were relevant to the restaurant. The only suggestion I can make is that I wish Google's URL were easier to parse and understand. Here's what it looks like: http://local.google.com/local?q=vegetarian+restaurants&hl=en&lr=&c2coff=1
&safe=off&sa=G&near=san+francisco,+ca&radius=0&latlng=37775000,-122418333,8758249238891447007
I thought that really long number at the end was a cookie, but now I'm guessing it's some kind of internal mapping ID number they've generated. Otherwise, I don't see anything that indicates how the phrase "Millennium Restaurant" was passed to their web search.
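For anyone who wants to poke at the URL themselves, the query string can be pulled apart with a few lines of Python's standard library. This is just a sketch; the interpretations in the comments are my guesses, not documented Google parameters.

```python
from urllib.parse import urlparse, parse_qs

url = ("http://local.google.com/local?q=vegetarian+restaurants&hl=en&lr=&c2coff=1"
       "&safe=off&sa=G&near=san+francisco,+ca&radius=0"
       "&latlng=37775000,-122418333,8758249238891447007")

params = parse_qs(urlparse(url).query)
print(params["q"])       # ['vegetarian restaurants'] -- the search terms
print(params["near"])    # ['san francisco, ca']      -- the target location
print(params["latlng"])  # the first two numbers look like latitude/longitude in
                         # microdegrees (37.775, -122.418); the last is the mystery ID
```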

MSN Local
Strengths: Full local portal.
Areas for Improvement: MSN local search is the same as Citysearch's search.

MSN uses Citysearch's data for search, but they also have a lot of other content. The front doors for San Francisco are different on MSN and Citysearch.com. MSN, being a portal and all, is pushing its own properties for things like news and shopping. But as soon as you do a search, MSN kicks you over to the Citysearch interface. I also noticed that when I come through the main MSN.com homepage there is a traffic option, but I couldn't find it on the local page.

Query Examples
vegetarian restaurants – location: San Francisco. Well, the results are different from what I got on Ask, but it comes as no surprise that the same thing is happening: non-vegetarian restaurants are being returned along with vegetarian-only restaurants. And so my thoughts about Ask apply to MSN as well. Yes, this is Citysearch's issue because of the data they are providing, but as the interface to that data, MSN may want to address this type of thing.

Where MSN differentiates itself is that it is a true portal. It has event listings, job listings, sports news, and so forth. They are pulling a lot of content from a lot of sources. Each sub-page has a similar interface. You get to these pages by browsing from the local front door.

Yahoo Local
Strengths: Searching their own content (I think); traffic (as in cars and roads, not site traffic) monitoring.
Areas for Improvement: More integration with general web search results.

I really like Yahoo's integration of real-time traffic monitoring. Not exactly search, but nice nonetheless. Yahoo states that they do not sell rankings, but they do offer businesses the opportunity to enhance their listings by adding visual elements. They also offer regular users the ability to contact Yahoo Local to add or update listing information.

Query Examples
vegetarian restaurants – location: San Francisco. Same problem as Ask and MSN: many of the results are for restaurants that are not vegetarian-only. At first I thought Yahoo was simply text-searching the full restaurant descriptions, since a sushi restaurant matched on this text: “Cooked seafood and vegetarian dinners are available.” However, Yahoo also offers a “vegetarian restaurants” category, and clicking on that did not change the results at all, so the restaurants were clearly placed in the vegetarian restaurants category. In this case it looks like the editorial guidelines were a bit loose, though I do understand the logic behind it. I poked around in the HTML for an Italian restaurant and found the following metadata: Category Types: Vegetarian Restaurants, American Restaurants, Barbecue Restaurants. Interesting combination…vegetarian and barbecue.

There are two sponsored results. One is a local restaurant review site, which is reasonable. But the other is for a hotel. I happen to know that it is the hotel where one of San Francisco's best vegetarian restaurants is located, but many people won't clue into that.

There are some nice refinement options, such as by rating, price, and atmosphere, and you can choose to view results on a map. It would be nice if Yahoo integrated their general web search results the way Google does.

Shopping Searches on Google and Yahoo

(Taking a break from the Defining Relevancy series. I will return to that again later.)

Intro
Google and Yahoo have relevant results for many non-commercial searches, but I am often disappointed by their results for shopping queries. Google and Yahoo have become every search engine optimizer and spammer’s target, and so they get “hacked” by spammers fairly often. They also get bombarded by the big shopping sites such as Amazon, eBay, Bizrate, etc. These companies depend on prominent search results on Google and Yahoo in order to survive, and so they target their efforts to doing just that. Oftentimes these sites are useful, but it troubles me that a small group of sites are dominating search results. Not only does it limit the immediate usefulness for users, but it also limits the opportunities of other merchants to sell online, and that will hurt us all in the long run.

I would like to look at some specific queries. To be sure that my queries are recognized as being commercial in nature, I have formulated them all to include the word “buy”.

Google
buy printers
The first result is from ZDNet, the second is from CNET. Remember when CNET bought ZDNet? The two sites are indeed different, but take a look at the display titles:
Buy printers – Best printers – Compare printer prices – ZDNet …
Buy printers – Best printers – Compare printer prices – CNET …
Looks fishy to me. Shopping.com and Shop Genie show up prominently as well.

buy ipod
Where is the official Apple store? Nowhere. And look, there is Shopping.com and Shop Genie again. Hi guys!

buy sports goggles
A scan through the results shows some of the usual suspects: MySimon, Dealtime, and Amazon in this case.

buy soccer ball
The first four listings in order are Bizrate, Bizrate (again), Epinions, and Amazon.

And check out this query:
buy some time before I die (intended to be somewhat nonsensical).
Amazon gets the first two spots and there’s Epinions at the bottom of the list.

What is my point with all of this? My point is to demonstrate that there are certain sites which consistently show up in Google results if the query includes “buy” or is otherwise shopping related. Is this bad? I do think it is problematic. Amazon, although a great shopping site, may not always be the best place to buy everything. But they do have a monopoly of sorts on Google for shopping searches.

Yahoo
Let’s see how Yahoo is doing with these same shopping queries.

buy printers
Buy.com, Bizrate, and ZDNet are there. But so are the homepages for Dell and HP. That sounds good at first, but since I am being nitpicky, why are the homepages returned and not Dell and HP’s printer pages?

buy ipod
Amazon has positions 3, 4, 9, and 10. Not good. Why four positions instead of just one or two? Official Apple sites show up in positions 2, 5, 6, and 7; though again, do I need four Apple sites in my top ten? Buy.com sneaks in at number 8. Is this better than Google? Yes and no. It is much better in that the official iPod site is included, but it is worse in that there is less diversity in the results.

buy sports goggles
Again, we see certain suspects showing up: Overstock, Shop.com, Shopping.com, and eBay. However, some other sites have managed to sneak in as well, which is good.

buy soccer ball
The first result is from soccer.com. Sounds promising, but it actually goes to the homepage and not the soccer ball page. Number 2 is a poster from Art.com. The rest of the results are not so good: there is a soccer ball rug, a soccer ball piñata, and a soccer memorabilia store.

buy some time before I die
Amazon and eBay are there sure enough, but there are also some other sites to provide diversity.

Froogle and Yahoo Shopping
Overall, my shopping experience has been passable, but not great. Of course I am not the first to point these things out, and I suppose the two companies realized this long ago, which is why they both launched shopping tools. Google has Froogle, which for some reason is still in beta even though it launched in 2003 and is even one of the tabs on Google's front door. Yahoo has Yahoo Shopping.

Looking at some examples from above, the results are much better on both engines.
Froogle:
buy ipod
eBay is still there, but the other results come from a variety of merchants. Not only that, but the interface is much more advanced, allowing users to do things like sorting by price and by store. There are also thumbnails to preview the products.

buy soccer ball
Shop.com and Buy.com are prominent again, plus Froogle’s own comparison option. Overall, good results.

Yahoo Shopping:
buy ipod
Buy.com is in there, but also a whole variety of other merchants. And the interface is sweet. There are previews and some really good sorting options, such as specifying desired hard-drive size.

buy soccer ball
Buy.com is front and center in position 1, but the remaining listings are diverse, which is good. However, most of the listings are selling soccer-ball-related things like video games, shin guards, soccer ball charms, etc. That is very bad. If I wanted those things I would search for those things. Trust me.

Conclusion
For shopping queries it is not worth doing general web searches on Google and Yahoo. Although many users optimistically think they should be able to do shopping searches using general web search, the reality is otherwise.

Google recognizes and admits this and promotes Froogle results above its general web results, but Yahoo does not. For general web searches, I think Yahoo should promote its shopping results for certain queries, since the user experience is better on Yahoo Shopping. The trick is to base it on the query, so that only a fraction of searches trigger Froogle or Yahoo Shopping suggestions; I do not want to see those recommendations for non-commercial queries. Both engines have probably built up heuristic lists of commercial queries over time, and those lists could be used for this purpose. Maybe Google is already doing so, because soccer promotes Google News instead of Froogle, whereas soccer ball correctly recommends Froogle results.
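As a rough illustration of the kind of query-based trigger I have in mind, here is a minimal sketch of a keyword-list heuristic. The word list and the logic are invented for the example; they are not how Google or Yahoo actually classify queries.

```python
# Invented list of commercial trigger words -- a real engine would presumably
# build and refine such a list from its query logs.
COMMERCIAL_TERMS = {"buy", "price", "prices", "cheap", "discount", "shop", "sale"}

def looks_commercial(query: str) -> bool:
    """Return True if the query contains an obviously commercial term."""
    return any(word in COMMERCIAL_TERMS for word in query.lower().split())

for q in ["buy soccer ball", "soccer", "cheap printers", "chess openings"]:
    suggestion = "shopping results" if looks_commercial(q) else "general web results"
    print(f"{q!r} -> promote {suggestion}")
```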

Both engines could improve their targeting for shopping queries. Sometimes the results are too broad and other times too specific. As a user I can be trusted to search for what I want to buy. If I want a soccer piñata I will search for that. So instead of doing broad matches to my query, the product results should be specific and targeted.

And lastly, it would benefit users if there were more diversification of results. Whether it’s done through source clustering or another method, providing a broader range of merchants will improve my online shopping experience.
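By source clustering I mean something as simple as capping how many results any single host can contribute near the top of the page. Here is a minimal sketch of that idea; it is my own illustration, not how either engine actually works.

```python
from collections import defaultdict
from urllib.parse import urlparse

def diversify(urls, per_host=1):
    """Keep at most per_host results from any single host, preserving rank order."""
    counts = defaultdict(int)
    kept = []
    for url in urls:
        host = urlparse(url).netloc
        if counts[host] < per_host:
            kept.append(url)
            counts[host] += 1
    return kept

results = [
    "http://www.amazon.com/ipod-photo",
    "http://www.amazon.com/ipod-mini",
    "http://www.apple.com/ipod/",
    "http://www.buy.com/ipod",
]
print(diversify(results))  # one Amazon result instead of two; Apple and Buy.com kept
```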

For shopping searches, users should use Froogle and Yahoo Shopping. And the engines should promote those options for relevant searches that are conducted using their regular web search interfaces.

Search Engine Relevancy. Part 3: A Call to Arms

A Call to Arms

[Part 3 of a series about relevancy.]

Two years ago, on December 5, 2002, I was working at LookSmart when Danny Sullivan at Search Engine Watch published a short piece called In Search of the Relevancy Figure. He wrote:

“Where are the relevancy figures? While relevancy is the most important ‘feature’ a search engine can offer, there sadly remains no widely-accepted measure of how relevant the different search engines are. As we shall see, turning relevancy into an easily digested figure is a huge challenge, but it’s a challenge the search engine industry needs to overcome, for its own good and that of consumers.”

It was a call to arms for the search industry to come together and figure out acceptable relevancy metrics. Enough with the empty claims about relevancy, it was time that some standards were set in place so that the public could know definitively which engine was the most relevant. It is a noble, but nearly impossible, idea.

At the time I was running a team that compared the relevancy of search results from a variety of engines. Our insights were used by the executives to make business decisions and by the search engineering team to help improve the company’s own algorithms. Danny Sullivan’s article was sent around the company and commented heavily upon, but in the end we agreed that relevancy figures need to come from the outside. We could advise about our methodologies and analyses, but that was all. After all, would a newspaper trust a book critic who worked for a publisher to review one of that publisher’s books? No, and neither will the public fully embrace a relevancy figure generated by a consortium of search engine companies, no matter how good the intentions and methodologies are.

The single biggest problem with a relevancy figure is the devastating, and in some cases illegitimate, damage it can cause a search company. Relevancy is not a universal figure; it is always subjective. It is not one magic number that encompasses all queries for all people. I am not arguing against reviews and criticism of search results; after all, critiques can provide search companies with solid analysis to build upon, as is my goal with the Search Lounge. But if Time Magazine or Newsweek published a cover article saying one engine is far and away the most relevant, imagine the effects. Users would desert the other engines and flock to that engine. And that is not right. Users need to use the engine that is best for each unique information need they have.

As one former colleague astutely pointed out to me, it is similar to what happens when magazines publish lists of top universities, top hospitals, top doctors, etc. The rankings are generalized, and with the publicity and hype that comes along with them, the winners get to make the rules. But each student and patient has unique needs that may or may not be best served by the university, hospital, or doctor that ranks the highest. The same can be said for search.

Relevancy analyses often comprise multiple sections or tests. There may be a part that looks at certain types of queries, such as geographical or shopping, or at queries with a certain number of words, or at natural language queries, or popular queries, or news stories, or ambiguous queries, and on and on. One engine may lose the overall relevancy test to another engine, but might win for local queries because it has targeted zip codes and city-level results. So if every user abandons ship and only uses the overall “winner”, then for local searches they will be getting inferior results. This notion can be taken down to the specific query level, where an engine may have good results for chess openings and bad results for chess books. You be the judge! The point I am making here is that an overall relevancy figure sabotages the end goal of helping searchers.
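A tiny worked example (the numbers are made up) shows how a single overall figure can hide exactly the category where the “losing” engine is stronger:

```python
# Made-up per-category relevancy scores on a 0-1 scale.
scores = {
    "Engine A": {"popular": 0.90, "shopping": 0.90, "local": 0.60},
    "Engine B": {"popular": 0.70, "shopping": 0.70, "local": 0.95},
}

for engine, by_category in scores.items():
    overall = sum(by_category.values()) / len(by_category)
    print(f"{engine}: overall {overall:.2f}, local {by_category['local']:.2f}")

# Engine A wins the single overall figure (0.80 vs 0.78), yet a user running
# local searches would clearly be better off on Engine B (0.95 vs 0.60).
```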

Another major problem is the number of possible ways to evaluate relevancy of search results. And I guarantee there is absolutely no way the industry – that is to say the major search engines, namely Ask, Google, MSN, and Yahoo – would ever agree on one relevancy figure. It just will not happen. Think of these analogies: are you getting all the relevant newspaper articles on a topic if you read only one newspaper? Are you watching the funniest sitcoms on TV if you're only watching one station? And most pertinent to this topic, are you getting the best books on a topic if you only visit one library or bookstore? The answer to all of these is a resounding NO. And the same is true of search engines. You are absolutely, completely, definitely not going to get all the best results on one search engine.

Another problem is frequency of analysis. Engines update and release new products so often that keeping up is a full-time job. Plus there are new search engines going live every month. I do not have the time to fully analyze and write a Search Lounge review for every single new improvement and release, but I can run one or two or even five queries on a new release or new engine to see if it passes the acceptability barrier. These days most, but not all, engines meet a minimum threshold of acceptability. But a minimum threshold is far from good. And even an engine that is not passable may update its index the very next day, and overnight the results may be dramatically better.

The back end behind each engine's crawling, indexing, and algorithm technologies is far too complex to produce the same results, and the queries each person will enter during multiple search sessions are too diverse. There are simply too many variables. I'll throw out one last analogy (I promise): if four people are told to make chocolate chip cookies, will all four batches taste the same? No. And going one step further, will the tasters all agree on which one is the best? Maybe, but probably not, assuming they all meet at least a minimum threshold of quality. So even if a report is released saying that an engine is the most relevant as judged by a fully objective, scientific study, the counterattacks from the other engines will be swift, immediate, and oftentimes legitimate. The media would be awash in press releases explaining why the test was incorrect, why the winner did not really win, and why the losing engine is actually more relevant and getting better every day. And there we are, right back where we started, with searchers unable to trust corporate press releases.

Next Installment – Part 4: Using Different Engines

Search Engine Relevancy. Part 1: Defining Relevancy
Search Engine Relevancy. Part 2: The Jaded Surfer

Search Engine Relevancy. Part 2: The Jaded Surfer

The Jaded Surfer

[Part 2 in a series about relevancy.]

Search engines love to tell us that they are the most relevant, and I don't blame them. A glance through any engine's press releases will turn up claims like “most relevant update”, “a dramatic increase in relevancy”, and so forth. These conflicting claims are like political rhetoric. Ultimately they have the opposite effect of what was intended, because we are all becoming jaded searchers. Here are some examples, and be sure to note how many claim to be the most relevant:

Search companies must be allowed to say their product is relevant. Otherwise, how can they market themselves? I am not disputing any of these companies' claims or passing judgment on their right to proclaim their search product as being relevant. The issue I am emphasizing is that these claims are subjective and need to be understood as such, because they cannot all be the most relevant, easiest, most extensive, and fastest.

Relevancy is like Pornography
So, with all that being said, what exactly is this elusive specter called relevancy, and how is it identified? It's like the classic question: what is pornography? We don't know how to define it, but we know it when we see it. Similarly, search engine relevancy is subjective and means something different to everyone. And not only that, it can mean different things to the same person at different times. A relevant result can be the site that provides the exact answer to a question; it can be the authority on its topic that provides a broad selection of information; or it might be a new site on a topic that is already known very well. That is why relevancy evaluations must be comparison-based. I know that the results on one engine are bad because another engine has better results. If the other engine did not exist, then my level of expectation would be lowered and it is possible that the first engine's results would seem relevant to me. Underlying this is the assumption that both engines pass a minimum threshold of relevancy. It is conceivable that the results could all be irrelevant, in which case the comparison does not even come into play.

Relevancy evaluation changes based on the types of information being sought. Each and every query for information needs to be reevaluated every single time. Users must never think that the results on their favorite engine are always the most relevant. If searchers do not find what they want, they can do a few things: they can come up with a new search strategy and approach the problem from a different angle; they can stick with the same strategy, but refine the query by making it narrower or broader, or by using advanced options and syntax; and, lastly, if the engine is still not finding what they are looking for, they can go elsewhere and try a different search tool.

Next Installment – Part 3: A Call to Arms

Part 1: Defining Relevancy

Search Engine Relevancy. Part 1: Defining Relevancy

Relevancy

[Part 1 in a series of postings about relevancy.]

Relevancy is subjective. Each searcher will have a different evaluation of a search tool’s relevancy, and not only that, but each searcher will change that opinion based on the specific search being done. Search relevancy is a moving target that will never be agreed upon. Novice searchers should look to experts for advice, but in the end must reach their own conclusions about relevancy. Those conclusions must be based on using a few search engines, because relevancy is contextual and can only be understood as a comparison.

This is a concept that has been discussed by countless other information professionals, many of whom will say that defining relevancy is not constructive because of its subjectivity. I disagree. I think all serious searchers need to have their own definition of relevancy in order to make judgments about search results. After all, why do we use search engines? We use them to find information. We don’t use them to be impressed by clever features, a large index, or an intriguing name. We use them to find what we are looking for, and we can only find information if the results we get for our searches are relevant. And we can only decide if results are relevant if we have a simple framework for making that decision. Relevancy is the key and the foundation for search. Without relevancy, the rest is fluff. On an engine that has good relevancy, the features that are built around it become especially valuable. On an engine that has poor relevancy, the features are useless.

There are a slew of companies offering various takes on searching electronic sources. Some companies are searching sources such as databases, archives, and home computers, while on the Web there are general search engines, visual engines, clustering engines, natural language engines, and so forth. There are also specialty search tools – tabs or advanced search on general search engines – that focus on news, blogs, images, and so forth. It is great to have these tools, but none of the bells and whistles mean a thing if the results are not relevant. Without relevancy, users will not come back no matter how many special features are available. How often will I visit a restaurant with great atmosphere, but bad food? Not often.

Definitions of Search Engine Relevancy
With relevancy being such an important part of search, how is this elusive term defined when it comes to search engines? Here are my definitions. I am sure your definition will be different, and even if it is similar, when it comes to actually evaluating search results people will not always agree: I may think a result is relevant when someone else thinks it is not, and because of the subjective nature of relevancy evaluation we can both be right. So, with all those caveats, let me present my definitions.

Relevancy: A measure of how well a search tool finds the information being sought.
[Sound too simple? Please, I welcome any other definitions because the more complicated I tried to make my definition, the more I just kept coming back to this simple sentence.]

To break it down further, I think of relevancy in terms of three levels or grades:

Relevant: the search result provides the information I am looking for. It is that simple.

Somewhat relevant: the result is close, and may even propel me along a path that leads to the information I am looking for, but it does not exactly have what I want. A somewhat relevant result is sometimes valuable because it suggests a different way and different terms for a search.

Not relevant: the site provides no help to me. It may contain the terms I searched for, but the context is wrong. It is that simple.
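One simple way to put these grades to work when comparing engines on a single query is to grade each of the top results and average them. Here is a minimal sketch of that bookkeeping; it is an illustration of the idea, not a formal methodology.

```python
from enum import Enum

class Grade(Enum):
    RELEVANT = 1.0
    SOMEWHAT_RELEVANT = 0.5
    NOT_RELEVANT = 0.0

def page_score(grades):
    """Average grade across one results page, e.g. the top ten results for a query."""
    return sum(g.value for g in grades) / len(grades)

# Nine vegetarian-only restaurants and one miss, as in the Google Local example above.
google_top_ten = [Grade.RELEVANT] * 9 + [Grade.NOT_RELEVANT]
print(page_score(google_top_ten))  # 0.9
```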

Next Installment – Part 2: The Jaded Surfer
