Archive for the ‘Semantic Web’ Category

My Experiences Attempting To Scale The Semantic Web

November 9, 2010

Last week I gave a long-overdue presentation on my experiences at Radar Networks, where we attempted to build a consumer product using Semantic Web technologies. I presented it at the San Francisco Semantic Web Meetup, hosted at my old company CNET (now owned by CBS).

I don’t work in this space anymore, but I wanted to share the experiences I had during the two years I worked with the Semantic Web. This is a very engineering-centric presentation built around the idea of engineering web-scale systems. Hopefully it comes in handy for some of you out there interested in this slowly growing space.

The Semantic Question: To Delete or Not To Delete

September 12, 2008

Originally published on semanticweb.com


A few months back I posed a question to the folks at DERI (the Digital Enterprise Research Institute at the National University of Ireland, Galway) when they came to visit Radar Networks. This is a question I have struggled with for a long time, seeking answers anywhere I could. When I asked them, I heard the same question in response that I always hear.


Why? Why would you ever want to delete something out of the ontology?


Ontologies were not originally created to have things removed from them. They were created to classify and organize information in a relational format. Just because a species goes extinct doesn’t mean it should be removed from the domain of animals, does it? The same goes for a car that isn’t manufactured anymore, or a disease that has been officially wiped out. These and many more are probably the reasons why Semantic Web gurus and ontologists alike don’t like the idea of deleting entities.


I am helping to create a social site where users generate content: objects, notes, data, and connections to other people and things, all of it their own. If they want to delete something they have created, so be it. Sounds easy, right? Well, yes and no. This problem is dealt with throughout computer systems; it is essentially about managing pointers in memory. You can’t delete something that other things are pointing to. Who knows what will happen, or how our application will respond, when certain pieces of information are missing? Some things we account for on purpose because they are optional, but some things we just can’t. Every application has its unique constructs, whether it is built on a relational database or a triple store.


So what I have to do is define ontological restrictions stating which links can and cannot be broken. On top of that, we must worry about permissions: am I allowed to break a link from someone else’s object to my own? And what if the object being deleted has dependent objects that cannot exist alone or, more importantly, don’t make sense on their own? A great example of this is a user object and a profile object: there should be either zero or two, never just one.


My friend and coworker Jesse Byler had dealt with a similar dependent-object problem in the application tier a few months back. He had written a recursive algorithm that would spider the graph until it hit the lowest leaf node matching certain criteria, and then begin to operate. I took this same principle, pushed it down into our platform, and began to mark restrictions in our ontology.
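

The shape of that algorithm is roughly the following. This is a minimal sketch, not our actual platform code; GraphNode and its methods are hypothetical stand-ins for whatever your graph exposes.


import java.util.HashSet;
import java.util.Set;

// Hypothetical node interface; the real objects live in the triple store.
interface GraphNode {
    Iterable<GraphNode> getDependents();
    void remove();
}

public class CascadeDeleter {

    public void delete(GraphNode root) {
        delete(root, new HashSet<GraphNode>());
    }

    private void delete(GraphNode node, Set<GraphNode> visited) {
        if (!visited.add(node)) {
            return; // already handled; guards against cycles in the graph
        }
        // Spider down to the lowest dependent leaf nodes first...
        for (GraphNode dependent : node.getDependents()) {
            delete(dependent, visited);
        }
        // ...then operate on the way back up, so no dependent outlives its owner.
        node.remove();
    }
}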


This is where it became tricky. Some relationships are bi-directional and some are not. In the user and profile example above, the relationship is bi-directional; however, many of our relationships are not. An example of this is a group and an invitation to that group: the group can exist without invitations, but an invitation cannot exist without the group.
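

To make that concrete, here is a hedged sketch of how such restrictions might be declared in code; the Dependency enum and the relation names are invented for illustration and are not our actual ontology syntax.


import java.util.HashMap;
import java.util.Map;

public class DeletionRules {

    enum Dependency {
        MUTUAL,   // bi-directional: delete both sides or neither (user <-> profile)
        CASCADES  // uni-directional: dependents die with the owner (group -> invitation)
    }

    static final Map<String, Dependency> RULES = new HashMap<String, Dependency>();
    static {
        RULES.put("user/profile", Dependency.MUTUAL);
        RULES.put("group/invitation", Dependency.CASCADES);
    }
}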


All in all, the design pattern isn’t overly complex, nor are the ontological restrictions. But as a whole, it makes for an interesting problem with a lot of nuance. Careful consideration must go into this process because mistakes could be catastrophic: data integrity is paramount, and one bad linkage could leave dangling references that put the application in a terrible state. Although simple in principle, the execution is anything but.


And for those of you who are still asking yourselves why?


Here’s the answer: scalability. As if working with a triple store weren’t hard enough, keeping useless data around that will never be used again will definitely make matters worse. We are attempting to build a practical, everyday application, not classify organisms. Surely there is a place somewhere in the mind of the ontologist for thinking practically about using ontologies for data storage. Isn’t there?


As more applications are built using OWL and RDF, this problem will become more and more real — and there’s nothing the ontologist can do about it but adapt, or die a little inside. Either way, at the end of the day, I am still an engineer trying to make do with what I have.


If we must delete, then so be it.

SEO and the Semantic Web

July 2, 2008

I posted this over at SEOmoz’s YOUmoz, their user-generated blog. If you happen to be a member and like the article, please vote for it so it will get promoted to their main blog.


From SEO and the Semantic Web on YOUmoz:

With the proliferation of the Semantic Web, all of our data will be structured and organized so perfectly that search engines will know exactly what we are looking for, all of the time. Even the newest of newbies will be able to create the kind of well-structured site that would take tens of thousands of dollars to build today. Everyone’s information will be so precise and semantically correct that there will be no need for Search Engine Optimization anymore!

The fact of the matter is, this is never going to happen. As a long-time SEO practitioner myself, I am very interested in the ramifications of the Semantic Web for today’s search, especially because I am tasked with optimizing Twine when it first becomes publicly readable this summer.

Before we dive too deep, let’s first look at what SEO experts and professionals do today. In a nutshell, we research, study, and test hypotheses learned by watching the heuristics of a search algorithm. We implement by writing clean, semantically correct HTML in certain combinations that allow robots to more easily assess the meaning of a page. We use CSS to abstract the presentation layer, follow good linking structures, add proper metadata, and write concise paragraphs. We organize our information in a meaningful way to show bots clean, parseable HTML. In one sense we are information architects; in another, we are marketers.

But what would happen if a search engine company published its algorithm? That probably isn’t going to happen anytime soon, but what if they told us exactly what they were looking for? That’s what the Semantic Web is going to do to search. Just the other day Yahoo announced SearchMonkey for exactly this purpose, and it is only going to get bigger. Being told how to mark up your information certainly takes a lot of the guesswork out of it. But in terms of the role of the SEO expert or professional, I don’t think we can retire just yet.

The Semantic Web is organized by people, just like the Web of today. The only difference is that now we are going to organize around better standards. Just as people have a hard time organizing their closets, attics, and garages, people have a hard time organizing their websites. Although the Semantic Web will add structure to the Internet, make it easier for novice users to create structured content, and change the way we search, there will still be a need for experienced help.

Enter SEO. Some of our roles may change, but for the near future there will still be a lot of similarities. The need to study and analyze robot behaviors to better tune information isn’t going away. We will still have to stay on top of emerging trends, search technologies, and organic ways to drive traffic. The fact of the matter is, nothing is going to change drastically for a while. In the near term, I am mostly worried about how to integrate Twine into the Web of today.

Not very semantic, huh? Well, that’s not to say we aren’t going to integrate with microformats, display RDF in our pages, and publish our ontology. All of this is extremely important as the Semantic Web emerges; however, in a world where search is run by Google, we have to cater to Google. There is a growing number of semantic search engines and document indices out there, which are definitely raising mainstream awareness. Yahoo just jumped on the semantic bandwagon publicly, and you know Google can’t be too far behind.

In conclusion, there’s nothing to worry about anytime soon. The SEO expert’s salary isn’t going back into the company budget. We still have to tune our pages to the beat of Google’s drum for the time being. When things do take a drastic turn, we will adapt and overcome as we always have; that’s what good SEO does. As for me, I will tune Twine just as I used to tune pages over at CNET, following the teachings of Sir Matthew Cutts et al.

Semanticweb.com published my semantic tagging article

June 23, 2008

Unfortunately it’s been a while since I have written anything new and interesting here, although I have been working on some very exciting stuff (more on that later). In the meantime, Josh Dilworth from our great PR team at Porter Novelli submitted my last article to semanticweb.com, and they published it! It’s the first thing I have ever written that has been published somewhere besides this dinky little blog ;). Anywho, for those who didn’t read it the first time, here is the article on semanticweb.com.

Tagging and the Semantic Web

May 20, 2008

A while back I commented on a TechCrunch article quoting my CEO regarding keyword searches in the Semantic Web space. My comment was later quoted on the blog of Faviki, a semantic startup that lets users tag web pages with structured Wikipedia data. Finding this prompted me to start writing more about the Semantic Web on my own blog. This is actually the first time I have ever posted about someone else’s post.

(The following is based on a presentation I gave on the subject in September of 2007.)

Tags the way they are implemented today

The way the better Web 2.0 sites implement tags involves faceting. I have discussed this in a previous blog post about faceting with Lucene and Solr, but in a nutshell, faceting allows you to group documents or objects together based on attributes; for example, give me all documents about ‘George Bush’ and ‘Washington’ (see the sketch after the list below). The problem with these attributes is that they have little or no value on their own, and they certainly are not understood by computers. They are just strings denoting some type of concept. Here is a short list of limitations which I feel the Semantic Web will address:


- Tags do not provide enough metadata to make meaningful comparisons
- More information is needed about them besides their origin
- Tags are essentially a full-text search mechanism, although faceting helps
- We need richer relationships between tags and the objects they pertain to
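

For reference, a faceted tag search of the kind described above looks roughly like the following with the SolrJ client. This is a hedged sketch: the ‘tag’ field name and the local URL are assumptions for illustration, not a real production setup.


import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class TagFacetExample {
    public static void main(String[] args) throws Exception {
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");

        SolrQuery query = new SolrQuery();
        // Match documents carrying both string tags...
        query.setQuery("tag:\"George Bush\" AND tag:\"Washington\"");
        // ...and ask Solr to count the other tags they co-occur with.
        query.setFacet(true);
        query.addFacetField("tag");
        query.setFacetMinCount(1);

        QueryResponse response = server.query(query);
        System.out.println(response.getFacetField("tag").getValues());
    }
}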

The solution: tags as objects

By allowing users to tag an object with another object, we can make extremely interesting comparisons; discerning much more information about the original object becomes simple and accurate. With this type of interrelationship we can pivot through the data like never before, not with full-text search but with object-graph linkages that both machines and humans can understand. Let’s go over an example.


Let’s say a user adds a note into our system ranting about a beet farmer named William Gates who lives in Washington state. The user goes on to discuss his beets and farming techniques in great detail, mentioning nothing about software or Windows Vista, of course. In the current Internet model the user would tag this note with strings like ‘William Gates’, ‘Bill Gates’, ‘beets’, etc.


Now another user comes along and starts digging through documents tagged ‘Bill Gates’ to try to find new articles about Vista. Unfortunately, many searches will turn up bad results, especially if the density of the phrase ‘Bill Gates’ in the document about beets is great enough. That said, the other direction would work more as intended: searching on the tags ‘Bill Gates’ and ‘beets’ together would yield more of the expected results.


In the Semantic Web model, the document about William Gates the beet farmer would be tagged with the William Gates object, which could contain a plethora of metadata: his location, occupation, etc. Now when we look at this document there is no guessing as to what it refers to, especially from a machine’s point of view. This is exactly what the Semantic Web was built for. In this model we are not relying on linguistics, natural language processing, or full-text search. We are relying on hard links that machines can understand and relate to.
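

As a rough illustration of the difference, here is what that hard link might look like as RDF, sketched with the Jena API. All URIs and property names here are invented for the example; this is not our platform’s actual schema.


import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.rdf.model.Property;
import com.hp.hpl.jena.rdf.model.Resource;

public class ObjectTagExample {
    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        String ns = "http://example.org/";

        Property occupation = model.createProperty(ns, "occupation");
        Property location = model.createProperty(ns, "location");
        Property taggedWith = model.createProperty(ns, "taggedWith");

        // The tag is a first-class object carrying its own metadata...
        Resource williamGates = model.createResource(ns + "person/william-gates-the-farmer")
            .addProperty(occupation, "beet farmer")
            .addProperty(location, "Washington state");

        // ...so tagging the note creates a hard, machine-readable link,
        // not just a string that happens to match 'Bill Gates'.
        model.createResource(ns + "note/beet-rant")
            .addProperty(taggedWith, williamGates);

        model.write(System.out, "RDF/XML");
    }
}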

The disambiguation page (formerly the tag page of Web 2.0)

What about regular string tags? The Semantic Web can’t possibly understand everything! The fact of the matter is, that’s true, so we still support regular string tagging. Some things, like adjectives and verbs, are less concrete than proper nouns and may not yet deserve their own objects. However, let’s think about actual language for a second: the semantics behind how we describe things.


Take the adjective ‘cool’. Well, first of all, what are you looking for? Nouns? A grouping of multiple nouns? Probably ‘cool’ nouns. A search on this tag could turn up anything and everything, at many different levels. It could start by pulling in a definition from Wikipedia. Then it could group together a list of groups tagged ‘cool’, like the ‘Super Cars’ group or the ‘Fast Cars’ group. It would also show you users tagged ‘cool’ and documents tagged ‘cool’. But where it becomes really interesting is when you find the ‘cool’ string tag on a tag object! Now you can find proper-noun tags like ‘Ferrari’, as well as ‘Super Cars’ the proper noun.


Joining these tags together in a search would yield detailed results from rich metadata, like a list of Ferraris over the years represented as objects. Each car object would contain detailed specs on engine type, weight, horsepower, etc. Then, by examining the ‘Ferrari Enzo’ object, we can find all the people who used this tag on their bookmarks, links, documents, or other objects they created. With this information you can connect with those people, join their groups, and further your search for whatever it is you are interested in. The point is, everything is related at many different levels, and what links it all together are the adjectives and verbs that describe it.
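

That kind of pivot can be expressed as a graph query. Below is a hypothetical sketch using Jena’s ARQ SPARQL engine; the ex: vocabulary is invented for illustration and is not our real ontology.


import com.hp.hpl.jena.query.Query;
import com.hp.hpl.jena.query.QueryExecutionFactory;
import com.hp.hpl.jena.query.QueryFactory;
import com.hp.hpl.jena.query.ResultSet;
import com.hp.hpl.jena.rdf.model.Model;

public class CoolTagPivot {
    // Find every object tagged with a tag object that is itself tagged 'cool'.
    public static ResultSet findCoolThings(Model model) {
        String sparql =
            "PREFIX ex: <http://example.org/> " +
            "SELECT ?thing WHERE { " +
            "  ?thing ex:taggedWith ?tagObject . " +
            "  ?tagObject ex:stringTag \"cool\" . " +
            "}";
        Query query = QueryFactory.create(sparql);
        return QueryExecutionFactory.create(query, model).execSelect();
    }
}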

Conclusions

Being able to come at your data from every angle is important. Everyone thinks differently and everyone searches differently. The truth is, I think it’s going to be a while before machines really understand what us humans are talking about. It’s up to us to help organize data in a machine-readable format so the machines can share it; in return, it allows us to perform incredible searches like never before.


There are many common misconceptions around the Semantic Web. It is a very broad term with many facets, and we are going after what I feel is an attainable portion of the idea. Our platform may not try to fully comprehend and reason about what the term ‘cool’ means the way a human does, but you are a human. You, the user, understand what this term means and just how to use it. If not, our platform will definitely try to help.

PipedInputStream and PipedOutputStream with Java

April 10, 2008

Today I came across an interesting concurrency problem while deleting objects from the Social Graph (Semantic Web, remember?). I have been tasked with mass deletes throughout our system, including exporting the objects in case they ever need to be reassembled again. Since our graph is so large, and we could potentially be deleting tens of thousands of triples at a time, the serialized XML runs to roughly ten lines per triple in the resulting file. To write the output as quickly as possible, there was no need to hold the serialized XML in memory; the best thing to do was to pipe the output stream straight to our binary store.


Now, in order to do this, I need two threads: one to write and one to read. If you were to do this with one thread, you would most likely run into a nasty deadlock. Anyway, here’s what I came up with:


public InputStream openStream() throws IOException {
    final PipedOutputStream out = new PipedOutputStream();

    Runnable exporter = new Runnable() {
        public void run() {
            tupleTransformer.asXML( tuples, out );
            IOUtils.closeQuietly( out );
        }
    };

    executor.submit( exporter );

    return new PipedInputStream( out );
}


Can anyone see the problem? Well, unfortunately, I couldn’t either for over an hour. My unit tests would pass sometimes and fail others, which led me to believe I was dealing with a timing issue. It turns out that sometimes the writer thread finished and closed the PipedOutputStream before the PipedInputStream was even instantiated, so the reader missed the stream entirely, including the out.close(). The trick was to instantiate the two streams, in and out, at the same time, and only then start the output on another thread. Problem solved. Here is what the finished product looks like:


public InputStream openStream() throws IOException {
    final PipedOutputStream out = new PipedOutputStream();
    // Connect the reader to the writer *before* any thread starts writing,
    // so the pipe can never complete without a consumer attached.
    PipedInputStream in = new PipedInputStream( out );

    Runnable exporter = new Runnable() {
        public void run() {
            // Serialize the triples directly into the pipe...
            tupleTransformer.asXML( tuples, out );
            // ...and close it so the reader sees end-of-stream.
            IOUtils.closeQuietly( out );
        }
    };

    executor.submit( exporter );

    return in;
}

Copyright © 2005-2011 John Clarke Mills

WordPress theme is open source and available on GitHub.