Archive for the ‘Computers’ Category

Gigabit Ethernet for your home

March 9, 2009

It's always been my dream to have an automated house, and the first step is hardwired Ethernet. Whether you are going to be running video over Ethernet, VoIP, or just the good old Internet, you really just can't beat copper. Although wireless has come a long way since 802.11b, it will just never keep up. Besides the fact that copper performs flawlessly and extremely fast, there are none of the reliability issues that you inevitably run into while using WiFi. If anyone out there reading this has found a perfect WiFi access point that can run at gigabit speeds for hours at a time with 5–10 devices on it, please let me know. It would have saved me a lot of trouble!

For the full writeup with tons of pictures, please see my post Wiring For High Speed Ethernet on San Fran Vic.

San Fran Vic – Let the restoration begin

February 24, 2009

In true geek blogging obsession/narcissism, I have decided to blog the restoration of my 1890 San Francisco Victorian home at San Fran Vic. It's going to be a long journey and I've decided to document it for my friends, family, and any future owner of the house.

It should be interesting to learn what goes into an old house like this one, where it came from, and where it's going. Although I have a lot of experience engineering and building many, many things, this is my first home restoration. So follow along with me, learn from my mistakes, and try not to make them yourself like I am going to. Enjoy!

Hacking Apple TV, quickly and easily

January 23, 2009

For years I have been trying to create a Media Center to attach to my TV. I tried Windows Media Center years ago, Myth TV on Linux, and everything in between. Nothing was simple, cheap, and easy to set up. I have friends who have tried those off-the-shelf media boxes like the ones from Netgear and the like, but those have their pitfalls too. I wanted something open source, truly hackable, and preferably something Unix/Linux based. That's why I turned to the Apple TV.

For my use case, I did not care about disk storage. I am using my Apple TVs as terminals or endpoints to play media from storage somewhere else on the network. This allowed me to buy the cheap version of the Apple TV with a 40 GB drive for about $220. Not only that, there is a Toslink (optical) out that I will be attaching to a DAC and set of vintage tubes for crisp, beautiful sound. For that kind of money, it's pretty hard to beat the Apple TV.

Once in my possession, I immediately hacked it to expand its functionality. Without the hack, the Apple TV is actually pretty restricted and an overall weak product; however, Boxee helps make it all better. Boxee allows you to watch all those formats that iTunes does not support, like DivX, etc. Not only that, it allows you to watch Hulu as well as other online video services.

Hacking your Apple TV and Installing Boxee

There's actually nothing to it, really, and I would barely consider it hacking. A few folks out there have made it quite easy. In a nutshell, all you need to do is create a 'patchstick' on your PC using a thumb drive you have laying around. You insert it into your Apple TV's USB port, restart it, and voila! It installs itself and you're all set. You can get the USB Patch Stick creator from the Google code repository.

Can you see the concurrency flaw?

November 3, 2008

A coworker and I came across this block of code, which had been causing us numerous headaches over the past few weeks. For a while, there was absolutely no time delay, which was consuming an inordinate number of CPU cycles, but that's beside the point. After that was fixed with a short delay, we were left with what you see below. Can you tell what's wrong?

   while ( true ) {
        TriggeredSend ts = queue.peek();
        if ( ts != null ) {
            switch ( doPost( ts ) ) {
                case SUCCESS:
                    ts.complete( true );
                    break;
                case FAILURE:
                    ts.complete( false );
                    break;
                case SYSTEM_UNAVAILABLE:
                    systemIsAvailable = false;
                    break;
            }
        }
        Thread.sleep( delay );
    }

As with most concurrency bugs, it's hard to determine what's actually happening without experience. You can turn on your debugger, but then you are changing the way the code behaves by slowing it down, essentially taking out any race conditions. Turns out, the same task was getting invoked multiple times, as well as some other strange conditions. That led us to the block of code you see above.

Figure it out yet? Well, what's happening is that when the code peeks into the queue, it's assigning the result to a variable which is then being invoked. According to the java.util.concurrent spec, the peek method is used to see if the queue has tasks in it, not to take any tasks off the queue. Not only that, but after the task was invoked, another task was being pulled off the queue. So what happens when multiple threads come through that block of code and peek? They can both get the same task and invoke it. Bad bad bad.
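The fix boils down to removing the task atomically instead of inspecting it. Here is a minimal sketch (the class and task names are my own placeholders, not the actual production code) showing why poll() doesn't have this problem: it removes the head in one atomic step, so two threads can never both receive the same element.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class QueueFix {
    // peek() only inspects the head of the queue; poll() atomically
    // removes it, so two threads can never both receive the same task.
    static String takeTask(Queue<String> queue) {
        return queue.poll(); // returns null if the queue is empty
    }

    public static void main(String[] args) {
        Queue<String> queue = new ConcurrentLinkedQueue<>();
        queue.add("send-email");

        String first = takeTask(queue);   // the task comes off the queue
        String second = takeTask(queue);  // null -- already taken
        System.out.println(first + " / " + second);
    }
}
```

With poll(), a null result simply means another thread got there first, which is exactly the race the original peek-based loop could not detect.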

The Semantic Question: To Delete or Not To Delete

September 12, 2008

Originally published on

A few months back I posed a question to the folks at DERI (Digital Enterprise Research Institute) from the National University of Ireland when they came to visit Radar Networks. This is a question that I have struggled with for a long time, seeking answers anywhere I could. When I asked them, I heard the same question in response that I always hear.

Why? Why would you ever want to delete something out of the ontology?

Ontologies were not originally created to have things removed from them. They were created to classify and organize information in a relational format. Just because a species goes extinct doesn't mean it should be removed from the domain of animals, does it? The same goes for a car that isn't manufactured anymore or a disease that was officially wiped out. These and many more are probably the reasons why Semantic Web gurus and ontologists alike don't like the idea of deleting entities.

I am helping to create a social site where users generate content: objects, notes, data, and connections to other people and other things that are their own. If they want to delete something that they have created, so be it. Sounds easy, right? Well, yes and no. This problem is dealt with throughout computer systems. It is essentially all about managing pointers in memory. You can't delete something that other things are pointing to. Who knows what will happen or how our application will respond when certain pieces of information are missing? Some things we account for on purpose because they are optional, but some things we just can't. Every application has its unique constructs, whether it is built on a relational database or a triple store.

So what I have to do is define ontological restrictions, stating what links can and cannot be broken. On top of that, we must worry about permissions. Am I allowed to break an object link from someone else's object to my own? Also, what if the object being deleted has dependent objects that cannot exist alone, or more importantly, don't make sense on their own? A great example of this is a user object and a profile object. There should either be zero or two, never just one.

My friend and coworker Jesse Byler had dealt with a similar problem in the application tier a few months back regarding dependent objects. He had written a recursive algorithm that would spider the graph until it hit the lowest leaf node matching certain criteria and then begin to operate. I took this same principle and pushed it down into our platform and began to mark restrictions in our ontology.

This is where it became tricky. Some relationships are bi-directional and some are not. In the user and profile example above, the relationship is bi-directional; however, many of our relationships are not. An example of this would be a group and an invitation to that group. Sure, the group can exist without invitations, but the invitation cannot exist without the group.
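The recursive approach described above can be sketched roughly like this (all names here are hypothetical; the actual platform code isn't shown in this post): walk the dependents depth-first and delete the lowest leaves before their parents, so no dangling reference ever points at an already-deleted object.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class CascadeDelete {
    // Hypothetical node: an object plus the dependents that cannot
    // exist without it (e.g. a profile depending on its user).
    static class Node {
        final String name;
        final List<Node> dependents = new ArrayList<>();
        Node(String name) { this.name = name; }
    }

    // Depth-first: delete leaves first, then their parents, so the
    // graph never contains a reference to a missing object.
    static void delete(Node node, List<String> deleted, Set<Node> visited) {
        if (!visited.add(node)) return; // guard against cycles in the graph
        for (Node child : node.dependents) {
            delete(child, deleted, visited);
        }
        deleted.add(node.name);
    }

    public static void main(String[] args) {
        Node user = new Node("user");
        Node profile = new Node("profile");
        user.dependents.add(profile); // a profile cannot exist alone

        List<String> deleted = new ArrayList<>();
        delete(user, deleted, new HashSet<>());
        System.out.println(deleted); // the profile goes before the user
    }
}
```

A real implementation would also consult the ontological restrictions and permissions at each node before committing the delete; this sketch only shows the traversal order.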

All in all, the design pattern isn't overly complex, nor are the ontological restrictions. But as a whole, it makes for an interesting problem with a lot of nuance. Careful consideration must go into this process because mistakes could be catastrophic. Data integrity is paramount, and dangling references could leave the application in a terrible state due to one bad linkage. Although simple in theory, the execution is anything but.

And for those of you who are still asking yourselves why?

Here’s the answer. Scalability. As if working with a triple store wasn’t hard enough, keeping useless data around that will never be used again will definitely make matters worse. We are attempting to build a practical everyday application — not classify organisms. Surely there is a place somewhere in the mind of the ontologist where he can think practically about using ontologies for data storage. Isn’t there?

As more applications are built using OWL and RDF, this problem will become more and more real — and there’s nothing the ontologist can do about it but adapt, or die a little inside. Either way, at the end of the day, I am still an engineer trying to make do with what I have.

If we must delete, then so be it.

Open source home automation project

August 15, 2008

[Open Automator screenshot]

Being a long-time geek and open source advocate, I've finally decided to start an open source project for home automation. I've always dreamt of being able to wake up to my curtains opening and my heat turning on rather than the usual “beep! beep! beep!” that beckons me to work each day. How about checking on the house when I'm away, or opening the front door while I'm in the back of the house? The possibilities are endless and my ideas are just beginning to flow.

The project is called Open Automator and it can be found on SourceForge. It's still a very basic framework, but it's coming along well. It works using the Insteon protocol, which communicates over the power lines in your house. It's very similar to the X10 of yesteryear but with updated speed, reliability, and redundancy. It costs a little bit more, but it will be well worth it. These devices can be found all over the internet from sites like

This first rendition will be geared toward the iPhone and Simple Home Net's EZServe, which communicates via an XML socket connection. That being said, I will be making this application very flexible, allowing for many different types of implementations, not just Insteon. The application is Java based, with Maven being used for building and dependency management. This will be coupled with Spring and Hibernate to handle the database and model layers. The final product will have Jetty and some sort of SQL engine rolled in to make the application extremely portable and platform agnostic.
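To give a feel for the XML-over-socket design, here is a rough sketch. The XML format, device address, host, and port below are all placeholders of my own invention, not the real EZServe command schema; the point is only the shape of the approach: build an XML command, open a TCP socket, write the bytes.

```java
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class InsteonCommand {
    // Placeholder XML -- the real EZServe command format differs; this
    // only illustrates the socket-plus-XML shape of the design.
    static String buildCommand(String device, String action) {
        return "<command device=\"" + device + "\" action=\"" + action + "\"/>";
    }

    // Writes the command over a plain TCP socket (not invoked below,
    // since there is no device to talk to in this sketch).
    static void send(String host, int port, String xml) throws Exception {
        try (Socket socket = new Socket(host, port)) {
            OutputStream out = socket.getOutputStream();
            out.write(xml.getBytes(StandardCharsets.UTF_8));
            out.flush();
        }
    }

    public static void main(String[] args) {
        // Hypothetical Insteon device address, just to show the output.
        System.out.println(buildCommand("0A.1B.2C", "on"));
    }
}
```

Keeping the command-building separate from the socket transport is what makes it easy to swap in other protocols later, not just Insteon.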

I will be writing a lot more about this project in the months to come, as well as recording some online videos about how it works and how to set it up. I would like to make this product as simple as possible so it's usable by the masses. Just because it's open source doesn't mean it's going to be complicated. If you are interested in being a developer, please let me know.

New fuel map for the 2002

July 7, 2008

Fuel map for the ITB BMW 2002

It's been a long time since I last tuned the car, ever since I started on the new M3 motor. Just as I was getting back into practice and tuning for efficiency given the high gas prices, a friend gave me his fuel map for a similar car, also with Individual Throttle Bodies (ITBs). After about 30 minutes of tuning the car ran superbly!

It had taken me quite a long time to get where I was, and from the looks of the map above it would have taken me several more outings. One of my main trouble points was coming back off overrun and right off idle. The latter is a common problem with ITBs that normal cars with one throttle body don't have. When the throttles quickly open, the pressure drops to near full atmosphere, and therefore we get much better volumetric efficiency, requiring more fuel.

So far everything is performing wonderfully. The hesitation off idle is gone and the power band is strong all the way up. Also, overrun back onto wide-open throttle is amazing. The fuel efficiency appears to have improved as well. This may not make sense to some but hopefully this is useful to somebody out there (click on the picture above to see a larger image).

SEO and the Semantic Web

July 2, 2008

I posted this over at SEOmoz’s YOUmoz blog, their user generated blog. If you happen to be a member and like the article please vote for it so it will get promoted on their main blog.

From SEO and the Semantic Web on YOUmoz:

With the proliferation of the Semantic Web, all of our data will be structured and organized so perfectly that search engines will know exactly what we are looking for, all of the time. Even the newest of newbies will be able to create the most well-structured site that would take tens of thousands of dollars today. Everyone’s information will be so precise and semantically correct there will be no need for Search Engine Optimization anymore!

The fact of the matter is, this is never going to happen. Being a long-time SEO practitioner myself, I am very interested in the ramifications of the Semantic Web on today’s search, especially because I am tasked with optimizing Twine when it first becomes publicly readable this summer.

Before we dive too deep, let's first look at what SEO experts and professionals do today. In a nutshell, we research, study, and test hypotheses learned by watching the heuristics of a search algorithm. We implement by writing clean and semantically correct HTML in certain combinations in order to allow robots to more easily assess the meaning of a page. We use CSS to abstract the presentation layer, we follow good linking structures, add proper metadata, and write concise paragraphs. We organize our information in a meaningful way to show bots clean, parseable HTML. In some sense we are information architects; in another we are marketers.

But what would happen if a search engine company published their algorithm? Although that probably isn't going to happen anytime soon, what if they told us exactly what they were looking for? That's what the Semantic Web is going to do to search. Just the other day Yahoo announced SearchMonkey for just this purpose. It is only going to get bigger. Being told how to mark up your information certainly takes a lot of the guesswork out of it. But in terms of the role of the SEO expert or professional, I don't think we can retire just yet.

The Semantic Web is organized by people just like the Web of today. The only difference is that now we are going to organize around better standards. Just as people have a hard time organizing their closets, attics, and garages, people have a hard time organizing their websites. Although the Semantic Web will add structure to the Internet, make it easier for novice users to create structured content, and change the way we search, there is still a need for experienced help.

Enter SEO. Some of our roles may have changed, but for the near future there will still be a lot of similarities. The need to study and analyze robot behaviors to better tune information isn't going away. SEO professionals will still have to stay on top of the emerging trends, search technologies, and organic ways to drive traffic. The fact of the matter is, nothing is going to change drastically for a while. In the near term, I am mostly worried about how to integrate Twine into the Web of today.

Not very semantic, huh? Well, that's not to say we aren't going to integrate with microformats, display RDF in our pages, and publish our ontology. All of this is extremely important as the Semantic Web emerges; however, in a world where search is run by Google, we have to cater to them. There are a growing number of semantic search engines and document indices out there, which are definitely raising mainstream awareness. Yahoo just jumped on the semantic bandwagon publicly, and you know Google can't be too far behind.

In conclusion, there's nothing to worry about anytime soon. The SEO expert's salary isn't going back into the company budget. We still have to tune our pages to the beat of Google's drum for the time being. When things do take a drastic turn, we will adapt and overcome as we always have. That's what good SEO does. As for me, I will tune Twine just as I used to tune pages over at CNET, following the teachings of Sir Matthew Cutts et al.

published my semantic tagging article

June 23, 2008

Unfortunately it's been a while since I have written anything new and interesting here, although I have been working on some very exciting stuff (more on that later). In the meantime, Josh Dilworth from our great PR team at Porter Novelli submitted my last article to and they published it! It's the first thing I have ever written that has been published somewhere besides this dinky little blog ;) . Anywho, for those who didn't read it the first time, here is the article on

Tagging and the Semantic Web

May 20, 2008

A while back I commented on a Tech Crunch article quoting my CEO regarding keyword searches in the Semantic Web space.   My comment was later quoted on the Faviki blog, a Semantic startup involving tagging web pages with semantic wikipedia data.  Finding this prompted me to start writing more about the Semantic Web on my own blog.  This is actually the first time I have ever posted about someone else’s post.

(The following is based on a presentation I gave on the subject in September of 2007.)

Tags the way they are implemented today

The way the better Web 2.0 sites implement tags involves faceting. I have discussed this in a previous blog post regarding faceting with Lucene and SOLR, but in a nutshell, it allows you to group together documents or objects based on attributes. For example: give me all documents about ‘George Bush’ and ‘Washington’. The problem with these attributes is that they have little or no value on their own, and they certainly are not understood by computers. They are just strings denoting some type of concept. Here is a short list of limitations which I feel the Semantic Web will address:

- Tags do not provide enough meaningful metadata to make meaningful comparisons
- More information is needed besides their origin
- Tags are essentially a full text search mechanism, although faceting helps
- Need more relationships between tags and the objects they pertain to
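In plain code terms, string tagging with faceting amounts to grouping documents under opaque keys. This little sketch (my own toy example, not Lucene or SOLR) shows both what faceting buys you and its limitation: the tag is just a string, carrying no metadata of its own.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class StringTagFacets {
    public static void main(String[] args) {
        // Documents mapped to their string tags -- the tags carry no
        // metadata of their own, just the characters themselves.
        Map<String, List<String>> tagged = new HashMap<>();
        tagged.put("doc1", Arrays.asList("George Bush", "Washington"));
        tagged.put("doc2", Arrays.asList("Washington"));

        // Faceting: collect every document carrying a given tag.
        List<String> hits = new ArrayList<>();
        for (Map.Entry<String, List<String>> e : tagged.entrySet()) {
            if (e.getValue().contains("Washington")) {
                hits.add(e.getKey());
            }
        }
        Collections.sort(hits);
        System.out.println(hits);
    }
}
```

Note that "Washington" here could mean the state, the city, or the president; the string itself can't tell you, which is exactly the limitation the list above describes.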

The solution: tags as objects

By allowing users to tag an object with another object, we can make extremely interesting comparisons; discerning a lot more information about the original object becomes simple and accurate. With this type of interrelationship we can pivot through the data like never before, not with full text search but with object graph linkages that machines and humans can understand. Let's go over an example.

Let's say a user adds a note into our system ranting about a beet farmer who lives in Washington state by the name of William Gates. The user goes on to discuss his beets and farming techniques in great detail, mentioning nothing about software and Windows Vista, of course. In the current Internet model the user would tag this note with strings like ‘William Gates’, ‘Bill Gates’, ‘beets’, etc.

Now another user comes along and starts digging through documents tagged ‘Bill Gates’ to try and find new articles about Vista. Unfortunately, many searches will turn up bad results, especially if the density of the word ‘Bill Gates’ is great enough in the document about beets. That being said, the other direction would work more as intended, searching on the tags ‘Bill Gates’ and ‘Beets’ would yield more expected results.

In the Semantic Web model, the document about William Gates the beet farmer would be tagged with the William Gates object, which could contain a plethora of metadata: his location, occupation, etc. Now when we look at this document there is no guessing as to what it is referring to, especially from a machine's point of view. This is exactly what the Semantic Web was built for. In this model we are not relying on linguistics, natural language processing, or full text search. We are relying on hard links that machines can understand and relate to.
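The difference is easy to see in code. In this sketch (a toy model of mine, not the Radar Networks platform), the tag is an object carrying metadata, so two people sharing the same name never collide the way the string tags above do.

```java
import java.util.HashMap;
import java.util.Map;

public class TagObjects {
    // A tag that is an object with metadata, not just a string.
    static class Entity {
        final String name, occupation, location;
        Entity(String name, String occupation, String location) {
            this.name = name;
            this.occupation = occupation;
            this.location = location;
        }
    }

    public static void main(String[] args) {
        // Same name, same state -- but two distinct objects.
        Entity farmer = new Entity("William Gates", "beet farmer", "Washington");
        Entity mogul  = new Entity("William Gates", "software executive", "Washington");

        // Documents tagged with objects stay unambiguous; tagged with
        // the bare string "William Gates", they would be identical.
        Map<String, Entity> docTags = new HashMap<>();
        docTags.put("beet-note", farmer);
        docTags.put("vista-review", mogul);

        System.out.println(docTags.get("beet-note").occupation);
    }
}
```

A machine following the object link immediately knows which William Gates the note is about, with no word-frequency guesswork involved.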

The disambiguation page (was the tag page in Web 2.0)

What about regular string tags? The Semantic Web can't possibly understand everything?! The fact of the matter is, that's true. We still support regular string tagging. Some things are not proper nouns and are less concrete, like adjectives and verbs. They may not yet deserve their own object; however, let's think about actual language here for a second: the semantics behind how we describe things.

Take the adjective ‘cool’. Well, first of all, what are you looking for? Nouns? A grouping of multiple nouns? Probably ‘cool’ nouns. A search on this tag could turn up anything and everything from many different levels. It could start by pulling in a definition from Wikipedia. Then it could group together a list of groups tagged cool like the ‘Super Cars’ group or the ‘Fast Cars’ group. It would also show you users tagged ‘cool’ and documents tagged ‘cool’. But where it becomes really interesting is where you find the ‘cool’ string tag on a tag object! Now you can find proper noun tags like ‘Ferrari’ as well as ‘Super Cars’ the proper noun.

Joining these tags together in a search would yield detailed results from rich metadata, like a list of Ferraris over the years represented as objects. Each car object would contain detailed specs on engine type, weight, horsepower, etc. Then by examining the ‘Ferrari Enzo’ object we can find all the people who used this tag on their bookmarks, links, documents, or other objects they created. With this information you can connect with these people, join their groups, and further your search for whatever it is you are interested in. The point is, everything is related at many different levels. What links them together are the adjectives and verbs that describe them.


To be able to come at your data from every angle is important. Everyone thinks differently and everyone searches differently. The truth is, I think it's going to be a while before machines really understand what us humans are talking about. It's up to us to help organize data in a format that is machine readable so the machines can share, but in return it allows us to perform incredible searches like never before.

There are many common misconceptions around the Semantic Web. It is a very broad term which has many facets. We are going after what I feel is an attainable portion of this idea. Our platform may not try to fully comprehend and reason about what the term ‘cool’ means like a human does, but you are a human. You, the user, understand what this term means and just how to use it. If not, our platform will definitely try to help.

Copyright © 2005-2011 John Clarke Mills

WordPress theme is open source and available on GitHub.