Archive for the ‘Software’ Category

Can you see the concurrency flaw?

November 3, 2008

A coworker and I came across this block of code, which had been causing us numerous headaches over the past few weeks. For a while there was no delay in the loop at all, which burned an inordinate number of CPU cycles, but that's beside the point. After that was fixed with a short sleep, we were left with what you see below. Can you tell what's wrong?

   while ( true ) {
        TriggeredSend ts = queue.peek();
        if ( ts != null ) {
            switch ( doPost( ts ) ) {
                case SUCCESS:
                    ts.complete( true );
                    break;
                case FAILURE:
                    ts.complete( false );
                    break;
                case SYSTEM_UNAVAILABLE:
                    systemIsAvailable = false;
                    break;
            }
        }
        Thread.sleep( delay );
    }

As with most concurrency bugs, it's hard to determine what's actually happening without experience. You can turn on your debugger, but then you change the way the code behaves by slowing it down, essentially masking any race conditions. It turned out the same task was getting invoked multiple times, along with some other strange behavior. That led us to the block of code you see above.

Figure it out yet? What's happening is that the code peeks into the queue, assigns the result to a variable, and then processes it. Per the java.util.concurrent contract, peek() retrieves the head of the queue without removing it; it is meant for inspecting the queue, not for taking tasks off of it. Not only that, but after the task was invoked, another call actually pulled a task off the queue. So what happens when multiple threads run through that block and peek? They can both get the same task and invoke it. Bad bad bad.
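The fix is to use poll(), which retrieves and removes the head in one atomic step, so no two threads can ever receive the same task. A minimal demonstration of the difference (using plain Strings in place of our TriggeredSend tasks):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class PollDemo {
    public static void main(String[] args) throws Exception {
        BlockingQueue<String> queue = new LinkedBlockingQueue<String>();
        queue.put("task-1");

        // peek() leaves the element in place -- a second thread peeking
        // here would see (and process) the very same task.
        String peeked = queue.peek();

        // poll() removes the head atomically, so only one thread can
        // ever receive a given task.
        String polled = queue.poll();

        System.out.println(peeked);        // task-1
        System.out.println(polled);        // task-1
        System.out.println(queue.poll());  // null -- the queue is now empty
    }
}
```

With the timed overload, poll( delay, TimeUnit.MILLISECONDS ) also replaces the manual Thread.sleep(), blocking only until a task arrives or the timeout expires.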

The Semantic Question: To Delete or Not To Delete

September 12, 2008

Originally published on

A few months back I posed a question to the folks at DERI (the Digital Enterprise Research Institute at the National University of Ireland, Galway) when they came to visit Radar Networks. This is a question I have struggled with for a long time, seeking answers anywhere I could. When I asked them, I heard the same question in response that I always hear.

Why? Why would you ever want to delete something out of the ontology?

Ontologies were not originally created to have things removed from them. They were created to classify and organize information in a relational format. Just because a species goes extinct doesn't mean it should be removed from the domain of animals, does it? The same goes for a car that isn't manufactured anymore, or a disease that has officially been wiped out. These and many more are probably the reasons why Semantic Web gurus and ontologists alike don't like the idea of deleting entities.

I am helping to create a social site where users generate content: objects, notes, data, and connections to other people and other things, all of their own making. If they want to delete something they have created, so be it. Sounds easy, right? Well, yes and no. This problem is dealt with throughout computer systems; it is essentially about managing pointers in memory. You can't delete something that other things are pointing to. Who knows what will happen, or how our application will respond, when certain pieces of information are missing? Some things we account for on purpose because they are optional, but some things we just can't. Every application has its unique constructs, whether it is built on a relational database or a triple store.

So what I have to do is define ontological restrictions, stating which links can and cannot be broken. On top of that, we must worry about permissions. Am I allowed to break an object link from someone else's object to my own? And what if the object being deleted has dependent objects that cannot exist alone or, more importantly, don't make sense on their own? A great example of this is a user object and a profile object. There should either be zero or two, never just one.

My friend and coworker Jesse Byler had dealt with a similar problem in the application tier a few months back regarding dependent objects. He had written a recursive algorithm that would spider the graph until it hit the lowest leaf node matching certain criteria and then begin to operate. I took this same principle and pushed it down into our platform and began to mark restrictions in our ontology.

This is where it became tricky. Some relationships are bi-directional and some are not. In the user and profile example above the relationship is bi-directional; however, many of our relationships are not. An example of this would be a group and an invitation to that group. Sure, the group can exist without invitations, but the invitation cannot exist without the group.
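To make the cascade concrete, here is a sketch of the recursive walk in plain Java. The class and field names are hypothetical (our real version operates against the triple store and the ontological restrictions), but the shape is the same: leaves go first, so a surviving dependent never points at a vanished parent.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the cascading delete described above.
public class CascadingDelete {

    // Maps an object id to the ids of objects that cannot exist without it
    // (e.g. a group's invitations, or a user's profile).
    private final Map<String, List<String>> dependents;

    public CascadingDelete(Map<String, List<String>> dependents) {
        this.dependents = dependents;
    }

    // Deletes depth-first and returns the ids in deletion order:
    // the lowest leaf nodes first, the requested object last.
    public List<String> delete(String id) {
        List<String> deleted = new ArrayList<String>();
        for (String child : dependents.getOrDefault(id, Collections.<String>emptyList())) {
            deleted.addAll(delete(child));
        }
        deleted.add(id);
        return deleted;
    }
}
```

Deleting a group with two invitations, for instance, would yield the order invite1, invite2, group, so the group is only removed once nothing depends on it.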

All in all, the design pattern isn't overly complex, nor are the ontological restrictions. But as a whole, it makes for an interesting problem with a lot of nuance. Careful consideration must go into this process because mistakes could be catastrophic: data integrity is paramount, and one bad linkage could leave dangling references that put the application in a terrible state. Although simple in principle, the execution is anything but.

And for those of you who are still asking yourselves why?

Here's the answer: scalability. As if working with a triple store weren't hard enough, keeping useless data around that will never be touched again definitely makes matters worse. We are attempting to build a practical everyday application, not to classify organisms. Surely there is a place somewhere in the mind of the ontologist where he can think practically about using ontologies for data storage. Isn't there?

As more applications are built using OWL and RDF, this problem will become more and more real — and there’s nothing the ontologist can do about it but adapt, or die a little inside. Either way, at the end of the day, I am still an engineer trying to make do with what I have.

If we must delete, then so be it.

Open source home automation project

August 15, 2008

Being a long-time geek and open source advocate, I've finally decided to start an open source project for home automation. I've always dreamt of waking up to my curtains opening and my heat turning on, rather than the usual “beep! beep! beep!” that beckons me to work each day. How about checking on the house when I'm away, or opening the front door while I'm in the back of the house? The possibilities are endless and my ideas are just beginning to flow.

The project is called Open Automator and it can be found on SourceForge. It's still a very basic framework, but it's coming along well. It works using the Insteon protocol, which communicates over the power lines in your house. It's very similar to the X10 of yesteryear, but with updated speed, reliability, and redundancy. It costs a little more, but it will be well worth it. These devices can be found all over the internet.

This first rendition will be geared toward the iPhone and Simple Home Net's EZServe, which communicates via an XML socket connection. That said, I will be making this application very flexible, allowing for many different types of implementations, and not just Insteon either. The application is Java based, with Maven used for building and dependency management, coupled with Spring and Hibernate for database and model-view support. The final product will have Jetty and some sort of SQL engine rolled in to make the application extremely portable and platform agnostic.

I will be writing a lot more about this project in the months to come, as well as recording some online videos about how it works and how to set it up. I would like to make this product as simple as possible so it's usable by the masses. Just because it's open source doesn't mean it has to be complicated. If you are interested in becoming a developer, please let me know.

Maven2, my first impressions

May 6, 2008

Maven has been a big departure from the usual Ant builds I had become accustomed to. Although Ant doesn't handle dependency management the way Maven does, I had used Savant in conjunction with it, which worked quite well. Maven's constructs aren't complicated; the hard part was getting accustomed to Maven itself, which required me to forget everything I knew about Ant. Oh yeah, and a few days of pounding my head.

After the initial mental exercise I really got to like it. It let me easily handle multi-module projects and all their dependencies. It also has great plugin support for anything you can think of, one of my favorites being Jetty (developing locally with Tomcat is for suckers!). I also really like the pom.xml settings, as well as profiles for handling different environments. Next time I start a project from scratch, I will probably take another run at Maven.
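For the curious, a stripped-down pom.xml showing the two features I mentioned, the Jetty plugin and a profile, might look like this. The coordinates and ids here are illustrative, not from any real project:

```xml
<!-- Hypothetical minimal pom.xml; groupId/artifactId are placeholders. -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>my-webapp</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>war</packaging>
  <build>
    <plugins>
      <!-- "mvn jetty:run" serves the webapp locally, no Tomcat install -->
      <plugin>
        <groupId>org.mortbay.jetty</groupId>
        <artifactId>maven-jetty-plugin</artifactId>
      </plugin>
    </plugins>
  </build>
  <profiles>
    <!-- activate with "mvn -P production ..." to switch environments -->
    <profile>
      <id>production</id>
    </profile>
  </profiles>
</project>
```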

Ajax and IE 7: cache not invalidating

March 21, 2008

For the past day or two I had been struggling at work to figure out why Internet Explorer 7 would not honor the response headers stating not to cache the response. First, I tried setting the Expires date header to -1 so the response would expire immediately. QA confirmed that this solved the problem in some revisions of IE 7, but not all. After digging around the web, it turns out you have to set a few more headers, one of which I had never even heard of. Here's what solved the problem for me. This snippet is in Java but applies to any language:

response.setDateHeader( "Expires", -1 );
response.setDateHeader( "Last-Modified", System.currentTimeMillis() );
response.setHeader( "Pragma", "no-cache" );
response.setHeader( "Cache-Control", "private" );
// addHeader, not setHeader: a second setHeader would overwrite
// the "private" value set above
response.addHeader( "Cache-Control", "no-store, no-cache, must-revalidate" );

IE, ImageIO, and Microsoft’s ‘image/pjpeg’ prank

February 28, 2008

Today I came across an issue at work regarding the made-up MIME type 'image/pjpeg' when uploading images from IE 7. Java's ImageIO was choking on this type of file because it is not in the list of supported MIME types, which can be viewed by invoking ImageIO.getReaderMIMETypes(). After reading many blogs of others pounding their heads, making fun of Microsoft, and reaching for other solutions like ImageMagick, I found a fix: add this type to your list of supported types and move on. The format is made up; the bytes are an ordinary JPEG and will not cause any problems. Pass the input stream on to your file IO and be done with it. Microsoft owes me half an hour of my life back.
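A small sketch of the workaround (the class and method names here are made up for illustration): normalize IE's bogus type to image/jpeg before checking it against ImageIO's supported list.

```java
import java.util.Arrays;
import javax.imageio.ImageIO;

public class MimeNormalizer {

    // IE's nonstandard "progressive JPEG" type is an ordinary JPEG stream,
    // so mapping it to image/jpeg before the ImageIO check is safe.
    public static String normalize(String mimeType) {
        if ("image/pjpeg".equals(mimeType)) {
            return "image/jpeg";
        }
        return mimeType;
    }

    // True when ImageIO has a registered reader for the (normalized) type.
    public static boolean isSupported(String mimeType) {
        return Arrays.asList(ImageIO.getReaderMIMETypes())
                     .contains(normalize(mimeType));
    }
}
```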

Dynamic Proxies in Java

July 13, 2007

Up until recently I never had to write any code that dealt with reflection. Before now I worked for CNET, which is basically a series of different CMSes: one request, one response. My first week at the new job required me to build a JMS system that would accept method invocations across the wire, seamlessly for the producer. java.lang.reflect was the answer.

Once you wrap your head around the idea, it's pretty simple to write. Essentially, we are attempting to shield the caller from having to know anything about the method that is going to be invoked behind the scenes.

public class MyConcreteProxyClass implements InvocationHandler {

    private final Object obj;

    public MyConcreteProxyClass( Object obj ) {
        this.obj = obj;
    }

    public Object invoke( Object proxy, Method m, Object[] args )
            throws Throwable {
        try {
            // do stuff before or after, then delegate to the real object
            return m.invoke( obj, args );
        } catch ( Exception e ) {
            throw e;
        }
    }
}
Now we have to create the proxy interface, which the caller codes against. Any of its methods that the caller fires will result in a call to invoke() on the MyConcreteProxyClass object.

public interface ProxyInterface {
    public Object myMethod();
}

Now it's time to wire everything up. There are several ways to do this that help hide the proxying, like a static factory method, but this simply demonstrates how it works.

ProxyInterface proxy = (ProxyInterface) Proxy.newProxyInstance(
         ProxyInterface.class.getClassLoader(),
         new Class[] { ProxyInterface.class },
         new MyConcreteProxyClass( obj ) );

Java Concurrency in Practice

May 25, 2007

This is now one of my favorite books on Java, one I will probably read again just to be sure I have soaked up as much information as I can. It is a practical book written by a practical person who understands his audience: engineers. He is straight to the point with his code examples and doesn't bore you to death explaining every little detail or referencing lines of code and functions eight pages back. I wish more technical writers were this concise. There is something to be said for any technical writer who can get their message across in fewer words; nowhere can I think of a better example of the old adage “less is more”. I would highly recommend this book to any engineer, especially associates.

DoS attack at the office

May 18, 2007

Unfortunately for us, we don't have a full-time operations team at the moment. It's me, really: the engineer, the frontend web developer, and the admin. I'm lucky that I saw it, put two and two together, and stopped it.

Anywho, this script kiddie managed to actually take down our appservers because our servlet container was only set to use 512 MB of RAM! This was not my doing, although it had been operating fine for almost two years. Anyway, when we saw the spike in traffic we said: great, maybe Googlebot has come back around to save us from our perpetual Google dance. I noticed the increase in bot activity but thought nothing of it and went home. Little did I know it wasn't Googlebot.

The next day, i.e. today, I got into work to find our leads were still down but our page views were up. That's odd. How and why could this be happening? I knew this wasn't human traffic because Google's Urchin tracker wasn't registering the requests; it's JavaScript, and bots don't fire it. It had to be a malicious user. I dug through yesterday's logs, and it turned out to be some script kiddie in Australia making about 25 requests per second from a single IP address. What? No DDoS? Anyway, our machines performed well after more memory was allocated, though they were thrashing a little.

(Charts: page load time and CPU load during the attack.)

Faceting with Solr

May 14, 2007

After my discussion with Yonik Seeley, one of the Solr developers, I have come to realize that the way I was faceting was not correct. Although it worked, it was not built for speed or scalability. The proper way to query by facets is to use the 'fq' (filter query) parameter as many times as needed. Each filter's result set is then cached independently, speeding up the next query as you add more facets. In line with the example I have been posting about, where things have many tags, such a query returns anything matching the phrase 'stuff' that is also tagged with 'things' and 'junk'.
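The request I had in mind looks something like this (the field name 'tag' and the host/port are illustrative and depend on your schema and deployment):

```
http://localhost:8983/solr/select?q=stuff&fq=tag:things&fq=tag:junk
```

Because each fq clause is cached as its own filter, adding a third tag to this query reuses the cached sets for 'things' and 'junk' rather than recomputing the whole intersection.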


Copyright © 2005-2011 John Clarke Mills

Wordpress theme is open source and available on github.