Archive for the ‘Computers’ Category

Learning PHP after years of Java

April 17, 2010

It's been nearly six months since I started my new gig as lead engineer at Transpond, and I haven't posted anything about software, or much of anything else for that matter. I've been busy with many other projects as well as learning and writing PHP. Well, mostly writing, as the learning curve isn't bad at all. The more I was able to block Java and what I know of it out of my mind, the faster I was able to learn PHP.


Anyway, there are tons of books and blog posts about how to write PHP, so I'm not going to go into that. What I have compiled, though, is a list of things that helped me get over the hump. Although Java and PHP have a lot of similarities, their life cycles are different. Here are the differences that stood out the most in my mind.

Classes

Now that PHP (as of PHP 5) has a full object model, we can use classes as well as objects. Lots of new open source projects take advantage of them; however, a lot do not, especially older ones. What I find to be very common throughout PHP is the untyped array with lots of nested arrays. This appears to be very standard. The sooner you accept this, the sooner you will be able to code. The point of a dynamic language is to be able to build quickly and not deal with POJOs.
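As a made-up illustration: where a Java developer might define a small value class, idiomatic PHP often just nests associative arrays.

    // Hypothetical example: a "user" as nested associative arrays
    // rather than a typed class.
    $user = array(
        'name'    => 'Ada',
        'email'   => 'ada@example.com',
        'address' => array(
            'city' => 'San Francisco',
            'zip'  => '94110',
        ),
    );
    echo $user['address']['city']; // prints "San Francisco"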

Instantiation

Just like Java, classes in PHP have constructors and even destructors; however, they are not declared as you might think. You will need to use the special notation for these functions: __construct and __destruct, respectively. Also, be careful when building more complicated stacks or frameworks. Objects get destructed at unusual points in a request's lifecycle, so getting your debugger running can come in handy.
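A minimal sketch of the notation (the Connection class is made up):

    class Connection {
        public function __construct() {
            // called when the object is created via new Connection()
            echo "connected\n";
        }
        public function __destruct() {
            // called when the last reference is released, or at the end
            // of the request; the exact timing is up to the engine
            echo "disconnected\n";
        }
    }

    $c = new Connection();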


What is also interesting is that classes cannot be static in the way we would normally think about them. You can write a class whose methods are all static, which effectively makes the class static, but you cannot use the static keyword in a class declaration.
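In other words, a sketch like this (with a hypothetical MathUtil class) is as close as you get:

    class MathUtil {
        public static function square($x) {
            return $x * $x;
        }
    }

    echo MathUtil::square(4); // 16, no instance needed
    // but "static class MathUtil { ... }" would be a parse error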

Imports (require or include in PHP terms)

Unlike Java, PHP is file based, not package based. This makes for some interesting issues at first. Trying to grok this idea and deal with file paths can be daunting. What we found to be helpful was namespacing, which is a somewhat recent addition to PHP (5.3). We use it in conjunction with __autoload(), which makes class loading very simple. Essentially, we can dynamically load classes based on their namespaces rather than their file locations (our autoloader handles that mapping).
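Our actual autoloader is more involved, but the general shape is something like this sketch (the Acme namespace and directory convention are invented; spl_autoload_register() is the more flexible alternative to __autoload()):

    // lib/Acme/Widget.php
    namespace Acme;
    class Widget {}

    // bootstrap.php: map namespaced class names onto file paths
    function __autoload($class) {
        // e.g. Acme\Widget -> lib/Acme/Widget.php
        require 'lib/' . str_replace('\\', '/', $class) . '.php';
    }

    $w = new \Acme\Widget(); // triggers __autoload('Acme\Widget')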

Methods, functions, and void (or lack thereof)

As you probably already noticed, methods are called functions. These functions do not need to declare what type they will return, as this is a dynamic language. In fact, the same function could return nothing (void), an object, or a string. This puts the burden of good engineering on the caller.
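For instance, a single (hypothetical) function can legally return a different type down each path:

    class User {}

    function findUser($id) {
        if (!is_numeric($id)) {
            return 'invalid id';   // a string
        }
        if ($id == 42) {
            return new User();     // an object
        }
        // otherwise falls through and returns null,
        // the closest thing to void
    }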


Functions in PHP (methods in Java) are, however, similar in the way they are declared. They can have public, private, and protected declarations, as well as the static, abstract, and final keywords. This makes the transition from Java pretty simple; however, there is something strange I noticed. Member variables cannot have the final declaration, which I find frustrating, but there are obvious ways around it. The main reason for this is that most objects are passed by value, so modifying a member variable is harder than in Java. I will go into this more later.
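One of those workarounds is a class constant, which is about the closest PHP gets to a final field (Config is a made-up class):

    class Config {
        const MAX_RETRIES = 3;  // immutable, roughly like a final field
        public $timeout = 30;   // plain member; final is not allowed here
    }

    echo Config::MAX_RETRIES;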

Conclusions

So far I am really enjoying PHP. The environment is very simple and straightforward, there is a ton of support for it, and there is a plethora of open source projects to choose from. All of this makes for quick development and a quick ramp-up time. As I mentioned before, the sooner I stopped thinking about how Java worked, the sooner I was able to pick up PHP.

Gigabit Ethernet for your home

March 9, 2009

It's always been my dream to have an automated house, and the first step is hardwired Ethernet. Whether you are going to be running video over Ethernet, VoIP, or just the good old Internet, you really just can't beat copper. Although wireless has come a long way since 802.11b, it will just never keep up. Besides the fact that copper performs flawlessly and extremely fast, it has none of the reliability issues that you inevitably run into on WiFi. If anyone out there reading this has found a perfect WiFi access point that can run at gigabit speeds for hours at a time with 5-10 devices on it, please let me know. It would have saved me a lot of trouble!


For the full writeup with tons of pictures please see my post Wiring For High Speed Ethernet on San Fran Vic.

SanFranVic.com – Let the restoration begin

February 24, 2009

In true geek blogging obsession/narcissism, I have decided to blog the restoration of my 1890 San Francisco Victorian home at www.sanfranvic.com. It's going to be a long journey, and I've decided to document it for my friends, family, and any future owner of the house.

It should be interesting to learn what goes into an old house like this one, where it came from, and where it's going. Although I have a lot of experience engineering and building many, many things, this is my first home restoration. So follow along with me, learn from my mistakes, and try not to make them yourself like I am going to. Enjoy!

Hacking Apple TV, quickly and easily

January 23, 2009

For years I have been trying to create a media center to attach to my TV. I tried Windows Media Center years ago, MythTV on Linux, and everything in between. Nothing was simple, cheap, and easy to set up. I have friends who have tried those off-the-shelf media boxes like the ones from Netgear and the like, but those have their pitfalls too. I wanted something open source, truly hackable, and preferably Unix/Linux based. That's why I turned to the Apple TV.

For my use case, I did not care about disk storage. I am using my Apple TVs as terminals, or endpoints, to play media from storage somewhere else on the network. This allowed me to buy the cheap version of the Apple TV with a 40 GB drive for about $220. Not only that, there is a Toslink (optical) out that I will be attaching to a DAC and a set of vintage tubes for crisp, beautiful sound. For that kind of money, it's pretty hard to beat the Apple TV.

Once it was in my possession, I immediately hacked it to expand its functionality. Without the hack, the Apple TV is actually pretty restricted and an overall weak product; however, Boxee helps make it all better. Boxee allows you to watch all those formats that iTunes does not support, like DivX, etc. Not only that, it allows you to watch Hulu as well as other online video services.

Hacking your Apple TV and Installing Boxee

There's actually not much to do, really, and I would barely consider it hacking. A few folks out there have made it quite easy. In a nutshell, all you need to do is create a 'patchstick' on your PC using a thumb drive you have lying around. You insert it into your Apple TV's USB port, restart it, and voila! It installs itself and you're all set. You can get the USB Patch Stick creator from the Google Code repository.

Can you see the concurrency flaw?

November 3, 2008

A coworker and I came across this block of code, which had been causing us numerous headaches over the past few weeks. For a while there was no time delay at all, which was burning an inordinate number of CPU cycles, but that's beside the point. After that was fixed with a short delay, we were left with what you see below. Can you tell what's wrong?

   while ( true ) {
        TriggeredSend ts = queue.peek();
        if ( ts != null ) {
            switch( doPost( ts ) ) {
                case SUCCESS:
                    queue.take();
                    ts.complete( true );
                    break;
                case FAILURE:
                    queue.take();
                    ts.complete( false );
                    break;
                case SYSTEM_UNAVAILABLE:
                    systemIsAvailable = false;
                    break;
            }
        }
        Thread.sleep( delay );
   }

As with most concurrency bugs, it's hard to determine what's actually happening without experience. You can turn on your debugger, but then you are changing the way the code behaves by slowing it down, essentially masking any race conditions. It turns out the same task was getting invoked multiple times, along with some other strange behavior. That led us to the block of code you see above.

Figure it out yet? What's happening is that when the code peeks into the queue, it assigns the result to a variable and then invokes it. According to the java.util.concurrent spec, peek retrieves the head of the queue without removing it; it is not meant to claim a task. Not only that, but after the task was invoked, take() pulled a task off the queue, and not necessarily the one that was just processed. So what happens when multiple threads come through that block of code and peek? They can all get the same task and invoke it. Bad bad bad.
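One way to close the race, assuming the queue is a java.util.concurrent BlockingQueue, is to claim the task atomically with poll() instead of peeking first. This is a sketch of the idea, not necessarily the exact fix we shipped:

   while ( true ) {
        TriggeredSend ts = queue.poll(); // atomically removes the head, or returns null
        if ( ts != null ) {
            switch( doPost( ts ) ) {
                case SUCCESS:
                    ts.complete( true );
                    break;
                case FAILURE:
                    ts.complete( false );
                    break;
                case SYSTEM_UNAVAILABLE:
                    systemIsAvailable = false;
                    queue.offer( ts ); // re-queue; no other thread ever claimed it
                    break;
            }
        }
        Thread.sleep( delay );
   }

Because poll() reads and removes in one atomic step, two threads can never claim the same TriggeredSend.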

The Semantic Question: To Delete or Not To Delete

September 12, 2008

Originally published on semanticweb.com


A few months back I posed a question to the folks at DERI (the Digital Enterprise Research Institute at the National University of Ireland, Galway) when they came to visit Radar Networks. This is a question that I have struggled with for a long time, seeking answers anywhere I could. When I asked them, I heard the same question in response that I always hear.


Why? Why would you ever want to delete something out of the ontology?


Ontologies were not originally created to have things removed from them. They were created to classify and organize information in a relational format. Just because a species goes extinct doesn't mean it should be removed from the domain of animals, does it? The same goes for a car that isn't manufactured anymore or a disease that was officially wiped out. These and many more are probably the reasons why Semantic Web gurus and ontologists alike don't like the idea of deleting entities.


I am helping to create a social site where users generate content: objects, notes, data, and connections to other people and other things, all of which are their own. If they want to delete something that they have created, so be it. Sounds easy, right? Well, yes and no. This problem is dealt with throughout computer systems; it is essentially all about managing pointers in memory. You can't delete something that other things are pointing to. Who knows what will happen or how our application will respond when certain pieces of information are missing? Some things we account for on purpose because they are optional, but some things we just can't. Every application has its unique constructs, whether it is built on a relational database or a triple store.


So what I have to do is define ontological restrictions, stating which links can and cannot be broken. On top of that, we must worry about permissions. Am I allowed to break an object link from someone else's object to my own? Also, what if the object being deleted has dependent objects that cannot exist alone or, more importantly, don't make sense on their own? A great example of this is a user object and a profile object. There should either be zero or two, never just one.


My friend and coworker Jesse Byler had dealt with a similar problem in the application tier a few months back regarding dependent objects. He had written a recursive algorithm that would spider the graph until it hit the lowest leaf node matching certain criteria and then begin to operate. I took this same principle, pushed it down into our platform, and began to mark restrictions in our ontology.
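I won't reproduce his code, but the principle is a depth-first walk that deletes from the leaves back up, so nothing is ever removed while something else still depends on it. A hypothetical sketch (the graph representation and all names are invented for illustration):

    import java.util.*;

    class CascadeDelete {
        // id -> ids of objects that depend on it
        static Map<String, List<String>> dependents = new HashMap<>();
        // ids of objects that cannot exist without their parent
        static Set<String> cannotExistAlone = new HashSet<>();

        static void delete(String id) {
            // recurse to the leaves first, removing dependents that
            // make no sense on their own
            for (String child : dependents.getOrDefault(id, List.of())) {
                if (cannotExistAlone.contains(child)) {
                    delete(child);
                }
            }
            dependents.remove(id); // now safe to remove the object itself
            System.out.println("deleted " + id);
        }

        public static void main(String[] args) {
            dependents.put("user:1", List.of("profile:1"));
            cannotExistAlone.add("profile:1");
            delete("user:1"); // removes the profile first, then the user
        }
    }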


This is where it became tricky. Some relationships are bidirectional and some are not. In the user and profile example above, the relationship is bidirectional; however, many of our relationships are not. An example of this would be a group and an invitation to that group. Sure, the group can exist without invitations, but the invitation cannot exist without the group.


All in all, the design pattern isn't overly complex, nor are the ontological restrictions. But as a whole, it makes for an interesting problem with a lot of nuance. Careful consideration must go into this process because mistakes could be catastrophic. Data integrity is paramount, and dangling references could leave the application in a terrible state due to one bad linkage. Although simple in principle, the execution is anything but.


And for those of you who are still asking yourselves why?


Here's the answer: scalability. As if working with a triple store weren't hard enough, keeping useless data around that will never be used again will definitely make matters worse. We are attempting to build a practical everyday application, not classify organisms. Surely there is a place somewhere in the mind of the ontologist where he can think practically about using ontologies for data storage. Isn't there?


As more applications are built using OWL and RDF, this problem will become more and more real — and there’s nothing the ontologist can do about it but adapt, or die a little inside. Either way, at the end of the day, I am still an engineer trying to make do with what I have.


If we must delete, then so be it.

Open source home automation project

August 15, 2008

[Screenshot: Open Automator]
Being a long-time geek and open source advocate, I've finally decided to start an open source project for home automation. I've always dreamt of waking up to my curtains opening and my heat turning on rather than the usual "beep! beep! beep!" that beckons me to work each day. How about checking on the house while I'm away, or opening the front door while I'm in the back of the house? The possibilities are endless and my ideas are just beginning to flow.


The project is called Open Automator and it can be found on SourceForge. It's still a very basic framework, but it's coming along well. It works using the Insteon protocol, which communicates over the power lines in your house. It's very similar to the X10 of yesteryear but with updated speed, reliability, and redundancy. It costs a little bit more, but it will be well worth it. These devices can be found all over the Internet from sites like smarthome.com.


This first rendition will be geared toward the iPhone and Simple Home Net's EZServe, which communicates via an XML socket connection. That being said, I will be making this application very flexible, allowing for many different types of implementations, and not just Insteon either. The application is Java based, with Maven used for building and dependency management. This will be coupled with Spring and Hibernate to handle the model and database layers. The final product will have Jetty and some sort of SQL engine rolled in to make the application extremely portable and platform agnostic.


I will be writing a lot more about this project in the months to come, as well as recording some online videos about how it works and how to set it up. I would like to make this product as simple as possible so it's usable by the masses. Just because it's open source doesn't mean it's going to be complicated. If you are interested in being a developer, please let me know.

New fuel map for the 2002

July 7, 2008

[Image: Fuel map for the ITB BMW 2002]

It's been a long time since I last tuned the car, ever since I started on the new M3 motor. Just as I was getting back into practice and tuning for efficiency given the high gas prices, a friend gave me his fuel map for a similar car, also with Individual Throttle Bodies (ITBs). After about 30 minutes of tuning, the car ran superbly!


It had taken me quite a long time to get where I was, and from the looks of the map above it would have taken me several more outings. My main trouble points were coming back off overrun and right off idle. The latter is a common problem with ITBs that normal cars with a single throttle body don't have. When the throttles snap open, the vacuum collapses and manifold pressure rises to near full atmospheric, so volumetric efficiency jumps and the engine needs more fuel.


So far everything is performing wonderfully. The hesitation off idle is gone and the power band is strong all the way up. Also, overrun back onto wide-open throttle is amazing. The fuel efficiency appears to have improved as well. This may not make sense to some but hopefully this is useful to somebody out there (click on the picture above to see a larger image).

SEO and the Semantic Web

July 2, 2008

I posted this over at SEOmoz's YOUmoz blog, their user-generated blog. If you happen to be a member and like the article, please vote for it so it will get promoted to their main blog.


From SEO and the Semantic Web on YOUmoz:

With the proliferation of the Semantic Web, all of our data will be structured and organized so perfectly that search engines will know exactly what we are looking for, all of the time. Even the newest of newbies will be able to create the most well-structured site that would take tens of thousands of dollars today. Everyone’s information will be so precise and semantically correct there will be no need for Search Engine Optimization anymore!

The fact of the matter is, this is never going to happen. Being a long-time SEO practitioner myself, I am very interested in the ramifications of the Semantic Web on today’s search, especially because I am tasked with optimizing Twine when it first becomes publicly readable this summer.

Before we dive too deep, let's first look at what SEO experts and professionals do today. In a nutshell, we research, study, and test hypotheses learned by watching the heuristics of a search algorithm. We implement by writing clean, semantically correct HTML in certain combinations in order to allow robots to more easily assess the meaning of a page. We use CSS to abstract the presentation layer, we follow good linking structures, add proper metadata, and write concise paragraphs. We organize our information in a meaningful way to show bots clean, parseable HTML. In one sense we are information architects; in another we are marketers.

But what would happen if a search engine company published its algorithm? Although that probably isn't going to happen anytime soon, what if it told us exactly what it was looking for? That's what the Semantic Web is going to do to search. Just the other day Yahoo announced SearchMonkey for exactly this purpose, and it is only going to get bigger. Being told how to mark up your information certainly takes a lot of the guesswork out of it. But in terms of the role of the SEO expert or professional, I don't think we can retire just yet.

The Semantic Web is organized by people just like the Web of today. The only difference is that now we are going to organize around better standards. Just as people have a hard time organizing their closets, attics, and garages, people have a hard time organizing their websites. Although the Semantic Web will add structure to the Internet, make it easier for novice users to create structured content, and change the way we search, there is still a need for experienced help.

Enter SEO. Some of our roles may have changed, but for the near future there will still be a lot of similarities. The need to study and analyze robot behavior to better tune information isn't going away. SEO professionals will still have to stay on top of emerging trends, search technologies, and organic ways to drive traffic. The fact of the matter is, nothing is going to change drastically for a while. In the near term, I am mostly worried about how to integrate Twine into the Web of today.

Not very semantic, huh? Well, that's not to say we aren't going to integrate with microformats, display RDF in our pages, and publish our ontology. All of this is extremely important as the Semantic Web emerges; however, in a world where search is run by Google, we have to cater to them. There is a growing number of semantic search engines and document indices out there, which are definitely raising mainstream awareness. Yahoo just jumped on the semantic bandwagon publicly, and you know Google can't be too far behind.

In conclusion, there’s nothing to worry about anytime soon. The SEO expert’s salary isn’t going back into the company budget. We still have to tune our pages to the beat of Google’s drum for the time being. When things do take a drastic turn, we will adapt and overcome as we always have. That’s what good SEO does. As for me, I will tune Twine just as I used to tune pages over at CNET, following the teachings of Sir Matthew Cutts et al.

Semanticweb.com published my semantic tagging article

June 23, 2008

Unfortunately, it's been a while since I have written anything new and interesting here, although I have been working on some very exciting stuff (more on that later). In the meantime, Josh Dilworth from our great PR team at Porter Novelli submitted my last article to semanticweb.com and they published it! It's the first thing I have ever written that has been published somewhere besides this dinky little blog ;) . Anywho, for those who didn't read it the first time, here is the article on semanticweb.com.

Copyright © 2005-2011 John Clarke Mills

WordPress theme is open source and available on GitHub.