Freespace — A Temporary Space for a Lasting Change

March 15, 2014

Freespace in the San Francisco Chronicle

Last May my friend Kyle Stewart, Executive Director of ReAllocate, came to me and asked, “What would you do with a 14,000-square-foot warehouse in the heart of San Francisco?” Assuming this was a loaded question, I first asked him how much it would cost. He assured me that it wasn’t my concern and told me to grab my tools because we had work to do.

It turns out that our friend Mike Zuckerman had secured a warehouse at 1131 Mission Street for just $1 for the month of June to conduct a civic experiment. A ‘culture hack,’ as he refers to it: repurposing otherwise unused or private properties for uses other than those they were intended for. For me it was about creating a modern-day community center run by hackers, based completely on merit, not money. If you could swing a hammer, paint a portrait, program a computer, or organize a coup, you were given a canvas on which to experiment.

Much like children rallying together to build a treehouse with no reservations, we did the same. It all happened so fast that there was no time to fuss over governance, planning, and structure. Traditional democratic process was thrown out the window, and in its place we were left with an adhocracy, where immediacy, merit, and vision collide.

Freespace San Francisco from the front

And so, starting in June the great experiment began. For the entire month the place was hopping with excitement from ten in the morning to ten at night. There were free yoga classes, free drawing classes, and free dance classes. There were artists painting murals on every flat surface you could find and hackers soldering away at LED light projects. Artists were working with entrepreneurs and the homeless were interacting with computer programmers. Everywhere you looked you would find people mixing ideas and cultures.

After only a few weeks of being open, we realized that one month just wasn’t going to cut it. In true Freespace fashion, we gathered a headless team of volunteers to start a crowdfunding campaign and media blitz to raise the $24k needed to pay full rent the following month. Sure enough, we got enough press coverage and touched enough people that we were able to raise the money and continue our experiment.

Fast forward six months to today: Freespace in its original incarnation is gone, but the idea is spreading. One just opened in the heart of Paris, and the city of San Francisco gave us a grant to do another on Market Street. In only six months the conversation has changed from us begging and pleading for space to building owners contacting us to activate their vacant spaces. Requests are coming in from across the globe as the idea and the vision spread farther and wider than any of us ever imagined.

Mike has now made it his mission to help foster its growth and he is currently on airplanes every week to do just that. So if Freespace or Mike comes to your city or town, please stop by and say ‘hi’.

Burning Man Interview for Huffington Post

September 17, 2013

Right before Burning Man this year the Huffington Post reached out to me to do a live interview about the event, specifically about its shared economy. Neither the producer nor the host had been there so naturally they had a lot of misconceptions about what actually goes on. From what they had heard, the shared economy is the emphasis of the event.

The other two guests and I were able to shift the focus onto the event itself and away from how its “shared economy is changing the world”. At the end of the day, Burning Man is many different things to many different people. The shared economy of Burning Man is only one of the 10 principles that make this event so special. Everyone takes something different home from it, and it’s just something you have to experience.

Watch a recording of the interview on Huffington Post.

Building a Chicken Coop

July 14, 2013

Chicken coop finished

A few months back my friends Kyle and Nick joined me in an endeavor to build a chicken coop for our friends at Avalon Hot Springs. Avalon is an amazing place high above Middletown, CA in the mountains of Lake County. It dates back to the 1850s, when it was called Howard Hot Springs and catered to the rich and famous of the Victorian era.

Today it still retains a lot of its history, with the original springs still on the property, rustic cabins, and a beautiful main hall. We were happy to help out and add to the charm of Avalon. Also, fresh eggs when we visit are a plus.

Below are some of the photos and plans. We precut and built most of it in my basement and then transported it up there. Even with that pre-planning it still took us a day and a half to assemble and paint it.

Chicken coop cleaning door
(We added a door for cleaning and a thick plastic floor for easy cleaning)

Chicken Coop Insulation
(We used organic recycled materials for the insulation)

Chicken coop plans
(The plans I drew up. Click the photo for a larger image)

All in all it was a very fun and rewarding project. It’s also quite simple, and the average DIY’er should have no problem tackling it. It was a first for all of us, too! For more photos, check out the chicken coop build photo set on Flickr.

Birdhouse for Twitter HQ

April 11, 2013

Twitter birdhouse kiosk

A few months back my friend @nick showed up at my door with an idea and a cardboard box full of wood. He and his coworkers from Twitter were building a kiosk that would take photos and tweet them from @twisitor. Naturally, I thought the project was hilarious so we headed down to the garage.

The wood he brought was a nice veneered bamboo with a bull-nosed edge on one side which gave it a nice finished look. The shingles were cut by @nick and his cohorts from cheap shims they got from the hardware store. The nailgun made quick work of the project and it turned out great! Everyone liked it so much that it ended up in the lobby. Thanks @nick for a fun and random project!

Automating and Scaling Business Workflow Pipelines with Humans and Software

June 15, 2012

Every business has workflows, or pipelines, it must maintain to keep operations running. Manufacturing companies consume raw materials to create finished products, and in turn the sales department pipelines the products out the door to customers. Internet tech startups aren’t any different: from hiring, lead generation, data onboarding, and sales all the way down to software release management, they run on pipelines just like a brick-and-mortar manufacturing company. Every company produces some sort of good or service and can be broken down into a series of workflows.

People play an incredibly large part in every company workflow; however, we humans can be costly and hard to manage, which creates a challenge when scaling horizontally. A human can only remember so much, and without machines it becomes impossible to grow effectively. With the advent of commodity software in our workflows we can get multiples of scale with less human intervention. Another way to look at it is to make the humans you have on staff more efficient.

Software is definitely the answer to hit multiples of scale, but on the flip side, outsourced human labor has also become a commodity thanks to services like Amazon’s mTurk, oDesk, and eLance. We’ve even successfully hired dozens of folks locally from Craigslist for over a year when we really needed to maintain a high degree of data integrity and skill. One could easily achieve another level of scale by just hiring more people.

The big question really is: where is the line between human intervention and software automation? The engineer in me immediately screams, “software!”, but in reality this is the wrong way to begin looking at the problem. If you’re automating something of a known quantity, like a grocery store, then go get yourself a point-of-sale system and call it a day. Software is clearly your answer and you can stop reading now. But when you’re a fast-paced startup that needs to pivot quickly, you wouldn’t architect an entire software stack around an ever-changing business model. You will spend more time and money rewriting code as your business adapts than you would having a human do the job from the beginning.

This is exactly how UPS scaled their business: they used humans extensively and slowly automated different pieces of their jobs. One script after another would be written until finally they strung them all together into a pipeline of software. They did this for the same reason we do; they were in uncharted waters. There’s no reason to double down on engineering that is 10x more expensive than much cheaper labor that can adapt more quickly than machines.

Discovering Inefficiencies

Unfortunately every business is different, and identifying scaling problems takes proper vision and knowledge of the business. If you’re reading this article then you probably understand all this, so I will keep this section brief. To me, there’s one rule you have to follow that all the others build upon: “that which is measured is improved”. You cannot even begin to optimize until you find the highest return or biggest problem area to focus on. Sure, you can blindly go into different departments to help them optimize, but how do you decide where to start?

The analogy I like is diagnosing car trouble. If your water temperature gauge doesn’t work but your voltage gauge does, you might fix your electrical issues and still have the car overheat down the road. In reality, you can fix the battery issues later as long as the car starts; the highest-value return is fixing the overheating immediately.

Solving Inefficiencies

Taking a scientific approach to solving these problems is the most financially prudent and effective way to find scaling solutions. There are a few ways we have gone about doing this. The easiest (and least reliable) way is to have the humans tell the engineers what they spend their day doing. This lets the engineers keep their heads down on other projects without getting directly involved in the nitty-gritty.

Unfortunately, not many non-engineers are able to explain what it is they do all day in enough detail to replicate it. We’ve had mixed success, but our staff is learning how to distill their use cases down more and more. To help them discover inefficiencies we have one golden rule: if you find yourself doing the same task for more than a few hours a day, tell your boss or the engineers immediately!

If we step back even further and look at the bigger picture what I am really saying is question your job! Don’t just do something because you are told to do so. Many folks have a hard time with this concept and want to please their boss but in reality what we are after is scale. The only way to get to the next level of scale is to be introspective.

The other way we find inefficiencies is to send in the engineers. We have successfully achieved multiples of scale by having engineers shadow employees that have highly repeatable workflows. In some cases, we have even removed the human from the task for a few weeks while the engineer fills in to really dig deep into the problem. This can be a painful process for the engineer but it achieves extremely good results. My founder Adam Sah refers to it as “coding your way out”.

Knowing When to Stop

Always remember that just because you can do something doesn’t mean you should. Much like writing web software, scaling pipelines is an iterative process and it’s much easier to roll forward than to roll back. Removing all the human touch points too fast will mask problems and likely create more. Automation itself needs to be monitored otherwise you will lose sight of what’s really happening.

One of the tricks we employ is to automate things to the point of complete automation minus the final step, often for quality-control purposes. For example, let’s say you need to gate the approval process in part of your system. Rather than approving an event automatically, we send an email to the administrators with the information they need to make the decision, plus several URLs we dub ‘one-click approval links’. This is our way of gating sensitive or otherwise high-risk events that require human attention. It buys us many levels of scale without sacrificing quality, and if we hit the next level of scale and still require humans for quality reasons, we can easily outsource this step at very little cost.
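As a sketch of what such a gate could look like (a hypothetical example, not our production code: the signing scheme, SECRET, and all function names here are made up), an approval link just needs to carry the event id plus a signature so it can't be forged:

```python
# Hypothetical sketch of 'one-click approval links': the emailed URL carries
# an HMAC signature over the event id, so the link itself is the credential.
import hmac, hashlib

SECRET = b'replace-with-a-real-secret'  # hypothetical signing key

def make_approval_token(event_id):
    # Sign the event id so the emailed link can't be forged or altered.
    return hmac.new(SECRET, event_id.encode(), hashlib.sha256).hexdigest()

def make_approval_link(event_id):
    # This URL is emailed to administrators alongside the event details.
    return 'https://example.com/approve?event=%s&token=%s' % (
        event_id, make_approval_token(event_id))

def handle_approval(event_id, token):
    # The one-click handler: verify the signature, then run the gated step.
    if not hmac.compare_digest(token, make_approval_token(event_id)):
        return False  # tampered or forged link
    # ... perform the gated, high-risk action here ...
    return True
```

The signature means anyone holding the email can approve with one click, but nobody can mint a link for a different event.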

Some pipelines, like the one I just described, will always require human interaction, but other times we do fully automate. When that happens we have to write even more software to monitor these events. Whether it be audit trails or reports emailed to us, we need to keep an eye on the gauges. As we scale up, it becomes more and more important to have a complete picture of the status of our pipelines.
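As an illustrative sketch (the decorator and the audit_log list are hypothetical stand-ins for a real audit table or emailed report), a fully automated step can be wrapped so that every event leaves a trail:

```python
# Hypothetical sketch: wrap fully automated steps so each run is recorded.
import functools, time

audit_log = []  # stand-in for a real audit table or emailed report

def audited(fn):
    # Decorator that appends an audit record after every call.
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        audit_log.append({'task': fn.__name__,
                          'args': args,
                          'at': time.time()})
        return result
    return wrapper

@audited
def approve_record(record_id):
    # A fully automated step that used to require a human.
    return 'approved:%s' % record_id
```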


There really is no silver bullet for any of these problems; solutions come from careful study and measurement, just as scientists study behavior in a lab. Software alone cannot solve all of your logistics problems at scale, just as humans will never achieve the same scale without software. Since humans are smarter than machines and will be for the foreseeable future, you will need to weld the two together. How far to push the automation really depends on your business needs and how closely you study your logistical systems.

My recommendation for anyone attempting to scale up their logistical pipelines, from shipping product to onboarding data, is to use humans and study their behavior closely. It will cost you less up front and in the long term, and deliver a better, more accurate end result. Measure twice and cut once.

Data collection patent filed

April 3, 2012

A few weeks ago, right before my 30th birthday, I filed my first patent with the founder of my company, Adam Sah, and our other engineer and my housemate, David Merritt. I dreamed up the idea while brainstorming lead generation at trade shows that we attend. A large part of my job involves bridging the physical world with the online world and sharing data between the two. We are always looking for ways to make this happen not only faster and easier, which equates to cheaper, but also more accurately.

The patent involves software used on commodity hardware, such as a cell phone, to collect data and use it in a novel way. Unfortunately I cannot share much more information due to the simplicity of the design and how easy it is to replicate. We are currently using this technology internally, and soon we will be sharing it directly with our customers. Once the provisional is approved I will provide the final document and a demo.

Interview for Bob Vila

December 6, 2011

Recently Jane Dagmi, a writer for Bob Vila, found me on Twitter and began to follow the restoration process of my Victorian house. She approached me, along with three other DIY renovation bloggers, to write about our workshops and our restoration process. It was an honor and the first interview of this kind I’ve ever done. I hope you enjoy it.

SanFranVic’s John Clarke Mills: In the Workshop

Since then, I have also syndicated my house blog content onto Bob Vila’s crowd-sourced blog for do-it-yourselfers. Check out Bob Vila Nation for a look at what other DIY’ers like myself are building.

Stale cache serving strategy with reactive flush

November 27, 2011

This is a cache I created that allows me to always serve the user cached data, i.e. never serve a miss. Sometimes data is just too expensive to compute to make the user’s request thread wait, and other times you want high availability with no user-facing recompute time at all. This is a great technique for template caches and API wrappers, which I have employed more than once in my career.

I achieve the goal of no cache misses by first building what I call a persistent cache: memcache backed by disk or a database. That way, if memcache ever takes a miss, I look in the database and pull the result from there. I then re-cache that result in memcache for a short amount of time and return the response to the user. At the same time, I put a task on my task queue (MQ) to recalculate the data and update the persistent cache. If all caches are empty, I hit the API and, upon success, cache the results and return the data to the user. If the API throws an error, I put a task on the queue, and hopefully by the next time the user requests that data the cache will have been warmed by a successful API query.

Essentially this creates a type of MRU cache, only using memcache for frequently used data. I could use cron or something like it but I like that my users and bots are the ones who flush my caches based on demand rather than time or some other indicator.
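A minimal in-process sketch of the flow above, with plain dicts standing in for memcache, the disk/database-backed persistent cache, and the task queue (fetch_from_api and all names here are illustrative, not the original code):

```python
# Sketch of the stale-cache-with-reactive-flush strategy described above.
memcache = {}    # fast cache; would have a short TTL in real life
persistent = {}  # disk/database-backed cache that never expires
task_queue = []  # stand-in for the MQ

def fetch_from_api(key):
    # Placeholder for the slow or flaky upstream call.
    return 'fresh:%s' % key

def get(key):
    # 1. Try memcache first.
    if key in memcache:
        return memcache[key]
    # 2. On a miss, serve the stale copy from the persistent cache,
    #    re-warm memcache, and queue a background recompute.
    if key in persistent:
        memcache[key] = persistent[key]
        task_queue.append(key)  # reactive flush: recompute on demand
        return persistent[key]
    # 3. Cold start: hit the API directly and populate both caches.
    value = fetch_from_api(key)
    memcache[key] = value
    persistent[key] = value
    return value

def worker():
    # Background task: recompute queued keys and update both caches.
    while task_queue:
        key = task_queue.pop(0)
        value = fetch_from_api(key)
        persistent[key] = value
        memcache[key] = value
```

The key property is that step 2 never blocks the user on a recompute; the queued task refreshes the data after the stale copy has already been served.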

Stale cache serving strategy
(click diagram for a larger version)

The diagram above describes how I built the caching for Klout Widget, a hack I created just to demonstrate this strategy. The Klout API was a little finicky, so I decided to create a persistent cache that would be resilient to bad or missing data. Go ahead, give it a try.

Anyway, I hope this strategy is useful. If anyone knows the name for this type of cache, I’m all ears. As far as I know it’s something I dreamed up.

Django template tags for SOLR queries

November 2, 2011

Yes, I know the idea is nuts, but when you’re a fast-moving bootstrapped startup, sometimes you build things like this. That being said, you’d better have good business reasons for making something this highfalutin, as there are many other ways to skin the cat.

The reason we opted to do this is to quickly and easily make SEO-friendly pages without changing backend software. It is also built into our CMS, so churning out data-driven pages happens at the drop of a hat. Currently most of our SOLR queries are AJAX’ed from the frontend, so web crawlers won’t index that data. The alternative is to push all this logic into the backend, but then we have to recompile and deploy code for every change.

Without further ado, here is the Django query language I ended up with. As you can see, they work just like regular filters. Note that on line 2 I am using urlparam to take in arguments; strings also work here.

{{ "new"|solr_command }}
{{ "__QUERY__"|solr_set_replacement:urlparam_q }}
{{ "text:(__QUERY__)"|solr_set_query }}
{{ "AND recordtype:(company)"|solr_wrap_query }}
{{ "rows"|solr_set_param:"50" }}
{{ "run"|solr_command }}
{% for company in solr_results %}
{% endfor %}

Below is the associated Python code. Note there are a few App Engine-specific things, but everything else is standard. Also, note the use of globals in this example; you will have to handle this in your own way.

import logging, simplejson as json
from django import template
from google.appengine.ext import webapp
from google.appengine.api import urlfetch

# these should be a part of thread local or
# a similar mechanism, i.e. not here!
SOLR_REPLACEMENTS = {}
SOLR_PARAMS = {}
SOLR_QUERY = ''
SOLR_QUERY_WRAP = []
TEMPLATE_CONTEXT = {}

def set_replacement(key, value):
  # escape colons so values can't break the Solr query syntax
  value = value.replace(':', '\\:')
  SOLR_REPLACEMENTS[key] = value
  return ''

def set_param(key, value):
  SOLR_PARAMS[key] = value
  return ''

def set_query(value):
  global SOLR_QUERY
  SOLR_QUERY = value
  return ''

def wrap_query(value):
  SOLR_QUERY_WRAP.append(value)
  return ''

def run_solr_command(command):
  if command == 'new':
    return _new_query()
  elif command == 'run':
    return _run_query()
  raise Exception('Solr command not found')

def _new_query():
  # reset all state for a fresh query
  global SOLR_QUERY
  SOLR_QUERY = ''
  SOLR_REPLACEMENTS.clear()
  SOLR_PARAMS.clear()
  del SOLR_QUERY_WRAP[:]
  return ''

def _run_query():
  query_string = SOLR_QUERY
  # wrap each additional clause around the base query
  for value in SOLR_QUERY_WRAP:
    query_string = '(%s) %s' % (query_string, value)
  # substitute placeholders like __QUERY__ with their values
  for key in SOLR_REPLACEMENTS:
    query_string = query_string.replace(key, SOLR_REPLACEMENTS[key])
  # append extra query parameters such as rows
  for key in SOLR_PARAMS:
    query_string = '%s&%s=%s' % (query_string,
                        key, SOLR_PARAMS[key])
  query_string = query_string.replace(' ', '+')
  url = 'http://localhost:8983/solr/select?wt=json&q=%s'
  url = url % query_string
  result = urlfetch.fetch(url)
  if result.status_code != 200:
    raise urlfetch.DownloadError()
  res = json.loads(result.content)
  # expose the results to the template as 'solr_results'
  TEMPLATE_CONTEXT['solr_results'] = res
  return ''

register = webapp.template.create_template_register()
register.filter('solr_set_replacement', set_replacement)
register.filter('solr_set_query', set_query)
register.filter('solr_wrap_query', wrap_query)
register.filter('solr_set_param', set_param)
register.filter('solr_command', run_solr_command)

I hope someone finds my hack useful. I was going to try the Solango project but it looks like it was abandoned.

Copyright © 2005-2011 John Clarke Mills

Wordpress theme is open source and available on github.