Wednesday, January 18, 2012

Wifi and the Internet of Things

Today is a day for thinking about the future of the web.  Since I'm the author of this blog, that means thinking about the future of communications infrastructure, and especially wifi.  Apologies in advance if I wax poetic; Wikipedia, Craigslist, and Reddit are all down today in protest of SOPA/PIPA. 

In recent years there's been a lot of chatter about the nature of technological convergence and the future of global telecommunications infrastructure.  Technological optimists, particularly those interested in the developing world, trumpet "mobile, mobile, mobile" and, on many levels, they're right... While we won't be ditching big screens and real keyboards any time soon, all statistics point to the fact that mobile devices account for an increasing share of transactions, tasks, and user time online.  Without doubt, mobile has been disruptive in the west, speeding the transition to cloud computing; a game-changer for Africa, bringing millions online; and largely responsible for a global wave of grassroots protest movements.  The thing that people tend to miss, however, when talking about the future dominance of mobile is that the device and the communications infrastructure are not one and the same.  

Our lust for data is killing mobile providers (you need only look as far as the near-extinction of the unlimited data plan), and it's technologically challenging to shrink mobile cell sizes enough to [economically] support traffic loads on limited spectrum licenses.  Mobile carriers depend on the extremely efficient use of spectrum that comes with careful site planning and modeling.  This gets exceedingly hard to do as cell sites become more numerous and demand cheaper installations.  

The licensed nature of mobile spectrum/devices also makes the sort of grassroots innovation and entrepreneurship that typifies the online world very difficult.  If we were only talking about making smartphones maybe we wouldn't care, but the smartphone is only the tip of the iceberg for the growing world of ubiquitous, cloud-enabled computing.

The ability to be "connected" at all times is clearly desirable (even if we choose to unplug sometimes for our own sanity).  Smartphones have achieved a permanent position in our pants pockets formerly afforded only to wallets and house keys, but connectedness is only useful to the degree that we can interact with our world through being connected.  Social was easy: "I'm connected, you're connected; let's share some photos and poke each other."  But what about STUFF?  

The natural next step for a connected person is to have a connected world to interact with.  We've already started with the big ones: thermostats, security systems, home theater.  But the possibilities are endless, and it seems that consumers are hungry.  If you think I'm kidding, check out this Kickstarter project from some of my former Media Lab colleagues that garnered half a million bucks to create a wifi-enabled rubber(?) brick, with a temperature sensor and accelerometer, that can be used for useful stuff like telling you when your laundry is done.  Innovation to fill this new space of connected "stuff" will only thrive with the low entry barriers of an open internet, unlicensed spectrum and cheap hardware.  

In any case, the monolithic model of wireless communication is just not going to work.  A natural consequence of having EVERYTHING connected is a whole lot of unplanned and uncoordinated wireless communications generated by cheap, duty-cycled devices, flying around all over the place. This is the sort of thing that breaks traditional mobile communications. 

Wifi, especially with 802.11n and soon 802.11ac, boasts so much capacity that even in an environment where 40 or more uncoordinated access points are all visible to each other (such as my bedroom), you can still expect to share 15Mbps or more among all the devices in your home, while your neighbor does the same, simultaneously.  The limitations of wifi, in terms of range and penetration of obstacles, are strengths in a world where everything is a transmitter.  You won't send 15Mbps simultaneously to every home in my neighborhood with 4G...

Rather than falling off in popularity as mobile improves, wifi has continued to strengthen.  Wifi will ultimately bridge the gap between wireline and mobile providers.  Cable companies have already embraced wifi as a way to increase the value of their services.  In Europe, mobile providers like T-Mobile have embraced wifi for years, and US mobile providers are now being forced to add wifi to offload traffic.  Ubiquitous, cheap, unlicensed.  It's a recipe for innovation, and a guarantee that wifi will only increase in importance as we become increasingly connected.  Keep hacking...


Tuesday, January 17, 2012

Development Kits Live!

Thanks to Nathan from California, the first Fabfi development kit went out the door today.  With the Monday holiday, I had the chance to do a little fit testing of the hardware on a spare pole I had lying around.  The itelite enclosure with integrated 18dBi dual-polarization patch required some modifications to mount the RouterStation, but was very robust, with a solid rubber gasket and grooved interface around the edges to keep water out.  The built-in ethernet extension plug was also a nice touch.  19" pigtails for the external antennas exit through an optional port on the itelite enclosure and screw directly into the 2.4Ghz antennas.





Below is a look at the current kit spec all mounted up.  Mounting with the top of the pole extending above the tops of the antennas is a common trick to protect against lightning strikes (20cm is usually enough):


I'd prefer to see the stock antenna mounts hold the antennas farther from the pole for decreased RF shadow and better spatial diversity, but this could be easily remedied with a crossbar.  For applications requiring less 5Ghz gain, a clean option might be to offer a 6-way radome.  I might get a chance to test this out in the near future.

Saturday, December 31, 2011

Fabfi In Domus Magazine

Yesterday Amy and I received a belated Christmas present via DHL.  Though we knew it was coming, we had no idea what it might be.  Upon opening it we discovered a thick, colorful book from Italy, entitled DOMUS.  Domus is an Italian architectural magazine packed full of vivid images and stories about art and design.  I'm always surprised how often Fabfi gets picked up in the design-world media.  Perhaps designers and architects are freer to follow ambitious visions undeterred by the technical challenges of implementation; or maybe they just think fluorescent orange acrylic looks snazzy.  Either way, we're grateful for the publicity, and for the clever re-rendering of my graphics as seen below (see how CC-BY-SA is useful!).  I'll refrain from commentary on the editorial itself, which is roughly historical in nature.  One geeky side point is that anyone with the magazine has all the data they need to build a Fabfi reflector.

Find the online article HERE.

Happy New Year everyone!



Friday, December 2, 2011

My Router has a bad case of S.A.D.*

In the last post I blathered on quite a bit about sizing solar panels for routers.  As it turns out, the theoretical and the empirical don't match up so well.  Instead of running 24h/day, the Farm School solar nodes are currently running more on the order of 6h.  I suspect this has mostly to do with the fact that the sun is so low for much of the day in this part of the world that it's behind trees, but it's also possible the new charge controllers aren't as efficient as billed.  Whatever the reason, I'm about 18h/day short of the full monty (translate: waaaaay short), which has me thinking about power optimization.

In laptops, a very common way to save juice is to adjust the clock speed down when the device is idle, and it seems someone has built a package to control the CPU clock on the RouterStation.  It would be a great project for someone to wrap this with a script that measures load and adjusts speed accordingly.  Anyone interested??
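For anyone who wants to pick this up, here's a rough sketch of the control loop I have in mind.  It's written in Python for readability (a stock RouterStation build won't have an interpreter, so a real version would be a tiny shell script), and the set_cpu_clock() helper is hypothetical; as the edit below shows, the current package only rewrites the bootloader and needs a reboot, so something would have to change the clock at runtime for this to be useful:

import subprocess
import time

# Hypothetical helper: assumes some mechanism exists to change the clock
# at runtime.  The package mentioned above only rewrites the bootloader
# and requires a reboot (see the edit below), so treat this as a sketch.
def set_cpu_clock(mhz):
    subprocess.call(["setclock", str(mhz)])   # placeholder command name

LOW_MHZ, HIGH_MHZ = 200, 680                  # clock speeds mentioned in the edit below
UP_THRESHOLD, DOWN_THRESHOLD = 0.8, 0.2       # load-average trip points (guesses)

current = HIGH_MHZ
while True:
    # 1-minute load average straight from the kernel
    with open("/proc/loadavg") as f:
        load1 = float(f.read().split()[0])

    if load1 > UP_THRESHOLD and current != HIGH_MHZ:
        set_cpu_clock(HIGH_MHZ)
        current = HIGH_MHZ
    elif load1 < DOWN_THRESHOLD and current != LOW_MHZ:
        set_cpu_clock(LOW_MHZ)
        current = LOW_MHZ

    time.sleep(30)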

*S.A.D. stands for Seasonal Affective Disorder

EDIT:  I took a few minutes to try manually switching the clock speed with the above package.  A few results:

  • Changing speed requires a reboot (the package is simply a command line tool for rewriting the bootloader)
  • The overall reduction in power going from 680Mhz to 200Mhz is only about 0.25W at idle
  • Running a RouterStation at 200Mhz makes it boot waaaaay slower




Wednesday, November 16, 2011

Sizing Solar to RouterStation Power Specs

As I mentioned in the last post, we've been having a little trouble solar powering our nodes (in the woods) through a New England winter.  Turns out we get crap for sunlight.  Who knew? :P.  As a result I'm trying to get the most out of limited resources and redesign the nodes so they fail gracefully when they run out of juice.

I did a little experimenting with the Ubiquiti RouterStation earlier today to figure out exactly what its characteristics were.  Here's a few things I learned:
  • Minimum reliable operating voltage: ~9V (It was difficult to determine this number exactly due to the limits of my power supply, but it will DEFINITELY not operate reliably below 8.5V, and at 9V it seems to boot reliably and send data over one radio.  Using multiple radios hit the current limit of my power supply at 9V, so we just have to assume the RouterStation itself is not current limited in the same way.)
  • Peak current draw with three radios transmitting at capacity (@12V): 1.5A (est.)
  • Average power consumption with three radios transmitting full time: 17.5W (est.)
  • Average power consumption with three radios powered but idle: 7.6W
The first element to consider when working with 12V batteries is that the margin between the minimum expected battery voltage and the minimum operating voltage of the device is quite small (~2V).  Even in a perfect world, the voltage drop when sending the peak current over 24AWG cat5 wire (assuming worst-case single conductor and an 11V battery voltage) limits the cable length to just over 20 feet.  This number gets smaller if you have questionable cabling.  This short analysis suggests the use of 24V solar systems instead of 12V.
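For the curious, here's the back-of-envelope behind that 20-foot figure.  The wire resistance is the published value for 24AWG copper (roughly 26 ohms per 1000 feet per conductor), the 17.5W and 9V numbers come from the measurements above, and I've assumed the load draws roughly constant power, so current is highest just as the voltage hits the cutoff:

# Back-of-envelope max cable length before voltage drop eats the margin.
OHMS_PER_FT   = 25.7 / 1000   # 24AWG copper, single conductor
PEAK_POWER_W  = 17.5          # three radios transmitting full time (measured above)
BATTERY_MIN_V = 11.0          # assumed low battery voltage
DEVICE_MIN_V  = 9.0           # minimum reliable operating voltage

margin_v     = BATTERY_MIN_V - DEVICE_MIN_V     # ~2V of headroom
worst_amps   = PEAK_POWER_W / DEVICE_MIN_V      # ~1.9A right at the cutoff
loop_ohms_ft = 2 * OHMS_PER_FT                  # one conductor out, one back

max_length_ft = margin_v / (worst_amps * loop_ohms_ft)
print(round(max_length_ft, 1))                  # ~20 ft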

An additional problem that may arise, in the event that I'm right on the edge with the amount of battery storage I'm using, is that the devices will brown out and hang.  This has already happened a couple of times during some cloudy weather, and requires someone to go out and hard-reboot the devices.  An easy solution to this problem is to use a power supply with a low-voltage disconnect (LVD) circuit instead of powering devices directly from the batteries.  This has the dual benefit of protecting the batteries from over-discharge and cleanly cutting off your device before the voltage gets too low, enabling devices to turn off in the wee hours when there is insufficient light and come back in the morning without fanfare.

I've been eying these gadgets for quite some time now:



They come in 12 and 24V inputs and with direct power out or DC-DC conversion.  They take input from a solar panel and passive POE, charge your batteries, and provide both passive POE and wired output with LVD.  At $60 in quantity 1, they're a reasonable one-stop solution to building a single setup that will work almost anywhere. 

Since I'm stuck with some big 12V panels that I can't easily re-wire, I bought the 12-24V up-converting version.  According to the company, this conversion is 80-85% efficient, which is why in the final spec we want to run native 24V.

Now some solar math (doing all this math for the intended system, not the one at Farm School)...

First find yourself a solar energy calculator.  For the US, this one is pretty cool.  For locations outside the US, try the older version.  Using the map, you'll be able to find the amount of energy per m^2 at your location, which you can then import into the PVWATTS calculator.  The PVWATTS tool calculates total available solar energy by month, as well as the total energy output of your system after derating (since we're using direct DC, you can omit the derating for any of the AC components). 

Now you might be asking, "What does kWh/m^2/day have to do with the 60W rating on my panel?"  To make sense of all this, you must read the fine print:
PV module power ratings are for standard test conditions (STC) of 1,000 W/m2 solar irradiance and 25°C PV module temperature. Caution: these are different than PVUSA (PTC) test conditions
The important take-home from the above is that the "standard" conditions for the rated wattage of most solar panels are based on a solar irradiance of 1kW/m^2.  As a result, the value of kWh/m^2/day is equivalent to the number of effective hours that the solar panel will be able to operate at its rated output (assuming the panel operates at the same efficiency over a broad range of input energy, which is generally true).  For example, a 60W panel during a day with 3kWh/m^2/day of available solar energy will generate 180Wh of energy before derating.

Looking at our device, we draw 17.5W at full throttle and 7.6W at idle.  If we're only using a little under 50% of the radio capacity on average, we get about 12W worst-case power draw.  Over one day that amounts to 288Wh of energy.  In Athol, MA, the worst months provide about 3kWh/m^2/day of energy.  Our DC power system, based on the PVWATTS numbers, probably derates to about 0.86 of the nameplate value, so we need:

288Wh / 3h / 0.86 ≈ 111W of solar to operate 24 hours a day during the dead of winter.
Since mounting, etc. are bound to be imperfect, you'll probably want to round this number up a bit too. 
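The same math in a few lines of Python, so you can plug in your own site numbers (the ~45% average radio utilization is my rough guess, and the 3 sun-hours and 0.86 derate are the Athol, MA figures from PVWATTS):

# Panel sizing from the numbers above.
IDLE_W, FULL_W = 7.6, 17.5   # measured RouterStation draw
UTILIZATION    = 0.45        # assumed average radio utilization (a little under 50%)
SUN_HOURS      = 3.0         # worst-month kWh/m^2/day for Athol, MA
DC_DERATE      = 0.86        # PVWATTS-style derate, DC components only

avg_draw_w = IDLE_W + UTILIZATION * (FULL_W - IDLE_W)   # ~12W
daily_wh   = avg_draw_w * 24                            # ~288Wh/day
panel_w    = daily_wh / SUN_HOURS / DC_DERATE           # ~111-112W depending on rounding
print(round(avg_draw_w, 1), round(daily_wh), round(panel_w))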

Though a 120W panel might be enough to power our device on average, any given day might have more or less solar energy.  In order to size panels to the average, you need enough battery capacity to cover cloudy days.  Batteries are generally rated in Amp hours (Ah), which is the number of hours they can provide the equivalent of 1 Amp at 12V, which is 12W.  Conveniently, our device draws 12W, so the number of Ah in a battery is the same as the number of hours it will operate the device, except for one twist.  The Ah value usually describes the capacity of the battery to full discharge.  Fully discharging the battery over and over can damage it, so we want to plan for a less than full discharge.  For a deep cycle battery (deep-cycle = old-fashioned lead-acid battery with thick, unsophisticated electrode plates), 80% is a drop-dead limit.  For easy math, we'll say 75% is our discharge limit, and that we want to operate for 36h sun-free.  In the worst case, that means a 48Ah battery, which is on the order of the battery in your midsize passenger car. 
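And the corresponding battery math, using the 75% depth-of-discharge limit and 36 sun-free hours from above:

# Battery sizing for 36h of operation with no sun.
AVG_DRAW_W      = 12.0
BACKUP_HOURS    = 36
NOMINAL_V       = 12.0
USABLE_FRACTION = 0.75   # keep a deep-cycle battery above 25% charge

energy_wh   = AVG_DRAW_W * BACKUP_HOURS               # 432Wh
capacity_ah = energy_wh / NOMINAL_V / USABLE_FRACTION
print(round(capacity_ah))                             # 48Ah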

At Farm School, I'm operating two radios on the solar-powered devices at near-idle, so estimating 8W and derating a little extra for the DC-DC conversion gives me a need for about 85W of solar and 36Ah of batteries.  We'll see how this goes...

Tuesday, November 15, 2011

Start Small. Make History (Farm School Testbed)

Originally this post was slated to be a cut-and-dried show-and-tell describing our live pilot at The Farm School, but the developments over the last 16h at Occupy Wall Street, NYC, and some of the awesome guerrilla reporting that's been happening, provide some perspective for those of us developers who refuse to let anything be done until it's "perfect".

The particular story I have in mind is that of a liveblogger named Tim Pool, of TheOther99, who has now been streaming live from lower Manhattan for about fifteen continuous hours.  The part I love about Tim and his stream is how he has cobbled together an ultra-simple technical solution with free services and has been debugging the setup and learning the tech in real time with a live audience.  Over the course of the stream, Tim sources replacement batteries from the viewing audience, takes technical suggestions from their text stream, and works through mini-crises like his phone number getting posted to the internet (oops!) to keep his stream live.  By the time of the pivotal hearing verdict at nearly 5pm, he has amassed an arsenal of equipment spares, has over 100,000 viewers, is being rebroadcast by major networks, and has a pro-level live video rig on the way (donated) from ustream.tv.

When we're sitting around (on Skype) at Fabfi repeatedly pondering the intricacies of "perfect" security and "fully-automated" setup, we would certainly do well to take inspiration from the model of "get out there now; sort the details later".  This was, in fact, the way we ran the show for Fabfi 1-4 (there hasn't been a single version of Fabfi yet where I knew exactly how it would work when I arrived on location).  With all this "planning" we've been doing, it's easy to forget that the best way to do user-centered design is to design some stuff, subject a whole bunch of people to it, and make rapid changes based on real data.

Every time we've gone live at a new site in the past we've learned four dozen new things in no time at all, but in our latest iteration we've been thinking so big (1000 nodes) that we've been neglecting the value of a small live test for some time.  What can five or six nodes tell you about a network that scales to a thousand?  As it turns out, quite a bit...

Over the past two weeks I've been working out at The Farm School building a small FF5 testbed.  Despite being little more than an hour from Boston, this site has a lot of the features we might find in a developing area:  A DSL connection provides only 3Mbps of capacity, many of the residences (staff live on the farm) are entirely off-grid, and small buildings provide little opportunity for high mounting points without significant investments in mounting poles or hardware.  About a dozen users reside at the site full-time, with day users potentially doubling that number.

Keeping with the theme of approximating developing areas, I went out of my way in this deploy to minimize the additional mounting hardware I brought to the table, instead mounting nodes on existing structures with materials I was able to scrounge on site.

Here's some field-test porn:

Wireless gateway node (101):


 Off Grid Nodes:


Node on a (stationary) bus:


Live map:


Within moments of installing the network, we began to learn new things about our system.  Here are the highlights:

  1. The largest time investment in node installation comes from mounting solar panels.  For the panels to be effective they must be mounted at a relatively accurate angle and direction.  Working this out with "found" supplies is tedious and slow, even with power tools and a lot of experience.  
  2. The AP+ADHOC single-radio config that worked for us in bench tests had very strange behavior (>95% packet loss on one link) when three nodes were live with connected clients.  This was resistant to all sorts of RTS/CTS configs and was remedied by either dropping out the third node or separating the AP off to a separate radio.  We're unsure whether this was due to the shift to a real environment or a change in the ath9k driver between builds.  
  3. You need more solar than you think.  I originally believed it would be possible to sustain a RouterStation with two radios on as little as 48 Watts of solar at our site and a 10Ah battery.  In practice, even our setup with 80W of panel and a 14Ah battery had difficulty after a few cloudy days (likely more battery is needed).  130W and 24Ah seems to be more than sufficient if there is no sun obstruction, but we have plenty of that...
  4. <(expected) sad face>RS devices are not tolerant of brownouts, and require manual power-cycling after a low-voltage condition
  5. 2.4Ghz speeds are more than double those from our urban tests, and signal strengths below -75dBm provided useful speeds.  

The new network has a number of useful features that were missing from FF4, namely a live map, which clicks through to network statistics.  Nodes automatically add themselves to the map and stats interface.  This interface will be steadily improved over the coming weeks.

At the moment, the network is operating with unencrypted open access, but Antoine promises that his new "portalgun" access controller is but days away.

more to come...

~Keith

Wednesday, October 26, 2011

Wireless Performance: Gosh Darn Urban Environments!

Over the last couple of weeks I've been doing some testing to determine the performance of the baseline hardware config for FF5.  Some things I've been thinking about: directional vs. omni antennas, RTS/CTS vs. not, and how fast 802.11n really is.

The executive summary:
  • Use directional antennas whenever possible
  • Move traffic to 5Ghz links as quickly as possible
  • RTS/CTS provides a noticeable benefit in obstructed PtMP applications
  • Sharing a 2.4Ghz radio between access and mesh networks is unlikely to scale.  
Now some details:

Here's the general idea for FF5:


A 5Ghz backbone serves local 2.4Ghz mesh clouds, which also provide 2.4Ghz access to clients.  The key objective for this design is to make 2.4Ghz (Circle) nodes installable by relatively untrained users, while still allowing the system to perform at broadband speeds.  5Ghz backbone (Triangle) nodes are expected to be installed by more thoroughly trained technicians.  Ideally, this network should provide every user performance equivalent to a broadband connection in a suburban US neighborhood.  In technical terms, this breaks down to the following requirements (a quick sanity check on the numbers follows the list):
  • Peak Client Speed (link-local): 4Mbps
  • C-Node Aggregate 2.4Ghz Throughput to node with 5Ghz Uplink: 10Mbps
  • C-Node 5Ghz Throughput to T-Node: 10Mbps
  • T-Node Aggregate 5Ghz throughput to child C-Nodes: 30Mbps
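Here's that sanity check, treating the targets above as a simple budget (nothing here is measured; it's just arithmetic on the requirements):

# Quick budget check on the FF5 throughput targets.
PEAK_CLIENT_MBPS     = 4
CNODE_24GHZ_AGG_MBPS = 10
CNODE_UPLINK_MBPS    = 10
TNODE_5GHZ_AGG_MBPS  = 30

cnodes_per_tnode   = TNODE_5GHZ_AGG_MBPS / CNODE_UPLINK_MBPS    # 3 fully loaded C-Nodes per T-Node
busy_clients_per_c = CNODE_24GHZ_AGG_MBPS / PEAK_CLIENT_MBPS    # ~2.5 clients at peak per C-Node
print(cnodes_per_tnode, busy_clients_per_c)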
In the 5Ghz layer, I was able to achieve the desired T-C speed with a single 20Mhz channel and a single omni antenna at the T-Node, connecting to C-Nodes with directional links.  This represented the best compromise of simplicity, cost and performance.  Handshaking (RTS/CTS) proved useful in the presence of hidden nodes, and directional links improved throughput.  Interestingly, the 5Ghz layer with directional links achieved roughly 80% of the expected theoretical maximum, even in the nasty RF soup that is my urban residential neighborhood.  Here's our test setup:



At 2.4Ghz, the most surprising result of preliminary tests was the effect of congestion-control backoff (presumably) on throughput.  Living in an urban neighborhood, a 2.4Ghz node can easily "see" as many as 50 APs at any given time.  Consequently, it is always sharing airtime...  Inside my plaster-and-wire-lath-walled apartment, it is possible to find locations where a node will provide 50Mbps of real throughput to a laptop.  Take the same setup outside, and suddenly the performance drops to as little as 15Mbps at close range.  While it's reasonable to expect a much friendlier situation in the developing world, these speeds call into question the feasibility of a design that shares 2.4Ghz radios between clients and mesh at our desired scale (sorry, Amy).  This also highlights the importance of moving traffic off of the 2.4Ghz mesh as quickly as possible.  Here are some graphs of upload and download speeds vs. signal strength for various configurations.  Larger (more horizontally directional) omni antennas provide marginally better results after normalizing for power:



Click here for  raw Somerville Wireless Test Data