The Holiest Pandas

Wow these are really nice. After 4 months or so, I finally took delivery of a box of Invyr Holy Panda switches from the Drop group buy.

To me they feel like Topres, but with a very un-Topre-like mechanical-switch bottom out. The sound on them is great, even just plain unlubed on my Drop Alt. I can see why the YouTubers rave about this switch. It sounds great and feels great.

Only minor issue is that my “5” key seems to be flaky when I have an HP switch in there. Possibly a bad switch, as the socket has no issue when I have a Gateron Red plugged in. Also some possible oddity with the particular socket being loose. The switch housings on the HPs feel like they are made from a softer plastic than the Gaterons, so there could also be some additional flex there that is causing issues. For now, that socket has a Gat Red in it, and I don’t notice because typing a 5 is quite rare.

Holy Pandas finally arrive. Packaging not as bad as what I saw in videos, but these holders were still slightly loose.

Interestingly, my hands also seem to tire less with tactile switches. Not sure if it’s just a fluke, but possibly the different way in which tactile switches cause your fingers to apply force may actually be good for my hands. With linears, you have to apply force all the way through the travel, whereas with tactiles, my fingers let up as soon as they feel that they’re over the bump. So even if these switches have heavier springs, somehow they feel like less work to type on. A very unexpected result, which makes me also want to try other tactile switches like Zealios and Zilents. This is going to get expensive very quickly….

Wrist Rests are Bad?

This random video that I found on an old Reddit post has been blowing my mind recently. It talks about RSI and how the common advice of aligning fingers on the home row is actually really bad and causes unnecessary tension.

Wrist rests are also bad because they cause you to pivot from the wrist rather than moving your arm to get to farther out keys.

So I’ve ditched my wrist rest, pulled my keyboard closer, and have been re-learning how to type. Going down this path because I couldn’t find any wrist rest configuration that led to complete comfort. I was swapping out my Drop Alt HP for my apple keyboard a couple times a day due to fatigue.

So far this approach seems to be working. No more tension in the wrist / hand. But because I have to keep my hands in the air to type, my arms are getting tired. Hopefully this is temporary and I just need to build up some endurance here.

A side effect of not resting your wrists, though, is that you now have the whole weight of your hand to type with, which makes light switches feel way lighter than when you use a wrist rest. This has caused me to revert to using all Gateron Reds, instead of a mix of reds and clears. I’m also looking to possibly step up a weight class… Gateron Yellows and other 50-55g actuation switches might actually be very reasonable now. I can see why people who type in this manner prefer heavier springs. They provide a sense of “landing” instead of your fingers just crashing into the keyboard.

More Switches and Keycaps: A Few Learnings

Since my last entry, I’ve managed to go through a bunch of iterations on the Drop Alt HP.

  1. Hands were still feeling uncomfortable after a while with spring swapped Gateron Clears, so tried adding o-rings.
  2. Tried Cherry silent reds. Spec’ed at 45g actuation. Soft landing and reduced sound.
  3. Cherry silent reds felt too heavy, so modded them with the same Sprit 55s springs (40g actuation).
  4. Mushy landing didn’t help with comfort, so switched out to Gateron Reds.
  5. Got the group buy “Sumiwatari” MT3 profile keycaps. They look super nice but are too tall, adding to discomfort. Returning.
  6. Got HK Gaming 9009-themed cherry profile keycaps.

Some learnings:

  • Buy key testers. Specs on switches give you some idea of how they feel, but specs across brands cannot be compared. 45g actuation on a Cherry silent red feels significantly heavier than a normal cherry red (also 45g) which also feels slightly heavier than a Gateron red (also 45g).
  • Profiles really matter for comfort. I could not figure out why my Leopold FC660C always felt more comfortable than this Drop keyboard, regardless of the switches I tried. After looking around, I realized that the “OEM” profile of the Drop board’s included keycaps is actually on the taller end of the spectrum as far as keycaps go. Cherry profile caps are much shorter, and reaching the QWERTY row is much easier on the hands (less craning of the wrist).
  • Don’t go for the super high quality first. Whether it’s key testers or cheap keycap sets, if you’re not sure about what you want, buy the cheap examples, confirm that you want them, then spend the $$$ on the really nice versions. The Drop Sumiwatari MT3 profile set looks super nice, but is basically useless to me because of the cap height. Thankfully Drop has a good return policy, but these things were not cheap.
  • Seems like my hands prefer shorter keycaps. I’m still planning to try DSA and XDA keycaps to see how they compare to Cherry. I can’t imagine the uniformity across rows will feel great though.

40g Modded Gateron Clears

I’m on a mission to find my favorite switch configuration. My journey so far:

  1. Drop Alt High Profile keyboard ordered with Kailh speed silvers. Too heavy.
  2. Spring swap Speed Silvers with Sprit 40s springs (30cn activation). Too light, and high activation point meant many typos.
  3. Swap switches out to Gateron Clears (35cn activation)

The Gateron Clears were way better than the modded Silvers. Far fewer typos, but still a very light feel. After a few days, however, I noticed my hands getting tired. The Clears, while better than the modded Silvers, are still easy to accidentally activate if you rest your fingers on them. If you use a wrist rest, you have to tense your fingers enough so that they’re not fully resting on the keycaps. This gets tiring if you’re doing it all day.

I really love the light switch feel, but it’s no use if it tires my hands out. The whole point of light switches was to reduce the wear on my fingers. Time to increase the spring weight one more notch.

In terms of pre-built switches, the next step up from Gateron Clears are basically red switches: Gateron Red, Cherry Red, Cherry Silent Red, etc. These are all 45cn activation. However the Cherry Red I have in a tester feels way heavier than a Gateron Clear. 10cn seems like a big difference.

So instead of new switches, I ordered Sprit 55s springs[1]. According to Sprit’s page, these have a 40cn activation force — exactly between the Clears and the Reds. I swapped out the springs in the Gateron Clears, and verified that they do activate somewhere around 40g (about 8 US nickels of weight). Interestingly, unmodded Gat Clears consistently activate at 6 nickels.. closer to 30g than the spec’ed 35g. Not sure what’s going on there, but perhaps that explains the large difference in feel between Clears and Reds.
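
The nickel test is just arithmetic, but it's worth making explicit (a US nickel's published weight is 5.00 g):

```python
# A US nickel weighs 5.00 g, which is what makes a stack of nickels a
# handy improvised scale for checking switch actuation weights.
NICKEL_GRAMS = 5.00

def nickels_to_grams(n):
    """Convert a stack of n nickels to grams of actuation weight."""
    return n * NICKEL_GRAMS

print(nickels_to_grams(8))  # 40.0 g -> the swapped Sprit 55s springs
print(nickels_to_grams(6))  # 30.0 g -> stock Gat Clears, under the 35g spec
```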

I will live with these for several days before I try out heavier switches. Another path may also be to explore modded tactile switches. The initial tactile bump in these switches may be enough to support resting fingers, while still allowing the use of super light springs.

[1] When exploring switch weights, it’s also a lot cheaper to buy springs than to keep buying new switches. Switches tend to be $0.50-1.00 apiece, whereas a 100-pack of springs is usually $15. It is a bit more work, but for a 65% keyboard it is not too bad.

Drop Alt High Profile Keyboard

After a long hiatus, I’m back in the mechanical keyboard scene. Turns out, I was just a bit too early, hanging out way back in the mid-aughts.

What’s changed in the last 15 years?

  • Way more switches. Even some 100-key switch testers.
  • Way more community. Geekhack, Reddit, Youtube

Jumping back in with hotswap

15 years ago, the modding scene hadn’t really developed and I wasn’t about to teach myself to solder. This meant that I had to buy fully assembled boards outright.

This meant buying and selling boards on eBay and Craigslist. It was workable if you were patient, but it still was a huge pain in the ass. You really just wanted to try a new board or a new switch for 10 minutes, but you had to commit way too much money and time just to get the chance.

Modern hotswap boards are a game changer. No more trading entire boards. Just get a hotswap board. Want to try new switches? Just swap them out. New keycaps? Swap them out. A $100-200 new keyboard purchase becomes a $30 pack of switches.

If you’re getting into mechanical keyboards for the first time, start with a hotswap board. If you end up exploring many options, the cost savings will pay for itself.

The Drop Alt High Profile

After looking at several options, I went with the Alt High Profile group buy from Drop. The things that were important to me:

  • It’s a 65% layout (no F-keys, no numpad, but with arrow block)
  • Solid aluminum base
  • Hotswap sockets (obviously)
  • Layout programmability (I tweak the standard 65% layout a bit)
  • Drop itself. I’ve had several good experiences with group buys on the site.

I ordered with the Kaihua Speed Silver switch option. My goal is to get a very light touch keyboard — something lighter than the Topre silents on my Leopold FC660C.

Speed Silvers have conflicting specs online. The manufacturer spec sheet seems to suggest actuation at 27g and bottom out at 50g, but if you feel one of these things, that seems way off. They feel heavier than Cherry blacks.

I tried swapping in Sprit 40s springs which actuate at 30cn. While I succeeded in making the switches super light, other problems emerged. Speed Silvers have a high actuation point — almost 1mm higher than a standard Cherry MX switch. This combined with a super light spring meant that simply touching or grazing the top of a key could lead to an unintentional actuation. Since the spring force increases with the compression, I’d guess the actual actuation force for this combo was somewhere in the 20g to 25g range.
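
That 20-25g guess can be sanity-checked with a simple linear spring model. The numbers below are my own assumptions, not measurements: a 30cn actuation at a standard 2.0mm depth, a ~40cn bottom out at 3.5mm of travel, and a 1.1mm Speed Silver actuation point.

```python
# Rough linear-spring model: force grows linearly with compression, so we
# can interpolate what the spring delivers at the Speed Silver's shallower
# actuation point. All numbers are assumptions for illustration.

def spring_force(depth_mm, f_actuation=30.0, d_actuation=2.0,
                 f_bottom=40.0, d_bottom=3.5):
    """Estimate spring force (cN) at a given key depth by linear interpolation."""
    slope = (f_bottom - f_actuation) / (d_bottom - d_actuation)  # cN per mm
    return f_actuation + slope * (depth_mm - d_actuation)

# Force at the Speed Silver's higher (shallower) ~1.1mm actuation point:
print(round(spring_force(1.1), 1))  # 24.0 cN, inside the 20-25g ballpark
```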

The combination of super light spring and high actuation leads to a bunch of typos even if you’re being careful. Keys like Y and T are most problematic. You are reaching over other keys, and in the process of pressing those keys, you might lightly touch the tops of H or G. This means yhou end up witfh lots of extra characters when you tyhpe tghese letters.

I tried double 2mm O-rings on each keycap, which helped somewhat. They shorten the overall key travel, and shorter travel means you’re less likely to press down a neighboring key you had to reach over, because the key doesn’t need to go as far to bottom out. But alas, even with lots of practice I still couldn’t get rid of all the extra typos, and it became stressful to continue typing on these switches.

After re-swapping all the springs back to the original, I was back to the drawing board.

Gateron Clears

Was my 40g spring just too light? Or was this strategy doomed because of the high actuation point? I needed to test both theories.

After a bit more research, I settled on trying Gateron Clears, which are linear switches with a 35g actuation force. That would be more than the 30g required by my modded Speed Silvers. The Clears also have a lower actuation point than the Silvers (though higher than standard MX).

The clears arrived today, and I have been typing with them for the last 6 hours or so. I am happy to report that:

  • No more typos due to grazing keys. The actuation point is low enough that the problem is avoided.
  • 35g actuation is still plenty light. It feels like almost no resistance at all.
  • I kept reading that Clears are very “smooth” and now I know what they mean. Cherry switches have a bit of extra friction on the downpress if you don’t hit keys exactly in the middle, directly above the stem. The Clears feel the same regardless of where your finger lands. It gives a noticeable consistency and high quality feel to the typing experience.

Ok, so I wasn’t really away from the scene

I wasn’t modding boards, but it’s a lie to say I’ve been ignoring keyboards for the last 10+ years. Some of the boards I’ve tried in the meantime:

  • MS Surface wireless
  • MS Surface Ergonomic
  • Das Keyboard with Cherry MX Brown
  • Kinesis Freestyle Edge
  • Whitefox with Hako True
  • Realforce RGB
  • Leopold FC660C
  • Logitech MX Keys
  • Apple Keyboard
  • Havit low-profile TKL mechanical board (some variant of Kailh choc switches, I think)

Where to go from here?

I’m really just starting to catch up on all the developments of the modding scene in the last 10 years. Some projects I’m looking forward to:

  • Trying out Holy Pandas, which I managed to get in the Drop group buy for. I’m worried that with a 67g spring, they might be too heavy for my tastes, but I’m curious how they behave with lighter springs.
  • Lubing switches and stabilizers. Everyone says it makes a huge difference.
  • Exploring keycaps. I really like the looks of SA profile, but we’ll see if it’s actually hard to type on.
  • Rebuilding the whitefox. I got the board through the kickstarter, but hate the Hako True switches it came with. (WAY too heavy with a 100g bottom out force). Once I figure out my favorite switches, my plan is to actually learn to desolder and swap the switches out.
  • Building a choc switch board. I still really like low profile switches and laptop-style scissor switches.
  • I’m not done with the Leopold yet! My main annoyance with it is the lack of programmability, but I have a Hasu controller board replacement on order, and excited to give that a go.

Amazon Echo vs Google Home for DIY home automation

If you just want a general comparison of the Echo vs Home, there are plenty of articles out there, and they’re a quick Google search away. I want to focus on one narrow aspect for this comparison.

After looking at a bunch of home automation setups, I decided to go with building my own Z-wave based automation and code my own automation server. I’ll document the justification and the setup more in another post.

For the sake of this post, all you need to know is that to actually integrate an Echo or Home into my home automation, I have to write code. For the Echo, I have to use the Alexa Skills API, and for Google, the conversation API or IFTTT. In this post I’ll compare the two systems and how easy or hard it is to get your own custom commands going.

The incumbent: Amazon Echo

For the Echo, to build your own automation, you need to implement an Alexa “Skill”. While this is not too hard, it does take some doing, and for me it took some time to learn the basics of Amazon Web Services. There are also two kinds of skill APIs you can implement: the “custom” skill API and the “smart home” skill API.

The custom skill API lets you basically capture any kind of phrase, but the main limitation is that to invoke your phrase, you have to use certain patterns that identify the skill. In my case, I named my skill Jarvis, so I have to say “Alexa, ask Jarvis to …”. This works, but often feels silly. I really wish I could just say “Alexa, ….” and have the phrase parsed by my skill’s language model and handled appropriately.

The smart home skill works differently. You implement an endpoint that can enumerate a list of devices with names, and some description of what those devices can do (for example, can you handle a “turnOn” command?). Once this is implemented, the system defines all the phrases. So if I told the API, there’s a device named “living room” and it can respond to “turnOn” and “turnOff” then I can say “Alexa, turn off living room” and the right thing gets called.
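
A minimal sketch of that "enumerate devices, then dispatch named commands" shape. This is my own illustration, not the actual Smart Home API payloads (which are considerably more involved); the device names and handler wiring are hypothetical:

```python
# Hedged sketch of a smart-home-style endpoint: the voice service first asks
# for a device list, then sends (device, action) commands that we dispatch.

DEVICES = {
    "living room": {"supports": ["turnOn", "turnOff"]},
    "shades":      {"supports": ["turnOn", "turnOff"]},
}

def discover():
    """Enumerate devices and the commands each can handle."""
    return [{"name": name, "actions": d["supports"]}
            for name, d in DEVICES.items()]

def handle(device_name, action):
    """Dispatch a spoken command like 'turn off living room'."""
    device = DEVICES.get(device_name)
    if device is None or action not in device["supports"]:
        return {"ok": False, "error": "unknown device or action"}
    # ...here you'd call into your real automation server...
    return {"ok": True, "device": device_name, "action": action}

print(handle("living room", "turnOff"))  # {'ok': True, ...}
```

The appeal is that once `discover()` advertises the devices, the voice service owns all the phrase parsing; your code never sees raw speech.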

The Smart Home API has some other nice properties. For example, it can handle different kinds of sentences and map them all to the same intent. I can say “Alexa, living room off” or “Alexa, turn off the living room”, and it won’t get confused.

The main limitation right now is that the set of usable verbs is small. Basically it only makes sense for on/off switches, dimmers (where you would say “Alexa, set living room to 50”) and temperature controls. I recently installed some z-wave controlled window shades, and have resorted to saying “Alexa, turn on shades”, which still feels silly and non-intuitive.

One of the best things about the Alexa system for DIY folks like me is that skills can be coded up and deployed in “developer” mode. This is really intended for people who want to eventually deploy their skills to the public but need a way to test. In developer mode, the skill is only associated with your account, so only you can look up and use it. It ends up being perfect for one-off custom development.

The Challenger: Google Home

I was pretty excited when Google announced their Home product. It offered the promise of better speech recognition (given Google’s history) and potentially a better programming platform.

At release, there was literally zero API support. You could ask it questions and it could hook up with a small set of supported smart devices, but no way to hook in and do custom stuff.

Then a few months in, Google announced the IFTTT integration. This integration is really interesting in that it is able to capture any phrase from the user.. no prefix necessary. So if I want to capture “Hey Google, eat my shorts” I could do that. You can also capture phrases with parameters, so I could specify a wildcard like “eat my $” and then hook that up to a “custom web request” action to communicate the parsed result to my automation server. So far so good. In some ways it offers more flexibility than the Alexa skills. I could implement my dream of being able to say “Hey Google, let there be light!” and have it turn on all the lights.

Awesome, right? Well sort of. As I discovered, there are some key limitations to the IFTTT approach:

  1. While you can use text wildcards to capture parameters, these wildcards can’t come at the beginning of the phrase. So while I can capture “turn off $”, I can’t get “$ off”
  2. The wildcard capture mechanism is very literal.. so if you specify “turn off $” and say “turn off the shades” then the parsed parameter is “the shades”. This means that on my automation side, I have to support many kinds of variants of “shades” to really make it feel natural.
  3. Because the IFTTT applet has no prior sense of what are possible words that fill in the parameter, in my experience, the speech rec is actually a little worse than Alexa’s smart home skill integration. “Turn off shades” is often interpreted as “Turn off shave”. I could map “shave” to “shades” in my code, but something feels really gross about that.
  4. You can’t create a dynamic response. While you can include the parsed wildcard part of your phrase in the response, if your automation fails, for example, there’s no way to communicate that back to the user. This is particularly annoying in combination with #3, where sometimes it mishears what you said, and you can’t really figure out why your command isn’t actually producing the desired result.
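
On the automation-server side, the literal captures from point 2 (and the mishearings from point 3) can be smoothed over with a small normalization layer. The alias table here is my own illustration:

```python
import re

# Map spoken variants -- and known mishearings -- onto canonical device names.
ALIASES = {
    "shades": {"shades", "the shades", "shade", "shave"},  # "shave" = mishearing
    "living room": {"living room", "the living room", "living room lights"},
}

def normalize(captured: str):
    """Resolve a raw IFTTT wildcard capture to a canonical device, or None."""
    text = re.sub(r"\s+", " ", captured.strip().lower())
    for canonical, variants in ALIASES.items():
        if text in variants:
            return canonical
    return None

print(normalize("The Shades"))  # shades
print(normalize("shave"))       # shades (gross, but effective)
```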

There’s also just a general clunkiness to the IFTTT UI. It’s great for setting up simple rules, but for complicated setups with lots of variants, the UI becomes cumbersome. I need to copy and paste the same URL, POST params, etc. between applets for all the phrases I want to capture, and there’s no way to do it programmatically, which makes maintenance a pain.

Google Actions

As Google promised when they released the Home, they later came out with an “actions” API. I looked through the docs quite a bit, but on initial analysis, it appears to be very similar to the Alexa custom skill API. It inherits the same limitation that your custom ability must be named and invoked by name.. in other words, “Hey Google, ask Jarvis to blah blah”. When I found this out, it was a disappointing step back from the IFTTT model, where I could capture any phrase.

Google actually has some nicer UI for specifying the language model that your integration understands, and you can create a pretty cool intent parsing system and train it with lots of examples.

But then comes deployment. As of this writing, to test deploy a custom action on Google, you use a command line tool, and it can only be deployed for 24 hours. WTF!? That’s right, if you wanted to build a “private” skill like you can in the Amazon system, you’d have to write some script to continually republish your skill in dev mode multiple times a day. Thanks, but no thanks.

(One interesting difference with Google is that users don’t need to “install” your skill to use it. They can just directly start saying “ask ____ ” as if it were pre-installed. Don’t know how they’re going to manage that name space though)

Because of these limitations, I haven’t actually brought myself to try to implement a google action yet.

Google has also announced “direct actions” which sound more like the smart home skill api and support all kinds of specific scenarios. These are only accessible to approved partners currently, ruling out the DIY crowd for now. If they work like Alexa’s smart home skill, then it could be pretty promising.

There’s one potential final approach that could also work. Google has a protocol called Weave, which is a standard way for smart devices to talk to the Google cloud and register their presence. It was actually announced several years ago at a Google I/O, but not much has happened since. I *think* if you actually manage to implement a device that can talk this protocol, then it can be controlled by Google Home.

There’s a Weave developer website but the documentation is unfortunately a bit scant. There’s a device library (libiota) which can supposedly run on Linux, but it’s a C library, and I just don’t have the time to write little C programs to create a bunch of fake devices. If there were a Python wrapper, maybe, but in general it feels like this bit of code has not seen much love.

The Verdict (as of Feb 2017)

For DIY home automation purposes, right now, Amazon Echo is the better choice, for a few reasons:

  1. Easier custom skill development and deployment suited for DIY.
  2. The Smart Home skill API can get you to a very usable end result with good flexibility.
  3. Devices are cheaper. Pretty easy to get a few Echo Dots and spread them around.

That being said, things are still early. Google will likely open up a lot more stuff going forward, but to catch up they need to:

  1. Fix their deployment situation so that you can permanently deploy “private” integrations
  2. Enhance their API so that it’s easier to register a set of devices that can handle a common set of requests and program their behavior.
  3. Give me some cheaper devices!

Also, note that this whole space is really early. There’s a few things that I would really like both of these platforms to be able to do, and I’m hoping these capabilities are coming.

  1. Be able to capture arbitrary phrases w/o having to invoke the name of the integration.
    1. In Amazon’s world, you could imagine having a “default” skill which would get dibs on all the things that the user is saying and get a chance to handle them before the normal system kicks in. Another option would be to be able to use the Echo as purely a microphone and speaker system and be able to connect it directly to your own engine built using the new Lex and Polly systems.
    2. In Google’s world, I’m not sure what this means. Maybe they can enhance the IFTTT integration to be more capable. I also saw on some community thread the devs mentioned support for “implicit” actions.. one hopes that this means you can be invoked w/o your skill name.
  2. Be able to recognize who is speaking, and pass that in as a parameter. I don’t care if there is a training step required. Being able to respond contextually to me or my wife or my kids would be super powerful.
  3. Getting the ID of the device where the request originates. This seems like a simple thing, but neither the Echo nor Home APIs give you this when you’re processing requests. Many of the capture devices are in fixed places in the house. It would be really nice to be able to understand that the Echo in the bedroom caught the request and to map something like “turn off the lights” to just the lights in the bedroom. (Currently there is a hack to set up each of your Echos with a different Amazon account, but that is just gross.)
  4. More control over audio playback from an API. Right now there is no way to get either system to programmatically play a Spotify track, for example. (One really hacky way I thought of is to teach Alexa to ask Google to do something by actually playing back a response of the form “Ok Google, ….”)

In terms of general, non-DIY home automation usage, I actually prefer the Google Home. It’s fun to ask it all kinds of random questions and see it answer them usefully in a surprisingly large number of scenarios. The ability to also play music to groups of Google Cast devices is pretty awesome (though, if Sonos gets their shit together, I think both Alexa and Home will be equal in this regard).

Using MoCA instead of pulling CAT


I’ve been living in my current house for a couple of years now. Being our first house, I didn’t exactly have the experience to evaluate all the little details like networkability. I had a vague idea that I could pay someone to pull CAT6 through the walls, but didn’t realize what a disruptive endeavor that would actually end up being.

Shortly after moving in, I decided to upgrade from my Netgear R7000 router to a Unifi-based system. This meant I had to get high-bandwidth wires to key places so that I could hook up my access points. Fortunately, my house has a centrally located heating duct column that connects the attic to the crawlspace. With help, I was able to fish through some CAT6 and ceiling mount an access point right under the attic. Now I had pretty good WiFi coverage (UniFi stuff being pretty awesome), but the setup wasn’t complete. I wanted to have even better connectivity in the office, where I had my desktop rig. While I could maybe pull 400 Mbps through 5Ghz AC wifi, that was far shy of the gigabit speeds you would easily get with a physical wire.

I ruled out a bunch of potential solutions.. a dedicated wifi extender? Drilling more holes and mounting an AP just in the office room? Running outdoor-rated CAT6 up the wall on the outside and bringing it in to a box in the wall? All these ideas seemed like a lot of trouble, or were still limited in terms of overall bandwidth.

Then I remembered.. I have coax everywhere. I mean, literally everywhere. The previous owner was quite the cable TV enthusiast, and when he had done his remodel, he made sure most rooms had at least two coax jacks. I vividly remember when the Comcast guy first came out: we found 4-way and 8-way amped splitters at the cable box so that every room (and even the detached garage) could be fed with sweet, sweet cable TV. He told me that with so much splitting, he was doubtful he could get a good signal. I told him to unhook all the splitters and just connect the modem directly to the incoming feed.

My first thought was that maybe I could use the existing coax as a sort of fishing line to pull CAT6 through to the same places. I pulled out a few of the wall plates to see how much give there was, but unfortunately, the coax was clearly securely attached to the studs.

Defeated, I went looking on the web to see if there was any way to use these coax wires to do IP networking. I mean, they’re getting up to gigabit speeds with Comcast right? If a cable modem can do that over long distances, seems something should exist to do this within the confines of a house.

Which leads me to…

How to use your existing coax to build a MoCA network

MoCA is a standard to enable data networking over residential coax. My understanding is that some cable providers in the US already install MoCA equipment. I’ve only ever used a self-purchased modem with Comcast, so I never encountered this tech. But here is the magical product:

This is the Actiontec Bonded MoCA 2.0 Ethernet to Coax Adapter, sold in pairs on Amazon for around $150. These awesome devices let you bridge your ethernet network over your existing coax. Using this, I was able to use the coax jack in my office to bring near-gigabit speeds right to my desktop.


If you look closely, these adapters have a coax in and a coax out. That means they can be hooked in series with other devices that use the coax, and overlay the data network on the same cable.

Turns out this works even for the cable modem itself! These devices supposedly detect the frequency bands used by the cable modem, and use other bands to make sure things don’t conflict.

I use this exact setup. I have my main coax line come into a closet in the middle of the house. Here I have the cable modem, Unifi gateway, and switch. Before the coax hooks into the cable modem though, I have one of these devices in between. I then hook up one of the switch ports to the ethernet jack on the MoCA adapter which then uses the same coax line to spread the network out to other devices connected by coax.

Ok, that’s a mouthful. Here’s a picture instead..
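
In case the photo doesn’t come through, here’s a rough text sketch of the same topology (the splitter placement is my best reading of the description above):

```
        incoming coax from the street
                   │
              [splitter]────────coax (in-wall)────────┐
                   │                                  │
            coax in│                           coax in│
         [MoCA adapter A]                    [MoCA adapter B]   (office)
       coax out│       │ethernet                      │ethernet
      [cable modem]  [switch A + UniFi gateway]   [switch B]──[desktop]
```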

As you can see, the two ethernet switches here are connected to each other through the MoCA adapters, but the adapters are using the same coax line that feeds the cable modem! Maybe this is ho-hum for people who are familiar with MoCA, but I’m still amazed this works.


From reading reviews, it sounds like your mileage may vary here, but most reports suggest that this works way better than your average WiFi (AC 5ghz) connection or powerline network.

In my case I get a solid 930Mbps between devices connected directly to the two switches in the diagram above (measured with iperf3). Ping time is also a consistent 3ms. I’ve used it for a good year now, and have never noticed any reliability issues.

That being said, I’ve read also that certain things can affect performance:

  • Quality of the coax cable. RG6 is better shielded than RG59, but the latter is used in many earlier coax installs. I’m pretty sure I have some 59 in there as well.
  • Run length and splitter signal loss: the farther the signal has to travel on a long coax run, the more signal you lose. I currently have 3db of loss at the splitter and approx 70-100ft of coax between the two adapters, and it works fine. I’ve read in a spec somewhere that MoCA devices like this should be able to tolerate at most 15db of loss.

Other Notes

A couple more smaller considerations:

  1. Make sure your coax splitters support the full MoCA frequency range. Many old/cheap splitters do not, and can basically filter out the signal required for these devices to work. Basically, look for splitters that are compatible with the 500-1650MHz frequency range.
  2. Because MoCA works by pushing signal back out through coax (much like a cable modem), in theory the signal will also traverse back out the line that comes into your house. Some say the cable companies install filters on their end, but you never know. You can buy a “point of entry MoCA filter” to filter out those signals before they go back out to the cable provider (and your neighbors).

In the works

As you can see from my diagram, currently I only have a pair of these on my network. But since I have so many coax jacks to play with, I’m planning to get a few more adapters to be able to place more access points.

Separately, the MoCA Alliance has already announced MoCA 2.5, which apparently can support a 2.5Gbps bridge using coax. This is an interesting development, but there are no devices you can purchase quite yet. Very few people have 10GigE stuff at their house, and I expect the new 5GigE equipment to take a while to make it out to the market. Without something to actually use the 2.5Gbps supported by this standard, I don’t expect to see an adapter at this speed for quite a while. But when it does arrive, you can bet that I will jump on it immediately (unless I bite the bullet and pull CAT6 by then).

Panasonic LX100 Review

I’ve been in the market for an enthusiast compact for a while now. I had my eyes set on the RX100mk3, but then out of nowhere, Panasonic announced the LX100. 4/3 sensor? check. Manual-ish controls? check. Fast autofocus? check. Built in EVF? check. Small size? check. I couldn’t resist, so as soon as the first couple reviews came out, I picked one up for myself. Below are my impressions after about two months of ownership.

Image Quality

Overall, the image quality is great. The sensor is on par with any of the other 4/3 sensors out there. In fact, I suspect most of the modern m43 bodies are based on the same Panasonic sensor (E-M1, GX7, GH4, LX100). While the output is a little different from my E-M1, in terms of overall quality, dynamic range, etc., it goes toe-to-toe with it. Not terribly surprising given the shared sensor.
One thing that immediately turned me off was the default JPEG rendering. I think I’ve been spoiled by the E-M1’s JPEGs over the past year or so, but the LX100’s default JPEG setting is pretty terrible: crappy noise reduction and what looks to me like toy-ish colors. At minimum you have to turn the JPEG noise reduction setting waaaay down. For the most part, I avoid shooting JPEG on it.

People often make fun of the filters on the camera, but I personally like them. Yes, you can re-create the effects in Lightroom, but it may take you some time. I like having them right on the camera, and especially being able to preview the shot in the selected style. I like Panasonic’s “Dynamic Monochrome” more than the Olympus equivalent, but I like Olympus’s retro style more than Panasonic’s. Go figure.

The Lens

I think this was the big surprise for me. For my E-M1, I only have primes. I use the super-sharp Leica 25mm/1.4, the Olympus 45mm/1.8 and on occasion the original 20mm/1.7 — all three top-notch lenses.

As I started to shoot with the LX100, I tried to manage my own expectations. This is a compact zoom, with relatively large aperture. There’s no way it’s going to compete with the best primes for the system. I had read a few of the reviews talking about how corner sharpness suffered at some focal lengths.

But this lens has surpassed all my expectations. Yes, I bet if I peep, I can see some of the corner fall-off, but the center is damn sharp. In fact, the Leica 25mm prime is only sharper if you really stop it down to its optimal f/4. For something so small and convenient, this is an awesome result. The net of it is that I can do most of my shooting without really worrying about lens sharpness.

The long end of the zoom gets you about 75mm equivalent. While useful at times, I find 75mm to be a bit of an odd length and not that useful. If I had the choice, I would give up the 50-75mm part of the range for a smaller lens or wider aperture.

Speaking of aperture, one disappointment is that the maximum aperture decreases pretty rapidly as you zoom in. I really like to shoot at 35mm, and at that length, max aperture is f/2.3, which means you’ve lost almost a full stop from the maximum f/1.7. f/2.3 means that I need ISO 6400 to capture my kids at 1/160 indoors, which gives you a bit of grain. At 50mm equivalent, it’s down to f/2.7. This is where the Leica 25/1.4 still comes in handy.

Size

Of course the whole point of this thing is the small size. If you’re coming from a bigger m43 body or even larger system, the size does not disappoint. It will fit most large coat pockets, and will take up no space in any kind of bag. That being said, it is definitely not pants-pocketable. The lens barrel just sticks out too much, even when collapsed.

When I realized this, I initially hesitated. I really wanted RX100-levels of portability with the 4/3 sensor capability and better controls. However, as I’ve taken this camera everywhere, I’ve come to realize that it’s so small and light that I can just string it over my shoulder on the included strap and forget about it. It doesn’t get in the way of most light activities. For example, after a day at the park, when I get back in the car to drive home, the E-M1 would have to come off for me to drive. With the LX100, I just adjust the strap so that I wear it diagonally, then sit down and drive.

Physical Controls

The other big draw of the LX100 is the full set of external dials for all the major controls (shutter speed dial, aperture ring around the lens barrel, EV comp dial). In general, I love having these, and they make shooting with the camera lots of fun. Interestingly, when I tried a similar scheme over a year ago with the Fuji X-E2, I didn’t like it as much. But since then, I’ve learned to use the manual mode on my E-M1 a lot more. So I think this type of control scheme really requires a level of maturity with camera operation. If you’re still learning the ropes, the dials are going to require you to up your game and really think about all the parameters.

Even then, I still sometimes get tripped up by the EV comp dial. On a more DSLR-like control scheme like the E-M1’s, switching from shutter to aperture priority typically resets the EV comp value. Not so on the LX100 (or any other body that employs similar controls). Once you dial in -2 EV, you had better remember to switch it back when you’re on to the next shot. Other times, the EV comp dial gets knocked while the camera is in my bag, and next thing I know I’ve taken a few shots at -1 EV. Over time I’ve learned to be more vigilant about the dial, but it is still a source of stress.

The other slightly odd decision was to make the focus ring adjust the shutter speed by default. This is configurable in the menus, but the behavior is really confusing until you notice it. It’s very easy to move the focus ring ever so slightly, especially when you’re going for the manual aperture dial, and by doing so, you’ll adjust the shutter speed a little bit off of its dial setting. In general, for a given shutter speed dial setting, you can fine-tune it using either the manual focus ring or the rear wheel by up to 1 stop in each direction. While this is handy in that you can set “in-between” shutter speeds like 1/160, it also means that the current setting may not be exactly the same as the dial, so you have to look at the screen if you want to be sure.

Autofocus

For the most part, the autofocus is amazingly fast. I was really surprised by this. I’ve never used a compact camera that focused this fast. In most cases, it’s faster than my E-M1 with the 25mm/1.4 (not the fastest lens in terms of focusing). It also retains its AF speed in low light better than the E-M1. Basically no complaints here.

One of my torture tests for AF is to take shots of my kids as they go back and forth on the swings at the park. The LX100 can handle this no problem. Just set the shutter to 1/1000, use S-AF, and fire away.

I haven’t done too much with the C-AF. In my limited use, it seems OK, but not amazing. I suspect it’s probably on par with the GH4. Panasonic’s face detection works quite well, and I prefer its UI to Olympus’s. However, the one thing I’ve gotten used to on the Olympus is the ability to set it up so that your selected focus point is overridden automatically when a face is detected. On the LX100 (as on other Panasonic cameras), face detection is its own mode. So if you want to switch between face detection and single-point focus, it takes some menu navigation to do it.

Other Minor Observations and Nit-picking

One very nice feature is the nearly silent leaf shutter. I’ve played a bit with the Fuji X100s, but this shutter is even quieter. So much so that when you first start playing with the camera, you can take a shot w/o realizing it. I’ve seen this happen to multiple people I’ve handed the camera to. They push the shutter, they see the screen blank for a second, but because I had auto image review turned off, they’re not sure if they took anything. The shutter barely makes a sound and registers no tangible vibration. The ultimate stealth camera.

One disappointment is a lack of built-in flash. There’s a small flash in the box, which is usable in a desperate situation, but I much prefer a built-in bounce-able flash like the one found on the a6000.

It’s very hard to chimp your RAW files on this camera. While this problem exists on many cameras, it’s particularly bad on the LX100. The JPEG preview that is rendered when you shoot RAW is way too low-res. You can zoom in two levels to 4x, and that’s the limit. Any more and you just see stretched blur. The main problem is that the resolution is not enough to confirm focus. I just have to pray that I got it and check on the computer later.

I found the Wi-Fi feature OK when it works, but I’ve had tons of problems getting it to connect at all, especially when I’m out and about with no Wi-Fi networks around. Perhaps it’s an issue on the iPhone side, but it’s bad enough that I haven’t been able to use the feature much, even in cases where I really wanted it. Perhaps I’m just doing something wrong, but I’ve never had this much trouble from any of the apps from any of the brands.

I think one of the persistent complaints about the LX series has been the lens cap. The LX100 is no different. Included in the box is a standard snap-on cap with a short leash cord you can use to attach it to the body. There’s just something about it that feels super onerous when you just want to get a quick shot. There is a petaled lens hood, but availability of the black one in the US seems poor, even months after launch. I may just go with a UV filter, but haven’t decided yet.

Other reviews have noted this as well, but on startup, it takes a long time for the lens to fully extend. It’s sad because the camera feels so snappy once it’s ready to go, but turning it on feels sloooow. On a related note, if you turn it on and then go into playback mode and start looking at some pictures, the lens retracts. To get it to extend again, you have to half-press the shutter, but then it again takes its time to fully extend. Kind of annoying when you’re out: you take a few shots of something, review them in playback, and next thing you know the lens is retracted, so there’s a delay before you can shoot again.

I haven’t seen it mentioned anywhere, but the default luminance setting of the rear LCD seems really off. Images look as if you did a +2 EV exposure adjustment in Lightroom. You can tone it down in the menus, but this confused me initially. In combination with the fact that I often didn’t realize I had knocked the EV comp dial to a -1 EV position, I underexposed a bunch of shots. I’ve learned to keep an eye on the live histogram as much as possible.

Another minor nitpick: when you turn on both the leveling meter and the histogram in the live view, they tend to cover a large part of the image. When I handed the camera off to a friend while it was in this state, she asked me, “where’s the actual picture?” 😦

I haven’t used the 4K photo mode that much, but when I have, it’s been pretty nice. It’s certainly a very interesting way to shoot, and I definitely plan to play with it more in the future. For the scenarios I tried it in, it seemed highly dependent on the C-AF performance during video. In other words, you get 30 frames per second to choose from, but that doesn’t mean anything if your subject is moving out of the focus plane and the AF can’t keep up. Also, the in-camera UI to browse the video and pick out frames is kind of clunky and slow. It will work in a pinch, but it’s probably better to do it back on the computer, in which case I’m not totally sure what software I’m supposed to use.

Lens stabilization seems reasonable. It’s not anywhere close to the E-M1, but it does let you get down to the 1/20 shutter speed or so w/o much issue.

Bottom Line

In most of my day-to-day, outdoor situations, the LX100 has replaced my E-M1 as my go-to camera. It’s enough smaller than the E-M1 that I will take it out in more situations. So much so that I’m debating getting something bigger as my “big gun” (D750, A7ii, etc.).

I still turn to the E-M1 + 25mm f/1.4 for indoor situations (mostly because of the max aperture), though I’d prefer if that combo focused as quickly as the LX100. But those are often situations where I don’t need as much portability, so it’s starting to make sense to differentiate further and get something slightly larger for those situations.

Early 2016 Oculus/Lightroom PC build notes

The last time I built a PC was in late 2010. I was pretty sure it was going to be my last build ever. Apple products were slowly taking over my household, and the only reason for the build was to play Starcraft 2. I went with mid-range parts and ended up with a machine which served me well for 6 years, but had lots of little annoyances.

Fast forward to late 2015, and several things had changed. I had a bit more space to work with, so I no longer needed to cram everything onto a laptop. As much as I love my 13-inch MacBook Pro, I was getting annoyed at its performance (mainly its lack of a quad-core processor). While Lightroom is not the best at fully utilizing multiple cores, for the most common operations it can at least decently scale up to 4 cores. As I was starting to shoot more at my son’s sports outings, I was getting very frustrated with the sluggish workflow.

The simplest solution would have been to pick up a 5K iMac or a Mac Pro. Initially that was the plan, but then Oculus announced the launch of the Rift. I’m in the camp of folks who believe VR is going to be one of the biggest shifts in computing experience we’ve seen in a long time. It’s an opportunity for anyone to jump in and do foundational work in defining the computing experience of the future. I had to get involved somehow. But that also meant, for now, that I needed a PC.

After weeks of debating whether I actually could tolerate all the troubles of a PC, I decided to go for it. My goal was to build something that met the requirements for the Oculus Rift (both in terms of using the device and developing experiences for it), as well as provided a good Lightroom/photo-editing workstation. A second, but also highly important, requirement was that this machine be as silent as possible. My last build had all kinds of fan / coil whine issues which annoyed me every time I used it. This time I would do all the research to make sure I avoided those problems.

Below is the build I ended up with along with some build notes and observations.

The Parts

  • Intel i7 5820K CPU (LGA2011-v3 socket)
  • Asus X99-A / USB 3.1 motherboard
  • Kingston DDR4 4x8GB 2133Mhz Value RAM (KVR21N15D8/8)
  • EVGA GTX970 ACX 2.0+
  • Noctua NH-U12S CPU heatsink + fan
  • Samsung 950 Pro 512GB NVMe PCIe M.2 SSD
  • EVGA Supernova P2 750W Power supply
  • Fractal Define R5 White ATX case

Upgrade Path

  • Broadwell-E based CPU
  • More RAM
  • NVidia Pascal-based GPU
  • 4K or 5K monitor
  • Hard drives


Component Notes

  • CPU: If all you want to do is play Oculus games, then any of the LGA2011 CPUs is overkill. You should either go Skylake (i.e. Core i7 6700K) or even a couple of generations older, which still meets the Oculus minimum spec. I went with the cheap 6-core because a) it supposedly overclocks pretty well, and b) the cores will become more useful as Lightroom improves its multi-core support and also when I want to start developing software for Oculus. Also, X99-based platforms have higher memory limits, so I can easily go to 64GB, or move to a Xeon chip if I really feel the need.
  • Memory: From everything I’ve read, overclockable RAM seems hardly worth the effort. So I just went for baseline Kingston stuff, which I’ve always had good experiences with.
  • Graphics: The GTX 970 meets the minimum spec for Oculus. This was intentional. I wanted basically the cheapest card that meets the spec, for 2 reasons: Nvidia was going to introduce Pascal architecture-based GPUs in the next year (for which a large perf bump is expected), and I also really wanted support for DisplayPort 1.3 (which can properly drive 5K monitors w/o special hacks).

Build Notes

  • Overall: Pretty smooth build. Things have gotten way better since my last build, and I really appreciated the case having built-in cable management and a power supply which allowed me to remove all the cables I was not using.
  • Case Fans: One thing I was confused about was that the included case fans only have 3-pin connectors, whereas all the headers on my motherboard for driving/controlling those fans have 4 pins. After a bit of research, I realized that the 4-pin headers can accept the 3-pin connectors, and are in fact keyed in a way that forces you to align the pins correctly. In the BIOS these fans get correctly detected as voltage-driven rather than PWM-driven.
  • Coil Whine: Though I had done a ton of research, I wasn’t able to completely avoid it. I noticed that I get some at very high load (running Prime95) or during lots of network activity. Thankfully the Define R5 case has enough sound dampening that I don’t hear it.
  • The Fractal Design-branded fans that came with the case have a slight rattle when they are running. You can only hear it with your ear right up against them, but the Noctua fan on the CPU cooler has no such rattle. I might invest in a couple of Noctua case fans at some point to eliminate the rattle.
  • I chose a white case because I was sick of the standard gamer PC look. While the Define case is indeed white, it has plastic panels and other features that still make it a far cry from a cleanly designed Apple product.
  • The EVGA power supply produces some noise from its fan, unless you turn on the “ECO mode” switch. My understanding is that in this mode it only turns on the fan when it reaches a certain temp, which it doesn’t most of the time.

Update (Jan 20, 2016)

  • The EVGA GTX 970 card comes only with full-size DisplayPorts. The Dell P2415Q comes with only a mini-DP to full-DP cable. So I had to get another cable. Apparently the 2415Q can be a bit temperamental depending on the cable you pick, so I went with the Accell UltraAV (B142C-007B-2) cable, which purports to support the full DP 1.2 spec bandwidth.
  • There’s something a little wonky with either the display or the graphics card. Once a week or so, the display gets into a rut where it seems to lose sync on the signal every few seconds. This manifests as the screen going black momentarily and then coming back, with random screen glitches in between. Whenever it happens, turning the display off and back on seems to fix it. Par for the course for a $400 monitor? Maybe.
  • The display has an ever so slightly green tint on the left side. But you have to really look with a test pattern to see it. It’s subtle enough that in practice I never notice. Also probably what you get for $400. I will just have to wait until there are high-end SST 5K displays to be had for reasonable prices.
  • I installed the ASUS performance tweaking utility. It looked scary. While I had it up, Lightroom slowed to a crawl. Not sure if it’s related, but I uninstalled most of the ASUS stuff immediately. Lightroom isn’t slow anymore. Again, I can’t prove cause and effect, but I’m not taking any chances. Most of the software seems to duplicate what’s in the BIOS anyway.

HP Stream Mini + Ubuntu 14.04.3 + Ubiquiti Unifi Controller

I’m a recent convert to the Unifi system. So far I’ve gone in for two AC APs and the USG router.

The one annoyance with Unifi is having to run the controller. I’d been running it on-demand on my laptop, but wanted a better solution. Separately, I had also been thinking about setting up a low-power Linux server for some other home stuff. I decided to take the plunge after realizing that a small home server would also be a great place to run the Unifi controller 24/7.

Hardware: HP Stream Mini 200-010

I was looking for something cheap and low power, with a reasonable CPU, that could run Linux. I had vaguely heard about people hacking Chromeboxes (the desktop form factor of Chromebooks) for this purpose, but upon researching found out that it required a bit of hacking and even some BIOS ROM editing.

Then I came across the HP. I have mixed feelings about HP – I’ve used some of their good high-end laptops and workstations, but also a lot of junk – but figured this is a standard laptop-hardware-in-a-desktop build, so it’s hard to screw up. A few blog posts and YouTube videos out there also provided empirical evidence that a smooth Linux install was not a totally unreasonable expectation.

For those who are unaware of this product: it’s basically a Windows-based Chromebox competitor. It sells for ~$180, has 2GB of RAM, a Haswell-class Celeron, and a 32GB SSD. The RAM and SSD are easily upgradeable. And because it’s not a Chromebox, it has a normal, sane BIOS. Seemed worth a try.

Fast forward to today: the thing arrived in the mail. It took me a bit, but I did manage to get it deployed as a small server in a matter of a few hours. Here are my notes.


Backing up Windows

In case the thing didn’t work out and I wanted to send it back, I needed a way to reset to the factory Windows install. This is done through HP’s “recovery manager” software, which has a “create recovery media” option. Plug in a USB stick when prompted, wait a while, and voila!

Installing Ubuntu server

I used the instructions here to create a bootable installer for Ubuntu Server. The instructions call for Ubuntu Desktop, but I just downloaded the server ISO for 14.04.3 (amd64 edition) and it worked fine.

When booting with the Ubuntu drive, you really have to mash the escape key to get into the BIOS (the time window is really narrow). Once there, adjust your boot order so that the USB stick is higher than the “windows boot manager” entry in your UEFI sources, and you should be able to boot into the Ubuntu installer.

From there on, follow the standard Ubuntu installer. It’s been a few years since I personally had to do this, so I had a couple of minor points of confusion.

  1. I’m not sure if this behavior is only present in the server version, but the installer doesn’t attempt to configure the wifi chip, which means if you’re not wired, some of the network auto-config will fail. I just plugged it into the network for the initial install, and things progressed from there.
  2. It seems partition schemes are more complicated in the UEFI world. I just used the “guided use entire disk” option. I probably could have done something better.

Installing Unifi controller

I originally went to Unifi’s page to download their pre-packaged .deb for the controller, but then realized that their release notes have instructions for setting up an apt repository to get the controller. Sweet! I just followed the instructions there and got everything installed with only one minor complication.

I originally skipped the separate mongodb source repository that is labeled as “optional” in the instructions. I then couldn’t get my browser to connect to the controller (see below), so in the process of debugging, I tried adding the separate mongodb repo and re-installing. (The custom mongodb package has some file conflicts with Ubuntu’s native mongodb-clients package, so you have to do some muckery to get the custom version to install. In my case, I removed the custom mongodb source from my sources.list.d, then apt-get update, then apt-get remove mongodb-clients, then re-added the source, then apt-get install mongodb-10gen.)
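For reference, here is that dance as a shell sketch. The sources.list.d file name is an assumption (use whatever file you created from the Unifi instructions), and the package names are the ones the instructions used at the time; run the steps by hand rather than executing this blindly.

```shell
# Captured as a function for reference only -- these steps modify your
# system, so review before running. The mongodb.list file name is an
# assumed placeholder for wherever you added the custom repo.
fix_mongodb_conflict() {
  sudo rm /etc/apt/sources.list.d/mongodb.list   # drop the custom mongodb repo
  sudo apt-get update
  sudo apt-get remove mongodb-clients            # remove the conflicting Ubuntu package
  # ...re-add the custom mongodb source per Unifi's release notes, then:
  sudo apt-get update
  sudo apt-get install mongodb-10gen
}
```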

It turns out that my connection issue had nothing to do with which mongodb I used (though I did not try the Ubuntu-native one). Rather, it was that I was connecting to http://hostname:8443 instead of https://hostname:8443. The Unifi controller helpfully does absolutely nothing when you forget the s. It just closes the connection, leaving Safari utterly confused.
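A quick way to check whether a controller is actually answering over TLS is a curl probe. A minimal sketch; the `localhost` default is a placeholder for wherever your controller runs:

```shell
# Probe a Unifi controller's web UI over HTTPS. The -k flag accepts the
# controller's self-signed certificate; plain http:// on port 8443 just
# gets the connection closed on it, which is the gotcha described above.
check_unifi() {
  host=${1:-localhost}   # placeholder default; pass your controller's hostname
  if curl -sk --max-time 5 "https://${host}:8443/" -o /dev/null; then
    echo "controller reachable over HTTPS"
  else
    echo "no HTTPS response from ${host}:8443"
  fi
}
```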

Sadly it took me a good hour to figure this out, but once I did, things have been smooth sailing. To get your new controller to adopt all your existing devices, you can use the backup feature in the controller to export and then import the entire configuration. (As of Unifi 4.6.6, the backup feature is in the settings dialog under “maintenance”.)


In summary: cheap hardware (under $200), smooth Ubuntu install, custom apt repo. Don’t forget the https in your URL. Seems to be working well so far.

Update: memory upgrade

I decided to spring for a bit of extra memory to give me some headroom beyond the included 2GB. I ended up going with this 4GB Kingston module ($23 from Amazon). Popped it in and it seems to work fine.