The wife build

Finally got around to building Qian a PC of her own. Cobbled together pieces of my old P4 box (case, power supply), my newer desktop (my older monitor, 2GB of RAM [of which 1GB was inaccessible to 32-bit XP anyways]), and some peripherals she had for her laptop (keyboard / mouse), et voila, a $250 desktop (including tax!). And it runs pretty fast to boot, almost can’t tell the difference w/ my desktop.
There were some new parts (little reviews for each):

  • Intel Pentium E2180 Allendale chip (2.0 GHz)
    This is the new replacement for the Celeron. It’s based on the same microarchitecture as the C2D, just with less L2 Cache. Definitely fast enough for non-gaming use.
  • Intel DG35EC mATX G35 chipset mobo
    Main thing here was that it has everything I needed, including DVI and HDMI outputs. This is a pretty complete board featuring the X3100 integrated GPU, ICH8 (which for some reason doesn’t have XP 32-bit AHCI drivers?) and built-in gigabit/sound. A little on the spendier side for motherboards these days ($100 off of newegg), and definitely not an OC’er’s board, but that’s exactly what I wanted. It comes back from standby very quickly (I’d say a second or so; feels like a Mac).
  • Seagate 7200.10 250GB hard drive (3.0 Gb/s SATA)
    This is pretty standard, though I see that they make them slimmer than they used to. Probably it has just one platter.

The mATX board went into a recycled Lian Li aluminum full-ATX tower case, which makes for a very empty inside, especially with no extra PCI cards. This lets me keep the noise down by not plugging in any of the case fans. The CPU seems to stay plenty cool with just the CPU and power supply fans.
I was interested to see how power-hungry this minimal box would be. The other relevant part power-wise is the 430W Thermaltake power supply. It’s not particularly quiet (two fans) nor efficient (not 80 Plus). With all of it plugged in, my Kill A Watt tells me that the wattage from the socket is 70W at idle. That’s higher than I would have liked, but it’s fine for a mostly-on-standby machine. My desktop with an efficient Seasonic power supply and C2D E6600 measured in at 110W idle, also higher than I expected.
I did the math to see how much this actually costs me. One watt of consumption left on 24/7 for a year comes out to about $1 on the electricity bill, at the SF-standard 12 cents per kWh.
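That estimate is easy to sanity-check. Here’s a quick sketch of the arithmetic (the 12 cents/kWh rate is the assumption from above):

```python
# Yearly electricity cost of a constant load left on 24/7.
# The rate below is the assumed SF figure from above (12 cents/kWh).
RATE_PER_KWH = 0.12

def yearly_cost(watts, rate=RATE_PER_KWH):
    kwh_per_year = watts * 24 * 365 / 1000  # watt-hours -> kWh over a year
    return kwh_per_year * rate

print(round(yearly_cost(1), 2))   # one watt year-round: 1.05
print(round(yearly_cost(70), 2))  # the new box at idle: 73.58
```

So the 70W idle box costs roughly $74 a year if left on continuously.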
I’m still doing research to build an always-on home server. It looks like trying to get under 50W (ideally 40W) at idle will be the goal there. I know it can easily be done with a VIA board, but I’m not so sure of their Linux support.
At the risk of starting to sound a little old, it’s amazing to think that you can almost buy a full PC for $300-400 these days. In 1992 dollars, that’s about $200 (1992 is when I first started really caring about spending my own money to augment our home PC), and $200 maybe got you a new hard drive back then.
On a final note, I’m back down to one monitor again. Feels kind of cramped, and at the same time refreshingly simple. Dual monitors is nice, but always adds extra stress when managing windows. Maybe this will motivate me to resume my search for a new monitor.

More Hardy on X60 investigation…

Still tinkering away with Ubuntu Hardy Heron on the Thinkpad X60. Since my last posting, Ubuntu updated their kernel to 2.6.24-16, and the corresponding modules package has a newer iwl3945 driver (version 1.2.0, though compat-wireless has 1.2.23). The wireless on this machine appears to work out of the box now. I suspended and resumed a few more times, and that appears to work fine as well.
The only last remaining show-stopper is the heat. This laptop’s thin design means when its insides get hot, you can really feel it in your hands. In particular, many internet postings note that the right half of the wrist rest area tends to heat up.
In addition to what I mentioned in my last post, I’ve learned a few new tricks. For example, to set the power saving setting on the hard drive:

sudo hdparm -B 250 /dev/sda

The value can be anywhere from 1 to 255. Counterintuitively, lower values mean more aggressive power saving; 254 is maximum performance, and 255 disables power management entirely. Apparently the aggressive low settings can park the drive heads often enough to wear out the drive, so a high value like 250 is the compromise. (I doubt there are actually 255 distinct settings, but staying near the top of the range seems to be the important part.)
To get a good read on the temperature of the drive, use the hddtemp utility:

sudo apt-get install hddtemp
sudo hddtemp /dev/sda

With a value of 250, my drive appears to warm up to about 36C and then stay around that level. That’s about body temperature, so it doesn’t feel particularly warm. Making sure to run hdparm seems to keep things a bit cooler.
I’ve set the fan to run at max cooling (using the thinkpad_acpi module in experimental mode), and then followed all the tricks detailed at Intel’s LessWatts site. After all this, the CPU temperature hovers under 38C, but my mini-PCI sensor still tells me that it’s between 41C and 42C, which I can definitely feel.
I even went so far as to track down this image (courtesy of PC Watch) that shows the board layout of the X60. It’s that big black chip in the middle that’s getting hot. Unfortunately, from the image I can’t tell what that chip is, and I don’t really feel like dissecting my laptop. If anyone has any idea what that chip is, let me know. But my current working theory is that there is some other power management setting that should be tweak-able, but it’s just not available on Linux at this point in time.
Oh well, back to Windows.

Opening Microsoft

I guess Microsoft just made an announcement that they’re going to become more open or something. The announcement seems pretty vague, but let’s hope that they at least do some of what they promise.
But I tend to think that this is not Microsoft’s attempt to become all nice and fair. History has shown that businesses will be nice and fair only when they have to be. Microsoft, being the target of constant antitrust investigations, has some incentive to be nice and fair, but that’s not their true motive.
I think people within the company are starting to realize that, given the presence of open source software, and of a new set of companies whose core technologies are not based on MS software, if MS is to retain its position as the platform to build on, then they are going to need to change their game. Although many will argue that the MS platform is still better than the OSS platform (and in many ways, it still is), the advantages it has are diminishing at an accelerating rate.
Companies have realized that it’s no use trying to compete directly with Microsoft. You have to do something that MS doesn’t. And when you figure out what that something is, there’s a good chance that you’ll build it with OSS. Why? Because usually to have any chance of success, your idea will have to be radically different from what MS does, and the more different it is, the more attractive OSS looks because it’s so much more flexible. Using an MS platform makes sense when the changes you need to make are in line with the flexibility that the MS platform provides. But with OSS, everything is flexible. You can literally change any part of the code if you’re so motivated. This is a huge advantage (as well as a nice assurance) for those looking to build something new.
A MS that keeps chugging along unchanged, continuing to define only on its own terms how flexible its platform should be, will eventually be obsolete. Their announcement today can be seen as an acknowledgment of this reality. For MS to survive in such a world, they need to make their technologies more flexible — able to be re-combined, built upon, mixed-and-matched. This is not going to change the fact that thousands of great paid programmers will be able to produce code that is better than what many OSS projects can provide. But the same programmers will not be able to compete against _all_ of the OSS world. It’s simply too big and too dynamic. So in order to provide value in such a world, MS will need to concentrate on its best products, and make sure what it produces will integrate well with the rest of the world. Otherwise, it will slowly drift into irrelevancy.
There’s one thing that OSS people always seem to forget when they bash MS and their closed source ways: MS doesn’t _need_ to go open source, as long as it can still “win” being closed source. And not only MS. It is still strictly advantageous for a software company to have a closed source component of their offering. That comprises their unique value, and gives them a competitive edge. But this also does _not_ mean a closed-source company cannot later go open source. If it can be shown that it will work to their advantage, then there’s no reason for them not to. In fact, it would be stupid for them not to. But MS’s business is huge and complicated. It’s still extremely unclear whether opening up all their code would be of any benefit to them. But it’s my firm belief that the day it does become an unmistakable benefit, MS will be right on it.

KDE on Windows

Although I haven’t used KDE in a while, I applaud their efforts to make KDE apps available on Windows. Having lots of users is one of the most important aspects of a successful open source project, and the KDE developers have just opened themselves up to the remaining 90% of the user base.
The more you can get Windows users to use open source software on Windows, the fewer barriers they’ll face if/when they ever switch to another desktop OS.
My initial guess is that Amarok for Windows will be the most popular. I’d also be interested to see if they could produce a better WebKit-based browser than Safari for Windows.

Macworld Keynote thoughts

Time Capsule: meh. I don’t know that it’s any more compelling than services like Mozy. I guess it could be convenient, though. I can see that people who just want it to work would pay for it.
iTunes movie rentals: Cool, but Netflix is still cheaper. If it were $1.99 per rental, then I’d definitely be in. At the current price, I’m still on the fence.
MacBook Air: Cool, but skeptical. I guess I’m not sold on this type of machine anymore. And it seems a little expensive to be used as a “subnotebook”. I’m sure it’s great for all those who would have bought a MacBook. I’ll see how the ThinkPad folks respond.
Other thoughts: They didn’t update the cinema displays.. but maybe they just didn’t want to take up keynote space with it, since it’ll likely be fairly unremarkable. I’m really happy that they didn’t mention anything about HDCP and other hardware DRM schemes. It’s really sad that MS has spent all this time building the technology for nobody to use, while Apple managed to get an easy to use rental scheme in place that a lot of people will use.

Holy Smokes

I spent an evening messing around with Pyblosxom. I was going to use it to restart my keyboard blog. It was supposed to be a somewhat fun exercise in Python, and I was also tired of the Movable Type admin interface being so slow.
After a couple hours of hacking, I decided that Pyblosxom is too much work.
And then I found out about MT4’s built-in FastCGI support. This article provides the config details. It’s strikingly simple. You also have to turn on FastCGI for your domain using the Dreamhost Control Panel if you’re on DH.
Now the admin interface is much more tolerable. Looks like I won’t have to roll my own after all.

Dell 3007WFP-HC continued

I just came back from the Apple store to check out the somewhat older 30 inch cinema display.
It also has a noticeable anti-glare coating. But I have to say I didn’t find it as distracting as on my Dell. While it was fairly prominent, it seemed to produce less color variation. It’s important to note that this wasn’t apples-to-apples: the Apple store has more ambient light than my room at home, and the white point of the screen also seemed different.
The glossy screens have none of this effect at all, so it really must be the anti-glare coating. I’ve always preferred matte coatings to glossy, but when the matte is this salient, the glossy seems a bit better.

State of FOSS Review 2007

It’s the end of the year, and my office is empty, so seems like as good a time as any to reflect on the past year of FOSS developments.

Noteworthy Happenings

Ubuntu 7.04 and 7.10 were released: While perhaps not as revolutionary as the original Ubuntu releases, these releases represented incremental improvements and solid steps forward for Ubuntu as a viable desktop alternative. Dell also started selling Ubuntu pre-installed on a limited line of laptops, though the uptake, as far as I understand it, has been pretty small. The newest Ubuntu release also brought 3D desktop effects to the masses, though not without its problems.
Several media-covered Linux-based devices released: This includes the Asus Eee PC, as well as the OLPC and the Walmart gPC. All these are instances of Linux-based systems finding niche markets to fill.
ATI’s major shift in attitude: After AMD bought them, they became much more agreeable to opening specs for their drivers.
That’s far from a complete list, just what comes to mind.

Commentary

Money still makes the world go ’round
As far as I can tell, nothing happened this year to show otherwise. The most successful projects were those with significant commercial interest and financial backing supporting them. The canonical example is Firefox: the Google-Mozilla partnership is a key example of how open source technologies can be developed for money but distributed for free. Another example in the same domain is WebKit. With Apple, Nokia, and now Adobe and Trolltech all riding on the same code base, there is a lot of monied interest in advancing this piece of code for the benefit of all.
As far as desktop operating systems go, there are few instances where desktop Linux distributions have been connected to strong revenue streams, at least not directly (I don’t quite count Ubuntu, even though it’s backed by Shuttleworth’s money. I’ll count it when it’s financially self-sustaining). There is a lot of peripheral work happening that makes one hopeful for the future, however. Companies like Intel have realized that being tied to Microsoft is not necessarily a good thing. Support for the latest CPU features seems to always happen in Linux first, and I’m sure Intel’s contributions have a big part to do with that. I’ve been told also that many Intel engineers are working on core ACPI support. Advancing the Linux platform features for desktop and laptop scenarios can only benefit Intel, provided that they can do it cost-effectively.
On the other hand, devices like the gPC, Eee PC, and OLPC may not have been possible without Linux. Sure, there are devices like the UMPCs, but seriously, who’s going to pay $1000+ for these devices? Linux on the small form factor allowed super-cheap devices with lots of functionality for the price. Better yet, these devices actually sell, which provides financial incentive for developers of the software running on them to advance the platform.
If you look at it in the context of the lessons described in The Innovator’s Dilemma, small form factor machines might just be the “new use case” that Linux needs to really go mainstream.
FOSS does provide value and facilitates innovation
Just not in the way that people traditionally think. FOSS software ‘products’ usually don’t succeed in the traditional sense of shrink-wrapped or consumer-purchased software. By themselves, they don’t provide turn-key solutions for a user’s problems.
Instead, FOSS is really good at producing good quality commodity software components. These are reasonably solid implementations of commodity software ideas. For example, a good XML library, or a good mail sending mechanism, etc. While these components aren’t useful to end users directly, their free (as in speech) availability to commercial companies greatly improves the companies’ ability to produce new value by re-using them.
WebKit is a great example.. it allows Apple to develop a slick browser, while all of the core HTML code is essentially commodity (though they have made their share of improvements). Adobe is able to develop a radical idea like the Adobe Integrated Runtime because it can just re-use WebKit, and not have to develop an HTML rendering library all on its own.
VMware is another great example of this. We use open source software components all over the place, whether it be GTK, libxml, or a bunch of random Linux/Unix tools. A huge part of our internal developer environment is based on free tools, and we have a fairly high number of Linux fans as developers. We also try to work with the community where it makes sense, whether it be submitting patches to GTKmm, or trying to get a fair paravirtualization framework into the kernel. If you ask many of our developers, they will say without a doubt that our ESX product (the main money maker) would not have been possible if Linux didn’t exist.
A great number of web companies also work the same way, whether it be Facebook, or Google, or Yet Another Web 2.0 startup. The free availability of Linux, MySQL, Apache, and various server-side programming environments really lowers the barrier for new companies who want to build upon existing technologies.
You could even look at distributions as an example of this. They provide a lot of value (an entire OS package with a lot of built in features) for free! Surely this would not be possible were it not for the myriad of open source projects that are out there producing good code.
So while it will probably remain for at least the medium-term that “products” that land in the user’s hands will be produced by “companies”, FOSS has already provided immense value to companies who know how to leverage it to further their own products.
FOSS keeps companies (and prices) honest
That’s all great for companies, but what about Joe consumer?
I just talked about how FOSS technologies allows companies to create high-value products at less cost by re-using commodity technologies. There’s a corollary effect that benefits the consumer as well.
Lower production cost for a product should result in lower prices. But even then, a company may choose to try to sell a product for a price higher than what you’re willing to pay, especially if the market is willing to pay it (perhaps it’s also ironic that I mention this, since VMware is accused of doing the same thing). But here’s the deal.. if the product is partially based on freely available commodity technologies, then a company can only really charge money for the parts of the product which are not free. If a commercial product is mostly built with freely available parts, then it should be close to free.
With time, as more and more consumers and corporations use open source products, the feature set available in FOSS products tends to increase. This also means that the functionality using just baseline open source software tends to increase. Assuming the cost of this baseline functionality is virtually free, companies that provide non-free solutions will be immediately measured against the free solution. This essentially limits companies from setting prices on their products too high. If they do, it creates a gap for other companies to come in, leverage more free software, and produce a “good enough” product for a lower price.
There are so many examples of this effect that I’ll just talk about a few that I know about.
Home NAS server based on Linux: For the power user, these storage devices (available from numerous vendors) provide a huge amount of functionality. The D-Link DNS-323 not only does basic file service, but has an FTP server, an iTunes server, and a UPnP server, all based on open source code. Furthermore, with a little work, you can install all types of other servers. All for less than $200. Windows Home Server debuted at less than $200, and it’s still unclear that it will even succeed at this price given all the alternatives.
Linksys WRT54G router: This is another famous one. Firmwares based on open source code now give this less-than-$50 router the features of a high-end enterprise product.
$3 Windows XP in China: Microsoft has to sell XP in China for $3 a piece to get anyone to buy it. This isn’t entirely due to Linux, since the main problem is the pirated copies of Windows all over the place, but the availability of the Linux alternative clearly has an effect here.
Vista Ultimate for $400: If you look at XP, and see it was several hundred dollars for a full version, and then look at Vista, look at the new features, and do the math, you might be able to somehow justify that Vista Ultimate can be worth $400. But these days, instead, you look at Ubuntu. You ask, what do I get in Vista that I can’t get in Ubuntu? Is that really worth $400?
Apple computers: I claim that Apple machines would be way more expensive, and way less featureful, if it weren’t for open source software. They use OSS in tons of places to add features to their platform.. whether it be gcc, or WebKit for Safari, SQLite for Spotlight, or the BSD stack. It’s clear that Apple is winning a lot of converts from the Unix/Linux world because of their inclusion of all the basic Unix features. And at what cost? It’s probably far cheaper for them to maintain their Unix tools than it is to develop all their core closed-source code. And yet, their small investment has won them a lot of new users and a lot more hardware sales.
Windows is here to stay (especially on the desktop)
For a while at least. I know I just wrote all about how FOSS is great and how it’s changing the world. But here’s the thing: Windows has lots of aspects that aren’t so easily commoditized. Therein lies its real strength. Some of these aspects include:
User Community: Open source projects like to talk about their big communities. They should, as community is fundamental to their success. However, look at the size of the community around Windows. It is astronomically large compared to Linux or even the Mac.
A larger community leads to more diversity. The diversity of software on Windows (at least in areas where MS allows it) is amazing. The stuff you can find for free is also amazing. Have some problem with XP? Google is much more likely to find the answer for you than for some random problem with some random Linux distro.
Great Commercial Applications: Nobody is going to contest the fact that there are still great commercial apps available on Windows for which no FOSS equivalent exists: Photoshop (and the entire Adobe/Macromedia suite), Office, games, sound editing and production apps.
Porting these applications to Linux or even OSX is non-trivial. Companies are unlikely to invest in these efforts without a crystal-clear business case.
Also, since the Windows platform represents so many users, many of the best open source applications are available on Windows. Firefox, Thunderbird, Vim, Emacs, Pidgin, MySQL, Apache, Gimp, Cygwin, the list goes on. The more such programs are available on Windows, the smaller the advantage of running an open source OS to run these apps. (Note, however, that if these apps become the best-of-breed apps on the platform, then running Windows becomes less important.)
Infrastructure and Eco-system: This is related to community, but slightly different. There’s a machine-like aspect to the Windows software and hardware world. And many a company’s financial livelihood depends on this machine, for better or for worse. There are many aspects to this machine, but some of the more important ones are:

  • The WHQL process
  • Stable driver frameworks and application APIs
  • Strong documentation
  • Backwards Compatibility
  • Best-of-breed Application development environments (VS, IIS, .NET)

To put it another way, all these “features” allow other companies to make money off of Windows. If a software company wants to produce a new desktop software product, it will choose to support Windows first not only because it has the biggest market share, but also because it has the best environment for deploying a new application. And this is no coincidence. MS has spent tons of money to make this work well (which also leads to the mystery of why they broke it so much with Vista), and Linux (especially on the desktop) has a long way to go to match Windows in these aspects.
That’s not to say that distributing third party apps on Linux is impossible. Google, VMware, and a handful of other companies have shown that it can be done. The problem really is that it’s still really hard. Hard enough to make it unattractive. Hard enough to make Linux not the first choice.
What’s even more unclear is whether the FOSS world will be able to solve this problem. FOSS projects tend to have a “you become OSS too, or screw off” attitude. Such is the case with kernel drivers, and in practice, with other components as well. Purely FOSS projects are not good at maintaining backwards compatibility, stable APIs (both source-level and binary), and documentation. This is not usually because the project members are incapable; rather, it is a dynamic of the development model. These problems are really hard, require lots of work, and the community hasn’t figured out how to solve them without paying people to do the work.
There are exceptions, of course, like the Linux syscall interface. Or the basic X protocol. Even glibc, or the python language. But these things are all really basic. Microsoft extends this stability, compatibility, and documentation much more broadly across all their products and technologies.
This also applies to hardware itself. If you’re a hardware company, and you want to get your new device out there as quickly as possible, which platform are you going to choose? If you go with Windows, you get a stable driver interface that you know will work for at least a few years, as well as lots of potential users. You can also distribute the driver with the hardware, and update it on your own schedule. If you go with Mac, you can still write a driver, but who knows if Apple will break the driver interface. If you go with Linux, you can write the code, even submit it to the kernel, but who knows when the next distro release will pick it up? What if you want to keep some of the code closed? Play nasty binary-blob tricks like the Intel driver or the Nvidia driver?
Anyways, the point is there are still lots of aspects of Linux that make it less suitable for the desktop. It’s not surprising really. It’s Microsoft’s turf, and they’ve mostly succeeded at it for the last 15 years.
Evolving the Open Source development model
If open source is to succeed more and more, the development model needs to evolve. Companies continually evolve their development processes as the competitive environment changes.
One of FOSS’s fundamental strengths is its ability to connect developers to their users in a way that cuts through a bunch of bullshit. It’s not just about the source code. It’s about the open development model — which means open discussion forums, open bug tracking systems, open decision making systems. This strength also lets hobby developers and user-developers contribute back to the project relatively easily. The easier this is, the better for everyone.
Here are a few areas where I think there can be improvement:
Reporting of crashes: Right now, on most Linux distros, users have to install weird debug packages to get any kind of useful backtrace info on a crash. That’s a big burden. A Windows-like PDB scheme (a separate debugging info file) could be a huge benefit here.
Improve the communication chain from the user to the developer: I think Launchpad.net is fantastic in this regard. There is not really that big of a reason why every OSS project should have its own bug tracking system. There is, on the other hand, a huge benefit to consolidating the various bug tracking systems into one that is cross-linked and comprehensive. The easier it is for a motivated user to contribute problem reports back to a developer, the better.
Make it easier to hack on things: It’s surprisingly easy to start hacking on Windows, even for a component that is in the middle of a complex dependency chain (a DLL, for example). Download the source code, build it, replace the DLL, and go. I don’t think it’s so easy on Linux. Complex package managers are partially to blame. How do I hack on just cairo and pango, for example? I have to check out their trees, then know enough about my distro’s layout to rename files and make symlinks and do all kinds of hacky stuff just to make it use my local versions.
Another way to put this is that for Linux development, the developer’s environment can look very different from the user’s. A different kernel, different libraries, different compiler. All are possibilities. The more the development environment can be standardized, the more efficient development can be.
A wild and crazy idea maybe is a more unified distro and upstream development system. The distro side doesn’t take tarball drops from upstream, but rather just integrates their source control directly. The distro builds its packages by synching to known revisions in the upstream tree (maybe also maintaining patches locally), but then also gives the power for the end user to do the same. So as a user, I could check out the same sources that the distro is using, rebuild the same piece of my installed system, and also get all the benefits of a SCM system for my local changes (which also makes it easy for me to send patches).
I still use Windows
Lastly, I’ll just note my personal reasons for still using Windows (XP 32), so that I can re-evaluate next year.
Applications: Some apps I want are only on Windows; some commercial (Lightroom), some free (FABDVD Decryptor). The OSS apps I want are mostly on Windows too (Firefox, Gimp, Python). A notable exception is Rhythmbox, but I can live with iTunes. I also prefer gnome-terminal over PuTTY, but I can also live with the latter.
Other features: Font rendering is still better on Windows. Though I don’t mind the Mac’s rendering so much either. Color management. Solid support for input methods. Webcam/skype. Blackberry. Occasional games.
I could see moving back to a desktop form factor Mac in the next year though, especially now that VMware Fusion is available.
Finally
Longest blog post evar. I wrote it mostly for myself to solidify a lot of the stuff I’ve been thinking about recently. Hopefully someone else will at least find part of it interesting.

Display price update

Over a year ago, I wrote up a quick and dirty analysis of display prices. Seems like a good time to revisit that now.
First, I’ll note the differences in prices since last time:

  • A Dell 30 inch (3007WFP-HC) is now $1189 (down from $1700 for the non-HC version)
  • Apple’s 30 inch is now $1800 (down from $2000)
  • There are several other 30 inch contenders, including Gateway (~$1700), Samsung 305T (~$1250)
  • Low end 20 inch displays are now $250 ~ 300 (down from $400)
  • High end 20 inch displays are now $800 ~ 1000 (down from $1200)
  • Low end 24 inch LCD displays bottom out around $350
  • High end 24 inch LCD’s are around $700, except for the Eizo’s which are still around $1300 (down from $1700)

Also new is the introduction of the 27-inch form factor, which still sports a 1920×1200 resolution, and which enters around $1000.
Based on this info, I calculated the same price per display square inch, and price per kilopixel, as I did last time. For the price of each monitor size, I looked around and found the model that I would pay for, and took that as a representative price. I’m probably pickier than most, so results may differ for you.

Diagonal Size   Price   $/sq inch   $/kpixel
20 (4:3)        350     1.82        0.18
24              600     2.31        0.26
27              1000    3.04        0.44
30              1200    2.96        0.29

As expected, the 30-inch model is cheaper per-pixel than the 27 inch models. The 27 inch models seem like an especially bad idea if pixel count is what you’re after. The 20 inch is still the cheapest, both in terms of physical display area, and pixel count. The price per pixel of the 24 is closer to that of the 30 than it is of the 20.
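For the record, here’s a sketch of how the two ratios are computed. The native resolutions (1600×1200 for the 20-inch 4:3, 1920×1200 for the 24- and 27-inch, 2560×1600 for the 30) and aspect ratios are my assumptions for typical panels of each size:

```python
import math

def area_sq_in(diagonal, aspect):
    """Display area in square inches, from diagonal size and aspect ratio."""
    w, h = aspect
    scale = diagonal / math.hypot(w, h)
    return (w * scale) * (h * scale)

# (diagonal, representative price, aspect ratio, assumed native resolution)
monitors = [
    (20, 350,  (4, 3),   (1600, 1200)),
    (24, 600,  (16, 10), (1920, 1200)),
    (27, 1000, (16, 10), (1920, 1200)),
    (30, 1200, (16, 10), (2560, 1600)),
]

for diag, price, aspect, (rx, ry) in monitors:
    per_sq_in = price / area_sq_in(diag, aspect)
    per_kpixel = price / (rx * ry / 1000)  # price per thousand pixels
    print(f"{diag} in: ${per_sq_in:.2f}/sq in, ${per_kpixel:.2f}/kpixel")
```

The numbers land within a cent or so of the table above; the small differences are rounding.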
Update: Some math corrected.. my spreadsheet had the 24 and 27 inch models as having 1600×1200 resolution instead of 1920×1200. That makes them a bit cheaper in the price-per-pixel column.

Poor man’s high resolution monitor

Many people have been waiting for high-PPI monitors for a long time. There was a brief period a few years back when laptop companies offered WUXGA 15.4 inch screens, but it appears those panels proved unpopular among the general population, who still had to deal with tiny fonts in Windows.

The poor man’s way of making your monitor seem like it has greater resolution is to sit farther back from the surface of the screen. But how much farther back? Well, let’s do a little math.

The equation in the diagram relates PPI, distance from the screen, and the angle of the arc covered by a single pixel in your field of vision (theta). For two different configurations, if the value of theta is equivalent, the perceived PPI of a screen should also be equivalent.
Here are a few set of values that I calculated:

X (distance, inches)   Y (PPI)   theta (degrees)
18                     96        3.315e-2
18                     200       1.591e-2
36                     96        1.651e-2
18                     150       2.122e-2
28                     96        2.131e-2

96 PPI and 18 inches is a pretty standard setup. I tried doubling either the PPI or the distance to see what would happen, but as you can see from the equation, since X and Y both play equal parts in the denominator, doubling either variable produces the same effect.

The last two rows I used to figure out how far from a 96 PPI monitor I would have to sit to perceive 150 PPI.
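The relation itself is simple: a pixel is 1/Y inches wide, so from X inches away it covers an angle of theta = arctan(1/(X·Y)). Here’s a quick sketch that reproduces the table (within rounding):

```python
import math

def pixel_arc_degrees(distance_in, ppi):
    """Angle covered by one pixel: a pixel is 1/ppi inches wide,
    viewed from distance_in inches away."""
    return math.degrees(math.atan(1 / (ppi * distance_in)))

for x, y in [(18, 96), (18, 200), (36, 96), (18, 150), (28, 96)]:
    print(f"X={x} in, Y={y} PPI -> theta = {pixel_arc_degrees(x, y):.3e} degrees")
```

Since X and Y only appear as a product, sitting twice as far back is equivalent to doubling the PPI, as noted above.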

So I’ve tried pushing my screens back at work to see how it works out. I don’t have a tape measure, so I can’t be exact about the values, but there seems to be a positive effect so far. However, there’s also something very jarring about reading text on monitors that are so far away. I’m not sure if my eyes are just not accustomed to it, or whether humans have a natural reading distance for small text.

But the whole point of this was to see if it would make the color fringing on my Ubuntu-rendered subpixel anti-aliased text less noticeable, and it seems to at least partially have that effect.

Update: I left out a critical step in simulating the high-DPI experience.. which is to adjust your font sizes. In Gnome this is easy: just go to the font control panel, hit “Details”, and then set the DPI to a higher value. This should get reflected in most of your apps. Without doing this, you’re essentially just looking at really small text. If you double your perceived DPI without modifying the fonts, then you essentially made your 8pt font look like a 4pt font. Doubling your font DPI setting will make your 8pt font 8pt again, giving it twice as many pixels to render with.
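A minimal sketch of the point-to-pixel math behind that (1 point = 1/72 inch, so a font’s pixel size scales linearly with the DPI setting):

```python
def pt_to_px(points, dpi):
    """Pixel size of a font: 1 pt = 1/72 inch, so px = pt * dpi / 72."""
    return points * dpi / 72

print(pt_to_px(8, 96))   # ~10.7 px at the stock 96 DPI
print(pt_to_px(8, 192))  # doubling the DPI setting gives twice the pixels (~21.3)
```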