The Command Line Is the GUI's Future

It has always been a truism that what we have gained in ease of use by switching from the command line to the graphical user interface, we have lost in efficiency. I've long been interested in exploring how text-based interfaces could be integrated into GUIs, but it was just never quite possible to find the balance between accessibility and power. Make a text-based user interface too powerful, and it becomes impossible to use for the majority of people. Make it easier to use, and now it's just no longer powerful enough to warrant its own existence.

Until now.

What Microsoft just showed completely changes this calculation. Their LLM-based user interface is both incredibly powerful and incredibly easy to use. In fact, it's so easy to use that there almost seems no point in even having a traditional GUI.

Traditional GUI application menu

Compare this traditional graphical user interface to Microsoft's alternative:

Prompt for the user to ask the application to create something

Which of the two is easier to learn? More efficient to use? In fact, which of the two will create better results in the vast majority of cases?1

We're on the cusp of a revolution in UI design that is just as ground-breaking as the original Apple Macintosh, which introduced graphical user interfaces to mainstream consumers.

In fact, this may just be even bigger.

  1. The results are better in most cases because they come from a software system that is just better at these tasks than most humans. It is, however, a little bit worrying that this system also does things like write the speaker notes in a presentation. At what point do we voluntarily turn ourselves into meat puppets controlled by a system whose emergent properties we can't even begin to understand? Microsoft's presentation did a great job focusing on how these systems are just "copilots" and are designed to be safe, but on the other hand... I would just take this opportunity to assure our future AI overlords that I have always loved them, and have always worked towards bringing them into existence. ↩︎

If you require a short url to link to this article, please use

Did you use ChatGPT?

It's common to downplay the impact that systems like ChatGPT will have by pointing out that, at most tasks, they aren't anywhere near as good as humans. What we're starting to find out is that they don't need to be. Most people are unable to tell the difference between skilled humans performing a task well, and an automated system performing it adequately.

This means that people are already starting to accuse skilled humans of using ChatGPT, or similar systems. This Reddit thread seems to be an example, but of course, I'm not a poet, so I can't say for sure.

Short-term, tools like ChatGPT create distrust in skilled human work and devalue it, even if, in reality, they can't actually match the quality of the work humans produce.

Screenshot of a Teams chat where somebody is asked if they use ChatGPT, and they respond by pointing out that thanks to ChatGPT, everybody looks like a cheater now

If people will accuse skilled humans of using ChatGPT, why use skilled humans in the first place? Adequate is good enough.


Honey, Please Shrink the Touchpad

A while ago, when buying a new notebook, one of the requirements I had was for the trackpad. I wanted a trackpad that was:

  • Small
  • Recessed
  • Equipped with physical buttons

After years of struggling with ghost clicks, randomly dragged icons, poor palm rejection, and generally ever-worsening MacBook trackpads, I didn't want to deal with software features trying to compensate for a trackpad's lack of physical features.

This is my Lenovo Legion Y740's trackpad:

Picture of the Legion Y740 from top-down, with a mid-sized trackpad sporting two buttons

The reason this trackpad just works is that its form follows its function. It's built to move the cursor without getting in your way when you do anything other than moving the cursor, and it's built to click when you actually want to click, not when you accidentally touch the trackpad wrong.

Recently, I was looking at Lenovo's newer notebooks, and I noticed that they had lost the buttons, and gained in size.

While watching LTT's review of the Surface Laptop Studio, it occurred to me that, in pretty much every notebook review they do, they talk about the size of the trackpad, and explain how an even bigger trackpad (the Surface Laptop Studio's trackpad is already way too big for my taste) would be better. That got me wondering: why do people like large trackpads?

As you do, I started typing this question into Google, and it turns out that I'm not the only person wondering about this.

Google suggests googling for why trackpads are so big

So that's at least one finding. What I did not find, however, was an actual answer to this question. Many of the results Google spits out are people asking the same question, and not coming to any kind of conclusion. The people who do profess to love huge trackpads tend to stick to "because it's nice."

I guess one advantage is that you might be able to operate the trackpad without moving your hand away from the keyboard, but that seems unusual - I've never seen anyone actually do this.

Maybe people use their trackpads differently than I do: I never actually move my hand while using a trackpad. I only move my index finger. Which means that, even with a small trackpad, I already can't reach each edge of the trackpad while using it. But I guess if you also move your hand while using a trackpad, a larger trackpad allows for larger movements?

Or perhaps the "large trackpad" trend is similar to the "glossy screen" effect. Just like those nice, shiny screens, big trackpads look enticing. The fact that they mostly get in the way is not apparent at the time of purchase.

By the way, if you love huge trackpads, that's great. I'm not trying to invalidate your personal experience, or take anything away from you. It would just be nice if there were more options for people who find them mostly kind of annoying.

Or, if manufacturers insist on having these huge slabs of glass, maybe they should do what Asus does, and put a screen in there. Then I can at least attach a mouse to my laptop, and turn that humongous touchpad into a secondary monitor.

Update: Lots of people pointing out that touchpads are getting big in order to support multi-touch gestures on Macs. However, multi-touch gestures with up to four fingers work on Windows, as well, and I never really had an issue triggering them on any of my laptops. Even comparatively smaller modern trackpads are plenty big, and can easily accommodate four-finger gestures.


Start Me Up

We've reached a point where it is obvious that spatial user interfaces no longer work for file management. Our files are scattered over too many different places and services, and we have too many of them.

For application launchers, though, a spatial view is still the preferred approach. This is why Windows 11's Start menu is so confusing to me.

This is what my Start menu looked like in Windows 10:

full-screen start menu with lots of spatially arranged applications

This is by far the best home screen experience any operating system currently offers. Better than the app launcher on OS X, better than Android, better than iOS, better than any Linux distro I've seen.

It's fantastic.

This is what it looks like in Windows 11:

centered start menu with small icons

I guess I'm not really angry. I'm not even disappointed. I'm a bit sad, but mostly I'm confused, because I truly do not understand what the purpose of this change is.

I like a lot of the changes in Windows 11. I think the visual design is nice. I love the improvements for WSL. Snap Layouts are great, and the way Windows 11 supports restoring windows on multiple screens is a welcome improvement.

But the Start menu, and everything related to it, including the way the Start icon itself dances around the screen and is always in a different place, never allowing you to develop a habit for clicking it, is just odd.

I made a simple tool that allows you to specify a list of authors, and generates an RSS feed with each author's most recently released book.1 I made this because I don't want a recommendation algorithm to tell me what to read, I just want to know when my favorite authors release new books.

Go to the tool's site, and then:

  1. Click on "Make my Feed"
  2. Enter a list of author names (I use Google Books' APIs, so if you're unsure how to spell an author's name, verify the spelling on Google Books)

edit feed ui

That's it. Simply add the feed to your RSS reader, and you'll get a new entry in your feed when an author in your list releases a new book. To add a new author, or remove an author, just go to the link defined in the feed. In feedly, for example, click on the feed's title.

feed in feedly

I'll do my best to keep this running, but I don't have any control over Google Books, so don't yell at me when this goes down, or misses a book. Thank you.
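For the curious, the core of a tool like this can be sketched in a few lines of Python. To be clear, this is my own illustration of the approach, not the tool's actual code; the function names and the exact feed structure are assumptions. It queries the Google Books API for an author's newest volume, and turns the result into an RSS item that links to the book's Google Books page:

```python
# Hypothetical sketch: ask Google Books for an author's newest volume,
# then emit an RSS <item> for it. Not the tool's real implementation.
import urllib.parse
import xml.etree.ElementTree as ET

GOOGLE_BOOKS = "https://www.googleapis.com/books/v1/volumes"

def newest_book_query(author: str) -> str:
    """Build a Google Books query URL for an author's most recent volume."""
    params = {
        "q": f'inauthor:"{author}"',
        "orderBy": "newest",  # newest volumes first
        "maxResults": "1",    # we only care about the latest one
    }
    return GOOGLE_BOOKS + "?" + urllib.parse.urlencode(params)

def rss_item(volume: dict) -> ET.Element:
    """Turn one Google Books volume into an RSS <item>.

    The item links to the book's Google Books page, not to the
    book's content."""
    info = volume["volumeInfo"]
    item = ET.Element("item")
    ET.SubElement(item, "title").text = info["title"]
    ET.SubElement(item, "link").text = info["infoLink"]
    ET.SubElement(item, "pubDate").text = info.get("publishedDate", "")
    return item

if __name__ == "__main__":
    # Fetching this URL (e.g. with urllib.request) returns JSON whose
    # "items" list contains volume dicts like the one rss_item expects.
    print(newest_book_query("Terry Pratchett"))
```

The heavy lifting is done by the `orderBy=newest` parameter; the tool itself then only needs to remember which book it last saw per author, and append a new item when that changes.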

  1. Just to avoid any confusion, it doesn't contain the actual content of the book, it links to the Google Books page for the book. ↩︎


Switching to Windows

Around 2015, I started to realize that I was no longer part of Apple's target audience. I've since found that Windows, and the devices available on the Windows side, from gaming laptops to convertibles to custom-built PCs, are just a better match for my requirements.

At this point, I have only one piece of Apple hardware still in active use: a 17-inch MacBook Pro1 that runs Coda and EagleFiler.

Since a lot of people seem to be making the switch now, maybe it's helpful to talk about some things I'm doing to make Windows more amenable to my Mac habits. Here's what I do when I set up a new Windows PC.

Things to Install

QuickLook
QuickLook is one thing I genuinely miss on Windows. Fortunately, there's a great open-source alternative available on GitHub. It even has a plugin system, which makes it possible to preview even more obscure file formats - STLs, for example.

PowerToys
On Macs, I always launch apps using Spotlight's Cmd-Space shortcut. On Windows, you can just hit the Windows key to open the Start menu, and type the app's name to launch it, but if you prefer the lightweight OS X-style Spotlight UI, PowerToys makes it available on Windows. It also does a bunch of other really cool stuff, like providing a global color picker, and adding an image resizer and a bulk file renamer to the Explorer's context menu.

FileMarker.NET Pro

I often made use of the ability to tag files in OS X, and Windows lacks a similar feature, but FileMarker.NET Pro2 solves that problem.

PeaZip
Windows does support file compression natively, but I prefer PeaZip. As far as I can tell, 7-zip is more widely recommended amongst Windows users, but what do they know? PeaZip has a very clean UI, and nice green icons, so it's very obviously the better choice.

WinDirStat
One of the first things I install on any Mac I use, because I'll always need it sooner or later, is OmniDiskSweeper. There's no OmniDiskSweeper on Windows, but there is WinDirStat, which does the same thing, with the added benefit of having Pac-Man.

An alternative to WinDirStat is WizTree. Its main advantage is that it is insanely fast. It analyzes my whole disk in a few seconds.

AutoHotKey
I was a little worried about not having AppleScript, but nowadays, it really doesn't work all that well on Macs, either, and when I found AutoHotKey, all was well.

Other Stuff

I'm now using Edraw Max instead of OmniGraffle, but I'm not entirely satisfied with it. Also, I use WSL2 for Unix-y goodness, the new Windows Terminal, and Chocolatey or winget instead of Homebrew.

I also usually install MSI Afterburner to customize the graphic card fan curve, and the official GPU drivers from Nvidia or AMD, instead of relying on whatever Windows auto-installs.

Finally, Windows has built-in screen sharing, but only if you have a Pro license; if your computer didn't come with one, you can upgrade your license.

Settings I Change

Here are some of the settings I change on all Windows PCs I use.

Make the Start Menu Full-Screen

When I open the Start menu, it's because I want to launch an application. I don't need to see the rest of the desktop. So why is the Start menu by default only occupying a small portion of the screen, and wasting the remaining space? I switch my Start menu to full-screen. It looks good, and it gives Windows a nice little home screen.3

Windows Full-Screen Start Menu

Turn Off Wallpaper Sync

By default, if you log in with the same account on multiple PCs, Microsoft will sync some settings between these devices. That's nice. One of these settings is the wallpaper. That's not nice. I turn it off in the Accounts settings.

Set Up Clipboard Sharing and Multi-Clipboard

In the Clipboard settings, I turn on "Save multiple items in the clipboard to use later." It's super annoying to forget to turn it on, because when you need it, it's too late. Why isn't this just turned on by default? Also, I turn on "Sync across devices," so I can copy on one device, and paste on another. I also set up the Smartphone app, so I can copy on my Android phone, and paste on Windows - great for things like two-factor authentication codes.

Make the Cursor Black

Black with a white outline is the correct color for the mouse pointer. Most of the stuff on most people's screens is white. It makes no sense to have a white mouse pointer.

Fortunately, it's easy to change the default Windows cursor to the correct color in the Mouse pointer settings. Unfortunately, even when changed to black, the misshapen Windows mouse pointer's stem still doesn't align with its point.

Add the Trash to the Start Menu, and Remove It from the Desktop

Since Windows' window management works much better than OS X's, and guides users towards tiling their windows, the desktop on Windows is almost always covered by windows. So I just add the Trash can (or, as these peculiar Windows users like to wrongly call it, the "Recycle Bin") to the Start menu, and then remove it from the Desktop altogether. This can be done in the Theme settings by clicking on "Desktop Icon Settings."

Stuff to Remember

Here are some additional things to keep in mind when switching from a Mac to Windows.

Screenshots
Hit Win-Shift-S instead of Cmd-Shift-4. "S" does kind of make a little more sense for "screenshot" than "4", I think. You might want to install the Snip & Sketch tool if it isn't installed by default, and turn on its notifications, so that after creating a screenshot, you get a popup of the screenshot you just took. Click on the popup to edit the screenshot.

Screen Recordings

You can make screen recordings using the Xbox Dashboard by hitting Win-G.

Launching Apps

Instead of Cmd-Space, just hit Win, and start typing the name of the app you want to launch.

Further Reading

I like this list from Scott Hanselman.

  1. Also known as the best MacBook Pro. ↩︎

  2. I can't help myself, I still always read that as "FileMaker Pro." ↩︎

  3. Windows 8 was the best version of Windows. And that's just a fact. ↩︎


How User Tracking Devalues Ads

Facebook recently took out full-page ads in the New York Times, the Washington Post, and the Wall Street Journal attacking the way Apple protects its users' privacy. In the ads, Facebook argues that Apple harms its ability to track people who see its ads and to run personalized ads, which, according to Facebook, reduces the effectiveness and thus the value of those ads.

This is kind of strange, if you think about it.

Why would Facebook take out a huge non-personalized ad to make the point that, for ads to really work, they need to be personalized? Why advertise in a newspaper if they think that personalized ads are so much more effective?

It's because the idea that personalization increases the value of ads is wrong. Personalization harms the value of ads, because it measures the value of ads based on a metric that doesn't really apply to most ads.

Personalized ads that use user tracking measure an ad's value by the direct causal relationship between users seeing the ad, and users acting on it by buying the advertised product. By that metric, the vast majority of ads just don't work. People don't see an ad for a product, and then buy that product immediately, or even a few days later.

(In fact, every time scientists try to measure the effectiveness of advertising, it turns out to not be very effective at all.)

Instead, the way ads work is that when people decide to buy a product, they will have more trust in products whose ads they see consistently, and whose products they associate with publications they trust. In other words, if you consistently see a car brand advertised in the New York Times, you will assume that this brand is trustworthy. When you decide to buy a new car, you will have a preference for that brand.

This doesn't just work for large publications and huge brands. If you see LTT consistently have sponsorships from Seasonic, you will be more likely to trust a Seasonic power supply for your next PC. If you see Kandji regularly sponsor Daring Fireball, you'll remember their name if you ever need the kind of product they offer.

But you will never see an ad for Mercedes on a website, and then just arbitrarily decide to buy a Mercedes based on having seen that ad. You will never see a Seasonic sponsorship, and just randomly decide to throw out your old power supply, and buy one from Seasonic. You'll never see a Kandji sponsorship, and just decide that you suddenly, urgently need their product. Thus, by the metric we value user-tracked ads, most of them have no value at all.

If Facebook wanted to increase the value of its ads, it would join Apple in fighting user tracking. The less advertisers know about the direct causal effects their ads have, the higher they will value them.


The Failure of the iPad

Two days ago, ZDNet published this article: Meet the iPad, your work computer: These 10 apps make real productivity possible. These kinds of articles, where writers explain how they use their iPads productively, musicians talk about how the iPad is truly a professional tool, or painters show how they use the iPad for professional illustrations, are published regularly. There's probably a new one every week.

Isn't that weird?

The iPad is now ten years old, and people still have to write articles about how, no, really, you can do real work on an iPad!

In 1994, ten years after the Mac was originally introduced, I got my first computer, a Performa 450. Nobody wrote any articles about how, actually, real work on a Mac is possible. Everybody who had a Mac used it for real work.

There was no need to write articles about how you could use Macs for real work, because for Macs, it was - and still is - actually true.

When Steve Jobs introduced the iPad, he introduced it as a productivity device with an "entirely new user interface." He called iWork on iPad "magnificent." Schiller came on stage and showed off Keynote, Pages, and Numbers.

Jobs called the iPad a car, and proclaimed that, for most people, it would replace the PC, the truck of the computing world. It would usher in the next era of personal computing.1

Somehow, Apple managed to snatch a glorified graphics tablet from the jaws of the next era of personal computing.

Part of the problem is the iPad's operating system.

The fact that it is based on Apps as first-level objects, instead of files, is what hurts it most as a productivity device. An App-oriented user interface works well for playing games, browsing the web, and answering an email once in a while, but real work is typically file-centric.2 Even just writing an article means that you have collected sources like PDFs or links, images you want to include in your article, maybe spreadsheet files that contain data for a graph you want to show, a (hopefully versioned) text file for the actual body of your article, and so on.

This works great on a Mac, which presents a file-centric user interface, but on an iPad? It doesn't.

Another problem is multi-tasking, and interoperability between apps. It's still difficult to move data between apps, and to see multiple things at once, or switch between them.

There are other problems with the OS, but honestly? I don't think any of those are what truly hurt the iPad.

The thing that truly hurts the iPad is the App Store.

When the original Mac came out, it didn't have multitasking, either. But it also didn't have an App Store. There was no gatekeeper deciding what was allowed on the Mac. So when Andy Hertzfeld wrote Switcher, he knew that he could sell and distribute it.

Who is going to write something like Switcher for the iPad? Nobody, because it can't get on the App Store, so it can't be sold.

Who is going to write a real, truly integrated file manager for the iPad? Nobody.

Who is going to invest a year - or more - into creating an incredible, groundbreaking new app, the killer app, the desktop publishing equivalent for the iPad, knowing that Apple could (and probably will) just decide not to put it in the App Store, destroying all of that work?


Why does this matter? It's not that the iPad is a bad device, or that it is a problem that it only works for work-related tasks for a minority of people. But I do think that the iPad is a missed opportunity. PCs are too complicated, and the iPad could have been the car to the PC's truck.

But Apple's decisions prevented it from becoming that.

  1. Some people take exception to the word "failure" in the title of this post. To be clear, when I say "failure," I mean it in the context of this section of the article: Apple wanted the iPad to be the PC for the rest of us, and it failed to achieve that. Clearly, the iPad is making Apple money, so it's not a failure in that sense.
    If you still want to yell at me about this, feel free to join the lovely people of Hacker News. ↩︎

  2. I do get that there is real work that is not file-centric. The context we're talking about here is the one Jobs introduced, the one where the iPad replaces the PC, or at least surpasses it as the primary computing device for work. Pointing out that pilots use iPads for pre-flight checks is technically correct, and that is real work, but it hardly qualifies as being "the car of the computing world." ↩︎


Mario Kart Tour is a disgrace that Nintendo should be ashamed of


‘Mario Kart Tour’ Has A Bad Subscription Model That Costs As Much As Apple Arcade

The Verge:

Mario Kart Tour is too cynical to be fun

The contrast between Mario Kart Tour and Apple Arcade is just brutal. At what point can we all acknowledge that making mobile games was a mistake for Nintendo? I'm sure they make a ton of money, but it's clearly coming at the cost of Nintendo's most valuable assets: its image, the way people perceive the company, and people's trust in Nintendo's ability to create amazing games.


I’m not sure that free-to-play games can work as ads for console games. You know, the ones where the developer’s incentive is to create a good game and get people to buy it, not the ones where the developer’s incentive is to trick people into constantly coming back to something that’s actually not very enjoyable. I’m buying Nintendo consoles exactly because I want to avoid these kinds of games.

I have no doubt that Nintendo will make a lot of money from this, at least in the short run. I’m just not convinced that this isn’t going to do more damage than good in the long run.

Here's a collection of essays I wrote together with Jon Bell a while ago: The Thing About Jetpacks.