The legibility of the text is only one of Apple’s many design failures. Today’s devices lack discoverability: There is no way to discover what operations are possible just by looking at the screen. Do you swipe left or right, up or down, with one finger, two, or even as many as five? Do you swipe or tap, and if you tap, is it a single tap or a double tap? Is that text on the screen really text, or is it a critically important button disguised as text? All too often, the user has to try touching everything on the screen just to find out which objects are actually touchable.
Apple is hardly the only company guilty of this, but it is true that they are among the worst offenders.
3D Touch on the iPhone 6S, while technically astonishing, is only the most recent example. Apple, which steadfastly refused to add a second mouse button to the Mac, has now needlessly added one to iOS, a platform that never had a second mouse button, and where nobody ever asked for one.
I really think this can’t be overstated: Apple has added a hardware feature to the iPhone whose sole purpose is to help developers add hidden features to their apps.1
I can’t get over the feeling that Apple added 3D Touch to the iPhone because it is incredibly cool, not because it makes the phone easier to use. It’s «wow!» design, not «it just works» design.
That’s not a great way to make design decisions. Remember how funny we thought the Blackberry Storm was, with its «sometimes you just tap it, but sometimes you have to press harder and make it actually click» screen? Well, that’s now your iPhone.
Likewise, people made fun of Windows 8, and how people found it hard to use at first, but one of its genius decisions was to put all of its hidden features behind edge swipes. In order to figure out how to find possible actions in Windows 8, you had to learn exactly one thing: swipe from the sides of the screen to see your options.
There’s no such rule for iOS or Android.
I think people should stop being so preoccupied with things like the burger menu,2 and start worrying about all of the hidden interactions Apple — and, to a slightly lesser degree, Google — are adding to their mobile platforms.
Some people think that this is just a way of creating a two-tiered user interface that is simple to use for normal people, but offers additional features for power users. This used to be true, a few years ago, when iOS tried to surface commonly used features, and hid a small number of additional power user features behind non-obvious interactions.
But today, commonly used features are hidden behind gestures, and there are so many different features hidden behind different kinds of interaction patterns that it seems unlikely that even most power users know about them. And why should they? You can expect people to figure out that the app switcher is hidden behind double-tapping the home button, or that you can rearrange apps on the home screen by long-tapping them, because these are commonly used features, and there are only two of them. By now, though, every single app is full of these things, and it’s only getting worse.
So I agree that Apple’s intention is likely to create a two-tiered user interface. But that doesn’t mean it’s not also bad usability. Thinking that it must be either one or the other is a false dichotomy.
At least the hamburger button is something people can actually see, and as it has become more prevalent, people are getting good at recognizing it.
If you require a short url to link to this article, please use http://ignco.de/731
This is kind of funny:
Not because I think it’s useless, but because this just arrived in my mailbox today:
I originally backed this physical iPhone keyboard on Kickstarter because typing Swiss German on an auto-correcting German virtual keyboard is difficult. If I turn off auto-correction, typing English and German becomes difficult. There’s no built-in auto-correction for Swiss German.
On Android, Kännsch helps a lot. It’s a dedicated keyboard for Swiss German.
Still, at least for me, typing on a physical keyboard always worked better. It helps me type correctly without relying on auto-correction.
Of course, I no longer use an iPhone at all, so my new Spike keyboard case is not much use to me anymore.
I do have a friend who is legally blind, and recently asked me if I knew of a way of attaching something physical to an iPhone to get a tactile feel for where the keys are, so I’m going to give it to him and see whether he likes it.
Clearly, these keyboards are niche products. But I’m not sure why people sometimes seem to be almost angry about the fact that they exist at all. If you’re perfectly happy with your virtual keyboard, that’s great. Nobody is ever going to take your virtual keyboard away from you.1 But the fact that you don’t like physical keyboards shouldn’t mean that nobody else is allowed to like them, and I’m quite glad to see Samsung do something in this space.
So I don’t think the problem with the Samsung keyboard is the fact that it exists. The problem is the fact that it seems to suck.
If you require a short url to link to this article, please use http://ignco.de/723
When compared to the latest iPads, these first two iPads are simply inferior tablets with slow processors, heavy form factors, and inferior screens. But none of that matters with owners. This is problematic and quite concerning, suggesting that many of these tablets are just being used for basic consumption tasks like video and web surfing and not for the productivity and content creation tools that Apple has been marketing.
The Apple media has long been pushing against this narrative, and, in doing so, has helped shield Apple from much-needed criticism at a time when decreasing sales had not yet forced Apple to acknowledge that something was wrong with the iPad.
I wrote about this a while ago.
In reality, one reason sales momentum was slowing was iPad owners weren’t upgrading their device.
I think this is a highly problematic argument. The fact that Apple is bringing this up shows just how poorly the iPad is doing. If it is already relying on upgrade sales for a major portion of its sales, and sales are falling without them, it is not reaching enough new customers to maintain growth. That’s bad.
The PC market relies on upgrade sales. The plastic spoon market relies on upgrade sales. The pants market relies on upgrade sales. But a device as young as the iPad should not be relying on upgrade sales to this degree. If Apple thinks that the iPad’s sales are falling because of a long upgrade cycle, the implication is that the iPad has already reached a large portion of all people it’s ever going to reach.
By selling a device that is truly designed from the ground-up with content creation in mind, the iPad line can regain a level of relevancy that it has lost over the past few years. In every instance where the iPad is languishing in education and enterprise, a larger iPad with a 12.9-inch, Force Touch-enabled screen would carry more potential.
Better hardware would help, but I think it’s very important to acknowledge that the thing standing in the way of productive work on the iPad is not its hardware. It’s iOS.
iOS is a cumbersome system for even reasonably complex productive tasks. Apple has started fixing the window management problem, but there’s still the document management problem1 (most real-world tasks involve multiple documents from multiple sources — there’s pretty much no way to organize and manage documents from different applications in iOS), and the workflow problem (many real-world tasks involve putting the same document through multiple apps, which iOS is still not great at, albeit getting better).
And then there’s the fact that few developers are willing to invest a lot of money into productive apps on the iPad. They are expensive to create, the market is small, and Apple’s handling of how apps are sold on its devices does not instill confidence.
The thing that’s preventing people from using the iPad productively is not the small screen, it’s the operating system.
Right now, for most of its users, the iPad is a consumption device. It’s not a PC replacement, and it’s not really much better than a phone for gaming or watching movies or reading. That puts it into an awkward position. But it doesn’t have to be. There’s no reason the iPad couldn’t replace most PCs in people’s homes, and be better than those PCs at most tasks people currently use PCs for. No reason — except for Apple’s lack of willingness to make the iPad into that device.
While I largely agree with his thoughts on the importance of new, differentiated hardware, Cybart doesn’t address what for me is the more critical issue: the fact that so little software innovation has happened on the iPad since its debut. Until recently, Apple’s approach has been to closely tie the iPad’s operating system with the iPhone’s, a decision that has contributed directly to consumers really being at a loss for why they need to own these devices.
Writing about this topic is difficult, because the response is predictable. It’s often along the lines of «but I use my iPad for productive work», or «I have replaced my MacBook with an iPad.» I’m completely honest when I say that I think this is fantastic. If it works for you, that’s awesome. You’re using the right tool for your job, which is what all of this is about.
Of course, the people writing these things are often, well, writers. It’s true that there are plenty of Markdown text editors on the iPad. If that’s what you do, then the iPad is a great tool for productive work.
But we should also acknowledge that, if you go visit most normal people who use iPads, it’s sitting on their kitchen table or sofa, being used as a web browser or TV guide. If they have to create a «please help me find my lost cat» poster, or scan and archive the tax documents their bank sent them,2 they do it on their PC. I’m sure both of these things could be done on an iPad, but if it’s harder to do on an iPad than on a Windows PC, why bother?
And, down the road in three years, when they need to replace their PC, are they going to replace it with a better iPad, or are they going to stick with their current iPad (which works fine for browsing the web) and buy a new Windows laptop? If their friends ask them how much they like their iPads, are they going to say «this is amazing, I can do so much on it, and it’s much easier than my PC», or are they going to say «it’s a great web browser, if you need that?»
That’s why iPad sales are falling.
Unlike Apple writers, normal people don’t have an incentive to invest weeks into figuring out how to work around the iPad’s limitations, and moving their work tasks there. So until Apple makes this easier for them, they won’t.
I’ll end this with a quote from Federico Viticci’s article, which I think is exactly right:
But back down to Earth for a moment. For all its advances, the iPad is still surprisingly not suited for common computing tasks such as downloading files with a web browser, attaching documents to an email message, and referencing two distinct files or pieces of information at once while doing something else. And I could go on, mentioning the inability to listen to a video in the background and the primitive state of iOS’ media player (essentially unchanged since 2010), the astounding limitations of Apple’s Mail app compared to its Mac counterpart and the lack of innovation in the system Calendar – but I’ll save these thoughts for another article on iOS.
Apple’s challenge for the next five years of iPad is to clarify whether this device is a portable screen for specific tasks or a general computer in a portable form factor. And if it can excel in both scenarios without losing its way. Apple needs to design the iPad so that its everyday computing nature also facilitates highly specific tasks and use cases.
Fixing this problem does not mean «giving access to the file system.» When I say that Apple needs to fix document management, people sometimes assume that I’m saying that they should bring something like the Finder to iOS. I’m not. The Finder approach to file management is broken. It was designed for a time when people had a tiny number of apps, and almost no storage space. That time doesn’t exist anymore, and neither should the Finder.
If you require a short url to link to this article, please use http://ignco.de/720
Undo on mobile devices has always been problematic. Shake to undo never really worked well, and most mobile apps don’t have a permanent menu bar where they can stick an undo icon.
In Android, Google has started using a kind of ephemeral undo. When you execute an action that can be undone, Android temporarily shows an undo button. Here’s what you see when you delete an alarm in the Clock app:
If you want to undo the action, you can tap on the button.
The Mail app works in a similar fashion. Archive a message, and it gets replaced by an undo button:
I think this works well for applications that don’t have a single, well-defined undo stack (unlike, say, a text editor or an image editor, which do).
However, Google’s specific implementation of the feature effectively breaks undo for me. The problem is that these undo buttons can easily be dismissed. Tap anywhere outside of the Clock app’s undo button, and it disappears. Scroll the list of messages in the Mail app, and the undo button is removed. Pretty quickly, this behavior has trained me to automatically tap outside of the undo button immediately after removing an alarm, and to automatically scroll my messages immediately after archiving a message. Archiving a message and removing the undo button has become one single, seamless, automatic action for me.
As a result, undo now effectively does not exist for me anymore.
I’m not entirely sure what has caused me to develop this behavior. Perhaps it’s because the undo buttons look so out of place, and just tempt you to get rid of them. Or perhaps it’s because it feels like the action hasn’t been concluded properly as long as there is still a visible undo button. Perhaps it’s similar to the common behavior of automatically dismissing modal error or confirmation dialogs as soon as they pop up.
Whatever the reason, I wish Google would change the behavior, and either remove the undo button after a preset timeout, or only remove the previous undo button once the next undoable action is triggered.
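The second of those two fixes can be sketched in a few lines. This is purely illustrative — Android’s actual implementation is platform UI code, and the class and method names here are my own invention. The key point is that a destructive action stays pending until it times out or a new undoable action arrives; undoing simply drops it, and incidental taps or scrolls never commit it early:

```python
import threading

class EphemeralUndo:
    """Holds a destructive action as 'pending' for a grace period.
    Undo within the window cancels it; a new action commits the old one.
    Nothing else (taps, scrolling) ever commits it early."""

    def __init__(self, timeout_seconds: float = 5.0):
        self.timeout = timeout_seconds
        self._pending = None   # (description, commit_fn) or None
        self._timer = None
        self._lock = threading.Lock()

    def perform(self, description, commit_fn):
        """Register an undoable action. Its effect is deferred."""
        with self._lock:
            self._commit_locked()  # a new action commits the previous one
            self._pending = (description, commit_fn)
            self._timer = threading.Timer(self.timeout, self.commit)
            self._timer.start()

    def undo(self) -> bool:
        """Cancel the pending action. Returns False if nothing was pending."""
        with self._lock:
            if self._pending is None:
                return False
            self._timer.cancel()
            self._pending = None   # dropped: the action never happens
            return True

    def commit(self):
        """Apply the pending action now (called by the timeout)."""
        with self._lock:
            self._commit_locked()

    def _commit_locked(self):
        if self._timer is not None:
            self._timer.cancel()
            self._timer = None
        if self._pending is not None:
            _, commit_fn = self._pending
            self._pending = None
            commit_fn()
```

Note the design choice: because the real deletion is deferred, dismissing the button is free to do nothing at all, so no amount of reflexive tapping can destroy the user’s ability to undo.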
If you require a short url to link to this article, please use http://ignco.de/718
When designers talk about their process, they often talk about things like sketching and wireframing and usability tests. But it has recently occurred to me that this is not what I usually start out with. The first thing I typically design is the application’s data model.
What kinds of things are there in the application? What properties do the things have? How are they related to each other? Can we normalize this structure, make it simpler?
If the application grows, can this model accommodate the changes?
Recently, I had a very preliminary design meeting about a website that would help people organize soccer matches. This seems like a simple kind of application. You probably have users and teams and matches. Users belong to teams, and teams participate in matches. Well, you probably also need to have events, if there are several matches at the same event.
But wait, if you have events, doesn’t that mean that you might not know all of an event’s matches beforehand? Maybe the event has some kind of run-off system where the winners of a set of matches play against each other, so the participants of that match aren’t known in advance. Okay, let’s drop that for now, but still try to design the system so that we might be able to support something like this at a later date.
So a typical use case would be for an organizer of an event to create a new event, add some matches, add teams to the matches, and add players to the teams. But some teams probably already exist in the system; perhaps those teams have already recreated themselves in the system. Wait, we probably need to let players create their own accounts. But if they do that, can they choose which teams they want to belong to?
Or can only team creators invite players to teams? What if a player isn’t yet in the system, but the person who created a team added the player to the team anyway… we need to support something like this, but can the player then claim the spot in the team? What if different people added the same person to different teams, each creating their own player; can the person then consolidate these things into their canonical account?
All of these questions come down to model design. What are the basic entities in the system? How do they relate to each other?
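To make those questions concrete, here is a minimal sketch of one possible entity model for the soccer site. Everything in it — the names, the unclaimed-roster-entry idea, the consolidation helper — is my own illustration of the questions above, not a finished design:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Account:
    """A real person's login. May exist before or after someone
    adds that person to a team by name."""
    name: str

@dataclass
class Player:
    """A roster entry. A team creator can add a player by name alone;
    the person can later claim the entry by linking their account."""
    name: str
    account: Optional[Account] = None  # None until claimed

@dataclass
class Team:
    name: str
    roster: list = field(default_factory=list)   # list of Player

@dataclass
class Match:
    # May be empty at first: in a run-off, participants are only
    # known once earlier matches have been played.
    teams: list = field(default_factory=list)

@dataclass
class Event:
    name: str
    matches: list = field(default_factory=list)  # list of Match

def consolidate(account: Account, teams: list) -> int:
    """The consolidation question: claim every unclaimed roster entry
    that appears to belong to this person. Returns how many were claimed.
    (Matching by name alone is a deliberate oversimplification.)"""
    claimed = 0
    for team in teams:
        for player in team.roster:
            if player.account is None and player.name == account.name:
                player.account = account
                claimed += 1
    return claimed
```

Even this toy version surfaces the design decisions: `Player` is separate from `Account` precisely so that different people can add the same person to different teams, and `Match.teams` starts empty so a future run-off feature has somewhere to live.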
This is often the first thing I think about when designing an application. You might think that it’s not really a designer’s job to do that; let the programmers deal with that stuff. You’d be wrong. The model fundamentally defines how an application behaves, what kinds of features it can support. If your model is an afterthought, if it’s inconsistent with the user interface, your application will never work right.
Start out with the model, and keep it in sight during the whole design process. Don’t let programmers take it over. It’s part of your job.
There are a million small experiential ramifications for your data model, and it’s the death of a thousand cuts if you aren’t thinking of them from day zero.
By looking at the UI, you should not be able to see already the data structure. The UI should be solely tailored to be easy to perceive and understand by the user. Users don’t think in data structures typically – don’t make them. (…) With the former rule at hand, some of us engineers tend to go out, design a view and make the data structure look exactly the same. This is not how it is supposed to be. We have layers in all of our applications to abstract away the data structure, for example in the database, from the user interface. So we should do it. Design the data structure to be efficient and elegant for storing. Not like in the user interface.
I think this is true for systems where you have no control over the data model; in those cases, you need to be careful not to fall into the trap of letting the existing data model dictate the user interface.1
But the simple fact is that the data model does govern what the UI can do. If we’re coming at this from the point of view of a frontend designer who has to turn a poorly designed backend into a human-friendly user experience, we’re already starting out on the losing side.
I think the conclusion we should draw from this is the following: having a backend engineer design the data model, and then trying to build a UI on top of it, is backwards. It should be the other way around. We need to design our data structures such that they support the kinds of UIs we want to provide, and such that they are flexible enough to support different kinds of UIs in the future.
This doesn’t mean that we should always store data denormalized, or store it exactly in the way it’s shown in the UI. It doesn’t even have to mean that UX designers need to become data model experts, and spend their time attending database design classes (although that might be helpful). Instead, it can simply mean that we should do at least some basic UX design work first, and derive data model requirements from that.
If you require a short url to link to this article, please use http://ignco.de/715
While writing about window management in iOS, and comparing it to what Microsoft had done in Windows 8, it occurred to me that we truly live in amazing times. The fact that three different companies are giving each other a run for their money on OS design is fantastic for people who actually use these devices. I’m growing more and more tired of the partisan reporting that happens in the tech industry, where people pledge allegiance to one or the other multinational corporation and support everything that corporation does, while automatically dismissing and belittling everything everybody else does, particularly if other companies copy features from the one company they like.1
This type of jingoistic support for one’s chosen platform, combined with acerbic, often sarcastic dismissal of the competition, helps nobody — not even the company being supported. That company would be better off if its own users provided honest feedback on its shortcomings, not fawning, thoughtless support.
Indeed, people who buy Apple products should be ecstatic that Microsoft and Google are competing with Apple, and vice-versa.2 Everybody should be happy if each company takes each other company’s greatest ideas, and improves upon them. The last thing we want is another 90s-type situation, where one company controls 95% of the market, and as a direct result, progress just halts for a decade.
I’m really hoping that we’re not living through another Amiga/Atari era, where we have a bit of competition for a few years, but eventually, some companies die, others fade into irrelevance, and one company ends up owning most of the market.
In fact, I wish we’d see even more competition! I wish Samsung would get serious with its own OS. I wish HP would revive Web OS. I wish Blackberry would stop making bad decisions, and start kicking ass again. I wish smaller companies like Jolla, Ubuntu, and the Firefox OS team would be better able to compete with the big guys. I wish Microsoft would get more credit for the progress it has made in UI design, instead of just getting crap for changing things from how they were in Windows 95. And I wish people would look outside of the confines of their chosen platform, and acknowledge the positive contributions that other companies are making. Get out of your bubbles!3 Other systems are great and interesting and useful, too!
This goes further than just interaction design. For example, I hope Apple keeps holding Google’s feet to the fire on the topic of privacy and encryption, and I hope Google’s more open stance on app development and on platform access will eventually force Apple to follow suit.4
The more competition, the better the products. The worst thing that could possibly happen to each one of us would be for our favorite company to win, and for everybody else to stop competing.
(This was originally published as part of the window management article.)
Regardless of what device you’re currently using, unless you actually own huge amounts of stock in one of these companies, or work for them, please remember that the manufacturer of the device or the OS you’re using doesn’t really care about you. There’s no need to feel emotionally attached to a legal entity; it can’t feel the same towards you. Obviously, we humans intuitively do this, we like one company over another, but I think it’s worth consciously reminding ourselves from time to time that this is a one-way street.
I’m only using Apple aficionados as the example in this sentence because I’m one of them, not because they’re particularly guilty of this. The situation on the Android and Microsoft (and Linux) sides is similar. It’s possible that we Apple customers are a bit more susceptible to «us vs. them» thinking due to Apple’s near-death experience in the 90s, but in general, the difference between the groups is small.
As an aside, there’s this meme going around that apps are this generation’s new art form. I think this is true, but I also think it is a pretty sad statement on this generation that its new art form is one that is effectively controlled by a single multinational corporation that will not allow art that involves political caricatures, overt social criticism, or nudity. If paint and canvas had those same restrictions, the only paintings we’d have from the old masters would be still lifes of fruit bowls.
If you require a short url to link to this article, please use http://ignco.de/709
Dr. Drang writes:
Unlike Mac users, iPad users won’t be dumped immediately into a multitasking environment. Those who prefer to use and see only one app at a time can continue to do so—the multitasking interface will stay out of their way and won’t confuse them.
But for those who need to refer to one app while working in another, Slide Over and (especially) Split View will be a godsend. And it’s seemingly eliminated one of the biggest problems with using Mac-like multitasking environments: window management. There are no windows in Split View, there are only parts of the screen, with one part wholly given over to one app and another part wholly given over to another. There’s no overlap and there’s no Desktop peeking out from behind. The only thing the user has to think about is the position of the dividing line between the two apps.
Here’s a video of Craig Federighi, introducing the new feature.2
This is very similar to what Windows 8 does, but, in an odd way, seems to be less predictable, less powerful, and more complex. Notice how Federighi is on the home screen, jumps to the task switcher, opens Safari, and then swipes in from the right. This shows Messages, and Federighi acts as if that was exactly what he had expected to happen, but… why? Why is it Messages and not some other app? He just swiped in from the right, how does iOS decide which app to show?3 I would assume that, since there are dozens of apps you might want to see, and only one app that actually gets shown, the app you’re actually swiping in will be the wrong one most of the time.
You can pull down from the top and change the app, but now you’re doing some serious interface magic, where you have to know to swipe in from the right to show the app, then swipe down from the top to show the task switcher.4 There are no real affordances5 for either of the two actions.
Note that, so far, you’re not actually in split view. You’re still in «slide over» view. There are two different multitasking views in iOS 9 — another layer of complexity. To go into split view, you have to know about another piece of hidden interface magic: tap on the divider, and you turn on split view (unless you don’t have an iPad Air 2).
Apple is also giving developers the ability to opt out from multitasking and they’re saying that camera apps and hardware-intensive games should probably eschew multitasking.
This seems to add additional inconsistency to an already odd implementation of the split window feature.
This reminds me of 90s Internet mystery meat navigation, except that there’s not even any mystery meat, and you’re just randomly dragging around and tapping on things to trigger actions that might or might not be supported by the application you’re actually trying to use. You could argue that split view is a power user feature,6 and power users can just go watch a YouTube movie that explains how the feature works, but I’ve now watched this section of the keynote twice, forgotten how it works once already, and I’m completely sure that I will have forgotten how it works again by tomorrow.
This is exactly the kind of magical user interface that people have faulted Windows 8 for, except it’s even more confusing. In Windows 8, you only had to remember to swipe in from the screen edges. Once you did that, the UI was visible, and guided your actions. In iOS 9, it’s layers of hidden UI magic. The one advantage Apple has is that you don’t need to know any of it to use iOS, but still. I think we should expect better of Apple.
Because the thing is: it is not necessary to have all of this complexity.
Windows 8 has a more powerful split view that is also hidden from novice users, but still manages to be easy to learn and intuitive to use. Drag in from the left to show the task switcher. Tap on a window to activate it. If you’re a power user, all you have to learn is that you can also drag windows from the task switcher. Now you can drag them into the screen and place them where you want them to be, either as new split views, or replacing windows already in split views.
No hidden secondary task switcher, no multiple split screen modes, no excluded apps.7
In conclusion, I’m extremely happy that Apple is introducing a split screen view on iOS, but it’s difficult to understand why they decided to go with such a complex user interface. It all looks nice, but the interaction design seems, well, odd, and a little bit concerning.
At the same time, Apple is also starting to tackle window management on OS X. Here, Apple is trying to figure out the same kind of balancing act between the existing window management system, and a new, simplified one, that Microsoft has failed to solve with Windows 8, and is trying to improve upon with Windows 10.
There seems to be a new kind of hierarchical window management system for full-screen apps, where «inner» windows can be minimized into tabs at the bottom of the screen (kind of like tabbed folders in Mac OS 9). There’s also a new split screen mode that shows windows side by side (and if you do that in the normal window manager, you seem to be put into the full-screen window manager automatically).
Putting all of this stuff on top of all of the existing window management cruft doesn’t exactly make the Mac’s window management system simple and easy to understand. I think Apple is trying to avoid Microsoft’s mistake of having two completely separated window management systems on its desktop OS, but it’s becoming clear that the alternative — one system that tries to accommodate vastly different kinds of usage — is no panacea, either.
Either way, iOS and Mac OS are taking another small step towards each other. Nobody can predict the future, of course, but it seems possible that, as iOS becomes more powerful, and Mac OS gains features from iOS, the two operating systems might eventually converge into two versions of the same product.
Apple also showed a nifty new natural language search. But if they have all of this data, and know when a user worked on a document, and with whom that user worked, I wish they’d just expose this data in a real graphical user interface. I’m never sure how to talk to natural language UIs, and if they fail, I’m never sure if it’s because the system doesn’t have the information to answer my question, or if I merely asked the wrong question. Just give me a visual UI for things like date- or people-based file management.
I’m not a fan of some of the UI decisions Apple has made, but I am a huge fan of the basic concept. I think that a lack of power in all areas is one of the reasons why iPad sales are shrinking. Unless your needs are very specific, and quite limited, the iPad is a poor choice of device for getting work done. With the split-screen view, Apple is starting to address one important aspect of this problem. Other aspects — lack of availability of professional apps,8 sharing files between apps, organizing documents by project instead of by app, and others — remain only partly solved, or entirely unsolved. There’s still a lot of work to be done to make the iPad a viable desktop PC replacement for most people, but Apple just checked off an important item on this list.
I’m very happy that Apple is making a step in this direction.
This article originally contained a section on partisanship and competition. I’ve put that into its own article.
BTW, I hope they improve the «keyboard as a trackpad» feature before it ships, because in the demo, Federighi doesn’t seem to be able to easily select text without also typing gibberish at the same time.
Maybe it’s the last used app, but what if the last used app is not supported? What if you’re starting from the home screen? It must often seem non-deterministic to the user. Apparently, it’s the app last put into the sidebar, which is logical, but might not be entirely intuitive. Depends on how people will use the feature.
Apart from a small grey pill at the top of the «slide over» panel, that is presumably intended to tell you that you can swipe down to trigger some kind of action. All of this is exacerbated by the «flat» user interface, which lacks many of the ambient interaction hints that come from having user interfaces that are closer to what humans interact with in the real world.
Perhaps in part because people are unwilling to invest a lot of money into developing an iPad app if they can’t be sure it will be approved, which hurts sales of apps that are available, because you need an ecosystem of different apps for a platform to become viable.
If you require a short url to link to this article, please use http://ignco.de/708
When Apple originally released the iPhone SDK in 2008, I was extremely excited about it, and immediately started working on multiple different iPhone apps. By 2009, though, I had become so uncomfortable with Apple’s stewardship of its mobile platform that I released the one game that was furthest along, and abandoned the other games and apps. In hindsight, that probably was a good idea, because at least one of the apps I was working on (temporarily) became illegal, according to Apple’s App Store guidelines.
Instead, I started looking into writing web-based games running in Safari. As part of evaluating whether it was technically possible, and whether the original iPad could even run web-based games acceptably well, I wrote a simple two-player tower defense game.1 Here’s the game, running on an original iPad.
I spent very little time optimizing this code, and there’s probably plenty of room to squeeze more performance out of it. Even so, the game easily runs dozens2 of independently moving sprites, has pathfinding, and displays a progress bar on every single tower, resulting in hundreds of simultaneous, independently moving pieces — all on the original iPad.
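The pathfinding in such a game doesn’t need to be fancy. As an illustration (this is a hypothetical sketch, not the actual game’s code), plain breadth-first search over the tile grid is enough to route creeps around walls, and it runs comfortably even on old hardware:

```javascript
// Hypothetical sketch: breadth-first search on a grid, the simplest
// pathfinding a tower defense game might use. Cells with 0 are free,
// cells with 1 are blocked. Returns the shortest path as a list of
// [row, col] pairs, or null if the goal is unreachable.
function findPath(grid, start, goal) {
  const rows = grid.length, cols = grid[0].length;
  const key = ([r, c]) => r * cols + c;
  const cameFrom = new Map([[key(start), null]]); // visited + parent links
  const queue = [start];
  while (queue.length > 0) {
    const [r, c] = queue.shift();
    if (r === goal[0] && c === goal[1]) {
      // Walk the parent links back from the goal to rebuild the path.
      const path = [];
      for (let cur = [r, c]; cur; cur = cameFrom.get(key(cur))) path.unshift(cur);
      return path;
    }
    for (const [nr, nc] of [[r + 1, c], [r - 1, c], [r, c + 1], [r, c - 1]]) {
      if (nr >= 0 && nr < rows && nc >= 0 && nc < cols &&
          grid[nr][nc] === 0 && !cameFrom.has(key([nr, nc]))) {
        cameFrom.set(key([nr, nc]), [r, c]);
        queue.push([nr, nc]);
      }
    }
  }
  return null; // no path exists
}
```

Since the grid only changes when a tower is placed, a game can recompute paths on placement rather than every frame, which keeps this cheap.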
Since then, web performance has skyrocketed. If I run SunSpider on my original iPad, it totals 2761 ms. According to Macworld, the iPad Air 2 runs it in 287 ms.
You can see the game for yourself here (it only runs on iPads, because I’m detecting if it’s installed on the home screen, and because I’ve hard-coded the screen size).
Getting this tower defense game to run properly on an original iPad was easier than getting the native game I released to run properly on an original iPhone. It took me two days to write the whole tower defense game, including doing some simple optimizations. It took me months to write the iPhone game, including porting it from Apple’s native APIs to OpenGL, and spending a lot of time in Apple’s profiling tools, trying to fix performance and memory problems.
To be fair, the first iPhone was quite a bit slower than the first iPad, and very memory constrained. Still, it’s something to keep in mind when people complain about web app performance: if it was possible to make native apps perform properly one or two device generations ago, then it must be possible to make web apps perform properly on today’s devices.
Having said that, the people complaining about poor web app performance do have a point. Most web apps do perform poorly. I just think that blaming browsers, and concluding that web apps can’t perform well, is misdiagnosing the problem.
Why do so many web apps perform so poorly?
DOM manipulations can be slow, and are difficult to optimize
One of the reasons why my tower defense game performed quite well despite not being highly optimized is that it avoids using the DOM for most of its UI. Instead, it uses the HTML canvas3 to draw pixels directly.
Importantly, though, that’s not to say that DOM-based web apps can’t be fast. In fact, I’ve written complex DOM-based web apps that achieved native-like performance (and a native-like interaction design) way back when the target platform was IE6. However, optimizing performance on these apps can be hard, because DOM manipulation can be slow if done carelessly, and browsers sometimes behave slightly differently in ways that aren’t immediately noticeable to the developer, but result in vastly different performance across different platforms when doing the same DOM manipulations.
In short, it’s absolutely possible to write web apps that use the DOM and still perform well, but, particularly for more complex user interfaces, it does require developers to spend time optimizing their code.
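One classic example of that kind of optimization is batching DOM writes. The sketch below is illustrative (the function name is hypothetical): instead of appending elements one at a time, each append potentially triggering a reflow, you build the markup first and touch the live DOM once.

```javascript
// Hypothetical sketch: build markup as a string so the live DOM is
// only written once. The string construction itself is pure and
// cheap, and can be tested outside a browser.
function buildRows(items) {
  return items.map(item => `<li>${item}</li>`).join("");
}

// In a browser, you would then apply it in a single step, causing
// one reflow instead of one per item:
//   document.querySelector("#list").innerHTML = buildRows(data);
```

The same idea applies to reads: interleaving layout reads (like `offsetHeight`) with writes forces the browser to recalculate layout repeatedly, which is one of the ways the same DOM manipulations end up with vastly different performance.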
And, of course, you don’t have to use the DOM if something else works better for your particular app.
As an aside, this is often where people sneer and say something like «well, it’s not a real web app if it doesn’t even use the DOM», but that seems weird to me. That’s like saying «it’s not a real native app if it uses OpenGL». Drawing some — or all — of a native app’s UI without using Apple’s own frameworks is not «a scathing condemnation» of UIKit. In fact, my own native game uses OpenGL for almost everything it draws, just like the web-based game uses canvas for almost everything it draws. And just like the web-based game uses the DOM to draw stuff like buttons and the help screen, so does my native app use Apple’s UI frameworks for this aspect of the game.
Nothing about using the canvas makes a web app any less «webby.»4 All of the good things web apps bring — cross-platform compatibility, simple deployment, a high-level language, an open platform — are available to you whether or not you’re using the DOM for most of your UI. The DOM works well for some things, and canvas works well for others — that’s why there is a canvas. Pick whichever works for you.
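For the canvas side, the core of a sprite-based game loop is small. This is a hedged sketch, not my game’s actual code: the per-frame update is plain data manipulation, and only the draw step touches the canvas.

```javascript
// Hypothetical sketch of the update step in a canvas game loop.
// Each sprite moves along its velocity; dt is the frame time in
// seconds. Kept pure so it can be tested without a browser.
function updateSprites(sprites, dt) {
  return sprites.map(s => ({ ...s, x: s.x + s.vx * dt, y: s.y + s.vy * dt }));
}

// In a browser, the loop would look roughly like this:
//   const ctx = canvas.getContext("2d");
//   function frame() {
//     sprites = updateSprites(sprites, 1 / 60);
//     ctx.clearRect(0, 0, canvas.width, canvas.height);
//     for (const s of sprites) ctx.drawImage(s.image, s.x, s.y);
//     requestAnimationFrame(frame);
//   }
//   requestAnimationFrame(frame);
```

Redrawing a few hundred sprites per frame this way is well within the reach of even old mobile browsers, because there’s no layout or style recalculation involved — just pixel blits.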
People use a lot of needless middleware frameworks
Back when I wrote web apps that targeted IE6, I evaluated a ton of JS frameworks. I ended up not using any of them, because all of them caused huge performance problems. This is exactly where we are today with mobile web apps. Performance of a 2005-era PC running IE6 is roughly what you’ll get out of a 2015-era mobile phone running a current version of Chrome or Safari. It was possible to create a fluid, responsive PC web app in 2005, and it’s possible to create a fluid, responsive mobile web app now, but not if you rely on megabytes of (sometimes poorly written) JS frameworks that all have to be downloaded, parsed, and executed by a mobile browser, killing loading time, execution speed, and bloating memory usage.8
Tools don’t solve problems any more; they have become the problem. There are just too many of them, and they all include an incredible number of features that you don’t use on your site — but that users are still required to download and execute.
When web devs optimize at all, they optimize for loading speed, not execution speed
When pages were mostly static and people were using slow analog modems to dial up into the information superhighway, loading speed was all that mattered. That’s why we have a ton of good profilers to improve loading speed, and different techniques for caching and compressing data. Loading speed briefly became very relevant again when mobile phones started connecting to the Internet, and initially suffered from very slow connection speeds. To some degree, that is still the case, and people should optimize for it.
Most web developers probably don’t optimize for loading, and even fewer optimize for execution speed. With today’s fast connections and dynamic websites, execution speed often becomes a bigger issue than loading.
Admittedly, optimizing load times is easier than optimizing execution speed, since we have more experience doing it, and the profiling tools for loading optimization are more sophisticated. But it is becoming increasingly important to look at execution speed, as well.
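Even without sophisticated tooling, you can get a first read on execution speed with a tiny harness. This is a sketch under one assumption: `performance.now()` is available, which it is in browsers and in modern Node.

```javascript
// Sketch of a minimal execution-speed harness (name hypothetical).
// Runs a hot code path repeatedly and reports the elapsed wall time,
// so regressions in execution speed show up even without a profiler.
function timeIt(label, fn, iterations = 1000) {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) fn();
  const elapsed = performance.now() - start;
  console.log(`${label}: ${elapsed.toFixed(1)} ms for ${iterations} runs`);
  return elapsed;
}
```

For anything more serious, browser devtools have sampling profilers for exactly this; the point is just that measuring execution speed is no harder than measuring load time.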
And finally, the thing that kills mobile performance: loading dynamic, animated, interactive overlay ads from crappy, slow third-party ad servers. How often do you open a page on your mobile browser, and it’s painfully obvious that the thing you actually want to see — an article, for example — has already loaded, but is hidden below layers and layers of shitty ads that are slowly pulling in stuff, preventing you from accessing the thing you actually want to see?
You can’t have a fast web app if you’re monetizing it by punishing your users with shitty ads.
It’s frustrating to see people complain about bad web performance. They’re often right in practice, of course, but what’s annoying is that it is a completely unforced error. There’s no reason why web apps have to be slow. The technology to make fast web apps is here — we just have to take advantage of it.
I never continued work on it, because I eventually became so uncomfortable with Apple’s behavior (and Android got to a point where it was a valid alternative to iOS, and arguably even superior in some ways) that I pretty much stopped using iOS devices altogether.
If you are one of the select few good JS devs, please apply here :-)
Via Marco Arment.
If you require a short url to link to this article, please use http://ignco.de/703
I really like e-ink displays. I’m not a huge fan of OLED displays, though; they’ve become much better in recent years, but they can still be difficult to read in bright sunlight. Conceptually, the Yotaphone 2 made a ton of sense to me. I’ve been using one for a month now.
The hardware itself is beautifully designed. It looks fantastic. Because there’s no home button, and because the SIM card slot is hidden behind the volume buttons, the device itself seems almost featureless.
It’s very thin, particularly considering that there are two screens in that thing. The phone’s edges are bent inwards, which looks amazing, and makes it easier to hold.
The back screen is slightly curved along the left and right edges, which is cool and makes it easier to hold, but can be annoying when you actually use the screen. The back screen has an antireflective coating, which also gives it a bit of grip. It’s still a pretty slippery phone, though.
All in all, I think this is one of the most beautiful phones on the market right now.
The phone has a 2,500 mAh battery, which is kind of anemic by today’s standards. The phone I carried previously, a Galaxy Note 3,1 released back in 2013, came with a 3,200 mAh battery. As a result, battery life is not great. I usually get around 14 hours of life out of the device, which means that the phone typically doesn’t quite make it through a day. It’s about on par with what I got out of the iPhone 4S I used to own,2 and quite a bit worse than the Note 3.
Ostensibly, the e-ink screen is supposed to fix this problem; just avoid using the OLED screen, and battery life will skyrocket! Reviews of the phone claim that you can easily extend the phone’s battery life from one day to two or even three. To me, this doesn’t seem plausible. On average, my Yotaphone claims that the OLED screen is responsible for about 25% of its battery usage. If I only used the e-ink screen,3 that would extend battery life from 14 hours to maybe 20 hours — nowhere near two days.
The battery problem is compounded by Android’s terrible standby battery usage. iOS barely uses any battery if the device is in standby mode. I can let an iPad sit on my desk for a week, and it’ll still have a charge when I turn it back on. The same can’t be said for Android. Unless I turn my Android devices off, they keep draining battery at a pretty astonishing pace.
All of this means that even if I avoid using the OLED screen whenever possible,4 the Yotaphone still has pretty mediocre battery life. That doesn’t mean that the e-ink screen is irrelevant to battery life, though. Having it means that I can read a book on my phone without draining the battery in a few short hours.
In other words, the e-ink screen doesn’t allow me to easily make it through more than a day on a single charge, but if I do something on my phone that requires me to look at its screen for long periods of time, it will prevent the battery from dying prematurely.
The phone doesn’t have a replaceable battery or an SD card slot. I don’t mind the battery part that much, though I would prefer to be able to swap the battery. The missing SD card slot, though, is a bigger problem. The Note 3 I used previously had 32 GB of internal storage, and I added a 128 GB SD card. The internal storage held apps, the external storage held downloaded podcasts, photographs and movies taken with the phone, and similar data. As a result of this, I effectively did not have to care about storage space. Going from not having to even think about storage space to having to actively manage storage space sucks, and just shouldn’t be necessary anymore.
The phone’s front screen is covered with Gorilla Glass 3, but for some reason, I’ve already noticed some faint scratches. They’re barely visible, but still; this is something I haven’t seen on a phone in a long time. It’s a 5 inch AMOLED screen with 1920 × 1080 pixels, which is more than good enough, even though it seems comically small after using 6 inch screens for years.
The back screen is also covered by Gorilla Glass 3; at 4.7 inches, it has 960 × 540 pixels, a resolution of 235 ppi. I’d like it to be slightly higher, but it’s certainly good enough for most situations. The back screen is curved at the edges, which looks really cool (and makes the phone easier to hold).
Unfortunately, it can mean that it’s harder to find a position where there’s no glare on the back screen — a problem that Yota’s own picture of the phone shows beautifully.
The back screen’s texture makes it less reflective and helps make it less slippery, but when holding it «backwards», you’re effectively holding a slab of glass. This is particularly problematic when you put it down backwards on a table or sofa. If it’s not entirely flat, the phone will just slide away.
One issue I’ve noticed with the e-ink screen is that the Yotaphone is not great at detecting when it should do a full refresh. E-ink screens show a visible ghost of the previous image. To get around that, e-ink devices refresh the screen from time to time (turning the full screen white, then black, then white, resulting in a visible flash). Yotaphone’s built-in apps that are specifically designed for the e-ink screen know when to do that, but if you use normal Android apps on the e-ink screen, the phone seems to have some basic heuristics for deciding when to refresh the screen. Sometimes, this works well, but other times — when using the Kindle app, for example — the refresh is barely ever triggered, and ghosting starts to accumulate.
The only thing that should be visible on this screen is the book page’s text, and a page number and progress percentage at the bottom. Everything else is ghosting.
It’s not a huge problem, just a small detail that could likely be improved with a software update.
The device has 2 GB of RAM, which is not quite enough to run Android well. The Note’s 3 GB of RAM meant that I could easily run multiple apps and switch between them, but on the Yotaphone, I can barely switch between two apps without the first one being auto-killed.
The camera on the phone produces quite beautiful pictures, even in most low-light situations. It’s not quite fast enough — from lock screen to taking the first picture can take a few seconds, and there’s perceivable shutter lag. The camera sometimes has problems auto-focusing on objects close to the lens, but tapping on the screen to manually focus, and then taking a picture, usually fixes the problem.
Like the iPhone, the phone doesn’t have a notification LED. Personally, I really like notification LEDs, since I tend to leave my phone lying around, and the blinking light tells me if I’ve missed any notifications. No such luck on the Yotaphone, though.
The Yotaphone 2 runs stock Android with some added features related to the e-ink screen. There are basically four different ways you can use the e-ink screen:
- YotaPanel puts interactive widgets on the e-ink screen (alternatively, you can put a picture on there, but, weirdly, you can’t have a picture and some widgets)
- You can take a screenshot of the front screen, and put it on the e-ink screen
- Special e-ink apps can be launched from Android, and then take over control of the e-ink screen
- YotaMirror allows you to use regular Android apps on the e-ink screen
Of the four, YotaPanel and YotaMirror are the most useful. The e-ink apps are cool, but there are only a few of them. Putting a screenshot on the back panel sounds useful (you might think that you could take a screenshot of a train schedule, for example, and always have it available on your phone — even if it runs out of power), until you realize that the screenshot only stays on the back screen for a few minutes, until the phone decides to go back to showing your YotaPanel widgets. And once the phone starts running out of juice, it automatically puts an «I’m in battery saving mode» picture on the back screen.
YotaPanel and YotaMirror are really cool. Just being able to see the time without turning on the phone is actually more useful than I thought. I can’t help but think that there’s more you could do with a touchscreen on the back of the phone, though. How about using the back touchscreen to send touch events to the front touchscreen, for example? That would allow you to play games without covering the touchscreen with your fingers.
One interesting aspect of the Yotaphone is how the phone figures out which screen you’re using. Through experimentation, I’ve come to the conclusion that it uses both the phone’s orientation and its touchscreens to decide which screen you’re looking at. If your hand covers one of the screens, it assumes that you’re looking at the other screen. If it can’t tell based on that, it assumes that the screen pointing up is the one you’re looking at. In everyday usage, this works surprisingly well; at times, it’s almost spooky how well the phone knows what I’m doing. Lock the phone, turn it around, unlock it — now the other screen is unlocked. It’s not quite perfect, though. Every few days, I’m looking at the OLED screen, and it suddenly goes dark — because the phone somehow interpreted my hand movements as an attempt to unlock the back screen.
Yotaphone’s e-ink screen has caused me to neglect my Kindle a bit, and do more reading on my phone. This has given me a renewed respect for Android, and really reminded me of why I switched from iOS to Android. Just the Kindle app alone is so much better on Android than iOS. It can use the volume buttons for turning pages, which works beautifully on the Yotaphone. And it has a built-in, fully integrated store!
But using it has also reminded me of the issues I have with Android, particularly with its hardware ecosystem. There are a ton of different Android phones, but they’re all variations on the same theme. The few phones that do something special typically only do one thing. You can get the waterproof phone, or the phone with the pressure-sensitive pen, or the phone with the superhuge screen, or the phone with the e-ink screen, or the phone with the multi-day battery life, or the phone that’s rugged and won’t break if it falls, or the phone that has a fantastic, fast camera, or the phone that has two SIM card slots, or the phone that has a lot of internal storage — but you can’t get a phone that does all — or even more than one or two — of these things.
My phone is probably the most used electronic device I own. I want to do sketches on my phone, but for that, I need a pressure-sensitive pen and a larger screen. I want to take my phone everywhere I go, but for that, it should be waterproof and rugged. I want to read books on my phone, and use it in bright sunlight, and for that, an e-ink screen is a fantastic feature. And so on.
The phone I want doesn’t exist.
The Yotaphone 2 is an incredibly beautiful phone with one incredibly useful feature. I just love going for a walk in the sun, turning on the OLED screen, seeing almost nothing, turning the phone, and being able to see its screen perfectly. I particularly love reading on this thing.
But this is also a phone that has quite a few flaws, and that I can’t easily recommend to most people. Unless you do a lot of things that work well on the e-ink screen, this phone is probably not for you.
For me, I love e-ink screens so much that I will put up with the phone’s problems, at least for now. In the long run, though, I wish that Android phone manufacturers would stop making «one special feature» phones, and instead start making more well-rounded phones that aren’t mediocre in every aspect except one.
The Yotaphone has always been running quite hot. Now that it is summer in earnest, it often gets painfully hot. I could live with that; unfortunately, the phone seems to be throttling the CPU when it gets too hot, which, given the current weather, seems to be the case almost constantly. As a result, my Yotaphone has become quite slow recently. After a day in a hot office, it sometimes gets so slow that it takes the phone seconds to respond to any touch input.
Something else I’ve noticed is that the recent update to Android 5.0 has introduced a very odd bug in the Gallery app: it can take hours for pictures I’ve taken to show up in there. The main other problem the Android 5.0 update has caused is that the phone now seems to be even more memory-constrained, killing background apps even faster.
By the way, comparing the Note 3 to the iPhone 6 plus makes the 6 plus look a bit ridiculous. It’s quite a bit larger than the Note 3, but its screen is visibly smaller; side-by-side, the 6 plus looks bulky, and its design seems wasteful.
Technically, you could use the device without ever turning on the OLED screen. All you need to do is add the Android launcher as one of the apps that can be launched from the back screen, then unlock the back screen and start the Android launcher — voilà, you’re running Android on your back screen without ever turning on the OLED screen! In reality, some apps don’t work well on the back screen, because they require colors or a higher resolution than the e-ink screen provides to be properly usable.
If you require a short url to link to this article, please use http://ignco.de/699