Rather than speculate on whether Apple is making a watch, when they might unveil such a product, and how much it would sell for, I'm going to take a few minutes to talk about how such a device would fit into the ecosystem of products and why you'll want one.
It wasn't so long ago that most people wore watches and used them to tell time. Long after the majority of adults carried early-generation cellphones or pagers that kept more accurate time than the watches on our wrists, we still wore them because digging out (or unholstering) a phone just to check the time was a chore.
As phones shrank into our pockets this slowly changed, but it wasn't until we started seeing cellphones and pagers as small multi-function devices that we started leaving our watches on the nightstand. By 2008, nearly two-thirds of teens never wore watches, and only one in ten wore a watch daily.
The watch's core function of timekeeping was easily taken up by mobile phones, and they quickly took on – and improved upon – secondary watch tasks like alarms, timers, calculators and calendars. Cases and leashes even let phones take on some of the fashion duties previously shouldered by the wristwatch. Watch sales are less than half what they were a decade ago and many watch manufacturers have pivoted to sports, driving sales of GPS watches, heart-rate monitors, and ruggedized waterproof timepieces to maintain relevance via unique functionality.
In a time when so many people have reached the point of attention saturation, dividing their moments between smartphones, tablets, laptops and televisions, there seems little justification for a 'fifth screen' that provides no new capability and, when we check the time on it, deprives us of the chance to glimpse our online life.
Watches have become anachronisms.
Most people would probably be surprised to discover how many times they pull out their phones on an average day. (There should be a pedometer-style app just to count phone unlocks. Oh wait, there is.) Yet for all the power at our fingertips, most times we pull the phone out of our collective pocket it's in response to an alert or to check a small piece of information. And it's this kind of interaction that may give the watch a way to get back into the game.
Let's start with the simple stuff a watch could provide if it were linked via Bluetooth to the phone in your pocket, purse or bag: Of course there's telling time. There's also controlling your music. There's finding out why your phone (or wrist) just buzzed or, if you're one of those afflicted by phantom buzzing, whether it buzzed. Want to read the text that just came in? A 320x240 1.7" screen has exactly half the pixels of the original iPhone. Plenty of room to display meaningful data. Want to see who's calling before you decide whether it's worth digging out your phone? Easy.
But let's go a little deeper and find the balance between a simple notification device and a full 'wrist smartphone'. First, battery life is critical. Bluetooth 4.0 support was introduced with the iPhone 4S and allows 'always listening' peripherals to use extremely small amounts of power. By now most of the iOS devices in use support Bluetooth 4, and nearly every new iPhone, iPad or iPod touch supports it. Even the new iPod nano supports Bluetooth 4. Only the discounted iPhone 4 lacks Bluetooth 4 support, and that model will almost certainly be discontinued this summer.
A Bluetooth watch slaved to a phone (like the Pebble) gets to leverage the power of its master, but third-party watches can only integrate as deeply as the OS will let them, and as broadly as third-party developers specifically include support. An Apple watch would not only enjoy deep OS and service-level integration and APIs, but would also bring to bear Apple's decade of experience making smaller and more powerful personal electronic devices. Most people are loath to wear a chunky watch, and Apple would never sell one. Like the iPad, in the works in one way or another for over a decade at Apple, an iWatch would never be productized until it reached a form factor that wasn't a compromise.
So let's assume a 1.7" 320x240 screen (vertical, because a landscape watch screams 'computer-strapped-to-wrist'). Let's also assume Apple tries to make a design statement with a curved display, lowering the profile of the watch to half that of an iPod nano on an accessory wrist strap. We may need to use an OLED display instead of LCD, both because of improvements in power consumption and contrast ratio on a small bright screen and because of the difficulty of getting LCD backlighting to illuminate evenly across such a pronouncedly curved surface. Earlier today Tim Cook disparaged OLED's color fidelity, but Apple has a long track record of dismissing technologies or form factors right up to the moment it unveils its own version, one that overcomes the limitation and “gets it right.”
The iPod nano (6th generation) had a square 240x240 1.54" display at 220ppi. A 240x320 1.7" watch would have a third more pixels and, at 235ppi, a higher pixel density than a MacBook Pro Retina display. More importantly, the nano proved that multitouch gestures are useful even on a display that small. You wouldn't hammer out texts on it, but as the primary input interface (secondary actually, but we'll get there) it would be completely suitable for the general navigation and control gestures needed for wrist-top apps.
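The pixel-density arithmetic is easy to verify: ppi is just the diagonal pixel count divided by the diagonal size in inches. A quick sketch (the nano figure uses Apple's spec-sheet diagonal of 1.54", which is what yields the quoted 220ppi):

```python
from math import hypot

def ppi(w_px, h_px, diagonal_in):
    """Pixel density: diagonal pixel count over diagonal inches."""
    return hypot(w_px, h_px) / diagonal_in

print(f"Hypothetical iWatch:  {ppi(240, 320, 1.7):.0f} ppi")    # ~235
print(f"iPod nano (6th gen):  {ppi(240, 240, 1.54):.0f} ppi")   # ~220
print(f"15\" Retina MBP:       {ppi(2880, 1800, 15.4):.0f} ppi")
```

The watch's 240x320 panel works out to 400 diagonal pixels over 1.7 inches, hence the 235ppi figure above.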
Let's take a quick tour through some of the basic built-in apps and consider what value a wrist experience would bring:
Messages - Being able to see new messages as they come in without having to pull out a phone? Simple and useful.
Calendar - See upcoming appointments, even navigate the month calendar with a bottom swipe-picker to find free time in the future.
Photos - Browsing albums. Probably no camera. (Dick Tracy will be crushed.)
Maps - Current location on a pinch-zoomable mini-map. Walking directions. Automatic “where did I leave my car” feature, based on the last time the phone connected to your car's Bluetooth. Throw in a compass and accelerometer and you have a powerful live scrolling map on your wrist. This is actually pretty killer.
Weather - At a glance. Weather has always felt like it was designed for the small screen.
Stocks - Same as Weather. Charts, scrolling portfolio list. Done.
Reminders - Shopping and to-do lists are particularly useful on the wrist when your hands are busy, and geofencing makes it even better.
Clock - Well, yeah. With timers, alarms, and stopwatches of course.
Passbook - This is where it starts to get really interesting. Passbook's utility is growing now that you can use it in place of tickets at many movie theaters, instead of your wallet at Starbucks and instead of your boarding passes on many airlines. This would be even easier (and yes, cooler) if you just had to flash your wrist at the reader instead of fishing out your phone.
Consider that any iOS app developer could quickly add a second, basic interface to their app, one that would run on the watch. Pandora would have a station selector and standard play/pause buttons. Facebook and Twitter would do well at formatting their micro-content to a micro-screen.
With an accelerometer and deep integration with the phone, an iWatch would easily be a replacement for the recent spate of wrist-based fitness trackers. Fitbits, Jawbone UPs and Nike FuelBands would become redundant when Apple releases its own fitness app and/or incorporates a 'fitness API' into the OS for third parties to leverage.
While the iWatch would be a fantastic 'lightweight consumption' device, a small touchscreen doesn't lend itself well to composition tasks. Sure, playing and pausing music is fine, but replying to a text? No way. But this apparent deficiency would actually be the iWatch's masterstroke.
The watch would have only one button, on the side. A single press brings the watch to the home screen. Two presses puts it to sleep. Holding the button down for a moment brings up Siri, just as it does on your iPhone. A microphone in the watch accepts your commands and the audio is sent to the phone for processing (and from there to the cloud, if onboard processing hasn't yet made it to iOS).
Now an iWatch is a fully functional texting client. Voice commands become the fastest (but not the only) way to pull up most pieces of information or to execute most commands. Initiate a phone call. Create calendar entries, find locations in Maps, check the weather or stocks, add reminders. Do the quick single-action tasks that fill your day without having to mode switch from the real world into 'iOS land' just to add an item to a shopping list.
Since the watch would probably have a speaker as well as a microphone you could use it for phone calls in a pinch, though you'd probably still pull the phone out for that, or use a Bluetooth or corded headset.
The watch itself would need little to no memory of its own. It would be a thin client tied to the iOS device. If your phone runs out of juice the watch would still have a minimal amount of utility, but not much. Think alarms, but no calendar access. If you leave your phone behind somewhere though, you can bet your watch will let you know when it falls out of Bluetooth range.
Without the heavy lifting that an iPod nano contends with, an iWatch should be able to last several days between charges, and should be able to get a day's worth of charge in the time it takes to shower. I'd be surprised if it didn't have a Lightning connector.
It's possible that such a watch could have more standalone functionality, with a mini-runtime for calendar and other apps, but that starts to blur the line between a secondary input-and-display peripheral and a separate device with its own codebase, which could be a much bigger hassle for developers and cause more user confusion.
Strategically, an iWatch makes a lot of sense. It's a (ahem) peripheral strategy. Unlike the latest generation of iPhone, it can fail without spelling disaster. It doesn't cannibalize sales of other Apple products. The idea of watches is a proven one, and by overcoming (and actually being supported by) the reasons that watches fell out of favor over the last 20 years, there's a good chance that we'll see their return.
Apple can easily make this a proprietary play. The OS-level integration means nobody else can play at their level on the iOS platform. An Android initiative would be challenged by the slow adoption rate of new Android OS releases and hardware fragmentation, in addition to possible turf wars between Android device vendors.
Like iTunes, an iWatch can also be a differentiator, driving new user adoption of iOS. All else being equal, smartphone shoppers may go to the platform with the integrated watch. For the hundreds of millions of current iOS users, the watch is an opportunity to get more out of their current device at a marginal cost.
Above all, done right, an iWatch could be a play in the classic style of both Apple and Google: An attempt to dramatically redefine a market that had grown stagnant through lack of innovation.
So, when will we see it? If I had to guess I'd say we'd see an official announcement by this spring's WWDC at the latest. If you want developers to augment their apps to support a wrist-top experience, you'd have to sell the vision at WWDC, if not before.
And if I guess correctly, this year's WWDC is going to be largely about Siri. It's been a year and a half of incremental changes, and given Google's performance lead in on-board voice recognition I have to think Apple is burning the midnight oil to match that capability while also creating a cogent strategy to extend Siri's capability to third party apps.
Oh, and that front-facing camera I said wouldn't be there? Maybe next year. You've gotta have a reason to upgrade, after all.
This week the University of California unveiled a striking new logo and brand for its network of campuses, and it hasn't gone unnoticed. Following the trend of favoring emotion and bright colors over words and nuance, several of the nation's most prestigious centers of higher education scrambled to cement their own continued relevance in this new era.
First to react was Harvard University:
Steeped in tradition but wanting to keep its image fresh and accessible to future generations, Harvard sought a logo that represented its historical role of bridging the gap between the upper-middle class and the ultra-wealthy.
The red square is representative of Harvard Square.
Just across Cambridge, the Massachusetts Institute of Technology displayed their own vision of the future, seen here:
Formalizing its cherished nickname, 'The 'Tute', the new wordmark also integrates important aspects of the university's cultural history. The drop-shadow, invented there in 1976, is integrated tastefully into the logo, as is the quote “How does that make you feel?”
One of the first statements made by ELIZA, the groundbreaking chatbot created by MIT professor Joseph Weizenbaum in 1966, it is included to give a sense of introspection, hope, and ambition to those who read it.
Stanford's press office announced the new logo this afternoon, citing that “The university's mascot, the Stanford Cardinal, is a bird. The early bird gets the worm. We want to attract the most ambitious students in the world, so what better logo to bring them here than an enticing worm?”
Asked why the university eschewed its traditional cardinal red in the new branding, they replied, “Everyone else is using red, and we wanted to be different.”
Finally, the University of Colorado, seeking to keep itself well outside the umbrella of the larger and more famous UC, also relied on a fresh new palette to differentiate itself from the pack.
In 1990 I was applying to colleges. I had a love of computers and writing, but I decided to abandon computers because being a geek in high school was so unrewarding. I applied to three 'big name' schools (Harvard, Stanford, and MIT), three UC schools (Berkeley, UCLA, and UCSD), and three small liberal arts colleges (Oberlin, Swarthmore, and Carleton). Carleton was far and away my first choice. I'd visited the campus and found the small but focused liberal arts culture to be exactly what I was looking for. Located about 40 miles south of Minneapolis/St. Paul in Northfield, Minnesota, it also promised an experience very different from the life I had in the San Fernando Valley.
I asked two of my favorite teachers to write my college recommendations. Teaching English and Calculus, they were also the coaches of my Academic Decathlon team (and they eventually dated and got married, but that's probably outside the scope of this post).
When I handed them the recommendation forms, they looked at each other and she asked, “Are you sure you want us to write your recommendations?” I instantly knew this was one of those moments that required a definitive answer, right off the bat. Either take that feedback along with the forms, thank them, and find other teachers to write my recommendations, or acknowledge that these were the two teachers who knew me best, and tell them with certainty “Absolutely. You two know me better than any other teachers,” counting on that vote of confidence to reflect positively in the recommendations they were to write. I chose the latter.
Okay, a bit of background here is needed. In high school I was a fantastic test-taker, and a horrible procrastinator. I would learn everything the class had to teach, but usually on my own in the final weeks of the class, or immediately before each unit test. Assignments were chores to be avoided or rushed through, and tests were the saviors that would buoy my grades. If teachers hadn't based the majority of their course grades on tests, I would have done more assignments, and done them better. I just did the math and saw that if I aced the tests I wouldn't have to work hard on the rest. And so while in the top 5% of my class and with SAT scores in the 99th percentile, I was still considered a poor student.
Over the next several months college applications were filled out, recommendations were written, paperwork was submitted, and we entered the long cold winter of expectations and anticipation. My two teachers had the custom of giving their students copies of the recommendations they wrote, a tradition they broke with in my case. This was my first (though clearly should have been my second) clue that my college plans might not be as bright as I had hoped.
To cut to the chase: of the nine schools I applied to, six required teacher recommendations, and those were the six schools I was rejected from. The three schools I was accepted to (UC Berkeley, UCLA and UCSD) relied almost entirely on mathematical formulas, which made me a shoo-in.
In the end I went to UC Berkeley, intending to major in Physics, English, or Dramatic Arts (yeah, I know, a lot of people have no idea what they want to do when they start college though). Within the first two weeks there I met folks from the Berkeley Mac Users Group and started volunteering on their helpline a few weeks later. I got a job as their campus liaison a few months after that, then an internship at MacWEEK magazine, started independently developing software for the Apple Newton, and moved over to web development (back in 1995, when the web was in its dark ages). I spent as many years out of school as I had in, taking a year or two out here and there to work for SoMa web companies, and finally returned to Berkeley to finish my degree when their Cognitive Science department had fully taken root and I realized that was exactly the education I was looking for, blending my liberal arts and scientific interests into a greater whole.
I finally graduated from Berkeley 10 years after I started, firmly entrenched in the technological world. I spent a year designing at Yahoo before leaving to get a masters degree in HCI at Carnegie Mellon, where I met my wife, and then came back to the Bay Area to design UX for Google in 2003. My life is completely different from what it would have been had I spent the first four years of my post-secondary life in Northfield, Minnesota studying literature and creative writing.
I'll never know what that life would have held, but the life I have now is so different, and so much more fulfilling, than anything I imagined as a graduating senior wary of pursuing computer science. Every aspect of my life can be traced back to that one moment when I made a snap decision in answering the question “Are you sure you want us to write your recommendations?” In the short term I thought I gave the worst possible answer to that question, but in the long term it was the best mistake I ever made.
In the last few days Nate Silver has become the third most talked-about man in politics, with pundits left and right saying he's audaciously staked his professional reputation on an Obama win.
This is sad and shows how little we understand about the nature of statistics and probability, even the more educated among us. Nate's electoral prognostications over the last several months have really been two separate things melded together:
First, they are predictions of the accuracy of the national polls, the tracking polls, the swing-state polls, and those pollsters' estimates of how registered voters will translate to likely voters. Pollsters use well-worn statistical models to give confidence intervals for their polls, but by merging several polls and increasing the effective sample size, Silver is able to reduce that confidence interval significantly, giving a more accurate model. Silver's 'now-cast' numbers are purely based on those polls, how likely they are to be wrong to a degree that would swing the result in a state, and a Monte Carlo simulation to generate a probabilistic distribution of outcomes. Then he shares what portion of those outcomes lead to an Obama victory, a Romney victory, or a tie.
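That now-cast step is simple enough to sketch. The following toy simulation uses made-up state margins and standard errors (not Silver's actual inputs or his model), but the mechanics are the same: draw a plausible "true" margin for each swing state from its polling-error distribution, tally electoral votes, and report the fraction of simulated elections each candidate wins.

```python
import random

# Hypothetical swing states: (electoral votes, mean polled margin for
# Obama in points, standard error of the merged polls). Illustrative
# numbers only -- not Silver's 2012 inputs.
STATES = {
    "OH": (18, 2.9, 1.5),
    "FL": (29, 0.2, 1.8),
    "VA": (13, 1.3, 1.6),
    "CO": (9,  1.5, 1.7),
}
SAFE_OBAMA, SAFE_ROMNEY = 237, 206  # EVs assumed not in play

def simulate(trials=10_000, seed=42):
    """Fraction of simulated elections in which Obama reaches 270 EVs."""
    random.seed(seed)
    obama_wins = 0
    for _ in range(trials):
        ev = SAFE_OBAMA
        for votes, mean, se in STATES.values():
            # Sample a "true" margin from the polling-error model.
            if random.gauss(mean, se) > 0:
                ev += votes
        if ev >= 270:
            obama_wins += 1
    return obama_wins / trials

print(f"P(Obama victory) ≈ {simulate():.1%}")
```

The headline probability is just the share of Monte Carlo runs that clear 270 electoral votes; nothing in the arithmetic requires audacity, only polls and a random number generator.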
The second thing Silver does (or, to be more accurate, did) is predict future effects that could change the electoral response between the time the poll was taken and Election Day. This involves a great deal of educated guesswork about economic factors, foreign policy issues, natural disasters (ahem) and, more than anything, a general regression to the mean. Throwing those variable ingredients into the Monte Carlo soup churns out an outcome distribution that Silver presents as the 'Nov. 6 forecast'. One could definitely make the case that since there's a level of subjectivity in weighting different factors, bias could creep into the model at this stage. It's extremely hard to document whether such a bias actually exists in these forecasts, but thankfully at this point we don't have to.
I mentioned that Silver 'did' use multi-factor predictive models because, as the poll dates approached the election date, those factors that might change the feeling of the electorate in the intervening time were naturally given less and less weight, until today, when their weight is zero. Today's estimate, the one getting so much press, is based entirely on polling data and confidence intervals, not on future factors. Today the 'Nov. 6 forecast' and the 'Now-cast' are exactly the same. Pundits could still argue that there are other vectors of possible bias, including Silver's weighting of polls against each other and calculations of 'house bias', but those are all pretty clearly grounded in historical data, and criticisms of them are harder to credit.
It's a shame we don't do more to teach statistics and probability in school because the average person usually sees different kinds of probabilities the same way. Take a football game: You can generate a reasonably accurate probability model of who will win based on past performance, where the variance comes from the 'noise' in the game. A single interception or a lucky play can drastically change the game's outcome. In this sense the measure of probability is to say that if the two teams played 100 games with the same team members in the same state of health, the tallied wins for each team would fall roughly in line with the probabilities. There is internal chaos in the game that forces the probabilistic distribution.
Predicting an election based on polls is an entirely different matter. The election will turn out one way or another. If the same people voted for President 100 times without an external factor interfering differently across samples, the outcome would be the same every time. There is almost no internal chaos within the game of voting that forces a probabilistic distribution (technically there are extremely minor chaotic factors within the system, such as voters who literally coin-flip on their way in, or who mis-cast their vote, but those chaotic factors have no 'lean' toward a particular candidate and, en masse, are nearly incapable of changing a sample's electoral outcome).
In these cases where the event being predicted has such low internal chaotic factors, the statistician isn't actually predicting the probability that candidate X or Y will become President, because that outcome is already determined. Instead, they're predicting the accuracy of their model. In this case, Nate Silver is predicting with a confidence of 91% that his model is correct in saying that Obama will win today's election.
Don't believe me? Let's look at it a different way. Say there were two statisticians trying to predict the same election. One has a single poll from each state to work from, and the other has ten polls from each state. The first statistician, using his polls and the relatively wide confidence intervals single polls provide, can say with 56% certainty that Obama will win the election. The second statistician, with more data, more people polled, and much smaller confidence intervals, predicts with 91% certainty that Obama will win.
Both of these models can be completely mathematically correct even though they're vastly different because, as stated earlier, they're predicting the confidence that their model is correct. As each statistician is using different models, they naturally have different probabilities. Given 100 completely different elections, the statistician with more polls to work with would be right more often.
Take a third hypothetical statistician who, amazingly, is able to poll every single voter just before they vote. That statistician has a nearly absolute certainty in their polling data with a confidence interval that is nearly zero. That person can predict with 99.9999% confidence who will win the election.
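The three-statisticians progression falls straight out of the square-root law: pooling more polls shrinks the standard error of the merged estimate, which raises confidence in the same underlying lead. Here's a minimal sketch using a simple pooled-binomial model with invented numbers (a 51% leader, 1,000 respondents per poll); the specific percentages differ from the 56/91/99.9999 figures above, but the pattern is the point:

```python
from math import erf, sqrt

def win_confidence(lead_pct, n_polls, poll_size=1000):
    """Confidence that the polling leader actually wins.

    Pools n_polls binomial samples of poll_size respondents each;
    the standard error shrinks with the square root of the total
    sample, so the same lead yields higher confidence with more
    polls. A toy model, not Silver's.
    """
    p = lead_pct / 100.0
    se = sqrt(p * (1 - p) / (n_polls * poll_size))
    z = (p - 0.5) / se
    return 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF

for n in (1, 10, 100):
    print(f"{n:3d} poll(s): {win_confidence(51.0, n):.2%} confident")
```

With one poll the model is only about 74% confident in a 51% leader; with ten polls it's about 98%, and with a hundred it's effectively certain, even though the underlying lead never changed.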
This is a trick the sports bookie can't accomplish because, even with absolute knowledge of the opening state, the outcome is in doubt. But elections aren't football games or horse races (no matter that the pundits so enjoy those metaphors), and longer odds in closer races don't have to be the product of audacity and bias. They're simply the result of more polls, better science, and a lack of a need to create the sense of a 'dead heat' to bolster ad revenues.
At the beginning of the year, Tim Cook said they were going to revamp every single product line this year, and this will almost certainly be their last media event of the year.
The event will start with a recap of the iPhone 5 launch, the phenomenal sales numbers and media praise.
Then they'll move on to sales of the iPod nano and iPod touch. They'll show the new commercial, and show some great numbers.
They'll do a quick recap of iTunes 11, remind everyone that they said it would be out in October, and announce that it's available as of now.
They'll talk about how it's in Apple's DNA to make their products better over time, and how because of their experience and product volumes they can make products of a higher quality and precision than anyone else in history.
It also lets them make things smaller.
They'll talk about the 15" Retina MacBook Pro, what a game-changer it is and how well it's been received. Then they'll introduce the 13" Retina MacBook Pro.
They'll talk about the venerable iMac. First introduced in 1998, it turned 14 years old this summer. It's been revised over a dozen times, getting better each time. 'Today, we're announcing a new iMac. It may look familiar but it's been rebuilt from the ground up. Faster, and smaller.'
Getting really small, they turn to the smallest footprint Mac, the Mac mini. It gets a processor upgrade to Ivy Bridge, and will be faster than a Mac Pro from just a few years ago. (It may also be made smaller if they ditch the two-disk server model.)
They'll thank everyone for their hard work and when nobody (and I mean everybody) expects it, Tim Cook will say “Oh, and there's just 'One more thing…'”. The crowd cheers. And then they unveil the iPad mini.
The iPad mini stats have been rehashed to death, but on pricing, my guess is a base of $399 for 16 gigs and Wi-Fi. $100 more for 32 gigs, another hundred for 64 gigs. Add $129 for 3G+LTE. Black or white, with smart covers and smart cases.
These base prices line up nicely across the line: iPod nano: $149. iPod touch: $299. iPad mini: $399. iPad: $499. It's possible that they'll get the price down to $349, but that might open an uncomfortable gulf between the iPad mini and the iPad while making the iPod touch look expensive in comparison. I would have speculated that they could lower the iPad base price to $449, but with Microsoft just announcing that their Surface RT will start at $499, it's clear there's still good support at this level without undercutting Surface on price.