How See Tickets outraged 100,000 BBC Radio 1 listeners
On the weekend of June 23, nearly a hundred thousand people will descend upon the Hackney Marshes for BBC Radio 1’s Hackney Weekend music festival. Big names such as Jay-Z, Rihanna, Florence + The Machine, Jessie J, Deadmau5 and David Guetta have been booked for the free, two-day show, and the British music-loving public has been clamouring to sign up for tickets ever since the lineup was announced. However, when signup opened at 11 a.m. on Saturday, March 24, users hit enormous capacity issues as See Tickets’ servers attempted to respond to several thousand requests simultaneously. It took many users several hours to get through the three-page signup process, and plenty gave up in frustration mid-way.
Could See Tickets have prevented the Twitterstorm that erupted as thousands of eager fans tried to secure tickets to their preferred day of the show? While it may have indeed been impossible to prevent all capacity problems with an event of that scale, See did several things poorly that certainly exacerbated the problem. If you’re a community manager running an event, preparing for capacity, and making sure downstream parties in the supply chain are likewise prepared, is absolutely crucial to ensuring a seamless signup experience.
1. Properly estimate demand
If you’re running a rush signup process in which a finite number of tickets is expected to be allocated very quickly, the first and most crucial step is estimating demand. This is easier if you ask users to register interest beforehand, as Hackney Weekend did. However, that only gives you a ballpark figure for how many people will be trying to get tickets, not how they plan on signing up. Most people have multiple ways of signing up and will try to use them simultaneously if demand causes issues: in addition to signing up from their computer, users may also try a mobile phone browser and/or a tablet. A user juggling a laptop, tablet and mobile phone simultaneously generates three times the demand of a user on a single device. In short, there is no substitute for having a sufficient number of servers.
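To make the multiplier effect concrete, here is a back-of-envelope sketch in Python. Every figure in it is an illustrative assumption of mine, not a number from See Tickets or the BBC:

```python
# Rough capacity sketch: estimate peak request rate for a rush signup.
# All parameters are illustrative assumptions, not real event figures.

def peak_requests_per_second(registrants, devices_per_user=2.0,
                             pages_per_signup=5, rush_window_seconds=600,
                             safety_factor=3.0):
    """Estimate peak server requests/sec during a rush signup.

    registrants: people who pre-registered interest
    devices_per_user: average simultaneous devices (laptop + phone, etc.)
    pages_per_signup: server round-trips in the signup flow
    rush_window_seconds: window in which most traffic arrives
    safety_factor: headroom for retries and refresh-hammering
    """
    base = registrants * devices_per_user * pages_per_signup / rush_window_seconds
    return base * safety_factor

# ~100,000 hopefuls hitting a five-page flow in the first ten minutes:
print(round(peak_requests_per_second(100_000)))
```

Even these conservative assumptions land in the thousands of requests per second, which is well beyond what a handful of application servers handles comfortably.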
2. Streamline UI interactions
When servers are at capacity, the likelihood that a transaction will fail increases with the number of user interactions it requires.
For Hackney Weekend, users were sent to a signup page after leaving the BBC’s website, which then had them click through to another page. After entering their details, they were taken to a page for signing up guests; once they had entered the guest’s details and clicked the “Add Guest” button, they could choose a festival day to attend before moving on to an order confirmation page. There, the user finally entered credit card details to pay See Tickets’ service surcharge, clicked another button, and was whisked to a final page stating the transaction was complete. That equated to roughly five user interactions (four for users not signing up friends as guests), each requiring a response from the server, and on a heavily loaded server each interaction introduces a new potential failure.
This is unavoidable in some cases: the main user needs to be confirmed against a database of registered users, the guest needs to be checked against that same database, and the credit card has to run through its own set of server interactions. However, the four-page signup process could likely have been condensed to two, perhaps three, pages at most: a detail entry page, an event selection page with credit card details, and a confirmation page. Further, as users were only allowed one guest, the AJAX “Add a guest” functionality was wholly unnecessary. Why not simply verify the guest’s details when moving to the credit card page, something the server has to handle anyway? This would also have prevented confusion for users who entered guest details but forgot to click the “Add a guest” button before attempting to move on to the next stage of the ticket acquisition process.
Removing even a single stage of user interaction reduces not only the likelihood of failure, but also the load a server experiences.
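As a sketch of the “fewer round-trips” idea, the attendee and the optional guest can be validated together in a single server request instead of a separate AJAX step. The field names and validation rules below are hypothetical, not See Tickets’ actual form:

```python
# Sketch: validate attendee and optional guest in one server round-trip,
# replacing a separate "Add Guest" AJAX interaction. Field names are
# hypothetical placeholders.

def validate_signup(form):
    """Return a list of errors; an empty list means proceed to payment."""
    errors = []
    email = form.get("email", "")
    if "@" not in email:
        errors.append("Invalid attendee email")
    # Guest details are checked in the same request: no extra button,
    # no extra server interaction to fail under load.
    guest_email = form.get("guest_email", "")
    if guest_email and "@" not in guest_email:
        errors.append("Invalid guest email")
    return errors

print(validate_signup({"email": "fan@example.com"}))
print(validate_signup({"email": "fan@example.com",
                       "guest_email": "not-an-email"}))
```

One handler doing both checks means one request that can fail instead of two, and nothing for the user to forget to click.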
3. Use a Content Delivery Network (CDN)
In high-traffic situations, content that isn’t generated by the server shouldn’t be hosted by the server. Cloud-based solutions like Amazon S3 allow static content to be hosted on a platform that scales with demand. While See Tickets appear to have used a separate server for static content, it resolved to the same IP address as their other servers and may have suffered the same load issues. Serving static content from a cloud might not be as cheap as serving it internally, but the reduction in load on your servers can be substantial, especially for whole pages that require no dynamic response from the server, such as the initial landing page.
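In practice this often amounts to nothing more than pointing static asset URLs at a CDN host instead of the application servers. A minimal sketch, with a placeholder domain standing in for an S3- or CloudFront-style endpoint:

```python
# Sketch: map static asset paths to a CDN host so the application servers
# never serve them. The domain is a placeholder, not a real endpoint.

CDN_BASE = "https://static.example-cdn.net"

def asset_url(path):
    """Return the CDN URL for a local static asset path."""
    return f"{CDN_BASE}/{path.lstrip('/')}"

print(asset_url("/css/signup.css"))
# -> https://static.example-cdn.net/css/signup.css
```

A landing page whose stylesheets, scripts and images all resolve this way, and which is itself a static file on the CDN, generates no load at all on the servers handling the signup transactions.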
Some of the above tips will be more useful than others, depending largely on how much demand you estimate in the first step. If you expect your event to draw a few hundred attendees and not sell out, you probably don’t need the later steps. But ensuring capacity is crucial when things simply have to work, and that applies beyond the registration phase: I can’t count the number of times I’ve been at a conference where the wifi wasn’t up to par, mainly because nobody accounted for the number of connected devices each attendee would bring. Following the above won’t guarantee a problem-free event, but it’s at least a starting point for planning the deeper technical logistics of your events and avoiding the kind of damaging Twitter fallout that See received due to their lack of planning.