August 2020

Autoplay

Occasionally, someone will complain that "Twitter keeps resetting my autoplay setting" on the website, and they'll usually imply that we're doing it for ads or some other nefarious purpose. Truth: we don't do it for ads or any other nefarious purpose. If anything, there might be a bug. And sometimes, sorry, it's user error.

I've looked into the code quite a few times, hoping I can find something. It's something we take seriously, because we recognize that while autoplay is a nuisance for some, it's a major problem for others. Videos can trigger epilepsy or trauma. Sometimes they are even sent maliciously, to create that effect.

Every time I look, I think I find something, but it'll turn out that I've misunderstood some nuance somewhere. Heh, I see you reading with skepticism: it's a boolean setting - where is the room for nuance in that? Great question.

When you opt out of autoplaying videos on Twitter (and, let's be completely honest here, I'm 42 years old and I don't understand how anyone can use the site without disabling them), we store a setting in the site memory, and we send it to the server. On the server, we save it with your data-saver settings.

On Twitter, you can choose data-saver mode, which reduces your bandwidth consumption dramatically. You might be in a place where bandwidth is available but expensive, or you might be on a memory-constrained device, so this is a useful setting. Video autoplay is closely related, so we store and fetch them with the same API. (It's a memcache store served by GraphQL.)

Unusually, we don't simply save this to your user id. You might want to have autoplaying video on your laptop, and not on your phone. That's reasonable. So we store it to your user id AND your OS/browser. Right now, I've got a record of 221915745 (my userid), Mac/Chrome, and Autoplay: OFF. This isn't obvious to users. If I switch to another browser or another device, autoplay will default back to ON.
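A toy sketch of the per-user, per-platform storage described above. The key shape and function names are my guesses, not Twitter's actual schema:

```javascript
// Hypothetical sketch: settings keyed by user id AND platform.
// An in-memory Map stands in for the real memcache/GraphQL store.

const settingsStore = new Map();

function settingKey(userId, platform) {
  return `${userId}:${platform}`;
}

function saveAutoplay(userId, platform, value) {
  settingsStore.set(settingKey(userId, platform), value);
}

function getAutoplay(userId, platform) {
  // A missing record falls back to the default: ON.
  return settingsStore.get(settingKey(userId, platform)) ?? 'ON';
}
```

With a record saved for Mac/Chrome only, a lookup from any other browser or device misses the record and falls back to the default, which is exactly the surprise described above.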
Or, if I switch to another user account (we support up to five accounts at once on web), then you'll get the setting from the other account. This might be unexpected, but it's also by design.

Ok, so for the sake of simplicity, let's say you have only one user account, one device, and one browser. What happens when you load the site? We will fetch the autoplay setting from the API and bake that into the HTML. This is important: we need to know your data-saver settings immediately, and we don't want to slow down the site load with an extra API call. When the site loads, we read that value from the HTML and inject it into the memory store (redux).

This works fairly well, and if the API call on the server fails (any API call should be expected to fail some percentage of the time, even if that percentage is really small), we can recover by calling the API again on the client. Since we didn't load with the right setting, we might have it wrong, but we'll recover almost immediately.

We also save the HTML to the cache in the serviceworker. This lets you load the site quickly on a later visit, perhaps even when you're offline. Some errors can appear here: if you change your setting, the HTML cache isn't updated, so it might reflect an old value on a future visit. We cover this in a couple of ways: we cache specific versions of the HTML that either don't contain these values, or timestamp them so that they're only valid for short periods.

We should talk about tabs. If you've got multiple Twitter tabs open, how do they synchronize settings? In truth, this gets a bit confusing. Each tab has its own memory (redux) store. The tabs will share serviceworkers and serviceworker caches, except when they're on different domains: we host on mobile.twitter.com and twitter.com, and the desktop browser is able to access both. The server store (via API) will have a single value. This can get weird.
You could enable autoplay in one tab, then switch to another tab where autoplay was still disabled, and toggle data-saver, which would then send the disabled autoplay setting back to the server!

So, the primary causes of real bugs are: cached values soon after changing the setting, and errors with the API (i.e., if the data-saver API is broken, nobody gets their setting and everybody gets the default). I think it's interesting how a simple setting has a complicated implementation but still has common failure modes.

I think a possible solution might be to assume OFF in the cases where we know there's an API failure, rather than simply not having a setting saved. Perhaps we'll do that if we get more reports.
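That proposed fix can be sketched as follows. This is my illustration of the idea, not shipped code; the names and result shape are invented:

```javascript
// Hypothetical sketch: treat a known API failure differently from a
// genuinely unset preference.
// fetchResult: { ok: boolean, value?: 'ON' | 'OFF' }

function resolveAutoplay(fetchResult) {
  if (fetchResult.ok) {
    // The fetch succeeded; a missing value means the user never
    // opted out, so the historical default (ON) applies.
    return fetchResult.value ?? 'ON';
  }
  // The API call failed outright. Rather than falling back to ON,
  // assume OFF: autoplaying unexpectedly is the worse failure mode.
  return 'OFF';
}
```

The design choice here is that the two ambiguous states (no setting saved vs. setting unknown) get different defaults, biased toward the safer one.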

April 2020

Well this is weird

Well this is weird. It's April 20th. I've been working from home for about seven weeks now. Jack's been off school for five weeks. The whole world is on lockdown, is social distancing, is staying at home, is wearing masks, is only doing essential trips, is only working essential jobs. When the UK Prime Minister got it, that felt very movie-like. The Queen made a public address. The US president ... well ... he continues to be an arsehole. Some things haven't changed.

We're very lucky that we haven't been affected too badly. There are very few cases in the local area (to date, Walnut Creek has had 37 cases from a population of 65,816), we have a large house that's well equipped for kids to play around, and I'm able to work from home in my little office. The supermarkets have lines outside, or so I hear. I haven't been. Only Agnieszka shops.

Everyone talks about the supermarket staff and health workers, who are doing important but risky jobs. This is true, but I also think about producers of consumables, of the transport infrastructure, and of the utilities. It's cool we have a well stocked freezer, but if the power is out for 24 hours, it all goes to waste. I've bought myself a big battery, but that only buys us a few hours. When we're able, we're investing in solar.

I think about the little stores in town that are probably struggling. The toy shop? The book shop? The ice cream shop? All little independent enterprises that will struggle. We can still get all of those things from Amazon, but it would be sad to lose the places to go, when we're once again allowed to go.

They talk of freeing up the restrictions. They talk of cresting the wave. But if you relax the restrictions that lowered the spread, what stops the spread from increasing again? It doesn't get bored or anything. The only way to get back to normality is to build herd immunity over time, or get the vaccine out there. Both of these will be slow.
We won't be vacationing this year. The uncertainty is unhelpful. It's not just that we don't know when things will open up again; nobody has even given us a framework. We need targets and intent. We need to know that there is a plan. But it seems there is no plan, just constant reaction.

We wake every morning. Sometimes Agnieszka will head to the supermarket early. I'll help the boys get plugged into the television and make myself breakfast. I'll head up to work at 8am or so, maybe later. At 9.30 the boy will appear and we'll try to do some maths. He's reluctant. He hates writing in particular, but sometimes you'll get him into a flow and he'll enjoy it. Those are good days. He's a huge distraction from my work, but it's best if I work with him. It would be unfair to expect Agnieszka to have both boys all day.

At 10am he has a half-hour video session with his class on Zoom. They don't do much. They tell jokes and endlessly relearn video etiquette. Afterwards we return to the prescribed work, doing the bits we can handle or prefer. Jack's good at the maths if you can get him into it. He needs to work on his spelling and writing.

At 12 we come down for lunch. Agnieszka has made us burgers and bread. Burgers are popular with both boys, mostly. I head back up to work. At 1pm Jack's iPad unlocks and he can play games. He lives for Roblox. Before and after that he'll chain-view YouTube videos of other people playing Roblox.

At some point in the afternoon I'll take Jack out for a hike in the hills. We're lucky we can do that near here. The views are spectacular, and the weather is mostly excellent. If he's not coming, I'll run instead, pulling my t-shirt off as I hit the park and the sunshine. It feels good to (pretend to) be free.

In the afternoon I can get a couple of hours of real work in. I'm avoiding more meetings than usual, because I've got so little time available. I'm working on improving our smoke tests and developing the conversation tree view for web.
I'm excited that we're nearing experiment, and hope that we can launch soon.

At 5.30 or so, I'll come downstairs. There'll be some drama with the boys. We'll try to have dinner together, though Max will refuse to eat much. We'll watch an episode of Red Dwarf. We'll cajole the boys into pyjamas, persuade them to brush their teeth, and nudge them to their rooms, repeatedly. By 9, Max will be asleep. By 10.30, so will Jack. That's not great, but these are strange days. We do what we can.

Some nights I'll sit up with a beer and watch TV. It's not good; it just makes me more tired the next day. But you need a few moments to yourself. To be yourself and yet escape. To enjoy a beer in peace.

Stress levels are high, and part of that is because we can't see the future. When does this end? How does this end? What of our aging parents, now an impossible-to-reach five thousand miles away? For now, we're all coping. It's not great, but it's ok.

![](/covid.jpg)

January 2020

2019: What I did this year

My annual review of the past year. This year I tried to take a week off each quarter, rather than saving for a big trip like last year.

I kicked off the year with JSConf Hawai’i. It was a bit weird going there without the kids, but also freeing to get a break. I ran every day, twice a day, along Waikiki beach, which was hard work but glorious. The days were long and packed with talks. I was impressed with how much work must go into each one. Maybe one day I’ll be able to commit to that.

This year I decided to do resolutions per month. In January, I tried Keto. I liked it, I got thinner (particularly because I don’t really like eating fats) and I liked being thinner. I found myself oddly thirsty, all the time. And I craved weird carbs like cereal, and those egg/bacon wraps they have at Dunkin Donuts. The next month I tried wearing a watch every day. I haven’t done this for years, so wasn’t sure how it would go. I got myself a self-winding watch, and on days when I don’t move much, it stops. It’s the original Fitbit! On the third month I was feeling exhausted, so I tried not shaving. I confirmed that I still look ridiculous when I don’t shave. By April I’d run out of ideas that I had the energy to commit to.

In May, my brother Howard came to visit for the week. I decided to take the week off too. So we took Jack rafting on the American River, which was amazing! He got a bit cold and tired by the end of it, but did really well for a seven year old. At one point, the boat captain pulled us over to the side and said, “who wants to jump off that big rock?” Not only did Jack volunteer first, he just went and jumped into the freezing cold water without a moment of hesitation. I was astonished and proud.

We also took Howard to see Hamilton. I had taken Agnieszka to see it a few months before, and even took Jack and Granny Viola to see it a few months later. It is so so good.
I spent the rest of the year playing the soundtrack through, over and over, on Spotify.

In summer, I took Jack to England. We spent a couple of days in London to adjust to the time and meet with old friends. We also took the chance to see the London Eye and the London Dungeons. Then we headed to Waldringfield and spent a glorious week living in a tent, eating outdoors, enduring thunderstorms, crab fishing off the beach, and sailing on the river. We had one more day in torrential rain in London before heading home. It was a really good trip, just me and Jack, so I think we’ll do it again next year.

Shortly after our return, Agnieszka and I got our American citizenship! For the first time, our little family all share a nationality! 🎉

On a whim, I took the family to Legoland for Thanksgiving. Max in particular has been obsessed with Lego (wegos) recently, so I thought we’d enjoy it. It was indeed a good place, but we were unlucky with the weather. Torrential rain. And then, somehow, I hurt my shoulder really badly.

Last year I was pleased at doing my pushups and controlling my shoulder problems. This year I switched to using dumbbells instead of doing push-ups, and it got worse again. I got x-rays, which showed nothing, and started physio, which honestly didn’t make much difference. And then Thanksgiving in Legoland. The pain was unbelievable, and it was almost impossible to lie down comfortably to sleep. I got an MRI soon after, which shows a little tear, maybe. We’re doing a cortisone shot, and then surgery if the shot doesn’t help. Bleah. The pain faded after a few days, but I couldn’t move my arm above shoulder height for six weeks.

For Christmas we went to Hawaii, to the Disney resort. For the first time, the kids really enjoyed meeting the characters and getting hugs and high-fives. Jack liked the river and the slides, and Max loved the beach. It was weird being away from home for Christmas.

At work, I celebrated my nine-year anniversary.
I switched teams for no particular reason, mostly just to keep working on a project I liked. Next year I’d like to do less myself and help others more. I find that hard.

August 2019

Cookie banners

In the EU, there's a requirement that users are warned about the cookies that a site uses. It's a bizarre quirk that resulted from well-meaning legislation that appeared before GDPR. However, it assumes that users know what a cookie is and what it might be used for, which they absolutely do not. And it's oddly specific: I can register a serviceworker or use localStorage to store data and run code on the device, but for cookies I need a warning banner.

https://www.cookiebot.com/en/cookie-law/

Every website implements this banner differently, leading to a mix of different ways to pop up a banner over the content of whichever site you're currently viewing. There's no prescribed way to handle this, so each site owner must solve it for themselves. This leads to a lot of superstition about what the requirements actually are, especially for teams without formal legal advice, and especially for teams with formal legal teams (because legal teams tend to hedge against the possibility of legal action rather than acting in the interests of the user).

What would be a better solution here? I notice that some sites ask me if they can send me push notifications, or whether they're able to read my current location. These prompts don't need to be provided by the site owner: they're built into the permission model of the browser. _Why is it different for cookies?_

Indeed, at a basic level, it is the browser that stores cookies for the websites. It is the browser attaching them to requests, reading them off the responses, and managing cross-domain security. A browser is the only actor that can prevent a cookie in an http response header from being saved, based on approval given in a prompt.

As I browse the internet in the EU, the vast majority of sites I see give me cookie notices. However, some do not.
I would be interested to know whether the browser vendor could be held legally responsible for not warning the users of these websites about the cookies that are issued. That might prompt swift action from the browser vendors to implement a native prompt.

A native prompt could have other benefits: one might be able to set a preference to "always accept" or "always reject", or to artificially limit the lifetime of cookies across all sites. I imagine that we could agree a means to provide the browser with a link to our cookie policy, and that the browser could display it without using cookies, before cookies are accepted by the user. Perhaps this would be at a [well-known](https://tools.ietf.org/html/rfc5785) URL per site.

Cookies are essential for complex and useful websites. Let's ask the browsers to help us out.
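As a sketch of that last idea: a browser could derive the policy location from the site's origin alone, before accepting any cookies. The `/.well-known/cookie-policy` path is purely hypothetical here; no such name is actually registered:

```javascript
// Hypothetical: resolve a site's cookie-policy document at a
// well-known URL (RFC 5785 style). The path name is invented.

function cookiePolicyUrl(siteOrigin) {
  // An absolute path replaces whatever path the base URL has,
  // so any page on the site maps to the same policy URL.
  return new URL('/.well-known/cookie-policy', siteOrigin).toString();
}
```

Because the URL is derivable from the origin, the browser never needs the site to declare anything: it can fetch and render the policy in its own UI.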

August 2019

Taking notes

There are different kinds of note-takers. Some people record everything: every word. They might or might not share them. Others will write down important points or action items. I tend to note down my own action items and let others fend for themselves. Others take no notes.

Let's say a waiter comes to your table to take orders. If they take no notes, you're happy, because they're confident. But you're also uncomfortable. Unless this is a very classy place, you _know_ there's a mistake in there. Lemonades will be forgotten; steaks will be overcooked. Waiters should be taking action items.

Let's say you're in a 1:1 at work with your direct manager. They're looking at you, but they're typing every word you say. Everything. It's not that they're not listening - they _must_ be listening to every word. But they're distracted - they're not _hearing_ you. And the chances of a Slack ping popping up while they're typing are 100%. They'll just answer it and "what were you saying can you just repeat that?" Infuriating. Managers should be present in the moment, taking down important points, preferably on paper so that you can see them.

Both of these recommendations involve paper, but they don't need to. A waiter taking orders on a tablet can work fine. A manager taking notes in a shared doc works really well, especially if you're remote.

When I was a manager, I tried a mix of these methods. I couldn't keep up with the conversation while transcribing. My touch-typing just isn't good enough, and the transcripts miss the context of the day anyway, so they don't even make good long-term records. I often took _no_ notes. I enjoyed this - I could give my reports my full attention. However, our 1:1s became very repetitive, and neither of us made progress. While I liked the idea of compiling a record of meetings at the end of the day, it simply never happened - I never found the time.

Sharing a single doc of basic bullet points seems to work.
Each meeting, make a new section at the top with the date. Review your previous notes at the start of each new meeting. Make new bullets. This provides a useful shared record, and gives you the opportunity to build forward. Actually, this would work pretty well for restaurants too.

August 2019

Identity and Endorsement

_Verification_ is a feature common to Facebook, Twitter and several other social media networks. The problem it was created to solve was to differentiate between Jan Smith the famous actor and Jan Smith the absolute nobody who wants to build social capital off the reputation of Jan Smith. Or indeed, to attempt to sully the social reputation of Jan Smith. All the "real" Jan Smith needs to do is provide proof of identity and their username, and the networks will add a reassuring "tick" to let everyone know who is real and who is not. Twitter is one of the few networks that still allows anonymous and parody accounts to be created, so I believe the risk of confusion and impersonation is greater.

As with basically every other feature ever, the problem is more complex than it first appears. The "blue tick" is now seen as an endorsement. This mixes the reputation of the network with the reputation of the verified user. Additionally, since it appears to be an endorsement, and is of limited availability, the award of the blue tick is greatly prized. There is always someone in my DMs asking if I can help them get verified (I cannot).

Once a user has a blue tick confirming their identity, what does that mean? Can they then change their photo, their name, their username, their bio? And if they do, what does the tick mean? In a world of seven billion individuals, who gets to decide who the "real" Jan Smith is? How does Smith apply for verification? What if a Jan Smith has already been approved for verification? What if the individual lives in a place where they do not have a way to prove their identity (a surprisingly common issue)?

Verification is a combination of **identity** (proving that your name is what you say it is) and **credibility** (showing which person with that name you are). If I were to try to define such a system, I imagine I would try to build my qualification criteria off existing systems.
Perhaps I'd leverage passports, government IDs, and credit cards for identity. And for a measure of credibility I'd look to large reputable organisations like newspapers (for journalists), sports teams, religious institutions, elected government officials, and so forth. Beyond those, I'd try to establish a measure of credibility in their field, based on mentions in the press, or Wikipedia. Without a measure of _credibility_, we might be asked to verify every single "Jan Smith", which would undermine our solution to the original problem: how do I find "my" Jan Smith?

The trouble with measuring credibility based on external factors is that we will accumulate all the biases of the original sources. If men are more likely to get published in the press, and more likely to apply for verification, they're doubly likely to be approved.

Have a think about how we could qualify the credibility of someone in their field. Perhaps you would base it on the approvals of a given number of other network users? LinkedIn does that for "skills". But that factor could easily be abused; you'd need to account for that in your model. And remember the seven billion people problem: we can't employ a private investigator for each and every applicant. It's a difficult problem.

In my opinion the best solution (not my idea, widely discussed already) is to split identity verification from endorsement of credibility. Identity verification should be widely available, and networks could even tell us which method they used, eg "Facebook has seen a copy of a US driver's license". Endorsement of credibility should still be left to others. Allow people to link to official websites and crawl those websites for usernames to be verified. When approved, add the website to the user's profile. This means the networks need only vet official websites for validity. Obviously this is still not trivial, but it might be more manageable.
This takes time and resources, both of which are carefully managed at any social network, regardless of size. I don't expect to see it fixed soon.

One final problem to consider appeared last year. My friend contacted me on behalf of [@dog_rates](https://twitter.com/dog_rates), asking if I could help with verification (I couldn't). There were plenty of impersonating accounts, so I can see why verification makes sense. But how do you verify a user without an identity?

August 2019

Losing control

Twitter added a new control to the home timeline recently: "latest tweets". It's a toggle that lets you switch between "Home" (the ranked or algorithmic timeline) and "Latest" (a reverse-chronological list).

The algorithmic timeline has always been controversial, both inside the company and out. To account for that, it was rigorously tested over months and months. The results are unequivocal: our users like more tweets, read more tweets, and use Twitter more when they have the ranked timeline. For new users who have subscribed to a few spammy accounts (for example, news), the ranked timeline reduces the effect of the overactive accounts, letting the quieter accounts show through. For power users, the ranked timeline helps cut through the many accounts you follow to surface the more valuable tweets, new accounts to follow, and conversations you're interested in.

If the ranked timeline is better for everyone, why is it controversial? Why can it be so unpopular?

It's a long-standing issue at Twitter that we don't really know what the magic is. We know why people use Uber: they need to get from A to B. Why do people use Twitter? A whole bunch of reasons. Why does it succeed where others did not? Unclear. One of the most successful initiatives in recent years has been defining the role of Twitter: to provide news. This doesn't mean that we're dismissing other uses, simply that we can focus on optimising for news.

One thing that distinguishes Twitter from news TV, from newspapers, from Facebook, from Google, is the ability to choose who you follow and control your experience. You know why you're seeing the tweets on the screen: you chose to follow those accounts. For the ranked timeline, this is no longer true. You've lost control over the experience. And with that, you've lost _ownership_ in the product. Previously you were free to make the Twitter experience your own. Now, someone else is changing that.
It doesn't matter if the tweets are better or not; the feeling of ownership is lost. Was that part of the magic?

Is that why Twitter added a toggle for "latest"? Actually, no. We added it in recognition that sometimes you are following a live sporting event and need the tweets to be in chronological order. That's why the toggle resets after a few hours, when the event is over.

Twitter isn't alone in pushing a feature that claims to know what you want better than you do. Apple famously design their hardware and software without user research. They hire experts and want to solve the issues users haven't thought about yet, not the issues users are talking about (which tend to be top-of-mind). The removal of the headphone jack, the keyboard, and the touchbar are classic examples. Facebook's News Feed has been ranked for a long time, which has led to accusations of intentional (or unintentional) political (and emotional) manipulation of its users.

Apple and Facebook are some of the most successful companies in the world, suggesting that taking control away from users does not hurt the bottom line. It'll be interesting to see if this continues to be a winning formula, or whether new competitors offering to return a sense of ownership in the product will win through.

August 2019

Shipping

After months and maybe years of stress, meetings, late nights, bug reports, dogfooding, requirements changes, dependency changes, management changes, user testing and actual coding, you're ready to ship your significant rebuild. What happens next?

There's often some kind of anti-climax at the launch. If your site has existing traffic, you can't just flip a switch. You need a/b testing, holdbacks, gradual rollouts, comms. It can take a lot longer than expected.

With an established software development team, you've probably pivoted the whole team away from the day-to-day mission to focus on this rebuild. Managers will be familiar with [Tuckman's stages of team development](https://en.wikipedia.org/wiki/Tuckman%27s_stages_of_group_development): forming, storming, norming and performing. With any luck you'll be a fast, efficient team in the performing stage of the project. Indeed, the smaller tasks near the end, where the team are very familiar with the code (because they've just written it), can feel the most productive.

Your team has been focussed on a mission. The mission was the project, and the project is nearing completion. But what is a team? It's a group of people with a mission. Without a mission, there's no team. In 1977, Tuckman added a fifth phase, "adjourning", to the model, recognising the end of the team and allowing for a cyclic process. What might that phase look like?

**Folks will leave.** Either the company or the team. It's likely many have stayed to complete the project anyway, and you'll have seen unusually low attrition for the last third of the project. That will catch up.

**External stakeholders will come knocking.** It's likely that you had to freeze some features or push back on requirements for the duration of your project. Those external partners will be expecting more attention, likely before you've even shipped. A human aspect is that they felt left out of your project launch and expect compensation.
**Management will reorg.** Rather than leave the high-performing team alone, senior management often sees this as an opportunity for a reorg. Like the external partners, they've seen slow responses to requests and received pushback on their requirements because of the long project. Perhaps your team has been left out of previous reorgs to keep the project on track. Senior management will be looking to normalize the team with the rest of the company, possibly simply by disbanding it.

From the point of view of the team, all of this can be stressful. The team likely has a significant backlog of technical debt which was taken on to achieve the deadline. There'll be an accumulation of key knowledge in the team which should be codified, there'll be bugs to address, and there'll be cleanup tasks for the previous systems, especially if they're still running in parallel. With the team's future in question, it can be hard for the team to focus, especially when they believe they should still be celebrating the launch.

To make this easier, I have some ideas:

Get closure

- Celebrate the launch.
- Look for signs of burnout and manage it. Let folks take extended vacations and reassure them that they'll be welcomed back.
- Be honest about the end of the project and how you plan to address the wrap-up work. You should have that work fully scoped before you launch. Don't make it a six-month documentation sprint.

Prepare the next phase

- Compensate the team. _You now have the most skilled team in the current codebase that there will likely ever be._ Increase salaries rather than giving spot bonuses. It'll be more clear that they're valued after the project, rather than being compensated for work done.
- Be honest with your manager if you're considering leaving the company. It gives them a chance to either offer you more opportunities, or at least to manage the transition early.
Start the next phase

- Consider a new mission, but be careful how you discuss it - allow the team the time to enjoy the launch, but let them know there will be meaningful work afterwards.
- Consider the team at the forming stage again. New processes need to be established. Stakeholders should be reviewed and reconnected.

The acknowledgement of the "adjourning" phase allows the group to respect the end of things the way they were, and to move onto the next project.

August 2019

Removing cookies

Cookies are hard to manage. As you'll know, the cookie API is ... _[infelicitous](https://tools.ietf.org/search/rfc6265)_. You can set a cookie like so:

```
> document.cookie='name=value; Path=/; Domain=kenneth.kufluk.com; Max-Age=1';
```

When a cookie is set by the server, it uses a similar format, in the "set-cookie" header of the response. Reading the cookie back just gives you the serialized name/value pairs:

```
> document.cookie;
< "name=value;another=value"
```

When the cookies are sent to the server in the request, they're also just the name/value pairs. What this means is that the full metadata of a cookie is never available, to the client or the server, except when it is set.

If you want to delete all the cookies for your website, this is tricky. You can delete a cookie by providing a new cookie of the same name with an expiry date in the past. However, cookies are partitioned by domain and path. If those aren't set appropriately on the deletion, you won't clear the right cookie. (There is a new header, "[Clear-Site-Data](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Clear-Site-Data)", if you want a nuclear option, which will instruct supporting browsers to remove all cookies.)

Given that we need to set a cookie to delete a cookie, we need to know the name, domain, path and other metadata of the cookies to be able to delete them. But we only know the names of cookies on the current page (ie, based on the current path and domain), and the path/domain/metadata for those cookies is not available. In other words, it's important to issue your cookies appropriately in the first place.

**Always set the Path to root (/)**

**Always set the Domain**

If our cookies can expire within a reasonable timeframe, then there's little need for deletion. A common solution is to use session cookies.
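Before moving on, the deletion rules above are worth making concrete. A cookie is removed by re-issuing it with an expiry in the past, and the Path and Domain must match those used when the cookie was set. A small sketch (the helper name is mine):

```javascript
// Build the string that deletes a cookie: same name, matching Path
// and Domain, and an Expires date in the past. Assigning the result
// to document.cookie in a browser removes the cookie.

function deletionCookie(name, domain, path = '/') {
  return [
    `${name}=`,
    `Path=${path}`,
    `Domain=${domain}`,
    'Expires=Thu, 01 Jan 1970 00:00:00 GMT',
  ].join('; ');
}

// In the browser:
//   document.cookie = deletionCookie('name', 'kenneth.kufluk.com');
```

If the Path or Domain differ from the original cookie's, the browser treats this as a different cookie entirely, which is exactly why consistent issuance matters.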
If you don't specify an expiry or a max-age when setting a cookie, it is scoped to the "session" and is deleted when the browser window is closed. In the old browser context, where we had one site per window, this made sense. However, modern browsers can preserve and restore pages, windows and tabs even after a reboot. Session cookies are no longer short-lived; they're of indefinite length. During a recent survey of cookies at Twitter, we observed session cookies in requests that we hadn't issued for _three years_.

> There are two different kinds of cookies often called "session cookies", which can be confusing. I'm using the term here to mean a non-persistent cookie that should be removed at the end of a "session". It is also often used to describe a cookie that contains a serialized set of values.

**Always set the Expires or Max-Age header** (_max-age is not supported by older browsers, so expires is still preferred_)

Given that we now have cookies which are properly issued with expiry dates, we should ask what a reasonable expiry time would be. Let's consider a couple of examples.

We show a tooltip pointing to a new feature of the site. When a user has dismissed that tooltip, we issue a cookie so that the user doesn't have to see it again. We don't ever want the user to lose that cookie, so we set the expiry time to "infinite", which our cookie library helpfully sets to the year 9999. In other words, eight thousand years from now.

> Every cookie you set will be attached to every request. While small, these bytes can add up over time, and can cause issues. We call this _request bloat_. An issue we see at Twitter is when the cookie size, combined with other headers, exceeds the limit of our http framework. These requests are immediately rejected with a status 431, which is bad for users: they won't know why the request failed, and they won't be able to submit a similar request until they clear out some cookies.

Another example is your login.
When you log in, we issue a cookie representing your credentials. The cookie allows you to make subsequent requests as that user. We set that cookie for a month.

Let's consider those expiries against the expectations of a user. If you take a vacation for a month, then reopen your laptop, would you expect either of those cookies to disappear? Probably not. Are you likely to be on vacation for 8,000 years? Probably not. I think we could set realistic, grounded values here, based on common sense. If you leave your computer for more than three months, it wouldn't be too much of a hassle to log in again. If you saw an educational tooltip 18 months after you first saw it, that mightn't be too annoying (assuming the code is still in the site).

However, cookie expiries don't work that way for the login cookie. The expiry isn't measuring the time since the cookie was last used; it's measuring the time since the cookie was issued. The best solution here is to keep a rotating value managed by the server. Store the login cookie value in a table on the server. If it's seen and it's more than 30 days old, issue a new one. If it's seen and it's more than 90 days old, consider it expired. By checking the expiry on the server rather than the client, we can set the cookie expiry to anything reasonable over 90 days.

**Consider 18 months a maximum lifetime for your cookie**

**Manage login expiries on the server side**

**Refresh/reset cookies that you want to keep longer**

If your site has lost its login cookie, it might find itself in a bad state. Maybe you have cached content in the serviceworker, in other cookies, in localStorage, in indexedDB. Historically, we cleaned up this user storage as if the user had logged out, but we found this caused problems: some users' storage was unexpectedly cleaned up. It turns out that some privacy-protection browser extensions can strip cookies from the first request.
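The rotating login-token policy described earlier can be sketched as a single server-side decision per request. The thresholds come from the text above; the function and field names are illustrative:

```javascript
// Sketch of the server-side rotation policy: the server records when each
// login token was issued, and on every request decides whether to keep it,
// rotate it, or treat the session as expired.
const DAY_MS = 24 * 60 * 60 * 1000;

function loginTokenAction(issuedAtMs, nowMs) {
  const ageDays = (nowMs - issuedAtMs) / DAY_MS;
  if (ageDays > 90) return 'expire'; // too old: treat as logged out
  if (ageDays > 30) return 'rotate'; // still valid: issue a fresh token
  return 'keep';                     // recent: leave it alone
}
```

Because the real cutoff lives on the server, the cookie's own Expires attribute only needs to be some comfortable value beyond 90 days.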
A common example is Privacy Badger, which strips the cookies from page requests [if the page is served by the serviceworker](https://bugs.chromium.org/p/chromium/issues/detail?id=946908). As these extensions are common, the developer should guard against them by checking the login via XHR, and then either refreshing the page or popping a dialog asking the user what to do.

Since cookies are a distributed store of data that is hard to read from and manage, it's important to be careful about which cookies you issue and when, and to limit those cookies as much as possible. Prefer storage such as localStorage where possible.

Cookie spec: https://tools.ietf.org/search/rfc6265
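As a closing sketch, that XHR login check might look like the following. The endpoint name and the response handling are hypothetical; the point is to confirm login state with a live request instead of trusting the (possibly stripped) cookies on the first page load, and to avoid wiping user storage on anything other than a confirmed logout:

```javascript
// Decide what to do based on the status of a lightweight session-check
// request. Only a definite 401 should trigger a "logged out?" prompt;
// transient failures should never wipe the user's cached data.
function loginCheckAction(status) {
  if (status === 200) return 'proceed'; // cookies arrived intact
  if (status === 401) return 'prompt';  // ask the user; don't clean up storage
  return 'retry';                       // transient failure: try again
}

// In the browser, something like (endpoint is illustrative):
// fetch('/api/verify_session', { credentials: 'include' })
//   .then(res => handle(loginCheckAction(res.status)));
```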