You cannot possibly avoid building an API. Even if you build "just a Website", it will still need to get its data from your backend somehow. However you decide to do this, that is your de facto API.

Knowing this, the real question isn't whether to build an API, but how to build it. You can do it on the fly as an ad hoc thing (and indeed, many Websites are built exactly this way), or you can design it carefully to be usable in other contexts. Seen in that light, it becomes pretty clear that your colleague is right: you should build the API first, and then build your site on top of it.

Nevertheless, this brings with it some concerns, as you point out. To address them:

It's way too abstract to run your backend off an API. You're trying to make it too flexible, which will make it an unmanageable mess.

That depends on how you do it. As George Pólya points out in his excellent text How to Solve It, oftentimes "the more general problem may be easier to solve". This is called the Inventor's Paradox. In the case of programming, it often works by means of separation of concerns: your backend no longer has to be concerned with the format of the data that it puts in and takes out, and so its code can be much simpler. Your data parsers and renderers no longer have to be concerned with what happens to the data they create, so they, too, can be simpler. It all works by breaking the code down into more manageable chunks.
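As a hedged illustration of that separation (the function names and data are invented for this sketch), the business logic deals only in plain data, while a separate renderer owns the wire format:

```python
import json

def get_user(user_id):
    # Backend concern: fetch or compute the data. No formatting here.
    return {"id": user_id, "name": "Alice"}

def render_json(data):
    # Renderer concern: turn data into a wire format. No business logic here.
    return json.dumps(data)

payload = render_json(get_user(42))
```

Either half can now change (a new storage layer, a new output format) without touching the other; that is the simplification the Inventor's Paradox is pointing at.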

All the stuff built into MVC seems useless, like roles and authentication. For example, [Authorize] attributes and security; you will have to roll your own.

I confess that I find it extremely difficult to sympathize with people who refuse to learn their tools. Just because you do not understand their use does not mean that they are useless, and it certainly doesn't mean you should roll your own. Quite the contrary; you shouldn't go rolling your own tools until you understand the alternatives, so that you can be sure to address the same problems that they do (even if only in your own ways).

Consider Linus Torvalds, who is most famous for writing Linux, but who also wrote git, now one of the most popular version-control systems in the world. One of the driving factors in its design was a deep opposition to Subversion (another extremely popular VCS, and arguably the most popular at the time git was written); he resolved to take every problem that Subversion solves and, to whatever extent possible, solve it differently. To do this, he had to become an expert on Subversion in his own right, precisely so that he could understand the same problem domains and take a different approach.

Or, in the process of learning your tools, you may discover that they're useful as-is and don't need to be replaced.

All your API calls will require security information attached, and you will have to develop a token system and whatnot.

Yes. This is how it should be.
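In practice you would reach for an established scheme (session cookies, OAuth 2.0 bearer tokens, JWTs) rather than rolling your own, but the core idea is small. A minimal sketch using only Python's standard library, with a hypothetical server-side secret:

```python
import hmac
import hashlib

SECRET = b"server-side-secret"  # hypothetical key; real systems manage keys carefully

def issue_token(user_id):
    # Sign the user id so the server can later verify it wasn't forged.
    sig = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}:{sig}"

def verify_token(token):
    # Recompute the signature and compare in constant time.
    user_id, _, sig = token.partition(":")
    expected = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

Every API call carries the token; the server verifies it before doing anything. That's the whole "token system and whatnot", and it's exactly the security discipline your application needs anyway.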

You will have to write complete API calls for every single function your program will ever do. Pretty much every method you want to implement will need to be run off an API. A Get/Update/Delete for every user, plus a variant for each other operation, e.g. update user name, add user to a group, etc., and each one would be a distinct API call.

Not necessarily. This is where architectures like REST come into play. You identify the resources your application works with, and the operations that make sense to apply to those resources, and then you implement these without worrying so much about the others.
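A hypothetical sketch of the idea: one resource type, a small uniform set of operations applied to it, and nothing operation-specific beyond that. The in-memory store and handler are invented for illustration:

```python
users = {}  # in-memory store standing in for a database

def handle(method, user_id, body=None):
    # A uniform interface: the same few verbs apply to every resource,
    # so you don't write a distinct call for every conceivable operation.
    if method == "GET":
        return users.get(user_id)
    if method == "PUT":
        users[user_id] = body
        return body
    if method == "DELETE":
        return users.pop(user_id, None)
    raise ValueError(f"unsupported method: {method}")
```

"Update user name" and "add user to a group" stop being separate API calls; they're both just a PUT of a modified user resource.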

You lose all kinds of tools like interfaces and abstract classes when it comes to APIs. Stuff like WCF has very tenuous support for interfaces.

On the contrary, interfaces become more important when you're using an API, not less. They surface in the representations you render your data into. Most people nowadays specify a JSON-based format for this, but you can use any format you wish, as long as you specify it well. You render the output of your calls to this format on the backend, and parse it out into whatever you wish (likely the same kind of object) on the frontend. The overhead is small, and the gains in flexibility are huge.
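For illustration, a sketch of that round trip in Python: a shared User shape serialized to JSON on one side and parsed back on the other. The dataclass stands in for whatever interface definition the two sides actually agree on:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class User:
    # The agreed-upon "interface": both sides know a User has these fields.
    id: int
    name: str

def to_wire(user):
    # Backend: render the object into the specified wire format.
    return json.dumps(asdict(user))

def from_wire(payload):
    # Frontend: parse the wire format back into the same kind of object.
    return User(**json.loads(payload))
```

The contract lives in the format specification rather than in a language-level interface keyword, but it serves the same purpose.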

You have a method that creates a user, or performs some task. If you want to create 50 users, you can just call it 50 times. When you do this method as an API, your local webserver can connect to it over named pipes with no problem, and your desktop client can hit it too — but suddenly your bulk user creation involves hammering the API over the Internet 50 times, which isn't good. So you have to create a bulk method, but really you're just creating it for desktop clients. This way, you end up having to (a) modify your API based on what's integrating with it, rather than just integrating with it directly, and (b) do a lot more work to create an extra function.

Creating a bulk version of an existing method is hardly something I would call "a lot more work". If you're not worried about things like atomicity, the bulk method can wind up being not much more than a very thin frontend for the original.
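A sketch of how thin that frontend can be, with `create_user` standing in as a simplified stand-in for your existing single-item operation:

```python
def create_user(name):
    # Existing single-item operation (simplified stand-in).
    return {"name": name}

def create_users(names):
    # The bulk variant is little more than a loop over the original.
    # (If atomicity matters, this is where a transaction would go.)
    return [create_user(n) for n in names]
```

One network round trip instead of 50, for a few lines of wrapper code.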

YAGNI. Unless you're specifically planning to write two identically functioning applications, one web and one Windows application for example, it is a huge amount of extra development work.

No: YANI (You Already Need It). As I outlined above, you're going to have an API whether you plan for it or not; the only question is how much design work you put into it.

Debugging is much harder when you can't step through end-to-end.

Why wouldn't you be able to step through end-to-end?

But more to the point, being able to examine the data going back and forth in an easily-recognized format that cuts out all the display cruft actually tends to make debugging easier, not harder.

Lots of independent operations will require lots of back and forth. For example, some code might get the current user, check that the user is in the administrator role, get the company the user belongs to, get a list of other members, and send them all an email. That would require a lot of API calls, or writing a bespoke method for the specific task you want, where that bespoke method's only benefit would be speed and its downside would be inflexibility.

REST solves this by working on complete objects (resources, to use REST theory's terms), rather than the individual properties of objects. To update a user's name, you GET the user object, change its name, and PUT the user back. You might make other changes at the same time as you change the user name too. The more general problem becomes easier to solve, because you can eliminate all those individual calls for updating individual properties of an object: you just load it and save it.
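Sketched in Python, with a dict standing in for the server's data store and the function names invented for illustration:

```python
store = {7: {"name": "Old Name", "email": "old@example.com"}}

def get_resource(rid):
    # GET: fetch the whole object.
    return dict(store[rid])

def put_resource(rid, obj):
    # PUT: replace the whole object in one call.
    store[rid] = dict(obj)

user = get_resource(7)
user["name"] = "New Name"           # make as many local edits
user["email"] = "new@example.com"   # as you like...
put_resource(7, user)               # ...then save once
```

Two round trips total, no matter how many properties changed — no per-property API calls and no bespoke "update name and email" method.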

In some ways, this is not unlike RISC architectures on the hardware side. One of the key differences between RISC and CISC (its predecessor) is that CISC architectures tend to include many instructions that operate directly on memory, while RISC architectures tend to operate mostly in registers: in a purely RISC architecture, the only operations on memory are LOAD (copy something from memory into a register) and STORE (take a value from a register and put it into memory).

You'd think that this would mean taking many more trips from registers out to memory, which would slow down the machine. But in practice, the opposite often happens: the processor (client) does more work between trips to memory (server), and this is where the speedup comes from.

Long story short: your colleague is right. This is the way to go. In exchange for a little up-front work, it will dramatically simplify the code for your Website and enable better integration with other Websites and apps. That is a price worth paying.

Further reading: