About this talk

In this talk, we won’t be looking at the feature set of Rider, a new cross-platform .NET IDE from JetBrains that integrates the language analysis features of ReSharper with the IDE functionality of IntelliJ. Instead, let’s geek out and see how and why we built our own custom, asynchronous, declarative, reactive, inter-process, cross-runtime communications protocol.

Transcript

- This is Rider here, so we can see it's just, as you'd expect, an IDE. It's got a Solution Explorer on the left. We've got projects listed here. I know they're a little small here. Resizing the entire IDE isn't as good as resizing the editor text. But we've got the text on the right, C# files, and you can see we've got, sort of, squigglies and inspections going on here. So hover over that, you'll get the tooltips. We can Alt+Enter, change that to an expression body and it'll rewrite the code for you. We've got hints, we can Alt+Enter and convert to auto-properties, and it'll do the, sort of, standard things that you're used to from ReSharper. So, it'll inspect your code, find issues, and then try and fix them for you, allowing you to rewrite code and, hopefully, get better, nicer code at the end of it. We've got Solution Explorer. We've got editors. We've got inspections, Alt+Enter, refactorings. There's unit testing, which we've got down here. We can run all of those. We've got a NuGet window as well. Which, ooh, I've got WiFi now, that might work. Let's see how it works. And, so, that's nice and quick as well, so we can search for things. And it'll find those and bring those in quickly. And it's all nice and lovely. There's a debugger on most platforms. Long story, we've got a debugger. If you're using .NET Framework, we've got debugging for Windows, Mac, and Linux. If you're on .NET Core, which we also support, we've got debugging on Windows. We're working on it for other platforms. Yeah, there's a lot of backstory there. We don't need to worry about that right now. So, that's kind of Rider there. Download it and give it a go. It's just using standard solution files, standard csproj files. It's not gonna harm anything you've got, so you can give it a go and give it a whirl. Alright, so, let's get back to this kind of thing. So, this isn't Rider, but what this is is a ReSharper IDE that we built at around 2004, I think it was. So, the ReSharper 1 timeframe.
We got ReSharper 1 shipped and then, come time for ReSharper 2, we decided what we were gonna do was build a whole IDE. And we got quite far with it, but there's the whole 80/20 rule: we got a lot of it done pretty quickly, and then the remaining 20% of it was gonna take 80% of the time to get it all finished and polished and everything. But it had a lot of features. It had Solution Explorer. It had editors. It had tabs. It had IntelliSense. I like the way it's got, sort of, rich islands of content in the editor there. So, we're rendering XML doc comments as a fancy block of something there. And this is before Visual Studio had its WPF editor and everything, so this is all Windows Forms. So, there's quite a lot of work gone into all of this. It had debuggers and everything. But we didn't ship it. We decided that there was too much work to finish it, to get it production-ready and everything. So, when we didn't ship it, we thought the best thing to do was to carry on being a plugin to Visual Studio, and that's served us really well for several years since. But, along the way, we've continually been asked two questions, really. Firstly, when are we gonna get F# support? Secondly, when are we gonna build our own IDE? And the answer to both of those, now, is Rider. This also gave us something else which is really, really useful, because it's using the ReSharper code base but it's running in its own IDE, and so we built interfaces for everything. And so we separated out the Visual Studio stuff from the IDE that we built here, and that was really useful, as we've used that in lots of places elsewhere. So, you can get a command-line version of ReSharper, which you can run on continuous integration systems, and it'll analyse your code, get a load of inspections, and report those as an XML file in continuous integration. Which is useful. The other thing we use it for is testing. All of our tests run as a headless version of ReSharper.
We don't fire up Visual Studio to test it. We run everything in process, in memory, within the test runner. There are about 100,000 tests as well, so we really, really don't want to be firing up Visual Studio to run those all the time. And the other thing that allows us to do is support multiple versions of Visual Studio. So, we're not tightly coupled to the Visual Studio interfaces. We can put something in between them, and we can have different implementations per version of Visual Studio, which is really cool. So, if we didn't build one then, and we've been asked lots of times in between to build an IDE, why are we building one now? There are lots of reasons for us building our own IDE. There are plenty for not doing it as well, but lots for building our own. Firstly, we're a plugin to Visual Studio, and so we have to live within that environment. And that means that we are sharing resources with Visual Studio. So, we have our own memory constraints and performance constraints, but we have to share those, now, with Visual Studio. So any changes to Visual Studio will impact us, and vice versa. And we're also limited to running in a 32-bit process. So, those resources, themselves, are constrained. And so there were lots of little reasons why we would want to build our own .NET IDE, but there was no, sort of, single, compelling reason to actually do it. Nothing really, kind of, pushed us over the edge. The thing which did, sort of, make sense for why we should do it was the .NET Core announcement, because .NET Core was gonna be cross platform, and if you want to run your code cross platform, your IDE ought to be cross platform. And it's like, right, now, with all of these things and cross platform, it's now worth us investing in it and giving it a go, and so that's what we've been doing. We've been building that now. So, that's the why. The next question is how? How do you build a .NET IDE?
Or, more importantly, how do you build a cross-platform .NET IDE? Because one of the big things about an IDE is that it's a graphical, you know, graphical program. So, you need to have some sort of UI toolkit, and there isn't, really, a decent UI toolkit which is cross platform for .NET. So, Mono has a version of Windows Forms, but it's not, necessarily, complete, and it's got a, sort of, different look and feel on various platforms. There's GTK# as well, which MonoDevelop, itself, uses, which is good. But we've already got an investment in a WPF application, and so how do we do this? You know, what's the best way of doing that? So, let's change the question slightly. Well, okay, so, how do we build a cross-platform IDE? It's like, "Hi, we're JetBrains, we build cross-platform IDEs." So, we've got IntelliJ, really. We've got IntelliJ, although what we call IntelliJ is actually IntelliJ IDEA. IntelliJ IDEA is the Java IDE that we have -- has anybody used IntelliJ IDEA, by the way? Yeah, if you're working in Java, chances are you'll have used it. It's our uber IDE. It, sort of, has all the features you need in it. It'll do Java. It'll do Groovy. It'll do Grails. It'll do HTML, CSS, JavaScript. The, you know, the works, really. And you can get a lot of plugins for other languages as well. But that's not just it. We've got the IntelliJ Platform. So, what we've done is we've, kind of, separated it out. So, we've got, not just IntelliJ IDEA -- that's a product, a full-featured IDE with everything in it. We've got the platform, which you can use to build IDEs, and that's what we use to build all of the other IDEs. So WebStorm, PhpStorm, RubyMine, DataGrip for DBAs -- those are using the IntelliJ Platform and using, essentially, plugins to that to provide functionality for particular languages. So, the IntelliJ Platform gives us a whole bunch of infrastructure for building things. So, it'll give us the project view on the left-hand side.
It'll give us the code completion windows. But it doesn't know anything about any languages; various plugins have to provide that language understanding. And the other thing is it's open source as well. So, IntelliJ Community is the community version of IntelliJ IDEA, and it's open source. It's also the same as the platform -- you can chop bits off to make your own IDE and have a go at that. It's all open source; you can download it and go for it. But IntelliJ is a JVM application, and that's how it's cross platform -- it's because it's JVM. It runs on Windows, Linux, and Mac, but, by being a JVM app, how do we make use of ReSharper and all of ReSharper's knowledge of C# and inspections and refactorings, and everything, when we are using a JVM application? ReSharper is, of course, a .NET app. So, how do we build a .NET IDE on the JVM? Two great tastes. I like the way we've got two things here. So, clearly, two great tastes and they're both lovely and both appealing. People love them both, but only one of them is glistening and shiny and everything. You'll make up your own mind which language that one is. So, we have various options here. There's a bunch of things we could do to make this work. Firstly, we could rewrite ReSharper in Java. That's probably not a good idea 'cause that's, like, 14 years' worth of investment and implementation, and rewriting that in a new language and a new platform is going to take absolutely ages. It's probably not a good idea. We'd also end up with an extra implementation to maintain, as well as ReSharper, so, not terribly useful. And, so, what else could we do? We could use some sort of automatic conversion and automatically rewrite our .NET code, C#, as Java and run it on the JVM but, again, that's kind of fraught with peril. It's probably not going to lead to a great experience. Another option, then, would be to run ReSharper out of process.
So, we can already run ReSharper as a command-line process. So, we could run that out of process and have a new UI on the front end. Have something provide the user interface for that. We could use IntelliJ, because that's our cross-platform IDE that we've got there. We could make use of the investment we've made in IntelliJ as well as the investment we've got in ReSharper itself. So, that seems like a good idea. We could even go crazy with some other options, like having ReSharper running out of process and building a web front end for that, but that's a lot of investment on the web front end. And I don't know JavaScript well enough, so I can't do that. Or another alternative would be some sort of hybrid thing where we implement a lot of stuff in IntelliJ, such as the parsing and the syntax trees, but then have all of the inspections running in the back end where we've already got them implemented, but that can be, kind of, confusing. So, we did the obvious choice, really. We integrated the two. We have ReSharper running as an out-of-process server, as a language service. And we have IntelliJ running as the front end, as the user interface. So, ReSharper runs as a language server. It knows everything there is to know about the C# stuff, and IntelliJ provides the user interface. So, ReSharper's running headless as a command-line process. IntelliJ is the UI. This means we're now cross platform. IntelliJ is already cross platform. It runs on Windows, Linux, and the Mac because it's a JVM application. And ReSharper, itself, can run cross platform because it's just a .NET application. So it can run on the .NET Framework when you're on Windows, and it can run on Mono on Mac and Linux. We do want to run it on CoreCLR, on .NET Core, but we're still waiting for .NET Core 2.0, which gets us all the APIs back. We would've had to do a lot of work rewriting to work with the stripped-down APIs, and now they're adding them back. It'll happen sooner or later.
This gives us a couple of useful benefits. If we're running out of process, we no longer have the constraints of running inside Visual Studio. So, ReSharper is now no longer fighting for resources against Visual Studio, which is a highly interactive application. So, things like WPF going on all the time -- there's a lot of memory traffic going on for WPF. We don't have to work with that now. It's just our own memory usage. We can also run 64-bit. So, the front end can run 64-bit in the 64-bit JVM, and ReSharper can run 64-bit as well. So, we no longer have all these constraints that we've had before. It also means that we can innovate in areas where we haven't been able to look before in Visual Studio. So, NuGet, for example -- we haven't been able to do anything, really, with the NuGet user interface. We've been able to do things like have an Alt+Enter on an unrecognised type, and we can say, search on nuget.org -- we've got an index of all the types on NuGet, and it'll say, well, it's used in these packages here, and you can quickly install the package from there. But A, not many people know that, and B, that's very different to actually trying to say, well, I want to install NUnit. Whereas now we can do that. We've got our own implementation of the NuGet window there, and we can, sort of, sprinkle on the JetBrains magic sauce, and we can put on layers of caching and background updates, and things like that, and it can be much faster. The other nice thing is that we get to make use of continued investment in ReSharper. So, anything we build in ReSharper makes it into Rider, and any change we make for Rider makes it into ReSharper. So, basically, now we've got two products which are going to help each other, really. So, ReSharper's still going to be there. It's still going to be a Visual Studio plugin.
We're still going to be building features for it, because that's where all the C# analysis happens, but all of that stuff makes it into Rider as well. So, does that make IntelliJ a thick client or a thin client? That's kind of an interesting question, 'cause IntelliJ provides us a whole load of high-level user interface elements. So, it provides us with tree views, the infrastructure for debugging, the infrastructure for code completion, for editors -- it provides the whole editor itself. Settings pages and everything. So, it provides a lot of functionality, but it's dumb. It doesn't know anything about the languages. So, all the language information comes from ReSharper, so that's very much out of process. So, all the inspections, the refactorings, the code completion, everything -- IntelliJ provides us with the infrastructure but not with the actual implementation. So, in that respect, it's a thin client. So, it's like a really, really smart thin client. And there's also some useful standalone functionality in there as well. So, IntelliJ gives us useful things: it will index all the files and provide quick find, version control system and database stuff as well, where it doesn't need to know anything about languages. Right, so let's get into some of the interesting stuff of how it actually works. How do we join the two together? How do we make this work? How does it work when I press Alt+Enter? So, everyone knows the Alt+Enter thing, where we've got a, sort of, an inspection. It's highlighted as a squiggly, we hit Alt+Enter, we get a menu of items, and you hit one of those and it fixes, whatever the issue is. So, in this case, IntelliJ provides us with the editor. It provides us with the text caret, so we know whereabouts in the document we are, and it tracks it when we hit Alt+Enter. At that point, IntelliJ will ask the current language for the items to display.
So, IntelliJ won't know what items are there; it's all down to what the language is that's there. IntelliJ won't know that you can rewrite this to an expression-bodied member or something, but ReSharper will. In Rider, the current language is a facade for the ReSharper out of process, and so, when IntelliJ asks the current language, that then gets proxied out to the out-of-process ReSharper, comes back, and reports everything. So, ReSharper will return back a list of the display names for the menu, the icons, and any submenus that are there. So, it's all really quite lightweight. Just a handful of things there, and IntelliJ puts those in the Alt+Enter menu, and there we are, we've displayed the items. When one's selected, it'll go back to ReSharper and, actually, action it as well. Inspections go the other way. So, when you hit Alt+Enter, you go to ReSharper. When you do inspections, everything comes from ReSharper. So, again, IntelliJ provides the infrastructure. It knows how to display the squiggly items, but it's all pushed from ReSharper. So when IntelliJ opens a document, it notifies the out-of-process ReSharper that it should be open, and ReSharper will do a bunch of analysis and asynchronously notify IntelliJ back with all the information. It'll publish, effectively, a list of ranges, severities, and tooltips. IntelliJ then just displays them in the right place. And if we look at how we modify source, this is now bi-directional, so you can have this going in either direction. The user could type something, and so we have to tell ReSharper about it, or ReSharper could refactor something and have to tell IntelliJ about it. And this happens really quite straightforwardly. The user types, IntelliJ publishes a change, and it publishes just the delta of what's been typed and where. It turns out this is basically just a stream of characters, and it's nice and easy to send those characters off.
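To make the "lightweight data" point concrete, here's a toy sketch, in Python for brevity, of roughly what crosses the process boundary in those two flows: menu items one way, inspection results the other. All the type and field names here are illustrative, not Rider's real API.

```python
# Toy sketch of the lightweight data crossing the process boundary.
# These names are made up for illustration; no syntax trees are sent.
from dataclasses import dataclass

@dataclass(frozen=True)
class AltEnterItem:
    display_name: str      # text shown in the Alt+Enter menu
    icon: str              # icon identifier, resolved by the front end
    has_submenu: bool      # whether ReSharper reports a submenu

@dataclass(frozen=True)
class Inspection:
    start: int             # offset range within the document
    end: int
    severity: str          # e.g. "HINT", "WARNING", "ERROR"
    tooltip: str           # text shown on hover

# The back end publishes plain values; the front end only renders them.
items = [
    AltEnterItem("To expression body", "bulb", False),
    AltEnterItem("Convert to auto-property", "bulb", True),
]
inspections = [Inspection(42, 49, "WARNING", "Use expression body")]

print(len(items), inspections[0].severity)
```

A handful of strings, ints, and flags per feature; that's the entire payload for displaying a menu or a squiggly.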
When ReSharper rewrites a chunk of code, it publishes that delta, again as a chunk, and it says, well, put this in this block here and delete this block here, and that's it. So, if you renamed a variable or whatever, it'll just say, change that, change that, change that. And, again, so it'll just publish the values. Everything alright so far? Make sense? Cool. So, that's, kind of, three useful things: Alt+Enter, modifying code, and inspections. We're going to ReSharper, coming back from ReSharper, and both ways as well. Some useful observations from this. We're not actually implementing anything here; we're enabling functionality. So, if we can show one Alt+Enter menu item, we can show them all. If we can run one inspection, we can run them all. If we can rewrite any bit of code, then we can have all of our refactorings working. So, just by having those three things -- the Alt+Enter menu, the inspections, and the modifying of source -- we get a huge amount of functionality straight away. So, we don't need to, sort of, wire up individual bits of functionality one by one; we just enable functionality. We just turn it on. The only problem is when there's a user interface. So, if you hit Alt+Enter and it pops up a dialogue, or you invoke a refactoring that pops up a dialogue, that'll crash, because, on a Mac, you don't have Windows Forms. You don't have WPF. It can't work. So, we have to work around that. The other thing is that the data's really lightweight. We're not passing huge chunks of information around. It's not like saying, right, well, here's a syntax tree, go and do something with it, and passing it back. We're passing back the three or four items in an Alt+Enter list. We're passing back a bunch of inspections, but these inspections each list three things: the offset, the severity, and the tooltip. So, it's all really lightweight. So, how would you do this? How would you do the communication between the two?
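The "publish a delta" idea above can be sketched very simply: a change is just an offset, the length of the text being replaced, and the replacement text, whether it comes from a keystroke or a refactoring. This is an illustrative Python sketch, not Rider's actual change format.

```python
# A minimal sketch of document deltas, as described above. Names and
# representation are made up; Rider's real change format may differ.
from dataclasses import dataclass

@dataclass(frozen=True)
class TextDelta:
    offset: int     # where the change starts
    old_len: int    # length of replaced text; 0 for a pure insert
    new_text: str   # replacement text; "" for a pure delete

def apply_delta(text: str, d: TextDelta) -> str:
    return text[:d.offset] + d.new_text + text[d.offset + d.old_len:]

# The user typing a character is one tiny delta...
doc = apply_delta("var x = 1;", TextDelta(4, 0, "y"))  # -> "var yx = 1;"

# ...and a rename from the back end is just more deltas:
# "change that, change that".
doc2 = apply_delta("var x = 1; x++;", TextDelta(4, 1, "count"))
doc2 = apply_delta(doc2, TextDelta(15, 1, "count"))
print(doc2)  # -> "var count = 1; count++;"
```

Either side can produce a delta; the other side just applies it, which is why the same mechanism covers typing and refactoring.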
And this is where things start getting interesting and get into a, sort of, interesting problem, because there's an easy way of doing this. The inter-process communication here -- we could just use RPC. We could use remote procedure calls. Send a message to an endpoint. Send a blob of JSON to an HTTP endpoint. Do something, get a message back, act on it. And this would work. This is how OmniSharp works; it's how Microsoft's Language Server Protocol works. But we didn't like it, basically. There's a lot of, sort of, boilerplate involved with this. For every action that you need to do, you have to define an endpoint, and you have to define a request message. You have to define a response message. So you end up with a lot of, sort of, boilerplate going on here. If you need to add something new in there, if you're adding in a new refactoring, you add in a new message. Send it to a new endpoint. Have that decode it, handle it. Bundle up a new message. Send it back again, and it'd have to handle that. There's a lot of boilerplate involved with that. The other thing is it's very imperative; it's all very much, do this -- get a response back. Do that -- get a response back. Another thing is, there's no conflict resolution built in with this. If we've got this bi-directional transfer of state, where you, the user, can type something or ReSharper can change something, then it's easy to get things out of sync. And, well, there's nothing built in to this kind of process to be able to reset everything, to resync everything together. And then you've got the easy questions. It's like, well, what protocol do you use? Do you use JSON? Do you use Protobuf? Do you use whatever you want? And if anybody has ever actually noticed it: ReSharper 10, about two years ago, introduced support for Protobuf. This is why. Because we built it here. If anybody has ever seen Protobuf in ReSharper, it's purely because we were building it for ourselves.
And then we didn't use it. So, we had a look and we thought of something different. We, kind of, looked back at the idea that the data is all really lightweight, and that, kind of, gave us a clue to something. The data we're passing around isn't huge chunks of serious, important data. It's not, as I said, a syntax tree of a file that gets passed over to be analysed and all that. It's all really lightweight. It's menu items. It's the values of a dialogue box. It's the items you display in your find results windows. It's all, basically, user interface data. So we, kind of, realised we can actually model this with the MVVM pattern. Has anyone come across the MVVM pattern before? This is model-view-viewmodel, so you have, kind of, three things. You have the model, which would be ReSharper itself, which is the source of the data, the truth, and everything like that. You have the view, which is rendering the data and is the user interaction. Then you have something in the middle, which kind of mediates between them, and so the view can be really dumb, because it just uses the data straight off the view model, but the view model then will mediate with the actual data, which is being kept and stored and calculated and fetched from the model itself. And so this looks like it'd be a very good fit for us. I've said everything that's already on there. So, the other thing we noticed is, if you have a look at what we've got with an IDE, it's really hierarchical. Everything is hierarchies in an IDE. So, you've got the main window there, but you've got a hierarchy of windows. You've got tool windows. You've got a collection of tool windows. You've got a collection of editors. Editors have got tabs. Each tab has got an editor pane and a document. Each document has got a collection of definitions. It's got a code completion window. The code completion's got a list of items. Then look at the Solution Explorer. That's a hierarchy in itself.
The same with the test window. So, there's a whole load of hierarchies going on here. So, we can, actually, make our view model into one big hierarchy which represents everything. The other interesting thing is that the hierarchy is not just about the data, but it's about the lifetime of the data as well. If you close a tab, everything related to that tab is destroyed as well. So, if you close the tab, then the code completion window is defunct; you don't need that anymore. You don't have your collection of definitions. You don't have your collection of text in the document. It's all gone. So that's what we've ended up with. We've got a single shared view model, which exists on both sides, in both processes. Conceptually, we have just one view model. It's shared and synced between the two: the front end and the back end. IntelliJ and ReSharper have the same view model and, basically, now all we have to do is sync the changes. Well, the data is the same on both sides, and any time you make a change, it just gets synced across to the other side. And, so, now it's very easy. We don't have the boilerplate of creating new messages. We just make a change to the data model, and then, whenever we want to actually change something, that just gets synced across by the protocol. And that becomes declarative. So, there's much less imperative stuff going on there, because the view model represents a thing, and the view model, itself, is reactive. It's observable. It's not the Rx extensions. It's not IObservable. Although it's very similar in many respects. But it is composable. So, you can subscribe to things to find out when things change. Excuse me. So, if you have a collection, a list of things, you can subscribe to that collection and say, tell me when something changes, and you can react when that happens. So, when you open a document on the front end, you can just push a document into this collection. That gets synced across to the back end.
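The subscribe-to-a-collection idea just described can be sketched like this. The "wire" here is just a callback between two in-process copies; in the real thing it's a socket between two processes, and all these names are made up for illustration.

```python
# Toy sketch of a shared, observable view model collection: two
# copies of one conceptual list, with changes forwarded across.
class ObservableList:
    def __init__(self):
        self.items = []
        self._listeners = []

    def advise(self, listener):
        # Subscribe to change notifications: listener(kind, item).
        self._listeners.append(listener)

    def add(self, item):
        self.items.append(item)
        for listener in self._listeners:
            listener("add", item)

frontend_docs = ObservableList()   # the IntelliJ side
backend_docs = ObservableList()    # the ReSharper side

# The "protocol": forward front-end changes to the back-end copy.
frontend_docs.advise(lambda kind, item: backend_docs.add(item))

# The back end subscribes and reacts to whatever arrives.
opened = []
backend_docs.advise(lambda kind, item: opened.append(item))

# Opening a document on the front end is just a push into the list...
frontend_docs.add("Program.cs")
# ...and the back end sees the new item and can start analysing it.
print(opened)  # -> ['Program.cs']
```

No request or response messages anywhere: declaring the collection and subscribing to it is the whole feature wiring.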
The back end sees the new item added, and it says, right, I've got a new document, and it can start tracking it. And it's two-way as well. So, a change can be contributed by either end. The client can push a change or the server can push a change as well. And it all gets pushed straight through. So, the example there is a button click. If you push a button, the click from the front end makes its way to the back end. If you do a refactoring, the results come from the back end into the model and then get synced across, and the front end sees that. An interesting question is whether that makes us tightly coupled. Because we've got this, sort of, very lightweight view model thing, which is specific to what the user interface sees and everything. Does that make us tightly coupled to the protocol? Is the front end now tightly coupled to the back end? And that's an interesting question, because the answer is, probably not, because the action that happens requires that input and it doesn't require any other input. So, it's probably not tightly coupled in that respect. The other answer is, we don't really care. They're both our products, so it's fine. But there is nothing stopping somebody implementing the same view model with a different user interface. You don't require any extra data to get the functionality to work. The other thing is that this protocol allows us to do conflict resolution. So, we can build this into the protocol itself, now, without having to, sort of, do anything specific and imperative, really. It's already there. The idea is that the client is always right. The client, in this respect, is IntelliJ. It's the front end and, so, the client will always override everything. Each value in the view model has a version number, and the version only increments when the client makes a change. So, if the server makes a change, if ReSharper makes a change, there's no version update. So, the value will come across without a version bump.
And values are only accepted if they have the same or a higher version number. So, on my amazing diagram here. Took me ages, this did. We can see the client on the left-hand side -- that's the IntelliJ front end -- and then the server, the ReSharper back end. And let's say we have a value, foo, which has a version number of one. That's how it gets set. The protocol will sync that from the front end to the back end, and, as it's the first time, it'll just be accepted, it's fine. So, now the back end says foo, version one; everything's great. If the front end then changes the value to bar, then it increments the version number. That makes its way to the back end. It's a new version number: I'll accept that, that's great, and the value is bar. If the back end makes a change now and changes it to quux, it doesn't update the version number. So, let's say a refactoring has happened, and it says, right, I'm making a change now because of the refactoring. I'll make the value quux, and it gets sent across. The front end sees it; it's a new value with the same version number, and it says, yep, I'll accept that, everything's good. Now, what if the front end and the back end make a change at the same time? Let's say the front end changes it to wibble and the back end changes it to blah. I told you, it took me ages to do this. Took me a really long time just to pick the names. If the front end changes it to wibble, it updates the version number. If the back end changes it to blah, it doesn't update the version number. So, the front end's version would make its way to the back end. It would say, ah, a higher version number, I'll accept that. When the back end's version tries to make its way to the front end, the front end will say, nope, I've got a newer version, I'm not accepting that. So, the back end would get the latest version from the client, from the front end. And, of course, it would, kind of, resync things as well.
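The foo/bar/quux/wibble/blah walkthrough above boils down to a few lines of logic. This is a toy Python sketch of the versioning rule, with invented names, not the real Rider framework code.

```python
# "The client is always right": only the client bumps the version,
# and a value is accepted only with a same-or-higher version number.
class SyncedValue:
    def __init__(self, is_client):
        self.is_client = is_client   # True on the IntelliJ front end
        self.value = None
        self.version = 0

    def local_change(self, value):
        if self.is_client:
            self.version += 1        # server changes keep the version
        self.value = value
        return (value, self.version) # what goes on the wire

    def receive(self, value, version):
        if version >= self.version:  # same-or-higher wins
            self.value, self.version = value, version
            return True
        return False                 # stale server write rejected

client = SyncedValue(is_client=True)
server = SyncedValue(is_client=False)

server.receive(*client.local_change("foo"))   # foo, v1: accepted
server.receive(*client.local_change("bar"))   # bar, v2: accepted
client.receive(*server.local_change("quux"))  # quux, still v2: accepted

# Simultaneous change: the client's write wins on both sides.
from_client = client.local_change("wibble")   # bumps to v3
from_server = server.local_change("blah")     # still v2
server.receive(*from_client)                  # v3 > v2: accepted
rejected = not client.receive(*from_server)   # v2 < v3: rejected
print(client.value, server.value, rejected)   # wibble wibble True
```

After the rejection, the server already holds the client's wibble, so both sides converge without any extra negotiation.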
Because everything is observable, we could have an observer on that value, and when it gets pushed from the front end, the back end could say, ah, I've got a new version, I'll just recalculate what I need to do, and everything's back in sync and everything's good again. And, so, we've got conflict resolution, sort of, built in. Is that all okay? Yes? - Is there quite a common case where that actually occurs, that you're aware of? - No. The short answer is no. But it can occur just by the asynchronous nature of everything, because, if everything's, kind of, quick enough or slow enough in certain cases -- if you do an Alt+Enter refactoring, ReSharper's trying to work that out, and you type at the same time -- then it could do something like that. But, in practice, we don't see it. But we do need it. It is necessary. Especially in earlier versions, as we were working, there were lots of conflicts, and this bit was critical to get right. It's working really nicely now, actually. And, so, now let's talk about the wire protocol. So, as we mentioned before, we could have something like JSON or Protobuf or something like that. Now that almost disappears. You almost, by definition, have a custom protocol, because we're just syncing changes across now. We just need to say, this value has changed and it now has version blah. And, so, you don't send messages. You don't have to define a new message for saying, let's refactor to, I don't know, an expression body. You just make a change. You just push a change in, and that change gets synced across. So, if you want to add a new feature, you don't change the protocol. You just extend the model.
So, if you want to add in a new refactoring, you would just add in a new, sort of, front-end model to say, well, these are the dialogue items that I'm gonna be setting, and then you have an OK button which then invokes the thing and kicks it all off. And as long as the back end knows about that model, then it'll just work. The wire protocol also supports batching. So, we can batch things up if we need to, but we don't actually need to. Surprisingly, this is all fast enough. So, even when you're typing, we actually send each key press across, one by one. So, we can batch it if we want to, if we needed to. We built it in and then realised we didn't need it. Things are fast enough, which is nice. The serialisation of everything -- each, sort of, item in the hierarchy of the model needs to be serialised -- is all handled by code generation. We have a DSL, a domain spec... Ugh, I can't say it! Domain-specific language. I wanna say library for some reason. A domain-specific language which describes the whole model, and we then generate code over the top of that -- we'll have a look at it in a minute -- to actually build this serialisation. So, the serialisation is really fast, because it just says write a string, write a string, write an int, and it's done. You know, it doesn't have to do any reflection. I wanted to use the Java word there, that was bad. And then it's binary. It's a binary protocol. So, no JSON, no Protobuf, things like that. It's our own custom binary protocol. We've got logging built into it as well. So, we can just switch on logging and we get a dump of everything that's going on. You can see the key presses as you do it, and it's all good. It's just plain old sockets. So, it's just two sockets talking to each other, and it's good. We did have queuing in there, message queuing, at one point, but that ended up being overkill. We don't need it. So, it's just raw sockets now. So, that's, kind of, the wire protocol, the lowest level.
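To illustrate the "write a string, write an int, done" style of generated serialisation, here's a hand-written Python sketch. The framing (big-endian ints, length-prefixed UTF-8 strings) and the field layout are assumptions for illustration; Rider's actual binary format is its own.

```python
# Flat, reflection-free serialisation in the style of generated code.
# Format and names are made up; the real wire format differs.
import struct

def write_int(buf, n):
    buf += struct.pack(">i", n)

def write_string(buf, s):
    data = s.encode("utf-8")
    write_int(buf, len(data))   # length prefix, then the bytes
    buf += data

def read_int(buf, pos):
    return struct.unpack_from(">i", buf, pos)[0], pos + 4

def read_string(buf, pos):
    length, pos = read_int(buf, pos)
    return buf[pos:pos + length].decode("utf-8"), pos + length

# What "generated" code for an inspection record might look like:
# just write each field in order, no reflection, no schema lookup.
def write_inspection(buf, offset, severity, tooltip):
    write_int(buf, offset)
    write_string(buf, severity)
    write_string(buf, tooltip)

def read_inspection(buf, pos=0):
    offset, pos = read_int(buf, pos)
    severity, pos = read_string(buf, pos)
    tooltip, pos = read_string(buf, pos)
    return (offset, severity, tooltip), pos

buf = bytearray()
write_inspection(buf, 42, "WARNING", "Use expression body")
print(read_inspection(bytes(buf))[0])
```

Because the reader and writer are generated from the same model definition, they always agree on the field order, which is what makes this safe without any self-describing format like JSON.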
The next level up, then, is the framework which provides all of this functionality. This is called, imaginatively enough, the Rider Framework. And that's two libraries, C# and Kotlin. Kotlin is a JVM language which JetBrains wrote when Java was stagnating and not doing very much, and we really like it. It's, surprisingly, taken off really well. But, anyway, we're not talking about Kotlin. So, we've got two libraries, one which is .NET and one which is JVM based, and these provide the primitives - we'll talk about primitives in a sec. And it handles the communication - the sending of the messages, the deltas across. We also use Kotlin to build the DSL. So, Kotlin provides a number of nice features for building domain specific languages. I'll show these in a sec. We use that to describe the view model, and then we basically run code over the top of that and generate the C# code and the Kotlin code to actually describe the models on both ends. Yes, so, yeah, the DSL generates that real code. And those include the interfaces, the implementation, and the serialisation. So, the write-string, write-string stuff is done and all that. And what you do then is you have business logic which subscribes to these interfaces and manipulates the real model, does the real work. So, it's the interface, then, between ReSharper and the protocol and IntelliJ. So, we've got a number of interesting building blocks. These are the primitives which the Rider framework provides. A number of them are about things that you can send across. So, these ones are the easiest ones: string, int, enum. Those are obvious, you know. You want to send a string, you want to send an integer, you want to send, like, an enum value. We've also got class definition and struct definition. So, a class is one of the nodes in the hierarchy. A class could be something like the fields in a refactoring dialogue.
That will be something which, actually, I'm going to come back to those. I'm gonna come back to those. Over on the other side - I'm gonna ignore lifetime for a minute - the other things we've got are signals, which are events. We've got properties, which is an observable value. We've got maps, which are observable collections. And then we've got fields, which is just a value. It's immutable. You can subscribe to the first three. You can't subscribe to a field, it's just a value. So, a signal is an event - something's happened. A property is a changeable value, you know, so if I type into a field I can subscribe to the changes and listen to it. And then a collection would be like a collection of documents - here's a new document, I've removed a document, and so on. A field would then be something like, display this on the dialogue, but it doesn't change. We've also got - matter of fact, let's come back to this now. So, the class definition can contain any of the observable properties. A class definition can be a node in the view model. It can contain any of the observable properties, whereas a struct is immutable. It's just a data class. So, a struct would just be fields. An example of where we'd want to use a struct would be where we've got our inspections - one of those ranges of where it starts and ends. That's immutable. That doesn't need to change, it's just a value there. So, that would be the struct. The other two interesting things that we've got there are call and lifetime. The call is where we actually do have RPC. So, I've just gone through saying we don't do RPC, because it's all imperative and all this kind of stuff and whatnot, and we do everything based on this observable view model, but, actually, we still do some RPC. Because there are times when that just makes sense. So, for example, single stepping in the debugger. It's like, step. That's just an action.
You don't want to be pushing a value saying - I don't even know what you'd push, you know, button click, step - set it to one, do an action, set it to zero. You can't. It's a procedure call, just do it. So, single stepping in the debugger is an async RPC call. Oh, yeah, and it is asynchronous, of course. So, that would fire off and do something, and then, in return, the back end would update the view model with some kind of value. The other thing, now, is lifetime, and that's a really, really important concept in the whole of this thing, because this is to do with the hierarchical view model and the idea that everything's hierarchical, including the lifetime. So, if you close a tab then everything below it closes as well, and that's all handled by the lifetime. So, everybody knows IDisposable, yeah? Yep. So, lifetime is, kind of, the dual of it, in a way-- Does everyone know IEnumerable and IObservable? So, IObservable being, sort of, the reactive extensions. There was a lot of talk, when that started, that they were the dual of each other. So, IObservable was, kind of, almost the complete mirror image of IEnumerable. IEnumerable is all about pulling values and IObservable is all about push. And everything you can do with IEnumerable had, like, a reverse version in IObservable. I've got this idea now of Spock with his pointy beard in the mirror universe, mirror image things. Niche joke, sorry. Only a couple of you are going to get that. But lifetime is kind of the dual of IDisposable. IDisposable is really cool but it has a few issues. We've been using this in ReSharper for years and, basically, the team pulled this over into the protocol and into IntelliJ as well. The problem with IDisposable is that you, kind of, call a method and say do something, or get me something, or whatever, and it'll return you back an IDisposable and say, this is how you clean things up. And you go, thanks. I've got to look after this now.
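The call primitive - fire off a request, and the result comes back by updating the view model rather than as a return value - can be sketched like this. It's an illustrative Python rendering with hypothetical names; the real protocol runs the handler asynchronously on the other side of the wire, whereas here it's invoked directly for simplicity.

```python
# Illustrative sketch of the "call" primitive: an RPC-style request
# whose result lands in the view model via a completion callback.

class Call:
    def __init__(self, handler):
        self.handler = handler        # the back-end implementation

    def start(self, request, on_done):
        # In the real protocol this crosses the wire asynchronously;
        # here we invoke the handler directly for illustration.
        result = self.handler(request)
        on_done(result)

# back end: perform one debugger step and report the new position
step = Call(lambda _req: {"line": 42})

# front end: fire the call; the completion updates the view model
model = {}
step.start("step-over", lambda result: model.update(result))
```

The point is that even the RPC escape hatch feeds back into the same observable model, so everything downstream still reacts to a changed value.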
And you've got to do a whole bunch of management of this. And you've got to either, sort of, keep a list of them, or pass it back to someone else, or what have you. Or you could pass it into a method, but if you pass it into a method, who owns it now? Is it owned by the method you've passed it to or the calling method? What about the ordering of removing things, and so on? So, IDisposable has, kind of, a bit of a management issue going on there. So, we've flipped it. We've got this idea of lifetime and lifetime definition. The idea of lifetime is really easy. You just pass it into something and you say, this is going to be the lifetime of whatever it is that's happening, and if you want to do anything, you can add stuff to it. You can add a callback to it, basically. And when the lifetime ends, those callbacks will get called. And, so, it's up to the lifetime to do the management of all of those callbacks. And it's just dead easy now. If you want to do something, you just add to the chain. And then you've got this idea of a lifetime definition - that's how you create one of these objects - and the constructor there has got a parent, so you can have hierarchical lifetimes. When the parent terminates, it kills everything below it. It'll run all the callbacks and it'll terminate the lifetimes themselves. And, finally, we've got the eternal lifetime as well, which is like the root one. That never dies. So, you've got to have a parent for something - you can have an eternal one which will be the parent of that. Does that kinda make sense? I know it's, kind of, a quick gloss over it. But the idea is you pass it into something, you register against it, and whoever owns that is responsible for killing it, and your stuff will just get cleaned up automatically off the back of that. Right, so we can then quickly have a look at things like the signals. So, we've got two things there. You can produce an event. You can subscribe to the event.
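The hierarchical lifetime idea can be captured in a tiny sketch. This is a minimal Python illustration of the concept (the real implementation is the C#/Kotlin one in the Rider framework, with threading and much more); LIFO callback order and the field names are assumptions.

```python
# Illustrative sketch of hierarchical lifetimes: callbacks attach to a
# lifetime, and terminating a parent terminates everything below it.

class Lifetime:
    def __init__(self, parent=None):
        self.callbacks = []
        self.children = []
        self.terminated = False
        if parent is not None:
            parent.children.append(self)   # hierarchical: parent owns us

    def on_terminate(self, callback):
        """Register cleanup; the lifetime manages it, not the caller."""
        self.callbacks.append(callback)

    def terminate(self):
        if self.terminated:
            return
        self.terminated = True
        for child in self.children:
            child.terminate()              # kill everything below first
        for cb in reversed(self.callbacks):
            cb()                           # run cleanup in LIFO order
        self.callbacks.clear()

ETERNAL = Lifetime()                       # the root; never terminated
```

This flips the IDisposable ownership problem: you hand your cleanup to the lifetime, and whoever owns the lifetime decides when everything dies.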
Or you can compose it there, things like that. If you are subscribing to an event, you've got this advise method, and you're passing a lifetime there. This is where it starts to use the lifetime. And that's how you do subscriptions. You don't get back some kind of subscription token, or an IDisposable or whatever. You pass in the lifetime and you say, this is how long I'm interested in being subscribed for, and here's the action handler, and the signal implementation will take that handler and then do something with it. Shove it in a list or what have you. And then add a callback to the lifetime to remove it from the list when the lifetime gets terminated. So, it can do automatic cleanup like that. Properties then, sort of, build on top of that. You've still got the ISink interface there, so you've got your advise again there, so you can advise for changes on a property, but then you've also got a value which is set, so you can ask for the value at any time. So, it's got a stateful value as well as a, sort of, signal. And then we've got this view method. View is also really useful, because that now looks kind of similar to advise. You get a lifetime, and so you say, I want it to last for this length of time, and then you pass in another action which, this time, instead of just taking the value which has been set, it also gets a new lifetime. So, here we've got a new child lifetime, which is the lifetime of that particular value. And, so, it's hierarchical again. You can now subscribe to the lifetime of that particular value. So, if I set the value to three, I can add a callback for what happens when three changes to four or, well, when three changes to whatever it changes to next. And that's a very powerful way of being able to work with this. And then maps just build on top of that. We've got the same thing down there. Again, you can view it and you can get notified anytime anything changes.
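Advise versus view is easier to see in code. The following is an illustrative Python sketch of the two methods just described - not the real C#/Kotlin API - showing a lifetime-scoped subscription, and view handing each value its own child lifetime that ends when the value changes. Whether advise fires immediately with the current value is an assumption of the sketch.

```python
# Illustrative sketch: advise subscribes for the span of a lifetime;
# view additionally gives each value a per-value child lifetime.

class Lifetime:
    def __init__(self):
        self.callbacks = []
    def on_terminate(self, cb):
        self.callbacks.append(cb)
    def terminate(self):
        for cb in reversed(self.callbacks):
            cb()
        self.callbacks.clear()

class Property:
    def __init__(self, value=None):
        self.value = value
        self.handlers = []

    def advise(self, lifetime, handler):
        self.handlers.append(handler)
        # automatic cleanup: unsubscribe when the lifetime ends
        lifetime.on_terminate(lambda: self.handlers.remove(handler))
        handler(self.value)              # fire with the current value

    def view(self, lifetime, handler):
        state = {"lt": None}
        def on_value(value):
            if state["lt"] is not None:
                state["lt"].terminate()  # the previous value's lifetime ends
            state["lt"] = Lifetime()
            handler(state["lt"], value)  # a child lifetime for this value
        self.advise(lifetime, on_value)

    def set(self, value):
        self.value = value
        for h in list(self.handlers):
            h(value)
```

So subscribing to "what happens when three stops being three" is just attaching a callback to the child lifetime that view hands you.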
You get the lifetime of that change and you also get what the change is. Okay, so, those are the primitives there. Does that kinda make sense? Cool. Right, so, then.. I mentioned that we've got this domain specific language. This is based on Kotlin, and this is what Kotlin looks like, or some of it. That's a function at the top there, 'cause Kotlin is fun. I kind of hate myself a little for that. Sorry. So, that's defining, like, a classdef, which would be one of the nodes in the hierarchical view model. It's got a name, and it's got this weird construct here, which is a function that has an implicit receiver of type classdef node. So, that's like an extension method in C#, whereby, instead of passing in the value as the first parameter with a name like self or it or what have you, it's implicit. So, it'd be like having this down on the bottom here on the left, where it says, you know, myClassdef.map, but, because I've got it with this implicit receiver there, now I can just do map. myClassdef is now, sort of, implicitly there. So, it's like having a more terse version of an extension method. But now, of course, I've written something which is much terser and looks much nicer. The second thing which makes it really nice for DSLs is that if you have a function which, as its last parameter, takes in a lambda expression - and that's what a lambda expression looks like in Kotlin, with just the squiggly braces - then you can actually put it outside of the brackets. So, you can put the closed bracket here and just have it here, and now, all of a sudden, you look like you're doing something hierarchical here as well. So, you can end up with something which is this, which is, pretty much, standard Kotlin. And it looks like a DSL. You know, it kind of gives us a hierarchical thing. We've got to define an object here which is a solution.
We've got an init - so Kotlin has initialiser blocks, which are called init. I won't get into that right now. But you can have an init block which is, effectively, a constructor, and the squiggly bracket there is a block of code which is gonna get called. Then we can define, like, a map of editors - a map called editors, of string to this node of editor type - and, here, you can see it does look like it's a hierarchy. It's useful. Does that all make sense? Cool. Let's, in that case then, I'll show you some real code. I wanna be on that one. So, that slide I was just showing you there, that's a, sort of, very simplified view of-- In fact, let's have another quick look at that one. Because what this is showing you is here now, really. Our editor itself has got a document property there, which has got a list of characters, so we could say that we represent our document as a list of characters. We don't. That would just be slightly crazy. But you could. Then we've got the caret position, which is just a property - which is, like, an observable property of an integer. So, we've got an offset, and so we've got an observable offset of where the text caret is at any one time. The map of highlighters, of range to highlighter, and, there we are, we've got - that should be struct. But we've got fields of start and length for our ranges for our highlighters. And we can define the highlighter itself there, and we've got the void source there, which would be like an RPC call. And that's like build - that's just a signal. Go off and build. And that all makes sense, and it's nice and straightforward. In reality, the solution model is a little bit more complicated. So, it's all of this. Doo doo doo. Where are we? There we are. There's solution, and we're defining a node called solution, and it's got a whole bunch of interesting properties going on here. So, we've got things like, yeah, there we are, we've got a field of editors, which is all of our documents.
And I can navigate to the document type, which will then show me what's going on inside a document there. A bunch of fun stuff. That's even got callbacks and stuff. Even things like a map of icons to string, which I don't even know what that does. But, in the end, we kind of end up with these two properties here. We have this map of solutions, which is an int to a solution, so Rider is a single instance application. When you open multiple solutions, it actually opens them in the same instance of IntelliJ. You get a separate ReSharper out of process thing. So this gives you a map between open solutions. And then this idea of this property, which is a hasBacked solution. It means that, basically, the solution is loaded and it's ready. And, so, there's a whole bunch of things going on there. When that gets generated, we get even more code, and it's all of this crazy stuff here, which I'm not even gonna bother going through. Where are we? I've lost the bit where I wanted to be, 'cause everything's called solution. Ah, there we are. We've got things right here, so we can have - this is where serialisation happens - we can read all the values, we can write all the values. And it's not solution that I want here, it's solution model. If I could type. Okay, so, solution model. It has those two values there. It has our solutions and it has our hasBacked thing there. So, it's got a map from integer to solution and then just a property that's going on here. So, we can see where that's being used. Which is there. So, we've now got a value which is being exposed, and if we go back to this, now we can flip over to another object. This is our solution host. So we've got our generated code. So we've got our DSL, it creates generated code, and then we've got our front end code. We have these kind of host objects, we call them, and these get our models. So, this is, kind of, the root of the hierarchy, so it does solution model create. Gives it a lifetime, again.
Everything's all about the lifetimes. And then, at some point, someone's gonna call set. And it says, here you are, here's a solution. I've opened a solution - here it is. And that will do model.solutions.put, and it'll put the solution in there. That'll get synced to the back end, and the back end will pick it up and know that a solution has been opened. And, so, if we just quickly go over to the solution initializer, we've got a method here which is beforeProjectloaded. So, one of the fun things about IntelliJ is that what we know as solutions, IntelliJ calls projects. So, that's just a fun bit of terminology to mess with your head. So, here, we're just loading up an IntelliJ project. We get our solution host. We create a new instance of a solution, which is gonna live in our model. So, that's actually calling the constructor directly. Kotlin doesn't have the new keyword, it just calls the constructor directly, and that creates a new instance. And then, further down here, we call solutionHost.set. So, that will call that set method we've just seen, which will push it on to the model, which then does the serialisation, which then pushes it to the back end. Does that make sense? That's the front end - that's IntelliJ opening a solution and pushing it on to the protocol. If we flip over to the ReSharper solution, we've got a whole bunch of the same code which has been generated on this side. So, there's a solution model. We've got our map of integers to solutions and our solution property there with solutions. We can find usages on this fellow, if it wants to. There it is, and we can then come to.. There's a few going on there, but it's.. Where are you? This one here. We have here, we've got a shell component, which is a class marked as a shell component. This means it gets started when ReSharper starts. And we've got solution model solutions view.
So, we start subscribing to that, and then we get a lifetime - pass in the lifetime for the subscription. And then this is our callback. Whenever something changes, we get a new lifetime, which is going to be the lifetime of the solution. When the solution dies, that lifetime will die. Then our data here, which is our map of solutions to what have you. And if we have a look here, we can now see that we've got our solution opening, and we've got the open solution based on the solution itself. So, now we see that we've got the IntelliJ front end. It calls set. It goes into the solution host, which goes into the generated code, which we saw the model for. That'll go across the wire on our protocol to the generated code in ReSharper, which sets a value, which we're then listening on here. Which will then get the solution and then finally call open solution. So, we can, sort of, trace it through the whole thing. Yes? - Is the generated C# code coming from your Kotlin there? - Yes. Yes. So, the Kotlin DSL is not executable. It could be executable, but we just don't bother. We just compile it and then we run.. What's Java's reflection called? Introspection. We just introspect over it and generate code off the back of that. So, that generates us our C# model and our Kotlin model, which we can then compile into the actual solutions themselves. So, that kind of brings us to the end, really, I guess. There are some challenges to all of this, you know. It's not quite seamless. While it's worked out to be a very good fit, things like the Alt+Enter menus, the find usages, unit testing, and everything - they're very similar between them, you know. IntelliJ and ReSharper share a common DNA, so there's a lot of similarity there, and they fit very nicely. There are things which don't fit quite so nicely, and the project model is one of those. I mean, a good example is the fact that they call a solution a project.
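The round trip just traced - front end puts a solution into an observable map, back end views the map and reacts - can be sketched end to end. This is an illustrative Python rendering; the map type, event kinds, and names are assumptions, and in the real system the put is serialised across the wire between the two processes.

```python
# Illustrative sketch of the solution-open round trip through an
# observable map: put on one side, a view callback on the other.

class ViewableMap:
    def __init__(self):
        self.items = {}
        self.observers = []

    def view(self, observer):
        """Subscribe; replay existing entries so late viewers catch up."""
        self.observers.append(observer)
        for key, value in self.items.items():
            observer("add", key, value)

    def put(self, key, value):
        self.items[key] = value        # in Rider, this delta is what
        for obs in self.observers:     # gets serialised across the wire
            obs("add", key, value)

# back end (ReSharper side): subscribe and open solutions as they appear
opened = []
solutions = ViewableMap()
solutions.view(lambda kind, key, sln: opened.append(sln) if kind == "add" else None)

# front end (IntelliJ side): a solution was opened, push it into the model
solutions.put(1, "MySolution.sln")
```

Note that neither side calls the other directly - both just talk to the shared model, which is the whole point of the design.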
And, so, there's things that are gonna confuse C# users. Because we all know what a project is, and it's not what IntelliJ thinks it is. So, there's things like having to replace the word project with solution everywhere. So, that's easy. That's a nice challenge to have, to be honest. A more interesting thing is what happens with duplicate language implementations. So, WebStorm provides a whole bunch of JavaScript and CSS, and stuff like that. But so does ReSharper. Which one do we use? You know, do we use the things in IntelliJ or do we use the bits in ReSharper? The benefit of using ReSharper's is that we've got the idea of cross-language references. So, in your C# files, you can have a string literal which is actually a block of CSS. ReSharper can provide highlighting, navigation, refactoring, and everything in that string literal. So, that's really cool, but, on the flip side, WebStorm knows about things like CoffeeScript and languages which ReSharper doesn't know. What we are doing at the moment, as I understand it, because we are kind of merging the two, is that ReSharper's providing the core functionality but then we're using some of WebStorm's language features on top of that. So, things like JSLint and everything - that will come from WebStorm, because it works with the file rather than working with the syntax trees. So, where it works with files, we work with existing IntelliJ features. Same sort of thing with C++. We've got CLion on the IntelliJ side and ReSharper C++ as well. But they've got very different targets, so it's an interesting one. The other challenge is that plugins are more complex. They can be much more complex in some cases. The front end and the back end are very different. So, Rider, in this case, uses IntelliJ very much as a dumb client here, because it doesn't know anything about the languages.
So, if your user interface needs to know something about the back end, you're gonna have to build something into the protocol to get at that information. So, that's gonna be a bit more complex. What we want to do here is allow you to do that, but also provide a, sort of, abstraction layer on top of that to provide common information. So, what we can do at the moment is we can load up plugins which talk purely to ReSharper, and if your plugin just adds inspections and quick fixes - you know, sort of, things in the Alt+Enter menu and squigglies and stuff - that'll just work. That'll be fine. So, the Unity plugin already does that. If you need something which is just in IntelliJ then that will work, and that'll be just fine. It's when the two need to communicate that, right now, it's a bit hard. And then, the final bit, then, is: now we've got ReSharper working out of process with Rider, what about ReSharper out of process for Visual Studio? And that's a really great question and I'm glad you've asked. So, that's it. And that's it, I'm afraid, thank you. Have you got any questions? - What about VS Code then? - What about VS Code? - Well, I mean, if you can run Rider out of process, with ReSharper out of process, is there scope there for a VS Code plugin? - Technically, yes. - Or is that a step too far, competing with Rider? - We will not be doing it. - Yeah, okay. - We will not be doing it. Technically, yes, we can do that. There would be a lot of work because, as I say, it's a view model. It represents the views, the user interfaces, that A, the features expect and B, fortunately, IntelliJ pretty much provides for us already. You'd have to recreate those for VS Code. You'd also have to recreate the front end view of the view model. So, there's still a lot of work but, technically, yes. Technically, it's possible.
We won't be doing it because we've got our cross platform solution, you know, which is a full-featured IDE - an IDE, rather than a text editor which you can upgrade and pimp out to be good and give you lots of useful plugins and stuff. We've, kind of, seen this scenario before with Eclipse, for example. You know, Eclipse was an IDE which you could put lots of plugins in to give you a lot of functionality, and then we came along with IntelliJ, which was, like, well, here you are. We've already done that for you. You know, that's our job. That's what you pay us money for, and here is an IDE, just go for it. So, it's a similar situation here. Rider is going to be a fully built IDE for you, so that you won't have to build all the plugins for it or bring all the other plugins in. So, yep. Anybody else got any questions? Yes? - I'm seeing the IntelliJ stuff just sends the key presses asynchronously. People expect to have, like, lots of latency, but I guess these two, three characters have-- - Yes. - And, like, how do you deal with that? - Code completion. We cheat a little bit with code completion. We actually pre-calculate a lot of code completion. So, what would normally happen in ReSharper is you would invoke something - you'd start typing at a particular location - and then we would calculate that information and then show it. With Rider, we actually kind of pre-calculate it when you get to a resting point in anything. I might get the details wrong with this, as well, 'cause I don't know them. This is secondhand from the development team. It's alright, it's being recorded, so if they ever watch this.. So, when we get to a resting point, I believe, we then pre-calculate what code completion you could have at that point, and then when you start typing we've got a list of possible values and we can filter that client-side. So, that is quicker and more efficient, basically. Yeah, cool. Yes? - To dive through very detailed-like, you spoke about lifetimes. - Yes.
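The pre-calculation trick just described - compute the candidate list once at a resting point, then filter per keystroke on the client with no round trip - looks roughly like this. A small Python sketch with hypothetical names, and, as the talk itself hedges, the real details are secondhand.

```python
# Illustrative sketch of pre-calculated code completion: one back-end
# computation at a resting point, then cheap client-side prefix filtering.

def precalculate(candidates):
    """Back end: runs once at a resting point; the full list ships over."""
    return sorted(candidates)

def filter_client_side(cached, prefix):
    """Front end: no round trip per keystroke, just filter the cache."""
    return [c for c in cached if c.startswith(prefix)]

# The back end sends the whole candidate list once...
cache = precalculate(["Console", "Convert", "String", "Substring"])
# ...and each keystroke narrows it locally, e.g. filter_client_side(cache, "Con")
```

The latency cost is paid once, ahead of the user typing, instead of on every keystroke.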
- Which, presumably, is kind of a tree model, if you like. - Yep. - How do you, as developers, avoid the situation where you try to end the lifetime, which ends a child lifetime, which then goes and tries to end the parent again? Other than just the type of programming and making sure you avoid it. Is there a mechanism? - Yes. Yeah, the actual lifetime implementation has flags to prevent circular termination. I think it only checks at termination time. Let's have a look. Dun dun dun.. On the right one. Ugh, so many different keyboard shortcuts. If I'm in IntelliJ, I use the Mac keyboard shortcuts. If I'm in Rider, I use the PC keyboard shortcuts, even on a Mac, and it's really starting to mess with my head. Let me see what we've got here. Schedules, flags, memory barriers, locks and everything. Maybe we don't. I thought we did, but maybe we don't. - It'll clear it all - it doesn't do anything about circular references, that will just make sure that nothing changes while you're deleting everything. We grab a copy of the items, we clear them, and then we lock, and then we actually work on that array, and we've got a copy of all the items to delete. But it should be, as well, that when you try and add something when we've set this flag here, then you can't do anything there. But, no, apparently we don't have a circular reference check there. But there's lots of fun to do with nested items there as well. I don't really know - it would work by virtue of the flags. So, we set the flag to say it's terminated, then we go and terminate all the children, and if one of those children is going to terminate another one here through some dodgy reference, it'll come through, it'll say, ah, I'm terminated, and just quietly exit out. Rather than shout at you. So, yeah, but lifetimes are really, really cool. You can do some really fun stuff with them.
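The flag behaviour described here - set "terminated" first, work on a copy of the callbacks, and let a re-entrant or circular terminate call quietly exit - can be shown in a small sketch. This is an illustrative Python rendering, not the real C#/Kotlin code with its locks and memory barriers.

```python
# Illustrative sketch of the termination guard: a flag makes a second,
# possibly circular, terminate call a quiet no-op instead of a crash.

class Lifetime:
    def __init__(self):
        self.callbacks = []
        self.terminated = False

    def on_terminate(self, cb):
        if self.terminated:
            cb()                 # too late to defer: run immediately
            return
        self.callbacks.append(cb)

    def terminate(self):
        if self.terminated:
            return               # re-entrant call quietly exits out
        self.terminated = True   # set the flag before running anything
        to_run, self.callbacks = self.callbacks, []   # work on a copy
        for cb in reversed(to_run):
            cb()
```

Even two lifetimes wired to terminate each other just bottom out on the flag instead of recursing forever.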
So there's another construct called a sequential lifetime, and you call next on it. It'll terminate the current one and create a new lifetime. So, anything you've got subscribed to it will automatically then just roll over and do things. So, it's a nice, sort of, composable thing, again, and it's a reactive-type idea. It's dead nice. I'd really like to pull lifetime out and make that an open source thing, but, you know, there's loads of things I'd love to see open sourced from here. We want to open source the Rider framework, incidentally. But, you know, we're busy building the product, so. Once that's done we will want to do some tidy-up on it and actually release it. So, that would be the framework itself, rather than the actual view model. So, it goes back to the Visual Studio Code question - we'd have the functionality to work with the framework and everything, but I don't know what the plans are for publishing the model. I don't know if we have any plans for publishing the model. So, yeah. Hello, yes? - Oh, I was just stretching. - You were stretching. Good stretch. Go on, were you stretching as well? - No. So, I know everyone here is, like, mad keen on F#. - Yes. Everyone. Go on, hands up, show of hands, who's mad keen on F#? - One guy, yes. - Yeah, but his hand was right up, so. - So are there any plans to bring that to ReSharper in Visual Studio? - Right. - Or is that too much to, kind of, bring it there, or maybe not - what's the concern? - There are no current plans to do so, because the way that the F# stuff works is, we are now running something else out of process, using the same protocol. I'm not entirely sure how the F# stuff works exactly. But what we're doing is we're using the F# compiler services as the back end. So, traditionally, ReSharper, and, well, IntelliJ as well - we build our own parsers for everything.
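The sequential lifetime mentioned at the start of that answer is small enough to sketch. An illustrative Python rendering with assumed names: next() terminates the current lifetime and hands back a fresh one, so per-generation subscriptions roll over automatically.

```python
# Illustrative sketch of a sequential lifetime: next() ends the current
# generation and starts a new one.

class Lifetime:
    def __init__(self):
        self.callbacks = []
    def on_terminate(self, cb):
        self.callbacks.append(cb)
    def terminate(self):
        for cb in reversed(self.callbacks):
            cb()
        self.callbacks.clear()

class SequentialLifetime:
    def __init__(self):
        self.current = Lifetime()
    def next(self):
        self.current.terminate()   # end the old generation...
        self.current = Lifetime()  # ...and start afresh
        return self.current
```

This is handy for "latest wins" work: subscriptions tied to the previous generation get cleaned up the moment the next one begins.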
We parse C#, we've built a semantic model for C#, and it's all our code. For F#, it's the compiler services, and the compiler services were never designed to be run in this fashion. They were designed to be a compiler. And, so, they were designed to run as a batch process rather than as a continuous process. So, putting that into Visual Studio - again, with the constrained 32-bit process - is not gonna be a good idea, because it's already gonna be in there. So, we're gonna end up with two versions of that, unless we do something funky to try and sync up with the existing compiler services which are there. Which probably won't work. That's probably just too scary to actually do. So, right now, there are no plans to put it into ReSharper. But never say never. You know, if ReSharper does go out of process from Visual Studio then we could look at it again. The other thing is that Visual Studio pretty much has the functionality which we've added for Rider. So, you know, for Rider, right now, we've got code completion, syntax colouring, code folding, navigation, but no Alt+Enters, no inspections and things like that. So, Visual Studio already has those things, and us adding that doesn't add anything. The only thing it does add, which would be really useful, is the cross-language navigation. So, if you're in a C# file and you do find usages, it would pull up the F# information, or vice versa. Although, interestingly, I don't know if that would work in Visual Studio, but maybe. So, right now, it's actually a better fit for Rider for what we've got. And, also, it's still very, very new. We're still working on it, and it's, like, one thing at a time, basically. We also want to open source that. That is gonna get open sourced. But, again, the comment about plugins being tricky at the moment - that's one example. So, right now, it's really hard to build. You need to be, well, you need to be on the VPN, for one thing.
And the F# code is using a different version of the SDK to ReSharper itself, so that can't live on NuGet right now, and you then need to get a reference to the IntelliJ code as well. So, building plugins is tricky. And, so, we need to fix that in order to be able to publish the source for the F# code. But, I've got it.. No, I thought I had it, but no. But, yeah, it's doing some interesting stuff. It's, kind of, mapping between the IntelliJ world, the ReSharper world, and the FCS world. So, there's a lot going on there. So, it's cool. Yes? - Do you see them converging on a standard approach? - So, I think out of process is probably gonna catch on a bit more. Having everything in a single process, like Visual Studio - I mean, we're hitting the constraints in Visual Studio. So, the more recent versions of Visual Studio use more memory and, of course, we're using memory, and performance suffers because of it. You know, we have had a hard time with Visual Studio 2015 because they're using more memory, and if they use more memory and we're there as well - we're the add-on, we get the blame. You know, rightly or wrongly. Obviously, we have our performance issues as well but, on the whole, a lot of the issues we see with thing