Swift is intended to be a "Systems Programming Language", is it not? Yet there is no support for "volatile" variables, which are needed for fundamental "system" features like direct memory access from peripheral hardware.

Why not support "dynamic runtime features" like the ones provided by the Objective-C language and runtime? It's partly a trick question because Swift is remarkably "dynamic" through use of closures and other features, but why not go "all the way?"

by Volanin

Since you're the creator of LLVM, I'd like to know: in your opinion, what's the greatest advantage of LLVM/Clang over the traditional and established GNU GCC compiler? Also, what's the greatest advantage of GNU GCC (or, if you'd prefer, any other compiler) over LLVM/Clang, something that you'd like to "port" someday?

CL: GCC and LLVM/Clang have had a mutually beneficial relationship for years. Clang's early lead in compiler error-message expressivity has led the GCC developers to improve their error messages, and obviously GCC provided a very high bar to meet when LLVM and Clang were being brought up. It is important to keep in mind that GCC doesn't have to lose for Clang to win (or vice versa). Also, I haven't used GCC for many years; that said, since you asked, I'll try to answer to the best of my knowledge.

From my understanding, for a C or C++ programmer on Linux, the code generated by GCC and Clang is comparable (each wins some cases and loses others). Both compilers have a similar feature set (e.g. OpenMP is supported by both). Clang compiles code significantly faster than GCC in many cases, still generates slightly better error and warning messages than GCC, and is usually a bit ahead in support for the C++ standard. That said, the most significant benefit is improved compile time.

Going one level deeper, the most significant benefit I see of the LLVM optimizer and code generator over GCC is its architecture and design. LLVM is built with a modular, library-based design, which has allowed it to be used in a variety of ways that we didn't anticipate. For example, it has been used by movie studios to JIT-compile and optimize shaders used in special effects, it has been used to optimize database queries, and LLVM is used as the code generator for a much wider range of source languages than GCC.

Similarly, the most significant benefit of Clang is that it is also modular.
It specifically tackles problems related to IDE integration (including code completion, syntax highlighting, etc.) and has a mature and vibrant tooling ecosystem built around it. Clang's design (e.g. its lack of pervasive global variables) also allows it to be used at runtime, for example in OpenCL and CUDA implementations.

The greatest advantage I can see of GCC over LLVM/Clang is that it is the default compiler on almost all Linux distributions, and of course Linux is an incredibly important platform for developers. I'd love to see more Linux distributions start shipping Clang by default. Finally, GCC provides an Ada frontend, and I'm not aware of a supported solution for Ada + LLVM.

by mveloso

Where do you see LLVM going?

CL: There are lots of things going on, driven by the amazing community of developers (which is growing like crazy) behind the llvm.org projects. LLVM is now used pervasively across the industry, by companies like Apple, Intel, AMD, Nvidia, Qualcomm, Google, Sony, Facebook, ARM, Microsoft, FreeBSD, and more. I'd love for it to be used more widely on Windows and Linux.

At the same time, I expect LLVM to continue to improve in a ton of different ways. For example, ThinLTO is an extremely promising approach that promises to bring scalable link-time optimization to everyone, potentially becoming the default for -O3 builds. Similarly, there is work going on to speed up compile times, add support for the Microsoft PDB debug information format, and too many other things to mention here. It is an incredibly exciting time. If you're interested in a taste of what is happening, take a look at the proceedings from the recent 2016 LLVM Developer Meeting. Finally, the LLVM Project continues to expand. Relatively recent additions include llgo (a Go frontend) and lld (a faster linker than "Gold"), and there are rumors that a Fortran frontend may soon join the fold.

by Anonymous Coward

Is there any hope for VLIW architectures?
The general consensus seems to be that Itanium tanked because compiler technology wasn't able to make the leap needed. Linus complained about the Itanium ISA exposing the pipelines to assembly developers. What are the challenges from a compiler writer's perspective with VLIW?

CL: I can't speak to why Itanium failed (I suspect that many non-technical issues like business decisions and schedule impacted it), but VLIW is hardly dead. VLIW designs are actively used in some modern GPUs and are widely used in DSPs; one example supported by LLVM is the Qualcomm Hexagon chip. The major challenge when compiling for a VLIW architecture is that the compiler needs good profile information, so it has an accurate idea of the dynamic behavior of the program.

by Anonymous Coward

So how much of Swift was inspired by Groovy? Both come from more high-end languages and look and act almost identical.

CL: It is an intentional design point of Swift that it look and feel "familiar" to folks coming from many other languages, not just Groovy. Feeling familiar and eliminating unnecessary differences from other programming languages is a way of reducing the barriers to entry for starting to program in Swift. It is also clearly true that many languages influence each other syntactically, so you see a convergence of ideas coming from many different places.

That said, I think it is a stretch to say that Swift and Groovy look and act "identical", except in some very narrow cases. The goal of Swift is simply to be as great as possible; it is not trying to imitate some other programming language.

by Anonymous Coward

What do you think about Microsoft and C# versus the merits of Swift?

CL: I have a ton of respect for C#, Rust, and many other languages, and Swift certainly benefited by being able to observe their evolution over time.
As such, there are a lot of similarities between these languages, and it isn't an accident.

Comparing languages is difficult in this format, because a lot of the answers come down to "it depends on what you're trying to do", but I'll give it a shot. C# has the obvious benefit of working with the .NET ecosystem, whereas Swift is stronger at working in existing C and Objective-C ecosystems like Cocoa and POSIX.

At the language level, Swift has a more general type system than C#, and offers more advanced value types, protocol extensions, etc. Swift also has advantages in mobile use cases, because ARC requires significantly less memory than garbage-collected languages for a given workload. On the other hand, C# has a number of cool features that Swift lacks, like async/await, LINQ, etc.

by Anonymous Coward

Chris, what are your general thoughts about Rust as a programming language?

CL: I'm a huge Rust fan: I think it is a great language and its community is amazing. Rust has a clear lead over Swift in the systems programming space (e.g. for writing kernels and device drivers), and I consider it one of the very few candidates that could lead to displacing C and C++ with a safer programming language.

That said, Swift has advantages in terms of more developers using it and a more mature IDE story, and it offers a much shallower learning curve for new developers. It is also very likely that a future version of Swift will introduce move-only types and a full ownership model, which will make Swift a lot more interesting in the systems programming space.

by jo7hs2

As someone who has been involved with the development of programming languages, do you think it is still possible to come up with a modern-day replacement for BASIC that can operate in modern GUI environments? It seems like all attempts since we went GUI (aside from maybe early VisualBASIC and HyperCard) have been too complicated, and all of them have been platform-specific or abandoned.
With the emphasis on coding in schools, it seems like it would be helpful to have a good, simple, introductory language like we had in BASIC.

CL: It's probably a huge shock, but I think Swift could be this language. If you have an iPad, you should try out the free Swift Playgrounds app, which aims to teach people programming, assuming no prior experience. I think it would be great for Swift to expand to provide a VisualBASIC-like scripting solution for GUI apps as well.

by psergiu

How cross-platform is Swift? Are the GUI libraries platform-dependent or independent? I.e., can I write a single Swift program with a GUI that will compile, work the same, and look similar on multiple platforms: Linux, Mac OS, real Unixes & BSDs, AIX, Windows?

CL: Swift is open source, has a vibrant community with hundreds of contributors, and builds on the proven LLVM and Clang technology stack. The Swift community has ported Swift itself to many platforms beyond Apple's: it runs on various Linux distros, and work is underway by various people to port it to Android, Windows, various BSDs, and even IBM mainframes.

That said, Swift does not provide a GUI layer, so you need to use native technologies for that. Swift helps by providing great support for interoperating with existing C code (and will hopefully expand to support C++ and other languages in the future). It is possible for someone to design and build a cross-platform GUI layer, but I'm not aware of any serious efforts to do so.

by andywest

Why did Swift NOT have exception handling in the first couple of versions?

CL: Swift 1 (released in 2014) didn't include an error-handling model simply because it wasn't ready in time; it was added in Swift 2 (2015). Swift 2 included a number of great improvements that weren't ready for Swift 1, including protocol extensions. Protocol extensions dramatically expanded the design space of what you can do with Swift, bringing a new era of "protocol-oriented programming".
That said, even Swift 3 (2016) is still missing a ton of great things we hope to add over the coming years: there is a long and exciting road ahead! See the questions below for more details.

by Anonymous Coward

Strings are immutable pass-by-reference objects in most modern languages. Why did you make this decision?

CL: Swift uses value semantics for all of its "built-in" collections, including String, Array, Dictionary, etc. This provides a number of great advantages: it improves efficiency (permitting in-place updating instead of requiring reallocation), eliminates large classes of bugs related to unanticipated sharing (someone mutating your collection while you are using it), and defines away a class of concurrency issues. Strings are perhaps the simplest of these cases, but they get the efficiency and other benefits.

If you're interested in more detail, there is a wealth of good information about the benefits of value vs. reference types online. One great place to start is the "Building Better Apps with Value Types in Swift" talk from WWDC 2015.

by superwiz

Since you have been involved with two lauded languages, you are in a good position to answer the following question: are modern languages forced to rely on a language runtime to compensate for facilities lacking in modern operating systems? In other words, have languages tried to compensate for the fact that there are no new OS-level lightweight paradigms to take advantage of multi-core processors?

CL: I'm not sure exactly what you mean here, but yes: if an OS provides functionality, there is no reason for a language runtime to replicate it, so runtimes really only exist to supplement what the OS provides.
That said, the line between the OS and libraries is a very blurry one: Grand Central Dispatch (GCD) is a great example, because it is a combination of user-space code, kernel primitives, and more, all designed together.

by bill_mcgonigle

Say, about fifteen years ago, there was huge buzz about how languages and compilers were going to take care of the "Moore's Law problem" by automating the parallelism of every task that could be broken up. With static single-assignment form and the like, the programmer was going to be freed from manually doing the parallelism.

With manufacturers starting to turn out 32- and 64-core chips, I'm wondering how well we did on that front. I don't see a ton of software automatically not pegging a core on my CPUs. The ones that aren't quite as bad are mostly just doing a fork() in 2017. Did we get anywhere? Are we almost there? Is software just not compiled right now? Did it turn out to be harder than expected? Were languages not up to the task? Is hardware (e.g. memory access architectures) insufficient? Was the possibility oversold in the first place?

CL: I can't provide a global answer, but clearly parallelism is being well utilized in some "easy" cases (e.g. speeding up the build time of large software, the speed of 3D rendering, etc.). Also, while large machines are available, most computers only have 1-4 cores (e.g. mobile phones and laptops), which means that most software doesn't have to cope with the prospect of 32-core machines… yet.

Looking forward, I am skeptical of the promises of overly magic solutions like compiler auto-parallelization of single-threaded code. These "heroic" approaches can work on specific benchmarks and in other narrow situations, but they don't lead to a predictable and reliable programming model. For example, a good result would be for you to use one of these systems and get a 32x speedup on your codebase.
A really bad result would be to then make a minor change or bug fix to your code and find that it caused a 32x slowdown by defeating the prior compiler optimization. Magic solutions are problematic because they don't provide programmer control.

As such, my preferred approach is for the programming language to provide expressive ways for the programmer to describe their parallelism, and then to allow the compiler and runtime to optimize it efficiently. Things like actor models, OpenMP, C++ AMP, and goroutines seem like the best approach. I expect concurrency to be an active area of development in the Swift language, and I hope that the first pieces of the story will land in Swift 5 (2018).

by EMB Numbers

I am a 25+ year Objective-C programmer and, among other topics, I teach "Mobile App Development" and "Comparative Languages" at a university. I confess to being perplexed by some Swift language design decisions. For example, the two questions quoted at the top of this article.

CL: These two questions get to the root of Swift's "current and future". In short, I would NOT say that Swift is an extremely compelling systems programming or scripting language today, but it does aspire to be great for these in the years ahead. Recall that Swift is only a bit over two years old at this point: Swift 1.0 was released in September 2014.



If you compare Swift to other popular contemporary programming languages (e.g. Python, Java/C#, C/C++, Javascript, Objective-C, etc.), a major difference is that Swift was designed for generality: those languages were each initially designed for a specific niche and use case, and then grew outward organically.



In contrast, Swift was designed from the start to eventually span the gamut from scripting language to systems programming, and its underlying design choices anticipate the day when all the pieces come together. This is no small feat, because it requires pulling together the strengths of each of these languages into a single coherent whole, while balancing out the tradeoffs forced by each of them.



For example, if you compare Swift 3 to scripting languages like Python, Perl, and Ruby, I think that Swift is already as expressive, teachable, and easy to learn as a scripting language, and it includes a REPL and support for #! scripts. That said, there are obvious holes, like no regular-expression literals, no multi-line string literals, and poor support for functionality like command-line argument processing; Swift needs to be more "batteries included".
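For a concrete taste of that scripting story, a minimal Swift 3 "#!" script might look like the sketch below; the greeting helper is invented for the example, and note that argument handling really is hand-rolled today:

```swift
#!/usr/bin/env swift
// A minimal Swift "shell script": save as hello.swift, `chmod +x`, and run it
// directly, or interpret it with `swift hello.swift`.

func greeting(for name: String) -> String {
    return "Hello, \(name)!"
}

let args = CommandLine.arguments          // args[0] is the script's own path
for name in args.dropFirst() {
    print(greeting(for: name))
}
if args.count == 1 {
    print("usage: \(args[0]) <name> ...")
}
```

There is no standard argument-parsing library in the box, which is exactly the "batteries included" gap described above.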



If you compare Swift 3 to systems programming languages like C/C++ or Rust, then I think there is a lot to like: Swift provides full low-level memory control through its "unsafe" APIs (e.g. you can directly call malloc and free from Swift with no overhead, if that is what you want to do). Swift also has a much lighter-weight runtime than many other high-level languages (e.g. no tracing garbage collector or threading runtime is required). That said, there are a number of holes in the story: no support for memory-mapped I/O, no support for inline assembly, etc. More significantly, getting low-level control of memory requires dropping down to the Unsafe APIs, which provide a very C/C++-like level of control, but also a C/C++-like lack of memory safety. I'm optimistic that the ongoing work to bring an ownership model to Swift will provide the kinds of safety and performance that Rust offers in this space.
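To make the "unsafe" point concrete, here is a small sketch of calling malloc and free directly from Swift 3; as the name suggests, the compiler performs no safety checking on any of this:

```swift
#if os(Linux)
import Glibc
#else
import Darwin
#endif

// Grab four Ints' worth of raw memory straight from malloc:
// no garbage collector, no runtime overhead.
let count = 4
let raw = malloc(count * MemoryLayout<Int>.stride)!
let ints = raw.assumingMemoryBound(to: Int.self)

for i in 0..<count {
    ints[i] = i * i        // direct, unchecked stores, exactly as in C
}

let lastSquare = ints[count - 1]
print(lastSquare)          // 9

free(raw)                  // and, exactly as in C, forgetting this leaks
```

Nothing stops you from indexing past the allocation or using the pointer after the free, which is the C/C++-like lack of memory safety mentioned above.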



If you compare Swift 3 to general application-level languages like Java, I think it is already pretty great (as its rapid adoption in the iOS ecosystem has proven). The server space has very similar needs to general app development, but both would really benefit from a concurrency model (which I expect to be an important focus of Swift 5).



Beyond these areas of investment there is no shortage of ideas for other things to add over time. For example, the dynamic reflection capabilities you mention need to be baked out, and many people are interested in things like pre/post-conditions, language support for typestate, further improvements to the generics system, improved submodules/namespacing, a hygienic macro system, tail calls, and so much more.



There is a long and exciting road ahead for Swift development, and one of the most important pieces landed as a key part of Swift 3: unlike in the past, we expect Swift to be extremely source-compatible going forward. These new capabilities should be addable to the language and runtime without breaking existing code. If you are interested in following along or getting involved with Swift development, I highly encourage you to check out the swift-evolution mailing list and project on GitHub.



Any hope for more productive programming?

by Kohath



I work in the semiconductor industry and our ASIC designs have seen a few large jumps in productivity:

Transistors and custom layouts transitioned to standard cell flows and automated P&R.

Design using logic blocks transitioned to synthesized design using RTL with HDLs.

Most recently, we are synthesizing circuits directly from C code.

In the same timeframe, programming has remained more or less the same as it always was. New languages offer only incremental productivity improvements, and most of the big problems from 10 or 20 years ago remain big problems.



Do you know of any initiatives that could produce a step-function increase (say 5-10x) in coding productivity for average engineers?



CL: There have been a number of attempts to make a "big leap" in programmer productivity over the years, including visual programming languages, "fourth-generation" programming languages, and others. That said, in terms of broad impact on the industry, none of these have been as successful as the widespread rise of "package" ecosystems (like Perl's CPAN, RubyGems, npm in Javascript, and many others), which allow rapid reuse of other people's code. When I compare the productivity of a modern software developer using these systems, I think it is easy to see a 10x improvement in coding productivity compared to a C or C++ programmer 10 years ago.



Swift embraces this direction with its built-in package manager, SwiftPM. Just as the Swift language learns from other languages, SwiftPM is designed with an awareness of other package ecosystems and attempts to assemble their best ideas into a single coherent vision. SwiftPM also provides a portable build system, allowing development and deployment of cross-platform Swift packages. SwiftPM is still comparatively early in its design, but it has some heavy hitters behind it, particularly those involved in the various Swift for the Server initiatives. You might also be interested in the online package catalog hosted by IBM.
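As a sketch of what this looks like in practice, a SwiftPM manifest is itself just Swift code. The Package.swift below uses the Swift 3-era manifest API; the package name and dependency URL are made-up placeholders:

```swift
// Package.swift (Swift 3-style manifest)
import PackageDescription

let package = Package(
    name: "HelloServer",                 // hypothetical package name
    dependencies: [
        // Hypothetical dependency, pinned to major version 1.
        .Package(url: "https://github.com/example/SomeHTTPLib.git",
                 majorVersion: 1)
    ]
)
```

Running `swift build` in the package directory fetches the declared dependencies and builds the module, and the same manifest works on every platform SwiftPM supports.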



Looking ahead, though it's a bit cliché, I'd have to say that machine learning techniques (convolutional neural nets and deep neural nets, for example) really are changing the world by making formerly impossible things merely "hard". While it currently seems that you need a team of Ph.D.s to apply and develop these techniques, as they become better understood and developed, I expect them to be more widely accessible to the rest of us. Another really interesting recent development is the idea of "word vectors", which is a pretty cool area to geek out on.