



We wrap up our conversation on complexity and play some more over/under as Allen thinks learning is backwards, Michael doesn’t write clean code, and Joe brings his dog to the discussion.

Reading these show notes via your podcast player? You can view this episode’s full show notes and participate in the discussion at https://www.codingblocks.net/episode89.

Sponsors

Datadog.com/codingblocks – Sign up today for a free 14 day trial and get a free Datadog t-shirt after creating your first dashboard.

Survey Says

What's your preferred password manager?

1Password

LastPass

KeePass + Dropbox

RoboForm

Keychain

I just use the same 3 passwords everywhere.

Whatever is built into the browser.

News

We thank everyone for leaving us a review and appreciate you taking the time out of your day to do so. iTunes: Char Broiled String Cheese, gnargnor, Zhouxiang189, BootManager

Come see Joe speak at the Atlanta Code Camp 2018. And don’t forget to kick him in the shins when you do! Assuming you can catch him.

Come see Allen speak at Microsoft Ignite 2018. WARNING: Allen won’t appreciate being kicked in the shins.

Complexity Continued

Quick Summary

Random access to an item by index or key is O(1). Data structures play a key role here: arrays, dictionaries, hash tables, etc.

List iterations are always O(n).

Nested list iterations are O(n^2) or worse.

Divide and conquer algorithms are O(log n) – repeatedly dividing a list into smaller sets is logarithmic.

List iterations that also use divide and conquer are O(n log n).

If you keep adding loops for every item added, you’re now into O(n!) territory and should quickly find a way out!
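The classes above can be sketched in a few lines of Python (the data and helper names here are illustrative, not from the episode):

```python
from bisect import bisect_left

data = list(range(1, 1_000_001))  # a sorted list: 1..1,000,000

# O(1): random access by index (or by key, for a dict/hash table)
item = data[499_999]

# O(n): a full iteration touches every element once
total = sum(data)

# O(log n): divide and conquer -- binary search halves the search space each step
def binary_search(sorted_list, target):
    i = bisect_left(sorted_list, target)
    return i if i < len(sorted_list) and sorted_list[i] == target else -1

# O(n log n): efficient comparison sorts iterate the list while dividing it
resorted = sorted(data[::-1])
```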

Space vs Time Complexity

Time complexity allows us to describe how long an operation will take.

Space complexity is about describing where the data used in an algorithm is stored. Space, in this regard, could be memory, disk, or any streaming, such as over a network or to another device.

Regarding memory, there is the heap and the stack. Scoped variables are held on the stack, which has a finite amount of space. The stack keeps simple values and pointers into the heap for more complex / larger objects. As your program executes, it adds items to the stack that don't get removed until the function completes (goes out of scope).

Why don't we care about the heap? We do, but it's much, much larger than the stack. For .NET runtimes, the stack size is 1 MB, vs. heap sizes measured in GB depending on the version of the OS.

A stack overflow is most likely to occur in recursive functions, where frames keep getting added to the stack before they are removed. Note: some languages support tail recursion, which allows the runtime to notice that new stack frames are superfluous and skip adding them – for example, a tail recursive Fibonacci algorithm will keep ONE frame on the stack, where a language without tail call elimination could easily overflow the stack for a large input.
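CPython is a handy runtime for demonstrating this, since it does no tail-call elimination: even a tail-shaped recursive call adds a stack frame, so the recursive version overflows where the iterative rewrite does not (function names here are made up for the sketch):

```python
import sys

# A "tail recursive" sum: the recursive call is the last thing the function does.
def sum_to(n, acc=0):
    if n == 0:
        return acc
    return sum_to(n - 1, acc + n)  # tail call -- but CPython still adds a frame

# No tail-call elimination means deep recursion overflows the (Python) stack:
sys.setrecursionlimit(1000)
try:
    sum_to(10_000)
    overflowed = False
except RecursionError:
    overflowed = True

# The iterative rewrite uses one stack frame no matter how large n gets:
def iterative_sum(n):
    total = 0
    while n > 0:
        total += n
        n -= 1
    return total
```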

How does big O work for space complexity? Basically the same: big O doesn't care if you add 3 variables to the stack for every input; it's much more concerned with the fact that you're adding something for every iteration – O(n).
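A quick sketch of the difference (these function names are invented for illustration): both versions below do O(n) work in time, but only one uses O(n) space.

```python
# O(n) space: materializes every squared value in memory before summing
def sum_of_squares_list(n):
    squares = [i * i for i in range(n)]  # n items held at once
    return sum(squares)

# O(1) space: keeps only the running total and the current value
def sum_of_squares_stream(n):
    total = 0
    for i in range(n):
        total += i * i
    return total
```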

How much does Big O matter?

Do we have a better system for measuring distributed load? (parallel processing, distributed algorithms)

Times when big O will matter:

It matters a lot if you're designing a new cryptocurrency or cryptography algorithm!

Interviewing – it will almost certainly come up during an interview.

Game programming (graphics, decision trees).

Machine learning / AI.

Business programming? Maybe not. Usually the bottleneck is in the data tier, network latency, or the UI, so long as you use data structures well (hashes vs. arrays). But that's not to say there aren't some bad complexity offenders within your line of business application.
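The "hashes vs. arrays" point is easy to see in practice: membership tests on a Python list scan every element (O(n)), while a set or dict hashes straight to the answer (O(1) on average). A rough timing sketch:

```python
import timeit

haystack_list = list(range(100_000))
haystack_set = set(haystack_list)
needle = 99_999  # worst case for the list: it scans all 100,000 elements

# 'in' on a list is a linear scan; 'in' on a set is a hash lookup
list_time = timeit.timeit(lambda: needle in haystack_list, number=100)
set_time = timeit.timeit(lambda: needle in haystack_set, number=100)
```

Same data, same question, wildly different cost – purely from the choice of data structure.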

When should you look into your algorithms? When things run slow under load, or when you're looking into hardware provisioning or predicting costs.

How do you figure out what to fix? Use profiling and load testing tools.
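For example, Python ships a profiler in the standard library (`cProfile`); a minimal sketch of profiling a suspect function (the function itself is a stand-in) looks like:

```python
import cProfile
import io
import pstats

# A stand-in for whatever code you suspect is slow
def slow_spot():
    return sum(i * i for i in range(100_000))

profiler = cProfile.Profile()
profiler.enable()
slow_spot()
profiler.disable()

# Print the top 5 calls by cumulative time to see where the time went
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()
print(report)
```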

What do you do about it? Refactor, refactor, refactor. Examine your data structures – maybe you should be using a hash or a plain array instead? Profile your app to find the slowest and most memory-intensive spots. Look for duplicated work/calls.
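Duplicated work is often the cheapest win. The classic example: naive recursive Fibonacci recomputes the same subproblems exponentially many times, and caching the results (here via Python's `functools.lru_cache`, one of several possible refactors) drops it to linear:

```python
from functools import lru_cache

# Duplicated work: recomputes the same subproblems -- O(2^n) calls
def fib_naive(n):
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

# Refactored: each subproblem is computed once and cached -- O(n)
@lru_cache(maxsize=None)
def fib_cached(n):
    if n < 2:
        return n
    return fib_cached(n - 1) + fib_cached(n - 2)
```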



Resources We Like

The Imposter’s Handbook (bigmachine.io)

Tail call (Wikipedia)

What is tail recursion? (Stack Overflow)

Everything is Fast for Small n (Coding Horror)

Latency Numbers Every Programmer Should Know (GitHub)

Is the use of dynamic considered a bad practice? (Stack Overflow)

Tip of the Week

Within Visual Studio, use the Paste Special feature to create classes from your JSON or XML. From the Edit menu, select Paste Special -> Paste JSON as Classes or Paste XML as Classes. Thanks @angryzoot! OK, OK, in fairness, Allen also mentioned this a while back in episode 31, but this tip is too good to not be repeated.

Use the React TypeScript Cheatsheet to get all of the performance of React with all of the type checking of TypeScript in your next JavaScript project. (GitHub) Thanks @Dance2Die!

SQL Server 2012 introduced a FORMAT capability so you can format your data in the string format you really want. (Formatting Types in .NET, FORMAT (Transact-SQL))

Check out @GaProgMan’s newest project, The .NET Core Podcast. (iTunes)