I recently had a very intense debate with a colleague about the runtime complexity of a deceptively simple algorithm. In the end we agreed to disagree, but the discussion has challenged my understanding of computer science fundamentals, so I'd like some additional insight on the matter.

Given the following Python, what is the Big-O runtime complexity?

for c in "How are you today?":
    print(c)

Now, I immediately called this out as O(n), i.e. linear: the running time depends on the length of the string, so the work done by this loop grows linearly as the string grows.
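To make my claim concrete, here's a small sketch (not from our actual debate) that counts the iterations the loop performs instead of printing, so the cost is directly observable:

```python
def iterations(s):
    """Count the iterations performed by: for c in s: print(c)"""
    count = 0
    for c in s:
        count += 1
    return count

# The work is exactly len(s): doubling the input doubles the iterations.
assert iterations("ab" * 10) == 20
assert iterations("ab" * 20) == 40
```

The iteration count is a straight line in len(s), which is exactly what O(n) means.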

My colleague then said, "No, it's constant, because we know that for the set of all strings we are dealing with, the max string is always 255 characters long (in our case), therefore it must be constant." He followed up by saying, "Because we have a max upper bound on the character length of the string, this results in O(255), which reduces to O(1)."

Anyway, we went back and forth, and after 45 minutes of both of us drawing sketches we were deadlocked on the issue.

My question is: in what world, or under what system of math, is the loop above a constant-time loop? If our upper bound were, say, 1,000,000 characters, and the set of all strings could be anywhere from 0 to 1,000,000 characters long, this loop would obviously exhibit linear running times depending on the size of the string.
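Here's a sketch of that point (the cap of 1,000,000 is the hypothetical bound from our discussion, not anything from real code): even inside the bounded domain, the step count still tracks the actual length of each input rather than collapsing to a single constant.

```python
# Hypothetical cap from the discussion: inputs may be anywhere
# from 0 to 1,000,000 characters long.
CAP = 1_000_000

def steps(s):
    """Iterations performed by: for c in s: print(c)"""
    assert len(s) <= CAP
    n = 0
    for _ in s:
        n += 1
    return n

samples = ["x" * n for n in (0, 10, 1000, 100_000)]
counts = [steps(s) for s in samples]

# Within the bounded domain, the step count still equals len(s):
assert counts == [0, 10, 1000, 100_000]
```

If the loop were truly O(1), these counts would all be (bounded by) the same constant regardless of input length; instead they scale with n across the whole allowed range.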

I additionally asked him whether he also thinks the following code is O(1) if the upper-bound size of n is known, meaning we are certain this code will only ever operate on strings of at most, say, 255 characters:

s = "How are you today?"
for c in s:
    for d in s:
        print(c + d)

He said this is also constant time... even after I explained that this is an O(n^2) algorithm and demonstrated that timing the code above for growing inputs produces a quadratic curve.
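A minimal sketch of that demonstration (my reconstruction, not the exact code I showed him), counting iterations of the nested loop rather than timing it:

```python
def pair_iterations(s):
    """Count iterations of: for c in s: for d in s: print(c + d)"""
    count = 0
    for c in s:
        for d in s:
            count += 1
    return count

# The count is len(s) ** 2 -- double the string, quadruple the work.
assert pair_iterations("abcd") == 16
assert pair_iterations("abcdefgh") == 64
```

Plotting pair_iterations against len(s) gives a parabola, which is the quadratic curve I was describing.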

So, am I missing some theoretical concept under which any of the above is true, depending on how the theory goes? Just to be clear, his understanding is that I am correct if n is not known; if the upper bound of n is always known, he asserts that both algorithms in this post have constant runtime complexity.

I'm just looking to maintain my sanity, but perhaps I'm wrong and there's some additional learning I can benefit from. My good, good colleague was very convincing. Also, if anybody has additional links or material on the subject specific to this question, please add them in the comments.