Better, but still not quite there. The x64 compiler no longer needs to sign extend the offset, and therefore can now take advantage of the implicit zero extension in x64 when working with 32-bit registers. (New x64 programmers are often confused by the compiler emitting MOV EAX, EAX instructions, which are not no-ops as they zero the high dword.) However, the compiler is still unable to fuse the additions together. A bit of experimentation with the kernel size multiplier reveals that the x64 compiler has an unusual attachment to the trick of doing an x2 add followed by an x8 scale in order to index 16-byte elements. In this particular case there's a possibility that the two adds might be faster than a shift on some CPUs, but with larger multipliers the compiler generates a SHL followed by an ADD, which is never optimal. Therefore, let's take over the indexing entirely:
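One way to do that (a sketch of the idea, not necessarily the exact code from the article — the helper name is mine) is to compute the byte offset up front in a pointer-sized unsigned type, so there is no promotion for the compiler to reason about:

```c
#include <stddef.h>
#include <stdint.h>

/* Compute the table offset ourselves: cast to size_t before scaling so
 * the whole expression is pointer-sized and unsigned. Each kernel entry
 * here is 32 bytes (two 16-byte halves), hence the shift by 5. */
static size_t kernel_offset(uint8_t index) {
    return (size_t)index << 5;   /* one shift; no sign extension needed */
}
```

The lookup then becomes something like (const __m128i *)((const char *)table + kernel_offset(*indices++)), leaving the compiler a single SHL plus a base+offset addressing mode.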

The clue as to what's going on is in the MOVSXD instruction, which is a sign extension instruction. According to the C/C++ standards, integral expressions involving values smaller than int are promoted to int, which in the case of Win32/Win64 is 32-bit. Therefore, the expression (*indices++ * 2) gives a signed 32-bit integer. For the x86 compiler, pointers are also 32-bit and so it just shrugs and uses the signed value. The x64 compiler has to deal with a conversion to a 64-bit pointer offset, however, and seems unable to recognize that an unsigned char multiplied by 2 will never be negative, so it emits sign extension code.
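A minimal illustration of the promotion rule (standalone, not the routine itself): multiplying a uint8_t by 2 yields a signed int, which is why forming a 64-bit pointer offset from it then requires sign extension.

```c
#include <stdint.h>

/* Operands narrower than int are promoted to int before arithmetic,
 * so the product below has type int (signed, 32-bit on Win32/Win64),
 * not uint8_t -- even though it can never actually be negative. */
static int promoted_product(uint8_t b) {
    return b * 2;   /* b -> int, result is int */
}
```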

It turns out that there are a couple of weirdnesses involved when the x64 compiler hits this code. The x86 compiler is able to fold the x2 from the indexing expression and the x16 from the 128-bit (__m128i) element size into a single x32, which is then converted into a left shift by 5 bits (shl). The x64 compiler is not, and ends up emitting x2 + x2 + x8. Why?

This routine uses each index to look up a premultiplied kernel and adds it to a short output window (8 samples). The output stream runs at 4x the rate of the input stream. In a real routine the kernels would typically be a bit longer, but an example of where you might use something like this is to simultaneously upsample and convert a row of pixels or a block of audio through a non-linear curve.
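For concreteness, here is a scalar sketch of the shape of such a routine (my reconstruction, not the actual SIMD code under discussion): each index selects a premultiplied 16-sample kernel, the first 8 samples are added to the carried window and emitted, and the last 8 samples become the new window.

```c
#include <stddef.h>
#include <stdint.h>

/* Scalar model of the lookup-and-accumulate loop. "table" holds one
 * premultiplied 16-sample kernel per possible index value; "dst"
 * receives 8 output samples per input index. */
static void accumulate(int16_t *dst, const int16_t *table,
                       const uint8_t *indices, size_t n) {
    int16_t window[8] = {0};
    while (n--) {
        const int16_t *k = table + (size_t)*indices++ * 16;
        for (int i = 0; i < 8; ++i) {
            dst[i] = (int16_t)(window[i] + k[i]);  /* emit first half */
            window[i] = k[8 + i];                  /* carry second half */
        }
        dst += 8;
    }
}
```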

Comments

Comments posted:

GCC 4.6.1 x86-64 has the exact same behavior. This coincidence is peculiar: when grading school tests, if two students make the exact same mistake, the teacher suspects cheating.

Z.T. - 29 06 11 - 23:08

Well, if the compiler isn't clever enough to realize that a byte multiplied by two won't overflow into the sign bit, then I wouldn't be surprised if it couldn't figure out that it can't overflow the 32-bit integer outright either. What happens if you give it a subtler hint by multiplying by 2ULL instead?

As an aside, working with (usually less-than-clever) 8-bit embedded compilers gives you something of an eye for spotting potential integer promotion inefficiencies.

doynax - 01 07 11 - 05:53
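doynax's 2ULL suggestion above can be sketched as follows (the helper name is mine): a 64-bit constant pulls the whole multiply up to unsigned long long, leaving no signed 32-bit intermediate to extend.

```c
#include <stdint.h>

/* With a plain 2, the multiply happens in (signed) int. With 2ULL,
 * the promoted operand is converted to unsigned long long first,
 * so the result is already a 64-bit unsigned value. */
static unsigned long long scaled_hinted(uint8_t idx) {
    return idx * 2ull;
}
```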

You still get screwed by type conversion rules in the above code - that's also why both GCC and VC++ agree about the way the address expression is generated. :)

To be more specific, the multiply by 2u is still done as a U32. So what the compiler sees (after the address expression for the table access has been generated) is, when written with explicit types, "U64_from_U32(U32_from_U8(*indices++) * 2u) * 16ull". For general expressions, the compiler isn't allowed to simplify this further; of course in this case we happen to know (because we start from a U8, so the first multiply can't overflow) that the expression is really the same as "U64_from_U8(*indices++) * 32ull". Your modified code (with the casting) takes a third option - it replaces the address expression with "U64_from_U32(U32_from_U8(*indices++) * 32u)" - again, we know this can't overflow, so it's identical. It still has one redundant conversion in it, but since x86-64 (and most other 64-bit architectures) implicitly zero-extends results of 32-bit computations by clearing the high 32 bits, this conversion is free.

Note that you get a "shl eax, 5", not "shl rax, 5". That's because you still work in U32 temps. For a shift of 5 this doesn't matter, but if you want e.g. a shift of 3 (which could use the x86 scaled-index addressing modes) you really need to use the convert-U8-to-U64-then-multiply version.

The underlying problem for all of this is that the C(++) integral types are int-centric and try to do work with ints whenever possible; this was great for 16-bit processors with 16-bit ints or 32-bit processors with 32-bit ints, but now that we have 64-bit processors and use 32-bit ints it causes a lot of friction.

Fabian Giesen (link) - 01 07 11 - 13:05
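The three address expressions from the comment above can be written out with explicit casts (the U64_from_U32-style names are the comment's notation, not a real API — here they are plain C casts). All three agree for uint8_t inputs because the first multiply can't overflow:

```c
#include <stdint.h>

/* "U64_from_U32(U32_from_U8(b) * 2u) * 16ull" -- what the compiler sees */
static uint64_t as_written(uint8_t b) { return (uint64_t)((uint32_t)b * 2u) * 16ull; }

/* "U64_from_U8(b) * 32ull" -- the fully folded form */
static uint64_t folded(uint8_t b)    { return (uint64_t)b * 32ull; }

/* "U64_from_U32(U32_from_U8(b) * 32u)" -- the modified (cast) version */
static uint64_t cast_fix(uint8_t b)  { return (uint64_t)((uint32_t)b * 32u); }
```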

You're right about a 64-bit cast also fixing the issue -- I had forgotten about that. (I'd cast to size_t instead of unsigned long long, but it's the same here.) However, looking at the C++ standard again, I'm not sure that the U64_from_U32() cast you mention actually exists. We know that it has to happen in order for the instruction addressing to occur, but we're talking about what the standard specifies directly, since the compiler is allowed to bypass any part of it that it can through the as-if rule. The indexing expression a[b] is converted to *(a + b), and since the "usual arithmetic conversions" don't include any special provisions for a pointer being involved, the offset will be promoted only up to int or unsigned int. The pointer+offset addition itself is then governed by 5.7p5, which simply describes the result of addition with a pointer without any description of specific types. I don't see any language that says that the integer value is to be promoted to pointer size prior to addition, and I'm not even sure that's possible given that there may not be such an integer type. This also fits more naturally with CPU architectures that allow indexing with a smaller data type than the base address, the 6502 and 68000 coming to mind.

Again, though, there's nothing that prohibits the compiler from generating a single SHL instruction. The "as-if" rule would allow the compiler to remove all of the extraneous casts here according to detected value ranges. What we're really talking about here is just code generation quality.

Phaeron - 02 07 11 - 09:07

"I don't see any language that says that the integer value is to be promoted to pointer size prior to addition, and I'm not even sure that's possible given that there may not be such an integer type."Correct; the cast to U64 here is not part of C(++) semantics, it's just there in the case of x86-64 (and all other 64-bit platforms that I've written code for, for that matter), where both pointers are GPRs happen to have 64-bit size. That add has to be done *somehow*; either all values involved are promoted to a common size prior to addition (which I'm implicitly assuming in my post), or there is an add instruction that takes mixed-size operands, or there is an addressing mode that allows a smaller-than-pointer-sized index. In the case of x86-64, there's no mixed-size adds and all the address calculations are done in full 64 bits, so it must be the first option."Again, though, there's nothing that prohibits the compiler from generating a single SHL instruction."Yes, in this case; but if you were passing a "const unsigned int *indices" instead, there would be; that's not meant to imply that compilers shouldn't be doing a better job at this, they should. But I do think that the way the C/C++ type promotion rules work out here (doing the multiply on int-sized values), while well-defined and internally consistent, is spectacularly non-intuitive and bound to cause numerous people to unknowingly shoot themselves in the foot; that's unfortunate."The "as-if" rule would allow the compiler to remove all of the extraneous casts here according to detected value ranges. 
What we're really talking about here is just code generation quality."Yes - it's just that it does require an additional pass of data-flow analysis (tracking upper and lower bounds for arithmetic expressions based on the original types to check whether an overflow might've occurred or not) that wasn't necessary in 32-bit code; we'll get there eventually, but progress on these kinds of optimizations is way slower than we'd like, especially since all the major compilers have been distracted by other issues for the past few years: MS has been pouring most of its resources into .NET for years only recently realizing that this whole "native code" thing might not just go away after all :), GCC is still caught up in a cascade of major infrastructure upgrades to drag their compiler kicking and screaming into this millennium (plus they have this unhealthy culture of language lawyering beating serious practical concerns sometimes), and ICCs main mission still seems to be raking up good SPEC CPU/FP results. LLVM to the rescue? We'll see :)

Fabian Giesen (link) - 02 07 11 - 15:38

Any reason why writing the asm inline directly is not a better idea? It might be harder to read but you could solve that by leaving the C/pseudocode in comments.

Kentaro (link) - 03 07 11 - 12:38

Kentaro: VC++ on x86-64 doesn't support inline ASM at all, only intrinsics (they had to add a bunch of new intrinsics to make that viable). Which is totally moronic, but there you go.

Fabian Giesen (link) - 03 07 11 - 14:57

Here is what Intel Compiler 12.0.4.147 generates for your C code:

    test r9d, r9d
    pxor xmm0, xmm0
    lea eax, DWORD PTR [-1+r9]
    je .B1.5
.B1.3::
    movzx r9d, BYTE PTR [r8]
    movdqa xmm1, xmm0
    add r9d, r9d
    dec eax
    inc r8
    shl r9, 4
    paddw xmm1, XMMWORD PTR [r9+rdx]
    movdqa xmm0, XMMWORD PTR [16+r9+rdx]
    movdqa XMMWORD PTR [rcx], xmm1
    add rcx, 16
    cmp eax, -1
    jne .B1.3
.B1.5::
    ret

So, is that better than MSVC/GCC or worse?