I'm writing about arrays in the Modern Perl book now. While writing about push and unshift yesterday, I looked in perlfunc to see if I'd missed anything subtle about push -- and I had:

Returns the number of elements in the array following the completed push.

I can't think of a time when I'd used this in the past decade. Every use of push I can think of is in void context:

push @some_array, qw( some list of items );

Curiosity convinced me to look at the bleadperl source code. The push op lives in pp.c, in a function called pp_push:

PP(pp_push)
{
    dVAR; dSP; dMARK; dORIGMARK; dTARGET;
    register AV * const ary = MUTABLE_AV(*++MARK);
    const MAGIC * const mg = SvTIED_mg((const SV *)ary, PERL_MAGIC_tied);

    if (mg) {
        *MARK-- = SvTIED_obj(MUTABLE_SV(ary), mg);
        PUSHMARK(MARK);
        PUTBACK;
        ENTER;
        call_method("PUSH",G_SCALAR|G_DISCARD);
        LEAVE;
        SPAGAIN;
        SP = ORIGMARK;
        if (GIMME_V != G_VOID) {
            PUSHi( AvFILL(ary) + 1 );
        }
    }
    else {
        PL_delaymagic = DM_DELAY;
        for (++MARK; MARK <= SP; MARK++) {
            SV * const sv = newSV(0);
            if (*MARK)
                sv_setsv(sv, *MARK);
            av_store(ary, AvFILLp(ary)+1, sv);
        }
        if (PL_delaymagic & DM_ARRAY)
            mg_set(MUTABLE_SV(ary));

        PL_delaymagic = 0;
        SP = ORIGMARK;
        PUSHi( AvFILL(ary) + 1 );
    }
    RETURN;
}

I know this is a big chunk of macros, but it's not too difficult to understand. The first if branch handles the case where the destination array has magic attached -- if it's a tied array, for example. Ignore that. The second branch loops through every list item provided to the op and appends each one to the array.

Look at the PUSHi( AvFILL(ary) + 1 ) line at the end of that branch. The PUSHi macro pushes an integer value (an IV, in core parlance) onto the stack. The AvFILL macro returns the index of the final element in the array; adding one to that index gives the number of elements in the array.

Every execution of this branch computes that value and pushes it onto the stack -- even when the op runs in void context, something the compiler can usually determine at compilation time.

I wrote a patch:

diff --git a/pp.c b/pp.c
index 9cedc3f..fbdc90c 100644
--- a/pp.c
+++ b/pp.c
@@ -4561,7 +4561,9 @@ PP(pp_push)
 
 	PL_delaymagic = 0;
 	SP = ORIGMARK;
-	PUSHi( AvFILLp(ary) + 1 );
+	if (GIMME_V != G_VOID) {
+	    PUSHi( AvFILL(ary) + 1 );
+	}
     }
     RETURN;
 }

The important part is the new GIMME_V condition. The GIMME_V macro evaluates to the current context of the expression. Usually this context is statically determinable, but if this push is the final expression in a subroutine, the calling context matters. The G_VOID macro represents void context. In other words: don't push anything onto the stack to return a value from this expression unless something wants that return value.

Yitzchak Scott-Thoennes commented on my patch to say that GIMME_V may be more expensive than I intended, because looking up through calling scopes to find the runtime context is not always cheap. He suggested the simpler check:

OP_GIMME(PL_op, 0) != G_VOID

... to check only the compile-time context of the operator. You can see that this cheaper check is still correct in ambiguous cases:

$ perl -MO=Concise,check_push_context
sub check_push_context
{
    push @_, 'static void context';
    push @_, 'dynamic context';
}
^D
d  <1> leavesub[1 ref] K/REFC,1 ->(end)
-     <@> lineseq KP ->d
1        <;> nextstate(main 61 push_ctx.pl:6) v:%,*,&,$ ->2
6        <@> push[t3] vK/2 ->7
2           <0> pushmark s ->3
4           <1> rv2av[t2] lKRM/3 ->5
3              <#> gv[*_] s ->4
5           <$> const[PV "static void context"] s ->6
7        <;> nextstate(main 61 push_ctx.pl:7) v:%,*,&,$ ->8
c        <@> push[t6] sK/2 ->d
8           <0> pushmark s ->9
a           <1> rv2av[t5] lKRM/3 ->b
9              <#> gv[*_] s ->a
b           <$> const[PV "dynamic context"] s ->c

The lines to note are the two push opcodes and their context flags. The first push has the flag v, which indicates that it occurs in void context. The second has the flag s, which indicates scalar context. Thus Yitzchak's suggestion works for both cases without ruining any dynamic-context call of this function.

As with most such optimizations, the question is whether the cost of checking for the optimization opportunity outweighs the cost of simply doing the work. Measuring that is tricky, and you're not going to get huge speed improvements out of this code either way. Still, for a one-line patch to a very common op, it may be worthwhile.