Let's say I have 3 double-precision arrays,

real*8, dimension(n) :: x, y, z

which are initialized as

x = 1.
y = (/ (1., i=1,n) /)
z = (/ (1. + 0*i, i=1,n) /)

Each of these should set every element of the corresponding array to 1. In ifort (16.0.0 20150815), this works as intended for any n within the range of the declared integer kind. That is, if we declare n as

integer*4, parameter :: n

then as long as n < 2147483647, all three initializations work as intended.
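For reference, the bound quoted above is just the largest signed 32-bit integer, which is what huge(n) reports for a 4-byte integer kind. A quick arithmetic check (Python here purely for the arithmetic):

```python
# Sanity check: 2147483647 is the largest signed 32-bit integer,
# i.e. 2**31 - 1, matching huge() for a 4-byte integer kind.
int32_max = 2**31 - 1
print(int32_max)  # 2147483647
```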

In gfortran (4.8.5 20150623 Red Hat 4.8.5-16), the initialization fails for y (the array constructor with a constant implied-do) whenever n > 65535, regardless of the kind of n. AFAIK, 65535 is the maximum of an unsigned short int, i.e. a 16-bit unsigned integer, which is well within the range of integer*4.
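To make that hypothesis concrete: 65535 is the largest unsigned 16-bit value, so a 16-bit element counter would wrap to 0 exactly at n = 65536, which is the first n where gfortran misbehaves here (again, Python purely for the arithmetic):

```python
# Sanity check: 65535 = 2**16 - 1 is the largest unsigned 16-bit
# value; a 16-bit counter wraps to 0 when it reaches 65536.
uint16_max = 2**16 - 1
print(uint16_max)      # 65535
print(65536 % 2**16)   # 0 -- where a 16-bit counter would land
```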

Below is an MWE:

program test
  implicit none
  integer*4, parameter :: n = 65536
  integer*4, parameter :: m = 65535
  real*8, dimension(n) :: x, y, z
  real*8, dimension(m) :: a, b, c
  integer*4 :: i

  print *, huge(n)

  x = 1.
  y = (/ (1., i=1,n) /)
  z = (/ (1.+0*i, i=1,n) /)
  print *, x(n), y(n), z(n)

  a = 1.
  b = (/ (1., i=1,m) /)
  c = (/ (1.+0*i, i=1,m) /)
  print *, a(m), b(m), c(m)
end program test

Compiling with gfortran ( gfortran test.f90 -o gfortran_test ) and running, it outputs:

  2147483647
   1.0000000000000000        0.0000000000000000        1.0000000000000000
   1.0000000000000000        1.0000000000000000        1.0000000000000000

Compiling with ifort ( ifort test.f90 -o ifort_test ) and running, it outputs:

  2147483647
   1.00000000000000        1.00000000000000        1.00000000000000
   1.00000000000000        1.00000000000000        1.00000000000000

What gives?