About this page

Below you can find a list of intentional changes between the FPC 2.2.4 release and the 2.4.0 release which can change the behaviour of previously working code, along with why these changes were made and how you can adapt your code if you are affected by them.

All systems

Usage Changes

ppc386.cfg configuration file is no longer read

Old behaviour : In addition to fpc.cfg and .fpc.cfg , the compiler also looked for configuration files with the names ppc386.cfg and .ppc386.cfg

New behaviour : The compiler only looks for configuration files with the names fpc.cfg and .fpc.cfg

Reason : The ppc386.cfg name stems from the time when the compiler only supported the i386 platform, which is no longer the case. Naming configuration files this way has also been deprecated for quite a while, and the compiler warned about it in previous versions.

Remedy: Rename any (.)ppc386.cfg files you have and are using to (.)fpc.cfg

Language changes

Passing ordinal constants to formal const parameters

Old behaviour : Ordinal constants could be passed directly to formal parameters.

New behaviour : Passing ordinal constants to formal parameters is no longer allowed.

Example:

procedure test(const a);
begin
end;

begin
  test(5);
end.

The above program used to compile, but now it does not anymore.

Reason : It is not clear from the code above what the size of the ordinal constant (1, 2, 4 or 8 bytes) will be when it is passed to the procedure. See for example http://bugs.freepascal.org/view.php?id=9015 for how this can cause problems. The change in behaviour is also Delphi-compatible.

Effect : Code using this construct will no longer compile.

Remedy: Declare a variable of the appropriate size, assign the value to it and then pass this variable as parameter instead.
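Applied to the example above, the remedy can be sketched as follows (a minimal illustration; the variable name is arbitrary):

```pascal
procedure test(const a);
begin
end;

var
  l: longint;
begin
  { the variable has an explicit size (4 bytes), so the call is unambiguous }
  l := 5;
  test(l);
end.
```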

Treating direct-mapped properties as regular fields

Old behaviour : The compiler allowed treating properties that directly read and write a field (as opposed to redirecting via a getter/setter) as direct references to this field. This means that you could pass such properties to var and out parameters, that you could take their address, that you could assign values to subscripted properties with non-pointer result types (see example //2 below), and that you could assign values to typecasted properties.

New behaviour : All properties are now treated equally, regardless of whether they directly map to a field or use a getter/setter.

Example:

{$mode objfpc}
type
  trec = record
    a, b: integer;
  end;

  tc = class
  private
    fmyfield: integer;
    frec: trec;
  public
    property myfield: integer read fmyfield write fmyfield;
    property rec: trec read frec write frec;
  end;

var
  c: tc;
begin
  c := tc.create;
  inc(c.myfield);                   //1
  c.rec.a := 5;                     //2
  cardinal(c.myfield) := $ffffffff; //3
end.

The above code used to compile. Now the line marked with //1 will be flagged as Can't take the address of constant expressions, and the lines marked with //2 and //3 as Argument can't be assigned to.

Reason : Properties abstract the underlying memory layout and class implementation. By ignoring this abstraction in case a property directly mapped to a field, it became impossible to afterwards transparently change the property into an indirection via a getter and/or setter. The new behaviour is also Delphi-compatible.

Remedy: Change your code so that the address of properties is no longer taken, that they are no longer used as var or out parameters, that subscripts of properties with non-pointer result types are no longer assigned to, and that properties to which you write are not typecast. Note that a class instance qualifies as a pointer type in this context.
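As one hedged sketch of this remedy applied to the example above (the temporary variable r is introduced purely for illustration):

```pascal
{$mode objfpc}
type
  trec = record
    a, b: integer;
  end;

  tc = class
  private
    fmyfield: integer;
    frec: trec;
  public
    property myfield: integer read fmyfield write fmyfield;
    property rec: trec read frec write frec;
  end;

var
  c: tc;
  r: trec;
begin
  c := tc.create;
  { instead of inc(c.myfield): read, modify, write back }
  c.myfield := c.myfield + 1;
  { instead of c.rec.a := 5: copy the record, modify it, assign it back }
  r := c.rec;
  r.a := 5;
  c.rec := r;
  { instead of cardinal(c.myfield) := $ffffffff: assign via the property's own type }
  c.myfield := integer($ffffffff);
end.
```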

Overloading the assignment operator with a shortstring result

Old behaviour : It was possible to overload the assignment operator ":=" for every possible shortstring length as result (i.e., for string[1], string[2], ..., string[255])

New behaviour : It is now only possible to overload the assignment operator for string[255] as result.

Example:

type
  ts1 = string[4];
  ts2 = string[255];

operator := (l: longint) res: ts1;
begin
  str(l:4, res);
end;

operator := (l: longint) res: ts2;
begin
  str(l:20, res);
end;

begin
end.

The above code used to compile in previous versions. Now, the operator with ts1 as result is refused.

Reason : Since shortstrings of all lengths are assignment-compatible with all other shortstrings, an overloaded assignment operator defined for a single shortstring length has to work for assigning to shortstrings of all lengths (see bug #12109). As a result, having assignment operators for multiple shortstring lengths would introduce ambiguity in case there is no exact match.

Remedy: If you need to differentiate between multiple shortstring lengths, you now have to wrap these shortstrings in a record:

type
  ts1 = record
    s: string[4];
  end;
  ts2 = record
    s: string[255];
  end;

operator := (l: longint) res: ts1;
begin
  str(l:4, res.s);
end;

operator := (l: longint) res: ts2;
begin
  str(l:20, res.s);
end;

begin
end.

Absolute variable declarations

Old behaviour : It was possible to use absolute variable declarations to refer to expressions containing implicit pointer dereferences (class fields, dynamic array elements, pchar elements, ansistring/widestring elements, ...). Expressions containing explicit dereferencing were forbidden.

New behaviour : absolute variable declarations can no longer be used to refer to any kind of dereferenced expression, be it implicit or explicit.

Example:

type
  ta = class
    p: pointer;
    procedure test;
  end;

procedure ta.test;
var
  pa: ta absolute p;
  b: pchar;
  c: char absolute b[4];
begin
end;

begin
end.

The above code used to compile, but now it is rejected.

Reason : Consistency (implicit vs. explicit dereferencing should make no difference), Delphi compatibility.

Remedy: You can often replace such constructs either by using initialized variables or by using with-statements.
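One possible rewrite of the example above, using a typecast and direct indexing at the point of use instead of absolute (a sketch, not the only option):

```pascal
type
  ta = class
    p: pointer;
    procedure test;
  end;

procedure ta.test;
var
  pa: ta;
  b: pchar;
  c: char;
begin
  pa := ta(p);   { typecast replaces 'pa: ta absolute p' }
  b := 'hello';
  c := b[4];     { direct indexing replaces 'c: char absolute b[4]' }
end;

begin
end.
```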

Indexed properties and default parameters

Old behaviour : If a getter for an indexed property has default parameters, it was possible to omit those parameters when accessing the property as well.

New behaviour : When indexing a property, you always have to specify all parameters.

Example:

{$mode objfpc}{$H+}
type
  { TForm1 }
  TForm1 = class
  private
    function GetFoo(Index: Integer; Ask: Boolean = True): Integer;
  public
    property Foo[Index: Integer; Ask: Boolean]: Integer read GetFoo;
  end;

function TForm1.GetFoo(Index: Integer; Ask: Boolean): Integer;
begin
  Result := Foo[Index]; //1
end;

end.

The above code used to compile, because Foo[index] was interpreted as GetFoo(index), which caused the compiler to automatically add the default True parameter at the end. Now this code will fail.

Reason : Delphi compatibility, and the fact that you cannot use default parameters with setters (since there the value to set appears as the last parameter) made this (unintentional) feature behave asymmetrically.

Remedy: Always specify all parameters when using indexed properties.
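Applied to the example above, the getter body would spell out every index parameter (a sketch; as in the original example, this getter only illustrates compilation, since actually calling it would recurse):

```pascal
{$mode objfpc}{$H+}
type
  TForm1 = class
  private
    function GetFoo(Index: Integer; Ask: Boolean = True): Integer;
  public
    property Foo[Index: Integer; Ask: Boolean]: Integer read GetFoo;
  end;

function TForm1.GetFoo(Index: Integer; Ask: Boolean): Integer;
begin
  { both index parameters are now given explicitly }
  Result := Foo[Index, True];
end;

begin
end.
```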

Order of field and method/property declarations

Old behaviour : Field declarations could appear anywhere inside an object or class definition.

New behaviour : Within each individual visibility block ( public , private , ...), all fields must be declared before the property and method declarations.

Example:

{$mode objfpc}
type
  tc = class
    constructor create;
    a: longint;
  end;

constructor tc.create;
begin
end;

begin
end.

The above code used to compile, but now it will cause an error due to the a field appearing after the constructor.

Reason:

{$mode objfpc}
type
  tc = class
    function getx(i: longint): longint;
    property prop[i: longint]: longint read getx;
    default: longint;
  end;

function tc.getx(i: longint): longint;
begin
end;

begin
end.

The above code was ambiguous to the compiler, because when it finished parsing the property, it could not decide based on seeing the default token whether this meant that the property was a default property, or whether a field coming after the property was called "default". It did find this out after it had parsed the default token (because the next token was a ":" rather than a ";"), but by then it was too late.

In general, the problem is that several valid field names can also appear as modifiers for methods or properties. So in order to prevent any ambiguities, fields are no longer allowed to appear right after method/property declarations. This is Turbo Pascal and Delphi-compatible.

Remedy : There are two possible remedies:

1. Move the field declarations before the method/property declarations.
2. Start a new visibility block before the field declaration:

{$mode objfpc}
type
  tc = class
    function getx(i: longint): longint;
    property prop[i: longint]: longint read getx;
  public // added
    default: longint;
  end;

Local type definitions in parameter lists

Old behaviour : Parameter lists and function result types could contain local type definitions.

New behaviour : Local type definitions are no longer allowed inside parameter lists and function results.

Example:

procedure write(var f: file of extended; e: extended);
begin
  system.write(f, e);
end;

procedure writestring(const s: string[80]);
begin
  writeln(s);
end;

function mystr: string[50];
begin
  mystr := 'abc';
end;

All of the above subroutine definitions will now be rejected, because they all define new types inside their parameter lists or result type.

Reason : In Pascal, two parameters are only of the same type if their type refers to the same type definition. Allowing local type definitions inside a subroutine declaration therefore by definition causes errors in case the subroutine is declared globally in a unit. The reason is that the type definition in the interface and in the implementation definitions will differ (both times a new type is created), and hence the compiler will not be able to find the implementation of the interface definition. This change is also Delphi compatible.

Remedy: Move the type definitions into separate type blocks:

type
  textendedfile = file of extended;
  tstring50 = string[50];
  tstring80 = string[80];

procedure write(var f: textendedfile; e: extended);
begin
  system.write(f, e);
end;

procedure writestring(const s: tstring80);
begin
  writeln(s);
end;

function mystr: tstring50;
begin
  mystr := 'abc';
end;

Implementation changes

Alignment of record variables

Old behaviour : Variables of record types (not just their fields, but the records as a whole when declared as independent variables) would be aligned at most to the maximum alignment of their fields, limited by the maximum field alignment set for the target. This same limit was used to determine padding of the record size.

New behaviour : Variables of record types are now always aligned inside stack frames or as global variables in a way that provides optimal alignment for their embedded fields, regardless of the packrecords setting in effect. Moreover, unpacked records are also padded to a size which is a multiple of this alignment (to provide optimal alignment inside arrays of such records as well). The alignment of record fields inside other records still depends only on the packing settings of the "parent" record.

Example:

type
  tr = packed record
    d: double;
    b: byte;
  end;

tr used to be aligned to 1 byte in stack frames and as a global variable. Now it will be aligned to the native alignment of double (4 or 8 bytes depending on the target platform, limited by the maximum global/local alignment settings). Its size will remain 9 bytes as before (because of the packed specifier).

Reason : Performance.

Effect : The size of some non-packed records may change compared to previous versions. Other than that, the different alignment rules cannot impact your code unless you are making unsupported assumptions (like taking the address of a local variable, adding some value to it, and expecting that you are now pointing at the next local variable).

Remedy: If you depend on the layout and/or size of a record staying the same, always declare it as packed. The compiler is free to change non-packed records in any way it sees fit (except for changing the types or the order of the fields).

Byte/Word/Long/Qwordbool types

Old behaviour : Assigning "true" to variables of these types resulted in these variables getting the value "1". Typecasting ordinal values to Byte/Word/Long/Qwordbool also mapped these values onto [0,1] if the source and destination type were of different sizes.

New behaviour : Assigning true to such variables now sets them to "-1" (i.e., all bits set). Typecasting an ordinal value to such a type now leaves that ordinal value untouched.

Example:

var
  b: byte;
  bb: bytebool;
begin
  bb := true;
  writeln(ord(bb));
  b := 3;
  writeln(byte(wordbool(b)));
end.

This program used to print 1 in both cases, now it prints -1 for the first statement and 3 for the second.

Reason : Delphi-compatibility, compatibility with WinAPI functions. See http://bugs.freepascal.org/view.php?id=10233 for more information.

Effect : Code assuming that assigning true to a Byte/Word/Long/Qwordbool variable or parameter results in the ordinal value "1" no longer works. This may affect e.g. translations of C headers where certain int parameters which essentially function as boolean parameters were replaced with longbool.

Remedy: If you depend on the ordinal value of a particular variable being 1, either use an expression such as longbool(1) (which can also be used in the declaration of a constant), or use a regular ordinal type rather than one of the *bool types.
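A minimal sketch of the constant-based remedy (the constant name is illustrative):

```pascal
const
  lbTrue = longbool(1);  { ordinal value 1 instead of the -1 produced by 'true' }
var
  lb: longbool;
begin
  lb := lbTrue;
  writeln(ord(lb));  { prints 1 }
end.
```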

Encoding of single-character constants assigned to widestring

Old behaviour : If a source file's encoding was not utf-8 and a single character constant was assigned directly to a widestring, then this character would not be converted from the source code's code page. It would therefore result in a string with ord(str[1]) equal to the ordinal value of that character as it appeared in the source file (i.e., in the source file's code page).

New behaviour : Such character constants are now converted at compile time from the source file's code page into a utf-16 character before being stored in the widestring. For ansistrings/shortstrings nothing changes (i.e., a character with the original ordinal value as it appears in the source file is stored directly into the ansistring/shortstring, without any compile-time conversion).

Example:

{$codepage cp866}
{$ifdef unix}
uses cwstring;
{$endif}
var
  w: widestring;
  s: ansistring;
begin
  w := 'Б';
  s := 'Б';
end.

(it is assumed that the above source file is saved using code page 866)

The "Б" character has ordinal value 129 in code page 866. Previously, at run time ord(w[1]) and ord(s[1]) would equal 129. Now, at run time w[1] will equal widechar('Б'), and s[1] will (still) equal #129.

Reason : The previous behaviour was buggy, as multi-character constants were correctly converted from the source file's code page.

Remedy: If you want to store a particular ordinal value directly into a widestring without the compiler converting anything regardless of the source file's code page, use #xyza notation (i.e., use 4 digits to define the number, e.g. #0129 in the above example).
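For instance, under the assumptions of the example above, a sketch of the #xyza form:

```pascal
{$ifdef unix}
uses cwstring;
{$endif}
var
  w: widestring;
begin
  w := #0129;          { stores ordinal value 129, no code page conversion }
  writeln(ord(w[1]));  { 129 }
end.
```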

Sets in RTTI (run-time type information)

Old behaviour : The TParamFlags and TIntfFlagsBase sets from the typinfo unit used to be four bytes large. They were also always stored in the little endian set format.

New behaviour : These sets are now one byte large and are always stored according to the endianness of the target system.

Effect : Code parsing RTTI information may no longer compile if it typecasts these values into integers.

Reason : Delphi-compatibility.

Remedy: Change longint/integer typecasts into byte typecasts when treating these fields as ordinals, and do not fiddle with the bits before using the set values on big endian systems.
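A hedged sketch of such a byte typecast, assuming the typinfo unit's TParamFlags with pfVar as an arbitrary member:

```pascal
uses typinfo;
var
  flags: TParamFlags;
  b: byte;
begin
  flags := [pfVar];
  b := byte(flags);  { was longint(flags)/integer(flags) before 2.4.0 }
  writeln(b);
end.
```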

Unit changes

OpenGL loading

Old behaviour : After calling any of the Load_GL_version_X_X routines of the glext unit, only the extensions/functions introduced by that particular revision of the OpenGL standard became available.

New behaviour : All of the Load_GL_version_X_X functions now also load the functions related to extensions introduced by earlier versions.

Example:

uses
  gl, glext;
begin
  if not Load_GL_version_2_0 then
  begin
    writeln('OpenGL 2.0 is not supported');
    Halt;
  end;
end.

The above program used to only initialize the functions from the glext unit added in OpenGL 2.0. Now, it will also initialize the functions from OpenGL 1.2-1.5 (the gl unit already contains everything up to and including OpenGL 1.1).

Reason : This is more logical, since all of those OpenGL versions are backwards compatible.

Remedy: Remove any extra OpenGL extension initialization calls from your code. Leaving them there will not cause errors, but they are no longer necessary.

dom unit: memory management for nodes

Old behaviour : Destroying a TDOMDocument or TXMLDocument would free only those DOM nodes which were part of the document tree. Nodes not yet inserted into the tree would leak unless explicitly destroyed. Once inserted into the tree, a node could not be removed without destroying it. Node replacement/removal methods (namely, TDOMNode.RemoveChild , TDOMNode.ReplaceChild , TDOMElement.SetAttributeNode and TDOMElement.SetAttributeNodeNS) destroyed their return values and returned nil .

New behaviour : Every node created by one of the TDOMDocument.CreateXX methods is "owned" by the document and is guaranteed to be destroyed together with the document. The behaviour of nodes created by other means is unchanged. The node replacement/removal functions listed above no longer destroy their return value and therefore return a valid node.

Effect : Peak memory usage will grow if you replace/remove many nodes during the document's lifetime.

Reason : Compliance with the DOM specification and compatibility with other DOM implementations, including Delphi's.

Remedy : If you wish to keep memory usage to a minimum, manually free the nodes that are returned by the methods listed above. This is not a requirement, however.

Example:

// code like this
MyNode.ReplaceChild(NewNode, OldNode);
// will now have to become
MyNode.ReplaceChild(NewNode, OldNode).Free;

The described change is backwards-compatible, because calling Free for nil objects which were returned by older versions is actually a no-op.

dom unit: memory management for node lists

Old behaviour : Functions that return a TDOMNodeList object (namely, TDOMNode.ChildNodes , TDOMDocument.GetElementsByTagName , TDOMDocument.GetElementsByTagNameNS , TDOMElement.GetElementsByTagName and TDOMElement.GetElementsByTagNameNS ) created a new TDOMNodeList object on each call. These objects eventually had to be disposed of by calling their dedicated Release method.

New behaviour : The node lists are now cached. The functions listed above return the same object when called multiple times. The returned TDOMNodeList can be destroyed by calling the regular Free method, but doing so is optional. The TDOMNodeList.Release method has been removed.

Effect : Code using the TDOMNodeList.Release method will no longer compile.

Reason : Delphi compatibility, Mantis #13605. Reference counting of TDOMNodeLists was never actually implemented; the Release method was simply equal to Free .

Remedy: To keep your code backwards-compatible, replace the Release call with Free. If backwards compatibility is not an issue, the Release call may simply be removed.
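A short sketch of the backwards-compatible form (the document construction and tag name are illustrative):

```pascal
uses dom;
var
  doc: TDOMDocument;
  list: TDOMNodeList;
begin
  doc := TDOMDocument.Create;
  list := doc.GetElementsByTagName('item');
  writeln(list.Count);
  list.Free;  { was list.Release before 2.4.0; freeing is now optional }
  doc.Free;
end.
```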

Almost all old 1.0.x socket unit functions have been removed

Old behaviour : Basic socket functions were available in both fp<name> and <name> flavours (e.g., both fpbind() and bind() )

New behaviour : All <name> functions whose functionality was identical to that of the fp<name> variant have been removed. Some of the bind() variants did offer different functionality and have therefore been kept. This may result in "can't determine which overloaded ..." errors under certain circumstances.

Reason : These 1.0.x-era functions have been deprecated since 1.9.x times. Some errors in documenting this deprecation have led to them being removed only now. The reason for the deprecation was that their behaviour did not exactly match the POSIX socket functions, even though they had the same names. E.g., some of these functions' abilities to signal errors were incomplete, and some of the names deviated from the standard names (e.g., getsocketoption() vs. getsockopt() ). The fp<name> functions match the POSIX standard both in name and in behaviour, except for the additional fp prefix.

Remedy: Use the fp* functions. Be aware of some differences in the argument types (pointer types vs. var parameters), which may require an extra @-operator for certain parameters.
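A hedged sketch of the fp* style, showing the @-operator where the old functions took a var parameter (the port and address values are arbitrary):

```pascal
uses sockets;
var
  sock: longint;
  addr: TInetSockAddr;
begin
  sock := fpSocket(AF_INET, SOCK_STREAM, 0);
  FillChar(addr, sizeof(addr), 0);
  addr.sin_family := AF_INET;
  addr.sin_port := htons(8080);
  addr.sin_addr.s_addr := 0;  { INADDR_ANY }
  { fpBind takes a pointer argument, hence the @-operator }
  if fpBind(sock, @addr, sizeof(addr)) <> 0 then
    writeln('bind failed');
end.
```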

Infozip-based unzip renamed unzip51g

Old behaviour : The distribution contained two unzip units: paszlib/src/unzip.pp and unzip/src/unzip.pp .

New behaviour : The unzip/src/unzip.pp unit has been renamed to unzip51g.pp .

Reason : One of the two units had to be renamed to resolve the name collision. The paszlib/ version seemed more complete, so it was given the base name "unzip.pp". The unzip/ unit has been renamed to unzip51g because the source file says it is based on the 51g version of InfoZIP.

Remedy: In case your application depends on InfoZIP-specific functionality, use the unzip51g unit instead of the unzip unit.

All Unix-based systems

FindFirst/FindNext

Old behaviour : If the search pattern did not contain any wildcards (such as '?' or '*'), then the search attributes (faDirectory, faHidden, ...) were ignored.

New behaviour : Regardless of the format of the search pattern, the search attributes are properly taken into account.

Effect : If you called FindFirst with a search pattern not containing any wildcards and without special attributes, such an invocation would previously also return directories, hidden files, etc. Now only regular files will be returned, unless you specify the appropriate attributes.

Reason : This conforms with the documented behaviour, and with the behaviour on other platforms.

Remedy: Specify the proper attributes if you also want to search for things other than regular files.
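A minimal sketch of the remedy, passing the attributes explicitly so that directories and hidden files are still matched under the new behaviour (the path '/tmp' is only an illustration):

```pascal
{ Sketch: explicitly requesting directories and hidden files with
  FindFirst under the new behaviour. }
program findfirstdemo;

uses
  SysUtils;

var
  sr: TSearchRec;
begin
  { '/tmp' contains no wildcards; without faDirectory or faHidden,
    such a search would now only match regular files. }
  if FindFirst('/tmp', faDirectory or faHidden, sr) = 0 then
  begin
    repeat
      WriteLn(sr.Name);
    until FindNext(sr) <> 0;
    FindClose(sr);
  end;
end.
```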

Signals/exceptions in libraries

Old behaviour : FPC always hooked the SIGFPE , SIGSEGV , SIGBUS and SIGILL Unix signals in the initialization code of the system unit.

New behaviour : The aforementioned signals are no longer automatically hooked in the initialization code of libraries (or rather: they are unhooked once the system unit's initialization code has finished). In the case of programs, these signals remain hooked as before.

Effect : Catching exceptions resulting from the aforementioned signals will no longer work by default in FPC libraries. Note that this never worked anyway for libraries that were dynamically linked against a program at compile time, because in that case the program's initialization code ran after the library's initialization code, thereby overriding its signal handlers. The behaviour of Pascal language exceptions is not affected by this change.

Example : See the test programs in the testsuite: the library and the host program.

Reason : Only one signal handler can be installed per signal. So if you dynamically loaded an FPC library at run time, it would immediately install its signal handlers, thereby overriding any handlers the host program might already have installed. See http://bugs.freepascal.org/view.php?id=12704 for more information.

Remedy: You can use SysUtils' InquireSignal(), HookSignal(), UnhookSignal() and AbandonSignalHandler() routines to selectively hook/unhook particular signals in your library, and to restore previously installed handlers. See http://bugs.freepascal.org/view.php?id=12704 for more information on these routines.
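A sketch of a library that opts back in to the runtime's signal handling; this assumes a Unix target and the InquireSignal/HookSignal routines and RTL_SIG* constants from SysUtils (verify the names against the SysUtils documentation for your release):

```pascal
{ Sketch: a library that re-hooks the RTL's SIGSEGV handler itself,
  but only if the host program has not installed its own handler. }
library sighooks;

uses
  SysUtils;

begin
  { InquireSignal reports whether a signal is currently hooked by
    the FPC RTL; only install our handler when nothing is hooked,
    so the host program's handlers are left untouched. }
  if InquireSignal(RTL_SIGSEGV) = ssNotHooked then
    HookSignal(RTL_SIGSEGV);
end.
```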

Mac OS X

Case sensitivity for unit names

Old behaviour : The compiler treated all file systems on Mac OS X as case-preserving, but case-insensitive.

New behaviour : All file systems are now treated as case-sensitive by default. As a result, the compiler will no longer always find units whose file name does not exactly match the unit name as it appears in the uses clause, even if the unit is located on a case-insensitive file system.

Reason : Mac OS X can also be used in conjunction with case-sensitive file systems, and the old behaviour caused problems under certain circumstances.

Remedy: The distinction between case-preserving and case-sensitive only matters when the compiler's internal file name cache is used. This cache is enabled by default, but as of FPC 2.4.0 it can be disabled using the -Fd command line option.
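The remedy above is a single compiler switch; a sketch of the invocation (the program name is only an illustration):

```shell
# Sketch: disable the compiler's internal file name cache so unit
# lookups go straight to the (possibly case-insensitive) file system.
fpc -Fd myprogram.pas
```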