Other approaches from which we have borrowed or to which ours can be compared and contrasted are described in PEP 482 .

The type system supports unions, generic types, and a special type named Any which is consistent with (i.e. assignable to and from) all types. This latter feature is taken from the idea of gradual typing. Gradual typing and the full type system are explained in PEP 483 .
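The consistency relation can be seen in miniature in ordinary code. The following is an illustrative sketch (the names `double`, `untyped`, and `text` are invented for this example, not part of the PEP); no checking happens at runtime, but a static checker accepts assignments in both directions:

```python
from typing import Any

def double(n: int) -> int:
    return n * 2

# Any value is assignable to a variable of type Any...
untyped = 'some value'  # type: Any
# ...and a value of type Any is assignable to a variable of any type.
text = untyped  # type: str
print(double(21))  # 42
```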

The proposal is strongly inspired by mypy [mypy] . For example, the type "sequence of integers" can be written as Sequence[int] . The square brackets mean that no new syntax needs to be added to the language. The example here uses a custom type Sequence , imported from a pure-Python module typing . The Sequence[int] notation works at runtime by implementing __getitem__() in the metaclass (but its significance is primarily to an offline type checker).
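As a quick illustration of the runtime side (a sketch only; the notation's real significance is to a static checker, not to this snippet), subscription simply produces an object that can be used in annotations:

```python
from typing import Sequence

# Subscription works at runtime because typing implements
# __getitem__ in the metaclass; the result is an ordinary object.
seq_of_int = Sequence[int]
print(seq_of_int)  # e.g. typing.Sequence[int]
```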

While these annotations are available at runtime through the usual __annotations__ attribute, no type checking happens at runtime. Instead, the proposal assumes the existence of a separate off-line type checker which users can run over their source code voluntarily. Essentially, such a type checker acts as a very powerful linter. (While it would of course be possible for individual users to employ a similar checker at run time for Design By Contract enforcement or JIT optimization, those tools are not yet as mature.)

For example, here is a simple function whose argument and return type are declared in the annotations:
def greeting(name: str) -> str:
    return 'Hello ' + name

Note that this PEP still explicitly does NOT prevent other uses of annotations, nor does it require (or forbid) any particular processing of annotations, even when they conform to this specification. It simply enables better coordination, as PEP 333 did for web frameworks.

This PEP introduces a provisional module to provide these standard definitions and tools, along with some conventions for situations where annotations are not available.

PEP 3107 introduced syntax for function annotations, but the semantics were deliberately left undefined. There has now been enough third-party usage for static type analysis that the community would benefit from a standard vocabulary and baseline tools within the standard library.

It should also be emphasized that Python will remain a dynamically typed language, and the authors have no desire to ever make type hints mandatory, even by convention.

While the proposed typing module will contain some building blocks for runtime type checking -- in particular the get_type_hints() function -- third party packages would have to be developed to implement specific runtime type checking functionality, for example using decorators or metaclasses. Using type hints for performance optimizations is left as an exercise for the reader.
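Such a third-party decorator could be built on get_type_hints() roughly as follows. This is an illustrative sketch, not a tool proposed by this PEP; the names `check_types` and `greeting` are invented for the example, and only plain classes are handled:

```python
import functools
import inspect
from typing import get_type_hints

def check_types(func):
    """Toy runtime argument checker built on get_type_hints()."""
    hints = get_type_hints(func)
    sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            expected = hints.get(name)
            # Only plain classes are checked; real checkers do far more.
            if isinstance(expected, type) and not isinstance(value, expected):
                raise TypeError('{} must be {}'.format(name, expected.__name__))
        return func(*args, **kwargs)
    return wrapper

@check_types
def greeting(name: str) -> str:
    return 'Hello ' + name

print(greeting('Bob'))  # Hello Bob
```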

Of these goals, static analysis is the most important. This includes support for off-line type checkers such as mypy, as well as providing a standard notation that can be used by IDEs for code completion and refactoring.

This PEP aims to provide a standard syntax for type annotations, opening up Python code to easier static analysis and refactoring, potential runtime type checking, and (perhaps, in some contexts) code generation utilizing type information.

PEP 3107 added support for arbitrary annotations on parts of a function definition. Although no meaning was assigned to annotations then, there has always been an implicit goal to use them for type hinting [gvr-artima] , which is listed as the first possible use case in said PEP.

Type checkers are expected to attempt to infer as much information as necessary. The minimum requirement is to handle the builtin decorators @property , @staticmethod and @classmethod .
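A minimal sketch of what this minimum requirement covers (the class `Rect` and its members are invented for illustration): a checker must understand that `area` is accessed as an attribute of type float, that `square` receives the class object and returns a `Rect`, and that `unit_height` takes no implicit first argument:

```python
class Rect:
    def __init__(self, w: float, h: float) -> None:
        self.w = w
        self.h = h

    @property
    def area(self) -> float:  # accessed as an attribute of type float
        return self.w * self.h

    @classmethod
    def square(cls, side: float) -> 'Rect':  # 'cls' is the class object
        return cls(side, side)

    @staticmethod
    def unit_height() -> float:  # no implicit first argument
        return 1.0

r = Rect.square(3.0)
print(r.area)  # 9.0
```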

A type checker is expected to check the body of a checked function for consistency with the given annotations. The annotations may also be used to check correctness of calls appearing in other checked functions.
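For instance (an illustrative snippet, not from the PEP), both kinds of checks would fire here, even though the module defines and runs without error:

```python
def stringify(n: int) -> str:
    return n + 1          # body check: returns int where str is declared

def caller() -> None:
    stringify('oops')     # call-site check: str passed where int expected
```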

(Note that the return type of __init__ ought to be annotated with -> None . The reason for this is subtle. If __init__ assumed a return annotation of -> None , would that mean that an argument-less, un-annotated __init__ method should still be type-checked? Rather than leaving this ambiguous or introducing an exception to the exception, we simply say that __init__ ought to have a return annotation; the default behavior is thus the same as for other methods.)

It is recommended but not required that checked functions have annotations for all arguments and the return type. For a checked function, the default annotation for arguments and for the return type is Any . An exception is the first argument of instance and class methods. If it is not annotated, then it is assumed to have the type of the containing class for instance methods, and a type object type corresponding to the containing class object for class methods. For example, in class A the first argument of an instance method has the implicit type A . In a class method, the precise type of the first argument cannot be represented using the available type notation.
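These defaults can be sketched as follows (the class `A` and method names are illustrative only):

```python
from typing import Any

class A:
    def copy(self):       # 'self' has the implicit type A
        return self

    @classmethod
    def make(cls):        # 'cls' is the class object corresponding to A
        return cls()

    def describe(self, x) -> Any:  # un-annotated 'x' defaults to Any
        return x

a = A().copy()   # a checker infers type A
b = A.make()     # likewise an instance of A
```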

Any function without annotations should be treated as having the most general type possible, or ignored, by any type checker. Functions with the @no_type_check decorator should be treated as having no annotations.

The syntax leverages PEP 3107-style annotations with a number of extensions described in sections below. In its basic form, type hinting is used by filling function annotation slots with classes:

def greeting(name: str) -> str:
    return 'Hello ' + name

This states that the expected type of the name argument is str . Similarly, the expected return type is str .

Expressions whose type is a subtype of a specific argument type are also accepted for that argument.
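For example (an illustrative snippet using the invented function `describe`), bool is a subclass, and therefore a subtype, of int, so a bool argument is accepted where int is expected:

```python
def describe(n: int) -> str:
    return 'value: ' + str(n)

# bool is a subtype of int, so this call passes the type check:
print(describe(True))  # value: True
```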

Acceptable type hints

Type hints may be built-in classes (including those defined in standard library or third-party extension modules), abstract base classes, types available in the types module, and user-defined classes (including those defined in the standard library or third-party modules).

While annotations are normally the best format for type hints, there are times when it is more appropriate to represent them by a special comment, or in a separately distributed stub file. (See below for examples.)

Annotations must be valid expressions that evaluate without raising exceptions at the time the function is defined (but see below for forward references).

Annotations should be kept simple or static analysis tools may not be able to interpret the values. For example, dynamically computed types are unlikely to be understood. (This is an intentionally somewhat vague requirement; specific inclusions and exclusions may be added to future versions of this PEP as warranted by the discussion.)

In addition to the above, the following special constructs defined below may be used: None , Any , Union , Tuple , Callable , all ABCs and stand-ins for concrete classes exported from typing (e.g. Sequence and Dict ), type variables, and type aliases.

All newly introduced names used to support features described in following sections (such as Any and Union ) are available in the typing module.

Using None

When used in a type hint, the expression None is considered equivalent to type(None) .
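Concretely (a small illustrative snippet; the function `reset` is invented for the example), the runtime annotation stores the None object itself, and checkers interpret it as type(None):

```python
def reset() -> None:  # equivalent to annotating with type(None)
    return None

print(reset.__annotations__['return'] is None)  # True
```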

Type aliases

Type aliases are defined by simple variable assignments:

Url = str

def retry(url: Url, retry_count: int) -> None: ...

Note that we recommend capitalizing alias names, since they represent user-defined types, which (like user-defined classes) are typically spelled that way.

Type aliases may be as complex as type hints in annotations -- anything that is acceptable as a type hint is acceptable in a type alias:

from typing import TypeVar, Iterable, Tuple

T = TypeVar('T', int, float, complex)
Vector = Iterable[Tuple[T, T]]

def inproduct(v: Vector[T]) -> T:
    return sum(x*y for x, y in v)
def dilate(v: Vector[T], scale: T) -> Vector[T]:
    return ((x * scale, y * scale) for x, y in v)
vec = []  # type: Vector[float]

This is equivalent to:

from typing import TypeVar, Iterable, Tuple

T = TypeVar('T', int, float, complex)

def inproduct(v: Iterable[Tuple[T, T]]) -> T:
    return sum(x*y for x, y in v)
def dilate(v: Iterable[Tuple[T, T]], scale: T) -> Iterable[Tuple[T, T]]:
    return ((x * scale, y * scale) for x, y in v)
vec = []  # type: Iterable[Tuple[float, float]]

Callable

Frameworks expecting callback functions of specific signatures might be type hinted using Callable[[Arg1Type, Arg2Type], ReturnType] . Examples:

from typing import Callable

def feeder(get_next_item: Callable[[], str]) -> None:
    ...  # Body

def async_query(on_success: Callable[[int], None],
                on_error: Callable[[int, Exception], None]) -> None:
    ...  # Body

It is possible to declare the return type of a callable without specifying the call signature by substituting a literal ellipsis (three dots) for the list of arguments:

def partial(func: Callable[..., str], *args) -> Callable[..., str]:
    ...  # Body

Note that there are no square brackets around the ellipsis. The arguments of the callback are completely unconstrained in this case (and keyword arguments are acceptable).

Since using callbacks with keyword arguments is not perceived as a common use case, there is currently no support for specifying keyword arguments with Callable . Similarly, there is no support for specifying callback signatures with a variable number of arguments of a specific type.

Because typing.Callable does double-duty as a replacement for collections.abc.Callable , isinstance(x, typing.Callable) is implemented by deferring to isinstance(x, collections.abc.Callable) . However, isinstance(x, typing.Callable[...]) is not supported.

Generics

Since type information about objects kept in containers cannot be statically inferred in a generic way, abstract base classes have been extended to support subscription to denote expected types for container elements. Example:

from typing import Mapping, Set

def notify_by_email(employees: Set[Employee],
                    overrides: Mapping[str, str]) -> None: ...

Generics can be parameterized by using a new factory available in typing called TypeVar . Example:

from typing import Sequence, TypeVar

T = TypeVar('T')  # Declare type variable

def first(l: Sequence[T]) -> T:  # Generic function
    return l[0]

In this case the contract is that the returned value is consistent with the elements held by the collection.

A TypeVar() expression must always directly be assigned to a variable (it should not be used as part of a larger expression). The argument to TypeVar() must be a string equal to the variable name to which it is assigned. Type variables must not be redefined.

TypeVar supports constraining parametric types to a fixed set of possible types (note: those types cannot be parameterized by type variables). For example, we can define a type variable that ranges over just str and bytes . By default, a type variable ranges over all possible types. Example of constraining a type variable:

from typing import TypeVar, Text

AnyStr = TypeVar('AnyStr', Text, bytes)

def concat(x: AnyStr, y: AnyStr) -> AnyStr:
    return x + y

The function concat can be called with either two str arguments or two bytes arguments, but not with a mix of str and bytes arguments.

There should be at least two constraints, if any; specifying a single constraint is disallowed.

Subtypes of types constrained by a type variable should be treated as their respective explicitly listed base types in the context of the type variable. Consider this example:

class MyStr(str): ...

x = concat(MyStr('apple'), MyStr('pie'))

The call is valid but the type variable AnyStr will be set to str and not MyStr . In effect, the inferred type of the return value assigned to x will also be str .

Additionally, Any is a valid value for every type variable. Consider the following:

def count_truthy(elements: List[Any]) -> int:
    return sum(1 for elem in elements if elem)

This is equivalent to omitting the generic notation and just saying elements: List .

User-defined generic types

You can include a Generic base class to define a user-defined class as generic. Example:

from typing import TypeVar, Generic
from logging import Logger

T = TypeVar('T')

class LoggedVar(Generic[T]):
    def __init__(self, value: T, name: str, logger: Logger) -> None:
        self.name = name
        self.logger = logger
        self.value = value

    def set(self, new: T) -> None:
        self.log('Set ' + repr(self.value))
        self.value = new

    def get(self) -> T:
        self.log('Get ' + repr(self.value))
        return self.value

    def log(self, message: str) -> None:
        self.logger.info('{}: {}'.format(self.name, message))

Generic[T] as a base class defines that the class LoggedVar takes a single type parameter T . This also makes T valid as a type within the class body.

The Generic base class uses a metaclass that defines __getitem__ so that LoggedVar[t] is valid as a type:

from typing import Iterable

def zero_all_vars(vars: Iterable[LoggedVar[int]]) -> None:
    for var in vars:
        var.set(0)

A generic type can have any number of type variables, and type variables may be constrained. This is valid:

from typing import TypeVar, Generic
...

T = TypeVar('T')
S = TypeVar('S')

class Pair(Generic[T, S]): ...

Each type variable argument to Generic must be distinct. This is thus invalid:

from typing import TypeVar, Generic
...

T = TypeVar('T')

class Pair(Generic[T, T]):  # INVALID
    ...

The Generic[T] base class is redundant in simple cases where you subclass some other generic class and specify type variables for its parameters:

from typing import TypeVar, Iterator

T = TypeVar('T')

class MyIter(Iterator[T]): ...

That class definition is equivalent to:

class MyIter(Iterator[T], Generic[T]): ...

You can use multiple inheritance with Generic :

from typing import TypeVar, Generic, Sized, Iterable, Container, Tuple

T = TypeVar('T')

class LinkedList(Sized, Generic[T]): ...

K = TypeVar('K')
V = TypeVar('V')

class MyMapping(Iterable[Tuple[K, V]],
                Container[Tuple[K, V]],
                Generic[K, V]): ...

Subclassing a generic class without specifying type parameters assumes Any for each position. In the following example, MyIterable is not generic but implicitly inherits from Iterable[Any] :

from typing import Iterable

class MyIterable(Iterable):  # Same as Iterable[Any]
    ...

Generic metaclasses are not supported.

Scoping rules for type variables

Type variables follow normal name resolution rules. However, there are some special cases in the static typechecking context:

A type variable used in a generic function could be inferred to represent different types in the same code block. Example:

from typing import TypeVar, Generic

T = TypeVar('T')

def fun_1(x: T) -> T: ...  # T here
def fun_2(x: T) -> T: ...  # and here could be different

fun_1(1)    # This is OK, T is inferred to be int
fun_2('a')  # This is also OK, now T is str

A type variable used in a method of a generic class that coincides with one of the variables that parameterize this class is always bound to that variable. Example:

from typing import TypeVar, Generic

T = TypeVar('T')

class MyClass(Generic[T]):
    def meth_1(self, x: T) -> T: ...  # T here
    def meth_2(self, x: T) -> T: ...  # and here are always the same

a = MyClass()  # type: MyClass[int]
a.meth_1(1)    # OK
a.meth_2('a')  # This is an error!

A type variable used in a method that does not match any of the variables that parameterize the class makes this method a generic function in that variable:

T = TypeVar('T')
S = TypeVar('S')

class Foo(Generic[T]):
    def method(self, x: T, y: S) -> S: ...

x = Foo()  # type: Foo[int]
y = x.method(0, "abc")  # inferred type of y is str

Unbound type variables should not appear in the bodies of generic functions, or in the class bodies apart from method definitions:

T = TypeVar('T')
S = TypeVar('S')

def a_fun(x: T) -> None:
    # this is OK
    y = []  # type: List[T]
    # but below is an error!
    y = []  # type: List[S]

class Bar(Generic[T]):
    # this is also an error
    an_attr = []  # type: List[S]

    def do_something(x: S) -> S:  # this is OK though
        ...

A generic class definition that appears inside a generic function should not use type variables that parameterize the generic function:

from typing import List

def a_fun(x: T) -> None:

    # This is OK
    a_list = []  # type: List[T]
    ...

    # This is however illegal
    class MyGeneric(Generic[T]):
        ...

A generic class nested in another generic class cannot use same type variables. The scope of the type variables of the outer class doesn't cover the inner one:

T = TypeVar('T')
S = TypeVar('S')

class Outer(Generic[T]):
    class Bad(Iterable[T]):       # Error
        ...
    class AlsoBad:
        x = None  # type: List[T] # Also an error

    class Inner(Iterable[S]):     # OK
        ...
    attr = None  # type: Inner[T] # Also OK

Instantiating generic classes and type erasure

User-defined generic classes can be instantiated. Suppose we write a Node class inheriting from Generic[T] :

from typing import TypeVar, Generic

T = TypeVar('T')

class Node(Generic[T]):
    ...

To create Node instances you call Node() just as for a regular class. At runtime the type (class) of the instance will be Node . But what type does it have to the type checker? The answer depends on how much information is available in the call. If the constructor ( __init__ or __new__ ) uses T in its signature, and a corresponding argument value is passed, the type of the corresponding argument(s) is substituted. Otherwise, Any is assumed. Example:

from typing import TypeVar, Generic

T = TypeVar('T')

class Node(Generic[T]):
    x = None  # type: T # Instance attribute (see below)
    def __init__(self, label: T = None) -> None:
        ...

x = Node('')  # Inferred type is Node[str]
y = Node(0)   # Inferred type is Node[int]
z = Node()    # Inferred type is Node[Any]

In case the inferred type uses [Any] but the intended type is more specific, you can use a type comment (see below) to force the type of the variable, e.g.:

# (continued from previous example)
a = Node()  # type: Node[int]
b = Node()  # type: Node[str]

Alternatively, you can instantiate a specific concrete type, e.g.:

# (continued from previous example)
p = Node[int]()
q = Node[str]()
r = Node[int]('')  # Error
s = Node[str](0)   # Error

Note that the runtime type (class) of p and q is still just Node -- Node[int] and Node[str] are distinguishable class objects, but the runtime class of the objects created by instantiating them doesn't record the distinction. This behavior is called "type erasure"; it is common practice in languages with generics (e.g. Java, TypeScript).

Using generic classes (parameterized or not) to access attributes will result in type check failure. Outside the class definition body, a class attribute cannot be assigned, and can only be looked up by accessing it through a class instance that does not have an instance attribute with the same name:

# (continued from previous example)
Node[int].x = 1  # Error
Node[int].x      # Error
Node.x = 1       # Error
Node.x           # Error
type(p).x        # Error
p.x              # Ok (evaluates to None)
Node[int]().x    # Ok (evaluates to None)
p.x = 1          # Ok, but assigning to instance attribute

Generic versions of abstract collections like Mapping or Sequence and generic versions of built-in classes -- List , Dict , Set , and FrozenSet -- cannot be instantiated. However, concrete user-defined subclasses thereof and generic versions of concrete collections can be instantiated:

data = DefaultDict[int, bytes]()

Note that one should not confuse static types and runtime classes. The type is still erased in this case and the above expression is just a shorthand for:

data = collections.defaultdict()  # type: DefaultDict[int, bytes]

It is not recommended to use the subscripted class (e.g. Node[int] ) directly in an expression -- using a type alias (e.g. IntNode = Node[int] ) instead is preferred. (First, creating the subscripted class, e.g. Node[int] , has a runtime cost. Second, using a type alias is more readable.)

Arbitrary generic types as base classes

Generic[T] is only valid as a base class -- it's not a proper type. However, user-defined generic types such as LinkedList[T] from the above example and built-in generic types and ABCs such as List[T] and Iterable[T] are valid both as types and as base classes. For example, we can define a subclass of Dict that specializes type arguments:

from typing import Dict, List, Optional

class Node:
    ...

class SymbolTable(Dict[str, List[Node]]):
    def push(self, name: str, node: Node) -> None:
        self.setdefault(name, []).append(node)
    def pop(self, name: str) -> Node:
        return self[name].pop()
    def lookup(self, name: str) -> Optional[Node]:
        nodes = self.get(name)
        if nodes:
            return nodes[-1]
        return None

SymbolTable is a subclass of dict and a subtype of Dict[str, List[Node]] .

If a generic base class has a type variable as a type argument, this makes the defined class generic. For example, we can define a generic LinkedList class that is iterable and a container:

from typing import TypeVar, Iterable, Container

T = TypeVar('T')

class LinkedList(Iterable[T], Container[T]):
    ...

Now LinkedList[int] is a valid type. Note that we can use T multiple times in the base class list, as long as we don't use the same type variable T multiple times within Generic[...] .

Also consider the following example:

from typing import TypeVar, Mapping

T = TypeVar('T')

class MyDict(Mapping[str, T]):
    ...

In this case MyDict has a single parameter, T .

Abstract generic types

The metaclass used by Generic is a subclass of abc.ABCMeta . A generic class can be an ABC by including abstract methods or properties, and generic classes can also have ABCs as base classes without a metaclass conflict.

Type variables with an upper bound

A type variable may specify an upper bound using bound=<type> (note: <type> itself cannot be parameterized by type variables). This means that an actual type substituted (explicitly or implicitly) for the type variable must be a subtype of the boundary type. Example:

from typing import TypeVar, Sized

ST = TypeVar('ST', bound=Sized)

def longer(x: ST, y: ST) -> ST:
    if len(x) > len(y):
        return x
    else:
        return y

longer([1], [1, 2])  # ok, return type List[int]
longer({1}, {1, 2})  # ok, return type Set[int]
longer([1], {1, 2})  # ok, return type Collection[int]

An upper bound cannot be combined with type constraints (as in AnyStr , see the example earlier); type constraints cause the inferred type to be _exactly_ one of the constraint types, while an upper bound just requires that the actual type is a subtype of the boundary type.

Covariance and contravariance

Consider a class Employee with a subclass Manager . Now suppose we have a function with an argument annotated with List[Employee] . Should we be allowed to call this function with a variable of type List[Manager] as its argument? Many people would answer "yes, of course" without even considering the consequences. But unless we know more about the function, a type checker should reject such a call: the function might append an Employee instance to the list, which would violate the variable's type in the caller.

It turns out such an argument acts contravariantly, whereas the intuitive answer (which is correct in case the function doesn't mutate its argument!) requires the argument to act covariantly. A longer introduction to these concepts can be found on Wikipedia [wiki-variance] and in PEP 483; here we just show how to control a type checker's behavior.

By default generic types are considered invariant in all type variables, which means that values for variables annotated with types like List[Employee] must exactly match the type annotation -- no subclasses or superclasses of the type parameter (in this example Employee ) are allowed.

To facilitate the declaration of container types where covariant or contravariant type checking is acceptable, type variables accept keyword arguments covariant=True or contravariant=True . At most one of these may be passed. Generic types defined with such variables are considered covariant or contravariant in the corresponding variable. By convention, it is recommended to use names ending in _co for type variables defined with covariant=True and names ending in _contra for those defined with contravariant=True .

A typical example involves defining an immutable (or read-only) container class:

from typing import TypeVar, Generic, Iterable, Iterator

T_co = TypeVar('T_co', covariant=True)

class ImmutableList(Generic[T_co]):
    def __init__(self, items: Iterable[T_co]) -> None: ...
    def __iter__(self) -> Iterator[T_co]: ...
    ...

class Employee: ...

class Manager(Employee): ...

def dump_employees(emps: ImmutableList[Employee]) -> None:
    for emp in emps:
        ...

mgrs = ImmutableList([Manager()])  # type: ImmutableList[Manager]
dump_employees(mgrs)  # OK

The read-only collection classes in typing are all declared covariant in their type variable (e.g. Mapping and Sequence ). The mutable collection classes (e.g. MutableMapping and MutableSequence ) are declared invariant. The one example of a contravariant type is the Generator type, which is contravariant in the send() argument type (see below).

Note: Covariance or contravariance is not a property of a type variable, but a property of a generic class defined using this variable. Variance is only applicable to generic types; generic functions do not have this property. The latter should be defined using only type variables without covariant or contravariant keyword arguments. For example, the following example is fine:

from typing import TypeVar

class Employee: ...

class Manager(Employee): ...

E = TypeVar('E', bound=Employee)

def dump_employee(e: E) -> None: ...

dump_employee(Manager())  # OK

while the following is prohibited:

B_co = TypeVar('B_co', covariant=True)

def bad_func(x: B_co) -> B_co:  # Flagged as error by a type checker
    ...

The numeric tower

PEP 3141 defines Python's numeric tower, and the stdlib module numbers implements the corresponding ABCs ( Number , Complex , Real , Rational and Integral ). There are some issues with these ABCs, but the built-in concrete numeric classes complex , float and int are ubiquitous (especially the latter two :-).

Rather than requiring that users write import numbers and then use numbers.Real etc., this PEP proposes a straightforward shortcut that is almost as effective: when an argument is annotated as having type float , an argument of type int is acceptable; similarly, for an argument annotated as having type complex , arguments of type float or int are acceptable. This does not handle classes implementing the corresponding ABCs or the fractions.Fraction class, but we believe those use cases are exceedingly rare.
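The shortcut in practice (an illustrative snippet; `scale` and `magnitude` are invented names):

```python
def scale(x: float) -> float:
    # Under the numeric-tower shortcut, an int argument is acceptable
    # where float is expected, so scale(2) passes the type check.
    return x * 1.5

def magnitude(z: complex) -> float:
    # Likewise, int and float arguments are acceptable for complex.
    return abs(z)

print(scale(2))        # 3.0
print(magnitude(3.0))  # 3.0
```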

Forward references

When a type hint contains names that have not been defined yet, that definition may be expressed as a string literal, to be resolved later.

A situation where this occurs commonly is the definition of a container class, where the class being defined occurs in the signature of some of the methods. For example, the following code (the start of a simple binary tree implementation) does not work:

class Tree:
    def __init__(self, left: Tree, right: Tree):
        self.left = left
        self.right = right

To address this, we write:

class Tree:
    def __init__(self, left: 'Tree', right: 'Tree'):
        self.left = left
        self.right = right

The string literal should contain a valid Python expression (i.e., compile(lit, '', 'eval') should be a valid code object) and it should evaluate without errors once the module has been fully loaded. The local and global namespace in which it is evaluated should be the same namespaces in which default arguments to the same function would be evaluated.

Moreover, the expression should be parseable as a valid type hint, i.e., it is constrained by the rules from the section Acceptable type hints above.

It is allowable to use string literals as part of a type hint, for example:

class Tree:
    ...
    def leaves(self) -> List['Tree']:
        ...

A common use for forward references is when e.g. Django models are needed in the signatures. Typically, each model is in a separate file, and has methods taking arguments whose type involves other models. Because of the way circular imports work in Python, it is often not possible to import all the needed models directly:

# File models/a.py
from models.b import B
class A(Model):
    def foo(self, b: B): ...

# File models/b.py
from models.a import A
class B(Model):
    def bar(self, a: A): ...

# File main.py
from models.a import A
from models.b import B

Assuming main is imported first, this will fail with an ImportError at the line from models.a import A in models/b.py, which is being imported from models/a.py before a has defined class A. The solution is to switch to module-only imports and reference the models by their _module_._class_ name:

# File models/a.py
from models import b
class A(Model):
    def foo(self, b: 'b.B'): ...

# File models/b.py
from models import a
class B(Model):
    def bar(self, a: 'a.A'): ...

# File main.py
from models.a import A
from models.b import B

Union types

Since accepting a small, limited set of expected types for a single argument is common, there is a new special factory called Union . Example:

from typing import Union

def handle_employees(e: Union[Employee, Sequence[Employee]]) -> None:
    if isinstance(e, Employee):
        e = [e]
    ...

A type factored by Union[T1, T2, ...] is a supertype of all types T1 , T2 , etc., so that a value that is a member of one of these types is acceptable for an argument annotated by Union[T1, T2, ...] .

One common case of union types are optional types. By default, None is an invalid value for any type, unless a default value of None has been provided in the function definition. Examples:

def handle_employee(e: Union[Employee, None]) -> None: ...

As a shorthand for Union[T1, None] you can write Optional[T1] ; for example, the above is equivalent to:

from typing import Optional

def handle_employee(e: Optional[Employee]) -> None: ...

A past version of this PEP allowed type checkers to assume an optional type when the default value is None , as in this code:

def handle_employee(e: Employee = None): ...

This would have been treated as equivalent to:

def handle_employee(e: Optional[Employee] = None) -> None: ...

This is no longer the recommended behavior. Type checkers should move towards requiring the optional type to be made explicit.

Support for singleton types in unions

A singleton instance is frequently used to mark some special condition, in particular in situations where None is also a valid value for a variable. Example:

_empty = object()

def func(x=_empty):
    if x is _empty:  # default argument value
        return 0
    elif x is None:  # argument was provided and it's None
        return 1
    else:
        return x * 2

To allow precise typing in such situations, the user should use the Union type in conjunction with the enum.Enum class provided by the standard library, so that type errors can be caught statically:

from typing import Union
from enum import Enum

class Empty(Enum):
    token = 0
_empty = Empty.token

def func(x: Union[int, None, Empty] = _empty) -> int:

    boom = x * 42  # This fails type check

    if x is _empty:
        return 0
    elif x is None:
        return 1
    else:  # At this point typechecker knows that x can only have type int
        return x * 2

Since the subclasses of Enum cannot be further subclassed, the type of variable x can be statically inferred in all branches of the above example. The same approach is applicable if more than one singleton object is needed: one can use an enumeration that has more than one value:

class Reason(Enum):
    timeout = 1
    error = 2

def process(response: Union[str, Reason] = '') -> str:
    if response is Reason.timeout:
        return 'TIMEOUT'
    elif response is Reason.error:
        return 'ERROR'
    else:
        # response can be only str, all other possible values exhausted
        return 'PROCESSED: ' + response

The Any type

A special kind of type is Any . Every type is consistent with Any . It can be considered a type that has all values and all methods. Note that Any and builtin type object are completely different.

When the type of a value is object , the type checker will reject almost all operations on it, and assigning it to a variable (or using it as a return value) of a more specialized type is a type error. On the other hand, when a value has type Any , the type checker will allow all operations on it, and a value of type Any can be assigned to a variable (or used as a return value) of a more constrained type.

A function parameter without an annotation is assumed to be annotated with Any . If a generic type is used without specifying type parameters, they are assumed to be Any :

from typing import Mapping

def use_map(m: Mapping) -> None:  # Same as Mapping[Any, Any]
    ...

This rule also applies to Tuple , in annotation context it is equivalent to Tuple[Any, ...] and, in turn, to tuple . As well, a bare Callable in an annotation is equivalent to Callable[..., Any] and, in turn, to collections.abc.Callable :

from typing import Tuple, List, Callable

def check_args(args: Tuple) -> bool:
    ...

check_args(())           # OK
check_args((42, 'abc'))  # Also OK
check_args(3.14)         # Flagged as error by a type checker

# A list of arbitrary callables is accepted by this function
def apply_callbacks(cbs: List[Callable]) -> None:
    ...

The NoReturn type
-----------------

The typing module provides a special type NoReturn to annotate functions that never return normally.  For example, a function that unconditionally raises an exception::

    from typing import NoReturn

    def stop() -> NoReturn:
        raise RuntimeError('no way')

The NoReturn annotation is used for functions such as sys.exit.  Static type checkers will ensure that functions annotated as returning NoReturn truly never return, either implicitly or explicitly::

    import sys
    from typing import NoReturn

    def f(x: int) -> NoReturn:  # Error, f(0) implicitly returns None
        if x != 0:
            sys.exit(1)

The checkers will also recognize that the code after calls to such functions is unreachable and will behave accordingly::

    # continue from first example
    def g(x: int) -> int:
        if x > 0:
            return x
        stop()
        return 'whatever works'  # Error might not be reported by some checkers
                                 # that ignore errors in unreachable blocks

The NoReturn type is only valid as a return annotation of functions, and is considered an error if it appears in other positions::

    from typing import List, NoReturn

    # All of the following are errors
    def bad1(x: NoReturn) -> int:
        ...

    bad2 = None  # type: NoReturn

    def bad3() -> List[NoReturn]:
        ...
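At runtime the NoReturn annotation is inert; the function simply raises, which this small illustrative sketch confirms:

```python
from typing import NoReturn


def stop() -> NoReturn:
    raise RuntimeError('no way')


# NoReturn only informs static checkers; calling the function
# at runtime raises the exception as usual.
try:
    stop()
except RuntimeError as exc:
    print(exc)  # no way
```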

The type of class objects
-------------------------

Sometimes you want to talk about class objects, in particular class objects that inherit from a given class.  This can be spelled as Type[C] where C is a class.  To clarify: while C (when used as an annotation) refers to instances of class C, Type[C] refers to subclasses of C.  (This is similar to the distinction between object and type.)

For example, suppose we have the following classes::

    class User: ...  # Abstract base for User classes
    class BasicUser(User): ...
    class ProUser(User): ...
    class TeamUser(User): ...

And suppose we have a function that creates an instance of one of these classes if you pass it a class object::

    def new_user(user_class):
        user = user_class()
        # (Here we could write the user object to a database)
        return user

Without Type[], the best we could do to annotate new_user() would be::

    def new_user(user_class: type) -> User:
        ...

However, using Type[] and a type variable with an upper bound we can do much better::

    U = TypeVar('U', bound=User)

    def new_user(user_class: Type[U]) -> U:
        ...

Now when we call new_user() with a specific subclass of User, a type checker will infer the correct type of the result::

    joe = new_user(BasicUser)  # Inferred type is BasicUser

The value corresponding to Type[C] must be an actual class object that's a subtype of C, not a special form.  In other words, in the above example calling e.g. new_user(Union[BasicUser, ProUser]) is rejected by the type checker (in addition to failing at runtime because you can't instantiate a union).  Note that it is legal to use a union of classes as the parameter for Type[], as in::

    def new_non_team_user(user_class: Type[Union[BasicUser, ProUser]]):
        user = new_user(user_class)
        ...

However, the actual argument passed in at runtime must still be a concrete class object, e.g. in the above example::

    new_non_team_user(ProUser)   # OK
    new_non_team_user(TeamUser)  # Disallowed by type checker

Type[Any] is also supported (see below for its meaning).
Type[T] where T is a type variable is allowed when annotating the first argument of a class method (see the relevant section).

Any other special constructs like Tuple or Callable are not allowed as an argument to Type.

There are some concerns with this feature: for example, when new_user() calls user_class() this implies that all subclasses of User must support this in their constructor signature.  However, this is not unique to Type[]: class methods have similar concerns.  A type checker ought to flag violations of such assumptions, but by default constructor calls that match the constructor signature in the indicated base class (User in the example above) should be allowed.  A program containing a complex or extensible class hierarchy might also handle this by using a factory class method.  A future revision of this PEP may introduce better ways of dealing with these concerns.

When Type is parameterized it requires exactly one parameter.  Plain Type without brackets is equivalent to Type[Any] and this in turn is equivalent to type (the root of Python's metaclass hierarchy).  This equivalence also motivates the name, Type, as opposed to alternatives like Class or SubType, which were proposed while this feature was under discussion; this is similar to the relationship between, e.g., List and list.

Regarding the behavior of Type[Any] (or Type or type), accessing attributes of a variable with this type only provides attributes and methods defined by type (for example, __repr__() and __mro__).  Such a variable can be called with arbitrary arguments, and the return type is Any.

Type is covariant in its parameter, because Type[Derived] is a subtype of Type[Base]::

    def new_pro_user(pro_user_class: Type[ProUser]):
        user = new_user(pro_user_class)  # OK
        ...
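Putting the pieces of this section together, a self-contained sketch of the new_user() example (with a trivial constructor standing in for the database logic) runs as follows:

```python
from typing import Type, TypeVar


class User: ...            # Abstract base for User classes
class BasicUser(User): ...
class ProUser(User): ...


U = TypeVar('U', bound=User)


def new_user(user_class: Type[U]) -> U:
    # A checker infers the return type from the class object passed in.
    return user_class()


joe = new_user(BasicUser)
print(type(joe).__name__)  # BasicUser
```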

Annotating instance and class methods
-------------------------------------

In most cases the first argument of class and instance methods does not need to be annotated, and it is assumed to have the type of the containing class for instance methods, and a type object type corresponding to the containing class object for class methods.  In addition, the first argument in an instance method can be annotated with a type variable.  In this case the return type may use the same type variable, thus making that method a generic function.  For example::

    T = TypeVar('T', bound='Copyable')

    class Copyable:
        def copy(self: T) -> T:
            # return a copy of self
            ...

    class C(Copyable): ...
    c = C()
    c2 = c.copy()  # type here should be C

The same applies to class methods using Type[] in an annotation of the first argument::

    T = TypeVar('T', bound='C')

    class C:
        @classmethod
        def factory(cls: Type[T]) -> T:
            # make a new instance of cls
            ...

    class D(C): ...
    d = D.factory()  # type here should be D

Note that some type checkers may apply restrictions on this use, such as requiring an appropriate upper bound for the type variable used (see examples).
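Filling in the elided copy body with a minimal stand-in implementation (an assumption for illustration, not the PEP's own code), the self-typed pattern can be exercised at runtime:

```python
from typing import TypeVar

T = TypeVar('T', bound='Copyable')


class Copyable:
    def copy(self: T) -> T:
        # Minimal stand-in: construct a fresh instance of the *runtime*
        # class, so subclasses get back an instance of their own type.
        return type(self)()


class C(Copyable): ...


c2 = C().copy()
print(type(c2).__name__)  # C
```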

Version and platform checking
-----------------------------

Type checkers are expected to understand simple version and platform checks, e.g.::

    import sys

    if sys.version_info[0] >= 3:
        # Python 3 specific definitions
        ...
    else:
        # Python 2 specific definitions
        ...

    if sys.platform == 'win32':
        # Windows specific definitions
        ...
    else:
        # Posix specific definitions
        ...

Don't expect a checker to understand obfuscations like "".join(reversed(sys.platform)) == "xunil".

Runtime or type checking?
-------------------------

Sometimes there's code that must be seen by a type checker (or other static analysis tools) but should not be executed.  For such situations the typing module defines a constant, TYPE_CHECKING, that is considered True during type checking (or other static analysis) but False at runtime.  Example::

    import typing

    if typing.TYPE_CHECKING:
        import expensive_mod

    def a_func(arg: 'expensive_mod.SomeClass') -> None:
        a_var = arg  # type: expensive_mod.SomeClass
        ...

(Note that the type annotation must be enclosed in quotes, making it a "forward reference", to hide the expensive_mod reference from the interpreter runtime.  In the # type comment no quotes are needed.)

This approach may also be useful to handle import cycles.
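A quick runtime check confirms the constant's value outside of static analysis (expensive_mod here stands for any module that is costly, or impossible, to import at runtime):

```python
import typing

# TYPE_CHECKING is False when the program actually runs, so the
# guarded import below never executes outside static analysis.
print(typing.TYPE_CHECKING)  # False

if typing.TYPE_CHECKING:
    import expensive_mod  # seen only by type checkers
```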

Arbitrary argument lists and default argument values
----------------------------------------------------

Arbitrary argument lists can be type annotated as well, so that the definition::

    def foo(*args: str, **kwds: int):
        ...

is acceptable and it means that, e.g., all of the following represent function calls with valid types of arguments::

    foo('a', 'b', 'c')
    foo(x=1, y=2)
    foo('', z=0)

In the body of function foo, the type of variable args is deduced as Tuple[str, ...] and the type of variable kwds is Dict[str, int].

In stubs it may be useful to declare an argument as having a default without specifying the actual default value.  For example::

    def foo(x: AnyStr, y: AnyStr = ...) -> AnyStr:
        ...

What should the default value look like?  Any of the options "", b"" or None fails to satisfy the type constraint.  In such cases the default value may be specified as a literal ellipsis, i.e. the above example is literally what you would write.
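A small illustrative sketch (returning the containers rather than discarding them, an addition for demonstration) shows that the types a checker deduces match the ordinary tuple and dict that exist at runtime:

```python
from typing import Dict, Tuple


def foo(*args: str, **kwds: int) -> Tuple[Tuple[str, ...], Dict[str, int]]:
    # At runtime args is an ordinary tuple and kwds an ordinary dict;
    # the annotations describe their element types to a static checker.
    return args, kwds


print(foo('a', 'b', 'c'))  # (('a', 'b', 'c'), {})
print(foo(x=1, y=2))       # ((), {'x': 1, 'y': 2})
```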

Positional-only arguments
-------------------------

Some functions are designed to take their arguments only positionally, and expect their callers never to use the argument's name to provide that argument by keyword.  All arguments with names beginning with __ are assumed to be positional-only, except if their names also end with __::

    def quux(__x: int, __y__: int = 0) -> None:
        ...

    quux(3, __y__=1)  # This call is fine.
    quux(__x=3)       # This call is an error.