The new Magnum Python bindings, while still labeled experimental, already give you a package usable in real workflows — a NumPy-compatible container library, graphics-oriented math classes and functions, OpenGL buffer, mesh, shader and texture APIs, image and mesh data import, and an SDL / GLFW application class with key and mouse events. Head over to the installation documentation to get it yourself; if you are on ArchLinux or use Homebrew, packages are already there, waiting for you:

```sh
brew tap mosra/magnum
brew install --HEAD corrade magnum magnum-plugins magnum-bindings
```

And of course it has all the goodies you'd expect from a "Python-native" library — full slicing support, errors reported through Python exceptions instead of return codes (or hard asserts) and properties instead of setters/getters where it makes sense. To give you a quick overview of how it looks and how it is used, the first few examples are ported to it:
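To make that design direction concrete, here's a tiny pure-Python sketch (hypothetical code, not Magnum's actual classes) of what the "Python-native" style means in practice:

```python
# Hypothetical sketch of the "Python-native" API style -- not actual Magnum code
class PixelRow:
    def __init__(self, values):
        if not values:
            # an exception instead of a return code or a hard assert
            raise ValueError('pixel row must not be empty')
        self._values = list(values)

    @property
    def size(self):
        # a property instead of a get_size() getter
        return len(self._values)

    def __getitem__(self, index):
        # full slicing support, negative indices included
        return self._values[index]

row = PixelRow([10, 20, 30, 40])
print(row.size)    # 4
print(row[1:-1])   # [20, 30]
```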

Update September 18th, 2019: Added a note about -flto=jobserver and about the Pyrr math library, and made the Sphinx alternative / m.css Python doc generator more visible.

Enter pybind11

I discovered pybind11 by a lucky accident in early 2018 and immediately had to try it. Learning the basics and exposing some minimal matrix/vector math took me about two hours. It was extreme fun and I have to thank all pybind11 developers for making it so straightforward to use.

```cpp
py::class_<Vector3>(m, "Vector3")
    .def_static("x_axis", &Vector3::xAxis, py::arg("length") = 1.0f)
    .def_static("y_axis", &Vector3::yAxis, py::arg("length") = 1.0f)
    .def_static("z_axis", &Vector3::zAxis, py::arg("length") = 1.0f)
    .def(py::init<Float, Float, Float>())
    .def(py::init<Float>())
    .def(py::self == py::self)
    .def(py::self != py::self)
    .def("is_zero", &Vector3::isZero)
    .def("is_normalized", &Vector3::isNormalized)
    .def(py::self += py::self)
    .def(py::self + py::self)
    .def(py::self *= Float{})
    .def(py::self * Float{})
    .def(py::self *= py::self);
```

That's what it took to bind a vector class. However, different things took priority and so the prototype got shelved until it got revived again this year. But I learned one main thing — even just the math classes alone were so useful that I kept the built Python module around and used it from time to time as an enhanced calculator. Now, with the magnum.math module being almost complete, it's an everyday tool I use for quick calculations. Feel free to do the same.

```python
>>> from magnum import *
>>> Matrix3.rotation(Deg(45))
Matrix(0.707107, -0.707107, 0, 0.707107, 0.707107, 0, 0, 0, 1)
```

Quick, where are the minus signs in a 2D rotation matrix?
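If you'd rather not memorize the answer, it can be checked with nothing but the standard math module; for a counterclockwise rotation, the minus sign sits on the sine in the first row:

```python
import math

def rotation2d(angle_deg):
    """2D counterclockwise rotation matrix, as a list of rows."""
    a = math.radians(angle_deg)
    return [[math.cos(a), -math.sin(a)],
            [math.sin(a),  math.cos(a)]]

m = rotation2d(45)
print(m[0])  # [0.707..., -0.707...]: the minus sign is in the top right
print(m[1])  # [0.707...,  0.707...]
```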

Hard things are suddenly easy if you use a different language

```python
>>> a = Vector4(1.5, 0.3, -1.0, 1.0)
>>> b = Vector4(7.2, 2.3, 1.1, 0.0)
>>> a.wxy = b.xwz
>>> a
Vector(0, 1.1, -1, 7.2)
```

If you ever used GLSL or any other shader language, you probably fell in love with vector swizzles right at the moment you saw them … and then became sad after realizing that such APIs are practically impossible to have in C++. Swizzle operations are nevertheless useful, and assigning each component separately would be a pain, so Magnum provides Math::gather() and Math::scatter() that allow you to express the above:

```cpp
a = Math::scatter<'w', 'x', 'y'>(a, Math::gather<'x', 'w', 'z'>(b));
```

Verbose, but practically possible. The point, however, is that the above is implementable very easily in Python using __getattr__() and __setattr__() … and a ton of error checking on top.

… but on the contrary, C++ has it easier with overloads

I was very delighted upon discovering that pybind11 supports function overloads just like that — if you bind more than one function of the same name, it'll take a typeless (*args, **kwargs) and dispatch to the correct overload based on argument types. It's probably not blazingly fast (and in some cases you could probably beat its speed by doing the dispatch yourself), but it's there and much better than having to invent new names for overloaded functions (and constructors!). With the new typing module, it's possible to achieve a similar thing in pure Python using the @overload decorator — though only for documentation purposes, you're still responsible for implementing the type dispatch yourself. In the case of math.dot() implemented in pure Python, this could look like this:

```python
@overload
def dot(a: Quaternion, b: Quaternion) -> float: ...
@overload
def dot(a: Vector2, b: Vector2) -> float: ...
```
```python
def dot(a, b):
    ... # actual implementation
```

What was actually hard, though, was the following, which looks completely ordinary to a C++ programmer:

```python
>>> a = Matrix3.translation((4.0, 2.0))
>>> a
Matrix(1, 0, 4, 0, 1, 2, 0, 0, 1)
>>> a.translation = Vector2(5.0, 3.0)
>>> a
Matrix(1, 0, 5, 0, 1, 3, 0, 0, 1)
```

Is the Python language police going to arrest me now?

While the case of Matrix3.scaling() vs. mat.scaling() — where the former returns a scaling Matrix3 and the latter a scaling Vector3 out of a scaling matrix — was easier and could be done just via a dispatch based on argument types ("if the first argument is an instance of Matrix3, behave like the member function"), in the case of Matrix3.translation() it's either a static method or an instance property. Ultimately I managed to solve it by supplying a custom metaclass that does the correct dispatch when encountering access to the translation attribute. But yeah, while almost anything is possible in Python, it could give a hand here — am I the first person ever that needs this functionality?
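The dual static-method / instance-property behavior can be sketched in pure Python with a descriptor (a hypothetical stand-in to show the idea; the actual bindings solve it with a custom metaclass on the C++ side):

```python
# Hypothetical stand-in, not the bindings' actual implementation
class static_or_property:
    """Static method when accessed on the class, property on an instance."""
    def __init__(self, static, getter, setter):
        self.static, self.getter, self.setter = static, getter, setter

    def __get__(self, obj, objtype=None):
        if obj is None:             # Matrix3.translation(...) -- static method
            return self.static
        return self.getter(obj)     # mat.translation -- property read

    def __set__(self, obj, value):  # mat.translation = ... -- property write
        self.setter(obj, value)

class Matrix3:
    def __init__(self, cells):
        self.cells = cells          # 3x3 matrix as a list of rows

    def _get_translation(self):
        return (self.cells[0][2], self.cells[1][2])

    def _set_translation(self, v):
        self.cells[0][2], self.cells[1][2] = v

    translation = static_or_property(
        lambda v: Matrix3([[1, 0, v[0]], [0, 1, v[1]], [0, 0, 1]]),
        _get_translation, _set_translation)

a = Matrix3.translation((4.0, 2.0))  # behaves like a static method here
print(a.translation)                 # (4.0, 2.0) -- and like a property here
a.translation = (5.0, 3.0)
print(a.cells)                       # [[1, 0, 5.0], [0, 1, 3.0], [0, 0, 1]]
```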
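And going back to the swizzles from earlier: those really are just a few lines of __getattr__()/__setattr__(). A minimal sketch, with none of the error checking a real implementation would need:

```python
# Minimal swizzle sketch -- no error checking, not the actual bindings' code
class Vec:
    _components = 'xyzw'

    def __init__(self, *values):
        # bypass our own __setattr__, which only understands swizzle names
        object.__setattr__(self, 'data', list(values))

    def __getattr__(self, name):
        # 'b.xwz' -> Vec(b.x, b.w, b.z)
        return Vec(*(self.data[self._components.index(c)] for c in name))

    def __setattr__(self, name, value):
        # 'a.wxy = v' assigns v's components to w, x, y in that order
        for c, v in zip(name, value.data):
            self.data[self._components.index(c)] = v

a = Vec(1.5, 0.3, -1.0, 1.0)
b = Vec(7.2, 2.3, 1.1, 0.0)
a.wxy = b.xwz
print(a.data)  # [0.0, 1.1, -1.0, 7.2]
```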

Zero-copy data transfer

One very important part of Python is the Buffer Protocol. It allows zero-copy sharing of arbitrarily shaped data between C and Python — simple tightly-packed linear arrays, 2D matrices, or a green channel of a lower right quarter of an image flipped upside down. Having full support for the buffer protocol was among the reasons why Containers::StridedArrayView went through a major redesign earlier this year. This strided array view is now exposed to Python as containers.StridedArrayView1D (or MutableStridedArrayView1D, and their 2D, 3D and 4D variants) and thanks to the buffer protocol it can be seamlessly converted from and to numpy.array() (and Python's own memoryview as well). Transitively that means you can unleash numpy-based Python algorithms directly on data coming out of ImageView2D.pixels() and have the modifications immediately reflected back in C++.

Because, again, having a specialized type with further restrictions makes the code easier to reason about, containers.ArrayView (and its mutable variant) is exposed as well. This one works only with linear, tightly packed memory and thus is suitable for taking views onto bytes or bytearray, file contents and such. Both the strided and linear array views of course support the full Python slicing API. As an example, here's how you can read an image in Python, pass its contents to a Magnum importer and get the raw pixel data back:

```python
import numpy as np
from magnum import trade

def consume_pixels(pixels: np.ndarray): ...

importer: trade.AbstractImporter = \
    trade.ImporterManager().load_and_instantiate('AnyImageImporter')
with open(filename, 'rb') as f:
    importer.open_data(f.read())
image: trade.ImageData2D = importer.image2d(0)

# green channel of a lower right quarter of a 256x256 image flipped upside down
consume_pixels(image.pixels[:127:-1, 128:, 1:2])
```

Just one question left — who owns the memory here, then? To answer that, let's dive into Python's reference counting.
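(As a quick aside before that: the zero-copy behavior itself is easy to see with Python's own buffer-protocol types. A memoryview slice of a bytearray shares memory with its owner, the same way the views above share memory with the C++ side:)

```python
data = bytearray(b'hello world')

# a zero-copy slice: no bytes are duplicated, the view borrows data's memory
view = memoryview(data)[6:]
view[0] = ord('W')  # mutate through the view...

print(data)  # ...and the owner reflects it: bytearray(b'hello World')
```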

Reference counting

In C++, views are one of the more dangerous containers, as they reference data owned by something else. There you're expected to ensure the data owner stays in scope for at least as long as the view on it. A similar thing holds for other types — for example, a GL::Mesh may reference a bunch of GL::Buffers, or a Trade::AbstractImporter loaded from a plugin needs its plugin manager to be alive to keep the plugin library loaded.

(Diagram: reference hierarchy — pixels → image, image → f, importer → manager, importer → f. The dim dashed lines show additional potential dependencies that would happen with future zero-copy plugin implementations — when the file format allows it, these would reference the data in f directly instead of storing a copy themselves.)

However, imposing similar constraints on Python users would be daring too much, so all exposed Magnum types that refer to external data implement reference counting under the hood. The designated way of doing this with pybind11 is wrapping everything in std::shared_ptr. On the other hand, Magnum is free of any shared pointers by design, and adding them back just to make Python happy would make everyone else angry in exchange. What Magnum does instead is extending the so-called holder type in pybind11 (which doesn't have to be std::shared_ptr; std::unique_ptr or a custom pointer type is fine as well) and storing references to instance dependencies inside it. The straightforward way of doing this would be to take GL::Mesh, subclass it into a PyMesh, store buffer references inside it and then expose PyMesh as gl.Mesh instead.
But compared to the holder type approach this has a serious disadvantage: every API that works with meshes would suddenly need to work with PyMesh instead, and that's not always possible. For testing and debugging purposes, references to memory owners or other data are always exposed through the API — see for example ImageView2D.owner or gl.Mesh.buffers.

Zero-waste data slicing

One thing I got used to, especially when writing parsers, is to continually slice the input data view as the algorithm consumes its prefix. Consider the following Python code, vaguely resembling an OBJ parser:

```python
view = containers.ArrayView(data)
while view:
    # Comment, ignore until EOL
    if view[0] == '#':
        while view and view[0] != '\n':
            view = view[1:]
    # Vertex / face
    elif view[0] == 'v':
        view = self.parse_vertex(view)
    elif view[0] == 'f':
        view = self.parse_face(view)
    ...
```

On every operation, the view gets some prefix chopped off. While not a problem in C++, this would generate an impressively long reference chain in Python, preserving all intermediate views from all loop iterations.

(Diagram: a chain sliceN → ... → slice4 → slice3 → slice2 → slice1 → view → data.)

While the views are generally smaller than the data they refer to, with big files it could easily happen that the overhead of the views becomes larger than the parsed file itself. To avoid such endless growth, slicing operations on views always reference the original data owner, allowing the intermediate views to be collected. In other words, with containers.ArrayView.owner, view[:].owner is view.owner always holds.

(Diagram: view and every slice, slice1 through sliceN, point directly at data.)
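That owner-flattening behavior can be sketched in a few lines of plain Python (a hypothetical class mirroring the documented containers.ArrayView.owner semantics, not the actual bindings):

```python
class ArrayView:
    """Sketch of a view whose slices reference the original owner directly."""
    def __init__(self, owner, start=0):
        self.owner = owner   # always the original data, never another view
        self.start = start

    def __bool__(self):
        # truthy while there is unconsumed data left
        return self.start < len(self.owner)

    def __getitem__(self, s):
        # slicing flattens the chain: the new view points at self.owner,
        # so intermediate views can be garbage-collected
        return ArrayView(self.owner, self.start + (s.start or 0))

data = bytearray(b'v 1.0 2.0 3.0')
view = ArrayView(data)
for _ in range(4):
    view = view[1:]  # chop off a prefix, as a parser would

print(view.owner is data)           # True: no chain of intermediate views
print(view[:].owner is view.owner)  # True
```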