Early excision of burns

The idea of excising burns existed in ancient times. Ambroise Paré (1510–1590), some 400 years ahead of his time, was one of the first to describe early burn wound excision. In 1607, Hildanus recommended incisions into burn eschars to allow drainage of serous fluid and better penetration of medications. However, surgical eschar removal was hampered by poor hygiene and the lack of antiseptic surgical techniques, which led to high rates of infection and blood loss; moreover, wound management was not fully understood. The practice of early burn excision was therefore rightly abandoned, even though physicians recognized the importance of early removal of dead tissue to reduce the inflammatory response. Patients with full-thickness burns could only languish in hospitals while their eschars, which were invariably infected, sloughed off, leaving open wounds that healed by secondary intention with remarkable contractures and disabilities.

Significant advances in infection control made burn eschar excision possible: in 1865, Lister began successfully using carbolic acid (phenol) to sterilise surgical instruments and clean wounds. This, along with advances in topical infection control, led to a significant decrease in post-operative infections and improved survival. Meanwhile, by the late 1950s the understanding and management of burn shock were already well underway. Improvements in the treatment of shock, such as better fluid resuscitation techniques, blood volume monitoring and recognition of the importance of urine output measurement, allowed more effective treatment of burn shock, and consequently more patients survived the initial shock stage. This in turn made the excision of full-thickness burns more feasible and safe.

World War II brought a tremendous increase in burn victims, and physicians had to find ways to help patients recover faster. They initially approached the problem of full-thickness burns by chemical debridement with pyruvic acid and starch, which allowed grafting as early as 6 days after debridement.[53] Several other authors reported their successful attempts at surgical excision of full-thickness burns in the 1940s,[54–57] but most reports were anecdotal.

In 1960, Jackson et al. published a series of pilot and controlled trials showing that excision and grafting of 20–30% of TBSA could be achieved safely as long as shock was controlled via red cell volume monitoring,[58] and recommended the technique for full-thickness burns (where natural slough separation would take 6 weeks), burns of 15–20% TBSA and deep circumferential burns of the trunk that affect respiration. However, owing to small numbers, they could not conclusively demonstrate an impact of the technique on mortality, infection or healing time, which probably prevented it from being accepted by the majority of surgeons at the time. What probably changed practice was the introduction of tangential excision by Zora Janzekovic in the 1970s, a significant improvement in the technique.[59] An important modification this technique introduced was that the excision included not only slough but also the damaged dermis, down to bleeding tissue. Her technique was advocated by many, several of whom (including Janzekovic herself)[59] reported improved mortality and decreased hospital stay compared with conservative treatment.[60,61] The technique became the standard of care in most leading burn centres across the globe, but was limited to small burns that could be covered by skin from the patient's own donor sites.

Wound cover after major burn excision was the challenge that delayed the practice until the 1970s, when xenograft (pig skin) and cadaver skin became more widely available. Although Bettman reported success in treating children with large full-thickness burn injuries with allograft skin as far back as 1938,[62] modern skin banking only began with the establishment of the United States Navy tissue bank in 1949.[63]

In one of the first publications on early excision of major burns, Tompkins et al. credited the introduction of early excision and grafting of large burns with a dramatic decrease in mortality in children, from 24% to 7%.[64] Dr. David Herndon, in a series of landmark papers, demonstrated the benefit of early total burn wound excision for survival and improved outcomes.[61,64]

Skin grafting and the development of skin substitutes

The earliest record of skin grafting goes back to the 5th century AD, when an Indian surgeon, Sushrutha, repaired noses that had been amputated as punishment for crimes, using strips of skin from the forehead which were flapped downwards and grafted over the wound.[65,66] Sushrutha is also documented to have transplanted skin from the buttock to the nose. The first documentation of a modern skin graft in humans was by Carl Bunger in 1823. This again involved a nose wound, for which full-thickness skin from the inner thigh was used. At this time, however, the success rate of skin grafts was low due to inefficient harvesting and the use of large, thick grafts. Free skin grafting was successfully reproduced in 1869 by Reverdin, who was still a student at the time, to encourage healing and closure of slow-healing or chronic wounds.[2] Reverdin utilized "pinch grafts": small circular skin discs obtained by pinching up a small amount of skin and cutting it off. This was done repeatedly to produce islands of small grafts that were used to cover the wound, and it left donor sites that healed quickly owing to their small size. The technique was soon popularized in England by George Pollock in 1870.[67] Advances in the quality of surgical instruments meant that thinner grafts than previously possible could be harvested. Thiersch took advantage of this and in 1874 developed and advocated the use of razor-thin skin grafts, or "razor flaps".[68] However, these grafts did not produce satisfactory results in general and were limited to the treatment of small ulcerated wounds. Seven years later, Girdner reported the first successful use of allogeneic skin in burn wound coverage.[69]

In the 1920s, Blair and Brown discovered that deep islands of hair follicle and sebaceous gland epithelial cells were responsible for initiating healing at donor sites. This meant that grafts could be harvested to different depths as long as these islands were preserved. Tools that allowed the surgeon to control the depth of skin harvested then developed quickly. Surgeons initially had to harvest thin grafts with blades that offered no mechanism to control graft thickness, such as the Blair and Catlin knives.[70] Hofman and Finochietto then developed screw-adjusted knives that permitted precise regulation of the thickness harvested.[71] Split-thickness skin grafts (so called because the tools used to harvest them resembled those used for splitting leather) only started becoming more popular in the 1930s, due in part to the development and availability of more reliable instruments. Humby developed a knife that allowed control over the depth harvested[71] in 1936, followed by an adjustable dermatome by E.C. Padgett in 1939.

The method of meshing grafts can be traced back to Lanz in 1907, who designed a cutting tool consisting of a series of small knives mounted in parallel to make multiple holes in a graft, forming a mesh.[72] A method of expanding graft size was described by Meek in 1958,[73] utilizing a special dermatome (the Meek-Wall dermatome) and prefolded gauzes, which allowed a nine-fold expansion of the harvested graft.[74] The Meek dermatome was, however, viewed as cumbersome and demanding of skill, and was thus superseded in 1964 by the simpler 'mesh dermatome' developed by J.C. Tanner et al., which allowed a graft to be expanded to three times the original donor site size.[75] The Meek technique has nevertheless seen a revival in recent years, including the development of a modified, air-driven dermatome by Kreis et al. in 1994, and it has an advantage over meshed grafts when donor sites are limited, e.g. in larger burn wounds.[76–78] Additionally, a study investigating the real expansion rates of skin meshers and Meek micrografts showed that the Meek technique provided more reliable and valid expansion rates than the skin meshers.[74,79]

Despite the introduction of meshing and micrografting techniques, the supply of skin was still insufficient to meet the increased demand in cases of large burns, especially after the introduction of early total burn excision. Cryopreservation and long-term storage of human skin for both autologous and allogeneic transplantation were therefore developed in the 1940s–50s,[80,81] a major milestone being the introduction of glycerol for cryopreserving skin.[82]

Skin allografts allowed the coverage of burns in cases of extensive skin loss or limited donor sites for autograft harvesting, and could be used with or without concurrent skin autografts. Jackson described a combined grafting technique[83] that utilized alternate placement of narrow strips of allograft and autograft onto a granulating or excised wound, hence its other name, 'tiger-striping'. Following adherence of the grafts, a process termed 'creeping substitution' occurs, whereby autologous epithelial cells migrate across the wound surface between the autograft sheets and underneath the allograft, causing the allograft sheet to slowly lift and separate and leaving an epithelialized wound bed. This technique was adapted by the Chinese surgeons Zhang and co-workers,[84–86] who minced autologous skin into pieces less than 1 mm in diameter and seeded these 'micrografts' into the dermis of large sheets of allograft skin before applying them to the wound. The autograft epidermal cells proliferate and migrate and, via creeping substitution, eventually lift and separate the allograft. While this method achieves effective skin expansion ratios of up to 1:18, it has been associated with severe wound contraction compared with sheet autografts.[87] Currently, micrografting techniques, including the Meek technique, are more widely adopted in China than in any other country.

Unfortunately, allogeneic skin transplants trigger a potent reaction by the host immune system and are invariably rejected acutely, and immunosuppressive treatments that have proven effective in preventing rejection of organ transplants have little or no effect on skin transplants. Thus, in the UK, the use of allografts has been slowly phased out.

Another method of wound cover had to be developed, and in the 1970s Yannas, a scientist at the Massachusetts Institute of Technology, and Burke, a surgeon at the Massachusetts General Hospital and Shriners Burns Center, collaborated to develop and produce the first artificial skin, Integra®. It consisted of two layers: a collagen–chondroitin matrix with a silicone layer on top. The first multicentre clinical trial was conducted and published in 1988,[88] and Integra was granted an FDA licence in 1996. It is now widely used in acute burns and reconstructive surgery. Other artificial skin substitutes include Apligraf, developed by Bell et al., and Matriderm, by Otto Suwelack.[89]