Directed Living Kidney Donors
Living kidney donation provides better patient and allograft survival than deceased-donor transplantation, especially when the live-donor transplant is performed before the onset of dialysis (Figures 36–1 and 36–2) (Meier-Kriesche et al, 2002). Living donation rates vary worldwide, but in many Western countries, Asia, and the Middle East, it has become the predominant form of kidney transplantation. In the United States, the annual number of live kidney donors has surpassed the number of deceased donors since 2001, although the absolute number of transplants from deceased donors still outnumbers those from living donors (Klein et al, 2010). Living donors are most often directed; that is, they have an established relationship with the intended recipient. Based on tissue typing disparities (HLA mismatches), an immunologic hierarchy can be established for the best “match” (Table 36–1). The advantages for identical twins and HLA-identical siblings are quite significant, while all other live-donor combinations are similar and provide significant advantages over deceased donation. More than 30% of live donors are genetically unrelated to their recipients and represent the fastest growing category of donors. These living unrelated donors (LURDs) (Figure 36–3) may be a spouse, a friend, or even someone anonymous to the recipient (nondirected). The ethical underpinning of this evolving practice is the excellent survival achieved by LURD transplantation, which is similar to the survival of a kidney from a parent or child, from a haploidentical sibling, or from a completely mismatched related donor (Cecka, 2004). These observations have influenced decisions regarding the suitability of live donors who are spouses, friends of the recipients, or anonymous. Today, there is little concern about the degree of HLA match if the ABO blood type and T-cell crossmatch are compatible.
The gender of the living donor in the United States is more frequently female, constituting 60% of the live-donor population (Axelrod et al, 2010). This pattern is similar to what has been observed worldwide, with more male recipients undergoing live-donor transplantation. However, among similarly matched groups, kidneys that provide a greater “nephron dose” (anatomically ideal, young, large, male donors) are often preferred.
Living donor (LD) groups.
Table 36–1. Immunologic Hierarchy of Kidney Donors.
Haploidentical: siblings, parents, children, other relatives
Zero haplotype relatives
Living unrelated: spouses, friends
Nondirected Living Kidney Donors
The extreme shortage of kidneys to meet the demand of waiting recipients, coupled with the success of LURD kidney transplantation, has opened up creative ways to expand the pool of live donors. In particular, there are individuals who wish to be anonymous donors, that is, “nondirected” or “altruistic” donors. However, in the United States, living-donor exchanges must adhere to section 301 of the National Organ Transplant Act of 1984 (NOTA), which states, “It shall be unlawful for any person to knowingly acquire, receive, or otherwise transfer any human organ for valuable consideration for use in human transplantation.” Valuable consideration according to this Act has traditionally been considered to be monetary transfer or a transfer of valuable property between the donor and the recipient. The donation of an organ is properly considered to be a legal gift. With these constraints, any person who is competent, willing to donate, free of coercion, and found to be medically and psychosocially suitable may be a live kidney donor (Adams et al, 2002). Three protocols of nondirected living donation have been developed to accommodate such donors: (1) a live-donor paired exchange, (2) a live-donor/deceased-donor exchange, and (3) altruistic donation.
Live-Donor Paired Exchange
This approach involves exchanging donors who are ABO or crossmatch incompatible with their intended recipients so that each donates a kidney to a compatible recipient (Delmonico, 2004). The exchange derives the benefit of live donation but avoids the risk of incompatibility; several computer algorithms have been modeled to execute the exchange (Montgomery et al, 2005). The best example is two families, one with an A donor and a B recipient and the second with a B donor and an A recipient. Swapping donors solves the dilemma. Live-donor exchange procedures have been performed worldwide and are best performed with large sharing pools (Segev et al, 2005). The popularity of paired kidney donor exchanges has been bolstered by the demonstrated safety of shipping live donor kidneys between centers.
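The two-family swap described above amounts to a search for donor exchanges that restore ABO compatibility. The sketch below (a toy illustration with simplified ABO rules, no crossmatch, and hypothetical pair data, not any registry's actual matching algorithm) shows the idea:

```python
# Illustrative two-pair kidney exchange: find donor swaps that make both
# transplants ABO compatible. Simplified model, not a registry algorithm.

def abo_compatible(donor, recipient):
    """Simplified ABO rules: O donates to all, AB receives from all,
    identical types are compatible."""
    return donor == "O" or recipient == "AB" or donor == recipient

def find_swaps(pairs):
    """pairs: list of (donor_abo, recipient_abo) incompatible pairs.
    Returns index pairs (i, j) where swapping donors works for both families."""
    swaps = []
    for i in range(len(pairs)):
        for j in range(i + 1, len(pairs)):
            di, ri = pairs[i]
            dj, rj = pairs[j]
            if abo_compatible(di, rj) and abo_compatible(dj, ri):
                swaps.append((i, j))
    return swaps

# The textbook example: family 1 has an A donor and a B recipient,
# family 2 has a B donor and an A recipient; swapping donors solves both.
pairs = [("A", "B"), ("B", "A")]
print(find_swaps(pairs))  # [(0, 1)]
```

Real exchange registries extend this pairwise search to longer cycles and chains, which is why large sharing pools matter.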
Another system of donor exchange was devised by centers in UNOS region 1: the live-donor kidney is given to a compatible individual on the waiting list in “exchange” for the next blood type-compatible deceased donor in the region, which goes to the live donor's intended recipient. With this method, two patients are transplanted instead of only one, although some fine-tuning of donor organ quality and age is necessary (Delmonico, 2004).
Altruistic kidney donation (to a complete stranger) is under development at several centers and must be approached with utmost sensitivity, especially today when organ exchanges are advertised on the Internet. Participating centers usually offer the kidney to the highest-ranked wait-listed patient at their center after a match run. However, the use of such altruistic donors may aid the formation of live-donor “chains” (Rees et al, 2009). The motives of the nondirected donor should be established with care to avoid a prospective donor's intention of remedying a psychological disorder via donation. Many who inquire about altruistic donation have only a limited understanding of these issues, and upon learning these basic realities, about 60% withdraw from the process (Jacobs et al, 2004).
From its inception, the removal of a kidney from a healthy individual to benefit another has been problematic. The practice is based upon the belief that the removal of one kidney is safe and does not diminish survival or significantly harm long-term kidney function. This notion derives from follow-up of patients up to 45 years after nephrectomy for trauma (Narkun-Burgess et al, 1993). Ibrahim et al (2009) reported the follow-up of 3698 kidney donors between 1963 and 2007 and found that survival and the risk of ESRD appeared to be similar to those in the general population. They found that ESRD developed in 11 donors, a rate of 180 cases per million per year versus 268 cases per million per year in the general population. At a mean (±SD) of 12.2 ± 9.2 years after donation, 85.5% of a subgroup of 255 donors had a measured GFR >60 mL/min/1.73 m2, 32.1% had hypertension, and 12.7% had albuminuria. Older age and higher BMI, but not a longer time since donation, were associated with both a GFR <60 mL/min and hypertension. In addition, a recent analysis of 4650 donors between 1987 and 2007, in whom 76.3% were white, 13.1% black, 8.2% Hispanic, and 2.4% other, found that both black and Hispanic donors had an increased risk of hypertension, diabetes mellitus requiring drug therapy, and chronic kidney disease (Lentine et al, 2010). ESRD developed more frequently in blacks, but it was <1% for the donor population studied. These recent studies are in line with prior reports that unilateral nephrectomy caused an average decrease of 30% in the GFR that tended to improve with each 10 years of follow-up (average increase 1.4 mL/min per decade); a small, progressive increase in proteinuria (average 76 mg/decade); and variable effects on hypertension (Kasiske et al, 1995). Thus, the published evidence indicates that there is little long-term medical risk to a healthy donor after unilateral nephrectomy.
Nevertheless, Ellison et al (2002) identified 56 live kidney donors who were subsequently listed for a kidney transplant. The rate of ESRD in donors was calculated to be 0.04%, comparable with the rate (0.03%) in the general population. The renal diagnoses in these patients included hypertension, focal sclerosis, chronic glomerulonephritis, familial nephropathy, diabetes, and others. Recently, some have advocated the use of donors with isolated medical abnormalities such as hypertension, obesity, dyslipidemia, or stones, a practice that may not reproduce the safety profiles previously reported. As of 2011, prior living donors receive preference for deceased-donor kidneys should they develop ESRD (UNOS Web site).
The imbalance between the supply of brain-dead deceased donors and the growing demand for kidneys has led to many innovative uses of organs that were excluded in the past. These generally include kidneys from donors older than 60 years; donors with systemic disease such as atherosclerosis, hypertension, or early diabetes; donors with cardiac arrest or significant hypotension; and some with prior exposure to viruses and/or infections that have resolved (Ismail and Flechner, 2006). While kidneys that are severely traumatized or come from donors with active cancer, sepsis, or HIV–AIDS are excluded, a number of donor organs with extended criteria that convey about a 10% worse overall graft survival have been incorporated into the donor pool. To maximize kidney usage, the following categories have been developed.
Standard criteria donors: most individuals who meet the criteria for brain death, from age 5 to 60 years, with normal kidney function and no history of systemic or infectious disease.
Expanded criteria donors: kidneys from brain-dead donors carrying a 1.7-fold relative risk of graft failure. These criteria were developed from a consensus conference that analyzed registry survival data (Rosengard et al, 2002). They include any donor older than 60 years, or a donor aged 50–59 years with at least two of the following: a history of hypertension, death from cerebrovascular accident (CVA), or terminal creatinine >1.5 mg/dL (Table 36–2). Informed consent of the recipient is requested to receive an expanded criteria donor (ECD) kidney.
Table 36–2. Expanded Criteria Kidney Donors.
Donor age 50–59 years, with at least two risk factors:
CVA + HTN + Cr >1.5
CVA + HTN
CVA + Cr >1.5
HTN + Cr >1.5
Donor age >60 years: none of the above required
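The categories in Table 36–2 can be expressed as a simple rule; this sketch assumes the consensus two-of-three threshold for donors in their fifties and treats age over 60 as qualifying alone:

```python
def is_expanded_criteria(age, cva_death=False, htn=False, creat=1.0):
    """Expanded criteria donor (ECD) check, per the consensus definition:
    age over 60 qualifies alone; age 50-60 qualifies with at least two of
    CVA as cause of death, history of hypertension, creatinine > 1.5 mg/dL."""
    if age > 60:
        return True
    if age >= 50:
        risk_factors = sum([cva_death, htn, creat > 1.5])
        return risk_factors >= 2
    return False

print(is_expanded_criteria(65))                             # True: age alone
print(is_expanded_criteria(55, cva_death=True, htn=True))   # True: two factors
print(is_expanded_criteria(55, htn=True))                   # False: one factor
```

In practice, allocation systems apply such rules to registry data rather than at the bedside, but the logic is the same.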
Donation after Cardiac Death
When a potential donor does not meet brain death criteria but has an irretrievable head injury, viable organs for transplant can be procured after a controlled cardiac arrest. Such kidneys experience a greater incidence of delayed graft function (DGF), but long-term function is comparable with that of standard donor kidneys (Rudich et al, 2002).
Dual Kidney Transplants
At the extremes of donor age, one kidney may not be sufficient to deliver an adequate GFR (nephron dose) to an adult recipient. In these instances, using both kidneys from a single donor can overcome these limitations.
Kidneys from donors younger than 5 years (often <6 cm in length) have a historically higher failure rate from technical problems and develop hyperfiltration injury (proteinuria) when transplanted into adults (Bresnahan et al, 2001). Both kidneys can be transplanted en bloc, attached to the donor aorta and vena cava, in a more reliable fashion (Hobart et al, 1998). Such kidneys will grow to adult size in 6–12 months.
When kidneys have extremely unfavorable risk factors for graft success due to insufficient nephron mass, transplanting both may provide a successful outcome (Bunnapradist et al, 2003). Such adult dual transplants can be placed in either iliac fossa, or preferably on the same side through one incision (Flechner, 2008). The criteria established for dual kidney allocation appear in Table 36–3. This approach utilizes kidneys that in the past were often discarded.
Table 36–3. Criteria for Adult Dual Cadaveric Kidney Transplants.
(A) Donor age >60 years
(B) Estimated donor creatinine clearance <65 mL/min based upon serum creatinine upon admission
(C) Rising serum creatinine (>2.5 mg/dL) at the time of retrieval
(D) History of medical disease in donor (defined as either long-standing hypertension or diabetes mellitus)
(E) Adverse donor kidney histology (defined as moderate-to-severe glomerulosclerosis, >15% and <50%)
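Criterion (B) calls for an estimated creatinine clearance from the admission serum creatinine; the table does not specify a formula, but a common bedside estimate is Cockcroft-Gault, sketched here with hypothetical donor values:

```python
def cockcroft_gault(age, weight_kg, serum_creat_mg_dl, female=False):
    """Cockcroft-Gault estimated creatinine clearance (mL/min):
    ((140 - age) x weight) / (72 x serum creatinine), x0.85 for females."""
    crcl = ((140 - age) * weight_kg) / (72.0 * serum_creat_mg_dl)
    return crcl * 0.85 if female else crcl

# Hypothetical example: a 62-year-old, 70-kg male donor with serum
# creatinine 1.2 mg/dL falls just below the 65 mL/min dual-kidney cutoff.
print(round(cockcroft_gault(62, 70, 1.2), 1))  # 63.2
```

Note that this is an estimate only; measured clearances and donor biopsy findings carry more weight in the actual allocation decision.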
Extracorporeal Renal Preservation
Simple Hypothermic Storage and Flush Solutions
Once removed, kidneys are flushed to remove blood and stored in a hyperosmolar, hyperkalemic, and hyponatremic solution at 4–10°C to minimize ischemic injury (cellular swelling). This is usually sufficient for up to 24 hours of preservation; longer cold ischemic times (up to 40 hours) have been reported but result in higher rates of DGF. A commercial storage solution from the University of Wisconsin (UW) is frequently used, which contains inert substrates such as lactobionate, raffinose, and hydroxyethyl starch, with adenosine as an energy substrate. Recently, a less viscous alternative, histidine-tryptophan-ketoglutarate (HTK) solution, has been shown to yield similar results with cold ischemia times both <24 and >24 hours (Agarwal et al, 2006).
Hypothermic Pulsatile Perfusion
Hypothermic pulsatile perfusion is an alternative method of preservation, which takes advantage of a continuous pulsatile flow through the graft. Some feel that such hydrodistention is therapeutic in dilating the ischemic renal microcirculation and permits the delivery of vasodilator drugs (ie, verapamil, beta-blockers). It also permits measurement of flow, pulse pressure, and resistance through the graft, which is an accurate method to determine viability of the kidney (Schold et al, 2005). Pulsatile perfusion is more costly and requires investment in a preservation unit (Waters Co, Rochester, MN) and a technologist, but it has been gaining popularity due to the increasing number of ECD and donation after cardiac death (DCD) donors that are considered for transplant (Matsuoka et al, 2009; Shah et al, 2008).
The Major Histocompatibility Complex
The major histocompatibility complex (MHC) describes a region of genes located on chromosome 6 in man, which encode proteins that are responsible for the rejection of tissue between different species or members of the same species (Flechner et al, 2011). The cell surface MHC markers are called human leukocyte antigens (HLAs), because they were first identified on white blood cells. There are two major types of HLA antigens, termed class I and class II. Virtually all nucleated cells express HLA class I antigens, while class II antigens are primarily found on B cells, monocytes, macrophages, and antigen-presenting cells. Each individual inherits two serologically defined class I (named A and B) and one class II (named DR) antigen from each parent, so six HLA antigens constitute an individual's tissue type. One set of HLA A, B, and DR antigens inherited on one chromosome from a parent is called a haplotype, so that HLA-identical siblings have inherited both of the same haplotypes. The HLA molecules are highly polymorphic (over 170 defined), so it is very unusual for two unrelated individuals to have the same tissue type of six HLA antigens. HLA antigens not shared between two individuals will generate an immune response. Therefore, the term HLA matching describes the number of shared antigens. One can generate a hierarchical rating of genetic HLA similarities, which roughly correlates with the risk for rejection and eventual kidney transplant outcomes, ranging from identical twins to deceased donor (DD) (Table 36–1). In clinical practice, the impact of HLA on graft survival is small in the initial years but may play an important role after 5–10 years. No doubt other factors affect survival: especially donor organ quality (age, function, size, etc) as well as recipient age and comorbidities. However, at present, six-antigen-matched (or zero HLA mismatched) deceased donor kidneys are shared nationally due to the beneficial effect on immunological outcomes.
In addition, HLA antigen matches also play a role in the algorithm for distribution of deceased donor kidneys with more points assigned for better matches.
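The 0–6 mismatch count described above can be illustrated as a comparison of donor and recipient typings at the A, B, and DR loci (a simplification using serologic antigen names; the example typings are hypothetical):

```python
def hla_mismatches(donor, recipient):
    """Count donor HLA antigens (at the A, B, and DR loci) not present in
    the recipient: the conventional 0-6 'mismatch' score. Simplified model
    with antigens as plain strings; real typing uses allele-level nomenclature."""
    mismatches = 0
    for locus in ("A", "B", "DR"):
        donor_antigens = donor[locus]
        recipient_antigens = recipient[locus]
        mismatches += sum(1 for ag in donor_antigens
                          if ag not in recipient_antigens)
    return mismatches

# Hypothetical typings: donor's A2 and B8 are absent in the recipient.
donor = {"A": ["A1", "A2"], "B": ["B7", "B8"], "DR": ["DR4", "DR15"]}
recip = {"A": ["A1", "A3"], "B": ["B7", "B44"], "DR": ["DR4", "DR15"]}
print(hla_mismatches(donor, recip))  # 2
```

A score of 0 corresponds to the "zero HLA mismatched" kidneys that are shared nationally; 6 is a complete mismatch.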
Preformed circulating anti-HLA antibodies against the specific phenotype of the donor will lead to acute (if not hyperacute) rejection. Such antibodies (usually IgG) are detected by crossmatching the sera of the recipient with lymphocytes of the donor and adding complement. Such complement-dependent cytotoxicity (CDC) will kill the donor cells and is indicative of deleterious clinical outcome. A similar, yet more sensitive, test has been developed using flow cytometry to identify the presence of anti-HLA antibodies bound to the surface of donor lymphocytes. A crossmatch against both donor T and B lymphocytes is performed within 24 hours of surgery, and transplants are not done if these antibodies are strongly present. In addition, the ABO system will trigger CDC against the mismatched blood group antigens (glycoproteins) present on many tissues. Therefore, transplants are usually done only between ABO compatible individuals. In the last few years, more transplants have been done with weak ABO incompatibilities (low anti-A or anti-B titers) with good outcomes (Montgomery et al, 2009).
At monthly intervals, waiting patients have their serum screened for the presence of anti-HLA antibodies against a panel of HLA phenotypes (lymphocytes) that represent the general population. The result is reported as a percentage of the total, referred to as percent reactive antibody (PRA). Those with high titers (>50%) of anti-HLA antibody against the broad population are said to be sensitized and will find it very hard to find a crossmatch-negative donor. Sensitized patients waiting for an organ depend on better HLA matches to find a crossmatch-negative donor (McCune et al, 2002). Sensitization to HLA can occur from prior blood transfusions, viral infections, pregnancy, or previous transplants.
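The PRA figure itself is simple arithmetic: the percentage of panel phenotypes with which the patient's serum reacts. A minimal sketch (modern laboratories typically compute a "calculated PRA" from HLA antigen frequencies instead):

```python
def percent_reactive_antibody(panel_reactions):
    """PRA: percentage of a representative cell panel against which the
    patient's serum shows a positive reaction. panel_reactions is a list
    of booleans, one per panel member."""
    return 100.0 * sum(panel_reactions) / len(panel_reactions)

# Hypothetical example: serum reacting with 30 of 60 panel phenotypes
# gives a PRA of 50%, the threshold above which a patient is termed sensitized.
reactions = [True] * 30 + [False] * 30
print(percent_reactive_antibody(reactions))  # 50.0
```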
The development of de novo donor-specific or nondonor-specific anti-HLA antibodies after the transplant has a deleterious effect on outcomes. Both a greater frequency of acute and chronic rejection and lower graft survival have been reported among those patients with these antibodies detected by flow cytometry (El Fettouh et al, 2001; Hourmant et al, 2005). The presence of these antibodies may identify those recipients who need more rather than less immunosuppression. The recent introduction of solid phase technology, in which specific HLA antigens bound to synthetic beads can be used as a target to screen sera for the presence of HLA antibodies, has expanded our ability to monitor recipients before and after transplant (Lefaucheur et al, 2010).
Donor Nephrectomy for Transplantation
Removal of a kidney for transplant depends upon minimizing both surgical injury and warm ischemia, which will hasten the recovery of function in the recipient. It is best to ensure a brisk diuresis in the donor before the kidney is removed, which can be enhanced by the use of volume expansion with saline and albumin, osmotic diuretics (mannitol), and loop diuretics (furosemide) in order to maximize immediate graft function in the recipient. Minimal dissection of the renal hilum is preferred (Flechner et al, 2008).
All donors should be evaluated both medically and surgically to ensure donor safety. An outline of the usual donor evaluation is shown in Table 36–4. First, a thorough history and physical examination is needed to rule out hypertension, diabetes, obesity, infections, cancers, and specific renal/urologic disorders. Second, laboratory testing of blood and urine, chest x-ray, electrocardiogram, and appropriate cardiac stress testing is done. Several methods to measure GFR and urine protein excretion can be used. Finally, radiographic assessment of the renal anatomy is done, which is usually accomplished by a CT angiogram (Kapoor et al, 2004). A catheter angiogram is reserved for complex vascular anatomy. If anatomic differences are detected between the two kidneys, the donor should always be left with the better kidney. If the two kidneys are equal, the left is preferred for transplant due to its longer and often thicker renal vein. However, in cases when one kidney has three or more renal arteries, the kidney with the single artery is selected. In younger fertile female donors, concern about physiologic hydronephrosis of the right kidney is taken into consideration.
Table 36–4. Standard Evaluation of the Potential Live Donor.
History: Focus on relation to renal disease
Hypertension, diabetes, family history, use of NSAIDs, other chronic drugs, environmental exposure (heavy metals), chronic UTI, stones, prior surgery, prior cardiovascular or pulmonary events (TB); begin to explore desire to donate
Physical exam: Focus on relation to renal disease
Blood pressure, weight/height (BMI), lymph nodes, joints, breast, prostate
Cardiovascular disease assessment
Urinalysis and culture, electrolytes, BUN creatinine, calcium, phosphorus, magnesium, liver panel, fasting blood glucose, and lipid profile
CBC with platelets, coagulation screen
24-hour urine, creatinine clearance, and protein excretion (albumin/creatinine ratio) or GFR measurement (iothalamate clearance)
Remote stone history: 24-hour urine calcium, uric acid, oxalate, citrate
Viral serology: Hepatitis C; hepatitis B; HIV; Epstein–Barr virus; cytomegalovirus; herpes simplex; and RPR (rapid plasma reagin)
Electrocardiogram, chest x-ray
Females: Pap smear, mammogram (age appropriate)
Males: PSA (age >40–50, family history)
Colonoscopy (age appropriate)
Imaging of the kidneys: Local availability
Computed tomography angiogram
Magnetic resonance angiogram
Today, the most commonly used approach is intraperitoneal laparoscopic donor nephrectomy, primarily due to patient choice (Moinzadeh and Gill, 2006). This technique has all but supplanted open donor nephrectomy via an extraperitoneal flank incision due to reports of reduced pain and shorter recovery time. The hand-assisted laparoscopic approach, where the extraction incision is used during the dissection, is the most common technique used in the US (Fisher et al, 2006). Nevertheless, in cases with a short right vein or three or more arteries, we prefer an open nephrectomy using a 12th rib-sparing flank incision (Turner-Warwick, 1965). When multiple renal arteries are encountered, they should be conjoined ex vivo while the kidney is on ice, in order to minimize the number of anastomoses in the recipient and reduce ischemia times (Flechner and Novick, 2002). Smaller upper pole arteries (<2 mm) often can be sacrificed, while lower pole vessels should be retained because of a risk to the ureteral blood supply.
Today, most donors are multiple-organ donors, and they require removal of the liver, heart, lungs, and pancreas, in addition to the kidneys. The retrieval needs to be coordinated, and it is often performed by several teams representing each organ for transplant. Usually, the thoracic organs are removed first while the abdominal organs are cooled and perfused with UW or HTK perfusion solution. The kidneys are removed en bloc with the aorta and vena cava and a large amount of retroperitoneal tissue. They are separated on the backbench by dividing the great vessels with the renal vessels attached. Donor characteristics and projected cold ischemia time may influence the use of cold storage or pulsatile perfusion preservation.
Standard Renal Transplant Surgery
There are several different methods for surgical revascularization of the kidney; the following is one reliable method (Flechner, 2008). While either iliac fossa is acceptable for the transplant, the right side is often preferred due to the longer and more horizontal segments of external iliac artery and vein compared with the left side. A lower quadrant curvilinear (Gibson) incision is made, and the iliac vessels are exposed through a retroperitoneal approach; a self-retaining retractor is used. The renal-to-iliac vein anastomosis is usually performed first, in an end-to-side fashion with 5–0 nonabsorbable monofilament suture, using a running quadrant technique. The renal artery can be anastomosed end to end to the internal iliac artery using 6–0 nonabsorbable monofilament suture. However, in older recipients and diabetic patients, this vessel often has significant arterial plaque causing poor runoff. In addition, concern about compromising arterial flow to the penis via the pudendal artery, with subsequent erectile dysfunction, limits this approach in older males. Because of these factors, an end-to-side anastomosis of the renal artery to the external iliac artery is more frequently done with 6–0 nonabsorbable monofilament suture using a running quadrant technique. An extravesical ureteroneocystostomy (variation of the Lich technique) is the preferred method to reimplant the ureter. When healthy-appearing ureter is short or the bladder is defunctionalized and small, a native-to-transplant ureteroureterostomy can be done. An internal double-J ureteral stent is usually placed, and a closed suction drain is left in the deep pelvis.
Imaging of the Transplant Kidney
Immediately after the transplant, it is advisable to obtain a baseline duplex Doppler ultrasound to confirm patency of the renal vessels and blood flow to the parenchyma, and to identify large fluid collections, hematomas, or hydronephrosis. This is especially important when the graft is oliguric. Similar information can be obtained using an isotopic (mercaptoacetyltriglycine, 99mTc-MAG-3) renal scan, which is especially helpful to identify urinary extravasation. Kidneys with DGF demonstrate a typical pattern of isotopic uptake with little clearance or excretion. If fluid collections or intraperitoneal problems are suspected, finer definition can be obtained with a computed axial tomography (CAT) scan. The use of 3-D CAT scans or MR angiography can delineate actual vascular lesions (stenoses, aneurysms, AV fistulas). Catheter angiography is reserved for interventions that require access to the renal vessels, such as angioplasty. Imaging with intravenous iodinated contrast should be limited when the creatinine is >1.8 mg/dL, but cystograms and antegrade nephrostograms can be helpful to identify urinary fistulas or obstructions.
Immediate Posttransplant Care
Initial postsurgical care in the first hours and days focuses on the urine output and eventual recovery of GFR. It is important to avoid hypotension, dehydration, or the use of alpha-adrenergic drugs, which will exacerbate surgical and preservation injury. It is helpful to monitor central venous pressure to maintain adequate preload (10–15 cm water). Urine output >1 cc/kg/h is desirable, and hourly intravenous replacement at cc/cc of urine is usually sufficient. Some live donor kidneys may generate outputs up to a liter per hour, which will drop the blood pressure; these should be managed with only one-half to two-thirds of the volume replaced. Conversely, fluid overload and pulmonary edema may cause renal hypoperfusion and should be avoided; treatment with fluid restriction, diuretics, and even dialysis may be needed. Even when hemodynamically stable, many deceased donor recipients (and a few living donor recipients) will experience delayed recovery of graft function, which is a consequence of extended cold preservation times, warm ischemia in the donor, or prolonged anastomosis time in the recipient.
Delayed Recovery of Graft Function
DGF is more formally defined as the need for dialysis during the first week after transplant and occurs in about a third of deceased donor recipients. Slow graft function (SGF) is said to occur if the recipient's creatinine is not <3 mg/dL by day 5, and it occurs in another third of deceased donor recipients (Humar et al, 2002). Patients with DGF may produce liters of urine a day (nonoliguric DGF) but have a rising creatinine and need dialysis. Others produce <300 cc of urine a day and are described as having oliguric DGF, which usually indicates a more prolonged recovery time. These clinical events are associated with specific histological findings referred to as acute tubular necrosis (ATN), the hallmark of which is tubular epithelial swelling, necrosis, and regeneration with mitotic figures. If a kidney remains in oliguric DGF for over a week and imaging studies demonstrate good blood flow, a biopsy should be done to rule out rejection and confirm ATN. Transplant DGF due to ATN resolves in most cases but may take up to several weeks, while about 1–2% of grafts never function (primary nonfunction) and may progress to cortical necrosis. Since the definition of DGF encompasses not just ATN but all causes of early graft dysfunction, it has a negative impact on both short- and long-term graft survival compared with kidneys that function immediately (Shoskes et al, 1997). During DGF, it is helpful to delay the introduction of calcineurin inhibitor (CNI) drugs for 7–10 days until some recovery of function is evident. This usually requires the use of an induction antibody as an umbrella of protection until the graft heals.
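The definitions above lend themselves to a simple classification (a sketch of the DGF/SGF cutoffs as stated; "IGF" for immediate graft function is a label assumed here):

```python
def early_graft_function(dialysis_first_week, creat_day5_mg_dl):
    """Classify early graft function per the definitions in the text:
    DGF - dialysis needed during the first posttransplant week;
    SGF - no dialysis, but creatinine not below 3 mg/dL by day 5;
    IGF - immediate graft function otherwise."""
    if dialysis_first_week:
        return "DGF"
    if creat_day5_mg_dl >= 3.0:
        return "SGF"
    return "IGF"

print(early_graft_function(True, 6.0))   # DGF
print(early_graft_function(False, 3.5))  # SGF
print(early_graft_function(False, 1.4))  # IGF
```

Note that a nonoliguric patient producing liters of urine still classifies as DGF if dialysis is required, which is why urine volume alone is not part of the definition.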
Sudden Drop in Urine Output
During the first few days, a sudden loss of urine output after an initial diuresis demands prompt attention to ensure patency of the Foley catheter and, if easily obtainable, a repeat ultrasound to confirm vascular flow and exclude hydronephrosis. If there is any question of abnormal blood flow, or a delay in obtaining an imaging study, the kidney should be promptly reexplored, since vascular compromise of a few hours will result in allograft necrosis. Loss of urine output from the bladder catheter with increased drain output may suggest a urine fistula. The drainage fluid can be sent for creatinine; a level 5–10 times the serum value suggests urine. If the above problems are excluded with imaging studies, renal biopsy is needed to rule out acute rejection or thrombotic microangiopathy and to ensure graft viability.
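The drain-fluid creatinine check can be expressed as a ratio test (illustrative code; the 5–10× threshold is the rule of thumb from the text, and the cutoff parameter is an assumption):

```python
def suggests_urine_leak(drain_creat, serum_creat, ratio_threshold=5.0):
    """Per the rule of thumb above: drain fluid creatinine well above the
    serum level (5-10x) suggests the fluid is urine, ie, a fistula.
    Serous fluid (lymph, serum) has a creatinine near the serum level."""
    return drain_creat / serum_creat >= ratio_threshold

print(suggests_urine_leak(drain_creat=12.0, serum_creat=1.5))  # True (ratio 8)
print(suggests_urine_leak(drain_creat=1.6, serum_creat=1.5))   # False
```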
Transplant Rejection
The disparate HLA phenotypes on donor tissue trigger an immune response that leads to renal dysfunction and histological changes in the transplanted kidney, called rejection. These responses are both humoral and cellular and depend upon the presentation of processed donor HLA antigens, via either donor (direct) or host (indirect) antigen-presenting cells, to the recipient's immunocompetent T cells (Flechner et al, 2011). The clinical signs and symptoms of acute renal allograft rejection may include fever, chills, lethargy, hypertension, pain and swelling of the graft, diminished urine output, edema, an elevated serum creatinine and blood urea nitrogen (BUN), and proteinuria. Immunosuppression is designed to prevent these events. Rejection can be divided into three distinct clinical entities based on the timing and mechanism responsible for triggering these events.
Hyperacute rejection occurs immediately after revascularization of a kidney when preformed cytotoxic anti-HLA antibody is present. It will lead to graft thrombosis, and the kidney must be removed. While there is no treatment, it can be prevented almost completely by using the sensitive crossmatching techniques available today.
Acute rejection episodes can occur at any time after the transplant, but most occur in the first 3 months. Such episodes can be mild or severe and cause the symptoms previously described to a variable degree. With the currently available immunosuppression, about 20% or fewer of transplant recipients experience acute rejection, and most episodes are reversible with treatment. Less than 5% of recipients lose their graft due to unresponsive acute rejection. These episodes are predominantly cellular and cause graft infiltration by cytotoxic cells, but humoral mechanisms contribute to the process. Treated acute rejections that result in a return to baseline renal function appear to have little impact on long-term graft survival (Meier-Kriesche et al, 2004).
Chronic rejection defines a process of gradual, progressive decline in renal function over time. It is associated with hypertension and proteinuria, and is accompanied by the histological features of tubular atrophy, interstitial fibrosis, and an occlusive arteriolopathy (Figure 36–4). It can be detected as early as 6 months after transplant and is thought to involve a strong humoral response against the graft. Some, but not all, recipients have had prior acute rejections or have donor-specific antibody (DSA) detected. There is a role for alloimmunity (antigen-dependent factors), since chronic rejection does not occur in identical twins, is rare in HLA-identical sibling transplants, and is most common among deceased donor recipients (Kreiger et al, 2003). However, many of these histologic changes are found with older donor age, ischemic injury, viral infections, and other systemic comorbidities, referred to as antigen-independent factors. Therefore, the process remains less well characterized, is no doubt multifactorial, and is often given the name chronic allograft nephropathy (CAN). Treatment is often not effective and consists of tight control of blood pressure, the use of angiotensin-converting enzyme (ACE) inhibitor or angiotensin receptor blocker (ARB) drugs for proteinuria, and sparing or elimination of CNI drugs.
Figure 36–4. Chronic allograft nephropathy.
The holy grail of transplantation is to develop methods that permit a recipient to keep a transplanted organ in a state of “tolerance,” or donor-specific unresponsiveness. Until that day arrives, clinical practice depends on our ability to interrupt the host immune response with agents that are not precise. It is a constant struggle to deliver enough immunosuppression to prevent rejection, but not so much as to render the patient susceptible to infection and cancer. In addition, immunosuppressive drugs have unique mechanisms of action and their own specific toxicities (Halloran, 2004). Immunosuppressive agents can be used in one of three ways: (1) induction or high-dose therapy to prevent a primary immune response immediately after transplantation, (2) low-dose or maintenance therapy once engraftment has stabilized, or (3) additional high-dose therapy to treat acute rejection should it arise.
Chemical Immunosuppression with Small Molecules
Since the initial observations almost 50 years ago that corticosteroids could prevent and treat renal allograft rejection (Hume et al, 1963), they have been a cornerstone of immunosuppressive therapy. Corticosteroids have numerous effects on the immune system, including sequestration of lymphocytes in lymph nodes and the bone marrow, resulting in lymphopenia. Glucocorticoids bind to intracellular receptors, and conformational changes in the steroid–receptor complex interfere with cytokine production. Their primary immunosuppressive effect is inhibition of monocyte production and release of interleukin-1 (IL-1), with subsequent inhibition of T-cell IL-2 and interferon-gamma production, thus interfering with lymphocyte activation and the generation of effector cells. However, the systemic toxicities of steroids are myriad, including cushingoid features, hypertension, hyperlipidemia, hyperglycemia, weight gain, osteoporosis, poor wound healing, growth retardation, psychiatric disturbances, etc, and have prompted intense efforts to reduce steroid dosage. Alternate-day steroid dosing appears beneficial for growth in children, but complete steroid withdrawal or avoidance has become more appealing. The potential benefits include lower blood pressure, improved lipid profiles, and diminished physical side effects attributed to steroids. Today, standard maintenance therapy is achieved with as little as 5 mg of prednisone daily. There have been several reviews of trials attempting to withdraw steroids at 3 months or later from stable transplant patients. Initial graft stability is often followed by acute rejection requiring the reintroduction of steroids (Pascual et al, 2004). If attempted, withdrawal should be reserved for well-matched recipients, 1 year or more after transplant, with no prior episodes of rejection.
Avoidance of steroids after 1 week may be favorable if accompanied by depleting antibody induction (Khwaja et al, 2004), although randomized trials confirm few if any lasting benefits (Woodle et al, 2008).
First introduced in the 1960s, 6-mercaptopurine and its imidazole derivative azathioprine are antimetabolites that block purine biosynthesis and cell division. The developers of azathioprine, Gertrude Elion and George Hitchings, received the 1988 Nobel Prize. Azathioprine is most effective when given immediately after antigen presentation to prevent rejection and is ineffective in treating established rejection. Adverse effects of azathioprine include bone marrow suppression (primarily leukopenia), alopecia, hepatotoxicity, and increased risk of infection and neoplasia. When compared directly with another antiproliferative agent, mycophenolate mofetil (MMF), azathioprine is less potent in rejection prophylaxis. Its use has therefore diminished rapidly over the past few years, although it serves as a secondary agent replacing MMF for intractable toxicity.
MMF is a morpholinoethyl ester of the fungal antibiotic mycophenolic acid, a noncompetitive inhibitor of the enzyme inosine monophosphate dehydrogenase. MMF inhibits purine biosynthesis, preventing the proliferation of activated T and B cells and thereby blocking both cellular and humoral immune responses. It is thought to be more specific for lymphocytes, which rely primarily on de novo purine synthetic pathways, and it has replaced azathioprine as the antimetabolite of choice. MMF is usually well tolerated at dosages up to 2 g/d (in divided doses), with gastrointestinal disturbances (nausea, vomiting, cramps, and diarrhea) and bone marrow suppression (leukopenia, anemia) being its major toxicities. Therapeutic drug monitoring of blood levels has demonstrated wide interpatient variability, but it may provide some benefit by preventing rejections due to poor drug exposure. An enteric-coated product, mycophenolate sodium, is also available.
Cyclophosphamide has historically been used in place of azathioprine, although it is much less commonly used today. It is an alkylating agent that is biotransformed by the hepatic microsomal oxidase system to active alkylating metabolites. It inhibits DNA replication and, like azathioprine, affects rapidly dividing cells and is most effective immediately after antigen presentation. Cyclophosphamide has a narrower therapeutic-to-toxic ratio than azathioprine, and adverse effects include myelosuppression with leukopenia, fertility disorders, and hemorrhagic cystitis.
Leflunomide is an oral agent that inhibits the enzyme dihydroorotate dehydrogenase, which is essential for de novo pyrimidine synthesis. The drug exhibits both antiproliferative and anti-inflammatory activity and was initially approved for the treatment of rheumatoid and psoriatic arthritis. Its use in organ transplantation as an adjunctive agent is limited. The most common side effects include diarrhea, nausea, dyspepsia, rash, abnormal liver function test results, and marrow suppression. Interestingly, its major metabolite has antiviral activity against CMV and polyomavirus, which can infect transplant recipients (Josephson et al, 2006). It can be substituted for MMF or azathioprine in patients infected with the BK virus.
Calcineurin Inhibitor Drugs
Cyclosporine, a lipophilic small molecule, has been a cornerstone of transplant immunosuppression since the early 1980s and is the prototype CNI drug. It binds a specific intracellular immunophilin (cyclophilin); the resulting conformational change allows the complex to engage the enzyme calcineurin phosphatase, thereby preventing downstream gene transcription of IL-2 and other cytokines required for T-cell activation and proliferation. The adverse effects of cyclosporine, which are related to the concentration of the drug, include nephrotoxicity, hypertension, hyperlipidemia, gingival hyperplasia, hirsutism, and the hemolytic uremic syndrome. CNI drugs are metabolized by the hepatic cytochrome P-450 (3A4) system, and other drugs that inhibit or stimulate this enzyme system (eg, diltiazem and ketoconazole, or phenytoin and isoniazid, respectively) can significantly affect blood levels, thus favoring therapeutic drug monitoring. Recent developments include monitoring of peak cyclosporine levels 2 hours after administration to better reflect drug exposure. A microemulsion that exhibits more reproducible absorption and metabolism has replaced the initial oral formulation.
Tacrolimus is another CNI drug that engages a different immunophilin, FK-binding protein 12 (FKBP12), to create a complex that inhibits calcineurin with greater molar potency than does cyclosporine. Although quite similar to cyclosporine in efficacy, tacrolimus, like cyclosporine, can cause nephrotoxicity and the hemolytic uremic syndrome. Tacrolimus is more likely to induce new-onset diabetes after transplant, hyperkalemia, hypomagnesemia, and tremors. It seems less likely to cause hyperlipidemia, hypertension, and cosmetic problems. In some regimens, tacrolimus is reported to reduce subclinical rejection, and its use has increased steadily. In the United States, tacrolimus is now the predominant CNI drug, given to 90% of new recipients. However, the most distressing feature of continuous CNI use is acute and chronic nephrotoxicity. Acute CNI nephrotoxicity is mediated by pronounced vascular and, to a lesser degree, tubular alterations, manifested by oligoanuria and azotemia, with associated hyperkalemia, hyperuricemia, hypertension, hypomagnesemia, and renal tubular acidosis. A dose-dependent reduction in renal blood flow and glomerular filtration is well documented. Chronic CNI nephrotoxicity is more insidious, associated with progressive deterioration of graft histology (scarring) in more than 50% of treated patients by 5 years and in virtually all by 10 years (Nankivell et al, 2003). CNI-treated recipients exhibit a profile of upregulated genes associated with profibrotic/fibrotic activity and tissue remodeling (Flechner et al, 2004). Dosage reduction will often mitigate some of these effects, and numerous regimens have been tested to minimize or eliminate CNI drugs, although this must be done carefully to avoid an increased risk of rejection (Ekberg et al, 2007; Russ et al, 2005). In a carefully controlled comparison of monitored exposure to cyclosporine versus tacrolimus, Rowshani et al (2006) reported a similar degree of scarring at 1 year after transplant.
Calcium channel blockers are often used to ameliorate CNI nephrotoxicity due to their ability to reduce the dosage requirements, treat the associated hypertension, and reverse the calcium-dependent afferent arteriolar vasoconstriction.
Target of Rapamycin Inhibitors
Sirolimus and everolimus form a class of immunosuppressive agents that are structurally similar to tacrolimus and bind the same immunophilin (FKBP-12). However, their mode of action is distinct, as the sirolimus–FKBP-12 complex does not inhibit calcineurin. Instead, it engages a distinct serine/threonine kinase called mTOR (mammalian target of rapamycin). Inhibition of mTOR blocks the downstream signal transduction pathways required for cell-cycle progression from G1 to S phase in activated T cells. The principal nonimmune toxicities of sirolimus and everolimus include hyperlipidemia, bone marrow suppression, impaired wound healing, and lymphoceles. Other reported side effects include aggravation of proteinuria, mouth ulcers, reduced testosterone, and pneumonitis. Sirolimus and everolimus may also reduce CMV infection. They were initially registered for use with cyclosporine, but the combination increased nephrotoxicity, the hemolytic uremic syndrome, and hypertension. Sirolimus has been combined with tacrolimus, but this combination also produced renal dysfunction and hypertension, indicating that sirolimus potentiates CNI nephrotoxicity. Nephrotoxicity can be reduced by using very low doses of CNI (Tedesco-Silva et al, 2007) or by withdrawing the CNI (Russ et al, 2005). The mTOR inhibitors also possess antifibrotic, antineoplastic, and arterial protective effects. The combination of de novo sirolimus with MMF in a CNI-free regimen can result in improved renal function with less CAN, although tolerability issues have made this combination difficult for some patients (Flechner, 2009). The mTOR inhibitors have been demonstrated to slow the growth of established experimental tumors and have potential applications in oncology (Guba et al, 2002).
In fact, the mTOR inhibitors temsirolimus and everolimus are approved for treatment of kidney cancer (Figlin, 2008). The possibility that sirolimus and everolimus can protect arteries is suggested by two observations: mTOR inhibitors reduce restenosis when eluted from coronary artery stents (Morice et al, 2002), and mTOR inhibitors plus CNIs reduce the incidence of graft coronary artery disease after heart transplantation (Eisen et al, 2003).
Polyclonal Antibodies
Polyclonal antibodies are produced by injecting (immunizing) animals such as horses, goats, sheep, or rabbits with cells from human lymphoid tissue. Immune sera from several animals are pooled, and the gamma globulin fractions are extracted and purified. A rabbit-derived antithymocyte antibody (Thymoglobulin, Sanofi) is the most frequently used preparation. Once injected, the antibodies bind to lymphocytes, resulting in rapid lymphopenia or depletion due to complement-mediated cell lysis, as well as masking of surface antigens or induction of suppressor populations that block cell function. Polyclonal antibodies are used primarily as induction therapy and to treat vascular or antibody-mediated rejection (AMR). Because of their strong immunosuppressive effects, polyclonal antibodies are limited to short courses of 3–10 days, but the resulting depletion may last 6–12 months. Although designed to deplete T cells, they may also affect B cells, NK cells, other mononuclear cells, platelets, etc. Adverse effects include fever, chills, and arthralgias related to the injection of foreign proteins and the release of cytokines. These effects can be minimized by pretreatment with corticosteroids and antihistamines. More serious adverse effects include increased susceptibility to infections (especially viral) and neoplasia.
Monoclonal Antibodies that Deplete Lymphocytes
The introduction of murine hybridoma technology opened the door to the development of highly specific antibodies directed against functional cell surface targets. These antibodies, like polyclonal antibodies, exert their effects through a variety of immune mechanisms. In addition to complement-mediated lysis, blockade and inactivation of cell surface molecules, and opsonization with phagocytosis, these antibodies can induce cytotoxicity and modulation of cell surface molecules on target tissues.
Muromonab-CD3, a mouse monoclonal antibody against CD3, was the first commercially available selective monoclonal antibody used in transplantation, both for induction therapy and for the treatment of rejection. Muromonab-CD3 binds to the T-cell receptor–associated CD3 complex, which first triggers a massive cytokine-release syndrome before depleting and functionally modulating T cells. Although potent in rejection prophylaxis and reversal, its toxicity led to the development of safer products, and the drug is no longer in production.
Alemtuzumab is a humanized monoclonal antibody (IgG1) that specifically interacts with the 21–28-kd lymphocyte cell surface glycoprotein CD52, which is predominantly expressed on peripheral blood lymphocytes, monocytes, and macrophages. Once engaged with CD52, alemtuzumab produces a profound depletion of lymphocyte populations (T, B, and NK cells) that can persist for over a year. Although multiple doses are FDA approved for treating B-cell chronic lymphocytic leukemia, one or two 30-mg doses have been introduced as induction therapy in organ transplantation. Side effects of alemtuzumab include first-dose reactions, bone marrow suppression, and autoimmunity. Concerns about prolonged immunodeficiency (infections and cancer) with alemtuzumab persist, although it is commonly used with lower doses of other maintenance agents. Early predictions that the agent would induce prope, or “almost,” tolerance were not confirmed, as some reports suggested a higher than expected incidence of rejection episodes, including AMR.
Rituximab is a chimeric anti-CD20 monoclonal antibody that eliminates most B cells; it was initially FDA approved for treating refractory non-Hodgkin's B-cell lymphomas. It was introduced in transplantation to treat a similar tumor, posttransplantation lymphoproliferative disease (PTLD). Rituximab is currently being evaluated for the treatment of donor-specific alloantibody responses, such as AMR, and in transplantation of sensitized recipients. It is used in combination with maintenance immunosuppressive drugs, plasmapheresis, and intravenous immune globulin (IVIG). While plasma cells are usually CD20 negative, some of their precursors are CD20 positive, and their elimination may reduce some antibody responses. Such therapy may provide the first of future tools to control humoral rejection.
Monoclonal Antibodies that Are Nondepleting
Another selective site for monoclonal antibody targeting of the immune response is the IL-2 receptor (CD25), present on the surface of activated T cells and responsible for further signal transduction and T-cell proliferation. Both a chimeric (basiliximab) and a humanized (daclizumab) anti-CD25 antibody have been genetically engineered to produce a hybrid IgG that retains specific anti-CD25 binding with a less xenogenic (murine) backbone. These agents cause minimal cytokine release upon first exposure and exhibit a prolonged elimination half-life, resulting in weeks to months of CD25 suppression. Because expression of CD25 (the IL-2 receptor α chain) requires T-cell activation, anti-CD25 antibodies cause little depletion of T cells. Anti-CD25 antibodies are useful as safe induction agents in low-to-moderate-risk recipients but have little effect in treating an established rejection episode. Their use appears to offer a favorable risk–benefit profile compared with depleting agents, providing improved graft survival with a lower risk of posttransplant cancers. In 2010, daclizumab (Zenapax) was removed from commercial production.
Basic immunology generated the concept that blocking costimulation (signal 2) could prevent the activation of antigen-primed T cells, thus providing a new avenue for control of allograft rejection. A first generation of monoclonal antibodies designed to block costimulation proved the concept in animals but lacked sufficient efficacy in initial clinical trials. Belatacept is a second-generation cytotoxic T-lymphocyte–associated antigen 4 (CTLA-4) immune globulin, engineered as a fusion protein combining CTLA-4 with the Fc portion of an IgG molecule. This biological agent engages CD80 and CD86 on the surface of antigen-presenting cells, thereby blocking costimulation through T-cell CD28. The 2-year results of a phase 3 trial in renal transplant recipients given MMF, steroids, and anti-CD25 antibody demonstrated that belatacept, compared with cyclosporine, resulted in more acute rejection (23% vs 7%) but better renal function (GFR higher by 10–15 mL/min) (Larsen et al, 2010). However, a higher incidence of PTLD was observed, especially among recipients who were EBV naïve at transplant (Vincenti et al, 2010). Belatacept was FDA approved in 2011; it is given at intervals of 2–4 weeks as an intravenous preparation and may be further evaluated in CNI-sparing regimens.
Current regimens vary according to center preference and are often shaped by center experience and willingness to participate in clinical trials. Two areas of current investigative interest are CNI sparing or avoidance (to minimize CNI nephrotoxicity) and steroid sparing or avoidance (to minimize steroid side effects). A typical regimen applicable to HLA-mismatched deceased- or live-donor recipients would include an induction agent, either a nondepleting (basiliximab) or a depleting (Thymoglobulin/alemtuzumab) antibody. Maintenance therapy would include a primary agent (tacrolimus, cyclosporine, or sirolimus), an antiproliferative agent (MMF or azathioprine), and steroids. The most common initial regimen today is tacrolimus–MMF–steroids. Delayed introduction of CNI drugs for 7–10 days is often selected for recipients with DGF to permit early healing of the ischemic injury, assuming an induction antibody has been administered. Steroid elimination after a week and CNI conversion to an mTOR inhibitor at 2–4 months are practiced in many centers.
Acute rejection leads to graft injury and eventual CAN if untreated. Therefore, it requires prompt and accurate diagnosis, which is best provided by a percutaneous renal transplant biopsy. One of the remarkable achievements of the last 10 years has been the universal acceptance of the Banff schema to diagnose and characterize renal allograft rejection (Racusen et al, 1999). The scoring system is semiquantitative, based on light microscopy, and describes features of acute rejection and chronic/sclerosing nephropathy, as well as features attributed to both cellular and antibody-mediated mechanisms. For patients with Banff grade I or II acute rejection, high-dose intravenous steroid pulses of 5–7 mg/kg/d for 3 days will reverse about 85% of episodes. Some clinicians also prefer to add a 10–14-day recycle of oral prednisone at 2 mg/kg tapered to baseline. If rejection is unresponsive to steroids or histology confirms a component of Banff grade II or III vascular change, a depleting antibody such as Thymoglobulin is given at 7–8 mg/kg over a week. It is not generally prudent to treat more than two to three acute rejections in any one recipient.
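The weight-based dosing above reduces to simple arithmetic. The sketch below is a hypothetical illustration only; the function names, the chosen midpoint doses, and the 70-kg example weight are assumptions for demonstration, not a protocol:

```python
def steroid_pulse_total_mg(weight_kg, dose_mg_per_kg=7, days=3):
    """Total IV steroid for a pulse course (text quotes 5-7 mg/kg/d for 3 days)."""
    return weight_kg * dose_mg_per_kg * days

def thymoglobulin_course_total_mg(weight_kg, total_mg_per_kg=7.5):
    """Cumulative Thymoglobulin for steroid-resistant rejection
    (text quotes 7-8 mg/kg given over about a week; 7.5 is an assumed midpoint)."""
    return weight_kg * total_mg_per_kg

# Hypothetical 70-kg recipient
print(steroid_pulse_total_mg(70))         # 1470 (mg over 3 days)
print(thymoglobulin_course_total_mg(70))  # 525.0 (mg over ~1 week)
```

Actual dosing is individualized and adjusted for blood counts and clinical response; the sketch only shows the arithmetic behind the quoted ranges.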
If distinct evidence of AMR is present, which includes biopsy evidence of deposition of the complement split product C4d in the peritubular capillaries coupled with the presence of circulating DSA, additional treatments are needed to recover renal function (Colvin, 2007). These include plasmapheresis to remove existing anti-HLA antibody and blocking doses of IVIG (2 g/kg). The anti-CD20 monoclonal antibody rituximab has also been used, although it does not target the plasma cell. Some optimism has been generated by the use of the proteasome inhibitor bortezomib, which directly targets plasma cells, the source of DSA. This drug, approved for the treatment of multiple myeloma, can reduce titers of DSA and may reverse AMR episodes when combined clinically with plasmapheresis and IVIG over a 2-week course (Flechner et al, 2010).
Results of Kidney Transplantation
There have been dramatic improvements in short-term kidney transplant outcomes since the inception of clinical practice five decades ago. For recipients of living-donor kidneys, 1-year patient and graft survival rates have increased to about 98.7% and 96.5%, respectively, and for recipients of standard criteria deceased-donor kidneys to 96.3% and 91.4% (Figure 36–1). The major reasons for this improvement are a reduction in acute rejection episodes (better immunosuppression and crossmatching techniques) with fewer complications from its treatment, and better prophylaxis and treatment of the common posttransplant infections. However, long-term graft loss beyond 5–10 years has not changed much, with stagnant survival half-lives of 7–8 years for deceased-donor and 10–11 years for living-donor kidneys. Factors that are statistically associated with graft failure are listed in Table 36–5. Ultimately, these factors lead to a multifaceted process of graft scarring (Figure 36–4) resulting in a decline of function termed CAN, which is the major reason for late graft loss. The etiologies of CAN include processes that are immune related as well as those associated with nonspecific renal injury (Colvin, 2003). The second leading cause of late graft loss is death with a functioning graft, primarily due to the consequences of atherosclerotic cardiovascular disease and, less often, infections and cancers. Some risk factors for CAN and cardiovascular disease overlap (hypertension, hyperlipidemia, smoking, diabetes, etc). Graft loss secondary to patient noncompliance with medications has been estimated at 5–10%.
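If late graft attrition is modeled as simple exponential decay (an assumption for illustration, not a claim from the registry data), the quoted half-lives translate into approximate surviving fractions as follows:

```python
import math  # not strictly needed; 0.5 ** x suffices for this model

def graft_survival_fraction(years, half_life_years):
    """Fraction of grafts still functioning at a given time, assuming
    first-order (exponential) attrition with the quoted half-life."""
    return 0.5 ** (years / half_life_years)

# Half-lives from the text: 7-8 y (deceased donor), 10-11 y (living donor);
# midpoints of 7.5 and 10.5 y are assumptions for this example.
print(f"{graft_survival_fraction(10, 7.5):.2f}")   # ~0.40 at 10 years
print(f"{graft_survival_fraction(10, 10.5):.2f}")  # ~0.52 at 10 years
```

Real attrition curves are not strictly exponential (early losses differ from late losses), so this is only a rough way to interpret a half-life figure.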
Table 36–5. Major Factors that Affect Long-Term Graft Outcome.
HLA match between donor and recipient
Age of donor and recipient
Rejection, both acute and chronic
Prior failed transplants
Sensitization (preformed anti-HLA antibodies)
Recipient race (Asians > whites > blacks)
Comorbidities (DM, obesity, hyperlipidemia)
Immunosuppressive drugs utilized
Complications of Kidney Transplantation
The majority of significant surgical problems posttransplant are either vascular or urologic. They include renal artery thrombosis, disruption, stenosis, or mycotic aneurysm; renal vein thrombosis or disruption; urinary fistula or ureteral stenosis; pelvic lymphocele or hematoma; scrotal hydrocele or abscess; and wound abscess, dehiscence, or hernia (Flechner, 2011). Prevention is the best way to avoid these problems using meticulous surgical and antiseptic techniques, including the routine use of preoperative broad-spectrum antibiotics.
In the early posttransplant period, vascular problems may prevent a new kidney from ever functioning, and questions raised by imaging studies often require surgical reexploration. Anastomotic bleeding requires immediate repair; twisting or compression of the vessels may require reanastomosis, while complete thrombosis necessitates nephrectomy. Early large hematomas should be surgically drained and hemostasis confirmed. Significant transplant renal artery stenosis can result from poor surgical technique, damage to the vessel intima at procurement, atherosclerotic or fibrous disease, or immune injury, but it is fairly uncommon (1–5% of transplants). Poorly controlled hypertension, renal dysfunction (especially after ACE inhibitors or ARBs), and a new pelvic bruit are clinical clues. Percutaneous transluminal angioplasty is the treatment of choice and restores kidney perfusion in 60–90% of cases. The risk of restenosis can be minimized with an internal stent (Bruno et al, 2004). Pseudoaneurysms of the renal or iliac artery and arteriovenous fistulas after biopsy are often amenable to embolization or endovascular stenting. Large (>5 cm) or mycotic aneurysms, inability to dilate a vascular stenosis, or unusual lesions may require open operative repair to prevent rupture.
Urologic complications are reported in 2–10% of kidney transplants (Streeter et al, 2002) and usually do not result in graft loss if promptly treated (van Roijen et al, 2001). A recent meta-analysis confirmed that routine placement of an indwelling ureteral stent aids healing and reduces early ureteral fistula or obstruction (Wilson et al, 2005). It is advisable to leave a Foley catheter for 10–14 days for thin-walled, poorly vascularized, or small, defunctionalized bladders. Ureteral fistulas and stenoses are usually a consequence of ischemia to the distal ureter from surgical dissection, overzealous electrocautery, or immune injury. CMV and BK virus infection of the urinary tract have been associated with ureteral stenosis (Fusaro et al, 2003; Mylonakis et al, 2001). For large fistulas, rapid surgical repair and drainage are advised, either by ureteral reimplantation into the bladder or by native ureteroureterostomy or ureteropyelostomy. Small fistulas are occasionally amenable to long-term stenting, with or without a proximal diverting nephrostomy or bladder catheter. Ureteral stenoses are often amenable to balloon dilation and stenting but, if recurrent, require open repair. Urinary retention has become more common in recent years as older males with prostatism undergo transplantation. If prostatectomy is needed, it is advisable to wait a few months to ensure healing of the graft. Hydroceles, usually ipsilateral to the transplant and a consequence of spermatic cord transection, may cause discomfort or enlarge. They are best repaired by hydrocelectomy, although successful aspiration and sclerotherapy have been reported.
Wound complications are reported in 5–20% of transplants and are best prevented, since they can cause significant morbidity and take many months to resolve. Since immunosuppression delays wound healing (especially sirolimus and MMF), the use of nonabsorbable sutures in the fascia and more conservative surgical technique in the obese are warranted (Flechner et al, 2003; Humar et al, 2001). A closed suction pelvic drain is also helpful immediately posttransplant. Early fascial defects or late incisional hernias require operative repair; synthetic mesh or Alloderm may be required (Buinewicz, 2004). Suprafascial dehiscence or infection can resolve slowly by secondary intention, which may be hastened by the use of vacuum-assisted closure (Argenta et al, 2006). Lymphoceles can form in the retroperitoneum from disruption of small lymphatic channels in the pelvis or around the kidney. The reported incidence of symptomatic lymphoceles ranges from 6% to 18% and is influenced by obesity, immunosuppression (mTOR inhibitors, steroids), and treatment of rejection (Goel et al, 2004). Most are asymptomatic and resolve spontaneously over several months. Clinical presentation may include abdominal swelling, ipsilateral leg edema, renal dysfunction, or lower urinary voiding symptoms, depending upon which pelvic structures are compressed. Fluid tends to reaccumulate after simple aspiration; definitive treatments include prolonged tube drainage, sclerotherapy (povidone iodine, fibrin glue, tetracycline, etc), or marsupialization and drainage into the peritoneal cavity via laparoscopy or open surgery (Flechner, 2011).
Renal failure and immunosuppression make recipients more susceptible to posttransplant infections, including bacterial, viral, fungal, and opportunistic pathogens. It is not surprising that such infections occur most often during the first 6 months, when doses of immunosuppression are greatest. It is therefore common practice to give recipients prophylaxis against the infective agents that occur with the greatest frequency. Bacterial urinary tract infections are the most common and are controlled by daily prophylaxis with oral trimethoprim/sulfa for the first year. This antibiotic is particularly useful since it also provides excellent prophylaxis against Pneumocystis carinii pneumonia, an opportunistic infection that is usually restricted to transplant patients or others immunocompromised by HIV–AIDS, cancer chemotherapy, etc. Breakthrough infections and transplant pyelonephritis need further workup to identify obstruction, reflux, a foreign body, stones, or voiding dysfunction.
One of the most significant advances in transplant practice during the last decades has been the control of viral infections, in particular the herpesviruses (CMV, EBV, VZV, and HSV), which caused major morbidity and even mortality in past years. These DNA viruses are characterized by transmission from donor to host, resulting in primary infections, as well as by reactivation of latent virus in the host (Rubin, 2001). Therefore, recipients who have had no prior exposure (serologically negative at transplant) are at the greatest risk for infection. CMV is the most frequently encountered pathogen (10–40% of recipients), and donor and recipient serology (anti-CMV IgG) defines the risk of infection (D+R− > D+R+ > D−R+ > D−R−) and treatment strategies (Flechner et al, 1998). The virus can cause an asymptomatic infection (viral DNA copies in the blood); CMV syndrome with fever and leukopenia; and tissue-invasive disease, with the liver, lung, gastrointestinal tract (colon), and retina often infected. The introduction of the potent nucleoside inhibitors acyclovir, ganciclovir, and valganciclovir has largely controlled these infections. Those who receive organs from CMV-positive donors or who have had prior exposure are routinely given 3 months of prophylaxis with oral valganciclovir. Six months is recommended for the high-risk D+R− group (Humar et al, 2010). Some prefer preemptive therapy, screening for virus and treating upon detection (Khoury et al, 2006). Intravenous ganciclovir is often coadministered with anti–T-cell antibodies for patients at risk.
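The serology-based prophylaxis strategy described above can be expressed as a simple lookup. This sketch assumes the 3- and 6-month durations quoted in the text; a duration of zero for D−R− pairs is an assumption of the example, since the text does not state it explicitly:

```python
def cmv_prophylaxis_months(donor_pos: bool, recipient_pos: bool) -> int:
    """Months of oral valganciclovir prophylaxis by anti-CMV IgG serology.
    D+R-: 6 months (highest risk, per Humar et al, 2010);
    any other donor-positive or recipient-positive combination: 3 months;
    D-R-: 0 months (assumption, not stated in the text)."""
    if donor_pos and not recipient_pos:
        return 6
    if donor_pos or recipient_pos:
        return 3
    return 0

print(cmv_prophylaxis_months(donor_pos=True, recipient_pos=False))  # 6
print(cmv_prophylaxis_months(donor_pos=False, recipient_pos=True))  # 3
```

The same structure accommodates the preemptive-therapy alternative, which would replace a fixed duration with periodic viral-load screening.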
Candida urinary infections and esophagitis occur with some frequency, especially in diabetic patients. Oral fluconazole or a Mycelex troche provides prophylaxis during the first few months. Systemic fungal infections are uncommon, but sporadic cases of aspergillosis, cryptococcosis, histoplasmosis, mucormycosis, etc, are reported. Invasive fungal infections usually require treatment with amphotericin B or its liposomal formulation.
New-onset diabetes after renal transplantation is a growing problem (10–20% of adults) that mimics the features of type 2 diabetes. It results from both impaired insulin production and peripheral insulin resistance, and it includes patients who have hyperglycemia responsive to oral agents as well as those who require exogenous insulin. It can be diagnosed up to several years after transplant and is attributed to the use of CNI drugs (tacrolimus > cyclosporine) as well as glucocorticoids. Family history, older age, weight gain, hyperlipidemia, a sedentary lifestyle, and viral infections are contributing factors (Duclos et al, 2006).
Immunosuppression impairs immune surveillance and, not surprisingly, is associated with an increased incidence of de novo cancers. In particular, the oncogenic viruses that are normally cleared by T cells become the primary agents inducing posttransplant cancers. These include EBV, HHV-8, HPV, and hepatitis B and C. Kasiske et al (2004) examined malignancy rates among first-time recipients of deceased- or living-donor kidney transplants in 1995–2001 (n = 35,765) using Medicare billing claims. They found, compared with the general population, a 20-fold increased incidence of non-Hodgkin's lymphomas (including PTLD), nonmelanoma skin cancers, and Kaposi's sarcoma; a 15-fold increase for kidney cancers; 5-fold for melanoma, leukemia, and hepatobiliary, cervical, and vulvovaginal tumors; 3-fold for testicular and bladder cancers; and 2-fold for most common tumors, for example, colon, lung, prostate, stomach, esophagus, pancreas, ovary, and breast. PTLDs comprise a spectrum of diseases characterized by lymphoid proliferation ranging from benign lymphoid hyperplasia to high-grade invasive lymphoma. Most PTLDs are B-cell lymphomas arising as a result of immunosuppression, and many are associated with EBV infection. PTLD is reported to occur in up to 3% of adults and up to 10% of children after kidney or liver transplantation (Opelz et al, 2003). Registry data identify the use of a depleting anti–T-cell antibody for induction therapy as a significant risk factor for PTLD (Opelz et al, 2006). Since the rates of most malignancies remain higher after kidney transplantation than in the general population, cancer prevention should remain a major focus of posttransplant care.
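For quick reference, the fold increases quoted from Kasiske et al can be held in a small mapping; the short tumor labels below are assumptions of this illustration, and the numbers simply restate the text:

```python
# Fold increase in cancer incidence versus the general population
# (as quoted in the text from Kasiske et al, 2004)
RELATIVE_RISK = {
    20: ["non-Hodgkin's lymphoma/PTLD", "nonmelanoma skin", "Kaposi's sarcoma"],
    15: ["kidney"],
    5: ["melanoma", "leukemia", "hepatobiliary", "cervical", "vulvovaginal"],
    3: ["testicular", "bladder"],
    2: ["colon", "lung", "prostate", "stomach", "esophagus",
        "pancreas", "ovary", "breast"],
}

def fold_increase(tumor: str) -> int:
    """Return the quoted fold increase for a tumor type, or 0 if not listed."""
    for fold, tumors in RELATIVE_RISK.items():
        if tumor in tumors:
            return fold
    return 0

print(fold_increase("kidney"))  # 15
```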