The impact of background lumiphore in luminescence optical tomography is examined. To demonstrate its effects, numerical simulations were performed to compute the diffusion-regime limiting form of the forward-problem solution for a specific test medium. Image reconstructions were performed with a conjugate gradient descent (CGD) algorithm that uses the maximum possible lumiphore concentration to estimate the background concentration; the reconstructions show that this estimate improves image quality when background lumiphore is present. We conclude that the usual measure of background lumiphore's effect, the target-to-background lumiphore concentration ratio, is inadequate to characterize the background contribution, because image quality also depends on target size and location. An alternative measure that we find superior is described. The results indicate that the improved algorithm yields better image quality at low target-to-background ratios.
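To illustrate the kind of constrained reconstruction the abstract refers to, the following is a minimal sketch, not the paper's actual algorithm: it uses projected gradient descent (a simplification of constrained CGD) on a small hypothetical linear forward model, clipping the estimated lumiphore concentration to the interval from zero to an assumed maximum possible concentration `c_max`. The matrix `A`, the target/background values, and `c_max` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear forward model y = A @ x_true, where A stands in for
# the diffusion-regime sensitivity matrix (illustrative, not the paper's).
m, n = 40, 25
A = rng.random((m, n))
x_true = np.zeros(n)
x_true[10:13] = 1.0          # localized "target" lumiphore
x_true += 0.2                # uniform background lumiphore
y = A @ x_true

c_max = 1.5                  # assumed maximum possible concentration

def reconstruct(A, y, c_max, iters=200):
    """Minimize ||A x - y||^2 by projected gradient descent, with each
    iterate clipped to [0, c_max]; a stand-in for constrained CGD."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step from the Lipschitz bound
    x = np.full(A.shape[1], c_max / 2.0)     # mid-range initial guess
    for _ in range(iters):
        grad = A.T @ (A @ x - y)             # gradient of the data misfit
        x = np.clip(x - step * grad, 0.0, c_max)
    return x

x_hat = reconstruct(A, y, c_max)
print("final residual:", np.linalg.norm(A @ x_hat - y))
```

The upper bound plays the role the abstract assigns to the maximum possible concentration: it keeps the background estimate physically plausible rather than letting the fit absorb background signal into spurious target structure.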