For this we need the fraction of inhibitor molecules bound to each target $i$:

$$\varphi_i = \frac{K_{a,i}}{\sum_j K_{a,j}}$$

where $K_{a,i}$ is the association constant of the inhibitor to target $i$, which is the inverse of the binding constant $K_{d,i}$; in short, $K_{a,i} = 1/K_{d,i}$. If we express the free energy in units of per molecule rather than per mole, the binding free energy becomes $\Delta G_i = -k_B T \ln K_{a,i}$, and the fraction can be rewritten in Boltzmann form as

$$\varphi_i = \frac{e^{-\Delta G_i / k_B T}}{\sum_j e^{-\Delta G_j / k_B T}}$$

A very selective inhibitor will bind to one target almost exclusively and have a narrow distribution. A promiscuous inhibitor will bind to many targets and have a broad distribution. The broadness of the inhibitor distribution on the target mixture reflects the selectivity of the compound. The binding of one inhibitor molecule to a particular protein can be seen as a thermodynamic state with an energy level determined by $K_d$. For simplicity we use the term $K_d$ to represent both $K_d$ and $K_i$. The distribution of molecules over these energy states is given by the Boltzmann law.
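To make the equivalence of the two forms explicit, substituting $\Delta G_i = -k_B T \ln K_{a,i}$ into the Boltzmann form recovers the association-constant form directly:

$$\varphi_i = \frac{e^{-\Delta G_i / k_B T}}{\sum_j e^{-\Delta G_j / k_B T}} = \frac{e^{\ln K_{a,i}}}{\sum_j e^{\ln K_{a,j}}} = \frac{K_{a,i}}{\sum_j K_{a,j}}$$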
As the broadness of a Boltzmann distribution is measured by entropy, the selectivity implied in the distributions of Figure 1d can be captured in an entropy. A similar insight is given by information theory: it is well established that information can be quantified using entropy, and a selective kinase inhibitor can be regarded as carrying more information than a promiscuous one. Applying the standard entropy formula to the fractions $\varphi_i$ defines how a selectivity entropy can be calculated from a collection of association constants $K_a$:

$$S_{sel} = -\sum_i \frac{K_{a,i}}{\sum K_a} \ln\!\left(\frac{K_{a,i}}{\sum K_a}\right)$$

Here $\sum K_a$ is the sum of all association constants. It is simplest to apply this equation to directly measured binding constants or inhibition constants. IC50s can also be used, but this is only really meaningful if they are related to $K_d$. Fortunately, for kinases it is standard to measure IC50 values at $K_{M,\mathrm{ATP}}$.
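As a concrete illustration of this equation, here is a minimal Python sketch. The function name `selectivity_entropy` and the choice to pass raw $K_d$ values are our own assumptions for illustration, not part of the original text:

```python
import math

def selectivity_entropy(kd_values):
    """Selectivity entropy S_sel = -sum(phi_i * ln(phi_i)),
    with phi_i = Ka_i / sum(Ka) and Ka_i = 1 / Kd_i.

    kd_values: iterable of Kd (or Ki) values, all in the same unit.
    The unit cancels out, because only the ratios Ka_i / sum(Ka)
    enter the formula.
    """
    ka = [1.0 / kd for kd in kd_values]        # association constants
    total = sum(ka)                            # sum(Ka) over all targets
    fractions = [k / total for k in ka]        # phi_i, the bound fractions
    return -sum(p * math.log(p) for p in fractions if p > 0)
```

Because only the fractions enter the formula, any uniform scale factor on the $K_a$ values cancels; this is also why the Cheng-Prusoff factor discussed next drops out.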
Ideally, such IC50s equal 2 times $K_d$, according to the Cheng-Prusoff equation. Because the selectivity entropy depends only on the fractions $K_{a,i}/\sum K_a$, the factor 2 drops out, and we can therefore use data of the format IC50-at-$K_{M,\mathrm{ATP}}$ directly as if they were $K_d$.

Protocol for calculating a selectivity entropy

From the above, it follows that a selectivity entropy can be quickly calculated from a set of profiling data with the following protocol: (1) collect $K_d$, $K_i$, or IC50-at-$K_{M,\mathrm{ATP}}$ values for one inhibitor across the profiled targets; (2) convert each value to an association constant $K_{a,i} = 1/K_{d,i}$; (3) divide each $K_{a,i}$ by $\sum K_a$ to obtain the fractions $\varphi_i$; and (4) compute $S_{sel} = -\sum_i \varphi_i \ln \varphi_i$. This process can be easily automated for use with large datasets or internal databases (see the sketch below).

Examples

The selectivity entropy is based on calculating the entropy of the hypothetical inhibitor distribution in a protein mixture. To give more insight into the properties of this metric, some examples are useful. An inhibitor that only binds to a single kinase with a $K_d$ of 1 nM has $K_a/\sum K_a = 1$. Then $S_{sel} = 0$, which is the lowest possible entropy.
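A hypothetical automation sketch, assuming the `selectivity_entropy` function from the code above is in scope; the compound labels are invented placeholders, and the $K_d$ values are the worked examples from this section (including the two-kinase cases discussed next):

```python
# Profiling data keyed by compound; Kd values in nM.
profile = {
    "single-target":   [1.0],           # one kinase at 1 nM   -> S_sel = 0
    "dual 1 nM/1 nM":  [1.0, 1.0],      # two kinases at 1 nM  -> S_sel = 0.69
    "dual 1 nM/1 uM":  [1.0, 1000.0],   # 1 nM and 1 uM        -> S_sel = 0.0079
}

for name, kds in profile.items():
    print(f"{name}: S_sel = {selectivity_entropy(kds):.4f}")
```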
An inhibitor that binds to two kinases, each with a $K_d$ of 1 nM, has $K_x/\sum K_a = K_y/\sum K_a = 0.5$ and a selectivity entropy of 0.69. Thus lower selectivity results in higher entropy. If we modify the compound such that it still inhibits kinase X with a $K_d$ of 1 nM, but inhibits kinase Y less strongly, with a $K_d$ of 1 $\mu$M, then the new inhibitor is more specific. Now $K_{a,x} = 10^9\ \mathrm{M}^{-1}$ and $K_{a,y} = 10^6\ \mathrm{M}^{-1}$, so $K_x/\sum K_a \approx 0.999$ and $K_y/\sum K_a \approx 0.001$, resulting in $S_{sel} = 0.0079$. This is less than 0.69. This shows that the selectivity entropy can distinguish cases where the selectivity score S cannot.
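Writing out the arithmetic behind these numbers: for the 1 nM/1 nM compound,

$$S_{sel} = -(0.5 \ln 0.5 + 0.5 \ln 0.5) = \ln 2 \approx 0.69$$

and for the 1 nM/1 $\mu$M compound, with $\varphi_x = 10^9/(10^9 + 10^6) \approx 0.999$ and $\varphi_y \approx 0.001$,

$$S_{sel} = -(0.999 \ln 0.999 + 0.001 \ln 0.001) \approx 0.0079$$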