High-Throughput Screening: What are we missing?

Mahesh Uttamchandani*

Defence Medical and Environmental Research Institute, DSO National Laboratories, 27 Medical Drive, 117510, Singapore

Editorial | Open Access | Biochips & Tissue Chips | J Biochip Tissue Chip 2013, 3:1 | ISSN: 2153-0777
http://dx.doi.org/10.4172/2153-0777.1000e120

*Corresponding author: Dr. Mahesh Uttamchandani, Defence Medical and Environmental Research Institute, DSO National Laboratories, 27 Medical Drive, 117510, Singapore, Republic of Singapore, Tel: +65 6485 7214; Fax: +65 6485 7033; E-mail: [email protected]

Received November 19, 2012; Accepted November 21, 2012; Published November 23, 2012

Citation: Mahesh U (2013) High-Throughput Screening: What are we missing? J Biochips Tiss Chips 3:e120. doi:10.4172/2153-0777.1000e120

Copyright: © 2013 Mahesh U. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Small molecules have big potential. Nowhere is this metaphor more compelling than in the pharmaceutical industry. Drug discovery is guided by High-Throughput Screening (HTS) which, in turn, guides therapy. HTS has its unique challenges, as few platforms are able to meet the opposing demands of high data quality, high throughput and low cost. Increased miniaturization, through micro- or nano-chip-based approaches as well as on-bead screening, is amongst the novel ways in which data can be assimilated at an even higher rate [1]. However, these advances do not in themselves expand the yield and knowledge from screening expeditions, pointing to intrinsic factors beyond simply the attainable throughput. In this editorial, I reflect on the hits and misses of HTS and offer three personal perspectives on how screening efforts can be made more fruitful.

The pharmaceutical industry is heavily reliant on HTS for identifying hits from massive compound libraries. Screening represents an important early step in the arduous journey of drug development. Every year, millions of compounds are screened against thousands of putative targets in the search for bioactive ligands, both in academia and in industry. Yet the yields from such screens are often limited: only several hundred compounds may be identified as putative hits from libraries numbering ten thousand or more. Through lead optimization, preclinical testing and phased clinical trials, only one molecule in ten thousand may actually emerge as an approved therapeutic.
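
To put those round numbers side by side, here is a purely illustrative back-of-envelope calculation using the figures quoted above; the exact rates vary widely by target class and library quality:

```python
# Illustrative attrition funnel using the round numbers quoted above.
library_size = 10_000
putative_hits = 300                       # "several hundred" primary hits
hit_rate = putative_hits / library_size   # fraction of library called as hits
approval_rate = 1 / 10_000                # ~1 approved drug per 10,000 screened

print(f"primary hit rate: {hit_rate:.1%}")            # 3.0%
print(f"overall approval odds: {approval_rate:.2%}")  # 0.01%
```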

The role for large-scale screening was evident from the early 1990s [2,3], and its principle has changed little since then. Essentially, the problem that HTS addresses is this: how does one find one or more compounds with the desired properties for a particular target from within repertoires of thousands to millions? An analogy is finding the one key, from amongst millions, that fits a single lock. Thus arose the concept of massively parallel screening, fuelled by the rise of combinatorial chemistry in creating molecular diversity. HTS and combinatorial chemistry have been the two pillars driving drug discovery over the last two decades. Yet there is a general feeling in the field that these technologies have failed to deliver on their true potential. On close examination, there are ways in which HTS can be approached differently to improve its value to drug discovery. Let us examine here how this may be achieved.

The current state of the art in screening is well represented by 384-well and 1536-well plates, offering reaction volumes of 10-20 µl and 2.5-5 µl, respectively [4]. In this solution-phase format, virtually any conceivable assay can be monitored through luminescence, fluorescence or color changes using a diversity of plate-based instrumentation [5]. Robotic liquid handlers are purpose-built to perform micro-pipetting at blistering speeds, enabling push-button operations in which a single facility can conduct over a million screening assays within 1-3 months. Throughput can be increased further by screening on solid support, using microarrays and on-bead screening to reduce reactions to the nanoscale [6,7]. Generating large datasets for HTS is therefore no longer an obstacle.
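
As a rough sketch of the scale this implies, the estimate below uses the well formats and per-well volumes cited above; the daily plate count and run length are illustrative assumptions, not figures from the text:

```python
# Back-of-envelope estimate of plate-based HTS scale.
# Well formats and volume ranges follow the figures cited above;
# the plates-per-day rate is an illustrative assumption.

PLATE_FORMATS = {
    384: (10.0, 20.0),   # wells per plate: (min µl, max µl) per well
    1536: (2.5, 5.0),
}

def campaign_estimate(wells_per_plate, plates_per_day, days):
    """Estimate assay count and reagent consumption for a campaign."""
    vol_lo, vol_hi = PLATE_FORMATS[wells_per_plate]
    assays = wells_per_plate * plates_per_day * days
    litres_lo = assays * vol_lo * 1e-6   # convert µl to litres
    litres_hi = assays * vol_hi * 1e-6
    return assays, litres_lo, litres_hi

# A hypothetical facility running 100 x 1536-well plates a day
# clears a million assays in about a week:
assays, lo, hi = campaign_estimate(1536, plates_per_day=100, days=7)
print(f"{assays:,} assays, consuming {lo:.1f}-{hi:.1f} L of assay volume")
```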

While hits are the exciting output of such biological screens, the rest of the data is rarely consolidated and analyzed for the wealth of information it contains. A negative result is also information: it highlights molecular events and the chemical profiles of molecules that do not perturb the target, or perturb it only minimally. As Thomas Edison put it, “I have not failed, I have just found 10,000 ways that do not work.” Similarly, the information yielded from screens can translate into knowledge of off-target effects, experience that other practitioners can apply or replicate. With the hits taking the limelight, however, the majority of screening datasets remain undisclosed, unreported and unpublished.

My first suggestion is that we look at results from HTS systematically and holistically. By throwing away the bathwater and keeping only what we think is the baby, we cut ourselves off from the true contribution of HTS. Even a single-concentration screen of a thousand unpurified compounds against a single target can yield valuable information on molecular preferences. If this is done systematically, we will reach a point where we can comprehensively map interaction data from millions of compounds against thousands of proteins. Such a matrix will prove useful when evaluating the off-target effects of small-molecule hits across multiple screens [8]. It is considerably easier to find a ligand for a given target protein in vitro than it is to find drugs with pharmacokinetic profiles effective in vivo, within the human body; hence the need to understand the behavior of as many proteins as possible with small molecules. This information can be readily captured through HTS datasets.
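
A minimal sketch of how such a compound-by-target interaction matrix might be mined, negative results included, follows; the data are randomly generated and the hit and promiscuity thresholds are illustrative assumptions, not values from the text:

```python
import numpy as np

# Hypothetical compound x target activity matrix: rows are compounds,
# columns are targets, entries are normalised activities in [0, 1].
# Values come from a random generator purely for illustration.
rng = np.random.default_rng(0)
n_compounds, n_targets = 1000, 50
activity = rng.beta(a=0.5, b=5.0, size=(n_compounds, n_targets))

ACTIVE = 0.5          # assumed activity cutoff for calling a "hit"
hits = activity >= ACTIVE

# Negative results are information too: a row of mostly-inactive
# entries defines the chemical profile of a clean, selective compound,
# while promiscuous compounds light up many columns at once.
hits_per_compound = hits.sum(axis=1)
promiscuous = np.flatnonzero(hits_per_compound > 5)   # illustrative threshold
selective = np.flatnonzero(hits_per_compound == 1)

print(f"{len(selective)} single-target hits, "
      f"{len(promiscuous)} promiscuous compounds flagged for off-target risk")
```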

Taking HTS a step further, improved experimental design can help characterize hit selectivity at the outset, during the screening step itself, improving the selection of hits for downstream optimization. Traditionally, it is the subsequent lead-development phase that is tasked with conferring selectivity onto the selected hit [9]. That is to say, one looks for a potent molecule first during HTS, and only then looks into how to make it selective during lead development. This convention could be revisited: potency is not usually the deciding factor, and it is often the molecule’s selectivity that has an equal or greater impact on its success as a drug. Why then wait to address selectivity, if it can be done at the outset, within the HTS phase itself? Accordingly, there are ways to perform screens not only against targets but also against anti-targets upfront, ensuring that the hits selected meet the criteria of both potency and selectivity during HTS [10]. Such an approach would reduce hit attrition, even though it may come with an increased cost for the initial screening phase.
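
One way the target/anti-target idea could be encoded is as a joint filter applied at the primary-screen stage; a minimal sketch, assuming single-point percent-inhibition readouts, with hypothetical compound names and cutoffs:

```python
from dataclasses import dataclass

@dataclass
class ScreenResult:
    compound_id: str
    target_inhibition: float        # % inhibition against the target
    anti_target_inhibition: float   # % inhibition against the anti-target

# Illustrative cutoffs; real campaigns would set these from assay statistics.
POTENCY_CUTOFF = 50.0       # must inhibit the target at least this much
SELECTIVITY_CUTOFF = 20.0   # must inhibit the anti-target no more than this

def select_hits(results):
    """Keep compounds that are both potent (target) and selective (anti-target)."""
    return [r for r in results
            if r.target_inhibition >= POTENCY_CUTOFF
            and r.anti_target_inhibition <= SELECTIVITY_CUTOFF]

screen = [
    ScreenResult("CMPD-001", 85.0, 10.0),   # potent and selective -> kept
    ScreenResult("CMPD-002", 90.0, 75.0),   # potent but promiscuous -> rejected
    ScreenResult("CMPD-003", 30.0, 5.0),    # selective but weak -> rejected
]
print([r.compound_id for r in select_hits(screen)])   # ['CMPD-001']
```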

Thirdly, a lot of time and effort is spent on hit re-validation, confirming whether the acquired hits are in fact “real” hits. Many factors can contribute to false positives in HTS. A hit may bind to the target but not actually modulate its biological activity; or it may bind and modulate the activity of the target, but in an undesirable or suboptimal way. The challenge is thus to identify molecules that perturb the activity of the target in the desired manner, which often depends on the design of the bioassay being used. For simplicity, screening most often relies on target-binding assays, so the binding events observed may be attributable to small-molecule interactions at sites not responsible for target activity. One solution to this problem is to perform screening with both the native target and a functionally crippled analogue (that is, a denatured or mutated version of the target protein) [11,12]. If both the crippled and the native target bind the same hits, these are likely to be false positives, as such molecules are unlikely to produce the sought-after functional effects. This strategy enables the isolation of only functional hits within the HTS phase, diminishing false-positive rates.
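
A minimal sketch of this counter-screen logic, assuming normalised binding signals are available for each compound against both the native and the crippled target; the threshold and compound labels are illustrative:

```python
# Counter-screen logic: a compound that binds the denatured/mutated
# target as strongly as the native one is probably binding a site
# unrelated to function, so it is discarded as a likely false positive.

BIND = 0.5   # illustrative normalised binding-signal cutoff

def functional_hits(native_signal, crippled_signal):
    """Given compound -> signal maps for both forms, keep native-only binders."""
    keep = {}
    for cmpd, native in native_signal.items():
        crippled = crippled_signal.get(cmpd, 0.0)
        if native >= BIND and crippled < BIND:
            keep[cmpd] = native
    return keep

native = {"A": 0.9, "B": 0.8, "C": 0.2}
crippled = {"A": 0.1, "B": 0.85, "C": 0.1}
print(functional_hits(native, crippled))   # {'A': 0.9} -- B binds both, C binds neither
```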

To conclude, there is great untapped potential in HTS. While technologies will continue to expand screening capacity, the guiding principles offered here should make screening efforts far more fulfilling and productive. Screening is a high-risk process, and positive hits are never guaranteed. Even so, each HTS initiative provides useful information, and its screening results should be captured and valued. Better design of the overall screening effort would enable us to prioritize and focus on the hits most likely to succeed in the subsequent preclinical and clinical phases. These strategies could produce worthy outcomes from screening expeditions, ultimately increasing the rates of positive results from HTS and the yields from drug discovery.

References

1. Uttamchandani M, Yao SQ (2010) The expanding world of small molecule microarrays. Methods Mol Biol 669: 1-15.

2. Macarron R, Banks MN, Bojanic D, Burns DJ, Cirovic DA, et al. (2011) Impact of high-throughput screening in biomedical research. Nat Rev Drug Discov 10: 188-195.

3. Pereira DA, Williams JA (2007) Origin and evolution of high throughput screening. Br J Pharmacol 152: 53-61.

4. Mayr LM, Bojanic D (2009) Novel trends in high-throughput screening. Curr Opin Pharmacol 9: 580-588.

5. Bergese P, Cretich M, Oldani C, Oliviero G, Di Carlo G, et al. (2008) Advances in parallel screening of drug candidates. Curr Med Chem 15: 1706-1719.

6. Foong YM, Fu J, Yao SQ, Uttamchandani M (2012) Current advances in peptide and small molecule microarray technologies. Curr Opin Chem Biol 16: 234-242.

7. Maillard N, Clouet A, Darbre T, Reymond J-L (2009) Combinatorial libraries of peptide dendrimers: design, synthesis, on-bead high-throughput screening, bead decoding and characterization. Nat Protoc 4: 132-142.

8. Pritchard JF, Jurima-Romet M, Reimer ML, Mortimer E, Rolfe B, et al. (2003) Making better drugs: Decision gates in non-clinical drug development. Nat Rev Drug Discov 2: 542-553.

9. Bleicher KH, Böhm HJ, Müller K, Alanine AI (2003) Hit and lead generation: beyond high-throughput screening. Nat Rev Drug Discov 2: 369-378.

10. Dar AC, Das TK, Shokat KM, Cagan RL (2012) Chemical genetic discovery of targets and anti-targets for cancer polypharmacology. Nature 486: 80-84.

11. Uttamchandani M, Lee WL, Wang J, Yao SQ (2007) Quantitative inhibitor fingerprinting of metalloproteases using small molecule microarrays. J Am Chem Soc 129: 13110-13117.

12. Kodadek T (2010) Rethinking screening. Nat Chem Biol 6: 162-165.
