CAPRI (Critical Assessment of Predicted Interactions) is a community-wide experiment devoted to the prediction of protein complexes based on the structures of the component proteins.
The results for targets 43-58 were evaluated at the Fifth CAPRI Evaluation Meeting in Utrecht in April 2013, for 63 predictor groups and 12 automated docking servers.
The automated protein docking server ClusPro v2.0, developed by the groups of Dima Kozakov and Sandor Vajda, was the best in the server category. In particular, the server's performance was comparable to that of the best human predictor groups, even though the latter had access to all information available in the literature. A summary of the results is shown below. For each predictor group, the table shows the number of acceptable or better predictions and, among those, the number of high-quality models (three stars) and medium-quality solutions (two stars).
Thanks to its successful CAPRI participation, ClusPro v2.0 enjoys heavy usage by the academic community: in the last four years it has run more than 50,000 jobs for 4,000 registered and around 3,000 unregistered users. Although the number of CAPRI targets is still too small for any statistically significant conclusion, we believe that our results provide some information on the current state of automated protein docking.
Our main observations are as follows.
ClusPro reliably yields correct predictions for relatively “easy” targets with at most moderate conformational changes in the backbone. In addition to unbound proteins of known structure, such “easy” targets may include designed proteins obtained by mutating a few residues. Targets T50 and T53 were in this category, and ClusPro provided good results. The CAPRI community submitted many good predictions for targets T47, T48, T49, T50, T53, and T57, that is, exactly the ones ClusPro also predicted well, confirming that these targets are relatively easy. By this logic we should also have obtained an acceptable or better model for target T58, but the change in the backbone conformation of a lysozyme loop was too large for ClusPro. Other groups using rigid-body methods such as GRAMM were able to produce an acceptable model for T58, but only as a manual submission. The three remaining targets, T46, T51, and T54, which were difficult for ClusPro, were also difficult for the entire CAPRI community, resulting in very few acceptable submissions. As will be further discussed, all of these targets required homology modeling.
The quality of automated docking by ClusPro is very close to that of the best human predictor groups, including our own. We consider this very important, because servers have to submit results within 48 h and the predictions must be reproducible by the server, whereas human predictors have several weeks and can use any type of information. In Rounds 22–27 three predictor groups (Bonvin, Bates, and Vakser) did extremely well and submitted acceptable or better predictions for more than six targets. These three were followed by six groups with good predictions for six targets: Vajda (2*** + 3** + 1*), Fernandez-Recio (1*** + 3** + 2*), Shen (1*** + 3** + 2*), Zou (1*** + 2** + 3*), Zacharias (1*** + 5*), and ClusPro (4** + 2*). The only difference between ClusPro and the other five groups is that the human predictors were able to obtain high-accuracy predictions for T47 by template-based modeling. Since ClusPro does not have this option, it had to use direct docking and produced only a medium-accuracy model. We emphasize that in earlier rounds of CAPRI server predictions were substantially inferior to those of the human predictors; this is definitely not the case for ClusPro 2.0 in Rounds 22–27. However, ClusPro seems to be an exception, as for most other groups the manual submissions remain much better than the submissions from their servers.
As mentioned, our manual submissions were obtained by refining the ClusPro results using “stability analysis”, which requires a large number of relatively short Monte Carlo minimization (MCM) runs. In spite of the substantial computational effort, the improvements due to this refinement are moderate. Apart from T47, where obtaining high-accuracy predictions was trivial, the refinement improved predictions for only two targets, T53 and T57. However, it appears that refining predictions to high accuracy was generally very difficult for all targets (again, not counting T47). In fact, the only high-accuracy model submitted by any group for any target in Rounds 22–27 was our manual submission for target T53.
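The MCM protocol underlying such refinement can be sketched on a toy one-dimensional energy function. This is only a schematic illustration of the perturb/minimize/Metropolis-accept cycle, not ClusPro's refinement code; the step size, temperature, and the crude gradient-descent minimizer are all assumptions made for the example.

```python
import math
import random

def numeric_grad(f, x, h=1e-5):
    """Central-difference numerical gradient of a 1-D function."""
    return (f(x + h) - f(x - h)) / (2 * h)

def local_minimize(f, x, lr=0.01, iters=300):
    """Crude gradient descent, standing in for the local energy
    minimization step of a real MCM protocol."""
    for _ in range(iters):
        x -= lr * numeric_grad(f, x)
    return x

def mcm(f, x0, steps=50, kT=0.5, move=2.0, seed=1):
    """Toy Monte Carlo minimization: random move, local minimization,
    Metropolis accept/reject. Returns the best state encountered."""
    rng = random.Random(seed)
    x = local_minimize(f, x0)
    e = f(x)
    best_x, best_e = x, e
    for _ in range(steps):
        cand = local_minimize(f, x + rng.uniform(-move, move))
        ec = f(cand)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if ec <= e or rng.random() < math.exp(-(ec - e) / kT):
            x, e = cand, ec
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e
```

On a double-well energy such as `f(x) = (x*x - 1)**2 + 0.3*x`, many short runs of this kind started from a docked pose probe whether the pose sits in a broad, stable basin, which is the intuition behind the stability analysis.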
A new development, not seen in previous rounds of CAPRI, is that the top-ranked model M01 provided by ClusPro was of acceptable or better quality for all six targets that ClusPro was able to predict. M01 was also the highest-quality model for five of these six targets; the only exception was T48, where models M06 and M07 were of medium quality while model M01 was only acceptable. Given the very small number of targets the generality of this observation is far from clear, but it suggests that ranking predictions by cluster size can reliably identify the highest-accuracy models.
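The cluster-size ranking idea can be illustrated with a minimal greedy scheme: the pose with the most neighbors within an RMSD radius becomes the top-ranked cluster center, its neighborhood is removed, and the process repeats. This is a simplified sketch, not ClusPro's actual clustering code, and the radius is an assumed parameter.

```python
def greedy_cluster_rank(poses, rmsd, radius=9.0):
    """Greedily cluster docked poses and rank cluster centers by size.

    poses  : list of pose identifiers
    rmsd   : N x N symmetric matrix (list of lists) of pairwise RMSDs
    radius : clustering radius in angstroms (assumed value)
    Returns [(center_pose, cluster_size), ...] by decreasing size.
    """
    remaining = set(range(len(poses)))
    ranked = []
    while remaining:
        # The pose with the most remaining neighbors within the radius
        # becomes the next cluster center.
        center = max(remaining,
                     key=lambda i: sum(1 for j in remaining
                                       if rmsd[i][j] <= radius))
        members = {j for j in remaining if rmsd[center][j] <= radius}
        ranked.append((poses[center], len(members)))
        remaining -= members
    return ranked
```

Ranking by cluster size rather than raw energy rewards poses that sit in broad, well-populated energy basins, which is the rationale for trusting M01.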
The most difficult targets, T46, T51, and T54, required the construction of homology models based on templates with only moderate sequence identity. The poor results for these targets, both by ClusPro and by the rest of the CAPRI community, show that the quality of homology models plays a critical role in docking. For example, while ClusPro did not produce any acceptable prediction for target T54 with the models we constructed, an acceptable submission was found by the Shen group, who also relied on ClusPro for the initial docking but used a better homology model. Thus, there is a need for methods specifically designed for docking homology models, for example by further reducing the sensitivity of the scoring function to steric clashes involving mutated side chains and predicted loop regions.
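One common way to reduce clash sensitivity is to cap the repulsive wall of a Lennard-Jones-like term, so that overlaps caused by modeled side chains or loops incur a bounded penalty instead of an effectively infinite one. The snippet below is a generic soft-core illustration of that idea; the functional form, parameters, and units are assumptions, not ClusPro's actual scoring function.

```python
def lj_energy(r, sigma=3.5, eps=0.1):
    """Standard 12-6 Lennard-Jones pair energy (illustrative parameters)."""
    s = (sigma / r) ** 6
    return 4 * eps * (s * s - s)

def softcore_energy(r, sigma=3.5, eps=0.1, cap=1.0):
    """Same potential with the repulsive wall capped at `cap`, so a
    steric clash from a mispredicted side chain is penalized only
    up to a fixed amount."""
    return min(lj_energy(r, sigma, eps), cap)
```

At non-clashing distances the two terms agree; only inside the repulsive wall does the soft-core version diverge, which is what keeps near-native poses built on imperfect homology models from being discarded.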
References
Lensink MF, Wodak SJ. 2013. Docking, scoring, and affinity prediction in CAPRI. Proteins: Structure, Function, and Bioinformatics.
Kozakov D, Beglov D, Bohnuud T, Mottarella SE, Xia B, Hall DR, Vajda S. 2013. How good is automated protein docking? Proteins: Structure, Function, and Bioinformatics.