
Timothy Pollock

 

VC Reputation Index

Terms of Use

Lee-Pollock-Jin VC Reputation Index

© 2011 Peggy M. Lee, Timothy G. Pollock and Kyuho Jin

THIS DATA IS BEING MADE PUBLICLY AVAILABLE to individuals for scholarly research purposes only. It is not to be used by profit-making entities for any purpose, or by individuals for any purpose other than scholarly research. The data may be used by non-profit entities in certain select circumstances, but only with the express prior written consent of the owners, which may be granted or denied for any reason or no reason. Any written or published work using the data must acknowledge the owners as the source of the data and reference the following publication:

Lee, P.M., Pollock, T.G. & Jin, K. 2011. The contingent value of venture capitalist reputation for entrepreneurial firms. Strategic Organization, 9(1): 33-69.

 

Acknowledgement of Support

We gratefully acknowledge the financial support of the Oxford University Centre for Corporate Reputation, which allowed us to update our original index for the 1990-2000 time period and to extend our index through 2010.


 

LPJ VC Reputation Index

1990-2000 (Excel 2007 Spreadsheet)

2001-2010 (Excel 2007 Spreadsheet)

 

Information About the Construction and Validation of the LPJ VC Reputation Index

The LPJ VC reputation index is a multi-item, time-varying index of formative indicators of VC firm reputation. The index is calculated annually for the period 1990-2010 and covers approximately 500 to 1,300 venture capital firms, depending on the year. For venture capital firms less than five years old, we used all data available to date. The index comprises six indicator variables.

To create the reputation index, we standardized all our measures by transforming them into z-scores so that scaling was comparable when the various measures were aggregated. We computed a Cronbach’s alpha for each year to assess the reliability of the variables. The Cronbach’s alpha exceeded 0.80 every year, which is considered satisfactory for exploratory research (DeVellis, 1991; Nunnally, 1967).
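The standardization and reliability check described above can be sketched as follows. This is our own illustrative implementation, not the authors' code: the data are synthetic stand-ins for one year's six indicators, and `cronbach_alpha` is a hypothetical helper implementing the standard formula.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_firms, n_items) matrix:
    (k / (k - 1)) * (1 - sum of item variances / variance of the total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic stand-in for one year's six reputation indicators:
# a shared latent component plus item-specific noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
raw = latent + 0.5 * rng.normal(size=(200, 6))

# Standardize each indicator to a z-score so scaling is comparable
# when the measures are aggregated.
z = (raw - raw.mean(axis=0)) / raw.std(axis=0, ddof=1)

alpha = cronbach_alpha(z)
```

With strongly correlated items like these, alpha comfortably exceeds the 0.80 threshold the authors report.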

For the initial validation of items, we empirically appraised the underlying factor structure by means of exploratory factor analysis (EFA). First, we evaluated the factorability of the correlation matrix by examining the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy (Sharma, 1996). The KMO measures were over 0.80 each year, suggesting the correlation matrix is appropriate for factoring (Tabachnick & Fidell, 2001).
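The overall KMO statistic can be computed from the correlation matrix and the partial correlations derived from its inverse. The sketch below is a minimal NumPy implementation on synthetic data, assuming the standard KMO formula; it is not the authors' code.

```python
import numpy as np

def kmo(data: np.ndarray) -> float:
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy."""
    corr = np.corrcoef(data, rowvar=False)
    inv = np.linalg.inv(corr)
    # Partial correlations come from the inverse correlation matrix:
    # p_ij = -inv_ij / sqrt(inv_ii * inv_jj)
    scale = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / scale
    # KMO uses only off-diagonal entries.
    np.fill_diagonal(corr, 0.0)
    np.fill_diagonal(partial, 0.0)
    r2 = (corr ** 2).sum()
    p2 = (partial ** 2).sum()
    return r2 / (r2 + p2)

# Synthetic one-factor data standing in for the six indicators.
rng = np.random.default_rng(1)
latent = rng.normal(size=(300, 1))
items = latent + 0.6 * rng.normal(size=(300, 6))
kmo_value = kmo(items)
```

Values above 0.80, as the authors observed each year, indicate a correlation matrix well suited to factoring.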

Second, we examined item factor loadings to decide whether to retain all the initial items. All factor loadings achieved acceptable levels (Hair et al., 1979; Kellow, 2006; Worthington & Whittaker, 2006). Thus, we decided to retain all six items for our index.
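One simple way to inspect loadings on a single factor is via the first principal component of the correlation matrix. This is only an approximation of the EFA loadings the authors examined, shown here on synthetic data; the 0.40 cutoff in the test is a common rule of thumb, not a threshold the source states.

```python
import numpy as np

# Synthetic stand-in for the six indicators with one underlying factor.
rng = np.random.default_rng(2)
latent = rng.normal(size=(500, 1))
items = latent + 0.7 * rng.normal(size=(500, 6))

corr = np.corrcoef(items, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)      # eigenvalues in ascending order

# Loadings on the first (largest) component: eigenvector scaled by
# the square root of its eigenvalue. Eigenvector sign is arbitrary.
loadings = np.abs(eigvecs[:, -1] * np.sqrt(eigvals[-1]))
```

When all items share one strong factor, every loading lands well above conventional cutoffs, consistent with the authors' decision to retain all six items.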

Third, we investigated the underlying dimensionality of the item set and confirmed that there was only one factor (i.e., the measures achieve uni-dimensionality) based on both the Kaiser-Guttman rule and parallel analysis (O'Connor, 2000; Sharma, 1996).
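Both retention rules mentioned above can be sketched in a few lines: the Kaiser-Guttman rule counts eigenvalues above 1, and parallel analysis counts leading eigenvalues that exceed the corresponding percentile from random data. This is an illustrative implementation on synthetic data, not the O'Connor (2000) programs the authors used.

```python
import numpy as np

def n_factors(data: np.ndarray, n_sims: int = 100, seed: int = 0) -> tuple:
    """Factor counts by the Kaiser-Guttman rule and by parallel analysis."""
    n, k = data.shape
    eig = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    kaiser = int((eig > 1.0).sum())
    # Parallel analysis: eigenvalues of uncorrelated random data of the
    # same shape; keep real factors whose eigenvalues beat the 95th percentile.
    rng = np.random.default_rng(seed)
    sims = np.empty((n_sims, k))
    for i in range(n_sims):
        noise = rng.normal(size=(n, k))
        sims[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False)))[::-1]
    thresholds = np.percentile(sims, 95, axis=0)
    parallel = 0
    for e, t in zip(eig, thresholds):
        if e > t:
            parallel += 1
        else:
            break
    return kaiser, parallel

# Synthetic one-factor data standing in for the six indicators.
rng = np.random.default_rng(3)
latent = rng.normal(size=(300, 1))
items = latent + 0.7 * rng.normal(size=(300, 6))
kaiser_n, parallel_n = n_factors(items)
```

On unidimensional data both rules agree on a single factor, mirroring the result the authors report.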

We also conducted a confirmatory factor analysis (CFA) to further verify the validity of the factor structure following Hu and Bentler’s (1999) recommendations. This analysis showed that our factor model fit well across each year, confirming the validity of the one factor model and enabling us to create a single measurement scale by aggregating all the items.

Finally, to create an intuitive measure that is comparable across years, we normalized the scores within each year on a 100-point scale. Since our index scores can take on negative values, within each year we added a constant to all VC reputation scores equal to the absolute value of the lowest reputation index score calculated for that year, plus 0.01. We then divided each VC reputation score by the highest score observed that year. This preserved the relative relationships among VCs within each year while making values comparable across years.
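The within-year rescaling can be sketched as below, on hypothetical scores. One assumption to flag: the text says scores are divided by the yearly maximum, which yields values in (0, 1]; multiplying by 100 to obtain the stated 100-point scale is our reading, not something the source spells out.

```python
import numpy as np

def rescale_year(scores: np.ndarray) -> np.ndarray:
    """Shift one year's raw index scores so the minimum is 0.01,
    then scale so the top firm scores 100 (assumes a negative minimum)."""
    shifted = scores + abs(scores.min()) + 0.01
    return 100.0 * shifted / shifted.max()

# Hypothetical raw index scores for one year (z-score sums can be negative).
raw = np.array([-2.3, -0.4, 0.0, 1.8, 5.6])
rescaled = rescale_year(raw)
```

The transformation is a positive affine map, so it preserves the ranking of firms within the year while pinning every year to the same (0, 100] range.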

References

DeVellis, R. F. 1991. Scale development: Theory and applications. Newbury Park, CA: Sage Publications.

Hair, J. F., Anderson, R. E., Tatham, R. L., & Grablowsky, B. J. 1979. Multivariate data analysis. Tulsa, OK: Petroleum Publishing Co.

Hu, L. & Bentler, P. M. 1999. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6: 1-55.

Kellow, J. T. 2006. Using principal components analysis in program evaluation: Some practical considerations. Journal of MultiDisciplinary Evaluation, 5: 89-107.

Nunnally, J. C. 1967. Psychometric Theory. New York: McGraw-Hill.

O'Connor, B. P. 2000. SPSS and SAS programs for determining the number of components using parallel analysis and Velicer's MAP test. Behavior Research Methods, Instruments, & Computers, 32(3): 396-402.

Sharma, S. 1996. Applied Multivariate Techniques. New York: John Wiley & Sons.

Tabachnick, B. G. & Fidell, L. S. 2001. Using multivariate statistics (4th ed.). Boston, MA: Allyn and Bacon.

Worthington, R. L. & Whittaker, T. A. 2006. Scale development research: A content analysis and recommendations for best practices. The Counseling Psychologist, 34(6): 806-838.

 

If you have any questions about my work, please contact me at tpollock@psu.edu.

 

