This chapter explores the multiple dimensions of data-driven ontology evaluation. On both theoretical and empirical grounds, it proposes two ontology evaluation metrics, temporal bias and category bias, together with an evaluation approach geared towards accounting for bias in data-driven ontology evaluation. Ontologies are a key technology of the Semantic Web: they are an approximate representation and formalization of a domain of discourse in a manner that is interpretable by both machines and humans. Ontology evaluation, therefore, concerns itself with measuring the degree to which an ontology approximates its domain. In data-driven ontology evaluation, the correctness of an ontology is measured against a corpus of documents about the domain. This domain knowledge is dynamic and evolves along several dimensions, such as the temporal and the categorical. Current research assumes the contrary and hence does not account for the existence of bias in ontology evaluation. This chapter addresses this gap through experimentation and statistical evaluation.
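The idea of measuring an ontology against a corpus, and the way temporal bias can hide inside an aggregate score, can be illustrated with a minimal sketch. All data, term lists, and the `coverage` function below are hypothetical illustrations, not the chapter's actual metrics: coverage here is simply the fraction of ontology labels found in a corpus slice, computed separately per time period.

```python
# Hypothetical sketch of data-driven evaluation by lexical coverage:
# the fraction of ontology class labels that appear in a corpus.
# Scoring each temporal slice separately (rather than one pooled corpus)
# is what exposes temporal variation in the evaluation result.

def coverage(ontology_terms, documents):
    """Fraction of ontology terms appearing in at least one document."""
    text = " ".join(doc.lower() for doc in documents)
    found = sum(1 for term in ontology_terms if term.lower() in text)
    return found / len(ontology_terms)

# Invented ontology labels and a corpus sliced by year.
terms = ["smartphone", "tablet", "floppy disk", "modem"]
corpus_by_year = {
    1995: ["The modem connects after a floppy disk driver update."],
    2015: ["Smartphone and tablet sales keep rising this quarter."],
}

for year, docs in sorted(corpus_by_year.items()):
    print(year, coverage(terms, docs))
```

A single score over the pooled corpus would report full coverage and mask the fact that each slice supports only half of the ontology's vocabulary, which is the kind of effect a temporal-bias metric is meant to surface.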