Abstract:
This study investigates the philosophical, linguistic and computational dimensions of Leibniz’s
ambitious scheme of mathematizing natural language and automating reasoning for the purposes
of developing what he referred to as the ‘Encyclopaedia of human knowledge.’ Leibniz envisioned
a universal language capable of perfectly mirroring reality and representing human knowledge
through symbolism, a language he called the characteristica universalis. In a
complementary manner, he sought to develop an automated framework for logical inferences
through what he referred to as the calculus ratiocinator. This study appraises the feasibility of
Leibniz’s ideas in natural discourse. It provides an in-depth analysis of the historical perspectives
on natural language from which Leibniz's ideas emerged. It also surveys the attempts made
to develop his universal language and the reasoning calculus. The study discusses the
extent to which Leibniz's project has been realized in the advancement of artificial
intelligence research and of automated reasoning in computer science and mathematics. It also
evaluates the capability of Leibniz's formalism to eliminate ambiguity or, more generally,
semantic indeterminacy. In addition, the study explores linguistic models of ambiguity
resolution that enable speakers to use natural language efficiently. In light of these models,
the study concludes by indicating the limitations of Leibniz's characteristica
universalis and calculus ratiocinator in natural discourse. The study also
highlights the significance of natural language, despite its complexities, in natural discourse
settings.