
Philosophy Writings

In expositions of Probability Theory, probability functions are defined either on sets of a sigma algebra or on propositions of a propositional language. This paper advocates defining probability functions on propositions. To this end, it will be shown that probability functions on propositions are natural (invariant-preserving) generalizations of evaluation functions from propositional logic, giving an elegant grounding and foundation for the subject. In addition, I show that elementary outcomes can be understood as special kinds of propositions, thereby making a propositional approach to probability theory more fundamental.
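As a minimal sketch of the generalization claim (the notation and axioms below are my own gloss, not necessarily the paper's own axiomatization): take a probability function on a propositional language \(\mathcal{L}\) to be a map \(P : \mathcal{L} \to [0,1]\) satisfying
\[
P(\top) = 1, \qquad P(A \lor B) = P(A) + P(B) \ \text{whenever } A \land B \text{ is unsatisfiable}, \qquad P(A) = P(B) \ \text{whenever } A \equiv B.
\]
A classical evaluation function \(v : \mathcal{L} \to \{0,1\}\) satisfies all three conditions, so every evaluation is a two-valued probability function, and probability functions are one way of cashing out the \([0,1]\)-valued generalization the abstract describes.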

In this essay I lay bare how it was the pulling apart of is and ought in the Western tradition that opened its existential vacuum. By comparing and contrasting Hasidic Jewish ontology and Western ontologies, I show that radical individualism is the reason this existential vacuum has not been mended. To commence the analysis, I introduce fundamental tenets of Heideggerian ontology. I then build on this foundation to provide an ontological analysis of Tanya. Next, I introduce the is-ought problem and show how it is connected to Heideggerian ontology. With an understanding of Jewish and Western ontologies in hand, I conclude by turning our attention to the differences and similarities between the origins of Jewish and Western meaning.

I argue that Derrida’s “Whom To Give To (Knowing not to Know)” reflects a deep epistemic crisis in ethics revealed by the moral horror of the collective fanaticism omnipresent in Nazi Germany. First, I outline the crisis explicitly by borrowing language from Viktor Frankl. Then, I demonstrate how Derrida's reading of Kierkegaard's analysis of the Binding of Isaac reflects and affirms this crisis. Finally, I summarize the meaning of the epistemic crisis in ethics that Derrida deconstructs. The lesson we can learn from his analysis is profound.


In this paper I explore how Hitler's Nazism throws the ideals of the Enlightenment back in our faces. The ideal of progress and the veneration of science and art are part and parcel of his justification for his atrocious program. What are we to make of this? While I believe the Enlightenment was a force for good, I try my best to develop a thought-provoking account that calls the legacy of the Enlightenment into question. I disagree with what I have written here, because the Enlightenment's valuation of human life is crucially missing from the account, and its absence undermines it. Nonetheless, I believe the essay is at least thought-provoking.

‘What is explanation?’ That is the question theories of explanation attempt to answer. Based on a 2018 collection of essays by prominent thinkers in the philosophy of explanation, the current “most promising… monist approach [to account for explanation] are counterfactual theories” (Reutlinger and Saatsi 77). In this paper I aim to show why counterfactual theories of explanation are incorrect. They are incorrect, I claim, because there are examples of non-causal explanation in which counterfactual dependence does not figure into the account.

How we structure our mental representations is a central problem in the philosophy of mind and the development of artificial intelligence. In a landmark paper, Fodor and Pylyshyn argue that mental representations are organized according to a classical rather than a connectionist architecture. At first glance, Fodor and Pylyshyn's argument flies in the face of the neural network hype that characterizes contemporary AI efforts. In this paper I argue that their argument, while mostly sound, does not support as strong a claim as it appears to. In particular, I argue that their proof that our minds obey a classical architecture is not equivalent to a proof that our minds have a language of thought.
