Speakers:
Biyu Jade He (NYU Medical Center)
Hakwan Lau (UCLA)
Victor Lamme (University of Amsterdam)
Johannes Fahrenfort (University of Amsterdam)
Where in the brain are the neural correlates of perceptual consciousness? Some leading theories of consciousness, including global workspace and higher-order thought theories, hold that these correlates centrally involve prefrontal cortex. Other leading theories, including first-order and integrated information theories, hold that these correlates centrally involve sensory cortices, with prefrontal cortex playing at most a secondary role. In recent years, much experimental evidence has been brought to bear on both sides of the question.
In this debate, Hakwan Lau (UCLA) and Biyu Jade He (NYU) will defend the view that neural activity in prefrontal cortex is important for conscious perception, while Victor Lamme (Amsterdam) and Johannes Fahrenfort (Amsterdam) will argue that prefrontal activity is not important for conscious perception.
NYU’s Center for Mind, Brain, and Consciousness will host a debate on the relationship between memory and imagination.
This event will be held in person at Jurow Hall, Silver Center, 31 Washington Place, and will also be streamed over Zoom at: tinyurl.com/nyumemory
Attendance is free, but registration (requiring proof of vaccination) is required for non-NYU guests. Please register no later than April 25th at: forms.gle/tNqkBYPDcZxTdxY38
The NYU Mind, Ethics, and Policy Program is thrilled to be hosting a talk by David Chalmers on whether large language models can be sentient.
About the talk
Artificial intelligence systems—especially large language models, giant neural networks trained to predict text from the internet—have recently shown remarkable abilities. There has been widespread discussion of whether some of these language models might be sentient. Should we take this idea seriously? David Chalmers will discuss the underlying issue and try to break down the strongest reasons for and against.
The talk, which is free and open to the public, will take place on October 13 2022 from 5:00-6:30pm ET. The in-person location will be Jurow Lecture Hall (inside the Silver Center at 32 Waverly Place), and the virtual location will be Zoom (you can sign up to receive a link by clicking “Register here” below). There will also be a light reception from 6:30-7:30pm in the Silverstein Lounge (immediately outside of the Jurow Lecture Hall).
– If you plan to attend in person, please be prepared to show proof of full vaccination.
– If you plan to attend virtually, please check your email for a link in advance of the event.
About the speaker
David Chalmers is University Professor of Philosophy and Neural Science and co-director of the Center for Mind, Brain, and Consciousness at NYU. He is the author of The Conscious Mind (1996), Constructing the World (2012), and Reality+: Virtual Worlds and the Problems of Philosophy (2022). He co-founded the Association for the Scientific Study of Consciousness and the PhilPapers Foundation. He is known for formulating the “hard problem” of consciousness, which inspired Tom Stoppard’s play The Hard Problem, and for the idea of the “extended mind,” which says that the tools we use can become parts of our minds.
Thank you to our co-sponsors for your generous support of this event:
- NYU Center for Bioethics
- NYU Center for Mind, Brain, and Consciousness
- NYU Minds, Brains, and Machines Initiative
This talk explores the reflexive nature of consciousness, which consists primarily in the fact that a state of consciousness has a reflexive relation to the subject who has that state, so that the subject can typically be aware of itself as having that state. Comparing Kant’s, Fichte’s, and selected contemporary analytic theories of this reflexivity shows that there is a crucial difference in the way the relation between form (or mode) and content of a state of consciousness is conceived. The first part examines Kant’s formal theory of consciousness: reflexivity is understood not in terms of a self-referential content resulting from a reflection on the state of the subject, but as the universal transcendental form that any content must have in order to be representationally significant and potentially conscious to the subject. The second part examines Fichte’s departure from Kant in his theory of a self-positing consciousness: in the original act of self-positing, the mere form of reflexivity is turned into a self-referential content that determines the subject as an object from the absolute standpoint of consciousness. The third part examines analytic theories that explain the reflexivity (or what is often called the subjective character) of consciousness on a model of mental indexicality. These theories tend to reduce reflexivity to an objective constituent of content that, although often implicit, can be read off from the subject’s contextual situatedness in nature. In conclusion, Kant’s theory can be understood as a moderate, human-centered kind of perspectivism that navigates between Fichtean absolute subjectivity and a naturalist absolute objectivity.
Registration is free but required. A registration link will be shared via email with our department mailing lists a few weeks before the event. Please contact Jack Mikuszewski at jhm378@nyu.edu if you did not receive a registration link.
The Philosophy Department provides reasonable accommodations to people with disabilities. Requests for accommodations should be submitted to philosophy@nyu.edu at least two weeks before the event.
A two-day conference on the philosophy of deep learning, organized by Ned Block (New York University), David Chalmers (New York University) and Raphaël Millière (Columbia University), and jointly sponsored by the Presidential Scholars in Society and Neuroscience program at Columbia University and the Center for Mind, Brain, and Consciousness at New York University.
About
The conference will explore current issues in AI research from a philosophical perspective, with particular attention to recent work on deep artificial neural networks. The goal is to bring together philosophers and scientists who are thinking about these systems in order to gain a better understanding of their capacities, their limitations, and their relationship to human cognition.
The conference will focus especially on topics in the philosophy of cognitive science (rather than on topics in AI ethics and safety). It will explore questions such as:
- What cognitive capacities, if any, do current deep learning systems possess?
- What cognitive capacities might future deep learning systems possess?
- What kind of representations can we ascribe to artificial neural networks?
- Could a large language model genuinely understand language?
- What do deep learning systems tell us about human cognition, and vice versa?
- How can we develop a theoretical understanding of deep learning systems?
- How do deep learning systems bear on philosophical debates such as rationalism vs. empiricism and classical vs. nonclassical views of cognition?
- What are the key obstacles on the path from current deep learning systems to human-level cognition?
A pre-conference debate on Friday, March 24th will tackle the question “Do large language models need sensory grounding for meaning and understanding?”. Speakers include Jacob Browning (New York University), David Chalmers (New York University), Yann LeCun (New York University), and Ellie Pavlick (Brown University / Google AI).
Conference speakers
- Cameron Buckner (University of Houston)
- Rosa Cao (Stanford University)
- Ishita Dasgupta (DeepMind)
- Nikolaus Kriegeskorte (Columbia University)
- Brenden Lake (New York University / Meta AI)
- Grace Lindsay (New York University)
- Tal Linzen (New York University / Google AI)
- Raphaël Millière (Columbia University)
- Nicholas Shea (Institute of Philosophy, University of London)
Call for abstracts
We invite abstract submissions for a few short talks and poster presentations related to the topic of the conference. Submissions from graduate students and early career researchers are particularly encouraged. Please send a title and abstract (500-750 words) to phildeeplearning@gmail.com by January 22nd, 2023 (11:59pm EST).
https://philevents.org/event/show/106406
Yejin Choi is Wissner-Slivka Professor and a MacArthur Fellow at the Paul G. Allen School of Computer Science & Engineering at the University of Washington. She is also a senior director at AI2 overseeing the project Mosaic and a Distinguished Research Fellow at the Institute for Ethics in AI at the University of Oxford. Her research investigates whether (and how) AI systems can learn commonsense knowledge and reasoning, whether machines can (and should) learn moral reasoning, and various other problems in NLP, AI, and Vision, including neuro-symbolic integration, language grounding with vision and interactions, and AI for social good. She is a co-recipient of 2 Test of Time Awards (at ACL 2021 and ICCV 2021), 7 Best/Outstanding Paper Awards (at ACL 2023, NAACL 2022, ICML 2022, NeurIPS 2021, AAAI 2019, and ICCV 2013), the Borg Early Career Award (BECA) in 2018, the inaugural Alexa Prize Challenge in 2017, and IEEE AI’s 10 to Watch in 2016.