Is Memory Continuous with Imagination? Debate Roundtable @ Jurow Hall, NYU
May 9 @ 5:00 pm – 7:00 pm

NYU’s Center for Mind, Brain, and Consciousness will host a debate on the relationship between memory and imagination.

This event will be held in person at Jurow Hall, Silver Center, 31 Washington Place, and will also be streamed over Zoom at: tinyurl.com/nyumemory

Attendance is free but registration (requiring proof of vaccination) is required for non-NYU guests. Please register no later than April 25th at: forms.gle/tNqkBYPDcZxTdxY38

Are Large Language Models Sentient? David Chalmers @ Jurow Lecture Hall, Silver Center, NYU
Oct 13 @ 5:00 pm – 6:30 pm

The NYU Mind, Ethics, and Policy Program is thrilled to be hosting a talk by David Chalmers on whether large language models can be sentient.

About the talk
Artificial intelligence systems—especially large language models, giant neural networks trained to predict text from the internet—have recently shown remarkable abilities. There has been widespread discussion of whether some of these language models might be sentient. Should we take this idea seriously? David Chalmers will discuss the underlying issue and try to break down the strongest reasons for and against.

The talk, which is free and open to the public, will take place on October 13, 2022 from 5:00-6:30pm ET. The in-person location will be Jurow Lecture Hall (inside the Silver Center at 32 Waverly Place), and the virtual location will be Zoom (you can sign up to receive a link by clicking “Register here” below). There will also be a light reception from 6:30-7:30pm in the Silverstein Lounge (immediately outside of the Jurow Lecture Hall).

– If you plan to attend in person, please be prepared to show proof of full vaccination.
– If you plan to attend virtually, please check your email for a link in advance of the event.

About the speaker
David Chalmers is University Professor of Philosophy and Neural Science and co-director of the Center for Mind, Brain, and Consciousness at NYU. He is the author of The Conscious Mind (1996), Constructing the World (2010), and Reality+: Virtual Worlds and the Problems of Philosophy (2022). He co-founded the Association for the Scientific Study of Consciousness and the PhilPapers Foundation. He is known for formulating the “hard problem” of consciousness, which inspired Tom Stoppard’s play The Hard Problem, and for the idea of the “extended mind,” which says that the tools we use can become parts of our minds.

Thank you to our co-sponsors for your generous support of this event:

  • NYU Center for Bioethics

  • NYU Center for Mind, Brain, and Consciousness

  • NYU Minds, Brains, and Machines Initiative

The Reflexivity of Consciousness in Kant, Fichte and Beyond. Katharina Kraus (Johns Hopkins) @ NYU Philosophy Dept.
Feb 17 @ 3:30 pm – 5:30 pm


The Philosophy of Deep Learning @ Center for Mind, Brain, and Consciousness
Mar 25 – Mar 26 all-day

A two-day conference on the philosophy of deep learning, organized by Ned Block (New York University), David Chalmers (New York University) and Raphaël Millière (Columbia University), and jointly sponsored by the Presidential Scholars in Society and Neuroscience program at Columbia University and the Center for Mind, Brain, and Consciousness at New York University.

About

The conference will explore current issues in AI research from a philosophical perspective, with particular attention to recent work on deep artificial neural networks. The goal is to bring together philosophers and scientists who are thinking about these systems in order to gain a better understanding of their capacities, their limitations, and their relationship to human cognition.

The conference will focus especially on topics in the philosophy of cognitive science (rather than on topics in AI ethics and safety). It will explore questions such as:

  • What cognitive capacities, if any, do current deep learning systems possess?
  • What cognitive capacities might future deep learning systems possess?
  • What kind of representations can we ascribe to artificial neural networks?
  • Could a large language model genuinely understand language?
  • What do deep learning systems tell us about human cognition, and vice versa?
  • How can we develop a theoretical understanding of deep learning systems?
  • How do deep learning systems bear on philosophical debates such as rationalism vs. empiricism and classical vs. nonclassical views of cognition?
  • What are the key obstacles on the path from current deep learning systems to human-level cognition?

A pre-conference debate on Friday, March 24th will tackle the question “Do large language models need sensory grounding for meaning and understanding?”. Speakers include Jacob Browning (New York University), David Chalmers (New York University), Yann LeCun (New York University), and Ellie Pavlick (Brown University / Google AI).


Call for abstracts

We invite abstract submissions for a few short talks and poster presentations related to the topic of the conference. Submissions from graduate students and early career researchers are particularly encouraged. Please send a title and abstract (500-750 words) to phildeeplearning@gmail.com by January 22nd, 2023 (11:59pm EST).

 

https://philevents.org/event/show/106406

Afternoon Talk with Professor Yejin Choi @ NYU room 801
Sep 6 @ 4:00 pm – 5:30 pm

Yejin Choi is the Wissner-Slivka Professor and a MacArthur Fellow at the Paul G. Allen School of Computer Science & Engineering at the University of Washington. She is also a senior director at AI2, overseeing the Mosaic project, and a Distinguished Research Fellow at the Institute for Ethics in AI at the University of Oxford. Her research investigates whether (and how) AI systems can learn commonsense knowledge and reasoning, whether machines can (and should) learn moral reasoning, and various other problems in NLP, AI, and vision, including neuro-symbolic integration, language grounding with vision and interactions, and AI for social good. She is a co-recipient of two Test of Time Awards (at ACL 2021 and ICCV 2021), seven Best/Outstanding Paper Awards (at ACL 2023, NAACL 2022, ICML 2022, NeurIPS 2021, AAAI 2019, and ICCV 2013), the Borg Early Career Award (BECA) in 2018, the inaugural Alexa Prize Challenge in 2017, and IEEE AI’s 10 to Watch in 2016.