International Symposium on Computational Intelligence (ISCI) 2025 - Ethical AI
Join us for the International Symposium on Computational Intelligence (ISCI) 2025, a premier virtual half-day event dedicated to advancing the field of Ethical AI. This symposium brings together leading researchers, practitioners, and thought leaders to explore cutting-edge developments in trustworthy, explainable, and responsible artificial intelligence.
What to Expect
- Expert Presentations - 30-45 minute talks from internationally recognized speakers
- Interactive Q&A Sessions - 15-minute Q&A following each presentation
- Global Perspective - Speakers from India, UK and beyond
- Live Streaming - Real-time access via YouTube with archived sessions on IEEE.tv
Talks & Speakers
- Toward Realizing User-Level Differential Privacy at Scale - Dr. Krishna Pillutla, IIT Madras, India
- Towards Logical Foundations for AI Ethics - Dr. Shrisha Rao & T.V. Priya, IIIT Bangalore, India
- Usable and Useful Artificial Intelligence: Explorations in Healthcare - Dr. Ann Blandford, University College London (UCL), UK
- Empathic AI for Well-being Support: Challenges, Opportunities and Consequences - Dr. Aladdin Ayesh, University of Aberdeen, UK
- Human-AI Interfaces and Enhanced Communications with Intelligent Hearing Devices - Dr. Achin Bhowmik, CTO, Starkey Hearing, and Adjunct Professor, Stanford University, USA
Hosts
- Sravan Kanukolanu
Organizing Chair, ISCI 2025
IEEE CIS – Santa Clara Valley Chapter
Email: kanukolanu.ds@gmail.com
- Co-sponsored by Vishnu S. Pendyala, San Jose State University
Speakers
Dr. Ann Blandford of University College London
Usable and Useful Artificial Intelligence: Explorations in Healthcare
With the growing availability of data and algorithms, Artificial Intelligence (AI) and machine learning over large datasets pervade many aspects of life. This includes novel healthcare technologies. However, healthcare is safety-critical, and there are many challenges to ensuring that new technologies are reliable, usable, useful and effective. In this talk, I will outline some of these challenges and present the Measurement, Algorithm, Presentation (MAP) model for reasoning about AI systems, drawing on examples from three projects that have developed advanced AI solutions to support clinical decision making.
Biography:
Dr. Blandford is a Professor of Human–Computer Interaction in the Department of Computer Science at University College London (UCL). A Fellow of the British Computer Society and Member of the ACM CHI Academy, she is recognized internationally for her contributions to the field, including receiving the IFIP TC13 Pioneer award.
Dr. Blandford holds a Mathematics degree from Cambridge University and a PhD in Artificial Intelligence and Education from the Open University. She began her career as a software engineer before transitioning to academia, holding positions at Cambridge's Applied Psychology Unit and Middlesex University. She joined UCL in 2002 as a Senior Lecturer, becoming Professor in 2005.
Dr. Blandford's research centers on designing effective interactive health technologies for citizens and clinicians. Her distinctive expertise lies in evaluating complex systems "in the wild," studying human error and information use within real working contexts rather than controlled laboratory settings. She takes a pragmatic approach that embraces the inherent "messiness" of real-world healthcare environments, examining both clinician and patient experiences to design technologies that improve care while empowering users and maintaining safety. With her interdisciplinary background spanning mathematics, artificial intelligence, and education, she addresses the fundamental challenge of creating health technologies that genuinely work for the people who depend on them.
Dr. Krishna Pillutla of IIT Madras
Toward Realizing User-Level Differential Privacy at Scale
While in-domain user data is becoming increasingly crucial to unlocking the full potential of AI models, the use of such data comes at the cost of an increased risk of leaking information and compromising the privacy of individuals. In this talk, I'll present some building blocks required to provide provable protection (via differential privacy) to all of the (possibly related) data contributed by any individual (user) to the dataset. We'll address why we need such protections, how one can ensure them, and how one can test or audit such privacy guarantees.
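For context, user-level differential privacy strengthens the usual record-level guarantee so that the protected unit is a user's entire contribution rather than a single record. A standard formulation, given here only for background (the talk's precise setup may differ): a randomized mechanism $M$ satisfies $(\varepsilon, \delta)$ user-level differential privacy if, for every pair of datasets $D$ and $D'$ that differ only in the data of one user, and every set of outputs $S$,

$$\Pr[M(D) \in S] \le e^{\varepsilon} \, \Pr[M(D') \in S] + \delta.$$

Because $D$ and $D'$ may differ in many (possibly related) records, the bound limits what can be inferred about everything an individual contributed, not just one record.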
Biography:
Dr. Krishna Pillutla is an assistant professor at the Wadhwani School of Data Science and AI at IIT Madras, India. Previously, he was a visiting researcher (postdoc) on the Federated Learning team at Google Research. He obtained his Ph.D. at the University of Washington, where he was advised by Zaid Harchaoui and Sham Kakade. Before that, he received his M.S. from Carnegie Mellon University and his B.Tech from IIT Bombay.
Krishna's research has been recognized with a NeurIPS Outstanding Paper Award (2021), a J.P. Morgan Ph.D. Fellowship (2019-20), and two American Statistical Association (ASA) Student Paper Award honorable mentions.
Dr. Shrisha Rao of IIIT Bangalore
Towards Logical Foundations for AI Ethics
This presentation examines the widening gap between high-level ethical principles and their practical application in AI systems. Current AI ethics approaches tend to be predominantly quantitative and data-driven, often neglecting the qualitative and philosophical dimensions essential to ethical reasoning. Addressing this limitation requires the development of formal, logic-based frameworks that enable organizations to proactively identify and mitigate ethical risks while ensuring compliance with regulatory standards. Such frameworks can also enhance AI accountability, ultimately paving the way for more trustworthy and responsible AI systems.
Dr. Rao will present this talk along with his student T. V. Priya.
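As background on what a formal, logic-based framework can look like, one common ingredient is deontic logic, which extends classical logic with obligation and permission operators; the toy rule below is only an illustration and is not drawn from the talk. Writing $O(\varphi)$ for "$\varphi$ is obligatory" and defining $F(\varphi) \equiv O(\neg \varphi)$ ("$\varphi$ is forbidden"), a simple ethical constraint on an AI system's candidate action $a$ might be encoded as

$$\mathit{harm}(a) \rightarrow F(\mathit{do}(a)),$$

that is, any action established to cause harm is forbidden. Rules in this form can be checked mechanically against a system's planned actions, which is what makes proactive risk identification and compliance auditing tractable.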
Biography:
Dr. Shrisha Rao received his Ph.D. in computer science from the University of Iowa, and before that his M.S. in logic and computation from Carnegie Mellon University. He is a full professor at IIIT-Bangalore. Dr. Rao was an ACM Distinguished Speaker from 2015 to 2021, and is a Senior Member of the IEEE. He is also a life member of the American Mathematical Society and the Computer Society of India.
His research focuses on artificial intelligence and resource management in complex systems, with a particular emphasis on ethical AI applications. He uses agent-based modeling and AI techniques to investigate how individual traits and behaviors—including cognitive biases, prejudices, and ethical conduct—affect social systems and decision-making processes. His work spans computational sustainability, examining energy-efficient systems and intelligent transportation, as well as bioinformatics applications in disease research.
Dr. Aladdin Ayesh of the University of Aberdeen
Empathic AI for Well-being Support: Challenges, Opportunities and Consequences
In this talk, we will look at the challenges and opportunities afforded by recent developments in Empathic AI, especially in the context of well-being, through the lens of Responsible AI views and concerns. These concerns are manifold, emerging from technical challenges, potential ethical and legal consequences, environmental and societal impact, and, of course, the unknown potential of autonomous systems' evolution. We will limit our discourse to the potential impact on the human user.
In examining the development of Empathic AI and its impact on the human user's well-being, we will reference some of the recent IEEE standards in the relevant areas. In particular, IEEE 7010 and IEEE 7014 address many of the issues we will cover in this talk. Others, especially in the areas of brain-computer and neural interfaces (e.g., IEEE P2731 and P2794), cover the more technical aspects of the underlying technologies. A good understanding of these underlying technologies is necessary to appreciate the potential impact and the long-term implications. Generative AI and the recent advances in chatbots are also good examples of emerging yet highly accessible relevant technologies. In this talk, we will cover the underlying and related AI technologies briefly but sufficiently to appreciate the challenges and to put the opportunities and consequences in a clear context for follow-up studies.
Biography:
Professor Aladdin Ayesh, MSc (Essex, 1996), PhD (LJMU, 2000), holds a Personal Chair in Artificial Intelligence at the University of Aberdeen, UK. Prior to his current role, he was a Professor of Artificial Intelligence at De Montfort University. His research focuses on computational cognition, machine learning, and explainable AI. He has explored cognitive architectures, emotion modelling and recognition, and applied AI using a variety of machine learning techniques, including statistical approaches (e.g., Markov models and Bayesian networks), logic-based and symbolic approaches (e.g., modal and fuzzy logics), and neural approaches (e.g., self-organizing maps and deep learning classifiers). He applies these techniques in three primary areas: health informatics, sustainable development, and data privacy. Prof. Ayesh has over 150 publications, has supervised 24 PhD students to successful completion, and has participated in 26 funded projects. He is a founding editor of four international journals and has chaired several international conferences. He is also a member of two IEEE technical committees and several IEEE Standards working groups, and a contributor to IEEE 7010-2020, the IEEE Recommended Practice for Assessing the Impact of Autonomous and Intelligent Systems on Human Well-Being.
Dr. Achin Bhowmik of Starkey Hearing and Stanford University
Human-AI Interfaces and Enhanced Communications with Intelligent Hearing Devices
As artificial intelligence becomes increasingly woven into daily life, the ear is emerging as a natural and powerful interface between humans and AI. Advanced hearing devices, once limited to sound amplification, are now evolving into multifunctional, AI-driven platforms for communication and interaction. This talk presents how deep neural networks, embedded sensors, and ultra-low-power computing are transforming hearing devices into intelligent in-ear systems that mediate human-AI interaction. Beyond enhancing speech understanding for those with hearing loss, these devices enable seamless communication through real-time translation, transcription, and personalized voice enhancement. They further extend human-AI collaboration by monitoring health, detecting events such as falls, and serving as always-available personal AI assistants. By positioning hearing devices as discreet, always-worn gateways to AI, we open new possibilities for augmented communication, continuous sensing, and natural human-AI interaction. This vision highlights a paradigm shift: advanced hearing technology is no longer just about restoring hearing; it is about creating a new class of human-AI interfaces that enhance and extend human capabilities.
Biography:
Dr. Achin Bhowmik is the Chief Technology Officer and Executive Vice President of Engineering at Starkey, a global leader in hearing technology. He leads the company’s efforts to transform hearing aids into multifunctional AI-powered devices that enhance communication and monitor health.
He also serves as an adjunct professor at Stanford University, where he advises research and lectures on sensory augmentation and intelligent systems. He is also an affiliate faculty member of the Stanford Institute for Human-Centered Artificial Intelligence and the Wu Tsai Neurosciences Institute. Previously, Dr. Bhowmik was Vice President and General Manager of Perceptual Computing at Intel, where he led pioneering work in 3D sensing, computer vision, and interactive devices.
He is a Fellow of IEEE, SID, AAIA, and AIIA, and serves on several boards, including Mojo Vision, Astranu, and the National Captioning Institute. He has authored over 200 publications, including three books, and holds over 80 patents worldwide. His work has been recognized with numerous honors, including TIME’s Best Inventions, the Red Dot Design Award, and the Artificial Intelligence Excellence Award.
Agenda
Conference Schedule (PST)
8:00 AM - 8:05 AM
Opening Remarks
8:05 AM - 9:05 AM
Toward Realizing User-Level Differential Privacy at Scale
Dr. Krishna Pillutla, IIT Madras, India
9:05 AM - 10:05 AM
Towards Logical Foundations for AI Ethics
Dr. Shrisha Rao & T.V. Priya, IIIT Bangalore, India
10:05 AM - 11:05 AM
Usable and Useful Artificial Intelligence: Explorations in Healthcare
Dr. Ann Blandford, University College London (UCL), UK
11:05 AM - 12:05 PM
Empathic AI for Well-being Support: Challenges, Opportunities and Consequences
Dr. Aladdin Ayesh, University of Aberdeen, UK
12:05 PM - 1:00 PM
Human-AI Interfaces and Enhanced Communications with Intelligent Hearing Devices
Dr. Achin Bhowmik, CTO, Starkey Hearing, and Adjunct Professor, Stanford University, USA
By registering for this event, you agree that IEEE and the organizers are not liable to you for any loss, damage, injury, or any incidental, indirect, special, consequential, or economic loss or damage (including loss of opportunity, exemplary or punitive damages). The event will be recorded and will be made available for public viewing.