Helpful Assistants, Romantic Partners, or Con Artists? Part Two

CCC supported three scientific sessions at this year’s AAAS Annual Conference, and in case you weren’t able to attend in person, we will be recapping each session. Today, we will summarize the highlights of the Q&A portion of the session, “Large Language Models: Helpful Assistants, Romantic Partners or Con Artists?” This panel, moderated by Dr. Maria Gini, CCC Council Member and Computer Science & Engineering professor at the University of Minnesota, featured Dr. Ece Kamar, Managing Director of AI Frontiers at Microsoft Research, Dr. Hal Daumé III, Computer Science professor at the University of Maryland, and Dr. Jonathan May, Computer Science professor at the University of Southern California’s Information Sciences Institute.

Below is a summary of the fascinating Q&A portion of the “Large Language Models: Helpful Assistants, Romantic Partners or Con Artists?” panel. Is AI capable of love? What kinds of impacts might these models have on kids? How do the United States’ AI capabilities stack up? Find out below:

Q: When deploying AI language models in multilingual and multicultural contexts, what practices should we follow?

Dr. May: In developing technology and lowering barriers, we should make it easier for people to do what they want to do. That means what everyone wants to do, not just what I want to do. It’s great that AI can focus on me, but we should pay attention to the rest of the world in general.

Q: Some of these general issues have been brought up before, and it seems like the community isn’t going to resolve them on its own. I’m wondering if any of you have ideas about how to move these conversations into action?

Dr. Kamar: There are roles for a lot of different parties to play. Evaluation matters a lot in representing different cultures and populations. When data sets lack diversity in world representation, the resulting systems aren’t representative. A lot of work needs to be done in forming evaluation best practices, regulations, and compliance measures. The White House has made commitments, and the Blueprint for an AI Bill of Rights is a starting point. Processes have been implemented across industry, with many great minds working together (not perfect, but generalizing across industry, there is potential). There are meetings happening to reach convergence on what will begin as standards and possibly become regulation in the future. How do we do evaluations, safety analyses, and so on? None of these conversations have the diversity that needs to be in the room. Think about who needs to be in the room when decisions are being made.

Dr. Daumé: I think when people talk about regulation, especially in AI, everyone thinks of punitive regulation. But regulation can also be incentivizing. Policymakers and funders such as the NSF could promote the development of tools that help us as a nation and as a world.

Q: Funding for AI in the US is way behind other places in the world. The new investment by the NSF is twenty-million-something dollars, which is peanuts compared to industry investments. The federal government has been releasing reports and studies for years, and the conclusion is that the US has to get going. I love Ece’s phase-change analogy; the numbers keep growing toward that thermodynamic limit. If we want open AI, who is going to pay for it? There is not enough money. What are your suggestions? Open AI? But we don’t even have open-access publishing. Would you recommend to the president that we not have legislation?

Dr. May: I think there is money; someone once observed to me that physicists have managed to convince the government to spin particles around, but we haven’t been able to divert that kind of funding to us.

Dr. Kamar: The reason the companies building these models are getting these results is the centralization of resources; there is a lot you get from scale. We should think about how to centralize investments in academia so that we get a shared resource instead of lots of separate models. But we are also seeing that it is not only about scale. It’s not something we have to fix right now, but the current architecture is not great. Having good AI capabilities should not just be about more money and more power.

Q: Regarding overrepresentation bias in answers: do we know where it comes from? I’m a math guy, and my thought is that it could be a compounding of rounding errors adding bias. If the training data had equal representation, I’d imagine the model would output equal representation, or would the bias still be there?

Dr. May: A lot of it comes down to spiking functions. The softmax is an important part of training, and the highest-scoring option wants to be #1, so whatever is most common gets amplified. It’s not as though there is some perfectly unbiased language output; there will always be some biases. What we want is to minimize harm toward people, and a lot of the time we don’t recognize those harms. Deployment without understanding is a problem.
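To make the sharpening effect Dr. May alludes to concrete, here is a minimal sketch in Python (the logit values are hypothetical, chosen only for illustration): a softmax turns raw scores into probabilities, and lowering the temperature, or simply taking the argmax during decoding, amplifies whichever option scores highest, so a slight majority in the training data can come to dominate the output.

```python
import math

def softmax(logits, temperature=1.0):
    """Turn raw scores into probabilities; lower temperature sharpens the peak."""
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for three equally valid answers, one slightly favored.
logits = [2.0, 1.8, 1.5]

print(softmax(logits))                    # ~[0.41, 0.34, 0.25]: modest preference
print(softmax(logits, temperature=0.3))   # ~[0.59, 0.30, 0.11]: sharpened

# Greedy (argmax) decoding always picks the top option, so a small majority
# in the training data can become the only answer the model ever gives.
```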

Dr. Daumé: One of the challenges with these models is that there are no narrow AI models anymore. They say they can do anything, so it’s hard to test everything.

Q: You mentioned AI being a tool or a replacement, which way do you see it going?

Dr. Daumé: There is more money going into replacement.

Q: The title mentioned romantic AI. I want to know more about that.

Dr. May: There is not enough intent in models for them to be viable romantic replacements, but they are as good as humans at recognizing patterns even when they don’t exist.

Dr. Kamar: I advise you not to think about AI as what it is right now. Try to project into the future: imagine that in a few years these systems will be personalized to you. What is the relationship you will have with that system?

Dr. May: But will it love you?

Dr. Kamar: It will tell you it loves you.

Dr. May: But is that enough?

Q: I want to hear advice for people not in the field of AI. How can we engage with these tools? What should we know?

Dr. Daumé: At the University of Maryland, we are having these conversations a lot. It’s easy for me to say journalism will be different in 5 years, and other fields too. It’s uncomfortable to say that the role of professor will be different in 5 years, but it will. I have colleagues who use various LLM plug-ins for proposals and papers; it is already happening. I regularly have exam questions written by these tools, though I have to check them for accuracy. Writing exam questions doesn’t bring me joy, so AI can take it off my plate. In higher education, we have to think about it more: how is it transforming our jobs? There are a lot of discussions going on at universities, but not a lot of pooling of resources.

Q: How do you think AI will be judged in the future when it comes to military applications? There’s been no mention of military applications in this session, and if I read people even halfway correctly, there is a divergence of opinion on that topic.

Dr. May: The military is broad, and a lot of my work is sponsored by the Department of Defense. It’s hard to answer specifically, but in general the department (and I’m not speaking for them) appears to prioritize the safety and security of the US, and it will continue to do that, leveraging LLMs and AI to help keep the US safe.

Dr. Kamar: We also need to talk about dual use. Take the military-adjacent work going on in biology or cybersecurity: we can take the very promising tools we have right now and use them for good, because we want secure systems and new drugs. But with every good use there is a bad use. What are the use cases where we don’t want AI to be used? With open-source releases, people can replicate these models. How do we keep people from doing harm in those cases?

Q: When interacting with language models, adults understand they aren’t alive or self-aware, but what about several generations from now, kids who have had them for as long as they can remember socializing? Imagine they have a tutor or teacher that is fully AI, a system embedded with an instructor. They could form a bond with the instructor, think they have a great relationship, and then the program gets deleted. What is the child psychology of social-emotional bonds with non-person entities?

Dr. Kamar: We need research, interdisciplinary research, and we need it quickly. In 5 years we might get these answers, but in that time AI may become a big part of the life of my 10-year-old. Your question is extremely important. There is research showing that even innocent-seeming systems may have backdoors. We need security experts and child-development specialists having those conversations today.

Dr. Daumé: I don’t know if anyone remembers surveillance Barbie; there is a big privacy issue there, but the social issue is even more interesting. The responses were tuned to be overly positive. A child would say something like “I’m mad because Sally didn’t play with me,” and it would not give socially appropriate suggestions. I am worried about overly positive agents, because positivity is not always the right answer.

Thank you so much for reading, and stay tuned for the recap of our third and final panel at AAAS 2024.
