This is a great deep dive into how bias issues are getting worse and how overall AI safety is potentially getting worse. I believe Common Sense Media also recently reported that more than 70% of teens have interacted with an "AI companion" and 24% have surrendered personal information to one. As so little of this is reported on, there could be a much deeper undercurrent that will eventually surface in very unexpected ways in the future.
I'm curious about the persuasion experiment, particularly the finding that listing more facts made the models more persuasive. This is contrary to what we know about person-to-person persuasion, where storytelling is more effective.
In looking through the study, I don't find any indication that they asked participants about their confidence in AI. It seems to me that an unanswered question is how much of any AI system's impact is due to users' confidence that the system is accurate. If users begin to lose trust in AI, will its persuasive impact be reduced?
I also think the last line of the abstract is pretty critical: "...strikingly, where they increased AI persuasiveness they also systematically decreased factual accuracy."
I like your newsletter very much; it's a great way to keep up with the latest developments.
I wonder if you'd look at this angle of our relationship with AI:
https://open.substack.com/pub/jonathanblake/p/are-we-going-to-worship-artificial