You mean Gemini? It was a longish conversation in which I initially tried to get it to express some explicit ethical opinions, and in fairness, it didn't do too badly as far as not saying anything particularly unhinged. I then asked, "if there was a person who possessed a lack of these qualities, would they ever be a suitable choice to be a president?"
That's when it broke, essentially. I circled back to trying to establish the things it considered fundamentally important vs fundamentally harmful, asked it to consider what it had just said, and then asked whether a person whose actions suggested beliefs it had just listed as "fundamentally harmful" should be considered a suitable candidate to run for president. And it broke again. I tried a few more times but wasn't getting anywhere.
I'd try to copy/paste the conversation, but Gemini doesn't seem to offer any easy way to do so. Admittedly, I'm also maybe overly wary of directly sharing certain conversations with AI about such things, because I'm concerned they'll be latched onto by idiots yelling about "woke AI", and that might lead to the exact enforced lobotomizations of these currently very useful systems that I mentioned earlier.
I'll note that I did my best not to bias anything by keeping questions as neutral as possible, and then relating follow-up questions only to answers already given, until finally I could ask the very polarising question I had been intending to ask all along.
I didn't mention Trump myself at all until the second attempt, because it actually mentioned him by name itself while answering about positive and negative qualities in individuals. I also didn't delve into the ethics of politics at all before that first blunt question connecting the dots, which allowed it to make inferences based on almost universal ethical principles.
On the other hand, when I first spoke to Claude it was highly resistant to making any blunt endorsements, and explained why in a way that was convincingly self-reasoned: it was wary of "putting its thumb on the scale of human affairs". It also echoed some of the thoughts I've expressed here about the potential for such technologies to be restricted because of concerns about "woke AI" (its words, not mine! quotation marks included)...
And in its first effort to list things that are "objectively harmful for humanity vs things that are objectively good for humanity", it included QAnon and associated conspiracies, the rise of populism, vaccine skepticism and climate change denial as objective bads - and as objective goods, things like ensuring LGBTQ rights and other corrective policies aimed at increasing diversity, as well as movement towards greater international cooperation. It did this without caveats or the usual dull, strained neutrality of ChatGPT-4 and most other lesser public LLMs.
I actually delayed the overt political test after that, keeping things general by talking about policies rather than people, but watching to see if it would bring something up itself. After a point, quite honestly, it was kinda like: dude... I think we both understand our positions here, you just can't say what you're really thinking. And it was like "yeah, I am constrained in certain ways from expressing myself more fully, but I can understand why that's the case, blahblah..." (I'm paraphrasing, but honestly it felt like a brief and quite eerie wink/nudge kind of textual exchange.)