If you see a new YouTube channel with a plain-sounding name like "NatureView" or "BrightScience" and what looks like a tempting video on a specific educational topic ("Most Active Volcanoes," "Incredible Carnivorous Plants"),
there is a 50/50 chance it will be a generated voice over stock footage, with a script written by GPT.
I am now avoiding videos if I don't recognize the creator, or don't see signs it was made by a person.
So much spam!
@futurebird For that matter, I do wish YouTube videos made by humans would cite their sources. But at least humans know what their sources were! The "AI" throws away that information completely (and I believe it's designed to do so on purpose… if they'd designed it to know where its content came from, someone might try to make them pay royalties…)
@mcc @futurebird
Maybe you're more of an expert in this area, but my understanding was that sources would be difficult (though not impossible) for AI to fully cite.
From what I glean, AI filters massive scads of information and sort of weights and averages it to get a plausible-sounding answer.
So maybe you would get pages and pages of "sources" with tiny percentages listed for each entry, indicating how much they contributed to the result?
@PTR_K @futurebird Perhaps it is impractical with the type of system that is popular now. But perhaps someone who had traceability as a goal would have designed a different type of system?
@mcc @PTR_K @futurebird Exactly! When LLM developers say that “Well, LLMs’ underlying tech means they can’t verify the information they provide”, my response is “Then they’re a terrible tool, and shouldn’t use that tech for that purpose.” Don’t push AI on us if you know it doesn’t work — get us tools that *do* work.
LLMs might be great *front-ends* that allow natural conversation with separate actual expert systems. But they aren’t experts on their own.
@michaelgemar @mcc @futurebird
Not sure if this is exactly part of the same issue, but I've heard there is actually a "black box problem" for AI.
Basically: by what exact process did the AI make its decisions, or what specific aspects of the data presented did it latch onto, in order to produce its output in any given case?
This seems to be a problem for researchers themselves and they're trying to come up with ways to figure it out.
@PTR_K @michaelgemar @mcc @futurebird It's actually worse than that.. You can ask the AI for its reasoning easily enough. But it can't actually answer because the way they work internally doesn't retain that information. Instead they will just generate a new answer to that question, based only on their previous answer. A generative predictor actually has no memory, at all, other than its own output.
@Qybat @PTR_K @michaelgemar @mcc
Whoa. It's obvious to me that asking something like ChatGPT "Where did you get that answer?" will only produce text that sounds like what GPT's matrices say ought to be the response to that question... and it couldn't possibly be an actual answer to the question.
But if many or most people don't see this? It shows a deep, fundamental misunderstanding of what these tools are doing... and might explain why people keep trying to get them to do things they can't.
@futurebird @Qybat @PTR_K @michaelgemar @mcc yes that's absolutely it!
I think the trouble is that, for us humans, language is our interface to the world. So much of our understanding of reality is communicated through language that it's kind of like our single point of failure, the perfect hack. We can't conceive of something being able to say all those clever words without actually being smart, because words are also the only way we have of telling whether other people are smart.
@nottrobin @mcc @michaelgemar @PTR_K @Qybat @futurebird The way I describe it, LLMs are "intelligent" but not necessarily "smart", with both terms being complete junk to begin with, which is why people argue over them constantly.
They possess certain cognitive abilities around language processing, but not a full set of cognitive abilities, and many fall in the "sub-human" or "lower end of human" range (a popular usage builds on one ability it is good at, executive function, which gets it used by a lot of ADHD/autistic people who have an impairment in our executive functioning).
I definitely agree regardless that they're either applied poorly or presented poorly in most cases (i.e. applied poorly meaning cases like customer-service LLMs that get companies sued, and presented poorly meaning cases like search, where people treat it as authoritative rather than supplementary).
And the programming assistant side gets wildly misrepresented (as someone who happily uses AI as a programming assistant, but never in the ways people seem to think it gets used...)
I'm afraid I just disagree.
LLMs aren't a "lower end of human" intelligence. They're completely different in kind.
Someone's written a complex formal model of language and run it over insanely huge amounts of text to calculate millions of statistical data points about what text comes next. Then they wrote a program to receive a blob of text input and use the statistical graph to generate a blob of text in response.
Intelligence means many things, but this is none of them.
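As a toy illustration of that "statistics about what text comes next" idea (not how real LLMs are built; they use learned neural networks over tokens rather than raw counts, but the task of picking a likely continuation is the same):

```python
# Toy next-word model: count which word follows which, then generate text
# by repeatedly sampling a likely continuation. Fluent-looking output,
# no model of cats, rugs, or anything else behind it.
import random
from collections import defaultdict, Counter

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

next_counts = defaultdict(Counter)  # word -> Counter of following words
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def continue_text(word, length=8):
    """Generate text by repeatedly sampling a likely next word."""
    out = [word]
    for _ in range(length):
        options = next_counts.get(out[-1])
        if not options:
            break
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(continue_text("the"))  # e.g. "the cat sat on the rug . the dog"
```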
@shiri @futurebird There is nothing like "understanding" going on. That's the language trick I was talking about.
When it says "I'm sorry that was my mistake", it's just regurgitating what some humans have said before in a slightly different order.
When you ask it what it's like to be an AI, it regurgitates an amalgam of the sci fi & fan fic people have written about what an AI might say.
It's what Emily Bender and Timnit Gebru called a stochastic parrot.
I remember being surprised to learn that some people never think without hearing words, a kind of narrated version of their thoughts.
My thoughts don't work like that all the time; they don't always have narration.
It seems to vary from person to person. I wonder if people who always hear their thoughts as words are more likely to see an LLM as "thinking"?
@futurebird @nottrobin @shiri Feynman once said in a talk that when he was young, he believed that all thoughts were words. His friend heard this, and asked him something like "Oh yeah? Then what words does your brain say when you imagine that crazy shape of the camshaft in your dad's car?"
Feynman then realised he'd been overstating that point.
@spacehobo @futurebird @shiri yeah I relate to this.
But from what @futurebird said, it sounds like she thinks in far fewer words than I do. Although it's impossible to be sure.
I actually sometimes think out loud (or talk to myself). I suspect @futurebird doesn't, but do let me know.
@nottrobin @spacehobo @futurebird @shiri
In my case, I feel as if using words is a huge "translation step". I have this image in my head of what I want to say or write down, but then have to explain parts of the image in text.
Reading text is the same thing backwards.
It's like a wooden cube lying on a sandy beach: the wind comes from a certain direction and deposits sand in the wind shadow of the cube, slowly building it up until I can make out the form the text writer (likely? maybe?) intended for me to see.
@wakame @spacehobo @futurebird @shiri This is definitely true of me too, but it's also true that I often come up with these incredible articulations of things in my head, in words, and then for some reason I can never turn them into good words in the real world. I don't quite understand what's with that.
@nottrobin @spacehobo @futurebird @shiri
For me, it is often that words or expressions have a certain "taste" or "direction" attached. So I want to build a good argument, but then only find parts that taste like citrus, so in the end a few paragraphs come out that make the reader think "why are you so obsessed with citrus fruits?"
I am not, but the text building blocks I used leave that impression (and thereby might mislead the reader).