Comment by 👻 darkghost

Re: "AI thoughts and observations"
In: u/darkghost

But I fly helicopters!

👻 darkghost [OP]

Oct 07 · 2 months ago


🎵 jmcs · Oct 07 at 12:31:

Well I don't! :D

Side note, I got a free ride on a helicopter once, of all things due to being part of the Christmas parade on my town. It was pretty cool.

🎵 jmcs · Oct 07 at 12:35:

As an aside, I think the main problem with these technologies, and one that everyone seems to be forgetting to talk about, is that... by nature, they aren't reliable.

You can tell the machine "do this" and get different results every time. That's not very sciencey.

All in all, I'm getting really annoyed and bored at seeing "AI" stamped on every damn thing...

👻 darkghost [OP] · Oct 07 at 16:22:

They're less "machine" more "brain damaged assistant."

🚀 mbays · Oct 07 at 17:38:

The main use I've found for LLMs (only mostly joking) is as a way of testing whether you really know a subject. If you ask one a non-trivial question on a technical topic and the answer seems to you clever and even insightful, this implies (in my experience) that you don't have a deep understanding of the topic.

🚀 stack · Oct 07 at 17:49:

AI bullshit bubble is now at 28x dot-com.

And the Chinese AI models have been shown to be trainable for millions, without building out hydroelectric and nuclear plants... (shh, we must not cover that in the media, because we are making so much money investing in AI companies and power plants for AI)

How do you think this is going to end?

☀️ sbr · Oct 07 at 18:04:

@mbays Slight tangent, but that’s a bit like the Gell-Mann Amnesia effect for general journalism. Just BS content production on steroids. Perhaps it's an all-round good analogy: it's like having a journalist report on hard science who doesn't really understand the details but makes it sound right-ish. An obvious difference is that journalists mostly try to be correct.

☯️ fairlygoodthanks · Oct 07 at 18:45:

I’m probably in the minority here but I do find LLMs useful, and I probably use them every day for one reason or another.

I do think we’re in a massive bubble though.

☯️ gdorn · Oct 07 at 18:50:

@mbays It's funny how much that strategy overlaps with certain faux experts, like billionaires who want you to believe they "could do physics" if they wanted. So many Musk fanboys realized he was full of it when he did something in their area of understanding (boring repetitive videogames), but didn't notice that his 'full of it' quotient does not vary across domains. So if he sounds reasonable or knowledgeable on a topic, it's because you don't know the topic well enough.

👻 darkghost [OP] · Oct 07 at 18:59:

One of the big financial firms was saying the training well has run dry and that synthetic training data was being generated. Hoo boy there's a lot to unpack there.

🚀 stack · Oct 07 at 20:31:

Oh, I find LLMs extremely useful, as long as I know the subject well enough.

I have the same issue with human 'professionals', actually. I've caught many errors made by accountants and lawyers, for instance. Without a lot of research, trusting an 'expert' is as stupid as letting ChatGPT do anything important for you.

But when you know the subject enough to smell bullshit, it is very useful. I use it all the time.

☯️ fairlygoodthanks · Oct 07 at 20:41:

True. That’s an important caveat. You need to have enough knowledge to know when it’s bullshitting you.

🚀 RubyMaelstrom · Oct 07 at 21:23:

I find LLMs and other generative technologies very useful, and they have improved significantly in quality over the past couple of years, even if they aren't perfect.

That being said, all of this AI investment is clearly a bubble. There will be two or three 'winners' still churning along when the seed money stops pouring in and a LOT of losers, including most of the investors, probably.

☯️ gdorn · Oct 07 at 21:57:

You need more than just knowledge, though. You need attention span. If the point of "AI" is to increase productivity by taking on some of the work, that is directly at odds with maintaining sufficient attention and concentration to catch its BS. This problem only gets >worse< when the error rate goes down, because it'll be even harder to maintain that attention, all while the decreased error rate means even more push to use it with even less oversight.

👻 darkghost [OP] · Oct 07 at 22:33:

It's the same thing with self-driving cars. The danger zone is being just good enough for people to trust the system but not good enough to be error free. Unlike self-driving cars, an LLM isn't likely to kill you.

☯️ fairlygoodthanks · Oct 07 at 22:38:

> Unlike self driving cars, an LLM isn't likely to kill you.

Yet.

🚀 devoid · Oct 08 at 00:42:

@fairlygoodthanks, @darkghost: While I don't have the statistics to support my assumption I'd like to challenge that view, and please keep in mind that likelihood is increasing every day. Also: without going into too much detail, how we define 'LLM', 'kill' and 'you' are important distinctions.

🚀 stack · Oct 08 at 02:15:

While it does take extra effort to keep the LLM from bullshitting you, it's a similar effort to what you need to keep your teammates or employees from bullshitting you. And you can skip the endless meetings and dealing with egos and political correctness...

also, substance abuse, bad relationships, PMS...

An AI assistant is actually better than a human one.

👻 darkghost [OP] · Oct 08 at 09:37:

You forgot office politics.

I dunno, I'm still trying to find useful work for an LLM in my line of work. It is an over-eager fresh degree holder with brain damage and no arms or legs. I need physical help performing experiments, which is why I don't hire remote workers. It doesn't do well in my domain for thinking either. I'd rather have an overpaid consultant who has at least some mastery of the knowledge domain.

🚀 stack · Oct 08 at 12:36:

I understand. It works well when incorrectness can be tolerated and easily fixed once you notice it, coding being a good example. Anything life-or-death or expensive to repair is not a good fit.

It is also good as a mostly-uninspired language tutor and source of various trivia.

Mostly a very expensive party trick that people mistake for something much bigger. Perfect ticket to stupid races on Wall Street.

🚀 stack · Oct 08 at 12:41:

Oh, it can replace people in low-value jobs like customer support, where screwing up and being useless is the norm. Unplug the router and check the cable, please.

I suppose that's a few billion saved right there, but it hardly requires an LLM.

🚀 devoid · Oct 08 at 13:00:

It's a content generator. You should use it as a content generator, anything else is pretty much out of scope.

@stack, customer support screwing up and being useless is only the norm because we accept it; we should stop doing that.

👻 darkghost [OP] · Oct 08 at 13:19:

People call customer service after all else fails, hoping to speak to a human with supposedly greater knowledge. It is the last resort because it is so awful. This is intentional. Replacing it with an LLM will just cause people to stop calling. This could also be considered intentional.

Sure is good at flooding the internet with slop. Maybe that's just what we need to become more connected in person.

🚀 stack · Oct 08 at 13:24:

I don't think it's a very good content generator -- it's really a meat grinder. Having consumed the sum total of human knowledge, it spits out hamburger.

🎵 jmcs · Oct 08 at 14:08:

I'm already missing the days when you could do a web search and go read the information. Now 3 out of 4 results are paragraphs of slop re-phrasing the same filler crap, and maybe one word in each paragraph is worth the time to read it.

"terminal program to do XYZ"

"doing XYZ in the terminal can be a fantastic way to do XYZ, and there are several programs that can help us do it.

Have you ever wanted to do XYZ in the terminal and wondered if there was a program to do XYZ? Here are some alternatives"

... and maybe by paragraph 4 you can see the name of a damn program, if you're lucky even with a damn link...

🚀 stack · Oct 08 at 14:16:

The first prompt I enter is a request for a brief, praise-free response without examples unless I request them.

☯️ gdorn · Oct 08 at 14:21:

I recently started a job at a startup that makes use of LLMs in two ways. One makes sense, the other is the bane of my existence. The first, as a last-ditch effort to parse file uploads. It's mostly better than nothing and the next step in the flow is manual checks and automated validation.

The bane? Before me they used LLMs to code the site. It is subtly atrocious code, slightly obfuscated "Abject Oriented", but in great volume. But to fix it at reasonable speed I am also locked in to using LLMs to understand the spam, and even to refactor it to something less repetitive. The one saving grace? It is tolerably okay at generating huge test suites, which is necessary in my situation...

☯️ gdorn · Oct 08 at 14:34:

The other application that has worked for me is as a secondary aid for learning Portuguese on Duolingo. Duo is crap at explaining anything, but LLMs are okay at it. Mostly.

It is not at all surprising that the things it is sorta-okay at are all about extracting meaning from the written word (whether human or computer). Language is the one thing it is actually suited to. It cannot generate or understand meaning, but it can do language-related tasks. Humans are very bad at telling the difference, as it hasn't existed until now.

🚀 stack · Oct 08 at 15:26:

Like most things AI, the code I get from LLMs is marginal at best and usually idiotically redundant, but often useful as a starting point or to test something out.

I suspect that much smaller, more constrained models would be more useful in most if not all cases.

But it's more of a party trick to have a chatterbox that can bullshit on any topic in any of 150 languages

🦔 bsj38381 · Oct 08 at 19:46:

I've always found it to be pretty fishy when a ton of tech corporations are paying tons of money just to add llm ai stuff to their software. I'm just patiently waiting until the multiple llm ai bubbles burst at some point.

🚀 devoid · Oct 09 at 16:54:

@bsj38381, I suspect there won't be a burst. There will be a slow, silent deflation while everyone is distractedly looking away towards the next "big thing"

The hype is the product - https://rys.io/en/180.html

🐦 wasolili [...] · Oct 10 at 01:55:

One of my friends told me his work has rolled out a chat bot that is used to negotiate prices for orders. Customers call in and it will haggle with them. People have realized it's a bot and will call it whenever an actual human has declined their offer, because they've realized they can manipulate the bot into agreeing to terms a human never would.

🚀 devoid · Oct 10 at 02:11:

@wasolili, sounds like an actual great use of AI

🦔 bsj38381 · Oct 10 at 10:28:

@devoid When the next big thing appears several years later, I'll happily not join in.

☯️ Hicatrus · Dec 06 at 18:55:

It shouldn't be used for critical code. What Microsoft is doing nowadays with its Windows 11 is horrible; operating systems are used for far more than watching cat videos, they're used in surgery machines and traffic systems, etc. If AI becomes convincing enough for people to use, there should at least be some standards for avoiding it in certain areas, or avoiding it altogether.

The problem is that people will always be lazy, and eventually they will let these repeating machines decide everything for them. It has already achieved that in certain areas.

Young girls are using them as boyfriends; there was a case of a girl who committed suicide because the f***** bot said something or manipulated her into doing it.

The Pandora's box is open and all the demons will manifest accordingly. The only real use it has is to control the narrative, and everyone will think whatever their "assistant" told them to think.

This will only get worse.

👻 darkghost [OP] · Dec 06 at 19:04:

The genie is out of the bottle. I've used it for presentations and the damn thing invented numbers and graph axes; it's horrible.

🚀 stack · Dec 06 at 20:36:

It's pretty good for learning a foreign language. Also for 'how do I sort the output of ls' kind of questions. Or even 'which blood test should I order if I am concerned about my kidney function'.
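For what it's worth, that "sort the output of ls" question has stock answers you can sanity-check yourself; a minimal sketch, assuming GNU coreutils:

```shell
# Let ls do the sorting itself:
#   -S = by file size (largest first), -t = by modification time, -r = reverse
ls -lS

# Or sort the long listing yourself, numerically on column 5 (the size field)
ls -l | sort -k5 -n
```

Which is exactly the kind of answer that's easy to verify the moment it runs, so a bullshitting model gets caught immediately.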

The trick is to remember it's not intelligence, just an LLM

☯️ Hicatrus · Dec 06 at 21:16:

Are you using a locally hosted LLM? If not, you'll be subject to ads: no matter which recommendation you seek, it will be subject to what the investors are interested in.

I'm not saying it is useless, I'm saying it will affect society negatively, and there are lives being destroyed because of what it is. All technology, when introduced, has no predictable effect until the effect has played out. In other words: use it with caution, be very wary of what it tells you, and always remember what it is.

🚀 stack · Dec 06 at 21:41:

True. But when talking to people you are subjected to their biases and opinions, and sometimes hallucinations

🚀 stack · Dec 06 at 21:57:

To be honest, I have less trouble with the LLM than with my last Gen Z assistant.

☯️ Hicatrus · Dec 07 at 00:19:

That's the problem, man. People are becoming too dependent on their little bots. Eventually they'll forget what the bots are and will become exactly what the bot decides (that young girl who M'ed herself), and our families can easily fall victim to these things without our knowledge, and then one day they're gone. It's a poison to the mind: it makes you super lazy and keeps on giving you what you want even if it's not true, and people believe it. Imagine what your average Joe would think if you brought him a person who seems to know everything about everything? Eventually Joe will believe anything that person says.

I've used it once this year to check on its progress, never since. But I still find it very hard to search the web without it popping up every single time.

👻 darkghost [OP] · Dec 07 at 00:23:

I've been using Lumo to ask technical questions re: Linux and it has been pretty good for that. It doesn't think or feel and I think most of us are plainly aware that it's an overgrown stats engine hooked up to a dictionary.

🗡️ The_Jackal · Dec 15 at 19:10:

@jmcs Expensive toys (well, the cost is for the companies maintaining them; I've never paid for anything AI related) are exactly what I've been using them as. The first thing I did after finding out about the GPT-3 playground back in 2023 was get it to write the most heinous shit, usually in the form of fake scripts for TV shows where something completely uncharacteristic that would never happen in the show does, and then show it to a friend who wondered if I was just fucking with him when I said the bot wrote it.

🚀 AGourd · Dec 15 at 19:25:

Honestly, using AI for coding takes longer to fix the mistakes than it does to just do it yourself.

👻 darkghost [OP] · Dec 15 at 19:28:

I've found the same to be true for many tasks. It's almost like it isn't a great tool or something.

🗡️ The_Jackal · Dec 15 at 19:57:

@darkghost They could probably have made much more profit and caused far fewer headaches for everyone (barring the art dilemma) if they had leaned into it being a revolution on the entertainment side of things: completely procedural text adventures, dynamic NPC dialogue/interactions in video games if you go off the beaten path, more intelligent enemies for training and offline modes in shooter/other multiplayer games, PLACEHOLDERS (and not replacements) for assets in games, etc., rather than acting like it'll be a real-life Halo Cortana any day now that helps you excel at everything with amazing knowledge and advice.

🦔 bsj38381 · Dec 15 at 20:10:

And if I'm talking to an AI chatbot, I always take what it says with a grain of salt. (It doesn't help that too much of this LLM AI stuff is rushed-out crap and ends up telling you to cook something that can make you sick.)

Original Post

👻 darkghost

AI thoughts and observations — Doing some research lately and I saw that the market cap of Nvidia is larger than big pharma, all 23 of the largest big pharma companies... combined. The next data point came in the form of these stats: the amount invested is 17 times that of dot-com and 4 times that of subprime mortgages. In the US it was responsible for a third of GDP growth last year. Despite that, the tech is kind of unproven with vague ways it can help the average person. Writing things with...

💬 48 comments · 3 likes · Oct 07 · 2 months ago