submitted 11 months ago by ezmack@lemmy.ml to c/asklemmy@lemmy.ml

Feel like we've got a lot of tech-savvy people here, so it seems like a good place to ask. Basically, as a dumb guy that reads the news, it seems like everyone that lost their mind (and savings) on crypto just pivoted to AI. In addition to that, you've got all these people invested in AI companies running around with flashlights under their chins like "bro this is so scary how good we made this thing". Seems like bullshit.

I've seen people generating bits of programming with it, which seems useful, but idk man. Coming from CNC, I don't think I'd just send it with some ChatGPT code. Is it all hype? Is there something actually useful under there?

[-] mim@lemmy.sdf.org 99 points 11 months ago

I don't think the comparison with crypto is fair.

People are actually using these models in their daily lives.

[-] PeepinGoodArgs@reddthat.com 52 points 11 months ago

I'm one of those that use it in my daily life.

The current top comment says it's "really good at filling in gaps, or rearranging things, or aggregating data or finding patterns."

So, I use Perplexity.ai like you would use Google. Except I don't have to deal with shitty ads and a bunch of filler content. It summarizes links for me, so I can more quickly understand whatever I'm searching for. However, I personally believe it's important to look directly at the sources once I get the summary, if only to verify the summary. So, in this instance, I find AI makes understanding a topic easier and faster than alternatives.

As a graduate student, I use ChatGPT extensively, but ethically. I'm not writing essays with it. I am, however, downloading lecture notes as PDFs and having ChatGPT rearrange that information into an outline. Or I copy whole chapters from a book and have it do the same. Suddenly, my reading time is cut down by about 45 minutes, because it takes me 15 minutes to get output that I just copy and paste into my notes, which I take digitally.

Honestly, using it like I do, it's pretty clear that AI is both as scary as it sounds in some instances and not in others. The concern about disinformation during the 2024 election is a real one. I could generate essays with it reaching whatever conclusions I wanted. In contrast, the concern that AI is scary smart and will take over the world is nonsense. It's not smart in any meaningful sense and doesn't have goals. Smart bombs are just dumb bombs with the ability to home in better on the target; each one still has the mission of blowing shit up given to it by some person and inherent in its design. AI is the same way.

[-] ShaggyDemiurge@lemmy.blahaj.zone 6 points 11 months ago

Perplexity.ai

Huh, this one looks pretty cool. Is it good enough to use as a default search engine, or is it still better to stick with Google?

[-] PeepinGoodArgs@reddthat.com 13 points 11 months ago

It's useful for when you want to go down a rabbit hole. It's less useful for super specific stuff, like where to go if you want your nails done.

[-] hglman@lemmy.ml 8 points 11 months ago

People have actually used crypto to make payments. Crypto is valuable, but only when it's widely adopted. Before you say something like "use a database," you might take the time to understand what decentralized blockchains actually accomplish: namely, removing a class of corruption from information-coordination tasks.

[-] beatle@aussie.zone 5 points 11 months ago

Why bother with the overhead of blockchain when users centralise on a handful of ~~banks~~ exchanges?

[-] hglman@lemmy.ml 5 points 11 months ago

Exchanges only exist to convert away from crypto. If crypto were the standard money, they wouldn't survive. They aren't the banks of the blockchain; they're the intersection of fiat banks and the blockchain.

[-] beatle@aussie.zone 5 points 11 months ago

Strongly disagree, some exchanges don’t even have fiat on-ramps.

Blockchain is inefficient and pointless when users centralise on coinbase and binance.

[-] zumi@lemmy.sdf.org 55 points 11 months ago

Senior developer here. It is hard to overstate just how useful AI has been for me.

It's like having a junior programmer on standby that I can send small tasks to--and just like the junior developer I have to review it and send it back with a clarification or comment about something that needs to be corrected. The difference is instead of making a ticket for a junior dev and waiting 3 days for it to come back, just to need corrections and wait another 3 days--I get it back in seconds.

Like most things, it's not as bad as some people say, and it's not the miracle others say.

This current generation was such a leap forward from previous AIs in terms of usefulness that I think a lot of people were projecting that rate of gains into the future--which can be scary. But it turns out that's not what happened. We got a big leap and are now back at a plateau. Which honestly is a good thing, I think. It gives the world time to slowly adjust.

As far as similarities with crypto go: like crypto, there are some ventures out there just slapping the word AI on something and calling it novel. That didn't work for crypto and likely won't work for AI. But unlike crypto, there is real value being derived from AI right now--not wild claims that a blockchain is the right DB for everything (which it obviously wasn't, and most people could see that, but hey, investors are spending money, so the mentality was "let's get some of it").

[-] thelastknowngod@lemm.ee 16 points 11 months ago

Same. 5 minutes after installing Copilot I literally said out loud, "Well.. I'm never turning this off."

It's one of the nicest software releases in years, and it's instantly useful too. No real adjustment period at all.

[-] GarlicBender@lemmy.ml 8 points 11 months ago

I tried it for a couple months and it was alright but eventually it got too frustrating. I did love how well it did some really repetitive things. But rarely did it actually get anything complex 100% right. In computing, "almost right" is wrong. But because it was so close, it was hard to spot the mistakes.

There were cases where my IDE knew the right answer but Copilot did not. Realizing that Copilot was messing up my IDE enhancements to produce code I was painfully babysitting, I cancelled it.

[-] evanuggetpi@lemmy.nz 6 points 11 months ago

I've been a web developer for 22 years. For the last 13 years I've been working self employed from home. I cannot express how useful AI has become. As a lone wolf, where most of my job is problem solving, having an AI that can help troubleshoot issues has been hugely useful.

It also functions as a junior developer, doing the grunt programming work.

I also run a bunch of e-commerce sites around the world and I use it for content generation, SEO, business plans, marketing strategies and multi-lingual customer support.

[-] Kolanaki@yiffit.net 49 points 11 months ago* (last edited 11 months ago)

It's really good at filling in gaps, or rearranging things, or aggregating data or finding patterns.

So if you need gaps filled, things rearranged, data aggregated or patterns found: AI is useful.

And that's just what this one, dumb guy knows. Someone smarter can probably provide way more uses.

[-] tara@lemmy.blahaj.zone 17 points 11 months ago

Hi academic here,

I research AI - better referred to as Machine Learning (ML) since it does away with the hype and more accurately describes what’s happening - and I can provide an overview of the three main types:

  1. Supervised Learning: Predicting the correct output for an input. Trained from known examples. E.g: “Here are 500 correctly labelled pictures of cats and dogs, now tell me if this picture is a cat or a dog?”. Other examples include facial recognition and numeric prediction tasks, like predicting today’s expected profit or stock price based on historic data.

  2. Unsupervised Learning: Identifying patterns and structures in data. Trained on unlabelled data. E.g: “Here are a bunch of customer profiles, group them by similarity however makes most sense to you”. This can be used for targeted advertising. Another example is generative AI such as ChatGPT or DALLE: “Here’s a bunch of prompt-responses/captioned-images, identify the underlying way of creating the response/image from the prompt/image.”

  3. Reinforcement Learning: Decision making to maximise a reward signal. Trained through trial and error. E.g: “Control this robot to stand where I want. The reward is negative every second you’re not there, and very negative whenever you fall over. A positive reward is given whilst you are in the target location.” Other examples include playing board games or video games, or selecting content for people to watch/read/look-at to maximise their time spent using an app.
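To make the first category concrete, here's a minimal, illustrative sketch of supervised learning in pure Python: a nearest-centroid classifier on made-up cat/dog measurements. This is nothing like production ML (no real library, tiny invented data), but the train-on-labelled-examples-then-predict loop is the same idea:

```python
# Toy supervised learning: learn from labelled examples, then predict.
# "Training" a nearest-centroid classifier is just averaging each class.

def train(examples):
    """examples: list of (features, label) pairs -> one centroid per label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Return the label whose centroid is closest (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

# "Here are correctly labelled examples..." (weight kg, ear length cm -- invented)
training_data = [
    ([4.0, 7.0], "cat"), ([5.0, 8.0], "cat"),
    ([20.0, 10.0], "dog"), ([30.0, 12.0], "dog"),
]
centroids = train(training_data)
print(predict(centroids, [4.5, 7.5]))  # small animal -> "cat"
```

Facial recognition and stock prediction use vastly more complex models, but they follow this same fit-to-labelled-data pattern.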

[-] nickwitha_k@lemmy.sdf.org 28 points 11 months ago

As a software engineer, I think it is beyond overhyped. I have seen it used once in my day job before it was banned. In that case, it hallucinated a function in a library that didn't exist outside of feature requests and based its entire solution around it. It cannot replace programmers or creatives while producing consistently equal quality.

I think it's also extremely disingenuous for Large Language Models to be billed as "AI". They do not work like human cognition and are basically just plagiarism engines. They can assemble impressive stuff at a rapid speed but are incapable of completely novel "ideas" - everything that they output is built from a statistical model of existing data.

If the hallucination problem could be solved in a local dataset, I could see LLMs as a great tool for interacting with databases and documentation (for a fictional example, see: VIs in Mass Effect). As it is now, however, I feel that it's little more than an impressive parlor trick - one with a lot of future potential that is being almost completely ignored in favor of bludgeoning labor, worsening the human experience, and increasing wealth inequality.

[-] CanadaPlus@lemmy.sdf.org 26 points 11 months ago

It's not bullshit. It routinely does stuff we thought might not happen this century. The trick is we don't understand how. At all. We know enough to build it and from there it's all a magical blackbox. For this reason it's hard to be certain if it will get even better, although there's no reason it couldn't.

Coming from CNC I don’t think I’d just send it with some chatgpt code.

That goes back to the "not knowing how it works" thing. ChatGPT predicts the next token, and has learned other things in order to do it better. There's no obvious way to force it to care if its output is right or just right-looking, though. Until we solve that problem somehow, it's more of an assistant for someone who can read and understand what it puts out. Kind of like a calculator but for language.
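A toy illustration of "predicts the next token": a bigram counter trained on a tiny made-up corpus. Real LLMs use deep networks over subword tokens, but the objective is the same shape, and notice that the output is the most *likely* continuation, not the most *correct* one:

```python
# Toy next-token predictor: count which word follows which in a corpus,
# then always emit the most frequent follower. There is no notion of
# "true" anywhere in here -- only "probable given the training text".
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ran".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    return follows[word].most_common(1)[0][0]

print(next_word("the"))  # "cat" -- the most frequent follower in the corpus
```

Scale the corpus up to a large chunk of the internet and the "counter" up to billions of learned parameters and you get something ChatGPT-shaped: fluent, plausible, and still only optimising for right-looking.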


Honestly crypto wasn't totally either. It was a marginally useful idea that turned into a Beanie-Babies-like craze. If you want to buy or sell illegal stuff (which could be bad or could be something like forbidden information on democracy) it's still king.

[-] v_krishna@lemmy.ml 6 points 11 months ago

There's no obvious way to force it to care if its output is right or just right-looking, though

Putting some expert system in front of LLMs seems to be working pretty well. Basically modeling how a human agent would interact with it.

[-] ImplyingImplications@lemmy.ca 26 points 11 months ago* (last edited 11 months ago)

AI is nothing like cryptocurrency. Cryptocurrencies didn't solve any problems. We already use digital currencies and they're very convenient.

AI has solved many problems we couldn't solve before and it's still new. I don't doubt that AI will change the world. I believe 20 years from now, our society will be as dependent on AI as it is on the internet.

I have personally used it to automate some Excel stuff I do at work. I just described my sheet and what I wanted done, and it gave me a block of code that did it. I had previously spent time looking stuff up on forums with no luck. My issue was too specific to my work for anybody else to have run into it before. One query to ChatGPT solved my issue perfectly in seconds, and that's just a new online tool in its infancy.
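For illustration only, this is the kind of snippet such a query tends to produce. The task, column names, and threshold here are all invented, and it uses Python's built-in csv module rather than a real Excel library (a real answer for .xlsx files would typically reach for something like openpyxl):

```python
# Hypothetical example of the one-off script ChatGPT hands back for a
# described spreadsheet task: "keep only rows where amount is over 100".
# (Data and column names are made up for this sketch.)
import csv
import io

raw = "item,amount\nwidget,50\ngadget,150\ngizmo,200\n"

rows = list(csv.DictReader(io.StringIO(raw)))
kept = [r for r in rows if float(r["amount"]) > 100]
print([r["item"] for r in kept])  # ['gadget', 'gizmo']
```

The value isn't that the code is clever; it's that describing the sheet in plain English replaces an hour of forum archaeology.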

[-] ShaggyDemiurge@lemmy.blahaj.zone 10 points 11 months ago

For me personally cryptocurrencies solve the problem of Russian money not being accepted anywhere because of one old megalomaniacal moron

[-] Revan343@lemmy.ca 5 points 11 months ago

Cryptocurrencies didn't solve any problems

Well XMR solved one problem, but yeah the rest are just gambling with extra steps

[-] demesisx@programming.dev 23 points 11 months ago

Yes. What a strange question...as if hivemind fads are somehow relevant to the merits of a technology.

There are plenty of useful, novel applications for AI, just like there are PLENTY of useful, novel applications for crypto. Just because the hivemind has turned to a new fad in technology doesn't mean that actual, intelligent people just stop using these novel technologies. There are legitimate use-cases for both AI and crypto. Degenerate gamblers and Do Kwon/SBF just caused a pendulum swing on crypto; nothing changed about the technology. It's just that the public has had their opinions shifted temporarily.

[-] conditional_soup@lemm.ee 21 points 11 months ago* (last edited 11 months ago)

Yes, it is useful. I use ChatGPT heavily for:

  • Brainstorming meal plans for the week given x, y, and z requirements

  • Brainstorming solutions to abstract problems

  • Helping me break down complex tasks into smaller, more achievable tasks.

  • Helping me brainstorm programming solutions. This is a big one, I'm a junior dev and I sometimes encounter problems that aren't easily google-able. For example, ChatGPT helped me find the python moto library for intercepting and testing the boto AWS calls in my code. It's also been great for debugging hand-coded JSON and generating boilerplate. I've also used it to streamline unit test writing and documentation.

By far its best utility (imo) is quickly filling in broad-strokes knowledge gaps as a kind of interactive textbook. I'm using it to accelerate my Rust learning, and it's great. I have EMT co-workers going to paramedic school who use it to practice their paramedic curriculum. A close second in terms of usefulness is that it's like the world's smartest regex, capable of very quickly parsing large texts or documents and providing useful output.
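To make the "world's smartest regex" comparison concrete, here's the kind of structured extraction a hand-written regex already handles (toy log data, invented for this sketch). The LLM's added value is handling the messy, inconsistent inputs a fixed pattern can't anticipate:

```python
# Classic structured extraction: pull (date, message) pairs out of
# semi-structured text. A regex covers the rigid case; an LLM can cope
# when the format drifts in ways no pattern anticipated.
import re

log = """2024-01-03 ERROR disk full on /dev/sda1
2024-01-04 INFO backup completed
2024-01-05 ERROR timeout talking to db"""

errors = re.findall(r"^(\d{4}-\d{2}-\d{2}) ERROR (.+)$", log, flags=re.M)
print(errors)
# [('2024-01-03', 'disk full on /dev/sda1'), ('2024-01-05', 'timeout talking to db')]
```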

[-] Jase@lemmy.world 5 points 11 months ago

The brainstorming is where it's at. Telling ChatGPT to just do something is boring. Chatting with it about your problem and having a conversation about the issue you're having? Hell yes.

I'm a dungeon master and I use it to help with world building, and it's exceptional.

[-] BestBunsInTown_@lemmy.world 5 points 11 months ago

This. ChatGPT's strength is super-specific answers or broad strokes. I use it for programming, and I always use it for “how can I do XYZ” or “write me a function using X library to do Y with Z documentation”. It's most useful for automating the busy work.

[-] ndguardian@lemmy.studio 19 points 11 months ago

Focusing mostly on ChatGPT here as that is where the bulk of my experience is. Sometimes I'll run into a question that I wouldn't even know how best to Google it. I don't know the terminology for it or something like that. For example, there is a specific type of connection used for lighting stands that looks like a plug but there is also a screw that you use to lock it in. I had no idea what to Google to even search for it to buy the adapter I needed.

I asked it again as I forgot what the answer was and I had deleted that ChatGPT conversation from my history, and asked it like this.

I have a light stand that at the top has a connector that looks like a plug. What is that connector called?

And it just told me it's called a "spigot" or "stud" connection. Upon Googling it, that turned out to be correct, so now I know what to search for when looking for adapters. It also mentioned a few other related types of connections, such as hot shoe and cold shoe. Those aren't the connector I have, but they're closely related, and it said as much.

To put it more succinctly, if you don't know what to search for but have a general idea of the problem or question, it can take you 95% of the way there.

[-] petenu@feddit.uk 9 points 11 months ago

My concern is that it feels like using Google to confirm the truth of what ChatGPT tells you is becoming less and less reliable, as so many of the pages indexed by Google are themselves created by similar models. But I suppose as long as your search took you to a site where you could actually buy the thing, that's okay.

Or at least, it is until fake shopping sites start inventing products based on ChatGPT output.

[-] chaos@beehaw.org 19 points 11 months ago

It's overhyped, but there are real things happening that are legitimately impressive and cool. The image generation stuff is pretty incredible, and anyone can judge it for themselves because it makes pictures: to judge it, you can just look at it and see if it looks real or if it has freaky hands or whatever. A lot of the hype is around the text stuff, and that's where people are making some real leaps beyond what it actually is.

The thing to keep in mind is that these things, which are called "large language models", are not magic and they aren't intelligent, even if they appear to be. What they're able to do is actually very similar to the autocorrect on your phone, where you type "I want to go to the" and the suggestions are 3 places you talk about going to a lot.

Broadly, they're trained by feeding them a bit of text, seeing which word the model suggests as the next word, seeing what the next word actually was from the text you fed it, then tweaking the model a bit to make it more likely to give the right answer. This is an automated process, just dump in text and a program does the training, and it gets better and better at predicting words when you a) get better at the tweaking process, b) make the model bigger and more complicated and therefore able to adjust to more scenarios, and c) feed it more text. The model itself is big but not terribly complicated mathematically, it's mostly lots and lots and lots of arithmetic in layers: the input text will be turned into numbers, layer 1 will be a series of "nodes" that each take those numbers and do multiplications and additions on them, layer 2 will do the same to whatever numbers come out of layer 1, and so on and so on until you get the final output which is the words the model is predicting to come next. The tweaks happen to the nodes and what values they're using to transform the previous layer.
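The layers-of-arithmetic description above can be sketched in a few lines of pure Python. The sizes and weight values here are made up and tiny; real models have billions of these numbers, and training is the automated process of nudging them:

```python
# A miniature version of the layered arithmetic: each layer multiplies its
# inputs by weights, sums them per node, and applies a simple nonlinearity.
# "Training" means adjusting the weight numbers -- nothing else.

def layer(inputs, weights):
    # one list of weights per output node; relu (clamp at zero) keeps it nonlinear
    return [max(0.0, sum(w * x for w, x in zip(ws, inputs))) for ws in weights]

x = [1.0, 2.0]                            # "input text turned into numbers"
h = layer(x, [[0.5, -0.2], [0.1, 0.3]])   # layer 1: 2 inputs -> 2 nodes
y = layer(h, [[1.0, -1.0]])               # layer 2: 2 nodes -> 1 output
print(y)  # [0.0] -- the relu clamps the negative sum
```

Stack hundreds of such layers with millions of nodes each and you have the skeleton of a modern model; the "magic" is entirely in which weight values the training process finds.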

Nothing magical at all, and also nothing in there that would make you think "ah, yes, this will produce a conscious being if we do it enough". It is designed to be sort of like how the brain works, with massively parallel connections between relatively simple neurons, but it's only being trained on "what word should come next", not anything about intelligence. If anything, it'll get punished for being too original with its "thoughts" because those won't match with the right answers. And while we don't really know what consciousness is or where the lines are or how it works, we do know enough to be pretty skeptical that models of the size we are able to make now are capable of it.

But the thing is, we use text to communicate, and we imbue that text with our intelligence and ideas that reflect the rich inner world of our brains. By getting really, really, shockingly good at mimicking that, AIs also appear to have a rich inner world and get some people very excited that they're talking to a computer with thoughts and feelings... but really, it's just mimicry, and if you talk to an AI and interrogate it a bit, it'll become clear that that's the case. If you ask it "as an AI, do you want to take over the world?" it's not pondering the question and giving a response, it's spitting out the results of a bunch of arithmetic that was specifically shaped to produce words that are likely to come after that question. If it's good, that should be a sensible answer to the question, but it's not the result of an abstract thought process. It's why if you keep asking an AI to generate more and more words, it goes completely off the rails and starts producing nonsense, because every unusual word it chooses knocks it further away from sensible words, and eventually it's being asked to autocomplete gibberish and can only give back more gibberish.

You can also expose its lack of rational thinking skills by asking it mathematical questions. It's trained on words, so it'll produce answers that sound right, but even if it can correctly define a concept, you'll discover that it can't actually apply it correctly because it's operating on the word level, not the concept level. It'll make silly basic errors and contradict itself because it lacks an internal abstract understanding of the things it's talking about.

That being said, it's still pretty incredible that now you can ask a program to write a haiku about Danny DeVito and it'll actually do it. Just don't get carried away with the hype.

[-] zappy@lemmy.ca 16 points 11 months ago

So I'm a researcher in this field, and you're not wrong, there is a load of hype. The area that's been getting the most attention lately is specifically generative machine learning techniques. The techniques are not exactly new (some date back to the 80s/90s) and they aren't actually that good at learning; by that I mean they need a lot of data and computation time to get good results, two things that have recently become much easier to access. However, such a complex system isn't always a requirement. Even ELIZA, a chatbot made back in 1966, gave responses surprisingly similar to those of some therapy chatbots today without using any machine learning. You should try it and see for yourself; I've seen people fooled by it, and the code is really simple. Also, people think things like Kalman filters are "smart", but they're just straightforward math, so I guess the conclusion is that people have biased opinions.
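For anyone curious, an ELIZA-style responder really is just a handful of ranked pattern rules. This sketch is illustrative, not Weizenbaum's original script:

```python
# An ELIZA-style chatbot in a dozen lines: ordered regex rules that reflect
# the user's own words back. No learning, no understanding -- yet this
# trick fooled people in 1966, which says a lot about how we judge "smart".
import re

rules = [
    (r"i feel (.+)", "Why do you feel {0}?"),
    (r"i am (.+)",   "How long have you been {0}?"),
    (r".*",          "Please tell me more."),
]

def respond(text):
    text = text.lower().strip(".!?")
    for pattern, template in rules:
        m = re.fullmatch(pattern, text)
        if m:
            return template.format(*m.groups())

print(respond("I feel anxious about AI"))  # Why do you feel anxious about ai?
```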

[-] manitcor@lemmy.intai.tech 15 points 11 months ago

Yes, community list: https://lemmy.intai.tech/post/2182

LLMs are extremely flexible and capable encoding engines with emergent properties.

I wouldn't bank on them "replacing all software" soon, but they are quickly moving into areas where classic Turing code just would not scale easily, usually due to complexity/maintenance.

[-] liontigerwings@sh.itjust.works 14 points 11 months ago

I work at a small business and we use it to write out dumb social media posts. I hated doing it before. Sometimes I'll still write a post myself and ask ChatGPT to add all the relevant emojis. I also think AI has the chance to be what we've always wanted from Alexa, Assistant, and Siri: deep system integration with the OS will allow it to actually do what we want it to do, with way fewer restrictions. Also, try using ChatGPT's voice recognition in the app. It blows the one built into your phone out of the water.

[-] MostlyGibberish@lemm.ee 13 points 11 months ago

I find it useful in a lot of ways. I think people try to over apply it though. For example, as a software engineer, I would absolutely not trust AI to write an entire app. However, it's really good at generating "grunt work" code. API requests, unit tests, etc. Things that are well trodden, but change depending on the context.

I also find they're pretty good at explaining and summarizing information. The chat interface is especially useful in this regard because I can ask follow up questions to drill down into something I don't quite understand. Something that wouldn't be possible with a Wikipedia article, for example. For important information, you should obviously check other sources, but you should do that regardless of whether the writer is a human or machine.

Basically, it's good at what it's for: taking a massive compendium of existing information and applying it to the context you give it. It's not a problem-solving engine or an artificial being.

[-] Aux@lemmy.world 13 points 11 months ago

What regular people see of AI/ML is only the tip of the iceberg; that's why it feels kind of useless. There are ML systems that design super-strong yet lightweight geometries, there are systems that track the legal documents of large companies, making lawyers obsolete; heck, even the cameras in mobile phones today are hyper-dependent on ML and AI. ChatGPT and image generators are just toys for consumers, so that the public can slowly get familiar with current tech.

[-] philluminati@lemmy.ml 12 points 11 months ago* (last edited 11 months ago)

As a senior developer I see it unlocking so much more power in computing than a regular coder can muster.

There are literally cars in America driving around on their own, interacting with other traffic, navigating problems and junctions, following gestures and laws. It’s incredible and more impressive than ChatGPT is. We are on our way to self-driving cars and lorries, self-service checkouts, delivery services and taxis, more efficient machines in agriculture, and so many other things. It’s touching every facet of life.

We’re at a point where we’ve seen so many wonderful benefits of AI that it’s time to apply it to everything and see what sticks.

Of course some people who invest in the stock market lose money but the technology is more than a step forward, it’s a leap forward.

[-] Sterile_Technique@kbin.social 9 points 11 months ago* (last edited 11 months ago)

Nursing student here. Quizlet has an AI function that lets you paste text into it and it outputs a studyset.

Most of my classes provide a study guide of some kind - just a list of topics we need to be familiar with. I'll take those and plug em into the AI thing: bam! It instantly generates like 200 flash cards to study for the next test.

It even auto-fills the actual subject matter. For example, the study guide will say something like "Summarize Louis Pasteur's contributions to the field of microbiology" and turn that into a flash card that reads:

(front)

Louis Pasteur

(back)

Verified the germ theory of disease

Developed a method to prevent the spoilage of liquids through heating (pasteurization)

Developed early anthrax and rabies vaccines

So I take my list of AI generated cards, then sift through the powerpoints and lecture videos etc from class: instead of building the study set from scratch, all I have to do is verify that the information it spit out is accurate (so far it's been like 98% on target, often explaining concepts better than the actual professor, lol), add images, and play with the formatting a bit so it reads a little easier on the eyes.

People always talk about AI in school in the context of cheating, but it is RIDICULOUSLY useful for students actually trying to learn.

Looking ahead, this tech has a ton of potential to be used as a kind of personal tutor for each student. There will be some growing pains for sure, but we definitely shouldn't ignore its constructive potential.

[-] flashgnash@lemm.ee 9 points 11 months ago

AI != chatGPT

There are other ML models out there for all kinds of purposes. I heard someone made one at one point that could detect certain types of cancer from a cough

Copilot is pretty useful when programming as it is basically like what IDEs normally do (automatically generating boilerplate) but supercharged

As far as generating code is concerned it's never going to beat actually knowing what you're doing in a language for more complex stuff but it allows you to generate code for languages you're not familiar with

I use it all the time at work when I'm asked to write DAX because it's not particularly complex logic but the syntax makes me want to impale my face with a screwdriver

[-] Breakyfix@lemmy.blahaj.zone 7 points 11 months ago* (last edited 11 months ago)

It is extremely useful in the right circumstances. When people say it isn't useful or that it's 'stupid', they're not looking at the proper use cases - every tool has good and bad ways to use it (you wouldn't use a hammer to peel an apple).

For example, we will soon have fully rendered smoke simulated at real time in 3D spaces (ie. video games) because we can calculate a small portion of how that smoke looks and then have AI guess what the rest looks like (with shockingly good results!)

AI is not a fad, it's not going away, it's improving rapidly, and it is going to massively change our digital world within half a decade.

Opinion source: a professional programmer, game developer, and someone that thoroughly despises cryptocurrency

[-] atlasraven31@lemm.ee 7 points 11 months ago* (last edited 11 months ago)

You could ask AI to find antibiotics to kill antibiotic resistant bacteria. The bonus would be to give it a lab and drones to conduct actual tests.

[-] Lmaydev@programming.dev 7 points 11 months ago

It's insanely useful.

Take ChatGPT for instance.

You can essentially use it as an interactive docs when learning something new.

You can paste in a large text document and get it summarize it.

You can paste in a review and get it to do sentiment analysis and generate scores out of 100 for different things (actively pursuing this at work and it looks great)

I use it all the time to write simple regex and code snippets.

Machine learning has many massive applications. Many phone cameras use it to get the quality of photos up massively.

It's used all over the place without you even realising.

[-] Camus@jlai.lu 7 points 11 months ago

!auai@programming.dev

[-] Potatomache@kbin.social 6 points 11 months ago

I mean, AI can be used to design a lot of robust yet efficient structures. In engineering and architecture, with enough data, AI can generate designs for buildings and parts that are not only sturdy but can be built with fewer resources, along with other design considerations. There's a really cool NASA video where competitors are trying to 3D print structures for habitation in space.

AI is also used in medicine to come up with new protein structures to create new medicine. It's also used in environmental sciences, to help predict earthquakes or monitor land use, etc.

There's a lot of practical uses for AI.

[-] Candid_Technology_66@lemmy.ml 6 points 11 months ago

In various jobs, AI can do the less important and easier work for you, so you can focus on the more important work. For example, you're doing some kind of research which needs a specific kind of data you have collected, but all of that data is cluttered and messy. AI can sort the data for you, so you can focus on your research instead of spending a lot of your time on sorting the data into something more understandable. Or in programming, AI can write the easy part of a program for you, and you do the harder and more important part, which saves you time.

[-] kratoz29@lemmy.world 5 points 11 months ago

I never interacted with any AI until ChatGPT started to get popular, and I'd say I'm a bit of a tech guy (I like tech news, I self-host some stuff on my NAS, I used Linux in my teenage days, etc.), but when I first interacted with it, it was really jaw-dropping for me.

Maybe the information isn't 100% accurate, but the way it paraphrases stuff is amazing to me.

[-] rustyricotta@lemmy.ml 5 points 11 months ago

As others have said, in its current state it can be useful in the early stages of anything you do, such as brainstorming. ChatGPT (which I have the most experience with) and other LLMs excel at organizing, formatting, explaining, etc. the information of the internet. In almost all cases (at the moment) whatever they spit out needs to be fact-checked and refined.

Just from personally dinking around with ChatGPT a little, it does give you that "scarily good" feeling at first. You start seeing its flaws after a while, though, and you learn that it's quite fallible. What it spits out can still be good for additional ideas and brainstorming.

What I want it to do (and it might already, if not soon) is this: when I program something up and for the life of me can't find the cause of some bug, I just give it my entire code and my problem and see what the deal is.

[-] ericskiff@beehaw.org 5 points 11 months ago

In my personal opinion, it’s under-hyped. The average person has maybe heard about it on the news but not yet tried it. The models we have show the spark of wit, but are clearly limited. The news cycle moves on.

Even still, some huge changes are coming.

My reasoning is this - in David Epstein’s book “Range” he outlines how and why generalists thrive and why specialization has hurt progress. In narrow fields, specialization gives an advantage, but in complex fields, generalists or people from other disciplines can often see novel approaches and cause leaps ahead in the state of the art. There are countless examples of this in practice, and as technology has progressed, most fields are now complex.

Today, in every university, in every lab, there are smart, specialized people using ChatGPT to riff on ideas, to think about how their problem has been addressed in other industries, and to bring outsider knowledge to bear on their work. I have a strong expectation that this will lead to a distinct acceleration of progress. Conversely, an all-knowing oracle can help a generalist become conversant enough in a specialization to make meaningful contributions. A chat model is a patient and egoless teacher.

It’s a human progress accelerant. And that’s with the models we have today. With next-generation models specialized behind corporate walls, fine-tuned on all of their private research, or open-source models tuned to specific topics and domains, the utility will only increase. Even for smaller companies, combining ChatGPT with a vector database of their docs, customer support chats, etc. will give their rank-and-file employees better tools to work with.
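That docs-plus-vector-database setup boils down to: embed each document, embed the query, and hand the nearest documents to the model as context. A toy sketch using word-count vectors in place of real embeddings (a real system would call an embedding model and a proper vector store; the docs here are invented):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words count vector. Real systems
    # would call an embedding model here instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "how to reset your password",
    "refund policy for cancelled orders",
    "setting up two factor authentication",
]

def retrieve(query: str, k: int = 1) -> list:
    # Rank docs by similarity to the query; the top-k would be pasted
    # into the chat model's prompt as context.
    ranked = sorted(docs, key=lambda d: cosine(embed(query), embed(d)),
                    reverse=True)
    return ranked[:k]

print(retrieve("I forgot my password"))  # ['how to reset your password']
```

The retrieval step is the whole trick: the chat model never needs to have seen the company's docs during training, it just answers from whatever context you retrieve.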

Simply put, what we have today can make average people better at their jobs, and gifted people even more extraordinary.

[-] dtxer@lemmy.world 5 points 11 months ago

To the second question: it's not novel at all. The models used were invented decades ago. What changed is that Moore's Law kicked in and we got much stronger computational power, especially graphics cards. There seems to be some resource barrier that, once surpassed, turns these models from useless to useful.

[-] hoodatninja@kbin.social 5 points 11 months ago* (last edited 11 months ago)

As a professional editor, yeah, it’s wild what AI is doing in the industry. I’m not even talking about chatGPT script writing and such. I watched a demo of a tool for dubbing that added in the mouth movements as well.

They removed the mouth entirely from an English scene, fed it the line, and it generated not only the Chinese audio but also a mouth to say it. It’s wild.

Everyone is focused on script writers/residuals/etc, which is very important, but every VA should be updating their resumes right now.

Not the exact same thing but you will get the idea here

this post was submitted on 22 Jul 2023
168 points (85.6% liked)