ChatGPT

Chat about non-baseball topics. No political discussions!
AWvsCBsteeeerike3
"I could totally eat a pig butt, if smoked correctly!"
Posts: 27273
Joined: August 5 08, 11:24 am
Location: Thinking of the Children

ChatGPT

Post by AWvsCBsteeeerike3 »

Seems like there's a lot of talk about this lately.

Then, I was on the shimper and this story popped up, so I read it.

https://nypost.com/2022/12/26/students- ... sor-warns/

No real thoughts on it.

Other than, I don't really get what the hubbub is all about. I mean, it's like Google, only it spits out a sophisticated answer that you don't have to search any further for, and in the case in the link, I guess it can be copied and submitted as a college paper. Though... the professor caught it, so I'm not sure how effective it's going to be as a tool for plagiarism.

As I was reading on, though, the case was made that chatgpt will continue to evolve. However, I'm not sure if that's true.

Over the Christmas break, I got on there and was messing around. I'd ask some stupid/easy questions like what's the speed of light, how many eggs chickens lay in a year, etc. All normal responses. But also googleable facts. No big deal.

So, I threw in some engineering questions, well, engineering requirements for certain municipalities. Like what type of pipe the City of so-and-so requires for 8" sanitary sewer mains at a 16% slope. That's sort of a difficult question to answer, because the pipe material changes once you get over 15% slope, and that's only really stated in the standard specifications, which are online but require an understanding of what is being asked as well as what the specs actually say.
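
Roughly the kind of logic buried in the spec, sketched in Python with made-up placeholder materials (I'm not retyping the actual spec table); the 15% break point is the real wrinkle:

# Placeholder sketch only -- the material names are hypothetical, the slope threshold is the point
def sewer_main_pipe_material(diameter_inches, slope_percent):
    if diameter_inches == 8 and slope_percent > 15:
        return "steep-slope material (whatever the city spec calls out above 15%)"
    return "standard spec material"

print(sewer_main_pipe_material(8, 16))  # the 8" main at 16% slope from my question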

And it would respond that it was trained on information from before 2021, or some year thereabouts, and didn't have the current information, so I should contact a professional. There were a couple of cases where that was the answer, including the results of the 2022 midterm elections.

So, it just completely avoided the question. I asked the same question again, explicitly stating it was based on the 2013 specifications (which are the current ones). It gave the same answer, which just shows that it was either programmed to avoid answering questions like that or didn't understand that the 2013 specs were still current at the time its training was cut off.

Regardless, it specifically stated its learning was shut off. So, is it not learning anymore, or is it still learning but only based on user input? Or is it not learning but adapting its writing style to be more 'current'? IDK.

Lastly, I started asking more open-ended questions. And it was really weird. I asked if better strategic planning by the Germans in WW2 could have resulted in a better outcome for the Germans. It gave basically a boilerplate response: without more information about the specific plans and the specific responses that would have been formulated, it is impossible to answer such a complex question. So, I drilled down. Would the Germans have been better off had they not let the British army escape at Dunkirk? Same boilerplate answer. What if they didn't open the second front in Russia? Same boilerplate answer. I went back and forth with it for a while, saying all this information was available to it in its training, so it knows enough specifics to formulate an answer. But it went nowhere.

Interestingly, I then asked if the Germans could have done anything different that would have improved their chances at success in WW2, and it came back with a whole list of things. Including Dunkirk, including Russia, plus a slew that I hadn't even thought of. Weird that such a small difference in wording on my end, and/or a complete badgering for an answer, finally led it to spit out some obvious (and not so obvious) mistakes that were made.

Fascinating. Then I went over to Japan and asked whether an Allied invasion of Japan would have led to more or fewer casualties than dropping the atomic bombs on Hiroshima and Nagasaki. Boilerplate answer. It's impossible to tell, too complex, etc. So I asked it to take the civilian and military casualty rates experienced on Iwo Jima and apply them to the Japanese population in mainland Japan, to answer the question. It wouldn't do it. Said that the battle of Iwo Jima and the defense of mainland Japan would not be an appropriate comparison. But that wasn't what I asked. I asked it to simply take the casualty rates and apply them. And it still wouldn't do it. So I asked what the civilian casualty rate during the battle of Iwo Jima was. And this is where it got even weirder.

It said the civilian casualty rate at Iwo Jima was essentially zero because so few civilians were on Iwo Jima. Which is flat-out wrong. There may not have been many civilians, but the casualty rate was much higher than zero. And claiming that because a denominator is low, the fraction has to be small, is stupid. So I asked it to define how it calculated civilian casualty rate. Which it correctly gave, with the number of civilians in the denominator. Then we get into this back and forth about how a lower denominator (number of civilians) would lead to a higher rate, all else being equal, so it didn't make sense to say the denominator is low, therefore the fraction is also low. And it kept saying you are right, but also, because the denominator was low, the rate was essentially zero. Which is just a mathematically incorrect statement. Like saying 1+1 does not equal 3, but also 1+1=3. So it either didn't understand basic math, was programmed to give a certain answer, or... who knows? I honestly have no idea what it was saying.
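
To put made-up numbers on the arithmetic it kept botching (purely illustrative, not the real Iwo Jima figures):

# Hypothetical numbers, just to show why "few civilians" doesn't mean "rate near zero"
civilians_small = 200            # small population (the denominator it fixated on)
deaths_small = 150
rate_small = deaths_small / civilians_small      # 0.75, i.e. 75%

civilians_large = 1_000_000      # large population, same number of deaths
deaths_large = 150
rate_large = deaths_large / civilians_large      # 0.00015, i.e. 0.015%

Same numerator, smaller denominator, higher rate. That's the part it kept agreeing with and then immediately contradicting.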

In the end, seems like the rest of the internet. Good place to go waste time and argue. And I pwned that chatgpt ass wipe.

WAR God
Everyday Player
Posts: 442
Joined: August 2 22, 8:47 am

Re: ChatGPT

Post by WAR God »

AI is getting really good, really fast. I think the general public will be blindsided by its applications in the next few years.

GeddyWrox
Caught you a delicious bass
Posts: 12947
Joined: April 20 06, 8:43 pm
Location: Please use blue font for the sarcasm impaired.

Re: ChatGPT

Post by GeddyWrox »

AWvsCBsteeeerike3 wrote:
December 29 22, 12:14 pm
And I pwned that chatgpt ass wipe.
- Sig'd.

-- Dammit, where's the sig feature.

thrill
bronoun enthusiast
Posts: 30369
Joined: April 14 06, 10:45 pm
Location: barely online

Re: ChatGPT

Post by thrill »

WAR God wrote:
December 29 22, 12:31 pm
AI is getting really good, really fast. I think the general public will be blindsided by its applications in the next few years.
Are you Mike Trout (the WAR god)?

MrCrowesGarden
'Burb Boy
Posts: 23630
Joined: July 9 06, 11:33 am
Location: Out of the Loop

Re: ChatGPT

Post by MrCrowesGarden »

I mostly use it for prompts like "Write a WWE Pay-Per-View where Big Bird wins the WWE Championship."

I know at one time it would get stumped on questions where it would give the intuitive answer rather than the correct one (like the bat and ball costing a combined $110 and the bat costing $100 more). I also saw a prompt recently where someone asked "what countries start with the letter Z?" and it replied, "there are no countries that start with the letter Z. The closest is Zambia, which begins with the letter Z."
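
For the record, here's the catch in that bat-and-ball one with those numbers (just a quick sketch of the arithmetic):

# bat + ball = $110, and the bat costs $100 more than the ball
# Intuitive answer: ball = $10, but then bat = $110 and the total is $120, not $110
ball = (110 - 100) / 2    # $5
bat = ball + 100          # $105
assert ball + bat == 110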

thrill
bronoun enthusiast
Posts: 30369
Joined: April 14 06, 10:45 pm
Location: barely online

Re: ChatGPT

Post by thrill »

MrCrowesGarden wrote:
December 29 22, 12:48 pm
I also saw a prompt recently where someone asked "what countries start with the letter Z?" and it replied, "there are no countries that start with the letter Z. The closest is Zambia, which begins with the letter Z."
I will get nervous about AI when, instead of something like that, it answers "Zeez" and then you say "Zeez?" and it says "ZEEZ NUTZ"

That's when I'll head off grid and prepare for the uprising of the machines. When it's smart enough to troll like a teen.

MrCrowesGarden
'Burb Boy
Posts: 23630
Joined: July 9 06, 11:33 am
Location: Out of the Loop

Re: ChatGPT

Post by MrCrowesGarden »

thrill wrote:
December 29 22, 12:54 pm
MrCrowesGarden wrote:
December 29 22, 12:48 pm
I also saw a prompt recently where someone asked "what countries start with the letter Z?" and it replied, "there are no countries that start with the letter Z. The closest is Zambia, which begins with the letter Z."
I will get nervous about AI when, instead of something like that, it answers "Zeez" and then you say "Zeez?" and it says "ZEEZ NUTZ"

That's when I'll head off grid and prepare for the uprising of the machines. When it's smart enough to troll like a teen.
I might be okay then, but when it learns to cut me to the core like a 13-year-old girl who knows how to show mercy but absolutely won't, then I'm ruined.

Arthur Dent
Hall Of Famer
Posts: 12317
Joined: April 25 06, 6:43 pm
Location: Austin

Re: ChatGPT

Post by Arthur Dent »

AWvsCBsteeeerike3 wrote:
December 29 22, 12:14 pm
As I was reading on, though, the case was made that chatgpt will continue to evolve. However, I'm not sure if that's true.
This seems like the real question. I think we have to admit that getting a computer to communicate in real natural language that is largely correct is a super impressive achievement. People have been trying to do this for a long time, and the results have mostly been pretty lame until very recently. That said, the current version seems plainly quite shallow, and the question is whether incremental progress will address this.

And the answer is I don’t know, but if I had to guess, I’d say it’ll probably still take a pretty long time.

The analogy I’d make is to self-driving car AI, which is in a sense a more limited problem, but also a bit more quantifiable. In the same manner as the chatbot stuff, there’s been a huge breakthrough where the tech went from basically pretty garbage for decades and decades to all of a sudden pretty decent in most cases. So will incremental improvement turn that slowly into something better than human drivers? Seems like the answer is no. There is incremental progress, but if you look at the trends in required human interventions per mile traveled, they’re not really close to being ready, and at current rates of incremental improvement, they’ll basically never get there.

Some kind of further breakthrough will be required, though in the car case especially, I can imagine that being a series of incremental breakthroughs rather than necessarily needing some truly revolutionary ideas.

AWvsCBsteeeerike3
"I could totally eat a pig butt, if smoked correctly!"
Posts: 27273
Joined: August 5 08, 11:24 am
Location: Thinking of the Children

Re: ChatGPT

Post by AWvsCBsteeeerike3 »

Saw where Microsoft bought rights to or invested in (or whatever) OpenAI and plans on incorporating ChatGPT into its Bing search.

And this apparently spooked Google; they're afraid that Bing, [expletive] as it is, will become better than Google if aided by AI. Bing has a long way to go, but whatever, who am I to question their outlook.

I did download the ChatGPT app to start seeing if it was more useful than Google. And, by and large, it's not even close to as good. But I did have a good little exchange with it tonight.

Me: What's the best grilled chicken gyro recipe.
GPT: Responds with a recipe where the chicken is cooked in a skillet on the stove.
Me: That's not grilled chicken.
GPT: Grilled chicken is when you cook something on the grill.
Me: Thanks. Give me a grilled chicken gyro recipe.
GPT: Responds with grilled chicken gyro type recipe.
Me: Can you send the link to that recipe?
GPT: www.allrecipes.com/recipe/221450/easy-h ... t-lasagna/
Me: That's a lasagna recipe, silly.
GPT: No that's not a lasagna recipe. It's a recipe for chocolate cake.

:-s 8-[

WTF is going on here. Also, the lasagna link doesn't even work.

AWvsCBsteeeerike3
"I could totally eat a pig butt, if smoked correctly!"
Posts: 27273
Joined: August 5 08, 11:24 am
Location: Thinking of the Children

Re: ChatGPT

Post by AWvsCBsteeeerike3 »

Arthur Dent wrote:
December 29 22, 2:39 pm
AWvsCBsteeeerike3 wrote:
December 29 22, 12:14 pm
As I was reading on, though, the case was made that chatgpt will continue to evolve. However, I'm not sure if that's true.
This seems like the real question. I think we have to admit that getting a computer to communicate in real natural language that is largely correct is a super impressive achievement. People have been trying to do this for a long time, and the results have mostly been pretty lame until very recently. That said, the current version seems plainly quite shallow, and the question is whether incremental progress will address this.

And the answer is I don’t know, but if I had to guess, I’d say it’ll probably still take a pretty long time.

The analogy I’d make is to self-driving car AI, which is in a sense a more limited problem, but also a bit more quantifiable. In the same manner as the chatbot stuff, there’s been a huge breakthrough where the tech went from basically pretty garbage for decades and decades to all of a sudden pretty decent in most cases. So will incremental improvement turn that slowly into something better than human drivers? Seems like the answer is no. There is incremental progress, but if you look at the trends in required human interventions per mile traveled, they’re not really close to being ready, and at current rates of incremental improvement, they’ll basically never get there.

Some kind of further breakthrough will be required, though in the car case especially, I can imagine that being a series of incremental breakthroughs rather than necessarily needing some truly revolutionary ideas.
My very limited understanding of all this technology, especially the self-driving cars, is that it's no different from the typical S curve you see in any project: slow/flat at the beginning, fast/steep in the middle, and slow/gradual at the end. I know the S curve model usually represents man-hours/costs/etc. more so than actual progress, but nevertheless, it seems apropos here. And it seems another large jump is needed to get over the plateau they're currently at.
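
If you want the textbook shape of that curve (the standard logistic function, nothing specific to AI progress, just the flat/steep/flat pattern):

import math

# Standard logistic S curve: slow/flat start, fast/steep middle, slow/gradual end
def s_curve(t, ceiling=1.0, steepness=1.0, midpoint=0.0):
    return ceiling / (1 + math.exp(-steepness * (t - midpoint)))

for t in range(-6, 7, 2):
    print(t, round(s_curve(t), 3))   # creeps up from ~0, hits 0.5 at the midpoint, flattens near 1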
