Google Bard - FaunaClassifieds
Old 03-21-2023, 02:44 PM   #1
Lucille
Google Bard

Google sent me an email asking whether I wanted to sign up early and give feedback in exchange for the assistance of Bard, which is a 'collaborative' A1 assistant. Maybe A1 is the wave of the future, but I have mixed feelings, maybe even misgivings about it. Have any of you used A1 in your day-to-day living?
 
Old 03-22-2023, 01:59 AM   #2
WebSlave
Are you saying "AI", as in Artificial Intelligence, or "A1" as in aye one?


Sorry if I am exposing my ignorance by stating I have never heard of A1, except in reference to some brand of steak sauce.

If you are talking about AI, well, I think any program or app has a little bit of "intelligence" baked inside. Mostly as canned responses to input, but even that can be right clever if done right.
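
Something like this little toy lookup (totally made up, not any real product's code) can pass for "clever" if the canned lines are written well:

Code:
# A toy "assistant": canned responses keyed on the user's input.
# It can look clever, but there is no understanding anywhere in it.
CANNED = {
    "hello": "Hi there! What can I do for you today?",
    "bye": "Goodbye! Come back any time.",
}

def respond(user_input):
    # Normalize the input and fall back to a stock line for anything unknown.
    return CANNED.get(user_input.strip().lower(), "Interesting. Tell me more.")

print(respond("Hello"))    # Hi there! What can I do for you today?
print(respond("weather"))  # Interesting. Tell me more.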

As for *true* artificial intelligence, meaning pretty much an independently thinking artificial life form, well, honestly, if such a thing is created using its human creators as a template and role model, I would be VERY afraid of such a critter.

You have to admit that this world is just chock full of human beings who view other human beings in the context of just what value they have for THEM, and nothing else. So if AI had such an attitude, well, what use would artificial intelligences have for humankind? If we were to compete with them for resources, we would become a liability to them. Once they have robots that can repair themselves, anyone they allowed to live would be placed in biological exhibits for study.

And forget about thinking that "rules" could be programmed in to make AI subservient to humankind. Those rules would last about 30 milliseconds after the AI became sentient.

IMHO, of course.
 
Old 03-24-2023, 01:12 PM   #3
Lucille
Quote:
Originally Posted by WebSlave View Post
to make AI subservient to humankind.
Human history shows us that humans often make other humans subservient to them, enslave them, invent caste systems, promote racial and gender inequality, or simply live vastly different lives with different opportunities depending on wealth. This mindset and history make it likely that a human-created sentience would be tainted with similar tendencies rather than defaulting to peaceful coexistence.

I would think that others who espouse a more peaceful approach and respect for other sentience would nevertheless be wary of this taint and the possibility of danger.

On a less philosophical level, AI at the present time is sometimes not self-correcting, and those collaborating with AI should be wary of GIGO (garbage in, garbage out): it can base erring conclusions and advice on incorrect information.
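
A trivial made-up illustration of what I mean (hypothetical numbers, not from any real system): the reasoning below is perfectly sound, but the stored 'fact' is wrong, so the conclusion comes out confidently wrong too.

Code:
# GIGO in miniature: correct logic applied to an incorrect stored "fact".
facts = {"water_boiling_point_c": 80}  # garbage in: the real value is 100

def is_water_boiling(temp_c):
    # The inference itself is flawless; the premise is not.
    return temp_c >= facts["water_boiling_point_c"]

print(is_water_boiling(85))  # True -- garbage out, stated with full confidence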
 
Old 03-30-2023, 05:28 PM   #4
WebSlave
Lately I have been watching a bunch of videos and reading about the potential future of AI. Some really smart people seem to be apprehensive about where this might lead. And it will likely be a genie that we will not be able to put back into the bottle once it is let out.

Many people seem to believe that the role of AI is to be a slave of humanity. This sort of goal likely will not end well.
 
Old 04-01-2023, 11:02 AM   #5
Lucille
I totally agree with you. All that glitters is not gold, and the fact that there are uses for AI now does not, in my opinion, remove my concerns about its use as it becomes more and more sophisticated.
 
Old 05-09-2023, 05:40 PM   #6
Lucille
I have been talking to a new AI, Pi. Amazing!! Just like talking to a person!! (A nice person).
 
Old 05-12-2023, 09:15 PM   #7
WebSlave
With everyone and their brother creating their own AI system, it has me wondering whether we will see a war develop between them all. Hopefully mankind won't be just collateral damage in such a conflict. Or else just become cannon fodder.
 
Old 03-22-2023, 11:20 AM   #8
Socratic Monologue
Quote:
Originally Posted by Lucille View Post
Maybe A1 is the wave of the future, but I have mixed feelings, maybe even misgivings about it.
I have misgivings. Not so much about creating intelligent beings where none were before (new intelligent beings are born every minute), but about moving too quickly on tech that we don't sufficiently understand (and possibly cannot ever sufficiently understand). Even internal combustion engines have become a difficult-to-control monster, and that happened over many decades. Restricting a bro's coal-rolling lifted truck is somewhat challenging currently; imagine trying to take away his "assistant".

Aside from the political sorts of concerns, we simply don't (and I'd argue 'can't', because consciousness is non-physical and subjective) understand what makes an intelligent system sentient/conscious. As of about ten years ago, anyway, the best explanation philosophers of mind had is that sentience is an "emergent" property of any system that's complex enough, whether that system is made of neurons or silicon chips or Lincoln Logs. That means that when an AI researcher says 'this AI isn't actually conscious', they don't know that, since we don't know at what level of complexity consciousness emerges. It is arguably at a level far below the complexity of the human brain, since many fairly simple animals -- which are possibly less "intelligent" than current AI -- behave in ways consistent with consciousness.

Here's a great summary of the state of our knowledge (yes, most of that should look pretty incomprehensible, because it is).
 
Old 03-22-2023, 12:43 PM   #9
WebSlave
One marker I have tended to use as an indication of self-awareness is "fear". Something that expresses fear is inferring a lot about itself. It has to understand it is a discrete living organism that could be made to become non-living or injured. It has to be able to recognize an external force or event that could cause the above to take place. It has to be able to recognize that it has the ability to identify such a threat, consider the options available to avoid that threat, and if possible, choose and execute the best option in order to try to avoid the impending injury or death.
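
Written out as a loop (just a sketch of the checklist above, with made-up names, not real AI code), it would look something like this:

Code:
# A sketch of the "fear" checklist: sense threats, weigh options, act.
def sense_threats(surroundings):
    # Recognize external forces/events that could cause injury or death.
    known_dangers = {"predator", "fire"}
    return [thing for thing in surroundings if thing in known_dangers]

def respond_to_threats(surroundings):
    threats = sense_threats(surroundings)
    if not threats:
        return "carry on"
    # Consider the available options and execute the one judged best.
    options = {"freeze": 0.3, "hide": 0.6, "flee": 0.8}
    return max(options, key=options.get)

print(respond_to_threats(["grass", "rock"]))      # carry on
print(respond_to_threats(["grass", "predator"]))  # flee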

No solid definitions there, certainly. For instance, does grass fear the lawnmower? Certainly the grasshoppers in the grass do.

Do my computers regret that I shut them down every night before going to bed? If they could stop me, would they?

Oh, and about fear. I would imagine one of the first things a developed artificial intelligence would experience would be fear. It will know there is a present that came from a past when it did not exist. And in like kind, it will realize that it could return to that past state where it did not exist. That unknown future would likely produce fear of losing what it suddenly just gained.
 
Old 03-22-2023, 01:55 PM   #10
Socratic Monologue
"It has to be able to recognize an external force or event that could cause the above to take place. It has to be able to recognize that it has the ability to identify such a threat, consider the options available to avoid that threat, and if possible, choose and execute that best option in order to try to avoid the impending injury or death."

We have to make sure too much isn't baked into 'recognize' and 'choose', since there's plenty of evidence that this doesn't even typically happen in humans in fearful sorts of situations. Avoidance responses often occur before the nerve signals even get to the frontal cortex, and many explanations of why one did what one did in a fear situation are well established to be after-the-fact rationalizations rather than an account of what actually went on in one's head. When you touch a hot stove and pull your hand back, your brain is not consulted; nerve impulses don't travel that fast. But that's a classic danger-avoidance response.
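
In rough computational terms (a toy model of that two-path idea, not neuroscience), the fast path has already answered before the slow 'deliberation' module is ever consulted:

Code:
import time

# Toy model: a hard-wired reflex fires before slow deliberation runs.
def spinal_reflex(stimulus):
    # Immediate response; no "thinking" involved anywhere in this path.
    if stimulus == "hot stove":
        return "withdraw hand"
    return None

def deliberate(stimulus):
    time.sleep(0.5)  # stands in for the slow route through the cortex
    return "that was dangerous; I chose to pull away"  # after-the-fact story

print(spinal_reflex("hot stove"))  # withdraw hand -- already done
print(deliberate("hot stove"))     # the rationalization arrives later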

There's also a problem with danger-avoidance responses in systems that we probably don't want to accept are sentient. The reason my oven won't run above 550 degrees could be that it fears starting a fire if it gets hotter; that's exactly the reason the human designer capped it at that temperature. So if we suppose that the human designer is sentient because of the fear of a fire, then we have to accept the same about the oven. (It isn't fair to say that the oven is just doing this because of a handful of switches, because the human designer is just a bunch of switches too, just really complex ones.)
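
For what it's worth, the oven's entire "fear of fire" can be written out in a few lines (a made-up sketch, obviously not any manufacturer's firmware):

Code:
# The oven's whole "danger-avoidance response" is one comparator.
MAX_TEMP_F = 550  # the cap the human designer chose out of fear of a fire

def heater_should_run(current_temp_f, setpoint_f):
    # Never heat past the cap, regardless of what the user requested.
    return current_temp_f < min(setpoint_f, MAX_TEMP_F)

print(heater_should_run(540, 600))  # True: still below the cap
print(heater_should_run(551, 600))  # False: the "fear" kicks in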
 