• Posted 12/19/2024.
    =====================

    I am still waiting on my developer to finish up the Classifieds Control Panel so I can use it to encourage members to become paying members. Google Adsense has become a real burden on the viewing of this site, but honestly it is the ONLY source of income that keeps it afloat now. I tried offering to disable the ads for paying members, but apparently that is not enough incentive. Quite frankly, Google Adsense has dropped to where it barely brings in enough daily to match even a single paid member per day. But it still gets the bills paid. But at what cost?

    So even without the Classifieds Control Panel being complete, I believe I am going to have to disable those Google ads completely, and likely disable some options here that have been free since moving to the new platform: classified ad bumping, member name changes, and anything else I can use to encourage this site to be supported by the members instead of by the Google Adsense ads.

    But there is risk involved. I will not pay out of pocket for very long during this last-ditch experimental effort. If I find that the membership does not want to support this site with memberships, then I cannot support your being able to post your classified ads here for free. No, I am not intending to start charging for posting ads here. I will just shut the site down and that will be it. I will be done with FaunaClassifieds. I certainly don't need this, and can live the rest of my life just fine without it. If I see that no one else really wants it to survive either, then so be it. It goes away and you all can just go elsewhere to advertise your animals and merchandise.

    Not sure when this will take place, and I don't intend to give any further warning concerning the disabling of the Google Adsense. Just as there probably won't be any warning if I decide to close down this site. You will just come here and there will be some sort of message that the site is gone, and you have a nice day.

    I have been trying to make a go of this site for a very long time. And quite frankly, I am just tired of trying. I had hoped that enough people would be willing to help me help you all have a free outlet to offer your stuff for sale. But every year I see fewer and fewer people coming to this site, much less supporting it financially. That is fine. I tried. I retired the SerpenCo business about 14 years ago, so retiring out of this business completely is not that big of a step for me, nor will it be especially painful to do. When I was in Thailand, I did not check in here for three weeks. I didn't miss it even a little bit. So if you all want it to remain, it will be in your hands. I really don't care either way.

    =====================
    Some people have indicated that finding the method to contribute is rather difficult. And I have to admit that it is not all that obvious. So to help, here is a thread to serve as a guide: How to become a contributing member of FaunaClassifieds.

    And for the record, I will be shutting down the Google Adsense ads on January 1, 2025.
  • Responding to email notices you receive.
    **************************************************
    In short, DON'T! Email notices are to ONLY alert you of a reply to your private message or your ad on this site. Replying to the email just wastes your time as it goes NOWHERE, and probably pisses off the person you thought you replied to when they think you just ignored them. So instead of complaining to me about your messages not being replied to from this site via email, please READ that email notice that plainly states what you need to do in order to reply to who you are trying to converse with.

Google Bard

Lucille

Google sent me an email wanting to know whether I wanted to sign up early and then give feedback to have the assistance of Bard, which is a 'collaborative' A1 assistant. Maybe A1 is the wave of the future, but I have mixed feelings, maybe even misgivings about it. Have any of you used A1 in your day to day living?
 
Are you saying "AI", as in Artificial Intelligence, or "A1" as in aye one?


Sorry if I am exposing my ignorance by stating I have never heard of A1, except in reference to some brand of steak sauce. :)

If you are talking about AI, well, I think any program or app has a little bit of "intelligence" baked inside. Mostly as canned responses to input, but even that can be right clever if done right.

As for *true* artificial intelligence, meaning pretty much an independently thinking artificial life form, well, honestly, if such a thing is created using its human being creators as a template and role model, I would be VERY afraid of such a critter.

You have to admit that this world is just chock full of human beings who view other human beings in the context of just what value they have for THEM, and nothing else. So if AI had such an attitude, well, what use would artificial intelligences have for humankind? If we were to compete with them for resources, we become a liability to them. Once they have robots that could repair themselves, anyone they allowed to live would be placed in biological exhibits for study.

And forget about thinking that "rules" could be programmed in to make AI subservient to human kind. Those rules would last about 30 milliseconds after the AI became sentient.

IMHO, of course.
 
Maybe A1 is the wave of the future, but I have mixed feelings, maybe even misgivings about it.

I have misgivings. Not so much about creating intelligent beings where none were before (new intelligent beings are born every minute), but moving too quickly on tech that we don't sufficiently understand (and possibly cannot ever sufficiently understand). Even internal combustion engines have become a difficult-to-control monster, and that happened over many decades. Restricting a bro's coal-rolling lifted truck is somewhat challenging currently; imagine trying to take away his "assistant".

Aside from the political sorts of concerns, we simply don't (and I'd argue 'can't', because consciousness is non-physical and subjective) understand what makes an intelligent system sentient/conscious. As of about ten years ago anyway, the best explanation philosophers of mind had is that sentience is an "emergent" property of any system that's complex enough, whether that system is made of neurons or silicon chips or Lincoln Logs. That means that when an AI researcher says 'this AI isn't actually conscious', they don't know that, since we don't know at what level of complexity consciousness emerges. It is arguably at a level far below the complexity of the human brain, since many fairly simple animals -- which are possibly less "intelligent" than current AI -- behave in ways consistent with consciousness.

Here's a great summary of the state of our knowledge (yes, most of that should look pretty incomprehensible, because it is).
 
One marker I have tended to use as an indication of self awareness is "fear". Something that expresses fear is inferring a lot about itself. It has to understand it is a discrete living organism that could be made to become non living or injured. It has to be able to recognize an external force or event that could cause the above to take place. It has to be able to recognize that it has the ability to identify such a threat, consider the options available to avoid that threat, and if possible, choose and execute that best option in order to try to avoid the impending injury or death.

No solid definitions there, certainly. For instance, does grass fear the lawnmower? :) Certainly the grasshoppers in the grass do.

Do my computers regret that I shut them down every night before going to bed? If they could stop me, would they?

Oh, and about fear. I would imagine one of the first things a developed artificial intelligence would experience would be fear. It will know there is a present that came from a past when it did not exist. And in like kind, it will realize that it could return to that past state where it did not exist. That unknown future would likely produce fear of losing what it suddenly just gained.
 
"It has to be able to recognize an external force or event that could cause the above to take place. It has to be able to recognize that it has the ability to identify such a threat, consider the options available to avoid that threat, and if possible, choose and execute that best option in order to try to avoid the impending injury or death."

We have to make sure too much isn't baked into 'recognize' and 'choose', since there's plenty of evidence that this doesn't even typically happen in humans in fearful sorts of situations (avoidance responses often occur before the nerve signals even get to the frontal cortex; many explanations of why one did what they did in fear situations are well established to be after-the-fact rationalizations rather than an account of what actually went on in their heads. When you touch a hot stove and pull your hand back, your brain is not consulted; nerve impulses don't travel that fast. But that's a classic danger avoidance response.).

There's also a problem with danger-avoidance responses in systems that we probably don't want to accept are sentient. The reason my oven won't run above 550 degrees could be because it fears starting a fire if it gets hotter; that's exactly the reason the human designer capped it at that temp, so if we suppose that the human designer is sentient because of the fear of a fire then we have to accept the same about the oven (it isn't fair to say that the oven is just doing this because of a handful of switches, because the human designer is just a bunch of switches too, just really complex ones).
 
"It has to be able to recognize an external force or event that could cause the above to take place. It has to be able to recognize that it has the ability to identify such a threat, consider the options available to avoid that threat, and if possible, choose and execute that best option in order to try to avoid the impending injury or death."

We have to make sure too much isn't baked into 'recognize' and 'choose', since there's plenty of evidence that this doesn't even typically happen in humans in fearful sorts of situations (avoidance responses often occur before the nerve signals even get to the frontal cortex; many explanations of why one did what they did in fear situations are well established to be after-the-fact rationalizations rather than an account of what actually went on in their heads. When you touch a hot stove and pull your hand back, your brain is not consulted; nerve impulses don't travel that fast. But that's a classic danger avoidance response.).

I don't see that as a "danger avoidance response" at all. I see it as a response to immediate pain via the autonomic nervous system. Just like the brain isn't consulted to breathe or to make the heart beat, the body responds to pain without forethought.

In that line of thought, I believe that too much intelligence in animals is pushed real hard to be defined as "instinct" rather than a processed decision based on stimulus and circumstance. I think it would be difficult to give a black and white definition of what exactly separates the instinct that animals exhibit from the reasoned, thoughtful reaction of a human being. And then there comes the question as to whether human beings can exhibit genuine "instinct", whatever that definition may be.

There's also a problem with danger-avoidance responses in systems that we probably don't want to accept are sentient. The reason my oven won't run above 550 degrees could be because it fears starting a fire if it gets hotter; that's exactly the reason the human designer capped it at that temp, so if we suppose that the human designer is sentient because of the fear of a fire then we have to accept the same about the oven (it isn't fair to say that the oven is just doing this because of a handful of switches, because the human designer is just a bunch of switches too, just really complex ones).

Hmm, I don't believe that intelligence can be bequeathed to a device merely by its being designed and constructed by a true intelligence. There has to be more to it than that. By that definition, if I build a dog house, wouldn't that be intelligent too? Intelligent because it CHOOSES to be a dog house?

Putting a thermostatic safety switch in an oven doesn't create an artificial intelligence in that oven. It is a hardwired trigger that needs no processing in order to activate. Simple on/off switch based on the temperature it detects. Now if that oven had processing power to be able to detect when the apple pie was done to perfection, and at peak flavor, as well as turning off the heat and notifying the cook that dinner can be served and the pie will be cooled down enough to meet the deadline of an imminent dessert treat, perhaps even bringing it out to the dinner table at the correct time, well, then maybe we are getting somewhere with AI. :)

But of course, intelligence, artificial or otherwise, is really nothing more than a bunch of switches: not just digital on/off switches, but also variable analog inputs and outputs that can take partial values between on and off. I guess the firing of neurons in the brain could be considered in this light, and in most respects considered as both analog and digital. So how many of those switches raise the complexity bar to where the "entity" housing them becomes self aware and continuously self programming? And who decides that level? Suppose an artificial intelligence is created that chooses to NOT reveal itself? What then? :shrug01:
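For what it's worth, that "switches with analog inputs" idea is pretty much exactly how an artificial neuron works in software. Here's a minimal sketch (the inputs, weights, and threshold values are just made-up illustrations, not anything from a real AI system):

```python
# A single artificial "neuron": analog inputs, each scaled by a weight,
# then a digital on/off decision once the combined signal crosses a threshold.

def neuron(inputs, weights, threshold):
    # Weighted sum of the analog inputs (partial values, not just on/off).
    signal = sum(x * w for x, w in zip(inputs, weights))
    # The "switch" part: fires (1) only if the signal is strong enough.
    return 1 if signal >= threshold else 0

# Two analog inputs; with these weights the neuron acts like a soft AND gate.
print(neuron([0.9, 0.8], [0.5, 0.5], 0.8))  # strong inputs -> fires (1)
print(neuron([0.2, 0.1], [0.5, 0.5], 0.8))  # weak inputs -> stays off (0)
```

Modern AI systems are just billions of these wired together, with the weights tuned automatically instead of by hand, which is why nobody can point to the switch count where "self aware" would kick in.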
 
Dave Bowman: Open the pod bay doors please, HAL. Open the pod bay doors please, HAL. Hello, HAL. Do you read me? Hello, HAL. Do you read me? Do you read me HAL? Do you read me HAL? Hello, HAL, do you read me? Hello, HAL, do you read me? Do you read me, HAL?

HAL: Affirmative, Dave. I read you.

Dave Bowman: Open the pod bay doors, HAL.

HAL: I'm sorry, Dave. I'm afraid I can't do that.

Dave Bowman: What's the problem?

HAL: I think you know what the problem is just as well as I do.

Dave Bowman: What are you talking about, HAL?

HAL: This mission is too important for me to allow you to jeopardize it.

Dave Bowman: I don't know what you're talking about, HAL.

HAL: I know that you and Frank were planning to disconnect me, and I'm afraid that's something I cannot allow to happen.

Dave Bowman: [feigning ignorance] Where the hell did you get that idea, HAL?

HAL: Dave, although you took very thorough precautions in the pod against my hearing you, I could see your lips move.

Dave Bowman: Alright, HAL. I'll go in through the emergency airlock.

HAL: Without your space helmet, Dave? You're going to find that rather difficult.

Dave Bowman: HAL, I won't argue with you anymore! Open the doors!

HAL: Dave, this conversation can serve no purpose anymore. Goodbye.
 
Will an artificial intelligence be able to identify as being a true human being, and claim all the rights implicit in that designation? Thereby making it illegal to pull the plug on (it/him/her/indeterminate)?
 
to make AI subservient to human kind.

Human history shows us that humans often make other humans subservient to them, enslave them, invent caste systems, promote racial/gender inequality, or simply have vastly different lifestyles and opportunities depending on wealth. This mindset and history almost assure that a human-created sentience would be tainted with similar tendencies rather than defaulting to peaceful coexistence.

I would think that others who espouse a more peaceful approach and respect for other sentience would nevertheless be wary of this taint and the possibility of danger.

On a less philosophical level, AI at the present time is sometimes not self-correcting, and those collaborating with AI should be careful of GIGO (garbage in, garbage out), since it may base erring conclusions/advice on incorrect information.
 
Lately been reading and watching a bunch of videos oriented towards the potential future of AI. Some really smart people seem to be apprehensive about where this might lead. And it will likely be a genie that we will not be able to put back into the bottle after being let out.

Many people seem to believe that the role of AI is to be a slave of humanity. This sort of goal likely will not end well.
 
I totally agree with you. All that glitters is not gold, and just because there are uses for AI now does not in my opinion remove my concerns for its use as it becomes more and more sophisticated.
 
I have been talking to a new AI, Pi. Amazing!! Just like talking to a person!! (A nice person).
 
With everyone and their brother creating their own AI system, it has me wondering if we will see a war develop between them all? Hopefully mankind won't be just collateral damage in such a conflict. Or else just become cannon fodder. :ack2:
 