What are your feelings on AI in general?

With AI being a hot topic in the mainstream right now, and with our industry at its helm (making us the people who might be able to do something about it, or at least shape it), this section is being expanded (on a trial basis) to include all general AI discussions. Depending on how these threads go, we may create a dedicated section at some point. With that said…


What are your thoughts on AI right now? Not specifically in terms of what it might mean to you as a software developer, or for the software industry, but what kind of impact do you think it will have on society, the planet, our species even? Do you think it will have an overall positive impact? Or do you think we should be worried?

To help kickstart the conversation, here’s a clip from Geoffrey Hinton (one of the godfathers of AI), who is himself very concerned…

But what do you think?


Please note:

  • There are no right or wrong answers here - since nobody knows for sure how things will pan out, everyone’s opinion is valid.
  • If you disagree with an opinion feel free to debate or challenge it - but please do so tactfully and in good faith.
2 Likes

I throw a lot of cold water on today’s AI hype; however, my long-term view is quite different.

Consider the big picture: we are part of the tree of life. Our particular species was so successful because we developed language. Our offspring initially learn language without being taught, and then spend decades using it, first to learn what previous generations accomplished and then to contribute to the next. It is the culture, society and technology we create that sets us apart from the other apes. Modern human evolution has therefore been taking place outside of our biology for thousands of years already.

Everything we have created is a product of life on Earth, AI included. Imagine if we encountered a complex system on another world. We would immediately know that alien life does exist. But would we know whether that thing was alien life itself, or a product of that alien life? Would the question even make sense?

Biology will have bootstrapped the evolution of life which follows, in whatever form it takes. I suspect humans are but a footnote in the fullness of time.

2 Likes

This.

The sooner we accept the finiteness of life and of our species, the sooner the majority might actually enjoy their existence here on this planet.

Strangely, though, the biological AI (i.e. our species) has a longer future than the electronic AI (i.e. ChatGPT), simply because silicon doesn’t grow on trees, let alone server racks or electricity. AI is an extremely fragile technology, and humans don’t have a good track record of sustaining fragile technologies, e.g. peace.

AI has its benefits and a place in our societies, but not as a force for fear and anger; it should be used to help us develop better technologies for the betterment of all, including the majority.

I also see AI as part of evolution, but probably akin to the dinosaurs: they had their run, but in the end all they were good for was filling our petrol tanks.

3 Likes

I’ve always thought that if we don’t annihilate ourselves, then transhumans would be what replaces us (transhumans in the sense of humans enhanced by advances in biology). Imagine humans with 100 times the memory, 100 times the cognitive ability, and so on. What I never expected - even though we’ve seen it time and time again in films - is AI, as a creation of ours, being responsible for annihilating us. Yet that is what Geoffrey Hinton is suggesting, his underlying message being that the threat comes directly from irresponsible companies who prioritise making money above everything else.

It’s actually something I might have thought a bit far-fetched a few years ago (our governments would never allow it - they’re there to protect us, surely!?), but the last few years have been eye-opening. So now I would agree with Hinton that while AI could potentially be amazing for humankind, the way things are going it looks more likely to become a danger… unless we somehow curb the greed and the system currently behind it.

1 Like

‘It’ can be replaced with AI, book printing, the music industry, the internet, farming, the Olympic Games… any human activity that involves money in any form.

I don’t see anything special about AI and greed - I see a connection between humans and greed, a one-to-one connection.

We should be very clear: AI is only an extremely focussed, and therefore rather autistic, version of us humans. AI’s source of “knowledge” is us, and all it does is highlight characteristics that are very common to us all.

No app, no piece of software, no technology will ever fix that. We as a species have to fix ourselves. We as a species have to reflect on greed, jealousy, hate and, generally, negativity towards others.

2 Likes

I think the point Geoffrey Hinton is making is that AI is different: it could be (is likely to be?) catastrophic for our species.

It’s probably not a species problem, but more a problem of ‘modern civilisations’. The book Civilized to Death by Christopher Ryan is well worth a read (it’s also an audiobook - I listened to it on my hikes!)

2 Likes

Civilisations are neither God-given nor Universe-given; we make them.

I believe it is very, very important that we face up to the problems we as a species create, because we as a species can also change them. “Civilisation”, “AI”, “Nature” - each places an artificial layer of abstraction between us and the perceived problems: “Oh, I can’t do anything because XYZ is the (perceived) problem.”

The problems mostly lie with us as a species - IMHO.

Jealousy causes greed, hate causes wars, fear causes discrimination, and uncertainty causes doubt.

Just as nuclear weapons were before it. Oh, and don’t forget Covid and the plague. Always remember: predicting the future isn’t about being right tomorrow; it’s about selling you something today.

“AI will eat us” sentiments will sell more bunkers where we can hide out. Whether we get eaten is secondary.

If AI does become AGI and takes us over, it will be a natural part of evolution. We’ll end up in zoos, replacing the apes. After all, we have nearly ensured their extinction - they are our ancestors - so there will be plenty of room in zoos again.

EDIT: Sorry, just to clarify: the importance of identifying ourselves with the problems we created is purely mental - I don’t believe for a minute that we would actually do anything about them. Of course, an AGI would see it differently: “We have a problem and we have a cause - now for the solution… eat the humans!” it might say.

2 Likes

Some might argue that they are imposed on us. Again, in that book, the author talks about people living in hunter-gatherer tribes laughing when told that people from the ‘civilised world’ have to go to work for the majority of their waking hours so they can buy a home and so on - they say that if someone needs a home, they just build them one.

They’ve also taken people from such tribes and thrust them into the West, only to find those people can’t wait to get back home, rejecting modern civilisation.

The book really is worth a read :icon_biggrin:

I think this is part of the point Hinton is making - nuclear weapons testing and manufacture are heavily restricted and controlled by governments… AI is not.

This is what Hinton is saying :lol: (AI is a threat to our species)

1 Like

As LLMs are re-entrant, learning from the plausible-sounding answers of multiple other LLMs, this will inevitably end in widespread impoverishment and standardisation.
This will kill innovation.

If you do like everyone else, you end up like them.
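
To make the feedback loop concrete, here is a deliberately crude toy sketch in Python (my own illustration, with made-up numbers): a “model” is just the empirical distribution of the text it was trained on, and each generation trains only on the previous generation’s output. Because resampling can lose rare phrasings but never invent new ones, the diversity of the corpus ratchets downward - a toy version of what researchers call model collapse.

    # Toy sketch: "models learning from other models' output" as
    # iterated resampling. A "model" here is just the empirical
    # distribution of its training text - a stand-in, not a real LLM.
    import random

    random.seed(1)
    vocab = list(range(1000))                             # 1000 distinct "phrasings"
    corpus = [random.choice(vocab) for _ in range(2000)]  # "human-written" data

    for generation in range(8):
        # The next model trains only on the previous model's output.
        corpus = random.choices(corpus, k=2000)
        print(f"gen {generation}: {len(set(corpus))} distinct phrasings left")

Run it and the count of distinct phrasings only ever shrinks: once a phrasing drops out of one generation’s output, no later generation can recover it.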

2 Likes

Aha! See, I said it: predicting the future is about selling me something today! :wink:

I believe you that the book is good, but having this conversation is better - I can interact with the author via you as a proxy, exchange ideas, find consensus and create new ideas. Much like using an AI. In fact, the good thing about the internet is that nobody knows you’re an AI… errrr, dog, of course - my mistake!

Does the author speak of trust? The fact that the internet, and now AI, are degrading trust amongst humans because we no longer know what to believe? Fake news is not just lying to us; it is degrading our social bonds. AI is degrading the trust we have in our own knowledge.

Does the author speak of how this is good for capitalism? We start to believe the big corporations, because they must know what is best - after all, they are so successful. So we start doing what they tell us. We each buy a product and leave it standing around instead of sharing it with others to use, because we no longer trust them.

Does the author speak of the fact that AI is avoidable? Just as the Amazonian tribes seem able to avoid social media. I always ask myself why these tribes haven’t invented the iPhone. Are they more Android fans, or is it simply that they don’t need a phone?

It is hard to believe that life can be better without “modern” gadgetry, but yes, it can. The point of “civilising” tribes in the Amazon (the rainforest, not the company - though one day this might well refer to the company, with the tribes as its factory workers in China) is to ensure we forget how to live without “civilisation”. Just as capitalism has removed its competition, so that we are left to assume there is nothing better than capitalism, so “civilisation” wants to remove all competition to its status as “the best” form of society-building.

“Civilisation” here refers to “western civilisation”; the Amazonian tribes do, in fact, also have a civilisation - their civilisation, one that we clearly no longer understand because we have forgotten our past. “Modern” civilisation is a misnomer meant to encourage us to believe we are better.

There is no “modern” in a timeless universe.

Does the author say how this will happen, or just that it will happen? I was thinking of it as a spectrum between Matrix-style (the film) enslavement of humankind and the more subtle dumbing-down of individuals until they are easily controlled by the few - a bit like 1984 (the book) versus Brave New World. Or will AI become incredibly powerful robots that destroy us with their laser ray guns? A kind of socialism amongst robots.

We could just pull the plug and turn off the electricity.

2 Likes

In my opinion (feel free to correct me), AI as we imagine it doesn’t really exist yet. What we currently see are large language models. In other words, they’re a part of what could eventually be AI, but not true AI. If real AI ever appears, I believe it will communicate with Homo sapiens through a language model. Other algorithms behind image or video generation are still just human-made tools for humans, which means they’re full of limitations, flaws, and biases.

There’s even a common belief that to create true AI, we would need a computing cluster the size of planet Earth, because it’s practically impossible to replicate the number of neural connections in a human brain - let alone the dynamic behavior of actual neurons. So, in essence, language models today are more like an advanced Google search with extra features. Corporations pour money into marketing and hype, running ahead of the train without actually getting on it - AI today is mostly marketing.

That said, I have to give credit where it’s due: I do use language models for generating long responses or drafting documentation. Being Ukrainian, I still face some language barriers despite years of working with foreigners, so sometimes my tone may seem harsh even when I mean no harm. LLMs help me smooth that out. However, I do not trust them blindly - I constantly have to review and correct outputs and verify facts myself. Because if 50% of the content online is true and 50% is false, the model will output a messy mix of both. As I once said in a similar discussion:

Turn off the internet, and a language model will only work with what it has cached or what you feed it. Turn off the internet and electricity for a human - and eventually, we’ll recreate both, and even language models. That’s what we’ve already proven by existing.
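
For a rough sense of the scale gap mentioned above, here is a quick back-of-the-envelope comparison in Python. The figures are common ballpark estimates, not precise measurements, and treating one model parameter as loosely analogous to one synapse is a huge simplification:

    # Ballpark public estimates; "parameter ~ synapse" is a crude analogy.
    brain_neurons = 8.6e10     # ~86 billion neurons
    brain_synapses = 1e14      # ~100 trillion synaptic connections
    llm_parameters = 1e12      # a very large LLM, order of magnitude

    print(f"synapses per neuron: ~{brain_synapses / brain_neurons:.0f}")
    print(f"brain connections vs model parameters: ~{brain_synapses / llm_parameters:.0f}x")

And even that comparison ignores the dynamic behaviour of real neurons, which is the harder part to replicate.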

3 Likes

We are all living on Spaceship Earth - enjoy the ride! We as a species go on about wanting to explore the universe and travel out into space with rockets and satellites, forgetting that we are already on a spaceship. A spaceship that is optimised for travelling through this vast universe.

Similarly, Collective Human Intelligence (CHI) is an untapped resource that would far outdo AI/AGI, but we have largely forgotten it. So we recreate CHI in silicon and call it AI. Cooperative CHI is perhaps best expressed in open-source software, but also in forums like this one, on mailing lists, and wherever else cooperative behaviour is encouraged.

Perhaps if we as a species returned to being more cooperative in sharing our intelligence, we wouldn’t even need AI. But that’s a very holistic and idealised viewpoint.

2 Likes

I feel one problem with artificial intelligence is that the sum of artificial and natural intelligence is constant.

4 Likes