Featured image, AI-generated from the prompt: “ChatGPT, the language AI tool, talks too much, it does not reason”

ChatGPT is sh*t!

So last week I tried ChatGPT for the first time. I hear you think: “Have you lived under a rock all this time, Maarten-Jan? You, a software engineer with a Master of Science in artificial intelligence?” Yeah… Honestly, I couldn’t get myself to try it until some colleagues told me they use(d) it.

I really was put off by the hype surrounding the whole thing. A lot of what I saw in the media did not impress me that much. I mean, it’s cool we can now ask a computer to generate pictures, or create a short poem, but will that change the world?

Also, some of my university professors, whom I still follow on LinkedIn, voiced skepticism. The abilities of ChatGPT seem overrated, and OpenAI has questionable ethics: it uses personal and copyrighted material for training, while having little success combatting the many biases in its output.

Was I the only person not hyped about this particular AI?

So anyway, last week I logged in to Bing AI and gave it a try. Given the introduction above, it is hard to say with a straight face that I was open-minded. I wasn’t. And what I saw did not impress me much.

For instance, what could it tell me about “Maarten-Jan van Gool”? It said there are several people with that name (there aren’t). I asked who it had found. It found me, some guy named “Maarten van Gool”, and a third Maarten, about whom it said it could not find any information. When I asked why it came up with a third person while having no information on him, the chat shut off.


I asked it for some tourist information about Oosterbeek, my home village. It came up with two museums, neither of which is actually located in Oosterbeek… I also asked about the number of people living in ancient Athens (ancient Greece is a fascination of mine). It said about 10,000.

Maybe it knows how many triremes (ancient warships) Athens used in the Sicilian expedition (415 BC), and how many people manned a trireme? It said 134 triremes, with 200 men each. When I asked it to do the calculation and compare the result to its original population figure, it shut off again.
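The sanity check I was asking for is trivial. Taking ChatGPT’s own figures at face value (its numbers, not verified facts), it boils down to a single multiplication:

```kotlin
// Sanity check on the figures ChatGPT gave me (its claims, not verified facts).
fun main() {
    val triremes = 134          // ships it claimed Athens sent on the Sicilian expedition
    val crewPerTrireme = 200    // men it claimed manned each trireme
    val expeditionCrew = triremes * crewPerTrireme
    // 134 × 200 = 26,800 — far more than the "about 10,000" it gave as the population of Athens
    println("Expedition crew: $expeditionCrew")
}
```

By its own numbers, the expedition alone would have required well over twice the entire population it had just quoted. That is the contradiction it could not face.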

I will say I’m quite impressed with the language model itself. It clearly ‘understands’ language constructs: it resolves references like “he”, “she”, or “it”, and it forms coherent sentences that generally fit the questions.

It is, however, bad at interpreting its sources and forming a coherent picture; ChatGPT is prone to ‘hallucinating’ text. To me, Google (or Wikipedia) is still the preferred way to find information.

So what about professional use?

In our project, we’re currently in the process of getting rid of Scala in favor of Kotlin. I asked it to convert some small pieces of code, and it was pretty successful. It’s pretty cool that it can work with these less popular programming languages.

Especially with the simpler conversions, it saved me some mind-numbing find-and-replace work. In that sense, as an advanced ‘find and replace’ tool, it has utility.
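To give an idea of the kind of mechanical conversion it handled well: the snippet below is an illustrative example I made up for this post, not code from our actual project. The Scala original is in the comment; the Kotlin below it is the shape of output ChatGPT produced for cases like this.

```kotlin
// Hypothetical Scala original (illustrative, not from our codebase):
//   case class User(name: String, age: Int)
//   def adultNames(users: Seq[User]): Seq[String] =
//     users.filter(_.age >= 18).map(_.name)

// The Kotlin equivalent — mostly a syntax swap: case class -> data class,
// underscore lambdas -> `it` lambdas, Seq -> List.
data class User(val name: String, val age: Int)

fun adultNames(users: List<User>): List<String> =
    users.filter { it.age >= 18 }.map { it.name }

fun main() {
    val users = listOf(User("Alice", 34), User("Bob", 12))
    println(adultNames(users))  // [Alice]
}
```

Conversions of this shape are essentially pattern substitution, which is exactly where a language model shines and no reasoning is required.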

I also see it being useful for generating ‘boilerplate’ code, saving you some time in that regard. Adapting examples from the internet to your specific use case also seems achievable with ChatGPT. Having said that, I’m disinclined to feed it important information (or code). After all, how will OpenAI use that information in the future?

All in all, I’m not very impressed. I don’t see myself using it (much), professionally or personally. There are use cases, but not many. The main issue with ChatGPT? Having the ability to speak does not give it the ability to reason.


PS Case in point on biases: the featured image at the top of this blog was generated by Stable Diffusion 1.5 from the prompt “ChatGPT, the language AI tool, talks too much, it does not reason.” Stable Diffusion 1.5 apparently thinks that talking too much, and not reasoning, is a female quality. I will just leave that here…

One thought on "ChatGPT is sh*t!"

  1. Paul Brandt says:

    I’m with you. I think people don’t understand that ChatGPT is primarily an advanced linguistic machine, as opposed to the oracle they take it for. It is very poor at specific subject knowledge and inductive logic, and rightfully so, since it wasn’t built for that. In my opinion, this is more a PR failure than a scientific failure, because it still manages to achieve the first part of the Turing test: maintaining, as a machine, a conversation that you cannot linguistically discern from a human’s. And that can definitely be considered a milestone in computing.
