07/05/2023
So, after 1-2 months of testing ChatGPT, I have to curb my enthusiasm a bit:
- ChatGPT misses the "why": when you (a human) work on something, you think step by step, and you make mental notes of why you did something one way and not another. ChatGPT skips that step, and when asked, it cannot reflect on why it generated something. Instead, it gives you a good-sounding answer based on how someone else might have explained a similar decision. That makes it appear to be reflecting, but it isn't. It's like writing a piece of music and then quoting a Beatles biography to explain how you came up with it.
- ChatGPT is never up to date: training is costly. There are workarounds to pull in current data, but they reduce ChatGPT to an interpreter of Google search results, i.e., an add-on to a search engine rather than its own thing. In a fast-moving field this is a problem, because it limits feedback loops (e.g., you cannot ask ChatGPT how to improve ChatGPT).
- ChatGPT lacks precision: it can generate whole texts or programs instantly, but in complex subject matter (such as programming), a single mistake can be hard to track down unless you understand how everything fits together (see the sketch after this list). This might improve a bit in the near future, but the cost of having to understand whatever it gave you remains. Or, to take another field: it can generate new pieces of music instantly, but to adjust or improve a piece, you need to know what you are doing.
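To make the precision point concrete, here is a minimal sketch of the kind of subtle mistake I mean (a hypothetical example written for illustration, not actual ChatGPT output): the function looks plausible and returns reasonable values, yet it silently drops the last window.

```python
def moving_average(values, window):
    """Return the moving average of `values` over `window` elements."""
    # Subtle off-by-one: the range stops one window too early, so the
    # final window is silently dropped. Correct would be
    # range(len(values) - window + 1).
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window)]

print(moving_average([1, 2, 3, 4], 2))  # [1.5, 2.5] -- expected [1.5, 2.5, 3.5]
```

Spotting that bug means reading and understanding the whole function, which is exactly the cost that does not go away.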
These are three factors that will not change much in the next few years. ChatGPT will help tremendously with anything that does not require 100% precision, though. Hence, the first application is a chatbot, not (for example) a robot.