ChatGPT is all anyone can talk about lately. Powered by the GPT-3 and GPT-3.5 language models (the latter for Plus subscribers), the AI chatbot has grown by leaps and bounds in what it can do. Still, many people have been eagerly awaiting an improved model that pushes the boundaries further. Well, OpenAI has made that a reality with GPT-4, its latest multimodal LLM, which arrives packed with improvements and new capabilities. Check out all the details below!
GPT-4 is multimodal and outperforms GPT-3.5
OpenAI’s recently announced GPT-4 model is a big deal in artificial intelligence. The most important thing to mention is that GPT-4 is a large multimodal model, which means it can accept both image and text inputs and reason over them together. OpenAI notes that even though the new model is less capable than humans in many real-world scenarios, it exhibits human-level performance on various professional and academic benchmarks.
GPT-4 is also a more reliable, creative, and capable model than its predecessor, GPT-3.5. For example, the new model passes a simulated bar exam with a score around the top 10% of test takers (roughly the 90th percentile), while GPT-3.5 scored around the bottom 10%. GPT-4 can also handle far more nuanced instructions than the 3.5 model. OpenAI compared the two models across a variety of benchmarks and exams, and GPT-4 came out on top. Check out all the cool things ChatGPT can do here.
GPT-4 and visual inputs
As mentioned above, the new model can accept both text and image prompts. Compared to text-only input, GPT-4 is much better at understanding inputs that mix text and images. Its visual capabilities stay consistent across various types of documents, including pages with text and photos, diagrams, and even screenshots.

OpenAI demonstrated this by feeding GPT-4 an image along with a text prompt asking it to describe what’s funny about the image. As seen above, the model managed to read a random image from Reddit, respond to the user’s prompt, and correctly identify the humorous element. However, GPT-4’s image inputs are not yet publicly available and remain a research preview.
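To make the idea concrete, here is a minimal sketch of what a combined text-and-image request could look like through OpenAI’s Chat Completions API once image input opens up. Since image inputs are still a research preview at the time of writing, the exact request format, the image-capable model name, and the example URL below are all assumptions, not confirmed details.

```python
# Rough sketch of a text + image prompt, assuming image input is eventually
# exposed through OpenAI's Chat Completions API (research preview at the time
# of writing, so the format and model name here may differ in practice).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; an image-capable GPT-4 variant is assumed
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is funny about this image?"},
                # Hypothetical example URL; any publicly reachable image would do.
                {"type": "image_url", "image_url": {"url": "https://example.com/meme.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```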
Subject to hallucinations and limited data
Although GPT-4 is a huge leap from its previous iteration, some issues remain. For starters, OpenAI mentions that the model is still not fully reliable and is prone to hallucinations. This means the AI can make reasoning errors, so its outputs should be used with great care and human oversight. It can also be confidently wrong in its predictions, which leads to mistakes. That said, GPT-4 hallucinates less than previous models: to be precise, it scores 40% higher than GPT-3.5 on OpenAI’s internal factuality evaluations.
Another drawback many hoped would be solved with GPT-4 is the limited dataset. Unfortunately, GPT-4 is still unaware of events after September 2021, which is disappointing. Nor does it learn from experience, which ties into the reasoning errors discussed above. Additionally, GPT-4 can fail at hard problems just like humans do, including introducing security vulnerabilities into the code it writes. The dated knowledge is less of a problem in practice, though, because Microsoft’s Bing AI uses the GPT-4 model. Yes, you can try the new AI model, backed by real-time internet data, on Bing. Check out this article to learn how to access Bing AI chat in any browser, not just Edge.
Access GPT-4 with ChatGPT Plus
GPT-4 is available to ChatGPT Plus subscribers with a usage cap. OpenAI says it will adjust the exact cap based on demand and system performance, and the company may even introduce a “new subscription tier” for heavier GPT-4 use. Free users, on the other hand, will have to wait, as the company hasn’t shared a specific plan and only “hopes” it can offer some amount of free GPT-4 queries to those without a subscription.
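If you’d rather experiment programmatically than through the ChatGPT interface, a minimal sketch using OpenAI’s Python SDK might look like the one below. Note that API access to GPT-4 is granted separately from a ChatGPT Plus subscription, so whether the “gpt-4” model is available on your account is an assumption here.

```python
# Minimal sketch of calling GPT-4 through OpenAI's Chat Completions API.
# API access is separate from ChatGPT Plus and may be gated, so availability
# of the "gpt-4" model on a given account is an assumption.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what makes GPT-4 different from GPT-3.5."},
    ],
)

print(response.choices[0].message.content)
```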
At first glance, GPT-4 looks like an extremely attractive language model, even with a few cracks in its armor. For those looking for even more detailed information, we already have something in the works. So stay tuned for more.