Productivity · Jan 21, 2026

Artificial Intelligence Is Not Your Friend

Nowadays, a lot of people treat AI like their best friend. They talk to it as if they were texting a person, expect it to immediately understand the task at hand, and rely on it heavily to get things done. But chatbots aren't infallible, and there are plenty of ways things can go wrong: maybe the AI doesn't understand what you're trying to do. Maybe it doesn't remember something you told it. Or maybe it just does the task in a way that doesn't make sense. Whatever the failure, your AI best friend probably leaves you frustrated, and your productivity spirals from there.

Recently, I've come across this phenomenon a lot amongst college students: they use artificial intelligence to guide their creative direction and trust it heavily to inform their pattern recognition when completing assignments and tasks. When they prompt, they do so tersely, almost casually, as if they were talking to someone in real life rather than an AI assistant. And when this doesn't work, they get frustrated and blame the AI for not being accurate or up to date with what they're trying to produce. This is becoming a trend across AI assistants as a whole; people are increasingly realising that while AI is extremely powerful, it's not an end-all be-all. Yet most are looking for the problem in the wrong place. The problem lies less in the AI itself and more in the way you treat it.

The difference between AI and your friend

When you break it down, talking to an artificial intelligence model as if it were a friend living your life alongside you doesn't make sense; unlike friends, it isn't privy to your experiences, interactions, or thoughts. If you really were talking to your best friend, they'd already have plenty of context on your life from your shared moments together. They know what you like and what you don't, your habits, and the stories behind your life trajectory so far. Chatbots, on the other hand, have access to none of this. Few people ever tell a chatbot much about their lives, which makes these bots strangers. In other words, most people talk to AI as if it's a friend but treat it like a robot, a very clear double standard. Treating AI in this manner causes the error cases we talked about above: you're not providing the context the chatbot needs to make accurate decisions for you, so you fall into a loop of miscommunication and friction. So how can we make AI our friend and have it help more accurately with what we need? Put simply, it has to know you. This means providing context about the task you're currently working on.

What is context?

Context refers to the background information, situational details, and relevant data that inform decision-making and understanding. In AI interactions, context encompasses everything from the user's current task and goals to their preferences, constraints, and the broader environment in which they're operating (think emails, calendar events, documents containing standard operating procedures, and so on).

Context engineering, then, is the practice of deliberately structuring and delivering this information to AI systems to improve the accuracy and relevance of their outputs. For example, when asking an AI to draft an email, good context engineering specifies the recipient's role (a potential employer versus a colleague), the email's purpose (requesting information versus declining an invitation), and the desired tone (formal versus casual). Similarly, a well-engineered request to "find me restaurants" might include the user's current location, past cuisine preferences, dietary restrictions, budget range, and whether the search is for a special occasion or casual dining. The key difference lies in moving from vague requests to information-rich inputs that mirror how humans naturally communicate when they have shared understanding.

Context engineering is often confused with prompt engineering, but the two are not the same. At a high level, context engineering focuses on the "what": which details you choose to provide, how in-depth you go, and which files or documents you attach. Prompt engineering, on the other hand, focuses on the "how": the specific wording, structure, and instructions used to manoeuvre a more desirable response from the LLM. Effective context engineering is built on good prompting practices, but we at Dex believe the future of artificial intelligence lies one layer higher, where models leverage rich context to understand and infer user intent, improve the quality of their outputs, and operate in systems as robust and adaptable as we humans are.
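To make this concrete, here's a minimal sketch of the email example above, assuming the OpenAI Python SDK; the model name and the shape of the context dictionary are illustrative assumptions, not a prescribed format:

```python
# A sketch of the email example: the same task at two levels of context.
# Assumes the OpenAI Python SDK; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

# Vague request: the model must guess the recipient, purpose, and tone.
vague = "Draft an email declining an invitation."

# Context-rich request: the background a friend would already have.
context = {
    "recipient": "a potential employer",
    "purpose": "declining an interview invitation",
    "tone": "formal but warm",
    "constraints": "under 120 words; leave the door open for future roles",
}
rich = (
    "Draft an email declining an invitation.\n\nContext:\n"
    + "\n".join(f"- {k}: {v}" for k, v in context.items())
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model you have access to
    messages=[{"role": "user", "content": rich}],
)
print(response.choices[0].message.content)
```

Notice that the instruction itself is unchanged between the two versions; the only difference is the background attached to it. That attached background is the "what" that context engineering is concerned with.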