Over the past year, software engineer Jay Prakash Thakur has spent his nights and weekends prototyping AI agents that could one day order meals and build mobile apps almost entirely on their own. These agents, while surprisingly capable, also raise thorny legal questions for companies hoping to capitalize on the technology. Agents are AI programs that can act largely independently, taking over tasks such as answering customer questions or handling payments. While chatbots like ChatGPT can draft emails or analyze a bill on request, Microsoft and other tech giants are betting that agents can take on more complex work with minimal human supervision.
The industry's grandest vision is teams of agents working together to replace entire workforces, saving companies time and money. Demand is already rising: analysts predict that AI agents will be able to resolve 80 percent of common customer service questions by 2029, and the freelance marketplace Fiverr has reported a massive surge in searches for “ai agent.” Thakur, a mostly self-taught programmer based in California, has been building agents with Microsoft’s AutoGen framework for some time, and has assembled multi-agent prototypes with only a small amount of code.
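For a sense of what that looks like in practice, here is a minimal sketch of the kind of two-agent setup AutoGen supports, assuming the pyautogen package and an OpenAI API key. The agent names, the model choice, and the order-taking task are illustrative assumptions, not details of Thakur’s actual prototypes.

```python
# A minimal two-agent sketch using Microsoft's AutoGen (pyautogen).
# The "order_taker" agent and the sample task are hypothetical; they
# illustrate the pattern, not Thakur's actual prototypes.
from autogen import AssistantAgent, UserProxyAgent

llm_config = {"config_list": [{"model": "gpt-4o", "api_key": "YOUR_KEY"}]}

# An LLM-backed agent that drafts its responses autonomously.
order_taker = AssistantAgent(
    name="order_taker",
    system_message="You take restaurant orders and restate them precisely.",
    llm_config=llm_config,
)

# A proxy that stands in for the human customer and relays messages.
customer = UserProxyAgent(
    name="customer",
    human_input_mode="NEVER",      # fully automated, no human in the loop
    code_execution_config=False,   # this agent never runs generated code
    max_consecutive_auto_reply=1,
)

# Kick off a short back-and-forth between the two agents.
customer.initiate_chat(
    order_taker,
    message="One burger with extra onions, please. No onion rings.",
)
```

With `human_input_mode="NEVER"`, the exchange runs end to end without a person in the loop, which is precisely the property that makes agents attractive to companies and worrying to lawyers.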
The thorniest problem Thakur has run into is assigning responsibility when an agent errs and causes financial harm. When agents from different companies miscommunicate within a single system, determining fault can devolve into finger-pointing. Legal experts, including OpenAI’s Joseph Fireman, have warned that companies may have to absorb the blame when things go wrong, even when they are not entirely at fault. The insurance industry, meanwhile, has begun offering coverage for AI mishaps to help companies shoulder the cost of agent errors.
Thakur’s experiments show that even capable AI systems make mistakes. His prototypes have garbled orders for onion rings and linked shoppers to the wrong website. His vision of a largely automated restaurant still has kinks to work out, especially around getting customer orders right. Developers hope a “judge” agent, one that reviews other agents’ output, could catch such errors before they become costly, as in the sketch below. But as companies push agent systems into production, someone will have to decide who pays when things go wrong. Legal experts suggest users may have to sign contracts assigning companies responsibility for their agents’ actions, though this remains a murky legal area.
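To make the “judge” idea concrete, here is a hedged sketch of one way such a reviewer could sit between a worker agent and the real world, again using AutoGen. The judge’s rubric, the APPROVED/REJECTED protocol, and the helper function are assumptions for illustration, not an established API.

```python
# A sketch of the "judge" pattern: a second agent reviews another
# agent's output before it is acted on. Names, the rubric, and the
# APPROVED/REJECTED convention are illustrative assumptions.
from autogen import AssistantAgent

llm_config = {"config_list": [{"model": "gpt-4o", "api_key": "YOUR_KEY"}]}

judge = AssistantAgent(
    name="judge",
    system_message=(
        "You review another agent's restaurant order summary against the "
        "customer's original request. Reply APPROVED if they match exactly, "
        "otherwise reply REJECTED with a one-line reason."
    ),
    llm_config=llm_config,
)

def reviewed(original_request: str, agent_output: str) -> bool:
    """Ask the judge agent whether the worker agent's output is faithful."""
    reply = judge.generate_reply(messages=[{
        "role": "user",
        "content": f"Request: {original_request}\nSummary: {agent_output}",
    }])
    return str(reply).strip().startswith("APPROVED")

# Example: catch an onion-rings mix-up before the order is placed.
ok = reviewed("Extra onions, no onion rings", "Order: one onion rings")
print("send to kitchen" if ok else "flag for human review")
```

A gate like this only reduces risk, of course; the judge is itself an AI system that can err, which is part of why the liability question does not go away.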
In the fast-moving world of AI development, much work remains before these agents can be trusted to handle things on their own. Thakur’s experiments show that even the most advanced systems slip up, so caution is warranted. And as the industry races toward more complex agent systems, the question of accountability only becomes more pressing; by all appearances, full reliance on AI agents is still a long way off.