What should I eat this week? How should I cut my hair? Where should I take my family on vacation? What’s the best way to fix my shower grout? These are just some of the questions journalist Kashmir Hill handed over to generative AI in an experiment she documented in The New York Times—an attempt to see just how well artificial intelligence could handle everyday decisions.

Hill tested multiple chatbots to guide her choices, even naming one “Spark” at the suggestion of her young daughters. The results? A mix of helpful, amusing, and sometimes flawed advice that highlighted both the promise and the limitations of AI as a personal assistant.

AI as a Personal Decision-Maker

From meal planning to fashion advice, Spark played the role of digital concierge. It generated shopping lists in seconds, suggested the best order for cooking ingredients (onions before mushrooms!), and even crafted thoughtful daily schedules with reminders to “tuck in daughters” and “wind down for yourself.”

But the AI’s judgment wasn’t always perfect. When asked for home office color suggestions, it offered taupe, sage, and terra cotta, all reasonable but ultimately a matter of taste. Yet in social situations it showed surprising awareness: when Hill asked for advice on interacting with her mother-in-law, the AI suggested letting her cook if she enjoyed it, an unexpectedly thoughtful answer.

AI Is Already Making Choices for Us

Hill’s experiment is just a glimpse of something bigger: AI isn’t just suggesting decisions; it’s starting to make them. OpenAI’s latest release, Operator, is a major step in this direction. Currently available as a research preview, Operator is an agent that doesn’t just recommend choices but acts on them, booking reservations, filling out forms, and ordering groceries on the user’s behalf.

This shift from passive assistant to active participant is a major leap. Imagine an AI that doesn’t just tell you what to cook but orders the ingredients for you. Or one that doesn’t just suggest a vacation spot but books the flights, hotel, and itinerary. Operator and similar tools mark the beginning of a new era, where AI doesn’t just offer guidance—it follows through.
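To make that shift concrete, here is a minimal, purely illustrative Python sketch of the difference between an assistant that only recommends and an agent that executes a step. It is not based on Operator’s actual design; the `order_groceries` tool, the `recommend` and `plan_action` functions, and the hard-coded suggestions are all hypothetical stand-ins.

```python
# Illustrative sketch only: a toy contrast between a "recommend-only" assistant
# and an agent that executes an action. Nothing here reflects how Operator
# actually works; all tool names and functions are hypothetical.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Action:
    tool: str        # which capability to invoke, e.g. "order_groceries"
    arguments: dict  # parameters the model would fill in, e.g. an item list


def recommend(goal: str) -> str:
    """Passive assistant: returns advice as text and stops there."""
    return f"For '{goal}', you could buy onions, mushrooms, and pasta."


def plan_action(goal: str) -> Action:
    """Agent: turns the same goal into a concrete, executable step."""
    return Action(tool="order_groceries",
                  arguments={"items": ["onions", "mushrooms", "pasta"]})


def order_groceries(items: List[str]) -> str:
    """Hypothetical tool the agent is allowed to call."""
    return "Ordered: " + ", ".join(items)


TOOLS: Dict[str, Callable[..., str]] = {"order_groceries": order_groceries}


if __name__ == "__main__":
    goal = "plan dinner for tonight"

    # Guidance only: the human still has to follow through.
    print(recommend(goal))

    # Follow-through: the agent executes the step (ideally after user approval).
    action = plan_action(goal)
    print(TOOLS[action.tool](**action.arguments))
```

The design difference sits in the last step: a recommender hands text back to the person, while an agent hands structured arguments to a tool that changes something in the world, which is why approval and oversight matter as these systems mature.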

The Human Factor: Where AI Still Falls Short

Yet, as Hill discovered, AI is still far from replacing human intuition. It can provide logical recommendations, but it lacks the personal insight that comes from experience and emotion. For example, when seeking fashion advice, Hill shared outfit photos with the AI, which recommended an olive knit tank top paired with high-waisted dark jeans, a combination it rated as a perfect match. But while the AI could assess aesthetics, it couldn’t account for how she actually felt wearing the outfit, a crucial element in any real decision.

This is the fundamental gap: AI is great at optimizing based on available data, but it doesn’t know you in the way that a close friend, a spouse, or even your own gut instinct does. As AI tools like Operator advance, the challenge will be ensuring they enhance, rather than override, human agency.

The Future of AI-Assisted Decisions

Hill’s experiment, combined with the rise of AI agents like Operator, points to a future where AI plays an even bigger role in our everyday lives. Whether it’s streamlining mundane choices or taking on more complex tasks, AI is evolving from a decision-making aid to an action-driven assistant.

But should we completely outsource our choices? The answer, for now, is balance. AI can help reduce decision fatigue and handle routine tasks, but it still can’t replicate personal preferences, emotional intelligence, or the unique messiness of human life.

For now, AI can take some choices off our plate. But when it comes to the ones that truly matter, we might still prefer to make up our own minds.

Interested in Learning More?