Can AI Replace a Human Adviser?
Artificial intelligence (AI) tools like ChatGPT are increasingly capable of analysing financial data, generating reports, and answering complex questions in seconds. But the real question is: can AI truly replace a human financial adviser?
In this series, ‘Can AI Replace a Human Adviser?’, we examine real-life case studies of clients navigating major life transitions and put AI to the test against a Providend Client Adviser. As these stories unfold, we reveal what AI gets right, what it gets dangerously wrong, and why human advice may still matter more than you think.
Case Study: How to Plan Wisely After a Job Loss
Samantha, aged 38, is a senior project manager at a mid-sized tech company, earning a take-home income of $95,000 a year. She is married, with one child aged 6, and owns an HDB flat with an outstanding mortgage.
Recently, Samantha was informed that her company is downsizing, and her role has been made redundant. This unexpected development has shaken her confidence and financial planning. While she has some savings and investments, the sudden loss of income has prompted her to reconsider her short- and long-term financial priorities.
Samantha wants to ensure her family’s financial stability while she transitions to her next role. She hopes to maintain her current lifestyle for her child’s education, manage her mortgage responsibly, and eventually find a new role that aligns with her career aspirations — even if it means a temporary dip in income or exploring a different industry.
Samantha has about 12 months’ worth of expenses in her emergency fund, and her investments are moderately diversified. She has been diligent with savings but has not previously prepared for an extended period of unemployment. The uncertainty has raised concerns about cash flow, managing debts, and continuing contributions to retirement accounts.
Emotionally, Samantha feels a mix of anxiety and frustration. She also wants to ensure that this setback doesn’t derail her long-term financial plans, including her goal of funding her child’s university education and planning for early retirement.
Her question is: “How can I manage my finances and protect my family while I navigate job loss, maintain my lifestyle, and plan for future opportunities?”
To find out, we posed Samantha’s exact situation to ChatGPT and asked it to act as her financial adviser. Our Client Adviser, Joyce, reviewed the AI-generated plan, analysed its conclusions, and compared them against what a real human adviser would do. Here is what she found.
What ChatGPT Got Right
To its credit, ChatGPT didn’t just spit out a bunch of numbers; it did something that actually matters: it acknowledged her emotions. It recognised the anxiety and frustration Samantha felt and validated that redundancy can feel destabilising even when one is financially prepared.
Then, it got down to business. And it was good at it.
It outlined the cash runway analysis, emergency fund deployment, mortgage considerations, retirement contribution adjustments and so on. It was logical, clear and efficient. In fact, it even generated a spreadsheet, a scenario model and a checklist of next steps, all within seconds. From a computational standpoint, this is impressive.
When you’re panicking at 2 am, having a tool that can instantly organise your thoughts and give you a plan is incredibly valuable. It’s accessible. It’s fast. It lowers the panic level by giving you a sense of control. The advice was technically sound, the structure was logical and the cash flow guidance was reasonable.
But something was missing: the plan was hollow. It was like a perfectly written recipe when what you really need is someone to cook the meal with you and tell you everything is going to be okay. AI is designed to answer. It reads the lines; humans read between the lines.
Where ChatGPT Hit a Wall (The Missing Pieces)
Samantha’s problem is not a math problem. It is a human problem with a math component. And that’s where ChatGPT, for all its brilliance, falls short.
It can’t see what is not being said.
When someone loses a job, the first instinct is to “fix the numbers”. But redundancy is rarely just a numbers problem. It could be a fear of judgement, a sense of loss of identity, or concern about being “behind” peers. A human adviser can sit in that discomfort without rushing to optimisation. Humans pause in the conversation when emotions rise, and this prevents decisions driven by ego, shame or urgency.
A human adviser doesn’t just rush in to fix the numbers. They know when to pause. They know that sometimes, the best thing you can do is just sit quietly and let the other person process. They can slow things down. When Samantha’s fear threatens to spiral, a human can gently steer her back to the facts: “Okay, let’s just look at this one step at a time.” AI can acknowledge emotion, but it can’t contain it. It can’t absorb your fear and reflect back calm.
It forgot she has a family.
ChatGPT talked to Samantha like she was a single person making decisions on an island. But she’s not. She’s a wife. She’s a mother. Her job loss affects everyone at the dinner table.
A human adviser would immediately widen the lens. They would ask questions like:
“How is your husband coping with this news? Is he feeling pressure to step up?”
“Have the two of you sat down to talk about what expenses to cut? Are you on the same page, or is there some tension there?”
Money lives in relationships. When one person loses a job, the whole household feels the ripple effects. Sometimes, the biggest challenge isn’t the budget itself; it’s getting everyone aligned. At Providend, we believe in family facilitation, helping partners and families align their priorities and move forward together. That is something no algorithm can do.
It gave Samantha too much information.
ChatGPT is designed to be thorough. So, when Samantha asked for help, it started generating different scenarios: Scenario A if she finds a job in 6 months, Scenario B if she takes a pay cut, Scenario C if she switches industries, all with different numbers and projections.
For someone who’s already anxious, that’s not helpful. That’s overwhelming. It’s like giving a drowning person a lesson on the physics of water displacement. “Here are the three scenarios with projected IRRs” is data, but a drowning person doesn’t need data; she needs a lifeline. A human adviser knows that in moments of crisis, simplicity is a superpower. Instead of three complex scenarios, a human would lean across the table and say,
“You are not in danger, you have 12 months of runway, let’s breathe and think rationally.”
It assumes you have told it everything.
ChatGPT can only answer what you ask, and it does so in a structured way that makes the plan look complete. But here’s the truth: ChatGPT, or rather, AI, can only respond to what it is told. If Samantha didn’t mention certain things, it would not know to ask. AI works with the information it receives. But in real advisory work, some of the most important risks are never stated upfront.
It won’t challenge her.
AI is built to be nice, helpful, and agreeable. It won’t push back or ask hard questions, because hard questions might upset the user. Human advisers, on the other hand, have something called moral courage. We care about you too much to just tell you what you want to hear. We do something much harder: we challenge you.
If Samantha says, “I just need to find any job, fast, to stop the panic,” a human might ask, “I hear that. But before this happened, were there parts of your old job that made you miserable? If you rush into another one just like it, will you be happy in a year?”
If she’s scared to change industries, they might ask, “When you imagine your next role, what does a good day look like? Let’s separate what you could do from what you actually want to do.”
They might even ask the big, scary question: “If this isn’t a setback, but a strange kind of gift, a reset, what might it be inviting you to reconsider about your whole life?”
These aren’t questions an algorithm can ask. They require intuition, courage and a deep care for the person sitting in the chair. AI processes sessions, but humans steward journeys.
It can’t help her find meaning.
Finally, and most importantly, ChatGPT can’t help Samantha figure out what it all means. Cash flow templates can be automated, asset allocation models can be replicated, and rebalancing can be systemised, but life cannot be templated.
When Samantha says she wants to “maintain normalcy”, a human adviser hears a deeper question:
“What kind of example do you want to set for your child?”
“Is early retirement still the goal, or is fulfilment more important now?”
Here at Providend, we often anchor conversations not only in returns, but in purpose. ChatGPT provides tools, a human helps you figure out what you’re actually building.
The Future of Advice: This Is Where Advisers Must Not Become Complacent
This isn’t meant to scare anyone. It’s not about saying AI is bad or useless. It’s incredibly useful. It’s going to change the way we do many things, and financial planning is no exception.
But it forces us to get honest about what we, as human advisers, actually bring to the table. If all we do is build portfolios and run numbers, we should be worried. A computer can do that faster and cheaper.
The only sustainable competitive advantage for human advisers lies in the domains AI cannot replicate. This case study illuminates the path forward. The future of advice lies in mastering:
- Emotional Containment: This is more than empathy. It is the active, almost gravitational, ability to hold a client’s anxiety in the room without absorbing it, dismissing it, or panicking. It’s the quiet confidence that says, “I’ve seen this before. We will get through this.” It is the human capacity to regulate another’s nervous system through presence.
- Family Facilitation: Moving from individual client management to facilitating a multi-party conversation. It’s the skill of uncovering hidden tensions, ensuring alignment, and turning a family unit into a team. The resilience of a family’s finances often depends more on their communication than their returns.
- Moral Courage and Deep Inquiry: The willingness to ask the hard questions that challenge a client’s assumptions, confront their fears, and connect their money to their deepest values. It’s about guiding them toward a meaningful life and wisdom, not just efficiency.
- Long-Term Stewardship and Relational Memory: Financial advice is not a transaction; it is a journey that spans decades. Human advisers accumulate relational memory. They remember how a client reacted during the last market downturn, what values they prioritised and what dreams they whispered about. This shared history, this continuity of care, builds a level of trust and nuanced understanding that is impossible for an AI to replicate.
Conclusion
The advice ChatGPT gave Samantha was not wrong. It was logical, fast, and helpful. It would have provided her with a solid framework for survival. But it was incomplete.
Samantha’s problem was never truly about cash flow or mortgage management. Those are the symptoms. The core problem was the human one: the shock to her identity, the fear of the unknown, the quiet tension at the dinner table, and the need to find meaning and purpose in the midst of disruption.
In the end, the question isn’t whether AI can replace a human adviser. I am confident that it can’t. The question is whether human advisers are brave enough to stop competing on the AI’s terms, because if we’re just racing to see who can crunch numbers faster or build a fancier spreadsheet, the machine is going to win. It’s faster. It’s cheaper. It doesn’t sleep.
But if we’re willing to show up differently, to sit with you in the hard moments, to ask the questions you didn’t expect, that’s something no algorithm will ever be able to do. For Samantha, sitting alone with her worries at 2 am, ChatGPT gave her a plan. That’s useful. But what she really needed, what we all need in those moments, is someone who can look her in the eye and say, “You’re going to get through this. And I’ll be here the whole way.”
That’s not just advice. That’s something else entirely. And it’s the only kind that truly matters.
This is an original article written by Joyce Chng, Client Adviser at Providend, the first fee-only wealth advisory firm in Southeast Asia and a leading wealth advisory firm in Asia.
For more related resources, check out:
1. How to Make Life Decisions (Ikigai Decisions)
2. To Live the Good Life, Make Life Decision First Before Wealth Decisions
3. Here’s Why We Charge a Higher Fee Than Robos
To receive first-hand wealth insights from our team of experts, we invite you to subscribe to our weekly newsletter.
Through deep conversations with our advisers, you will gain clarity on what matters most in life and what needs to be done to live a good life, both financially and non-financially.
We do not charge a fee at the first consultation meeting. If you would like an honest second opinion on your current investment portfolio, financial and/or retirement plan, make an appointment with us today.
