Your Website Assumptions Are Costing You Conversions
Key takeaways:
- B2B decision-makers in professional services are increasingly researching and enquiring via mobile, not desktop — assuming otherwise means your mobile experience is probably underbuilt.
- Page speed improvements deliver diminishing returns past roughly 2 seconds; optimising below that threshold rarely moves conversion rates in any meaningful way.
- Every additional field on a contact form reduces completion rates by approximately 15%, making simplicity the most underrated conversion strategy available.
What the data keeps telling us vs. what we keep believing
The interesting thing about working with clients across 25 years is that you stop being surprised by the results and start being surprised by how long the wrong assumptions persist. The same beliefs that were questionable five years ago are still driving decisions today. Mobile is just for consumers. Faster is always better. More information from users equals better leads. None of these hold up when you actually measure them.
Last year I pulled together patterns from work across five Singapore-based clients in professional services — consulting firms, law-adjacent businesses, financial advisory practices — the kind of clients where everyone assumes the buying journey happens on a laptop in an office. What came back from the data challenged almost every default assumption in the brief.
Are B2B buyers really still researching on desktops?
No. Across five professional services clients in Singapore, mobile overtook desktop as the primary source of B2B enquiries. These were not e-commerce stores or consumer products. These were businesses where the average deal size was significant and the buyer was, presumably, a senior decision-maker.
I’ve been building websites since 1998. Back then, mobile was not a consideration because it was not a reality. When the first wave of mobile optimisation hit around 2010 and 2011, the guidance was to make sure your site “worked” on mobile. Then responsive design became standard. Then mobile-first became the mantra. And yet the assumption that B2B buyers do their serious research on a desktop has persisted in brief after brief, meeting after meeting, right up until the data says otherwise.
What this actually means in practice is that professional services websites are being built with the desktop experience as the real priority, and the mobile version treated as an obligation rather than a genuine entry point. The contact form gets tested on desktop. The service pages get reviewed on desktop. The leadership team photos get approved on desktop. And then 50-plus percent of your actual enquiries come from someone on a phone, hitting a form that hasn’t been seriously evaluated at 390 pixels wide.
One client I worked with last year had a genuinely impressive desktop experience. Well-structured service pages, clear navigation, a contact form that converted reasonably well. On mobile the form had three fields that required horizontal scrolling to complete. Nobody had noticed because nobody had tested it seriously on an actual phone. We fixed it in a day. The mobile enquiry rate moved within two weeks.
The assumption is the problem. When you assume desktop is primary, that assumption shapes every decision downstream. Fix the assumption first.
How fast does a website actually need to be?
Fast enough to not frustrate people. Past that threshold, the returns drop off sharply.
The specific pattern I saw last year: going from a 4-second load time to 2 seconds produced a measurable lift in enquiry rates. Going from 2 seconds to 1 second produced almost nothing. The difference was not statistically significant across the clients where we tracked it.
This is not an argument against performance. Page speed still matters, and a slow website is a real problem. Google uses it as a ranking factor, and users notice lag. But there is a category of technical investment that consultants and developers (myself included, in earlier years) have oversold: the obsessive pursuit of perfect Core Web Vitals scores when the site is already performing adequately.
I have sat in client meetings where a developer has presented a roadmap for getting a site from a 72 to a 95 on PageSpeed Insights, priced at a meaningful number, with the implicit promise that this will move the conversion needle. Sometimes it does. If the site is genuinely slow, fixing it genuinely helps. But if the site already loads in under 2.5 seconds and the conversion rate is not moving, the answer is almost certainly not faster hosting or lazy loading optimisation.
The pattern I keep coming back to: speed problems are often a symptom of bigger structural issues. Images that haven’t been compressed, themes loaded with plugins that aren’t being used, hosting plans that were chosen on price rather than performance. Solving those problems usually gets you to “good enough” without a significant technical engagement. After that, the work that actually moves conversions tends to be about clarity, not milliseconds.
At Chillybin, we moved clients off shared hosting into better environments years ago, not because it gave us perfect PageSpeed scores, but because it got them out of the danger zone where performance was genuinely hurting them. That step matters. The step where you spend six weeks shaving another 400 milliseconds off an already-fast site usually doesn’t.
Why are shorter contact forms performing better?
Because every field you add is a question you’re asking a stranger to answer before they’ve decided they trust you.
The pattern from last year’s client work: each additional field reduced form completion rates by roughly 15%. That is not a small number. Add three extra fields to your contact form and you have potentially cut your enquiry volume by nearly half.
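Whether those 15% reductions add up or compound changes the exact figure: treated additively, three extra fields cost you close to half your completions; if each reduction compounds on the last, the loss is just under 40%. A rough sketch of the compounding version, where the baseline rate and per-field penalty are illustrative assumptions rather than measured figures:

```typescript
// Estimated form completion rate after adding extra fields, assuming a
// compounding 15% reduction per additional field (an assumption for
// this sketch, not a measured model).
function completionRate(baseline: number, extraFields: number, penalty = 0.15): number {
  return baseline * Math.pow(1 - penalty, extraFields);
}

// A form converting 30% of visitors loses ground quickly:
for (let fields = 0; fields <= 3; fields++) {
  console.log(`${fields} extra field(s): ${(completionRate(0.3, fields) * 100).toFixed(1)}%`);
}
// Three extra fields leave 0.85^3 ≈ 61% of the original enquiry volume.
```

Either way you model it, the direction is the same: each field you add removes a meaningful slice of the people who would otherwise have enquired.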
The standard form on most professional services websites asks for name, email, phone number, company name, job title, approximate budget, service required, how they heard about you, and sometimes a text box for their message. I have seen forms longer than that. The rationale is always the same: we want to qualify the leads before we speak to them.
That impulse is understandable. Sales time is valuable. Nobody wants to spend 40 minutes on a call with someone who has a $500 budget for a project that starts at $5,000. But the form is the wrong place to do that filtering. A long form does not qualify your leads. It eliminates your leads. The people who fill out a 10-field form are often the people with the most time, not necessarily the highest-value enquiries.
The simplest form structure that consistently performed best across last year’s work: name, email, message. That’s it. No phone number as a required field. No budget dropdown. No “how did you hear about us.” Three fields, low friction, higher completion.
The qualification happens in the follow-up conversation, where you can actually understand what the person needs rather than asking them to self-categorise before you’ve even spoken.
I understand the pushback here. “But we get low-quality enquiries.” Yes, you will get some. You also get them with long forms, just fewer of them. The question is whether you would rather have 20 enquiries and convert four of them, or have eight enquiries and convert three. The maths favours volume, especially when your conversion from enquiry to client is already below 50%.
What should I actually measure to know if my website is working?
Enquiry rate per visit, broken down by device. That is the number that tells you what is actually happening.
Most clients I speak with know their traffic. Some know their general conversion rate. Very few have looked at that conversion rate split by device, and even fewer have mapped form completion rates field by field to understand where people are dropping out.
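The device split itself is a small aggregation once you can export visit data. A minimal sketch — the `Session` shape and field names here are assumptions for illustration, not the format of any particular analytics export:

```typescript
// A single website visit, reduced to the two things that matter
// for this metric: what device it came from, and whether it enquired.
interface Session {
  device: "mobile" | "desktop" | "tablet";
  enquired: boolean; // did this visit submit the contact form?
}

// Enquiry rate per visit, broken down by device category.
function enquiryRateByDevice(sessions: Session[]): Record<string, number> {
  const visits: Record<string, number> = {};
  const enquiries: Record<string, number> = {};
  for (const s of sessions) {
    visits[s.device] = (visits[s.device] ?? 0) + 1;
    if (s.enquired) enquiries[s.device] = (enquiries[s.device] ?? 0) + 1;
  }
  const rates: Record<string, number> = {};
  for (const device of Object.keys(visits)) {
    rates[device] = (enquiries[device] ?? 0) / visits[device];
  }
  return rates;
}
```

If the mobile number comes back well below the desktop number, that gap is usually the cheapest conversion problem you will ever find.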
The tools to do this are not exotic. Google Analytics 4 gives you device breakdowns. Form abandonment data is available if you are willing to set it up. Heatmaps are cheap and show you where people stop engaging on a page. None of this requires a data analyst or a six-month measurement engagement.
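Field-level drop-off is equally unglamorous maths: once you log how many users reach each field in order, finding the leak is simple division. A sketch, with hypothetical counts rather than real client data:

```typescript
// Given the number of users who reached each form field, in order,
// compute the fraction lost between each field and the next.
function fieldDropoff(reached: number[]): number[] {
  const losses: number[] = [];
  for (let i = 1; i < reached.length; i++) {
    losses.push(1 - reached[i] / reached[i - 1]);
  }
  return losses;
}

// Hypothetical example: 1000 users start the form, 900 reach the
// email field, only 500 reach the budget dropdown.
// fieldDropoff([1000, 900, 500]) → [0.1, ≈0.44]
// The budget field is where nearly half the remaining users walk away.
```

The point is not the code; it is that the field losing you enquiries is identifiable with an afternoon of setup, not a measurement project.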
The honest reason most businesses do not have this data is not capability. It is prioritisation. The website gets built, it goes live, and then attention moves on to the next thing. The measurement infrastructure that would tell you whether the site is actually performing gets deprioritised because there is always something more urgent. A year later, someone looks at the enquiry rate and wonders why it has not moved.
Measurement is not a reporting exercise. It is how you find the specific problem rather than guessing at it. The mobile form issue I mentioned earlier? Found through a device breakdown and a quick session recording. Without that data, we would probably have started with a content refresh or a redesign conversation, which would have cost more and fixed nothing.
Does this apply outside Singapore?
The patterns do, even if the numbers vary by market.
Singapore is a specific context. Smartphone penetration is high, the population is mobile-native in a way that not every market is, and B2B buyers tend to move fast when they have decided they want something. The mobile-first enquiry pattern is probably more pronounced here than it would be in some other markets.
But the underlying logic applies broadly. Decision-makers are people, and people use their phones throughout the day. A managing director in Auckland or Sydney is checking LinkedIn on their phone, reading articles on their phone, and, increasingly, filling out contact forms on their phone when something catches their attention at 7pm. The desktop-centric assumption is not unique to Singapore. It is just easier to see it fail here because the mobile usage rate is so high.
The form completion data is consistent everywhere I have seen it measured carefully. Friction reduces completion. That is not a Singapore observation, it is a human behaviour observation.
The three patterns that came out of last year’s client work are not complicated. Mobile matters more than your assumptions suggest. Speed matters up to a point, then it stops mattering. Simpler forms convert better than sophisticated ones.
None of this is radical. But the gap between knowing something and actually acting on it is where most website performance problems live. The assumption that B2B is desktop-first has survived years of contradictory evidence. The belief that faster is always meaningfully better keeps driving technical roadmaps that don’t shift enquiry rates. The logic of the long qualification form keeps winning internal arguments despite the data.
The work worth doing in 2024 is the same work that was worth doing in 2023: measure what is actually happening, challenge the assumptions that have not been tested, and resist the temptation to build something more complicated when something simpler would convert better.