AI-powered objections are here: how should the planning and development sector respond?

One of my main priorities in running developer consultations has been making consultation meaningful in a system that often treats it as a procedural hurdle. In 2017 I wrote Public Consultation and Community Involvement in Planning: a 21st century guide, and I set up ConsultOnline as one of the first online consultation services because I wanted engagement to be easier, clearer and less exclusionary.

But are the tables beginning to turn? The same technologies that can make consultation more accessible can also make objection easier to automate. Recent platforms such as Objector AI show that automated services for objecting to planning applications have become a reality.

Will AI objections change the game?

The immediate impact is not simply more objections but faster objections that look more professional, more policy-literate and more confident than the knowledge behind them warrants.

Perhaps the greatest problem is provenance. Consultation has always been messy. People get angry, cut and paste content, and 'Chinese whispers' versions of the facts proliferate through consultation responses. AI adds scale and plausibility to that mess. A councillor or case officer faced with a polished letter citing a case, a regulation or a paragraph number may not be able to tell whether those references are robust, misquoted or invented, and planning departments are far too under-resourced to check the veracity of each one.

What not to do

Of the available responses, the most obvious ones, however tempting they may feel, will probably backfire.

The first is dismissing AI-enabled objections as illegitimate. That only reinforces a narrative that developers want to silence residents. We have to accept that many people using these tools are not acting in bad faith: they are time-poor, intimidated by planning language, and they have been offered a shortcut.

Another is trying to 'out-AI' the objectors with automated rebuttals. If your response looks machine-written it will corrode trust and it will hand opponents an easy line about corporate spin.

What to change

The practical response is to make it harder for low-quality, synthetic or misleading content to dominate while making it easier for genuine local knowledge to surface.

Make the consultation material simpler and more navigable

AI objection tools thrive when the source material is dense and requires consultees to invest unreasonable amounts of time in digesting it.

I would treat 'accessibility' as a risk control and ensure that all consultations follow these basic principles:

  • publish plain-English summaries alongside the technical documents
  • provide clear FAQs with referenced answers
  • show how trade-offs are being handled rather than inviting binary support or objection
  • signpost what is fixed, what is still live and where feedback can influence outcomes

If people feel informed they are less likely to feel aggrieved and to outsource their voice to a tool that responds to their anger.

Provide a controlled channel for questions

Many objections are really questions that have nowhere to go, so give them a route. This can be as simple as an online Q&A that is updated daily during a peak period. If you use an AI-enabled chatbot, constrain it tightly to approved source material and publish a clear statement explaining what it can and cannot do, while also providing a route to a human response.

Provide verification, not rebuttal

During a live consultation there is rarely a need to provide full responses, just a grateful acknowledgement and links to further information.

Consultation responses are best collated and analysed at the end of the process. This is a more efficient point at which to verify objections that cite policy, legislation or case law, and there is no need to respond directly to the individual: the information will be contained within the consultation report, which will be submitted to the local planning authority and published online. It is at this stage that (anonymised) inaccurate responses can be politely debunked.

Present consultation responses in such a way that disinformation is immediately apparent

In compiling the consultation responses, I suggest a simple triage:

  • material planning issues grounded in policy or evidence
  • local impacts and lived experiences that require judgement and mitigation
  • non-material points, misinformation and fabricated references

Then report responses through that lens and highlight inaccuracies without shaming individuals or fuelling their anger.

Focus on the qualitative, not the quantitative

If you're not already doing it, stop counting responses and start reporting themes.

AI has made volume less meaningful. Consultation reporting now needs to show how themes were identified, how duplicates and template content were handled and how minority views were retained.

If you use AI to help with analysis, treat it like a junior analyst: helpful, fast and fallible. Keep a record of inputs, sampling checks and sign-off. This is not bureaucracy. It is how you remain defensible when challenged.
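As an illustration of how template content might be handled, near-duplicate responses can be grouped with simple fuzzy matching before thematic analysis begins. This is a minimal sketch, not a recommendation of any particular tool: the threshold, the greedy grouping and the example responses are all assumptions for illustration.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    # Compare word sequences rather than characters so shared
    # template wording drives the score
    return SequenceMatcher(None, a.lower().split(), b.lower().split()).ratio()

def group_near_duplicates(responses, threshold=0.8):
    """Greedy grouping: each response joins the first group whose
    representative it closely matches, otherwise it starts a new group."""
    groups = []  # list of (representative, [members])
    for text in responses:
        for rep, members in groups:
            if similarity(rep, text) >= threshold:
                members.append(text)
                break
        else:
            groups.append((text, [text]))
    return groups

# Hypothetical responses for illustration only
responses = [
    "I object to this application because of traffic impacts on Mill Lane.",
    "I object to this application because of the traffic impacts on Mill Lane.",
    "The proposed height is out of keeping with the conservation area.",
]
groups = group_near_duplicates(responses)
# The two near-identical traffic objections fall into one group;
# the heritage point stands alone
```

Reporting group sizes alongside a single representative of each group is one way to show how duplicates were handled while still retaining minority views.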

Take moderation and conduct seriously

Safeguarding your team and setting behavioural expectations is part of running a competent consultation. Clear codes of conduct, event moderation and escalation routes protect staff and protect the legitimacy of the process.

A note on fairness

There is an argument that AI objection tools 'level the playing field' for those who cannot afford professional advice. There is some truth in that: the planning system is full of language and processes that reward insiders, and not enough has been done to change that.

But the correct response is not to fight participation; it is to improve it. Good consultation is not weakened by more voices but by low-trust processes, inaccessible information and decision-making that cannot explain how evidence was weighed.

AI-powered objections demonstrate that consultation must be easier to engage with, harder to manipulate and more transparent in how it turns feedback into decisions. That is achievable, but it requires promoters to treat consultation as a core part of risk management and reputation rather than a box to be ticked.