AI and Architecture: The Question of Ownership

As AI rapidly changes the way we live and work, at LA London we continue to explore the opportunities of AI-assisted tools for us and our clients. However, we are also acutely aware of the legal and ethical challenges surrounding AI, as architects become increasingly vulnerable to having their intellectual property – including concepts, drawings and models – used without their consent.

In the latest in our series of articles on AI in architecture, LA London Associate – and our in-house AI expert – Miruna Stroe answers some questions about AI and IP, exploring the government’s new AI Opportunities Action Plan and the industry’s response, as well as our own policies and practices at LA London.


LA London Associate Miruna Stroe

What are some of the key threats posed by AI to architects’ intellectual property?

It comes back to the age-old debate about whether architecture is an artistic pursuit or a technical endeavour. As an art, the professional culture of architecture is deeply enmeshed with intellectual property rights. As a technical discipline, it tends to make use of the latest technologies and to support innovation.

In particular, as AI use advances, the central threat is invisible use. AI systems learn by consuming vast amounts of data — as architects, that would mean images, drawings, text, even 3D models — and right now, most of that ingestion happens without the creators’ consent. For architects, whose work is often publicly accessible in planning portals or project portfolios, that means your designs could become part of a model’s training material without you ever knowing. So, while using a model to quickly generate variations of a design for a client presentation, you might inadvertently mimic another architect’s “style”. It’s not copying per se, but style mimicry is still frowned upon and, over time, it could lead to an unwelcome aesthetic uniformity.


The other danger is internal rather than external. Many offices use public AI tools for quick visualisations or text generation, often uploading sensitive project material along the way. Unless those tools guarantee that your data isn’t stored or reused for model training, you could be leaking both client confidentiality and intellectual property.

This is why practices need to put clear policies in place governing the use of AI and defining which project details it is acceptable for AI tools to have access to.


What impact is the government’s AI Opportunities Action Plan having on the use of AI in architecture?

The government’s AI Opportunities Action Plan is mostly about encouragement rather than control. It aims to position the UK as a leader in “applied AI”, funding regional AI clusters, training programmes, and public sector pilots. In architecture, that translates into a friendlier environment for experimenting with generative design, digital twins, and planning-process automation. Some councils have already started using AI to assist with the digitisation of paper documentation, helping to produce digitally available information much more quickly. The reliability of this will need to be assessed.

The complication is that this pro-innovation stance has outpaced legal clarity. While the policy invites architects to adopt AI tools, the surrounding copyright framework remains unsettled. It’s a bit like being told to drive faster while the traffic lights are still being installed. The plan accelerates adoption but leaves questions of authorship, data use, and liability dangling.

Understandably, the UK wants to position itself among the AI powers and is competing with the US and China, while the EU, as usual, takes a more cautious stance. With architecture being a heavily regulated domain, a free-for-all buffet of AI use is nearly impossible to imagine. Responsibility, both legal and ethical, is baked into the professional identity of an architect and upheld by our professional codes of conduct (ARB and RIBA). Hence, while architectural practices are already widely exploring AI, I’m quite sure they won’t adopt it without careful consideration.


What about the proposal to let AI companies train models on copyrighted work unless creators opt out?

Pretty much all AI companies take the position that it’s more valuable for them to train their models on reliable, proper data, and that if such data is not readily available, it’s easier to ask forgiveness (or pay up) after the event than to ask for permission beforehand. Hence, they use data that is not necessarily out in the open, but still available with a bit of effort. Quite often it’s data hosted on platforms one would not consider vulnerable – such as planning portals.


That proposal — allowing model training on any published material unless the author explicitly opts out — is arguably the most controversial element of current UK copyright reform. On paper, it aims to balance innovation and rights management. In practice, it flips the burden: creators must monitor where their work ends up and find ways to register their objection, platform by platform.

For architects, that’s not just impractical, it’s philosophically skewed. We’re moving from a world where consent is required before use to one where silence equals permission. It’s also technologically brittle: how exactly do you “opt out” of a model that has already absorbed billions of data points? The fairer route would be licensing frameworks or transparent data registries — systems that let creators choose participation rather than chase infringements retroactively, rather like the UK’s Telephone Preference Service does for nuisance phone calls.


What steps is the industry taking to protect architects’ copyright?

The professional bodies have started to stir. RIBA and other sector groups have been consulting members and lobbying for clearer IP guidance in the age of AI. There’s also a parallel movement in technology itself: the emergence of provenance standards such as C2PA and Adobe’s “Content Credentials,” which embed metadata into images and documents to trace their origin and any AI involvement.
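
To make that concrete, the sketch below illustrates the general principle of provenance metadata travelling inside an image file. It is a simplified Python illustration using the Pillow library’s PNG text chunks; real C2PA Content Credentials rely on cryptographically signed manifests, and the field names here are assumptions for illustration, not the standard’s schema.

# Simplified illustration: attach provenance fields to a PNG as text chunks.
# Real C2PA / Content Credentials use signed, tamper-evident manifests;
# the field names below are illustrative assumptions, not the C2PA schema.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def embed_provenance(src_path, dst_path, author, ai_usage):
    """Save a copy of an image with provenance fields embedded."""
    image = Image.open(src_path)
    info = PngInfo()
    info.add_text("Author", author)
    info.add_text("AIUsage", ai_usage)  # e.g. "massing options generated; reviewed by designer"
    image.save(dst_path, pnginfo=info)

def read_provenance(path):
    """Return any text chunks stored in a PNG."""
    return dict(Image.open(path).text)

The mechanism matters less than the principle: origin information stays with the file, so anyone downstream – including a dataset builder – can see who made the work and whether AI was involved.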

Beyond that, individual practices are quietly rewriting their contracts. They’re inserting clauses that forbid third parties from using project material for model training, require disclosure of AI assistance, and push responsibility for infringement back up the supply chain. It’s a gradual adaptation — less glamorous than new rendering tools, but far more consequential for the profession’s long-term autonomy.



What is LA London doing to protect its IP?

At LA London, most of the protection begins with discretion rather than digital fencing. The majority of our projects are tied to private clients and are already covered by NDAs, so from the outset we rarely disclose the link between a client, a location, and a design narrative. That separation — the project without the context of ownership — is itself a form of IP protection, because it reduces the risk of our work being absorbed into public datasets or recirculated without permission.

We also have an internal policy on AI use, though it’s less focused on the training of models and more on governance: ensuring that anything produced with AI is reviewed by a designer before it leaves the studio. The emphasis is on authorship and responsibility. If AI is used to explore an idea — a façade study, a spatial variation, a materials atmosphere — the architectural intent still has to be traced back to a human designer, not a machine. Work never goes out without someone being able to say why it looks the way it does.

The protection, then, is cultural as much as procedural. We treat AI as a support tool, not a generator of final imagery, and we maintain a controlled boundary around the material we share externally. In practice, that means being selective about what we publish, reviewing what goes into planning portals, and maintaining a quiet, almost craft-like approach to how design work circulates. The architecture is the valuable part — the relationships, even more — and those aren’t things we surrender lightly.


How can architects and clients prevent accidental breaches of copyright?

Most breaches aren’t the result of malice; they’re the result of speed. A mood board assembled from Pinterest, a render generated with an AI that borrowed a photographer’s composition — these are small, human shortcuts. Preventing them means building habits rather than walls: keeping track of sources, using licensed image libraries, and running similarity checks before publishing.
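
As one example of what a lightweight similarity check might look like, here is a minimal Python sketch using the open-source Pillow and imagehash libraries; the distance threshold is an assumed starting point to tune, not an established cut-off.

# Minimal sketch: flag an outgoing render that is perceptually close to a
# reference image, using a perceptual hash (pHash).
# The threshold is an assumed starting point, not a standard value.
from PIL import Image
import imagehash

SIMILARITY_THRESHOLD = 8  # Hamming distance in bits; lower means more similar

def looks_similar(candidate_path, reference_path):
    """Return True if two images are perceptually close."""
    candidate = imagehash.phash(Image.open(candidate_path))
    reference = imagehash.phash(Image.open(reference_path))
    return (candidate - reference) <= SIMILARITY_THRESHOLD

Run against a library of sourced references before publication, a check like this won’t catch every borrowing, but it makes “did we accidentally reuse that composition?” a routine question rather than an afterthought.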


On the client side, awareness matters just as much. Clients should understand that a quick AI visual found online may not be legally safe for public use. Practices should guide them — explaining what’s permissible, what’s derivative, and where responsibility lies. Transparency about AI use is becoming a mark of professionalism, not a confession of corner-cutting.

Ultimately, the goal isn’t to reject AI, but to domesticate it — to use it within clear creative and ethical boundaries. If architects don’t define those boundaries themselves, the technology will define them by default. And that, more than any single infringement, is the real threat to intellectual property in the age of AI.


Discover previous articles in our AI series, including our interview with Lightfield’s Jonny Cox and our discussion on the ethics of AI in architecture in the LA London journal.

