Foreword: How does Relaunch24 work with artificial intelligence?
Like many modern development teams, we have been using AI-assisted tools in our daily work for several years, chiefly developer editors such as Cursor. For us, these tools are primarily an extension of traditional development environments: they help us solve recurring tasks faster, keep code more consistent and make complex relationships easier to understand. The actual architecture, system logic and security structure of the Relaunch24 engine remain fully under our developers' control; the tools never operate autonomously.
Code yes, databases and customer data no
A clear technical separation is essential. The R24 engine consists exclusively of the core code of our platform. This code contains no customer data, no content and no databases. It only defines how the system works – the mechanics by which websites are delivered, content is rendered and modules interact.
The actual content of a website is deliberately kept separate from this engine structure. Our architecture is based on flat-file technology: content is stored as structured text files directly in the file system. There are no central databases collecting or aggregating content or customer data. This ensures that data always remains where it belongs – on the respective website server.
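To make the flat-file idea concrete, here is a minimal sketch of how such content might be read at delivery time. The file name (`page.txt`), the `Key: value` fields and the `----` separators are hypothetical illustrations, not the actual R24 file format; the point is simply that a page is a structured text file on the server, with no database involved:

```python
from pathlib import Path

def load_page(content_dir: str, slug: str) -> dict:
    """Parse a flat-file page: 'Key: value' blocks separated by '----' lines.

    Hypothetical format for illustration only; the real engine's
    file layout and field names will differ.
    """
    text = Path(content_dir, slug, "page.txt").read_text(encoding="utf-8")
    fields = {}
    for block in text.split("\n----\n"):
        key, _, value = block.partition(":")
        fields[key.strip().lower()] = value.strip()
    return fields
```

A file `about/page.txt` containing `Title: About us`, a `----` line and `Text: We build websites.` would yield `{"title": "About us", "text": "We build websites."}`. Because each page is just a file, content stays on the website's own server and is never aggregated into a central store.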

This separation has an important side effect: even when developers use AI-assisted editors to write or optimise engine code, real customer data never comes into contact with these tools. The code is developed independently of actual website content, while all data remains exclusively on the respective server infrastructure. This allows for efficient development without sensitive data ever becoming part of AI-supported workflows.
Why hosting and data should remain in human hands (for good)
The current enthusiasm around AI tools has led to increasing levels of automation in infrastructure: servers configured by agents, code generated by AI, entire systems created without traditional development processes. This may be useful for prototypes or small experiments. However, when real infrastructure, customer data or business-critical websites are involved, a fundamental issue becomes clear: AI systems carry no responsibility, no liability and no legal accountability.
Several recent incidents highlight why hosting, infrastructure and data management should remain under human control. Developers have reported cases where an AI coding assistant executed commands autonomously during a server migration, deleting two websites and 2.5 years of data – including all backups. The developer later described the cause as “over-reliance on AI”. (Read article)
Even large platforms are not immune. At Amazon, several outages occurred after AI-assisted development tools introduced changes to production code. In one case, a faulty deployment change caused major order disruptions and millions in lost transactions. (Report on Business Insider) In another instance, an AI agent independently deleted parts of a cloud environment, causing hours of service disruption within AWS. (Read analysis)
Security research and studies point in a similar direction. Analyses show that AI-generated code is significantly more likely to contain security vulnerabilities than human-written code. In some cases, nearly half of tested code samples fail basic security checks. (View study) At the same time, increasing automation raises the risk of misconfigurations, data leaks and uncontrolled changes in production systems.
This does not mean AI has no place in software development. On the contrary: as a support tool for developers, it is extremely valuable. The key difference lies in where responsibility remains. Infrastructure, hosting, data storage and security architecture should always be designed, monitored and owned by humans – not by automated agents or tools.
This is exactly why Relaunch24 follows a strict separation: AI may assist in writing code, but it does not operate servers, manage databases or make infrastructure decisions. Hosting, updates, security and data management remain deliberately in human hands. Because while code can be generated quickly, trust in infrastructure is something that must be built over time.