Host your own Magnolia instances on-premises or in the cloud for maximum flexibility.
Magnolia is available as a software bundle certified for Linux, Windows, and macOS. This allows you to install it anywhere: on your computer, in your data center, or on your preferred cloud platform.
Do you prefer to deploy Magnolia as a Docker container in your environment? In our white paper we share best practices to help you choose the right components for your Magnolia container, configure the container to start automatically, containerize your content, and synchronize content across instances.
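A container deployment can be as simple as running the webapp image with a persistent volume for the JCR repository. The image name, ports, paths, and memory settings below are placeholders for illustration, not defaults; adjust them to your own build:

```shell
# Run a Magnolia author instance in Docker.
# Image name, volume path, and JVM settings are examples only.
docker run -d --name magnolia-author \
  -p 8080:8080 \
  -v magnolia-repo:/opt/magnolia/repositories \
  -e JAVA_OPTS="-Xms1g -Xmx2g" \
  --restart unless-stopped \
  my-registry/magnolia-webapp:latest
```

The `--restart unless-stopped` policy covers the "start up automatically" practice mentioned above: Docker restarts the container after crashes and host reboots unless you stop it explicitly.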
Hardware recommendations depend on how the application is hosted, as the underlying stack (hypervisor, OS, containerization layer) must also be accounted for.
The recommended specifications for the application itself (in production mode) are:
The Magnolia application is OS-agnostic. Please refer to the certified stack for more details.
The dependencies can be found in our documentation, on the certified stack page. In a nutshell, the following components are required:
It’s best to use a technology stack your engineers are familiar with. Our recommendations are:
This depends on the technology used. Generally speaking, it can mean more compute resources ((v)CPUs, memory, and storage) to account for additional Magnolia public instances.
This depends on the on-premises architecture decisions.
This depends on the stack used to host the application. Operations remain the customer's responsibility.
We recommend regular backups (or snapshots) of the disks for the application server and any additional storage areas (external or network drives), and application-level backups for the database layer.
We recommend regular backups of the OS and application stack, in accordance with each company's internal IT policies.
The database should be backed up regularly (typically once a day or once a week), depending on the required RTO and RPO.
As an example, when using PostgreSQL as the backing RDBMS, base backups are typically performed once a week, combined with continuous WAL archiving (the archive timeout can be tuned to the required RPO). The same policy should be applied to any external (remote) filesystem (e.g. S3 or Azure Blob storage), if applicable.
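For PostgreSQL, the weekly-base-backup-plus-WAL-archiving policy described above can be sketched as follows. Host names, user names, paths, and the archive timeout are placeholders for illustration:

```shell
# postgresql.conf -- enable continuous WAL archiving (paths are examples):
#   archive_mode    = on
#   archive_command = 'cp %p /backups/wal/%f'
#   archive_timeout = 300   # force a WAL segment switch every 5 min (bounds RPO)

# Weekly base backup (e.g. run from cron): compressed tar format,
# streaming the WAL needed for a consistent restore.
pg_basebackup -h db-host -U replication_user \
  -D "/backups/base/$(date +%F)" -Ft -z -X stream
```

Restoring to a point in time then means unpacking the latest base backup and replaying the archived WAL segments up to the desired moment.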
Magnolia is no different from any other web application and should be hosted accordingly. Please refer to the Magnolia PaaS section of this question for more details.
We strongly recommend using security-hardened reverse proxies, load balancers, and web servers (e.g. following OWASP guidance). In addition, the use of a WAF (web application firewall) is strongly recommended.
Not applicable, as the application hosting and operation are the responsibility of the customer.
Please refer to the certified stack for the available options. In a nutshell, MySQL, MariaDB, Oracle, H2, and PostgreSQL are the products we see most often; PostgreSQL is the preferred RDBMS.
There are no specific requirements for integrating a CDN. Caching policies need to be set at the Magnolia level.
Not applicable, as the application hosting and operation are the responsibility of the customer.
Any VCS can be used. We commonly see TFS, Azure DevOps, and Git in use.
There are no specific requirements, as Magnolia is very flexible regarding integrations. Most integrations are done via REST APIs, but other technologies may be used as well, depending on the third-party system.
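As a sketch of the REST-based approach, the following call fetches page content from a Magnolia delivery endpoint. The host, endpoint name, and page path are assumptions; delivery endpoints are defined per project in Magnolia's REST configuration:

```shell
# Hypothetical host, endpoint name, and content path -- adjust to your project.
BASE_URL="https://public.example.com"
ENDPOINT="delivery/pages/v1"
PAGE_PATH="/travel/about"
URL="$BASE_URL/.rest/$ENDPOINT$PAGE_PATH"

# Fetch the page's content as JSON (add -u user:password if the endpoint
# is not anonymously readable):
curl -s "$URL"
```

Third-party systems can consume such JSON responses directly, or push content back into Magnolia via its writable REST endpoints, depending on the integration direction.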
Content replication is possible at the JCR workspace level by exporting selected items or entire workspaces to XML or YAML.
Magnolia is very flexible regarding user authentication and authorization. Authorization typically stays within Magnolia (even if users are managed externally), while authentication can either remain local to Magnolia or be delegated to an external provider via the SSO module.
The SSO module natively supports OpenID Connect-compatible identity providers (Azure AD or equivalent) and can be extended to other protocols (e.g. LDAP or SAML) by using an identity broker such as Keycloak.