
Upgrading DFS cluster to 5.4

This only applies to installations that are configured with DFS clustering. Native support was added in eZ Publish 5.4, and no longer relies on legacy kernel callbacks. As a consequence, you need to configure DFS on the new stack (no migration of data is required).

Assume a typical DFS configuration in ezpublish_legacy/settings/override/file.ini.append.php, like the following:

[ClusteringSettings]
FileHandler=eZDFSFileHandler

[eZDFSClusteringSettings]
MountPointPath=/var/nfs
DBBackend=eZDFSFileHandlerMySQLiBackend
DBHost=clusterhost
DBPort=3306
DBName=ezpublish_cluster
DBUser=clusteruser
DBPassword=clusterpassword
MetaDataTableNameCache=ezdfsfile_cache
Where should configuration be placed

Either ezpublish/config/ezpublish.yml, ezpublish/config/config.yml, or any equivalent file that you are using.
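
If you prefer to keep the cluster settings in a dedicated file, you can import it from your main configuration. A minimal sketch, assuming a hypothetical ezpublish/config/cluster.yml holding the snippets below:

imports:
    # cluster.yml is an example name; use whatever file holds your cluster config
    - { resource: cluster.yml }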

Cluster doctrine connection

First, if the cluster database is different from the content database (and it should be), you need to create a new doctrine dbal connection.

doctrine:
    dbal:
        connections:
            cluster:
                driver: pdo_mysql
                host: clusterhost
                port: 3306
                dbname: ezpublish_cluster
                user: clusteruser
                password: clusterpassword
                charset: UTF8

This connection will be made available as doctrine.dbal.cluster_connection.
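
Like any doctrine connection, it can be injected into your own services. A minimal sketch, assuming a hypothetical service and class that are not part of eZ Publish:

services:
    # acme.cluster_health_check and Acme\ClusterBundle\HealthCheck are illustrative names
    acme.cluster_health_check:
        class: Acme\ClusterBundle\HealthCheck
        arguments: ["@doctrine.dbal.cluster_connection"]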

Metadata handler configuration

File metadata in the ezdfs tables is handled by the legacy_dfs_cluster IO metadata handler. You need to declare a new one that uses the doctrine connection created above.

ez_io:
    metadata_handlers:
        dfs:
            legacy_dfs_cluster:
                connection: doctrine.dbal.cluster_connection

dfs is the name of our custom metadata handler, and legacy_dfs_cluster its type.

Flysystem adapter

In order to read and write files to the NFS mount point /var/nfs, you need to add a flysystem adapter. One important note is that the var storage directories will not be added when writing files, meaning that they need to be specified in the configuration.

oneup_flysystem:
    adapters:
        nfs_adapter:
            local:
                directory: "/var/nfs/$var_dir$/$storage_dir$"

$var_dir$ and $storage_dir$ will be replaced by the matching configuration values, and should be used as is for legacy compatibility. The value of "directory" will be set depending on the configuration, for instance to "/var/nfs/var/ezdemo_site/storage".
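
For reference, those placeholders come from the siteaccess-aware var_dir and storage_dir settings. A sketch of the values that would produce the directory above, assuming the ezdemo defaults:

ezpublish:
    system:
        default:
            # these match the legacy VarDir / StorageDir INI settings
            var_dir: var/ezdemo_site
            storage_dir: storage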

DFS binary data handler

The next step is to configure a binary data handler that uses the flysystem adapter we created above. It is very similar to what was done for the metadata one:

ez_io:
    binarydata_handlers:
        nfs:
            flysystem:
                adapter: nfs_adapter
Final step: configuring the metadata and binarydata handlers

The last thing to do is to set eZ Publish to use the binarydata and metadata handlers we created above, in the siteaccess-aware configuration:

ezpublish:
    system:
        default:
            io:
                metadata_handler: dfs
                binarydata_handler: nfs
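
If you only want these handlers for a specific siteaccess or group rather than for the whole installation, replace default with its name. A sketch, assuming a siteaccess group named ezdemo_group:

ezpublish:
    system:
        # ezdemo_group is an example; use your own siteaccess or group name
        ezdemo_group:
            io:
                metadata_handler: dfs
                binarydata_handler: nfs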