
DFS IO handler

DFS is a requirement for use in a Clustering setup. See Clustering for an overview of the feature.

What it is meant for

The DFS IO handler (legacy_dfs_cluster) can be used to store binary files on an NFS server. It uses a database to manipulate metadata, making up for the potential inconsistency of network-based filesystems.

Configuration

You need to configure both metadata and binarydata handlers.

As the binarydata handler, create a new Flysystem local adapter configured to read/write to the NFS mount point on each local server. As the metadata handler, create a dfs one, configured with a Doctrine connection. We recommend using a dedicated database for DFS metadata. In our example, it is accessed through a Doctrine connection named dfs.

# new doctrine connection for the dfs legacy_dfs_cluster metadata handler
doctrine:
    dbal:
        connections:
            dfs:
                driver: pdo_mysql
                host: 127.0.0.1
                port: 3306
                dbname: ezdfs
                user: root
                password: "rootpassword"
                charset: UTF8

# declare the handlers
ez_io:
    binarydata_handlers:
        nfs:
            flysystem:
                adapter: nfs_adapter
    metadata_handlers:
        dfs:
            legacy_dfs_cluster:
                connection: doctrine.dbal.dfs_connection

# set the handlers
ezpublish:
    system:
        default:
            io:
                metadata_handler: dfs
                binarydata_handler: nfs
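
The nfs_adapter referenced by the nfs binarydata handler above still needs to be declared as a Flysystem local adapter pointing at the NFS mount point. Here is a minimal sketch, assuming the OneupFlysystemBundle is used; the adapter name and the directory are placeholders to adapt to your own mount point and site layout:

# Sketch only: local Flysystem adapter used by the nfs binarydata handler.
# The directory below is an assumption; point it at your NFS mount point.
oneup_flysystem:
    adapters:
        nfs_adapter:
            local:
                directory: "/path/to/nfs/var/mysite/storage"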

Customizing the storage directory

eZ Publish 5.x required the NFS adapter directory to be set to the $var_dir$/$storage_dir$ part of the NFS path. This is no longer required with eZ Platform, but the default prefix used to serve binary files still matches this expectation.

If you decide to change this setting, make sure you also set io.url_prefix to a matching value. If you set the NFS adapter's directory to "/path/to/nfs/storage", use this configuration so that the files can be served by Symfony:

ezpublish:
    system:
        default:
            io:
                url_prefix: "storage"

As an alternative, you may serve images from NFS using a dedicated web server. If, in the example above, this server listens on http://static.example.com and uses /path/to/nfs/storage as its document root, configure io.url_prefix as follows:


ezpublish:
    system:
        default:
            io:
                url_prefix: "http://static.example.com/"
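
To illustrate the dedicated web server approach, here is a minimal sketch of such a static host, assuming nginx is used and reusing the hostname and document root from the example above (both are placeholders):

# Sketch only: a dedicated host serving binary files straight from the NFS mount.
server {
    listen 80;
    server_name static.example.com;
    root /path/to/nfs/storage;

    location / {
        try_files $uri =404;
    }
}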

You can read more about that on Binary files URL handling.

Web server rewrite rules

The default eZ Platform rewrite rules will let image requests be served directly from disk. When DFS is used, files matching ^/var/([^/]+/)?storage/images(-versioned)?/.* have to be passed through /web/app.php instead.

In any case, this specific rewrite rule must be placed before the ones that "ignore" image files and simply let the web server serve them directly.

Apache

RewriteRule ^/var/([^/]+/)?storage/images(-versioned)?/.* /app.php [L]
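
As an illustration of the placement requirement, if your virtual host contains a rule that lets the web server serve images directly (the second rule below is a hypothetical example of such a pass-through rule), the DFS rule must appear above it:

# DFS rule first: image requests are handled by Symfony
RewriteRule ^/var/([^/]+/)?storage/images(-versioned)?/.* /app.php [L]
# Hypothetical pass-through rule serving files directly; it must come after the DFS rule
RewriteRule ^/var/([^/]+/)?storage/images(-versioned)?/.* - [L]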

Nginx

rewrite "^/var/([^/]+/)?storage/images(-versioned)?/(.*)" "/app.php" break;