
Executing long-running console commands

Description

This page describes how to execute long-running console commands so that they don't run out of memory. Examples are a custom import command or the indexing command provided by the Solr Bundle.

Solution

Reducing memory usage

To avoid quickly running out of memory while executing such commands, you should make sure to:

  1. Always run in the prod environment using: --env=prod
    1. See Environments for further information on Symfony environments.
    2. See Logging and debug configuration for some of the features enabled in development environments, which by design use additional memory.
  2. Avoid Stash (Persistence cache) using too much memory in prod:

    1. If your system is running, or you need to use cache, then disable the Stash InMemory cache, as it does not limit the number of items in cache and grows unbounded:

      config_prod.yml (snippet, not a full example for stash config)

      stash:
          caches:
              default:
                  inMemory: false

      Also, if you use the FileSystem driver, make sure memKeyLimit is set to a low number; the default is 200 and it can be lowered like this:

      config_prod.yml

      stash:
          caches:
              default:
                  FileSystem:
                      memKeyLimit: 100
    2. If your setup is offline and the cache is cold, there is no risk of stale cache and you can completely disable the Stash cache. This will improve performance of import scripts:

      config_prod.yml (full example)

      stash:
          caches:
              default:
                  drivers: [ BlackHole ]
                  inMemory: false
  3. For logging using monolog, if you use either the default fingers_crossed handler or the buffer handler, make sure to specify buffer_size to limit how large the buffer grows before it gets flushed:

    config_prod.yml (snippet, not a full example for monolog config)

    monolog:
        handlers:
            main:
                type: fingers_crossed
                buffer_size: 200
  4. Run PHP without a memory limit using: php -d memory_limit=-1 app/console <command> (a combined example is shown after this list).
  5. Disable xdebug (a PHP extension for debugging/profiling PHP) when running the command, as it causes PHP to use much more memory.
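
Putting points 1, 4 and 5 together, a typical invocation looks like the sketch below. This is only an illustration: <command> is a placeholder for your own import/indexing command, and --no-debug is the standard Symfony console option for turning off debug mode.

    # Prod environment, debug mode off, no PHP memory limit; run with xdebug not loaded.
    php -d memory_limit=-1 app/console <command> --env=prod --no-debug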

 

Note: Memory will still grow

Even when everything is configured as described above, memory will grow for each iteration of indexing/inserting a content item, by at least 1kB per iteration after the initial 100 rounds. This is expected behavior; to be able to handle more iterations you will have to do one or several of the following:

  • Change the import/index script in question to use process forking to avoid the issue.
  • Upgrade PHP: newer versions of PHP are typically more memory-efficient.
  • Run the console command on a machine with more memory (RAM).

Process forking with Symfony

The recommended way to completely avoid "memory leaks" in PHP in the first place is to use processes, and for console scripts this is typically done using process forking, which is quite easy to do with Symfony.

The things you will need to do:

  1. Change your command so it supports slice parameters, for instance a batch size and a child-offset parameter.
    1. If defined, the child-offset parameter denotes that the process is a child; this could also have been accomplished with two separate commands.
    2. If not defined, it is the master process, which will execute child processes until nothing is left to process.
  2. Change the command so that the master process takes care of forking child processes in slices (a sketch is shown after this list).
    1. For in-order execution, you may look at our platform installer code, which forks out Solr indexing after installation to avoid cache issues.
    2. For parallel execution of the slices, see the Symfony documentation for further instructions.
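
The sketch below illustrates the master side of this pattern with the Symfony Process component (linked under "Related topics"). It is a minimal example, not code from the platform installer: the command name app:import and its --offset/--limit options are hypothetical placeholders for your own sliced command.

    <?php
    // Master process: loop over slices and run each slice in a fresh child
    // PHP process, so memory used by a slice is released when the child exits.
    use Symfony\Component\Process\PhpExecutableFinder;
    use Symfony\Component\Process\Process;

    $batchSize  = 500;      // items handled per child process (hypothetical)
    $totalCount = 100000;   // total number of items to import/index (hypothetical)
    $php        = (new PhpExecutableFinder())->find();

    for ($offset = 0; $offset < $totalCount; $offset += $batchSize) {
        // Each child runs the same console command for one slice, in prod,
        // with debug mode off and without a PHP memory limit.
        $process = new Process(sprintf(
            '%s -d memory_limit=-1 app/console app:import --offset=%d --limit=%d --env=prod --no-debug',
            $php,
            $offset,
            $batchSize
        ));
        $process->setTimeout(null); // long-running child, disable the default timeout
        $process->mustRun();        // runs children one by one (in-order), throws on failure

        echo $process->getOutput();
    }

Because every slice runs in its own PHP process, memory growth stays bounded in the parent; for parallel execution you would start several Process instances with start() and wait on them, instead of calling mustRun() sequentially.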

 


Related topics:

Environments

Symfony Process Component [symfony.com]

How to Contribute
