
How to run long-running console commands

This page describes how to execute long-running console commands so that they don't run out of memory. Examples include a custom import command or the indexing command provided by the Solr Bundle.


Reducing memory usage

To avoid quickly running out of memory while executing such commands, make sure to:

  1. Always run in the prod environment using: --env=prod
    1. See the Using environments page for further information on Symfony environments.
    2. See Logging & Debug configuration for some of the features enabled in development environments, which by design use memory.
  2. Avoid Stash (Persistence cache) using too much memory in prod:

    1. If your system is running, or you need to use cache, then disable the Stash InMemory cache, as it does not limit the number of items in the cache and grows without bound:

      config_prod.yml (snippet, not a full example of Stash config)

      stash:
          caches:
              default:
                  inMemory: false

      Also, if you use the FileSystem driver, make sure memKeyLimit is set to a low number; the default should be 200, and it can be lowered like this:

      config_prod.yml

      stash:
          caches:
              default:
                  FileSystem:
                      memKeyLimit: 100
    2. If your setup is offline and the cache is cold, there is no risk of stale cache, and you can completely disable the Stash cache. This will improve the performance of import scripts:

      config_prod.yml (full example)

      stash:
          caches:
              default:
                  drivers: [ Blackhole ]
                  inMemory: false
  3. For logging using Monolog, if you use either the default fingers_crossed handler or the buffer handler, make sure to specify buffer_size to limit how large the buffer grows before it gets flushed:

    config_prod.yml (snippet, not a full example of Monolog config)

    monolog:
        handlers:
            main:
                type: fingers_crossed
                buffer_size: 200
  4. Run PHP without memory limits using: php -d memory_limit=-1 app/console <command>
  5. Disable Xdebug (a PHP extension used to debug/profile PHP) when running the command, as it causes PHP to use much more memory.


Note: Memory will still grow

Even when everything is configured as described above, memory will grow for each iteration of indexing/inserting a content item, by at least 1kB per iteration after the initial 100 rounds. This is expected behavior; to be able to handle more iterations you will have to do one or several of the following:

  • Change the import/index script in question to use process forking to avoid the issue.
  • Upgrade PHP: newer versions of PHP are typically more memory-efficient.
  • Run the console command on a machine with more memory (RAM).

Process forking with Symfony

The recommended way to completely avoid "memory leaks" in PHP in the first place is to use separate processes; for console scripts this is typically done using process forking, which is quite easy to do with Symfony.

The things you will need to do:

  1. Change your command so it supports taking slice parameters, for instance a batch size and a child-offset parameter.
    1. If defined, the child-offset parameter denotes that the process is a child; this could also have been accomplished with two separate commands.
    2. If not defined, it is the master process, which will execute child processes until nothing is left to process.
  2. Change the command so that the master process takes care of forking child processes in slices.
    1. For in-order execution, you may look to our platform installer code, used to fork out Solr indexing after installation to avoid cache issues.
    2. For parallel execution of the slices, see the Symfony documentation for further instruction.
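As a rough sketch of what the master process does, the slicing loop can also be driven from the shell. The --batch-size and --child-offset option names below are assumptions matching step 1, and the real console invocation is left commented out:

```shell
# Hypothetical master loop: process TOTAL items in slices of BATCH,
# each slice in a fresh PHP process so all memory is released between slices.
BATCH=100
TOTAL=1000
OFFSET=0
while [ "$OFFSET" -lt "$TOTAL" ]; do
    # The real invocation would look something like (option names are assumptions):
    #   php -d memory_limit=-1 app/console <command> --env=prod \
    #       --batch-size="$BATCH" --child-offset="$OFFSET"
    echo "processing slice at offset $OFFSET"
    OFFSET=$((OFFSET + BATCH))
done
```

Because each slice runs in its own short-lived PHP process, memory leaked within one slice is reclaimed by the operating system before the next slice starts, so leaks never accumulate across the whole run.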


