Fatal error when processing more than 200 records per minute.

  • cbarwin
    Participant
    1 year, 5 months ago #27109

    We have a cron job on one of our sites that processes a fixed number of records per minute.

    Until about last month, it was set to process 1100 records per minute. Then it started taking longer than 1 minute to process, so I lowered it to 900 records per minute. Now, beginning last week, it throws fatal errors and does not update a single record.

    The following is the error we’re seeing in our logs:

    [error] 51703#51703: *3251 FastCGI sent in stderr: "PHP message: PHP Fatal error: Uncaught Error: Class 'GuzzleHttp\Ring\Exception\ConnectException' not found in /www/s41tradeconnect_319/public/wp-content/plugins/wpsolr-pro/wpsolr/core/vendor/guzzlehttp/ringphp/src/Client/CurlFactory.php:126
    Stack trace:
    #0 /www/s41tradeconnect_319/public/wp-content/plugins/wpsolr-pro/wpsolr/core/vendor/guzzlehttp/ringphp/src/Client/CurlFactory.php(91): GuzzleHttp\Ring\Client\CurlFactory::createErrorResponse(Array, Array, Array)
    #1 /www/s41tradeconnect_319/public/wp-content/plugins/wpsolr-pro/wpsolr/core/vendor/guzzlehttp/ringphp/src/Client/CurlHandler.php(96): GuzzleHttp\Ring\Client\CurlFactory::createResponse(Array, Array, Array, Array, Resource id #1501)
    #2 /www/s41tradeconnect_319/public/wp-content/plugins/wpsolr-pro/wpsolr/core/vendor/guzzlehttp/ringphp/src/Client/CurlHandler.php(68): GuzzleHttp\Ring\Client\CurlHandler->_invokeAsArray(Array)
    #3 /www/s41tradeconnect_319/public/wp-content/plugins/wpsolr-pro/wpsolr/core/vendor/guzzlehttp/ringphp/src/Client/Middl" while reading response header from upstream, client: 102.65.133.102, server: s41tradeconnect.com, request: "GET /eclipse-price-inventory-db.php?kinsta-cache-cleared=true HTTP/1.0", upstream: "fastcgi://unix:/var/run/php7.4-fpm-s41tradeconnect.sock:", host: "s41tradeconnect.com"

    This is the error we’re seeing generated from the cronjob output:

    window.NREUM||(NREUM={});NREUM.info={"beacon":"bam.nr-data.net","licenseKey":"xxx","applicationID":"xx","transactionName":"xxx","queueTime":0,"applicationTime":54624,"atts":"SxBWEl9MSEg=","errorBeacon":"bam.nr-data.net","agent":""}

    We are using add_remove_document_to_solr_index() to index into WPSOLR.

    We have an identical environment on another website that is not experiencing this error. The only change I made to WPSOLR, incidentally around the time the error started occurring, was updating the license after it had expired. I had to work with support to activate, deactivate, and reactivate our license.

    Reducing it to 200 records per minute works fine, but that's insufficient for the scope of indexing we need to perform.

    Please advise.

    Thanks!

    cbarwin
    Participant
    1 year, 5 months ago #27113

    No, our New Relic is disabled right now.

    wpsolr
    Keymaster
    1 year, 5 months ago #27114

    window.NREUM||(NREUM={});NREUM.info={"beacon":"bam.nr-data.net","licenseKey":"xxx","applicationID":"xx","transactionName":"xxx","queueTime":0,"applicationTime":54624,"atts":"SxBWEl9MSEg=","errorBeacon":"bam.nr-data.net","agent":""}

    This is a New Relic message, apparently.

    cbarwin
    Participant
    1 year, 5 months ago #27115

    Let me try to run the process again while it is definitely disabled. Thanks.

    cbarwin
    Participant
    1 year, 5 months ago #27116

    We ran the process again, ensuring New Relic was disabled. We did not get the bam.nr-data.net message this time, but we did get the fatal error from my initial message.

    wpsolr
    Keymaster
    1 year, 5 months ago #27118

    Is your other site also at Kinsta?

    What is /eclipse-price-inventory-db.php ?

    cbarwin
    Participant
    1 year, 4 months ago #27388


    Yes, the other site is also on Kinsta, and the URL to refresh inventory there is https://inventory.s41tradeconnect.com/eclipse-price-inventory-db.php. It is currently set to 700 records per minute, which works perfectly fine without any error.

    On https://s41tradeconnect.com/eclipse-price-inventory-db.php we set 200 records per minute and it works fine, but when we increase the number of records beyond 200 it starts throwing a critical error.

    After the critical error it keeps throwing the same fatal error until we reset it to 200. The error recorded in the log is below.

    2021/07/19 07:18:50 [error] 63760#63760: *986026 FastCGI sent in stderr: "PHP message: PHP Fatal error: Uncaught Error: Class 'GuzzleHttp\Ring\Exception\ConnectException' not found in /www/s41tradeconnect_319/public/wp-content/plugins/wpsolr-pro/wpsolr/core/vendor/ezimuel/ringphp/src/Client/CurlFactory.php:126
    Stack trace:
    #0 /www/s41tradeconnect_319/public/wp-content/plugins/wpsolr-pro/wpsolr/core/vendor/ezimuel/ringphp/src/Client/CurlFactory.php(91): GuzzleHttp\Ring\Client\CurlFactory::createErrorResponse(Array, Array, Array)
    #1 /www/s41tradeconnect_319/public/wp-content/plugins/wpsolr-pro/wpsolr/core/vendor/ezimuel/ringphp/src/Client/CurlHandler.php(96): GuzzleHttp\Ring\Client\CurlFactory::createResponse(Array, Array, Array, Array, Resource id #1500)
    #2 /www/s41tradeconnect_319/public/wp-content/plugins/wpsolr-pro/wpsolr/core/vendor/ezimuel/ringphp/src/Client/CurlHandler.php(68): GuzzleHttp\Ring\Client\CurlHandler->_invokeAsArray(Array)
    #3 /www/s41tradeconnect_319/public/wp-content/plugins/wpsolr-pro/wpsolr/core/vendor/ezimuel/ringphp/src/Client/Middleware.php(30):" while reading response header from upstream, client: 12.227.37.194, server: s41tradeconnect.com, request: "GET /eclipse-price-inventory-db.php?kinsta-cache-cleared=true HTTP/1.0", upstream: "fastcgi://unix:/var/run/php7.4-fpm-s41tradeconnect.sock:", host: "s41tradeconnect.com"

    Please let us know what this error means and how we can resolve it. We need at least 700 records per minute for the scope of indexing we need to perform.

    You can check the URL by changing the number of records in the query string: https://s41tradeconnect.com/eclipse-price-inventory-db.php?xrpm=200

    Thanks

    wpsolr
    Keymaster
    1 year, 4 months ago #27431

    "It is currently set to 700 records per minute"

    If you are referring to the indexing batch size, it defines how many posts are sent to the index in a single call. It has nothing to do with a speed per minute.

    A larger batch size means fewer calls to your index, so in theory faster indexing of your data.
    But a larger batch size can also degrade your WP performance (PHP memory, SQL memory) and your index's performance.

    You can try different batch sizes to verify which one works better for you.
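    The trade-off described above can be sketched in a few lines of plain PHP. This is only an illustration; `index_in_batches()` and the `$index_batch` callable are stand-ins for whatever actually pushes posts to Solr, not WPSOLR functions.

    ```php
    <?php
    // Sketch: instead of one index call per post, group post IDs into
    // batches and make one call per batch. A larger $batch_size means
    // fewer round-trips, at the cost of more memory per call.
    function index_in_batches(array $post_ids, int $batch_size, callable $index_batch): int
    {
        $calls = 0;
        foreach (array_chunk($post_ids, $batch_size) as $batch) {
            $index_batch($batch); // one round-trip for up to $batch_size posts
            $calls++;
        }
        return $calls; // grows smaller as $batch_size grows
    }
    ```

    With 1000 posts, a batch size of 200 yields 5 index calls instead of 1000 per-post calls.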

    cbarwin
    Participant
    1 year, 4 months ago #27432

    We require 500 records per minute; we made the batch smaller so that it does not affect performance. It runs for some time and then starts throwing the fatal error stated above, and it keeps throwing that error until the batch is reset to 200 records per minute. It should not throw a fatal error; there should be some exception handling that resumes batch indexing once server resources are available again. I know there is no limit on the number of records sent in each batch, but in our case it simply gets stuck with fatal errors. With exception handling it might fail to index a record, but it would resume once the server has capacity again. Alternatively, please suggest another method to send posts for indexing; we are currently using add_remove_document_to_solr_index().
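    The kind of exception handling requested above could look like the following minimal sketch in plain PHP. `send_with_retry()` and the `$send` callable are hypothetical stand-ins, not part of WPSOLR; the idea is simply that a failed batch is retried with backoff and, if it keeps failing, skipped rather than aborting the whole run with a fatal error.

    ```php
    <?php
    // Sketch: wrap a batch send in try/catch so a transient connection
    // failure is retried instead of killing the entire indexing run.
    function send_with_retry(callable $send, array $batch, int $max_attempts = 3, int $delay_seconds = 1): bool
    {
        for ($attempt = 1; $attempt <= $max_attempts; $attempt++) {
            try {
                $send($batch);
                return true; // batch indexed successfully
            } catch (Throwable $e) {
                if ($attempt === $max_attempts) {
                    error_log("Batch skipped after {$max_attempts} attempts: " . $e->getMessage());
                    return false; // give up on this batch; the caller moves on
                }
                if ($delay_seconds > 0) {
                    sleep($attempt * $delay_seconds); // simple linear backoff
                }
            }
        }
        return false;
    }
    ```

    The caller can log skipped batches and re-queue them later, so one overloaded minute does not wedge the whole process.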

    wpsolr
    Keymaster
    1 year, 4 months ago #27433

    add_remove_document_to_solr_index() is a private method, unsupported outside of WPSOLR's code. Its signature can change without warning.

    Anyway, calling add_remove_document_to_solr_index() for each post id is very inefficient, which is why you have performance issues.

    To call your index in batches of 100, 200, or 500 posts, you need to set up the WPSOLR Cron add-on: https://www.wpsolr.com/documentation/configuration-step-by-step-schematic/activate-extensions/cron-scheduling/

    cbarwin
    Participant
    1 year, 3 months ago #27436

    We have custom meta fields to handle the inventory of a variant, and we update the inventory meta fields using update_post_meta(). We send posts for indexing only if the inventory value has changed from what we already have in the database.

    As per your suggestion, we are now using wp_update_post( get_post( $variantId ) ) to send posts for indexing instead of add_remove_document_to_solr_index(), and a separate cron job indexes the posts. Do you think this will work efficiently? I am monitoring the process and it seems to be working (no fatal error), but triggering wp_update_post() has increased the time to complete the process, which leads to a 504 timeout. Any suggestions for an alternative to wp_update_post() when we are only updating a single meta field?
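    The change-detection step described above (only reindex variants whose inventory actually changed) can be isolated as pure PHP, which keeps expensive wp_update_post() calls to a minimum. This is a sketch under assumptions: `variants_needing_reindex()` is a hypothetical helper, and `$stored` stands in for values that in WordPress would come from get_post_meta().

    ```php
    <?php
    // Sketch: compare incoming inventory values against stored ones and
    // return only the variant IDs that actually need reindexing.
    function variants_needing_reindex(array $stored, array $incoming): array
    {
        $changed = [];
        foreach ($incoming as $variant_id => $inventory) {
            // Reindex when the variant is new or its inventory differs.
            if (!array_key_exists($variant_id, $stored) || $stored[$variant_id] !== $inventory) {
                $changed[] = $variant_id;
            }
        }
        return $changed;
    }
    ```

    Only the IDs returned here would then be touched with update_post_meta() / wp_update_post(), so unchanged variants cost nothing.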

    Thanks

    wpsolr
    Keymaster
    1 year, 3 months ago #27437

    Each time you update a post or post type with "Real-time indexing" enabled in WPSOLR screen 2.2, the post is indexed by WPSOLR.

    If you have lots of updates (imported, or manually triggered as you mentioned), you could deactivate "Real-time indexing" and instead run a cron with status "incremental" every hour or less. The cron will pick up only the posts that were updated.

    You can also add a daily cron with status “Full indexing” if you wish.

    cbarwin
    Participant
    1 year, 3 months ago #27439

    When we use the method add_remove_document_to_solr_index(), what causes it to trigger the following fatal error? Is there a specific setting or step we are missing?

    PHP Fatal error: Uncaught Error: Class 'GuzzleHttp\Ring\Exception\ConnectException' not found in /www/s41tradeconnect_319/public/wp-content/plugins/wpsolr-pro/wpsolr/core/vendor/ezimuel/ringphp/src/Client/CurlFactory.php:126
    Stack trace:
    #0 /www/s41tradeconnect_319/public/wp-content/plugins/wpsolr-pro/wpsolr/core/vendor/ezimuel/ringphp/src/Client/CurlFactory.php(91): GuzzleHttp\Ring\Client\CurlFactory::createErrorResponse(Array, Array, Array)
    #1 /www/s41tradeconnect_319/public/wp-content/plugins/wpsolr-pro/wpsolr/core/vendor/ezimuel/ringphp/src/Client/CurlHandler.php(96): GuzzleHttp\Ring\Client\CurlFactory::createResponse(Array, Array, Array, Array, Resource id #1500)
    #2 /www/s41tradeconnect_319/public/wp-content/plugins/wpsolr-pro/wpsolr/core/vendor/ezimuel/ringphp/src/Client/CurlHandler.php(68): GuzzleHttp\Ring\Client\CurlHandler->_invokeAsArray(Array)
    #3 /www/s41tradeconnect_319/public/wp-content/plugins/wpsolr-pro/wpsolr/core/vendor/ezimuel/ringphp/src/Client/Middleware.php(30):" while reading response header from upstream, client: 12.227.37.194, server: s41tradeconnect.com, request: "GET /eclipse-price-inventory-db.php?kinsta-cache-cleared=true HTTP/1.0", upstream: "fastcgi://unix:/var/run/php7.4-fpm-s41tradeconnect.sock:", host: "s41tradeconnect.com"

    wpsolr
    Keymaster
    1 year, 3 months ago #27440

    Our cron is the only method supported. I cannot help you with calling add_remove_document_to_solr_index() directly.
