Pet Project · Mr. Market NEPSE

The database fails due to frequent updates. Here is how I am solving it.

Database Issues in Mr. Market NEPSE During Market Hours

In Mr. Market NEPSE, during market hours from 11 AM to 3 PM, we retrieve the latest price of stocks every 15 to 30 seconds, depending on the trader's role. Polling the LTP (last traded price) is a crucial operation for placing orders through our system. NEPSE operates within a tight price band, allowing only a 2% movement above or below the LTP. As traders place orders on our platform, we must capture this range and execute the orders accordingly.

Problem

Due to the increasing number of traders and their trade goals, we are experiencing frequent database writes, which cause the database to fail and throw the following errors:

  1. SQLSTATE[HY000] [2002] Connection refused at

  2. General error: 1205 Lock wait timeout exceeded; try restarting transaction (Connection: mysql)

Initial Fix

As an initial fix, I applied a lazy solution. Since I am using Docker, I added the restart: unless-stopped policy to make the service resilient to crashes or Docker restarts. However, this solution still causes downtime and job failures, as there is a delay of 5 to 10 seconds during which the database remains inaccessible.
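For reference, that restart policy is a one-line change in the docker-compose service definition (the service and image names below are illustrative, not taken from the actual setup):

```yaml
# docker-compose.yml (service/image names are assumptions)
services:
  mysql:
    image: mysql:8.0
    restart: unless-stopped   # bring the container back up after a crash
```

As noted above, this only makes the crash recoverable; it does not remove the 5–10 second window where the database is unreachable.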


Caching Query

We are using a three-server setup where all of the background processes connect to our main Redis server. I use Laravel Horizon to manage all the background processes. However, due to the cache invalidation logic and multiple jobs hitting the cache at once, I ran into a lot of cache misses. For now, I have reverted that code to avoid further complications.
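One way the cache misses could be tamed (a sketch of an alternative, not what was actually shipped): let exactly one job rebuild a missing LTP entry at a time using Laravel's atomic locks, so concurrent jobs don't all fall through to the source together. The key names, TTLs, and the `fetchLtpFromNepse` helper below are assumptions for illustration:

```php
use Illuminate\Support\Facades\Cache;

// Read the cached LTP for a symbol; on a miss, only one job refreshes it.
function latestPrice(string $symbol)
{
    $key = "ltp:{$symbol}";                      // illustrative key name

    $cached = Cache::get($key);
    if ($cached !== null) {
        return $cached;
    }

    // Cache::lock()->get(closure) runs the closure only if the lock is
    // acquired, releases it afterwards, and returns false otherwise.
    $fresh = Cache::lock("lock:{$key}", 5)->get(function () use ($key, $symbol) {
        $price = fetchLtpFromNepse($symbol);     // hypothetical fetcher
        Cache::put($key, $price, now()->addSeconds(30));
        return $price;
    });

    return $fresh === false ? null : $fresh;     // another job holds the lock
}
```

The trade-off is that callers who lose the lock race see a brief miss instead of a thundering herd of rebuilds.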


Redis Streams (Publish/Subscribe Pattern)

Since we are writing data too frequently, I took a lesson from the event-driven pattern. As I need the LTP (last traded price) to process order execution in my system, I leveraged Redis Streams, which, unlike plain pub/sub, retain messages so a consumer can pick them up.

Producer

I am passing the stock price to stock_price_stream:

  Redis::xAdd('stock_price_stream', '*', $stockPrice);
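A slightly fuller producer sketch, with the payload spelled out (the field names in $stockPrice are assumptions). Since consumers only care about recent prices, the optional trailing arguments to xAdd cap the stream at an approximate maximum length so Redis memory stays bounded:

```php
use Illuminate\Support\Facades\Redis;

// Illustrative payload; the real field names may differ.
$stockPrice = [
    'symbol' => 'NABIL',
    'ltp'    => '620.5',
    'ts'     => (string) now()->timestamp,
];

// '*' lets Redis assign the entry ID. The last two arguments trim the
// stream to roughly 10,000 entries (approximate MAXLEN trimming).
Redis::xAdd('stock_price_stream', '*', $stockPrice, 10000, true);
```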

Consumer Process

For the consumer, I read new messages in an infinite loop; each iteration blocks on the following call:

$messages = Redis::xRead(['stock_price_stream' => '$'], 1, 0);
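One caveat with this call: '$' means "entries published after this read starts", so re-reading from '$' on every iteration of the loop can silently skip prices that arrive while an order is being processed. A sketch of a loop that avoids this by tracking the last delivered entry ID (the `processOrderExecution` handler is hypothetical):

```php
use Illuminate\Support\Facades\Redis;

$lastId = '$';  // start with entries published after the consumer boots

while (true) {
    // Block up to 5 seconds waiting for one entry newer than $lastId.
    $messages = Redis::xRead(['stock_price_stream' => $lastId], 1, 5000);

    if (empty($messages)) {
        continue;  // timed out; block again
    }

    foreach ($messages['stock_price_stream'] as $id => $stockPrice) {
        processOrderExecution($stockPrice);  // hypothetical order handler
        $lastId = $id;  // resume after this entry, not after '$'
    }
}
```

For multiple competing consumers, Redis consumer groups (XREADGROUP/XACK) would be the next step, but a single tracked ID is enough for one consumer process.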

How This Solves the Database Issue

The database issue is resolved on the consumer side as follows:

  1. Batch Updates:

    • After processing the record and executing the order, I hold the data in an array and perform a batch update to the database table.
  2. Key Benefits:

    • Uninterrupted Order Execution:
      • This approach ensures that the order execution flow is not hampered, as the system does not pause for individual update queries in the database.
    • Reduced Database Load:
      • Instead of executing multiple single update queries, the database processes a single batch update. This reduces the number of operations per second (OPS) and prevents the database from crashing.

By implementing this solution, the system handles high-frequency updates more efficiently, maintaining smooth operations without overloading the database.
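The batch-update step above can be sketched as follows. This is an illustration, not the production code: the table, column names, and the flush threshold of 100 rows are assumptions. Laravel's upsert issues a single INSERT ... ON DUPLICATE KEY UPDATE statement, so one query replaces up to 100 individual updates:

```php
use Illuminate\Support\Facades\DB;

$buffer = [];

// Called once per processed stream entry; flushes in batches.
function bufferPrice(array &$buffer, array $stockPrice): void
{
    // Keying by symbol means a newer price overwrites an unflushed one.
    $buffer[$stockPrice['symbol']] = [
        'symbol'     => $stockPrice['symbol'],
        'ltp'        => $stockPrice['ltp'],
        'updated_at' => now(),
    ];

    // Flush every 100 rows; a time-based flush would also be sensible
    // so a quiet market does not leave rows sitting in the buffer.
    if (count($buffer) >= 100) {
        DB::table('stock_prices')->upsert(
            array_values($buffer),
            ['symbol'],               // unique key: insert or update per symbol
            ['ltp', 'updated_at']     // columns to update on conflict
        );
        $buffer = [];
    }
}
```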

Here is a screenshot of Laravel Horizon, a first-party Laravel package that provides code-driven, supervisord-style configuration for queue workers along with an intuitive dashboard.

Laravel Horizon Dashboard

If you find any flaw in this pattern, please do comment.

Comments (1)


12 days later

Nice one! It is really helpful, as the solution will definitely reduce the query count, so the approach is good.