Pacer: Pinterest’s New Generation of Asynchronous Computing Platform


Qi Li | Software Engineer, Core Services; Zhihuang Chen | Software Engineer, Core Services; Ping Jin | Engineering Manager, Core Services
At Pinterest, a wide range of functionalities and features for various business needs and products are supported by an asynchronous job execution platform called Pinlater, which was open-sourced several years ago. Use cases on the platform span from saving Pins by Pinners, to notifying Pinners about various updates, to processing images/videos, and so on. Pinlater handles billions of job executions every day. The platform supports many interesting features, like at-least-once semantics, job scheduling for future execution, and dequeuing/processing speed control on individual job queues.
With the growth of Pinterest over the past few years and the increased traffic to Pinlater, we discovered numerous limitations of Pinlater, including scalability bottlenecks, hardware inefficiency, lack of isolation, and cost. We also encountered new challenges with the platform, including ones that impacted the throughput and reliability of our data storage.
By analyzing these issues, we realized that some of them, such as lock contention and queue-level isolation, could not be addressed within the existing platform. Thus, we decided to revamp the architecture of the platform in its entirety, addressing the identified limitations and optimizing existing functionality. In this post, we will walk through the new architecture and the new opportunities it has yielded (like a FIFO queue).
Pinlater has three major components:
- A stateless Thrift service to handle job submission and scheduling, with three core APIs: enqueue, dequeue, and ACK
- A backend datastore to save the jobs, including payloads and metadata
- Job workers in worker pools to pull jobs continuously, execute them, and send a positive or negative ACK for each job depending on whether the execution succeeded or failed (see the sketch after this list)
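To make the enqueue/dequeue/ACK flow concrete, here is a minimal sketch of the worker loop, using simplified stand-in types (`JobService`, `Job`) rather than the actual open-source Thrift definitions:

```java
// Hypothetical, simplified stand-ins for a Pinlater-style job API.
interface JobService {
  Job dequeue(String queueName);                   // claim one job from a queue
  void ack(String queueName, long jobId, boolean succeeded);
}

record Job(long jobId, byte[] payload) {}

class WorkerLoop {
  private final JobService client;

  WorkerLoop(JobService client) { this.client = client; }

  void run(String queueName) {
    while (!Thread.currentThread().isInterrupted()) {
      Job job = client.dequeue(queueName);
      if (job == null) continue;                   // nothing available right now
      boolean succeeded = false;
      try {
        execute(job.payload());
        succeeded = true;
      } catch (Exception e) {
        // Leave succeeded = false: the negative ACK lets the platform
        // reschedule the job, which is what gives at-least-once semantics.
      }
      client.ack(queueName, job.jobId(), succeeded);
    }
  }

  private void execute(byte[] payload) { /* job-specific logic */ }
}
```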
As Pinlater handles more use cases and traffic, the platform no longer works as well. The uncovered issues include, but are not limited to:
- Because every queue has one table in each datastore shard and every dequeue request scans all shards to find available jobs, lock contention occurs in the datastore when multiple Thrift server threads try to grab data from the same table. It becomes more severe as traffic increases and the Thrift services scale up. This degrades the performance of Pinlater, impacts the throughput of the platform, and limits its scalability.
- Executions of jobs impact one another, as jobs from multiple job queues with different characteristics run on the same worker host. One bad job queue can bring the whole worker cluster down, so other job queues are impacted as well. Additionally, mixing these jobs together makes performance tuning nearly impossible, as job queues may require different instance types.
- Various functionalities share the same Thrift services and impact one another, yet they have very different reliability requirements. For example, an enqueue failure could affect site-wide SR (success rate), as enqueuing jobs is one step of some critical flows, while a dequeue failure just results in job execution delay, which we can afford for a short period of time.
To achieve better performance and resolve the issues mentioned above, we revamped the architecture in Pacer by introducing new components and new mechanisms for storing, accessing, and isolating job data and queues.
Pacer consists of the following major components:
- A stateless Thrift service to handle job submission and scheduling
- A backend datastore to save the jobs and their metadata
- A stateful dequeue broker service to pull jobs from the datastore
- Helix with Zookeeper to dynamically assign partitions of job queues to the dequeue broker service
- Dedicated worker pools for each queue on K8s to execute the jobs
As you can see, new components, like a dedicated dequeue broker service, Helix, and K8s, are introduced. The motivation for these components in the new architecture is to resolve the issues in Pinlater.
- Helix with Zookeeper helps manage the assignment of partitions of job queues to dequeue brokers. Every partition of a job queue in the datastore will be assigned to a dedicated dequeue broker service host, and only this broker host can dequeue from this partition, so there is no competition over the same job data.
- The dequeue broker service takes care of fetching data of job queues from the datastore and caching it in local memory buffers. This prefetching reduces latency when a worker pool pulls jobs from a job queue, because the memory buffer is much faster than the datastore. Also, decoupling dequeue from enqueue in the Thrift service eliminates any potential impact between enqueue and dequeue.
- Dedicated worker pods for a job queue are allocated on K8s, instead of sharing worker hosts with other job queues as in Pinlater. This completely eliminates the impact of job executions across different job queues. It also makes customized resource allocation and planning for a job queue possible, thanks to its independent runtime environment, which improves hardware efficiency.
By migrating existing job queues from Pinlater to Pacer, several improvements have been achieved so far:
- Lock contention is completely gone in the datastore thanks to the new mechanism for pulling job data
- Overall efficiency of hardware utilization has significantly improved, including the datastore and worker hosts.
- Each job is executed independently in its own environment, with customized configuration, which has improved performance (as compared to that of Pinlater).
As shown above, new components are introduced in Pacer to address various issues in Pinlater. A few points are worth discussing in more detail.
Job Data Sharding
In Pinlater, every job queue has a partition in each shard of the datastore cluster, no matter how much data and traffic the job queue has. There are several problems with this design.
- Resources are wasted. Even for job queues with small volumes of data, a partition is created in each shard of the datastore and may hold very little data or no data at all. Because the Thrift service needs to scan every partition to get enough jobs, this results in extra calls to the datastore. Based on our metrics, more than 50% of calls get empty results before getting data.
- Lock contention becomes worse in some scenarios, such as when multiple Thrift service threads compete for the small amount of data of a small job queue in a single shard. The datastore has to spend its resources mitigating lock contention during data querying.
- Some functionalities cannot be supported, e.g. executing jobs of a job queue in chronological order of enqueue time (FIFO), because workers pull jobs from multiple shards concurrently, and no global order can be guaranteed, only local order.
In Pacer, the following improvements are made.
- A job queue will be partitioned across only a subset of the datastore's shards, depending on its data volume and traffic. A mapping of which shards hold the data of a job queue is built, as illustrated in the sketch after this list.
- Lock contention in the datastore can be addressed with the help of the dedicated dequeue broker service layer. The dequeue brokers do not need to query every datastore shard for a queue, because they know which datastore shards store the queue's partitions.
- Support for some functionalities becomes possible, e.g. execution in chronological order, as long as only one partition is created for a job queue.
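As an illustration of the first point, a queue's placement could be captured by a small mapping from the queue to the subset of shards that hold its partitions; the schema and names below are our assumptions, not Pacer's actual configuration:

```java
import java.util.List;
import java.util.Map;

// Hypothetical placement config: each queue maps to only the shards that
// actually hold its partitions, instead of to every shard as in Pinlater.
record QueuePlacement(String queueName, List<Integer> shardIds) {
  // FIFO execution is only possible when the queue has a single partition,
  // since no global order can be guaranteed across shards.
  boolean supportsFifo() { return shardIds.size() == 1; }
}

class PlacementRegistry {
  private final Map<String, QueuePlacement> placements = Map.of(
      "save_pin", new QueuePlacement("save_pin", List.of(0, 3, 7)),    // high volume
      "fifo_example", new QueuePlacement("fifo_example", List.of(2))); // FIFO queue

  // Brokers consult the mapping and query only these shards for a queue,
  // instead of scanning every shard in the cluster.
  List<Integer> shardsFor(String queue) {
    return placements.get(queue).shardIds();
  }
}
```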
Dequeue broker service with Helix & Zookeeper
The dequeue broker in Pacer addresses several critical limitations of Pinlater by eliminating lock contention in the datastore.
The dequeue broker runs as a stateful service, and each partition of a job queue will be assigned to one specific broker in the cluster. Only this broker is responsible for pulling job data from the corresponding table in a shard of the datastore, so there is no competition between different brokers. With this deterministic job fetching and no lock contention, Pacer spends the resources of the MySQL hosts more efficiently on actual job fetching (instead of on handling lock issues).
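For illustration, a claim query at the storage layer could look like the sketch below, assuming a MySQL-backed job table per partition (table and column names are ours, not Pacer's). Because exactly one broker owns the partition, no other reader runs the same query against this table:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

// Hypothetical claim query for one partition's job table. Helix gives this
// broker exclusive ownership of the partition, so no other broker runs the
// same query against this table and there are no competing row locks.
class PartitionFetcher {
  private final Connection conn;

  PartitionFetcher(Connection conn) { this.conn = conn; }

  List<Long> claimBatch(String table, int batchSize) throws SQLException {
    List<Long> claimed = new ArrayList<>();
    conn.setAutoCommit(false);
    try (PreparedStatement select = conn.prepareStatement(
             "SELECT job_id FROM " + table
                 + " WHERE state = 'PENDING' AND run_after <= NOW()"
                 + " ORDER BY created_at LIMIT ?");
         PreparedStatement update = conn.prepareStatement(
             "UPDATE " + table + " SET state = 'CLAIMED' WHERE job_id = ?")) {
      select.setInt(1, batchSize);
      try (ResultSet rs = select.executeQuery()) {
        while (rs.next()) {
          claimed.add(rs.getLong(1));
        }
      }
      for (long id : claimed) {
        update.setLong(1, id);
        update.addBatch();
      }
      update.executeBatch();
      conn.commit();
    } catch (SQLException e) {
      conn.rollback();
      throw e;
    }
    return claimed;
  }
}
```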
Queue Buffer in a Broker
When a dequeue broker pulls job data from the target storage, it inserts the data into an appropriate in-memory buffer so that workers can get jobs with optimal latency. One dedicated buffer will be created for each queue partition, and its maximum capacity will be capped to avoid heavy memory usage on the broker host.
A thread-safe queue is used as the buffer because multiple workers may get jobs from the same broker concurrently, and dequeue requests for the same partition of a job queue will be processed sequentially by the dequeue broker. Dispatching jobs from the in-memory buffer is a simple operation with minimal latency; our stats show that the dequeue request latency is less than 1 ms.
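Here is a minimal sketch of such a per-partition buffer, assuming a bounded `java.util.concurrent.LinkedBlockingQueue` (a natural thread-safe choice; the post does not say which structure Pacer actually uses):

```java
import java.util.List;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Hypothetical per-partition buffer inside a dequeue broker.
class PartitionBuffer {
  // A bounded capacity caps the broker host's memory footprint.
  private final LinkedBlockingQueue<byte[]> jobs = new LinkedBlockingQueue<>(1_000);

  // Called by the prefetch thread. put() blocks when the buffer is full,
  // which naturally throttles prefetching from the datastore.
  void fill(List<byte[]> batch) throws InterruptedException {
    for (byte[] job : batch) {
      jobs.put(job);
    }
  }

  // Called for worker dequeue requests. Jobs are served from memory, so the
  // latency is a fraction of a datastore round trip.
  byte[] poll() throws InterruptedException {
    return jobs.poll(10, TimeUnit.MILLISECONDS);
  }
}
```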
Dequeue Broker Resource Management
As mentioned above, one queue will be divided into multiple partitions, and one broker will be assigned one or multiple partitions of a job queue. Managing a large number of partitions and assigning them to appropriate brokers optimally is a major challenge. Helix, a generic cluster management framework for the automatic management of partitioned, replicated, and distributed resources hosted on a cluster of nodes, is used for the sharding and management of queue partitions.
The figure above depicts the overall architecture of how Helix interacts with the dequeue brokers.
- Zookeeper is used to communicate resource configurations and other relevant information between the Helix controller and the dequeue brokers.
- The Helix controller constantly monitors events occurring in the dequeue broker cluster, e.g. configuration changes and the joining and leaving of dequeue broker hosts. With the latest state of the dequeue broker cluster, the Helix controller computes an ideal state for the resources and sends messages to the dequeue broker cluster through Zookeeper to gradually bring the cluster to the ideal state.
- Every dequeue broker host keeps reporting its liveness to Zookeeper and is notified when the tasks assigned to it change. Based on the notification message, the dequeue broker host changes its local state.
Once the partition information of a queue is created or updated, Helix is notified so that it can assign these partitions to the dequeue brokers, as in the sketch below.
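For illustration, a dequeue broker could join the cluster as a Helix participant roughly as below. The participant API calls are Helix's real ones (for recent Helix versions), but the cluster/host names, state model, and prefetch hooks are a simplified sketch of what we imagine the broker does:

```java
import org.apache.helix.HelixManager;
import org.apache.helix.HelixManagerFactory;
import org.apache.helix.InstanceType;
import org.apache.helix.NotificationContext;
import org.apache.helix.model.Message;
import org.apache.helix.participant.statemachine.StateModel;
import org.apache.helix.participant.statemachine.StateModelFactory;
import org.apache.helix.participant.statemachine.StateModelInfo;
import org.apache.helix.participant.statemachine.Transition;

// When the Helix controller assigns a queue partition to this broker, the
// OFFLINE -> ONLINE transition fires and the broker can start prefetching
// that partition; the reverse transition tells it to stop.
@StateModelInfo(initialState = "OFFLINE", states = {"ONLINE", "OFFLINE"})
class QueuePartitionStateModel extends StateModel {
  private final String partition;

  QueuePartitionStateModel(String partition) { this.partition = partition; }

  @Transition(from = "OFFLINE", to = "ONLINE")
  public void onBecomeOnlineFromOffline(Message message, NotificationContext context) {
    System.out.println("Start prefetching " + partition); // hypothetical hook
  }

  @Transition(from = "ONLINE", to = "OFFLINE")
  public void onBecomeOfflineFromOnline(Message message, NotificationContext context) {
    System.out.println("Stop prefetching " + partition);  // hypothetical hook
  }
}

class QueuePartitionStateModelFactory extends StateModelFactory<QueuePartitionStateModel> {
  @Override
  public QueuePartitionStateModel createNewStateModel(String resource, String partition) {
    return new QueuePartitionStateModel(partition);
  }
}

class BrokerParticipant {
  public static void main(String[] args) throws Exception {
    // Join the cluster as a participant; liveness is reported via Zookeeper.
    HelixManager manager = HelixManagerFactory.getZKHelixManager(
        "pacer-cluster", "broker-host-1", InstanceType.PARTICIPANT, "zk-host:2181");
    manager.getStateMachineEngine()
        .registerStateModelFactory("OnlineOffline", new QueuePartitionStateModelFactory());
    manager.connect();
  }
}
```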
This work is a result of collaboration across multiple teams at Pinterest. Many thanks to the following people who contributed to this project:
- Core Services: Mauricio Rivera, Yan Li, Harekam Singh, Sidharth Eric, Carlo De Guzman
- Data Org: Ambud Sharma
- Storage and Caching: Oleksandr Kuzminskyi, Ernie Souhrada, Lianghong Xu
- Cloud Runtime: Jiajun Wang, Harry Zhang, David Westbrook
- Notifications: Eric Tam, Lin Zhu, Xing Wei
To learn more about engineering at Pinterest, check out the rest of our Engineering Blog and visit our Pinterest Labs site. To explore life at Pinterest, visit our Careers page.