Prosecution Insights
Last updated: April 19, 2026
Application No. 18/185,435

MESSAGE SYNCHRONIZATION SYSTEM AND METHOD WITH USER PARTITION ACCESS

Non-Final OA (§103)
Filed: Mar 17, 2023
Examiner: ANYA, CHARLES E
Art Unit: 2194
Tech Center: 2100 — Computer Architecture & Software
Assignee: The Boeing Company
OA Round: 3 (Non-Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% (above average) · 727 granted / 891 resolved · +26.6% vs TC avg
Interview Lift: +33.5% (strong) for resolved cases with interview
Typical Timeline: 3y 2m avg prosecution · 41 currently pending
Career History: 932 total applications across all art units

Statute-Specific Performance

§101: 11.2% (-28.8% vs TC avg)
§102: 6.8% (-33.2% vs TC avg)
§103: 61.1% (+21.1% vs TC avg)
§112: 10.4% (-29.6% vs TC avg)
TC averages are estimates · Based on career data from 891 resolved cases

Office Action (§103)
DETAILED ACTION

Claims 1, 3-11, and 13-22 are pending in this application.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-4, 6, 11, 13-14 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2021/0117248 A1 to Wong et al. in view of U.S. Pub. No. 2019/0303224 A1 to Liu et al. and further in view of U.S. Pub. No. 2022/0179720 A1 to Byrne et al.
As to claim 1, Wong teaches a system configured to synchronize communications, the system comprising: a synchronizing interface network controller (SINC) (Reflector 20/Synchronizing Interface Network Controller (SINC) 22); and two or more processors in communication with the SINC (First/Second Processors 12), wherein each processor defines a plurality of memory (Memory 14), and wherein each memory comprises dedicated memory space including a queue associated with a software application (process or application) to be executed by a respective processor (Memory 14), wherein the SINC is configured to synchronously (synchronizing communications) and directly push a message (first message) to the queue (Local Count 16/Remote Count 18) associated with the software applications that are being concurrently executed (executed concurrently) by the two or more processors (First/Second Processors 12), wherein the one or more memory being executed by the two or more processors (First/Second Processors 12) receive the same message from the SINC and wherein one or more memory being executed by the two or more processors (First/Second Processors 12) and being associated with identical software applications (process or application) are configured to directly transmit a message to the SINC such that the SINC receives the same message from one or more memory being concurrently (executed concurrently) executed by the two or more processors (First/Second Processors 12) (“…A system, method and computer program product are provided in accordance with an example of the present disclosure in order to synchronize communications, such as communications between a plurality of processors. As used herein, reference to synchronization refers to time synchronization. The plurality of processors may be executing the same application, such as for fault tolerance purposes, for purposes of integrity and/or for increasing the availability of the application. 
At least some of the applications executed by the plurality of processors may include a plurality of processes, e.g., tasks or threads, that are executed concurrently and may run at different rates. Thus, the plurality of processors may be executing one or more of the same processes concurrently…In an instance in which the reflector 20 is embodied as a SINC 22, FIG. 2 provides a more detailed view of a system 10 for synchronizing communications between the plurality of processors 12, such as between first and second processors in this illustrated example. As described above, each processor 12 includes or is associated with a memory 14 that, in turn, maintains a local count 16 and a remote count 18 in respective memory locations. The illustrated set of the local count 16 and the remote count 18 is associated with a respective process of an application executed by a processor 12. As the processor 12 may execute a plurality of processes of one or more applications, the memory 14 associated with a processor may include a plurality of sets of the local count 16 and the remote count 18, one set of which is associated with each process. The local count 16 of a respective process serves as a transaction counter, thereby identifying the respective message number that is currently being communicated to or from the process or that was most recently communicated to or from the process. For example, following transmission or reception of the first message by the respective process, the processor 12 may set the local count 16 to value one. Following the transmission or reception of the tenth message by the respective process, the processor 12 may correspondingly set the local count 16 to value ten and so on. Similarly, the remote count 18 maintained by a processor 12 is a representation of the message executed by another process, that is, a corresponding process, of an application executed by another processor, such as the second processor in this example. 
Since the processors 12 are executing the same applications and, in turn, the same processes, the local count 16 and the remote count 18 should be the same in an instance in which the processors are synchronized. However, in an instance in which the processors 12 are not synchronized, the local count 16 and the remote count 18 will contain different values, thereby causing the system 10 of this example to take action to bring the processors and, more particularly, the processes of the processors back into synchronization…” paragraphs 0018/0022/0035).

Wong is silent with reference to wherein each processor defines a plurality of user partitions, and wherein each user partition comprises dedicated memory space including a queue associated with a software application to be executed by a respective processor, wherein the SINC is configured to synchronously and directly push a message to the queue of one or more user partitions associated with the software applications that are being executed by the two or more processors receive the same message from the SINC and wherein the SINC is configured to identify an overflow condition in which the queue is full and the message is dropped, based on identifying the overflow condition, take remedial action to resolve the overflow condition of the queue, send a request to the one or more user partitions to directly retransmit the message to the SINC, and directly push the retransmitted message to the queue. 
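Editorial note: the local-count/remote-count mechanism quoted from Wong above can be illustrated with a minimal sketch. This is an illustration only, not Wong's actual implementation; the class and method names are assumptions.

```python
# Each lock-stepped process keeps a local transaction count and a mirror of
# its counterpart's count; a mismatch signals loss of synchronization.
class ProcessCounters:
    def __init__(self):
        self.local_count = 0   # messages sent/received by this process
        self.remote_count = 0  # last known count of the corresponding process

    def record_message(self):
        self.local_count += 1

    def update_remote(self, count):
        self.remote_count = count

    def is_synchronized(self):
        # Counts agree when the concurrently executed processes are in sync
        return self.local_count == self.remote_count

a, b = ProcessCounters(), ProcessCounters()
a.record_message()             # processor A handles message 1
b.update_remote(a.local_count)
b.record_message()             # processor B handles message 1
a.update_remote(b.local_count)
assert a.is_synchronized() and b.is_synchronized()
```

If one processor fell behind, its counterpart's remote count would diverge from its local count, which is the condition Wong's system uses to trigger resynchronization.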
Liu teaches wherein each processor defines a plurality of user partitions, and wherein each user partition comprises dedicated memory space including a queue associated with a software application to be executed by a respective processor (two message queues), and synchronously and directly push a message to the queue of one or more user partitions (two message queues) associated with the software applications that are being executed by the two or more processors receive the same message from the Controller (Devices A and B belong to user A. Device C belongs to user B) (“…FIG. 2 is a block diagram 200 illustrating an example data flow between one or more client devices and a queue system, according to some example embodiments. The queue system may correspond to the queue system 150 of FIG. 1. As shown, the queue system includes two message queues, one for a user A and one for a user B. Moreover, each of the message queues stores messages for a respective user (e.g., user A or user B). Also shown in FIG. 2 are devices A, B, and C. Devices A and B belong to user A. Device C belongs to user B. Device A is shown as having received the first three messages from the message queue for user A. As a result, a fourth message, depicted by line 202, is being transmitted to the device A. Device B is shown as having received the first five messages from the message queue for user A. As a result, a sixth message, depicted by line 204, is being transmitted to the device B. As such, the message queue for a user can provide same messages to multiple devices of the same user. The queue system 150 is configured to identify a position of a current message in the message queue to be transmitted to each of the multiple devices. Also shown in FIG. 2 is a ninety-third message, depicted by line 206, as being transmitted to the device C. Each user of the queue system may have a corresponding message queue…” paragraph 0026). 
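Editorial note: Liu's per-user message queue with independent per-device read positions (FIG. 2, paragraph 0026) can be sketched as follows. This is an illustrative reconstruction under assumed names, not Liu's actual code.

```python
from collections import defaultdict

class UserQueueSystem:
    """One message queue per user; each device tracks its own read cursor,
    so multiple devices of the same user receive the same messages."""

    def __init__(self):
        self.queues = defaultdict(list)   # user -> ordered list of messages
        self.cursors = defaultdict(int)   # (user, device) -> next read index

    def publish(self, user, message):
        self.queues[user].append(message)

    def next_message(self, user, device):
        idx = self.cursors[(user, device)]
        if idx >= len(self.queues[user]):
            return None                   # this device is caught up
        self.cursors[(user, device)] = idx + 1
        return self.queues[user][idx]

qs = UserQueueSystem()
for n in range(1, 7):
    qs.publish("userA", f"msg{n}")
# Device A has consumed three messages; device B has consumed five
for _ in range(3):
    qs.next_message("userA", "deviceA")
for _ in range(5):
    qs.next_message("userA", "deviceB")
assert qs.next_message("userA", "deviceA") == "msg4"  # cf. line 202 in Liu FIG. 2
assert qs.next_message("userA", "deviceB") == "msg6"  # cf. line 204
```

The per-device cursor is what lets the same queue serve the fourth message to one device and the sixth to another, as in Liu's figure.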
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Wong with the teaching of Liu because the teaching of Liu would improve the system of Wong by providing message queues for storing messages dedicated for a respective user (Liu paragraph 0026).

Byrne teaches identify an overflow condition in which the queue is full and the message is dropped, based on identifying the overflow condition, take remedial action to resolve the overflow condition of the queue (“…Process 1800 begins with the external device receiving a new message from an external network/data bus (step 1802). The external device pushes a copy of the new message to a target address in processor memory for both processor lanes and adds a timestamp to the message (step 1304). Optionally, if the target queue is full, the external device drops the new message (step 1806). The processor provides feedback to the external device on how full the queue is. Reading applications remove messages to clear space…” paragraph 0160), send a request to the one or more user partitions to directly retransmit the message (After the new message), and directly push the retransmitted message to the queue (“…After the new message is pushed to the target address in memory, the external device increments the target address (step 1808). If the external device reaches the last memory address in the queue, it loops back to the beginning address (i.e., ring behavior)…” paragraph 0161). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Wong and Liu with the teaching of Byrne because the teaching of Byrne would improve the system of Wong and Liu by providing a technique for processing messages inside aligned message queues and discarding messages inside unaligned message queues to allow for optimal processing. 
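Editorial note: Byrne's bounded-queue behavior, where a new message is dropped when the queue is full, reads clear space, and the write address wraps in ring fashion, can be sketched as below. All names and the retransmit bookkeeping are assumptions for illustration, not Byrne's implementation.

```python
class RingQueue:
    """Fixed-capacity ring buffer: push drops on overflow, pop clears space,
    and indices wrap past the last slot (ring behavior)."""

    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.head = 0       # next slot to read
        self.tail = 0       # next slot to write
        self.count = 0
        self.dropped = []   # messages lost to overflow (would trigger a retransmit request)

    def push(self, msg):
        if self.count == len(self.buf):
            self.dropped.append(msg)             # overflow: drop the message
            return False
        self.buf[self.tail] = msg
        self.tail = (self.tail + 1) % len(self.buf)  # wrap to the beginning address
        self.count += 1
        return True

    def pop(self):
        if self.count == 0:
            return None
        msg = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        self.count -= 1                           # reading clears space
        return msg

q = RingQueue(2)
assert q.push("m1") and q.push("m2")
assert not q.push("m3")        # queue full: m3 dropped
assert q.pop() == "m1"         # a reading application removes a message
assert q.push("m3")            # the retransmitted message now fits
```

The `dropped` list stands in for the claimed remedial step: after the reader clears space, the dropped message can be requested again and pushed to the queue.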
As to claim 3, Liu teaches a system according to Claim 1, wherein two or more message queues associated with different user partitions are configured to store the same message in an instance in which the different user partitions consume the same data (two message queues) (“…FIG. 2 is a block diagram 200 illustrating an example data flow between one or more client devices and a queue system, according to some example embodiments. The queue system may correspond to the queue system 150 of FIG. 1. As shown, the queue system includes two message queues, one for a user A and one for a user B. Moreover, each of the message queues stores messages for a respective user (e.g., user A or user B). Also shown in FIG. 2 are devices A, B, and C. Devices A and B belong to user A. Device C belongs to user B. Device A is shown as having received the first three messages from the message queue for user A. As a result, a fourth message, depicted by line 202, is being transmitted to the device A. Device B is shown as having received the first five messages from the message queue for user A. As a result, a sixth message, depicted by line 204, is being transmitted to the device B. As such, the message queue for a user can provide same messages to multiple devices of the same user. The queue system 150 is configured to identify a position of a current message in the message queue to be transmitted to each of the multiple devices. Also shown in FIG. 2 is a ninety-third message, depicted by line 206, as being transmitted to the device C. Each user of the queue system may have a corresponding message queue…” paragraph 0026). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claim invention to modify the system of Wong and Byrne with the teaching of Liu because the teaching of Liu would improve the system of Wong and Byrne by providing message queues for storing messages dedicated for a respective user (Liu paragraph 0026). 
As to claim 4, Liu teaches a system according to Claim 1, wherein a different number of message queues are associated with different user partitions (two message queues) (“…FIG. 2 is a block diagram 200 illustrating an example data flow between one or more client devices and a queue system, according to some example embodiments. The queue system may correspond to the queue system 150 of FIG. 1. As shown, the queue system includes two message queues, one for a user A and one for a user B. Moreover, each of the message queues stores messages for a respective user (e.g., user A or user B). Also shown in FIG. 2 are devices A, B, and C. Devices A and B belong to user A. Device C belongs to user B. Device A is shown as having received the first three messages from the message queue for user A. As a result, a fourth message, depicted by line 202, is being transmitted to the device A. Device B is shown as having received the first five messages from the message queue for user A. As a result, a sixth message, depicted by line 204, is being transmitted to the device B. As such, the message queue for a user can provide same messages to multiple devices of the same user. The queue system 150 is configured to identify a position of a current message in the message queue to be transmitted to each of the multiple devices. Also shown in FIG. 2 is a ninety-third message, depicted by line 206, as being transmitted to the device C. Each user of the queue system may have a corresponding message queue…” paragraph 0026). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claim invention to modify the system of Wong and Byrne with the teaching of Liu because the teaching of Liu would improve the system of Wong and Byrne by providing message queues for storing messages dedicated for a respective user (Liu paragraph 0026). 
As to claim 6, Wong teaches a system according to Claim 1, wherein the SINC is configured to push the message (“…In an instance in which the reflector 20 is embodied as a SINC 22, FIG. 2 provides a more detailed view of a system 10 for synchronizing communications between the plurality of processors 12, such as between first and second processors in this illustrated example…” paragraph 0022). Liu teaches pushing the message to a respective user partition by pushing the message to one or more memory locations of the two or more processors that are dedicated to the respective user partition (two message queues) (“…FIG. 2 is a block diagram 200 illustrating an example data flow between one or more client devices and a queue system, according to some example embodiments. The queue system may correspond to the queue system 150 of FIG. 1. As shown, the queue system includes two message queues, one for a user A and one for a user B. Moreover, each of the message queues stores messages for a respective user (e.g., user A or user B). Also shown in FIG. 2 are devices A, B, and C. Devices A and B belong to user A. Device C belongs to user B. Device A is shown as having received the first three messages from the message queue for user A. As a result, a fourth message, depicted by line 202, is being transmitted to the device A. Device B is shown as having received the first five messages from the message queue for user A. As a result, a sixth message, depicted by line 204, is being transmitted to the device B. As such, the message queue for a user can provide same messages to multiple devices of the same user. The queue system 150 is configured to identify a position of a current message in the message queue to be transmitted to each of the multiple devices. Also shown in FIG. 2 is a ninety-third message, depicted by line 206, as being transmitted to the device C. Each user of the queue system may have a corresponding message queue…” paragraph 0026). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Wong and Byrne with the teaching of Liu because the teaching of Liu would improve the system of Wong and Byrne by providing message queues for storing messages dedicated for a respective user (Liu paragraph 0026).

As to claims 11 and 13-14, see the rejection of claims 1 and 3-4 respectively. As to claim 16, see the rejection of claim 6 above.

Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2021/0117248 A1 to Wong et al. in view of U.S. Pub. No. 2019/0303224 A1 to Liu et al. and further in view of U.S. Pub. No. 2022/0179720 A1 to Byrne et al. as applied to claims 1 and 11 above, and further in view of U.S. Pub. No. 2014/0156959 A1 to Heidelberger et al.

As to claim 5, Wong as modified by Liu and Byrne teaches a system according to Claim 1, wherein the SINC (“…In an instance in which the reflector 20 is embodied as a SINC 22, FIG. 2 provides a more detailed view of a system 10 for synchronizing communications between the plurality of processors 12, such as between first and second processors in this illustrated example…” paragraph 0022), however it is silent with reference to determine and to provide an indication that a queue associated with the respective user partition of the plurality of user partitions is full or has experienced the overflow condition to the respective user partition. Heidelberger teaches determine and to provide an indication that a queue associated with the respective user partition of the plurality of user partitions is full or has experienced the overflow condition to the respective user partition (“…In an embodiment, on encountering a full queue, a user may acquire the lock and then empty the queue by reconfiguring the metadata to use a different array in memory and, subsequently, unlock the queue to make it available to users…” paragraph 0026). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Wong, Liu and Byrne with the teaching of Heidelberger because the teaching of Heidelberger would improve the system of Wong, Liu and Byrne by providing a mechanism for optimally managing and assigning memory resources. As to claim 15, see the rejection of claim 5 above.

Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2021/0117248 A1 to Wong et al. in view of U.S. Pub. No. 2019/0303224 A1 to Liu et al. and further in view of U.S. Pub. No. 2022/0179720 A1 to Byrne et al. as applied to claims 1 and 11 above, and further in view of CN 112925659 A to Pan et al.

As to claim 7, Wong as modified by Liu and Byrne teaches a system according to Claim 1, wherein the SINC is configured to concurrently push messages (“…In an instance in which the reflector 20 is embodied as a SINC 22, FIG. 2 provides a more detailed view of a system 10 for synchronizing communications between the plurality of processors 12, such as between first and second processors in this illustrated example…” paragraph 0022). Pan teaches pushing both standard integrity messages and high integrity messages without prioritization of the high integrity messages (“…It can be seen that the embodiment of the invention can process messages without priority, rather than preferentially processing higher-priority messages, so messages can be sent according to actual requirements…”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Wong, Liu and Byrne with the teaching of Pan because the teaching of Pan would improve the system of Wong, Liu and Byrne by providing a mechanism for sending and processing messages without priority to allow for non-preferential processing. 
As to claim 17, see the rejection of claim 7 above.

Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2021/0117248 A1 to Wong et al. in view of U.S. Pub. No. 2019/0303224 A1 to Liu et al. and further in view of U.S. Pub. No. 2022/0179720 A1 to Byrne et al. as applied to claims 1 and 11 above, and further in view of U.S. Pub. No. 2009/0113085 A1 to Banyai et al.

As to claim 8, Wong as modified by Liu and Byrne teaches a system according to Claim 1, wherein the SINC is commanded to communicate messages or start and stop transmission of messages to one or more memory by receiving a command (“…In an instance in which the reflector 20 is embodied as a SINC 22, FIG. 2 provides a more detailed view of a system 10 for synchronizing communications between the plurality of processors 12, such as between first and second processors in this illustrated example…” paragraph 0022). Banyai teaches to flush messages (flush message) or start and stop transmission of messages to one or more memory by receiving a command (“…In yet other embodiment, while using the in-band mechanism, the CPU 110 may transfer a flush message along the data transfer path. In one embodiment, the CPU 110 may transfer the flush message in response to receiving a trigger or in periodic intervals of time. In one embodiment, the CPU 110 may receive a trigger caused by the onset of power-down mode of either of nodes 101 and 151. However, the memory 190 may be backed-up by battery supply. In one embodiment, the data transfer path may refer to a path over which the data units may be transferred from the memory 140 to the write buffer 180. In one embodiment, the MCH 170 may comprise a flush logic 186, which may decode the flush message and flush the contents of the write buffer 180. In one embodiment, the flush logic 186 may cause the contents of the write buffer 180 to be flushed to the memory 190 in response to receiving the flush message…” paragraph 0018). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Wong, Liu and Byrne with the teaching of Banyai because the teaching of Banyai would improve the system of Wong, Liu and Byrne by providing a mechanism for flushing messages from a buffer to make room for other messages. As to claim 18, see the rejection of claim 8 above.

Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2021/0117248 A1 to Wong et al. in view of U.S. Pub. No. 2019/0303224 A1 to Liu et al. and further in view of U.S. Pub. No. 2022/0179720 A1 to Byrne et al. as applied to claims 1 and 11 above, and further in view of CN 11000804 A to Zhao et al.

As to claim 9, Wong as modified by Liu and Byrne teaches a system according to Claim 1, however it is silent with reference to wherein a user partition is further configured to indicate that the message pushed by the SINC to one or more user partitions is outdated based upon a comparison of a reference time and a timestamp associated with the message. 
Zhao teaches wherein a user partition is further configured to indicate that the message pushed to one or more user partitions is outdated (the message to be processed is an expiration message) based upon a comparison of a reference time and a timestamp associated with the message (“…step 142, the first server determines whether the timestamp carried by the message to be processed is earlier than the latest timestamp of the data recorded in the local cache; if it is, the message to be processed is an expiration message, there is no need to write it into the database of the first server, and step 143 is executed; if not, the message to be processed is not a stale message, it needs to be written into the database of the first server, and step 144 is executed. Thus, the data update table retains the latest timestamp of the data; data older than the recorded latest timestamp is discarded, i.e., stale messages are discarded and need not be written into the database of the first server…”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Wong, Liu and Byrne with the teaching of Zhao because the teaching of Zhao would improve the system of Wong, Liu and Byrne by providing a mechanism for discarding expired or old messages. As to claim 19, see the rejection of claim 9 above.

Claims 21 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2021/0117248 A1 to Wong et al. in view of U.S. Pub. No. 2019/0303224 A1 to Liu et al. and further in view of U.S. Pub. No. 2022/0179720 A1 to Byrne et al. 
as applied to claims 1 and 11 above, and further in view of U.S. Pub. No. 2002/0136205 A1 to Sasaki.

As to claim 21, Wong as modified by Liu and Byrne teaches the system according to Claim 1, however it is silent with reference to wherein the remedial action to resolve the overflow condition of the queue includes at least one of flushing the queue or performing read operations to remove one or more message from the queue. Sasaki teaches wherein the remedial action to resolve the overflow condition of the queue includes at least one of flushing the queue or performing read operations to remove one or more message from the queue (“…When buffer's overflow occurs, the buffer is cleared, and the reproduction processing on packet data is suspended until the predetermined number of packet data are stored in the buffer. When the predetermined number of packet data are stored in the buffer, then, the reproduction processing on packet data is resumed…” paragraph 0014). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Wong, Liu and Byrne with the teaching of Sasaki because the teaching of Sasaki would improve the system of Wong, Liu and Byrne by providing a garbage collection mechanism for reclaiming computing resources for reuse. As to claim 22, see the rejection of claim 21 above.

Allowable Subject Matter

Claims 10 and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Reasons for Allowance

The following is an examiner’s statement of reasons for allowance: The closest prior art of record (U.S. Pub. No. 
2019/0303224 A1 to Liu et al.), taken alone or in combination does not specifically disclose or suggest the claimed recitations (claims 10 and 20), when taken in the context of the claims as a whole. Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled “Comments on Statement of Reasons for Allowance.”

Response to Arguments

Applicant's arguments filed 01/20/26 have been fully considered but they are not persuasive. Applicant argued in substance that the applied prior art references do not teach or suggest “…identify an overflow condition in which the queue is full and the message is dropped, based on identifying the overflow condition, take remedial action to resolve the overflow condition of the queue, send a request to the one or more user partitions to directly retransmit the message and directly push the retransmitted message to the queue…”. The Examiner disagrees. Although the Wong and Liu references do not disclose this claim language, the Byrne reference does. Please see the rejection of claims 1 and 11.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES E ANYA whose telephone number is (571)272-3757. The examiner can normally be reached Mon-Fri., 9am-6pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, KEVIN YOUNG, can be reached at 571-270-3180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. 
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHARLES E ANYA/Primary Examiner, Art Unit 2194

Prosecution Timeline

Mar 17, 2023: Application Filed
Jul 26, 2025: Non-Final Rejection — §103
Oct 22, 2025: Applicant Interview (Telephonic)
Oct 22, 2025: Examiner Interview Summary
Nov 03, 2025: Response Filed
Nov 15, 2025: Final Rejection — §103
Jan 15, 2026: Applicant Interview (Telephonic)
Jan 15, 2026: Examiner Interview Summary
Jan 20, 2026: Response after Non-Final Action
Feb 11, 2026: Request for Continued Examination
Feb 12, 2026: Response after Non-Final Action
Feb 21, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591471
KNOWLEDGE GRAPH REPRESENTATION OF CHANGES BETWEEN DIFFERENT VERSIONS OF APPLICATION PROGRAMMING INTERFACES
2y 5m to grant · Granted Mar 31, 2026

Patent 12591455
PARAMETER-BASED ADAPTIVE SCHEDULING OF JOBS
2y 5m to grant · Granted Mar 31, 2026

Patent 12585510
METHOD AND SYSTEM FOR AUTOMATED EVENT MANAGEMENT
2y 5m to grant · Granted Mar 24, 2026

Patent 12579014
METHOD AND A SYSTEM FOR PROCESSING USER EVENTS
2y 5m to grant · Granted Mar 17, 2026

Patent 12572393
CONTAINER CROSS-CLUSTER CAPACITY SCALING
2y 5m to grant · Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 82%
With Interview: 99% (+33.5%)
Median Time to Grant: 3y 2m
PTA Risk: High
Based on 891 resolved cases by this examiner. Grant probability derived from career allow rate.
