Prosecution Insights
Last updated: April 19, 2026
Application No. 17/691,303

FAST DATA SYNCHRONIZATION IN PROCESSORS AND MEMORY

Status: Final Rejection (§103)
Filed: Mar 10, 2022
Examiner: WONG, TITUS
Art Unit: 2181
Tech Center: 2100 — Computer Architecture & Software
Assignee: Nvidia Corporation
OA Round: 4 (Final)

Grant Probability: 78% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 3y 0m
Grant Probability With Interview: 98%

Examiner Intelligence

Career Allow Rate: 78%, above average (455 granted / 587 resolved; +22.5% vs TC avg)
Interview Lift: +20.6% (strong) among resolved cases with interview
Typical Timeline: 3y 0m average prosecution; 28 applications currently pending
Career History: 615 total applications across all art units

Statute-Specific Performance

§101: 3.7% (-36.3% vs TC avg)
§103: 32.2% (-7.8% vs TC avg)
§102: 32.6% (-7.4% vs TC avg)
§112: 22.7% (-17.3% vs TC avg)

Based on career data from 587 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Response to Amendment

The amendment filed on November 17, 2025 has been received and entered. Applicant's Amendments to the Claims have been received and acknowledged.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-21 and 23-27 are rejected under 35 U.S.C. 103 as being unpatentable over FAHS et al. (U.S. Publication No. 2014/0019724 A1), hereafter referred to as FAHS'724, in view of JIA (U.S. Publication No. 2009/01000196 A1), hereafter referred to as JIA'196.

Referring to claim 1, FAHS'724 discloses, as claimed, a method of synchronizing an exchange of data between a producer (threads executing barrier arrival instructions (producers), see para. [0102]) on a first processor (PPUs 202, see paras. [0024], [0025] and Fig. 1; also note: parallel thread processors called streaming multiprocessors (SPMs), see para. [0037], Fig. 3A; and any number of processing units, e.g. SPMs 310 or texture units 315, preROPs 325 may be included within a GPC 208. Further, while only one GPC 208 is shown, a PPU 202 may include any number of GPCs 208, see para. [0044]) and a consumer (consumer threads execute the barrier synchronization instruction to wait for a resource to be produced, see para. [0102]) on a second processor (PPUs 202, see paras. [0024], [0025] and Fig. 1; also note: parallel thread processors called streaming multiprocessors (SPMs), see para. [0037], Fig. 3A; and any number of processing units, e.g. SPMs 310 or texture units 315, preROPs 325 may be included within a GPC 208. Further, while only one GPC 208 is shown, a PPU 202 may include any number of GPCs 208, see para. [0044]), comprising:

receiving, at the second processor, a first data message from the producer, the first data message identifying the data (each SPM outputs processed tasks to work distribution crossbar in order to provide the processed task to another GPC for further processing or to store the processed task in an L2 cache, parallel processing memory 204, or system memory 104 via crossbar unit 210, see para. [0043]; also note: the series of instructions transmitted to a particular GPC 208 constitutes a thread, and the collection of a certain number of concurrently executing threads across the parallel processing engines is a "thread group", see para. [0039]);

in response to the first data message, storing the data in a memory buffer in a local shared memory (shared among all GPCs 208; store data in a shared memory to which one or more of the other threads have access; shared memory is accessible to all CTA threads, see paras. [0041], [0050], [0055], [0056] and Fig. 3) of the second processor (a barrier arrival instruction to write data that is consumed by other threads, see para. [0103]; also note: transfer data from system memory and/or local parallel processing memory into internal (on-chip) memory, see paras. [0030]-[0033]);

updating a barrier memory structure (barrier, see paras. [0085]-[0089], [0106]-[0107]; also note: PP memory 204 (u-1) associated with PPU 202 (u-1), see Fig. 2) in the local shared memory (data structure in shared memory, see para. [0063]) of the second processor (update data stored, see para. [0050]) to indicate the storing of the data in the memory buffer (barrier instructions are extended to specify an aggregation function that performs a reduction operation or scan operation…, see para. [0060]; a scan operation used to provide a unique position for writing data to a data structure in shared memory, see paras. [0063]-[0065]), wherein the first data message comprises address information of the memory buffer and barrier memory structure (the CTA program can also include an instruction to compute an address in the shared memory from which data is to be read, with the address being a function of thread ID, see para. [0050]; an instruction used to access any of the local, shared, or global memory spaces by specifying an address in the unified memory space, see para. [0055]; CTA's barriers are addressed from 0 to (NumBarriersAllocated-1), see para. [0086]); and

in response to the updating, reading, by the consumer on the second processor, the data in the memory buffer (read data after producer thread has written it, see para. [0103] and Fig. 6B; providing data, see para. [0041]; data written to a given location in shared memory by one thread and read from that location by a different thread, see paras. [0050], [0055]; and Table 6).

However, FAHS'724 does not appear to teach the message structure comprising the first address and second address.

JIA'196 discloses the message comprising the first address and second address (the synchronization manager of each participant maintains two local bit vectors 402, 404: one (vector_a) 402 for registering its barrier arrivals and the other (vector_b) 404 for detecting when to leave a barrier…Each participant is also associated with one or more shared memory flags; the array of shared memory flags is denoted by shm_bar_flags, see para. [0042]; these vectors 402 are kept locally by each participant, and when a non-leader task arrives at a barrier it updates one of its local vectors 402, 404 and uses this value to set its shared memory flag in the shared memory area, see paras. [0044]-[0048] and Fig. 7; process for synchronizing processes and/or threads using partial or complete barriers: each participant that arrives at the barrier updates a first local vector, and if the participant is part of the subgroup of participants that is to leave the barrier next, the participant updates the second vector; it should be noted that each time a participant updates a local vector 402, 404, it also updates its shared memory flags to reflect the state of its vectors 402, 404, see paras. [0064]-[0067] and Fig. 9).

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify FAHS'724's invention so that the message structure comprises the first address and second address, as taught by JIA'196, in order to perform synchronization through the same set of shared memory resources without interfering with non-representatives waiting in the barrier (see para. [0010]).

As to claim 2, FAHS'724 also discloses the consumer and the producer are in respective cooperative thread arrays (CTA) (cooperative thread array CTA, see paras. [0040], [0050]-[0054]) in a same cooperative grid array (CGA) (grid to which CTA belongs, see paras. [0052] and [0055]).
As to claim 3, FAHS'724 also discloses a time elapsed from a sending of the first data message by the producer to receiving notification of availability of the data in the memory buffer by the consumer is less than a latency of a roundtrip from the producer to the memory buffer (read data after producer thread has written it, see paras. [0103]-[0105]), wherein the latency of the roundtrip includes a time elapsed for a message from the producer to reach the consumer and a corresponding acknowledgement message from the consumer to reach the producer (the threads executing barrier arrival instructions (producers) announce their arrival at the barrier and continue execution…, while the consumer threads execute the barrier synchronization instruction to wait for a resource to be produced; the roles are then reversed…, where the producer threads execute a barrier synchronization instruction to wait for a resource to be consumed, while the consumer threads announce that the resource has been consumed with a barrier arrival instruction, see para. [0102]; also note: arrival counter incremented and value associated with thread, see paras. [0106] and [0107]).

As to claim 4, FAHS'724 also discloses receiving a second data message from a second producer (each SPM outputs processed tasks to work distribution crossbar in order to provide the processed task to another GPC for further processing or to store the processed task in an L2 cache, parallel processing memory 204, or system memory 104 via crossbar unit 210, see para. [0043]; also note: the series of instructions transmitted to a particular GPC 208 constitutes a thread, and the collection of a certain number of concurrently executing threads across the parallel processing engines is a "thread group", see para. [0039]), the second data message including a second data (a barrier arrival instruction to write data that is consumed by other threads, see para. [0103]; also note: transfer data from system memory and/or local parallel processing memory into internal (on-chip) memory, see paras. [0030]-[0033]); and in response to the second data message, storing the second data in the memory buffer (shared among all GPCs 208; store data in a shared memory to which one or more of the other threads have access; shared memory is accessible to all CTA threads, see paras. [0041], [0050], [0055], [0056] and Fig. 3) and updating the barrier memory structure (barrier, see paras. [0085]-[0089], [0106]-[0107]; also note: PP memory 204 (u-1) associated with PPU 202 (u-1), see Fig. 2; barrier instructions are extended to specify an aggregation function that performs a reduction operation or scan operation…, see para. [0060]; a scan operation used to provide a unique position for writing data to a data structure in shared memory, see paras. [0063]-[0065]), wherein said reading, by the consumer on the second processor, the data in the memory buffer is performed in response to the updating in response to the first data message and the updating in response to the second data message (read data after producer thread has written it, see para. [0103] and Fig. 6B; providing data, see para. [0041]; data written to a given location in shared memory by one thread and read from that location by a different thread, see paras. [0050], [0055]; and Table 6).

As to claim 5, FAHS'724 also discloses performing the reading, by the consumer on the second processor, of the data in the memory buffer (read data after producer thread has written it, see para. [0103] and Fig. 6B; providing data, see para. [0041]; data written to a given location in shared memory by one thread and read from that location by a different thread, see paras. [0050], [0055]; and Table 6) if a clear condition of the barrier memory structure is satisfied (Fig. 6B and paras. [0105]-[0107]).

Note claim 6 recites the corresponding limitations of claim 5.
Therefore, it is rejected for the same reasons.

As to claim 7, FAHS'724 also discloses the barrier memory structure comprises a first memory barrier structure co-located with the consumer and a second memory barrier structure co-located with the producer (barrier arrive-and-wait instructions, see paras. [0103]-[0105] and Table 6), wherein, after sending of the first data message to the consumer, the producer waits on the second barrier structure, and, after reading the data in the memory buffer, the consumer arrives at the second barrier memory structure (see Fig. 6B and Table 6).

As to claim 8, FAHS'724 also discloses the first data message (each SPM outputs processed tasks to work distribution crossbar in order to provide the processed task to another GPC for further processing or to store the processed task in an L2 cache, parallel processing memory 204, or system memory 104 via crossbar unit 210, see para. [0043]; also note: the series of instructions transmitted to a particular GPC 208 constitutes a thread, and the collection of a certain number of concurrently executing threads across the parallel processing engines is a "thread group", see para. [0039]) represents, in a single message, a write of the data to the memory buffer and an update to the memory barrier structure (barrier instructions are extended to specify an aggregation function that performs a reduction operation or scan operation…, see para. [0060]; a scan operation used to provide a unique position for writing data to a data structure in shared memory, see paras. [0063]-[0065]).

As to claim 9, FAHS'724 also discloses the first data message includes a combined store and arrive instruction comprising an address of the memory buffer (compute an address in the memory, see para. [0050]), an address of the barrier memory structure (the CTA program can also include an instruction to compute an address in the shared memory from which data is to be read, with the address being a function of thread ID, see para. [0050]; an instruction used to access any of the local, shared, or global memory spaces by specifying an address in the unified memory space, see para. [0055]; CTA's barriers are addressed from 0 to (NumBarriersAllocated-1), see para. [0086]), and the data (a barrier arrival instruction to write data that is consumed by other threads, see paras. [0103]-[0107], Fig. 6B and Table 6).

As to claim 10, FAHS'724 also discloses the barrier memory structure comprises an arrive count (arrival counter 504, see paras. [0060], [0099], [0106] and Figs. 5A-B) and a transaction count (count, see paras. [0062], [0090], [0092]), wherein, in response to the first data message, the arrive count and the transaction count are updated, and the transaction count is updated in accordance with an amount of data in the first data message (increment arrival counter 625, see Fig. 6A and Fig. 6B, block 675).

As to claim 11, FAHS'724 also discloses atomically (atomically read and update data, see para. [0050]; also note: pushbuffer, see paras. [0026], [0028], [0030]) performing the storing of the data in the memory buffer and the updating of the barrier memory structure (barrier arrive-and-wait instructions, see paras. [0103]-[0105] and Table 6; barrier instructions are extended to specify an aggregation function that performs a reduction operation or scan operation…, see para. [0060]; a scan operation used to provide a unique position for writing data to a data structure in shared memory, see paras. [0063]-[0065]).

As to claim 12, FAHS'724 also discloses the updating (update data, see para. [0050]) of the barrier memory structure is performed in response to receiving one or more second data messages from the first processor (barrier arrive-and-wait instructions, see paras. [0103]-[0105] and Table 6; barrier instructions are extended to specify an aggregation function that performs a reduction operation or scan operation…, see para. [0060]; a scan operation used to provide a unique position for writing data to a data structure in shared memory, see paras. [0063]-[0065]).

As to claim 13, FAHS'724 also discloses the one or more second data messages comprise a respective second data message for each path configured in an interconnect switch from the first processor to the second processor (work distribution and crossbar in each PPU connecting to other PPUs over communication path 113, see Fig. 2).

As to claim 14, FAHS'724 also discloses wherein the barrier memory structure includes an expected arrive count (expected arrival count, see paras. [0066], [0067], [0078]), an actual arrive count (arrival counter 504, see paras. [0060], [0099], [0106] and Figs. 5A-B), and a fence transaction count (count, see paras. [0062], [0089], [0090], [0092]), wherein the first data message is transmitted before the second data message by the first processor, and wherein the actual arrive count is updated in response to receiving each first data message, and the fence transaction count is updated in response to receiving each second data message (increment arrival counter 625, see Fig. 6A and Fig. 6B, block 675).

As to claim 15, FAHS'724 also discloses the consumer waits on the barrier memory structure (barrier arrive-and-wait instructions, see paras. [0103]-[0105] and Table 6) and wherein the barrier memory structure is cleared when the actual arrive count equals the expected arrive count (expected arrival count that defines a number of threads that participate in the barrier, see paras. [0066], [0067], and [0078]) and the fence transaction count represents all said data is written to the memory buffer (read data after producer thread has written it, see para. [0103] and Fig. 6B; also note: once all the threads participating in the barrier have reached the barrier, execution of the barrier instruction has been completed by all of the threads in the barrier instruction, see para. [0107]).

As to claim 16, FAHS'724 also discloses a buffer queue is configured in an external memory accessible to the producer, wherein the second processor pushes receive buffer information to the buffer queue and the producer pops destination buffer information from the buffer queue, wherein the first processor transmits the data to the receive buffer through the external memory and the second processor writes the data to a memory of the second processor (writes a stream of commands for each PPU 202 to a push buffer that may be located in system memory 104, parallel processing memory 204, or another storage location accessible to both CPU 102 and PPU 202, see paras. [0026], [0028] and Fig. 1; also note: parallel thread processors, see para. [0037] and Fig. 3A).

As to claim 17, FAHS'724 also discloses an agent is initiated in response to a message from the producer, and the agent coordinates exchange of the data from an output queue in a memory of the first processor to an input queue in a local memory of the second processor in coordination with respective direct memory access components in the first processor and the second processor (PPUs 202 transfer data from system memory 104 and/or local parallel processing memories 204 into internal (on-chip) memory, process the data, and write result data back to system memory 104 and/or local parallel processing memories 204, where such data can be accessed by other system components including CPU 102 or another parallel processing subsystem 112, see paras. [0031]-[0035], [0041], [0043] and Fig. 2; also note: parallel thread processors, see para. [0037] and Fig. 3A).
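Claims 10, 14, and 15 describe barrier bookkeeping built from an expected arrive count, an actual arrive count, and a transaction (or fence) count that advances by the amount of data in each message, with the barrier clearing only when both counts are satisfied. A minimal sketch of that bookkeeping, with invented field and method names (this models the claim language, not the FAHS or JIA implementations):

```python
# Hypothetical barrier with separate arrival and transaction accounting:
# it clears only when every producer has arrived AND the transaction count
# covers all the bytes the consumer expects.
class TxBarrier:
    def __init__(self, expected_arrivals, expected_bytes):
        self.expected_arrivals = expected_arrivals
        self.expected_bytes = expected_bytes
        self.arrivals = 0
        self.tx_bytes = 0

    def on_data_message(self, payload: bytes):
        # A single data message bumps the arrive count and advances the
        # transaction count in accordance with the amount of data it carries.
        self.arrivals += 1
        self.tx_bytes += len(payload)

    def is_clear(self):
        return (self.arrivals == self.expected_arrivals
                and self.tx_bytes == self.expected_bytes)

bar = TxBarrier(expected_arrivals=2, expected_bytes=8)
bar.on_data_message(b"\x00" * 4)   # first producer's 4-byte message
assert not bar.is_clear()          # consumer keeps waiting
bar.on_data_message(b"\x00" * 4)   # second producer's 4-byte message
print(bar.is_clear())              # True: consumer may now read the buffer
```

Tracking bytes separately from arrivals is what lets the consumer distinguish "all producers have announced themselves" from "all announced data has actually landed in the buffer".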
As to claim 18, FAHS'724 also discloses the first processor and the second processor are processors in a non-uniform memory access (NUMA)-organized system (each PPU 202 has PP Memory 204, see Figs. 2 and 3A).

Note claims 19 and 26 recite similar limitations of claim 1. Therefore, they are rejected for the same reasons. Note claim 20 recites similar limitations of claim 11. Therefore, it is rejected for the same reasons.

As to claim 21, FAHS'724 also discloses each of the first processor and the second processor are streaming multiprocessors (streaming multiprocessors 310, see para. [0037] and Fig. 3A; also note: paras. [0026] and [0032]).

Note claim 23 recites the corresponding limitations of claim 9. Therefore, it is rejected for the same reasons. Note claim 24 recites the corresponding limitations of claim 12. Therefore, it is rejected for the same reasons. Note claim 25 recites similar limitations of claim 13. Therefore, it is rejected for the same reasons.

Referring to claim 27, FAHS'724 discloses, as claimed, a method of transferring data between first and second processors, comprising: a first processor (PPUs 202, see paras. [0024], [0025] and Fig. 1; also note: parallel thread processors called streaming multiprocessors (SPMs), see para. [0037], Fig. 3A; and any number of processing units, e.g. SPMs 310 or texture units 315, preROPs 325 may be included within a GPC 208. Further, while only one GPC 208 is shown, a PPU 202 may include any number of GPCs 208, see para. [0044]), by transmitting a first message, writing data into a shared memory local to a second processor (each SPM outputs processed tasks to work distribution crossbar in order to provide the processed task to another GPC for further processing or to store the processed task in an L2 cache, parallel processing memory 204, or system memory 104 via crossbar unit 210, see para. [0043]; also note: the series of instructions transmitted to a particular GPC 208 constitutes a thread, and the collection of a certain number of concurrently executing threads across the parallel processing engines is a "thread group", see para. [0039]) and writing a completion flag (execution of the barrier instruction has been completed by all the threads participating in the barrier instruction, and the barrier instruction execution unit releases the barrier by resetting the wait/go registers and the arrival counter, see paras. [0087], [0092], [0099], [0101], [0106], and [0107]) into the shared memory local to the second processor; and the second processor controlling, based on said writing the completion flag, access to the data written into the shared memory (read data after producer thread has written it, see para. [0103] and Fig. 6B; providing data, see para. [0041]; data written to a given location in shared memory by one thread and read from that location by a different thread, see paras. [0050], [0055]; and Table 6) local to the second processor (a corresponding barrier wait instruction precedes an instruction to read the data, thereby guaranteeing that the consumer thread reads the data only after the producer thread has written it, see para. [0103]; also note: while the consumer threads announce that the resource has been consumed with a barrier arrival instruction, see paras. [0102] and [0060]).

However, FAHS'724 does not appear to teach the message structure comprising the first address and second address.

JIA'196 discloses the message comprising the first address and second address (the synchronization manager of each participant maintains two local bit vectors 402, 404: one (vector_a) 402 for registering its barrier arrivals and the other (vector_b) 404 for detecting when to leave a barrier…Each participant is also associated with one or more shared memory flags; the array of shared memory flags is denoted by shm_bar_flags, see para. [0042]; these vectors 402 are kept locally by each participant, and when a non-leader task arrives at a barrier it updates one of its local vectors 402, 404 and uses this value to set its shared memory flag in the shared memory area, see paras. [0044]-[0048] and Fig. 7; process for synchronizing processes and/or threads using partial or complete barriers: each participant that arrives at the barrier updates a first local vector, and if the participant is part of the subgroup of participants that is to leave the barrier next, the participant updates the second vector; it should be noted that each time a participant updates a local vector 402, 404, it also updates its shared memory flags to reflect the state of its vectors 402, 404, see paras. [0064]-[0067] and Fig. 9).

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify FAHS'724's invention so that the message structure comprises the first address and second address, as taught by JIA'196, in order to perform synchronization through the same set of shared memory resources without interfering with non-representatives waiting in the barrier (see para. [0010]).

Response to Arguments

Applicant's arguments filed 11/17/2025 have been fully considered, but they are moot in view of the new grounds of rejection. It is suggested that Applicant specify the "updating" of the barrier memory structure. In summary, FAHS'724 and JIA'196 teach the claimed limitations as set forth.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Ciolkosz et al. (U.S. Publication No. 2022/0413945 A1) discloses systems that allow for increased parallelism when executing an application by allowing increased flexibility in composition of cooperative thread groups. Koker et al. (U.S. Publication No. 2019/0340018 A1) discloses memory-based software barriers. McKenney (U.S. Publication No. 2002/0194436 A1) discloses software implementation of synchronous memory barriers. GADRE et al. (U.S. Publication No. 2012/0198214 A1) discloses N-way memory barrier operation coalescing. NICKOLLS et al. (U.S. Publication No. 2011/0078692 A1) discloses coalescing memory barrier operations across multiple parallel threads.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TITUS WONG, whose telephone number is (571) 270-1627. The examiner can normally be reached Monday-Friday, 10am-6pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Idriss Alrobaye, can be reached at (571) 270-1023.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TITUS WONG/
Primary Examiner, Art Unit 2181

Prosecution Timeline

Mar 10, 2022: Application Filed
Apr 16, 2024: Non-Final Rejection — §103
Aug 22, 2024: Response Filed
Nov 30, 2024: Final Rejection — §103
May 05, 2025: Request for Continued Examination
May 09, 2025: Response after Non-Final Action
Jul 12, 2025: Non-Final Rejection — §103
Nov 06, 2025: Applicant Interview (Telephonic)
Nov 06, 2025: Examiner Interview Summary
Nov 17, 2025: Response Filed
Feb 21, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596667: MULTI-CHIP MODULE INCLUDING INTEGRATED CIRCUIT WITH RECEIVER CIRCUITRY IMPLEMENTING TRANSMIT SIGNAL CANCELLATION (2y 5m to grant; granted Apr 07, 2026)
Patent 12585595: Faster Computer Memory Access By Reducing SLAT Fragmentation (2y 5m to grant; granted Mar 24, 2026)
Patent 12572485: SYSTEM, DEVICE AND/OR METHOD FOR PROCESSING DIRECT MEMORY ACCESS GATHER AND SCATTER REQUESTS (2y 5m to grant; granted Mar 10, 2026)
Patent 12561269: BUILDING MANAGEMENT SYSTEM WITH AUTOMATIC EQUIPMENT DISCOVERY AND EQUIPMENT MODEL DISTRIBUTION (2y 5m to grant; granted Feb 24, 2026)
Patent 12549615: SYSTEM AND METHOD FOR ADVANCED DATA MANAGEMENT WITH VIDEO ENABLED SOFTWARE TOOLS FOR VIDEO BROADCASTING ENVIRONMENTS (2y 5m to grant; granted Feb 10, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 78%
With Interview: 98% (+20.6%)
Median Time to Grant: 3y 0m
PTA Risk: High

Based on 587 resolved cases by this examiner. Grant probability derived from career allow rate.
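The headline probabilities are reproducible from the raw counts shown on this page. The sketch below assumes the "with interview" figure simply adds the interview lift in percentage points to the career allow rate; that method is an inference from the displayed numbers, not a documented formula.

```python
# Dashboard arithmetic check, using the counts reported on this page.
granted, resolved = 455, 587
allow_rate = granted / resolved * 100          # career allow rate, in percent
print(round(allow_rate))                       # 78

interview_lift = 20.6                          # percentage points, per "Interview Lift"
with_interview = allow_rate + interview_lift   # assumed additive model
print(round(with_interview))                   # 98
```

Both rounded values match the displayed 78% grant probability and 98% with-interview figure, which supports the additive-lift reading.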
