Prosecution Insights
Last updated: April 19, 2026
Application No. 18/452,622

MEMORY SYNCHRONISATION SUBSEQUENT TO A MAINTENANCE OPERATION

Non-Final OA — §103, §112

Filed: Aug 21, 2023
Examiner: KIM, DONG U
Art Unit: 2197
Tech Center: 2100 — Computer Architecture & Software
Assignee: Arm Limited
OA Round: 1 (Non-Final)

Grant Probability: 87% (Favorable)
OA Rounds: 1-2
To Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 87% (610 granted / 702 resolved) — +31.9% vs TC avg; grants above average
Interview Lift: +13.7% (moderate, ~+14% lift), comparing resolved cases with vs. without an interview
Typical Timeline: 2y 10m average prosecution; 35 applications currently pending
Career History: 737 total applications across all art units
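These headline figures hang together arithmetically. A quick sanity check, assuming the allow rate is simply grants divided by resolved cases and the TC comparison is a plain difference (the tool's exact formulas are not stated):

    # Assumed relationships only; not the tool's documented methodology.
    granted, resolved = 610, 702
    career_allow_rate = granted / resolved              # ~0.869, displayed as 87%
    tc_average_estimate = career_allow_rate - 0.319     # ~0.550 if "+31.9% vs TC avg" is a simple difference
    print(f"allow rate {career_allow_rate:.1%}; implied TC average {tc_average_estimate:.1%}")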

Statute-Specific Performance

Statute   Rate     vs TC avg
§101      10.4%    -29.6%
§103      44.2%    +4.2%
§102      10.4%    -29.6%
§112      28.0%    -12.0%

TC avg = Tech Center average estimate • Based on career data from 702 resolved cases
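Read together, the four deltas are mutually consistent with a single Tech Center baseline of roughly 40% for each statute. A quick check, assuming each delta is simply the examiner's rate minus the TC estimate (an assumption about how the dashboard computes the comparison):

    # Assumed relationship: delta = examiner rate - TC average estimate
    examiner_rate = {"§101": 10.4, "§103": 44.2, "§102": 10.4, "§112": 28.0}     # percent
    delta_vs_tc   = {"§101": -29.6, "§103": 4.2, "§102": -29.6, "§112": -12.0}   # percentage points
    for statute, rate in examiner_rate.items():
        print(statute, round(rate - delta_vs_tc[statute], 1))   # each recovers 40.0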

Office Action

§103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim(s) 7-9 and 12 is/are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 7 (similarly claim 8) recites the limitation “the virtual machines”. There is insufficient antecedent basis for this limitation in the claim. It is unclear whether “the virtual machines” refers to the “plurality of virtual machine” or to some other virtual machines.

Claim 12 recites the limitation “the determination could result in a false positive”. There is insufficient antecedent basis for this limitation in the claim. It is unclear which determination “the determination” refers to, since there are multiple distinct determinations within the corresponding dependent claim.

Claims 8-9 are rejected based on the rejection of the claim from which they depend.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-3, 7-11 and 16-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Craske et al. (Pub 20160139922) (hereafter Craske) in view of Swaine et al. (Pub 20210026568) (hereafter Swaine).
As per claim 1, Craske teaches: An apparatus comprising one or more processing elements, each processing element of the one or more processing elements comprising: processing circuitry configured to perform processing operations, wherein the processing operations are carried out in one of a plurality of processing contexts; ([Paragraph 4], Viewed from a first aspect, there is provided an apparatus for data processing comprising: processing circuitry to execute data processing instructions to perform data processing operations, wherein the data processing operations comprise accessing a memory system, and Wherein the processing circuitry is capable of executing the data processing instructions in a plurality of contexts; and memory system interaction circuitry to provide an interface between the processing circuitry and the memory system, wherein the memory system interaction circuitry is capable of, in response to the processing circuitry executing a barrier instruction in a current context of the plurality of contexts, enforcing an access ordering constraint, and wherein the memory system interaction circuitry is capable of limiting enforcement of the access ordering constraint to accesses initiated by the processing circuitry when operating in an identified context.) context tracking circuitry configured to store context tracking data indicative of active contexts of the plurality of processing contexts in which the processing operations have been carried out by the processing circuitry; and ([Paragraph 16], According to the present techniques, when the processing circuitry of the data processing apparatus encounters a barrier instruction, the data processing apparatus may respond by limiting enforcement of a corresponding access ordering constraint only for accesses which have been initiated by the processing circuitry when operating in an identified context, which may for example be the current context, i.e. accesses previously initiated in the same context as the context in which the processing circuitry is currently operating and has encountered the barrier instruction. In other words enforcement of the access ordering constraint may be limited to accesses initiated by the processing circuitry when operating in an identified context. The accesses may take a variety of forms, and can for example be memory accesses such as a store or a load, and can also for example be coherency operations or cache (data or instruction) maintenance operations. [Paragraph 27], In some embodiments the store buffer comprises a context tracking storage with multiple storage locations, and wherein the store buffer is capable of storing an entry in one of the multiple storage locations for the current context if the current context has initiated accesses since the access ordering constraint was last enforced for the current context. [Paragraph 28], In some embodiments the store buffer is capable of clearing a selected entry in the context tracking storage when the access ordering constraint corresponding to the selected entry has been enforced. [Paragraph 30], In some embodiments, the store buffer is capable of storing at least one indication associated with each entry in the context tracking storage indicative of whether the accesses initiated since the access ordering constraint was last enforced for that context comprise at least one type of access. 
This enables the store buffer to distinguish between different types of access which may be initiated by the processing circuitry in a given context, and which may have different requirements with respect to the enforcement of a access ordering constraint. [Paragraph 22], the apparatus further comprises virtual machine identifier storage for storing a virtual machine identifier, wherein the apparatus is capable of updating the virtual machine identifier to indicate the current virtual machine. The virtual machine identifier storage may for example be provided by a register in the processing circuitry of the data processing apparatus, although could also be provided by any other suitable form of identifier storage, and thus provides the data processing apparatus with a readily available and reliable reference for components of the apparatus to determine the current virtual machine.) Although Craske teaches memory synchronization occurring, delaying execution of instruction and barrier points. [Paragraph 17, 23, 25, 29]. Craske does not explicitly disclose control circuitry responsive to a request for a memory synchronisation occurring subsequent to at least one maintenance operation, the at least one maintenance operation associated with a given set of one or more contexts of the plurality of processing contexts, to determine whether at least one of the given set of one or more contexts is indicated in the context tracking data, and in response to the determination: when the at least one of the given set of one or more contexts is determined to be indicated in the context tracking data, to implement a delay before performing the memory synchronisation, the delay continuing until one or more pending memory updates have been performed; and when each of the given set of one or more contexts is determined to be absent from the context tracking data, to perform the memory synchronisation without implementing the delay. Swaine teaches control circuitry responsive to a request for a memory synchronisation occurring subsequent to at least one maintenance operation, the at least one maintenance operation associated with a given set of one or more contexts of the plurality of processing contexts, to determine whether at least one of the given set of one or more contexts is indicated in the context tracking data, and in response to the determination: when the at least one of the given set of one or more contexts is determined to be indicated in the context tracking data, to implement a delay before performing the memory synchronisation, the delay continuing until one or more pending memory updates have been performed; and when each of the given set of one or more contexts is determined to be absent from the context tracking data, to perform the memory synchronisation without implementing the delay. ([Paragraph 37], In these scenarios then the epoch may not be changed to the next epoch in response to the barrier point signal. This may not affect architectural correctness of the processing, but may simply delay completion of a subsequent barrier termination command on some occasions. Nevertheless, this scenario may be relatively rare in most expected scenarios, as barrier termination commands may be relatively rare. [Paragraph 67], Before signalling of the synchronisation command A is complete, a second synchronisation command B is received at time 152 and this results in the epoch being updated again to epoch 2. 
The support for more than 2 epochs means that it was not necessary to delay the transition of epoch from epoch 1 to epoch 2 until synchronisation command A had been completed, which can be useful because it means that the completion of synchronisation command B does not need to wait for completion of memory access transactions issued between time 152 and time 154 when the first synchronisation command is complete, which might otherwise be required if only 2 epochs had been supported and so the change of epoch for the second synchronisation command B would have had to wait until time 154. [Paragraph 19], In other implementations, the barrier point signal may comprise a barrier point identifying signal different to the barrier termination command. With this approach, a separate command may be defined which enables the barrier point to be defined at a point of time different to the timing of receipt of the barrier termination command. This could be useful in some scenarios as it may allow the barrier point to be identified more precisely, which may mean that the completion of the barrier termination command does not need to be delayed while waiting for completion of any memory access transactions issued between receipt of the barrier point identifying signal and receipt of the barrier termination command. This can be good for performance in some cases, because this may allow the completion of the barrier termination command to be signalled sooner, enabling subsequent operations which must wait for completion of the barrier termination command to be started earlier. In the case where the barrier point signal is a separate barrier point identifying signal from the barrier termination command, the one or more epochs which are checked by the barrier termination circuitry may be those epochs which are older than the epoch which is the current epoch at the time of receiving the barrier termination command. With this approach, it can be useful for the transaction tracking circuitry to support tracking of epochs for at least three or more different epochs, as it may be possible that multiple barrier point identifying signals could be received in succession before a barrier termination command is received, and so being able to support a larger number of epochs can enable more precise pinpointing of the most recent barrier point before a given barrier termination command was received, which can enable a faster response to the barrier termination command than if the epochs could not be defined as precisely.) 
It would have been obvious to a person with ordinary skill in the art, before the effective filing date of the invention, to combine the teachings of Craske wherein context tracking circuitry allows storage of context tracking data of active contexts of processing contexts, memory barrier instruction(s) is/are used to synchronize not only memory access but also other relevant operations and maintenance operation(s) such as cache maintenance, translation lookaside buffer maintenance, etc., into teachings of Swaine wherein memory synchronization occurring subsequent to at least one maintenance operation, implement a delay before perform the memory synchronization, a given set of context is indicated to implement a delay, delay continuing until pending memory update(s) is/are performed when indicated in the context tracking data and when the given set of context is absent from the tracking data, memory synchronization is perform without delay, because this would enhance the teachings of Craske wherein by determining if delay should/should not occur allows barrier points to be identified more precisely thus allowing determination if barrier termination command does or does not need to be delayed while awaiting for completion of any memory access transactions. [Swaine paragraph 19] As per claim 2, rejection of claim 1 is incorporated: Craske teaches wherein each processing element comprises a translation lookaside buffer configured to store translations between a first address space associated with one of the plurality of processing contexts and a second address space, and the at least one maintenance operation comprises a translation lookaside buffer maintenance operation. ([Paragraph 34], In some embodiments, the selected type of pending access is a coherency operation. The coherency operations can, for example, comprise cache maintenance operations, translation lookaside buffer (TLB) maintenance operations, branch predictor maintenance operations, and so on. The present techniques recognise that such coherency operations may involve a relatively high latency of completion and are therefore a type of access for which the present techniques are of particular benefit. [Paragraph 49], Then when instruction 5 (the DSB) retires one of the two following possibilities occurs depending on whether instruction 4 is the store (STR) or the TLB invalidate (TLBIMVAA). If instruction 4 is a store, the DSB barrier will affect this store and all other stores to the AXI master port currently being handled by the data processing system, but does not result in a DVM sync being sent out from the store buffer. On the other hand, if instruction 4 was a TLB invalidate, the DSB barrier (instruction 5) will result in a DVM sync for all earlier DVM messages, followed by a DSB affecting all previous AXI master port accesses.) Swaine also teaches ([Paragraph 21], A translation lookaside buffer is a cache of page table information from page tables in the memory system, which provides address translation data for controlling memory address translation and/or memory permission data for specifying whether memory access transactions are allowed to particular regions of the address space. Hence, the transaction handling circuitry may initiate a lookup in the translation lookaside buffer (TLB), to check the page table information corresponding to an address specified by a memory access transaction to be issued.) 
As per claim 3, rejection of claim 1 is incorporated: Craske teaches wherein the context tracking data comprises a set of context identifiers. ([Paragraph 15], Here, a “context” should be understood as an operating environment in which the data processing apparatus can operate, according to which the components of the data processing apparatus are provided with an apparently complete and self-consistent view of not only the components of the data processing apparatus itself, but of the whole of the data processing system in which the data processing apparatus is found, for example further including a memory system to which the data processing apparatus is connected. [Paragraph 20], In one embodiment the identified context is specified in storage accessible to the processing circuitry. For example an indication of the identified may be stored in a register (although any other suitable storage may also be used). [Paragraph 23], In some embodiments the memory system interaction circuitry comprises a store buffer to buffer pending accesses and the store buffer is capable of tagging each pending access with an identifier indicative of the context from which that pending access was issued. ) Swaine also teaches ([Paragraph 21], Some TLBs may be located in a component which does not have a full transaction tracker supporting comparison of individual transaction identifiers against each entry of the tracker, as the full transaction tracker may not be needed for other purposes. [Paragraph 58], It may be possible to do TLB invalidations in a more fine grained manner than simply invalidating the entire TLB, for example limiting the entries to the invalidated based on a context identifier which identifies a certain translation context associated with particular entries of the TLB…) As per claim 7, rejection of claim 1 is incorporated: Craske teaches wherein: the apparatus is configured to provide a virtualized operating environment supporting a plurality of virtual machines each associated with a virtual machine identifier; and each virtual machine of the virtual machines corresponds to one or more of the plurality of processing contexts identified by the virtual machine identifier associated with that virtual machine. ([Paragraph 21], In one embodiment the apparatus is capable of providing a virtualized operating environment in which a current virtual machine of multiple virtual machines operates, wherein the processing circuitry is capable of executing the data processing instructions by interaction with the current virtual machine, and wherein the current context corresponds to the current virtual machine. Accordingly, a virtualized operating environment provides one manner in which the processing circuitry of the data processing apparatus can operate (i.e. execute data processing instructions) in more than one context. A given virtual machine (typically comprising a particular guest operating system and set of applications which run on that guest operating system) interacts with the hardware of the data processing apparatus (i.e. in particular in the present context the processing circuitry and memory system interaction circuitry) when operation of that virtual machine is the present context of operation for the data processing apparatus. The present techniques therefore provide protection for the timing constraints of each of the virtual machines (and in particular a virtual machine with a low-delay timing constraint). [Paragraph 37], FIG. 
2 each data processing apparatus 12, 14 operates in a current context (i.e, under control of a hypervisor 34 enabling a selected virtual machine to be operating), and the respective DPUs 52, 54 store a value in the register VSCTLR.VMID 80, 82 which serves as a virtual machine identifier and indicates the current virtual machine running on the respective data processing apparatus.) As per claim 8, rejection of claim 8 is incorporated: Craske teaches wherein at least one of the virtual machines is configured to implement a distributed virtual memory and the memory synchronisation is a distributed virtual memory synchronisation. ([Paragraph 36], Note that the virtual machines may be hosted by just one data processing apparatus or may be distributed across several, depending on the processing resource which it is appropriate to make available to each virtual machine. Where a real-time virtual machine is to be provided it is more likely to be restricted to just one data processing apparatus, whilst a non real-time virtual machine may be configured to be distributed across several data processing apparatuses. [Paragraph 40], The store buffer further comprises a context tracker 96 which the store buffer uses to keep track of which VMIDs (contexts) have accessed the high latency AXI master port 74 (via the SCU 72) or have performed D-cache maintenance operations (labelled “SCU”) and which VMIDs have sent a distributed virtual memory (DVM) message (labelled “DVM sync”). The DVM messages may for example relate to I-cache maintenance, branch predictor maintenance and TLB maintenance.) As per claim 9, rejection of claim 7 is incorporated: Craske teaches wherein the context tracking data comprises, for each active context of the active contexts, information indicative of at least one of: a virtual machine identifier associated with the active context; a security state associated with the active context; and an exception level associated with the active context. ([Paragraph 36], These virtualized operating environments may be viewed in the hierarchical manner schematically shown in FIG. 2, in which a hypervisor 34 which maintains overall control of the virtualization thus provided operates at the highest privilege level shown in the figure referred to as “exception level number 2” (EL2) or “privilege level 2” (PL2))… Note that the virtual machines may be hosted by just one data processing apparatus or may be distributed across several, depending on the processing resource which it is appropriate to make available to each virtual machine. Where a real-time virtual machine is to be provided it is more likely to be restricted to just one data processing apparatus, whilst a non real-time virtual machine may be configured to be distributed across several data processing apparatuses. [Paragraph 40], The store buffer further comprises a context tracker 96 which the store buffer uses to keep track of which VMIDs (contexts) have accessed the high latency AXI master port 74 (via the SCU 72) or have performed D-cache maintenance operations (labelled “SCU”) and which VMIDs have sent a distributed virtual memory (DVM) message (labelled “DVM sync”). The DVM messages may for example relate to I-cache maintenance, branch predictor maintenance and TLB maintenance. 
) As per claim 10, rejection of claim 1 is incorporated: Craske teaches wherein the control circuitry is responsive to the context tracking data meeting a predetermined condition to perform a tracking data reset procedure comprising clearing the context tracking data. ([Paragraph 28], In some embodiments the store buffer is capable of clearing a selected entry in the context tracking storage when the access ordering constraint corresponding to the selected entry has been enforced. Thus once the access ordering constraint has been enforced for a given context, clearing the corresponding entry in the context tracking storage at that point ensures that if and when a further barrier instruction is executed in that context, the store buffer can readily recognise that the access ordering constraint does not need to be carried out with regard to those previous accesses which have been subject to actions resulting from the previous barrier instruction.) As per claim 11, rejection of claim 10 is incorporated: Craske teaches wherein the context tracking circuitry comprises storage space for a first number of context identifiers and the predetermined condition is met when storage of a new context identifier in the context tracking circuitry would exceed the storage space. ([Paragraph 29], clearing the victim entry for the selected context, wherein the implicit access ordering constraint does not require the processing circuitry to execute a corresponding barrier instruction. Whilst the store buffer could be provided with a context tracking storage with sufficient storage locations for all possible contexts in which the processing circuitry can execute data processing instructions, it may be the case that the number of contexts supported by the data processing apparatus exceeds the number of storage locations which it is desirable to provide in the context tracking storage. In other words, in order to keep the size of the store buffer as small as possible, it may be desirable to limit the number of storage locations in the context tracking storage to a relatively small number. In this situation it is recognised that the store buffer may not have an occupied entry for the current context, and may not have an available entry which can immediately be used for the current context. In that situation, when an entry is required for the current context the storage buffer can then enforce an implicit access ordering constraint (“implicit” in the sense that this is not instructed by the processing circuitry by execution of a barrier instruction, but is initiated by the store buffer itself in order to free up an entry in its context tracking storage). One or more victim contexts other than the current context is/are selected by the store buffer to be subject to such an implicit access ordering constraint in order to free up one or more entries in the context tracking storage.) As per claim 16, rejection of claim 1 is incorporated: Craske teaches wherein at least one of the given set of one or more contexts is different from a current context being processed by the processing circuitry. ([paragraph 16], According to the present techniques, when the processing circuitry of the data processing apparatus encounters a barrier instruction, the data processing apparatus may respond by limiting enforcement of a corresponding access ordering constraint only for accesses which have been initiated by the processing circuitry when operating in an identified context, which may for example be the current context, i.e. 
accesses previously initiated in the same context as the context in which the processing circuitry is currently operating and has encountered the barrier instruction. In other words enforcement of the access ordering constraint may be limited to accesses initiated by the processing circuitry when operating in an identified context. The accesses may take a variety of forms, and can for example be memory accesses such as a store or a load, and can also for example be coherency operations or cache (data or instruction) maintenance operations.) As per claim 17, rejection of claim 1 is incorporated: Craske teaches implemented in at least one packaged chip; at least one system component; and a board, wherein the at least one packaged chip and the at least one system component are assembled on the board. ([Paragraph 35], FIG. 1 schematically illustrates a data processing system 10 in one embodiment, which comprises two central processing units (CPUs) 12 and 14. Each CPU comprises a processor 16, 18 respectively which executes a sequence of data processing instructions in order to carry out or initiate data processing operations within the data processing system 10, and also comprises a set of registers 20, 22 respectively in which values used by the processors 16, 18 in their data processing operations are stored. Each CPU 12, 14 has a closely associated Level 1 (L1 ) memory system (cache) which is capable of storing (i.e. has a configuration which enables it to store) temporary copies of data items retrieved from the remainder of the memory system of the data processing system 10, in order to reduce the access latency for those data items, in a manner with which the skilled person will be familiar. The respective L1 memories 24, 26 of each data processing apparatus 12, 14 interacts with a Level 2 (L2 ) memory 28, which itself interacts with an external memory 32 via a system bus 30, in a hierarchical configuration of this set of caches and memories, again with which the skilled person will also be familiar.) As per claim 18, rejection of claim 17 is incorporated: Craske teaches the system of claim 17 assembled on a further board with at least one other product component. ([Paragraph 35], FIG. 1 schematically illustrates a data processing system 10 in one embodiment, which comprises two central processing units (CPUs) 12 and 14. Each CPU comprises a processor 16, 18 respectively which executes a sequence of data processing instructions in order to carry out or initiate data processing operations within the data processing system 10, and also comprises a set of registers 20, 22 respectively in which values used by the processors 16, 18 in their data processing operations are stored. Each CPU 12, 14 has a closely associated Level 1 (L1 ) memory system (cache) which is capable of storing (i.e. has a configuration which enables it to store) temporary copies of data items retrieved from the remainder of the memory system of the data processing system 10, in order to reduce the access latency for those data items, in a manner with which the skilled person will be familiar. The respective L1 memories 24, 26 of each data processing apparatus 12, 14 interacts with a Level 2 (L2 ) memory 28, which itself interacts with an external memory 32 via a system bus 30, in a hierarchical configuration of this set of caches and memories, again with which the skilled person will also be familiar. [Fig. 3A]) As per claim 19, this is a method claim corresponding to the apparatus claim 1. 
Therefore, rejected based on similar rationale.

As per claim 20, this is a non-transitory computer readable storage medium claim corresponding to the apparatus claim 1. Therefore, rejected based on similar rationale.

Allowable Subject Matter

Claim(s) 4-6, 13-15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DONG U KIM whose telephone number is (571) 270-1313. The examiner can normally be reached 9:00am - 5:00pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bradley Teets, can be reached at (571) 272-3338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DONG U KIM/
Primary Examiner, Art Unit 2197
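For orientation, the control-circuitry behaviour recited in claim 1 — and mapped to Swaine's barrier/epoch handling in the rejection — reduces to a single gating decision: delay the synchronisation only if a maintained context appears in the context tracking data. A minimal sketch of that decision follows; the function and data-structure names are illustrative assumptions, not taken from the application or the cited references:

    # Hypothetical model of the claim 1 control circuitry (illustrative only; names are assumptions).
    def memory_sync_after_maintenance(maintained_contexts, active_contexts, pending_updates):
        """Handle a memory synchronisation request issued after maintenance operations.

        maintained_contexts: contexts associated with the maintenance operation(s)
        active_contexts: context tracking data (contexts in which processing has been carried out)
        pending_updates: callables representing memory updates still outstanding
        """
        if any(ctx in active_contexts for ctx in maintained_contexts):
            # A maintained context is indicated in the tracking data: implement a delay,
            # continuing until the pending memory updates have been performed.
            for update in pending_updates:
                update()
            pending_updates.clear()
        # If every maintained context is absent from the tracking data, no delay is implemented.
        return "memory synchronisation performed"

    # Example: maintenance touched context 3, which the tracking data marks active -> delayed path.
    print(memory_sync_after_maintenance({3}, active_contexts={1, 3}, pending_updates=[]))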

Prosecution Timeline

Aug 21, 2023 — Application Filed
Jan 26, 2026 — Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596564
PRE-LOADING SOFTWARE APPLICATIONS IN A CLOUD COMPUTING ENVIRONMENT
2y 5m to grant — Granted Apr 07, 2026
Patent 12596594
REINFORCEMENT LEARNING POLICY SERVING AND TRAINING FRAMEWORK IN PRODUCTION CLOUD SYSTEMS
2y 5m to grant — Granted Apr 07, 2026
Patent 12591760
CROSS-INSTANCE INTELLIGENT RESOURCE POOLING FOR DISPARATE DATABASES IN CLOUD NATIVE ENVIRONMENT
2y 5m to grant — Granted Mar 31, 2026
Patent 12591449
Merging Streams For Call Enhancement In Virtual Desktop Infrastructure
2y 5m to grant — Granted Mar 31, 2026
Patent 12586064
BLOCKCHAIN PROVISION SYSTEM AND METHOD USING NON-COMPETITIVE CONSENSUS ALGORITHM AND MICRO-CHAIN ARCHITECTURE TO ENSURE TRANSACTION PROCESSING SPEED, SCALABILITY, AND SECURITY SUITABLE FOR COMMERCIAL SERVICES
2y 5m to grant — Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 87%
With Interview (+13.7%): 99%
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 702 resolved cases by this examiner. Grant probability derived from career allow rate.
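The 99% with-interview projection is consistent with applying the +13.7% interview lift multiplicatively to the 87% baseline; whether the tool actually computes it this way is an assumption:

    # Assumed multiplicative lift; not the tool's documented formula.
    baseline_grant_probability = 0.87
    interview_lift = 0.137
    with_interview = baseline_grant_probability * (1 + interview_lift)
    print(f"{with_interview:.1%}")   # ~98.9%, displayed as 99%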
