Prosecution Insights
Last updated: April 19, 2026
Application No. 18/254,322

APPARATUS AND METHOD FOR ADDRESS PRE-TRANSLATION TO ENHANCE DIRECT MEMORY ACCESS BY HARDWARE SUBSYSTEMS

Final Rejection — §103
Filed: May 24, 2023
Examiner: AYASH, MARWAN
Art Unit: 2133
Tech Center: 2100 — Computer Architecture & Software
Assignee: Intel Corporation
OA Round: 2 (Final)
Grant Probability: 69% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 9m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 69% (183 granted / 266 resolved; +13.8% vs TC avg) — above average
Interview Lift: +26.1% among resolved cases with interview — strong
Typical Timeline: 3y 9m average prosecution; 20 applications currently pending
Career History: 286 total applications across all art units

Statute-Specific Performance

§101: 8.0% (-32.0% vs TC avg)
§103: 67.8% (+27.8% vs TC avg)
§102: 3.1% (-36.9% vs TC avg)
§112: 13.4% (-26.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 266 resolved cases.

Office Action

§103
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Response to Amendment

This Office action has been issued in response to the amendment filed 08/21/25. Claims 1-20 are pending in this application. Applicant's arguments have been carefully considered, but are not all persuasive. The examiner appreciates Applicant's effort to distinguish over the cited prior art by amendment; however, upon further consideration and/or search, the claims remain unpatentable over the cited prior art for the reasons articulated in the "Response to Arguments" section below. All claims pending in the instant application remain rejected, and clarification regarding why the claims are not in condition for allowance is provided below in order to further prosecution efficiently. Accordingly, this action is made FINAL.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103(a) are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Gopal (US PGPUB # 20190095343) in view of Kegel'818 (US PGPUB # 20110022818), further in view of Kegel'724 (US PGPUB # 20110202724).

With respect to independent claims 1 and 14, Gopal/Kegel discloses:

An apparatus comprising: a processor to execute an enqueue instruction to submit, to a hardware subsystem, a job descriptor describing a job to be performed [a request descriptor defining a job to be performed by an accelerator and referencing virtual addresses (VAs) and sizes of one or more buffers is enqueued via execution of a thread on a processor core - Gopal abstract],

the job descriptor to at least include an indication whether pre-translation by an input-output memory management unit (IOMMU) is to occur [descriptor includes hints comprising physical addresses or virtual address to physical address (VA-PA) translations to… speculatively pre-fetch buffer data and speculatively start processing the pre-fetched buffer data on the accelerator - Gopal abstract; for maximal efficiency, this process should be done as soon as the job descriptor arrives into the accelerator complex - Gopal 0026; read descriptor and start address translation service (ATS) in parallel - Gopal fig 9] [a prefetch immediate command 326 can contain the address translation information 322 within the body of the prefetch immediate command 326. The IOMMU 302 can then receive address translation information 322 via the prefetch immediate command 326. Accordingly, the IOMMU 302 does not need to use the page-table walker 314 to walk the page tables 320 to obtain the address translation information 322. The information in the prefetch immediate command 326 can be loaded directly into the target IOTLB 312 without performing any page-table walking - Kegel'724, 0040-0041, 0065],

and reference to a memory location in which data required to perform the job is stored [referencing virtual addresses (VAs) and sizes of one or more buffers - Gopal abstract, fig 6a/b], the memory location referenced by a first memory address in a first address space [descriptor includes hints comprising physical addresses or virtual address to physical address (VA-PA) translations that are obtained from one or more TLBs associated with the core using the buffer VAs. Under another approach employing TLB snooping, the buffer VAs are used as lookups and matching TLB entries (VA-PA translations) are used as hints. The hints are used to speculatively pre-fetch buffer data and speculatively start processing the pre-fetched buffer data on the accelerator - Gopal abstract, fig 6a/b];

wherein the processor is to inspect the job descriptor to determine the first memory address when pre-translation by the IOMMU is to occur and to generate a pre-translation request including the first memory address when the pre-translation by the IOMMU is to occur [descriptor includes hints comprising physical addresses or VA-PA translations to… speculatively pre-fetch buffer data and speculatively start processing the pre-fetched buffer data on the accelerator - Gopal abstract; for maximal efficiency, this process should be done as soon as the job descriptor arrives into the accelerator complex - Gopal 0026; read descriptor and start address translation service (ATS) in parallel - Gopal fig 9; issue prefetch immediate to IOMMU - Kegel'724, 0040-0041, 0065];

and the IOMMU [IOMMU - Gopal 0026-0028] to obtain an address translation for the memory location responsive to a pre-translation request from the processor [cores/threads that submit the accelerator request to the accelerator so that the accelerator can start its computation as early as possible - Gopal 0024; snooping the processor/thread/core to the accelerator is functionally equivalent to a pre-translation request - Gopal 0026; although Gopal does not explicitly disclose a pre-translation request, nevertheless, in the same field of endeavor, Kegel'818 teaches request pre-translation: if the request is not marked as pre-translated or if the request is a translation request, IOMMU 26 may do a lookup within cache 30 for the translation (block 604). If the translation is present, the IOMMU 26 may provide the translation back to the requester, or provide the translation along with the request to the memory controller 18 - Kegel'818 0061],

the address translation obtained by the IOMMU prior to receiving a request for the data from the hardware subsystem to perform the job [for maximal efficiency, this process should be done as soon as the job descriptor arrives into the accelerator complex, to avoid delays - Gopal 0026 in view of Kegel'724 abstract, 0040-0041, 0065], the address translation comprising a mapping of the first address in the first address space to a second address in a second address space [translations are between first and second address spaces - Gopal abstract],

wherein responsive to the memory access request, the IOMMU is to retrieve the data from the memory location based on the address translation and to provide the data to the hardware subsystem to fulfill the request [speculatively pre-fetch buffer data and speculatively start processing the pre-fetched buffer data on the accelerator - Gopal abstract; steps 910-928 in fig 9 of Gopal; request/completion queue used to fulfill requests - Gopal 0053].

Gopal does not explicitly disclose a pre-translation request; nevertheless, in the same field of endeavor, Kegel'818 teaches request pre-translation: if the request is not marked as pre-translated or if the request is a translation request, IOMMU 26 may do a lookup within cache 30 for the translation (block 604). If the translation is present, the IOMMU 26 may provide the translation back to the requester, or provide the translation along with the request to the memory controller 18 (Kegel'818 0061). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to implement a pre-translation request in the invention of Gopal as taught by Kegel'818, because it would be advantageous for achieving access request and instruction performance enhancements via a computation offload engine; in other words, this functionality may allow advanced computation architectures such as compute offload, user-level I/O, and accelerated I/O devices to be used more seamlessly in virtualized systems (Kegel'818 0006, 0022-0023).

Gopal/Kegel'818 does not explicitly disclose a job descriptor to at least include an indication whether pre-translation by an input-output memory management unit (IOMMU) is to occur. Nevertheless, in the same field of endeavor, Kegel'724 teaches IOMMU architected TLB support, disclosing a software-issued IOMMU command to pre-load the IOMMU cache (pre-translation by the IOMMU to occur) before later access (Kegel'724 abstract, 0040-0041, 0065). Therefore Gopal/Kegel'818/Kegel'724 teaches all limitations of the instant claims. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to use a job descriptor that at least includes an indication whether pre-translation by an IOMMU is to occur in the invention of Gopal/Kegel'818, as taught by Kegel'724, because it would be advantageous for improving IOMMU performance (Kegel'724 0010).
With respect to dependent claims 2 and 15, Gopal/Kegel'818/Kegel'724 discloses wherein the hardware subsystem is to perform the job using the data received from the IOMMU [the IOMMU "snoops" the CPU core TLB and gets the translations that are cached there. These can be broadcast to all CPU cores, or done more efficiently in a point-to-point manner (which requires the information of which thread/core submitted the job). Note that for maximal efficiency, this process should be done as soon as the job descriptor arrives into the accelerator complex - Gopal 0026-0028].

With respect to dependent claims 3 and 16, Gopal/Kegel'818/Kegel'724 discloses wherein the request is a direct memory access (DMA) request to access the memory [Gopal 0053].

With respect to dependent claims 4 and 17, Gopal/Kegel'818/Kegel'724 discloses a local cache of the IOMMU to store the address translation [Gopal 0030, 0033, 0098].

With respect to dependent claims 5 and 18, Gopal/Kegel'818/Kegel'724 discloses wherein the enqueue instruction is to specify a memory address of the job descriptor and an identifier of the hardware subsystem [Gopal 0081, 0084-0085].

With respect to dependent claims 6 and 19, Gopal/Kegel'818/Kegel'724 discloses wherein the job descriptor comprises a pre-translation indicator to indicate whether the processor is to send the pre-translation request to the IOMMU [Gopal 0028-0031, fig 9 step 912].

With respect to dependent claims 7 and 20, Gopal/Kegel'818/Kegel'724 discloses wherein the processor is to determine the first address from the job descriptor and provide the first address to the IOMMU when the pre-translation indicator is set to a first value [Gopal 0028-0031, fig 9 step 912].

With respect to dependent claim 8, Gopal/Kegel'818/Kegel'724 discloses wherein the processor is further to provide information to the IOMMU to identify one or more page tables from which the IOMMU is to obtain the address translation [Gopal 0059].

With respect to dependent claim 9, Gopal/Kegel'818/Kegel'724 discloses wherein the processor is not to determine the first address from the job descriptor and/or not to provide the first address to the IOMMU when the pre-translation indicator is set to a second value [an instruction is implemented that submits a descriptor to an accelerator with additional meta-data that contains valid VA-PA translations that have been read or copied from the CPU translation lookaside buffer(s) (TLB(s)) for the core executing an instruction thread including the instruction. Generally, such an instruction may be added to the processor's instruction set architecture (ISA) as a new ISA instruction - Gopal 0025].

With respect to dependent claim 10, Gopal/Kegel'818/Kegel'724 discloses wherein the first address space is a virtual address space and the second address space is a physical address space, and wherein the address translation comprises a virtual-to-physical address translation for the first address [Gopal abstract].

With respect to dependent claim 11, Gopal/Kegel'818/Kegel'724 discloses wherein the memory location is referenced directly by the job descriptor [direct reference - Gopal abstract].

With respect to dependent claim 12, Gopal/Kegel'818/Kegel'724 discloses wherein the memory location is referenced indirectly by the job descriptor [hint - Gopal 0028].

With respect to dependent claim 13, Gopal/Kegel'818/Kegel'724 discloses wherein the processor is to store the job descriptor into a job queue of the hardware subsystem responsive to an execution of the enqueue instruction [Gopal 0053].

Response to Arguments

Applicant's arguments have been fully considered but are not persuasive in view of the prior art. All claims pending in the instant application remain rejected. Please note that any rejections/objections not maintained from the previous Office action have been rectified either by applicant's amendment and/or persuasive argument(s).
Regarding applicant's arguments on pages 6-8 that the amended claims are not taught by the cited art: the examiner respectfully submits that the amended grounds of rejection, necessitated by amendments to the claims, have rendered the remarks moot/unpersuasive, particularly in view of the newly cited reference to Kegel'724. Remaining arguments are understood to be predicated on the previous arguments being persuasive and are thus unpersuasive at least on dependency merits. All remarks are understood to have been addressed herein. If any issues remain which may be clarified by the examiner, the applicant is invited to contact the examiner to set up a telephone interview. When responding to this Office action, any new claims and/or limitations should be accompanied by a reference to where they are supported in the original disclosure.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARWAN AYASH, whose telephone number is (571) 270-1179. The examiner can normally be reached 9:00 a.m.-7:30 p.m., Monday-Thursday. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Rocio del Mar Perez-Velez, can be reached at 571-270-5935. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/Marwan Ayash/
Examiner, Art Unit 2133

/ROCIO DEL MAR PEREZ-VELEZ/
Supervisory Patent Examiner, Art Unit 2133

Prosecution Timeline

May 24, 2023 — Application Filed
Apr 11, 2025 — Non-Final Rejection (§103)
Aug 21, 2025 — Response Filed
Oct 20, 2025 — Applicant Interview (Telephonic)
Oct 24, 2025 — Examiner Interview Summary
Nov 18, 2025 — Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591390 — APPARATUS AND METHOD FOR PROCESSING READ COMMAND IN ZONED NAMESPACE BASED ON DETERIORATION STATE OF MEMORY DEVICE (granted Mar 31, 2026; 2y 5m to grant)
Patent 12554437 — CREATING THICK-PROVISIONED VOLUME ACCORDING TO A QUALITY OF SERVICE POLICY BASED ON THE WORKLOAD OF A CLUSTER (granted Feb 17, 2026; 2y 5m to grant)
Patent 12547545 — SYSTEM AND METHOD FOR SUPPORTING HIGH AVAILABILITY BY USING IN-MEMORY CACHE AS A DATABASE (granted Feb 10, 2026; 2y 5m to grant)
Patent 12475045 — Adaptive Address Tracking (granted Nov 18, 2025; 2y 5m to grant)
Patent 12475040 — SELECTING DATA TRANSFER UNITS ASSOCIATED WITH A DATA STREAM FOR GARBAGE COLLECTION (granted Nov 18, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 69%
With Interview: 95% (+26.1%)
Median Time to Grant: 3y 9m
PTA Risk: Moderate
Based on 266 resolved cases by this examiner. Grant probability derived from career allow rate.
