Prosecution Insights
Last updated: April 19, 2026
Application No. 18/822,815

SCALAR CORE INTEGRATION

Non-Final OA §DP
Filed: Sep 03, 2024
Examiner: NGUYEN, HAU H
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Intel Corporation
OA Round: 1 (Non-Final)
Grant Probability: 90% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 90% — above average (807 granted / 892 resolved; +28.5% vs TC avg)
Interview Lift: +8.9% — moderate lift, among resolved cases with interview
Typical Timeline: 2y 9m average prosecution
Career History: 914 total applications across all art units; 22 currently pending
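The headline figures above can be reproduced from the raw counts. A minimal sketch (variable names are illustrative, not the product's API; the "implied TC average" is an inference from the stated delta, not a published figure):

```python
# Reproduce the examiner dashboard's headline statistics from the raw
# counts listed above. All inputs come from the stats on this page.

granted = 807
resolved = 892

allow_rate = granted / resolved * 100        # career allow rate, percent
print(f"Career allow rate: {allow_rate:.1f}%")   # ~90.5%, displayed as 90%

# The "+28.5% vs TC avg" delta implies a Tech Center average allow rate:
tc_avg = allow_rate - 28.5
print(f"Implied TC average allow rate: {tc_avg:.1f}%")   # ~62.0%
```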

Statute-Specific Performance

§101: 5.5% (-34.5% vs TC avg)
§103: 58.0% (+18.0% vs TC avg)
§102: 19.2% (-20.8% vs TC avg)
§112: 3.8% (-36.2% vs TC avg)
Deltas measured against a Tech Center average estimate • Based on career data from 892 resolved cases
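Subtracting each stated delta from the examiner's statute-specific rate recovers the baseline each comparison was made against. A small sketch (the dashboard does not label the underlying metric, so the values are treated simply as percentages):

```python
# Statute-specific rates and their deltas vs the Tech Center average,
# as listed above. statute -> (examiner rate %, delta vs TC avg %)
rates = {
    "101": (5.5, -34.5),
    "103": (58.0, +18.0),
    "102": (19.2, -20.8),
    "112": (3.8, -36.2),
}

# Recover the implied Tech Center baseline for each statute.
for statute, (rate, delta) in rates.items():
    tc_avg = rate - delta
    print(f"§{statute}: examiner {rate}% vs implied TC avg {tc_avg:.1f}%")
```

As printed, every implied baseline works out to 40.0%, which suggests the dashboard compares each statute against a single overall estimate rather than per-statute averages — an inference from the numbers shown, not documented behavior.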

Office Action

§DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 12/09/2024 was filed after the mailing date of the application. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 21-38 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-18 of U.S. Patent No. 12,117,962 (Patent ‘962, hereinafter). Although the claims at issue are not identical, they are not patentably distinct from each other because all the features of current claims 21-38 are already included in claims 1-18 of Patent ‘962. See tables below.

Table I:
Current Application 18/822,815    U.S. Patent No. 12,117,962
21-38                             1-18

Table II:
Current Application 18/822,815, claim 21:
(New) An apparatus comprising:
  a pre-processor communicably coupled to a scalar processor complex comprising a plurality of scalar processor cores, a vector processor complex comprising a plurality of vector processor cores, and a hardware accelerator bank comprising a tensor core to perform matrix processing using a plurality of operand precisions,
  wherein the hardware pre-processor to:
    receive a binary translation of a code segment of a plurality of code segments corresponding to a set of workload instructions for a graphics workload from a host processor;
    analyze operations of the binary translation to identify whether the operations are suitable for execution by one of the scalar processor complex, the vector processor complex, or the hardware accelerator bank; and
    assign, to the scalar processor complex, the operations of the binary translation that are identified as suitable for execution by the scalar processor complex,
  wherein different code segments of the plurality of code segments are assigned to one or more of the scalar processor complex, the vector processor complex, or the hardware accelerator bank based on analysis of binary translations of the plurality of code segments.

U.S. Patent No. 12,117,962, claim 1:
An apparatus comprising:
  a scalar processor complex comprising a plurality of scalar processor cores;
  a vector processor complex comprising a plurality of vector processor cores;
  a hardware accelerator bank comprising a tensor core to perform matrix processing for deep learning operations using a plurality of operand precisions; and
  a pre-processor communicably coupled to the scalar processor complex, the vector processor complex, and the hardware accelerator bank, wherein the pre-processor to:
    receive a binary translation of a code segment of a plurality of code segments corresponding to a set of workload instructions for a graphics workload from a host processor;
    analyze operations of the binary translation to identify whether the operations are suitable for execution by one of the scalar processor complex, the vector processor complex, or the hardware accelerator bank; and
    assign, to the scalar processor complex, the operations of the binary translation that are identified as suitable for execution by the scalar processor complex.

From the tables above, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention since the current claims 21-38 are just an obvious version of claims 1-18 of Patent ‘962.

For the same reason, claims 21-38 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-12, 19-24 of U.S. Patent No. 11,762,804 (Patent ‘804, hereinafter). Although the claims at issue are not identical, they are not patentably distinct from each other because all the features of current claims 21-38 are already included in claims 1-12, 19-24 of Patent ‘804. See tables below.

Table I:
Current Application 18/822,815    U.S. Patent No. 11,762,804
21-38                             1-12, 19-24

Table II:
Current Application 18/822,815, claim 21:
(New) An apparatus comprising:
  a pre-processor communicably coupled to a scalar processor complex comprising a plurality of scalar processor cores, a vector processor complex comprising a plurality of vector processor cores, and a hardware accelerator bank comprising a tensor core to perform matrix processing using a plurality of operand precisions,
  wherein the hardware pre-processor to:
    receive a binary translation of a code segment of a plurality of code segments corresponding to a set of workload instructions for a graphics workload from a host processor;
    analyze operations of the binary translation to identify whether the operations are suitable for execution by one of the scalar processor complex, the vector processor complex, or the hardware accelerator bank; and
    assign, to the scalar processor complex, the operations of the binary translation that are identified as suitable for execution by the scalar processor complex,
  wherein different code segments of the plurality of code segments are assigned to one or more of the scalar processor complex, the vector processor complex, or the hardware accelerator bank based on analysis of binary translations of the plurality of code segments.

U.S. Patent No. 11,762,804, claim 21:
An apparatus comprising:
  a scalar processor complex comprising a plurality of scalar processor cores;
  a vector processor complex comprising a plurality of vector processor cores;
  a hardware accelerator bank comprising a tensor core to perform matrix processing for deep learning operations using a plurality of operand precisions; and
  a pre-processor communicably coupled to the scalar processor complex, the vector processor complex, and the hardware accelerator bank, wherein the pre-processor to:
    receive a binary translation of a code segment of the plurality of code segments, the plurality of code segments corresponding to a set of workload instructions for a graphics workload from a host processor;
    analyze operations of the binary translation to identify whether the operations are suitable for execution by one of the scalar processor complex, the vector processor complex, or the hardware accelerator bank; and
    assigning the operations of the binary translation to the one of the scalar processor complex, the vector processor complex, or the hardware accelerator bank that is identified as suitable for execution of the operations.

From the tables above, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention since the current claims 21-38 are just an obvious version of claims 1-12, 19-24 of Patent ‘804.

Claims 21-38 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-5, 7-12, 14-19, 21 of U.S. Patent No. 11,016,929 (Patent ‘929, hereinafter). Although the claims at issue are not identical, they are not patentably distinct from each other because all the features of current claims 21-38 are already included in claims 1-5, 7-12, 14-19, 21 of Patent ‘929. See tables below.

Table I:
Current Application 18/822,815    U.S. Patent No. 11,016,929
21-38                             1-5, 7-12, 14-19, 21, respectively

Table II:
Current Application 18/822,815, claim 21:
(New) An apparatus comprising:
  a pre-processor communicably coupled to a scalar processor complex comprising a plurality of scalar processor cores, a vector processor complex comprising a plurality of vector processor cores, and a hardware accelerator bank comprising a tensor core to perform matrix processing using a plurality of operand precisions,
  wherein the hardware pre-processor to:
    receive a binary translation of a code segment of a plurality of code segments corresponding to a set of workload instructions for a graphics workload from a host processor;
    analyze operations of the binary translation to identify whether the operations are suitable for execution by one of the scalar processor complex, the vector processor complex, or the hardware accelerator bank; and
    assign, to the scalar processor complex, the operations of the binary translation that are identified as suitable for execution by the scalar processor complex,
  wherein different code segments of the plurality of code segments are assigned to one or more of the scalar processor complex, the vector processor complex, or the hardware accelerator bank based on analysis of binary translations of the plurality of code segments.

U.S. Patent No. 11,016,929, claim 1:
A general purpose graphics processing device comprising:
  a scalar processor complex comprising a plurality of scalar processors;
  a vector processor complex comprising a plurality of vector processors;
  a hardware accelerator bank comprising a plurality of specialized hardware accelerators; and
  a pre-processor communicably coupled to the scalar processor complex and the vector processor complex, the pre-processor to:
    receive a set of workload instructions for a graphics workload received at the graphics processing device from a host complex;
    determine, based on an analysis of a binary translation of the set of workload instructions, a first subset of operations in the set of operations that is suitable for execution by the scalar processor complex, a second subset of operations in the set of operations that is suitable for execution by the vector processor complex, and a third subset of operations in the set of operations that is suitable for execution by the hardware accelerator bank;
    assign the first subset of operations to the scalar processor complex for execution to generate a first set of outputs;
    assign the second subset of operations to the vector processor complex for execution to generate a second set of outputs; and
    assign the third subset of operations to the hardware accelerator bank for execution to generate a third set of outputs.

From the tables above, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the claim language to be as currently claimed since the current claims 21-38 are similar in scope to claims 1-5, 7-12, 14-19, 21 of Patent ‘929.

For the same reason, claims 21-37 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-5, 7-13, 21-25 of U.S. Patent No. 11,409,693 (Patent ‘693, hereinafter).
Although the claims at issue are not identical, they are not patentably distinct from each other because all the features of current claims 21-37 are already included in claims 1-5, 7-13, 21-25 of Patent ‘693. See tables below.

Table I:
Current Application 18/822,815    U.S. Patent No. 11,409,693
21-37                             1-5, 7-13, 21-25, respectively

Table II:
Current Application 18/822,815, claim 21:

(New) An apparatus comprising:
  a pre-processor communicably coupled to a scalar processor complex comprising a plurality of scalar processor cores, a vector processor complex comprising a plurality of vector processor cores, and a hardware accelerator bank comprising a tensor core to perform matrix processing using a plurality of operand precisions,
  wherein the hardware pre-processor to:
    receive a binary translation of a code segment of a plurality of code segments corresponding to a set of workload instructions for a graphics workload from a host processor;
    analyze operations of the binary translation to identify whether the operations are suitable for execution by one of the scalar processor complex, the vector processor complex, or the hardware accelerator bank; and
    assign, to the scalar processor complex, the operations of the binary translation that are identified as suitable for execution by the scalar processor complex,
  wherein different code segments of the plurality of code segments are assigned to one or more of the scalar processor complex, the vector processor complex, or the hardware accelerator bank based on analysis of binary translations of the plurality of code segments.

U.S. Patent No. 11,409,693, claim 1:
An apparatus comprising:
  a scalar processor complex comprising a plurality of scalar processor cores;
  a vector processor complex comprising a plurality of vector processor cores;
  a hardware accelerator bank comprising a tensor core to perform matrix processing for deep learning operations using a plurality of operand precisions; and
  a pre-processor communicably coupled to the scalar processor complex, the vector processor complex, and the hardware accelerator bank, the pre-processor to:
    receive a set of workload instructions for a graphics workload from a host processor;
    determine a first subset of operations in the set of operations that is suitable for execution by the scalar processor complex, a second subset of operations in the set of operations that is suitable for execution by the vector processor complex, and a third subset of operations in the set of operations that is suitable for execution by the hardware accelerator bank;
    assign the first subset of operations to the scalar processor complex for execution to generate a first set of outputs;
    assign the second subset of operations to the vector processor complex for execution to generate a second set of outputs; and
    assign the third subset of operations to the hardware accelerator bank for execution to generate a third set of outputs.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Hau H. Nguyen, whose telephone number is 571-272-7787. The examiner can normally be reached on MON-FRI from 8:30-5:30. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tammy Goddard, can be reached on (571) 272-7773. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system.
Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

/HAU H NGUYEN/
Primary Examiner, Art Unit 2611

Prosecution Timeline

Sep 03, 2024 — Application Filed
Feb 21, 2026 — Non-Final Rejection, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597194 — METHOD FOR OBTAINING IMAGE RELATED TO VIRTUAL REALITY CONTENT AND ELECTRONIC DEVICE SUPPORTING THE SAME (granted Apr 07, 2026; 2y 5m to grant)
Patent 12591435 — DEVICE LINK MANAGEMENT (granted Mar 31, 2026; 2y 5m to grant)
Patent 12586288 — DEVICE AND METHOD FOR GENERATING DYNAMIC TEXTURE MAP FOR 3 DIMENSIONAL DIGITAL HUMAN (granted Mar 24, 2026; 2y 5m to grant)
Patent 12573135 — GENERATION OF A DENSE POINT CLOUD OF A PHYSICAL OBJECT (granted Mar 10, 2026; 2y 5m to grant)
Patent 12573141 — METHOD AND DEVICE FOR LEARNING 3D MODEL RECONSTRUCTION (granted Mar 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 90%
With Interview: 99% (+8.9%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 892 resolved cases by this examiner. Grant probability derived from career allow rate.
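The "with interview" figure is consistent with a simple additive model: base grant probability plus the interview lift, capped at 100%. A minimal sketch of that arithmetic (the additive model is an inference from the numbers shown, not the vendor's documented method):

```python
# Combine the base grant probability with the interview lift, as the
# projections above appear to do (simple additive model, capped at 100%).

base = 90.0       # grant probability, percent
lift = 8.9        # interview lift, percentage points

with_interview = min(base + lift, 100.0)
print(f"With interview: {with_interview:.0f}%")   # 98.9, displayed as 99%
```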
