Prosecution Insights
Last updated: April 19, 2026
Application No. 18/136,233

ACCELERATING LINEAR ALGEBRA KERNELS FOR ANY PROCESSOR ARCHITECTURE

Status: Non-Final OA (§103)
Filed: Apr 18, 2023
Examiner: UNG, LANNY N
Art Unit: 2197
Tech Center: 2100 — Computer Architecture & Software
Assignee: Nvidia Corporation
OA Round: 3 (Non-Final)
Grant Probability: 71% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 3m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 71% (above average; 351 granted / 495 resolved; +15.9% vs TC avg)
Interview Lift: +25.4% among resolved cases with interview
Avg Prosecution: 3y 3m; 30 applications currently pending
Career History: 525 total applications across all art units

Statute-Specific Performance

§101: 19.8% (-20.2% vs TC avg)
§103: 49.0% (+9.0% vs TC avg)
§102: 18.3% (-21.7% vs TC avg)
§112: 7.8% (-32.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 495 resolved cases.

Office Action

§103
DETAILED ACTION Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office Action is in response to the Request for Continued Examination filed on October 14, 2025. Claims 1-20 are pending. Claims 1 and 11 have been amended. Response to Amendment Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1-2, 4-7, 10-12, 14-16 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Diamos et al. (US 2014/0165049) in view of Eichenberger et al. (US 2009/0083724) and in further view of Venkataramani et al. (US 2018/0157471). With respect to Claim 1, Diamos et al. 
disclose: a central processing unit (CPU) to perform a compiler to generate code to accelerate matrix operations; (system on chip that includes CPU 102, Paragraph 23; CPU executes the compiler 101, Paragraph 24) memory; (see Figure 1; system memory 104, Paragraph 21) a Peripheral Component Interconnect (PCI) communication bus; (see Figure 1; communication path such as a PCI Express, Paragraph 21) and a graphics processing unit (GPU) including a general processing cluster (GPC), where the GPC includes streaming multiprocessors (SMs) comprising: (see Figures 2 and 3; parallel processing subsystem may constitute a GPU and can include a processing cluster array with any number of general processing clusters (GPC) and a streaming multiprocessor is within each GPC, Paragraphs 23, 31 and 39) an instruction cache; (see Figure 3; instruction L1 cache 370) a dispatch unit; (see Figure 3; instruction unit 312) cores; (see Figure 3; execution/processing unit 302s) a load/store unit (LSU); (see Figure 3; LSU 303s) shared memory; (see Figure 3; shared memory 306) an L1 cache; (see Figure 3; L1 cache 320) Diamos et al. do not disclose: and wherein the compiler is to: obtain a computer program; extract a polyhedral representation of the computer program; determine a schedule using the polyhedral representation and using one or more transformations based on a processor architecture of the GPU such that the schedule determined by the compiler is based on both the polyhedral representation and the one or more transformations; and generate executable code based on the schedule and the processor architecture. However, Eichenberger et al. 
disclose: and wherein the compiler is to: obtain a computer program; (obtaining an original program, Paragraph 82, lines 9-11) extract a polyhedral representation of the computer program; (performing a polyhedral scan by a polyhedral scan module of the original program from the compiler’s IR into a polyhedral representation, referred to as the program statement view, Paragraph 82, lines 9-12) determine a schedule using the polyhedral representation and using one or more transformations based on a processor architecture such that the schedule determined by the compiler is based on both the polyhedral representation and the one or more transformations; (In the program statement view (polyhedral representation) of the program, a loop optimizer module 530 is used to perform transformations on the program statement view to optimize the code. Examples of transformations performed by the loop optimizer module 530 include loop interchange, parallel wavefront, and statement shifting loop transformations. The transformations performed by the loop optimizer module 530 serve to modify the schedule of each individual statement in the program statement view to achieve better data parallelism and/or data locality of the execution of the program (processor architecture), Paragraph 83, lines 1-11) and generate executable code based on the schedule and the processor architecture. (The resulting transformed program schedule and its corresponding domain are provided to a polyhedral code generator 540 which operates on the entire program as represented by the modified IR generated by the loop optimizer module 530, based on the program statement view 520 of the program output by the polyhedral scan module 510 (based on the schedule and the processor architecture). The polyhedral code generator 540 generates an abstract syntax tree (AST) representation of the program based on the modified IR. 
Some limited optimizations 550-560 may be applied to the entire program as represented by the AST, Paragraph 84; In essence, the module 540 is designed to generate valid code (generate executable code), possibly with overhead due to extra bound computation, if conditional, modulo calculus in bounds and/or conditional computations. It is then the responsibility of optimizations like 550, 560, and 570 to clean up some of the introduced inefficiencies as best as possible. The resulting optimized AST (based on the schedule and the processor architecture) is provided to a code emitter 570 which generates code (generate executable code) from the AST in the compiler's internal representation (IR) by simply converting the internal AST and stripping it of its polyhedral information and generating an equivalent structure that is familiar and recognized by the traditional compiler, Paragraph 86) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Eichenberger et al. into the teaching of Diamos et al. to include obtaining a computer program, extracting a polyhedral representation of the computer program, determining a schedule using the polyhedral representation and using one or more transformations based on a processor architecture of the GPU such that the schedule determined by the compiler is based on both the polyhedral representation and the one or more transformations and generating executable code based on the schedule and the processor architecture in order to provide source code optimization during compilation. (Eichenberger et al., Paragraph 82) Diamos et al. and Eichenberger et al. do not disclose: one or more transformations based on a processor architecture of the GPU However, Venkataramani et al. 
disclose: one or more transformations based on a processor architecture of the GPU (modify the representation of the kernel within an intermediate representation to optimize use of the parallel processing unit (PPU)’s memory hierarchy (one or more transformations based on a processor architecture of the GPU), Paragraphs 61, 156-158 and 245; PPUs such as a GPU, Paragraph 41) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Venkataramani et al. into the teaching of Diamos et al. and Eichenberger et al. to include one or more transformations based on a processor architecture of the GPU in order to help automatically generate code that efficiently utilizes a device’s memory hierarchy as well as not having the need for a user to hand code kernels for GPU execution. (Venkataramani et al., Paragraph 59, lines 12-20) With respect to Claim 2, all the limitations of Claim 1 have been addressed above; and Diamos et al. further disclose: wherein the GPU further comprises a scheduler unit. (see Figure 2; Task/Work unit which receives the task and dynamically schedules the processing tasks and child processing tasks for execution by the GPCs, Paragraph 38) With respect to Claim 4, all the limitations of Claim 1 have been addressed above; and Diamos et al. and Venkataramani et al. do not disclose: wherein the polyhedral representation of the computer program is determined from a directed acyclic graph (DAG) of the computer program. However, Eichenberger et al. disclose: wherein the polyhedral representation of the computer program is determined from a directed acyclic graph (DAG) of the computer program. 
(generating a polyhedral representation from a compiler’s internal representation (IR) of statements, conditions, and loops, Paragraph 82, lines 3-8; intermediate representation (IR) is an abstract syntax tree (AST) (DAG), Paragraph 65, lines 1-9) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Eichenberger et al. into the teaching of Diamos et al. and Venkataramani et al. to include wherein the polyhedral representation of the computer program is determined from a directed acyclic graph (DAG) of the computer program in order to provide source code optimization during compilation. (Eichenberger et al., Paragraph 82) With respect to Claim 5, all the limitations of Claim 1 have been addressed above; and Diamos et al. further disclose: wherein the GPU further comprises a memory partition unit. (see Figure 2; memory interface includes a number D of partition units that are each directly coupled to a portion of parallel processing memory, Paragraph 33) With respect to Claim 6, all the limitations of Claim 1 have been addressed above; and Diamos et al. further disclose: wherein the GPU further comprises a crossbar (Xbar). (see Figure 2, crossbar unit, Paragraph 34) With respect to Claim 7, all the limitations of Claim 1 have been addressed above; and Diamos et al. further disclose: wherein the GPU further comprises an input/output (I/O) unit to interface with the PCI communication bus. (see Figure 2; I/O unit 205 generates packets for transmission on communication path 113 which is a PCI Express link, Paragraph 30) With respect to Claim 10, all the limitations of Claim 1 have been addressed above; and Diamos et al. further disclose: wherein the SMs each further comprise one or more interconnects. (see Figure 3; memory and cache interconnect, Paragraph 52) With respect to Claim 11, Diamos et al. 
disclose: two or more integrated circuits comprising: (one or more parallel processing units, Paragraph 26) a central processing unit (CPU) to perform a compiler to generate code to accelerate matrix operations; (system on chip that includes CPU 102, Paragraph 23; CPU executes the compiler 101, Paragraph 24) memory; (see Figure 1; system memory 104, Paragraph 21) a Peripheral Component Interconnect (PCI) communication bus; (see Figure 1; communication path such as a PCI Express, Paragraph 21) and a graphics processing unit (GPU) including a general processing cluster (GPC), where the GPC includes streaming multiprocessors (SMs) comprising: (see Figures 2 and 3; parallel processing subsystem may constitute a GPU and can include a processing cluster array with any number of general processing clusters (GPC) and a streaming multiprocessor is within each GPC, Paragraphs 23, 31 and 39) an instruction cache; (see Figure 3; instruction L1 cache 370) a dispatch unit; (see Figure 3; instruction unit 312) cores; (see Figure 3; execution/processing unit 302s) a load/store unit (LSU); (see Figure 3; LSU 303s) shared memory; (see Figure 3; shared memory 306) an L1 cache; (see Figure 3; L1 cache 320) Diamos et al. do not disclose: and wherein the compiler is to: obtain a computer program; extract a polyhedral representation of the computer program; determine a schedule using the polyhedral representation and using one or more transformations based on a processor architecture of the GPU such that the schedule determined by the compiler is based on both the polyhedral representation and the one or more transformations; and generate executable code based on the schedule and the processor architecture. However, Eichenberger et al. 
disclose: and wherein the compiler is to: obtain a computer program; (obtaining an original program, Paragraph 82, lines 9-11) extract a polyhedral representation of the computer program; (performing a polyhedral scan by a polyhedral scan module of the original program from the compiler’s IR into a polyhedral representation, referred to as the program statement view, Paragraph 82, lines 9-12) determine a schedule using the polyhedral representation and using one or more transformations based on a processor architecture such that the schedule determined by the compiler is based on both the polyhedral representation and the one or more transformations; (In the program statement view (polyhedral representation) of the program, a loop optimizer module 530 is used to perform transformations on the program statement view to optimize the code. Examples of transformations performed by the loop optimizer module 530 include loop interchange, parallel wavefront, and statement shifting loop transformations, discussed in more detail hereafter. The transformations performed by the loop optimizer module 530 serve to modify the schedule of each individual statement in the program statement view to achieve better data parallelism and/or data locality of the execution of the program (processor architecture), Paragraph 83, lines 1-11) and generate executable code based on the schedule and the processor architecture. (The resulting transformed program schedule and its corresponding domain are provided to a polyhedral code generator 540 which operates on the entire program as represented by the modified IR generated by the loop optimizer module 530, based on the program statement view 520 of the program output by the polyhedral scan module 510 (based on the schedule and the processor architecture). The polyhedral code generator 540 generates an abstract syntax tree (AST) representation of the program based on the modified IR. 
Some limited optimizations 550-560 may be applied to the entire program as represented by the AST, Paragraph 84; In essence, the module 540 is designed to generate valid code (generate executable code), possibly with overhead due to extra bound computation, if conditional, modulo calculus in bounds and/or conditional computations. It is then the responsibility of optimizations like 550, 560, and 570 to clean up some of the introduced inefficiencies as best as possible. The resulting optimized AST (based on the schedule and the processor architecture) is provided to a code emitter 570 which generates code (generate executable code) from the AST in the compiler's internal representation (IR) by simply converting the internal AST and stripping it of its polyhedral information and generating an equivalent structure that is familiar and recognized by the traditional compiler, Paragraph 86) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Eichenberger et al. into the teaching of Diamos et al. to include obtaining a computer program, extracting a polyhedral representation of the computer program, determining a schedule using the polyhedral representation and using one or more transformations based on a processor architecture of the GPU such that the schedule determined by the compiler is based on both the polyhedral representation and the one or more transformations and generating executable code based on the schedule and the processor architecture in order to provide source code optimization during compilation. (Eichenberger et al., Paragraph 82) Diamos et al. and Eichenberger et al. do not disclose: one or more transformations based on a processor architecture of the GPU However, Venkataramani et al. 
disclose: one or more transformations based on a processor architecture of the GPU (modify the representation of the kernel within an intermediate representation to optimize use of the parallel processing unit (PPU)’s memory hierarchy (one or more transformations based on a processor architecture of the GPU), Paragraphs 61, 156-158 and 245; PPUs such as a GPU, Paragraph 41) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Venkataramani et al. into the teaching of Diamos et al. and Eichenberger et al. to include one or more transformations based on a processor architecture of the GPU in order to help automatically generate code that efficiently utilizes a device’s memory hierarchy as well as not having the need for a user to hand code kernels for GPU execution. (Venkataramani et al., Paragraph 59, lines 12-20) With respect to Claim 12, all the limitations of Claim 11 have been addressed above; and Diamos et al. further disclose: wherein the GPU further comprises a scheduler unit. (see Figure 2; Task/Work unit which receives the task and dynamically schedules the processing tasks and child processing tasks for execution by the GPCs, Paragraph 38) With respect to Claim 14, all the limitations of Claim 11 have been addressed above; and Diamos et al. further disclose: wherein the SMs further comprise a register file. (see Figure 3; local register file, Paragraph 40) With respect to Claim 15, all the limitations of Claim 11 have been addressed above; and Diamos et al. further disclose: further comprising one or more display devices. (see Figure 1; display device) With respect to Claim 16, all the limitations of Claim 11 have been addressed above; and Diamos et al. further disclose: further comprising a network interface. (see Figure 1; network adapter) With respect to Claim 19, all the limitations of Claim 11 have been addressed above; and Diamos et al. 
further disclose: wherein the SMs each further comprise one or more interconnects. (see Figure 3; memory and cache interconnect, Paragraph 52) Claims 3 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Diamos et al. (US 2014/0165049) in view of Eichenberger et al. (US 2009/0083724) in view of Venkataramani et al. (US 2018/0157471) and in further view of Rani et al. (US 2018/0083834). With respect to Claim 3, all the limitations of Claim 1 have been addressed above; and Diamos et al. and Eichenberger et al. do not disclose: wherein the compiler is to further obtain one or more configuration files comprising parameters to help determine the one or more transformations. However, Venkataramani et al. disclose: wherein the compiler is to further obtain one or more configurations comprising parameters to help determine the one or more transformations. (The parallel code generator 400 also may receive one or more settings, (one or more configurations) such as the code generation settings 428, for guiding or controlling the code generation process for the source program, as indicated at step 504. The options may indicate the target language of the generated code 326, such as CUDA code, OpenCL code, etc., memory implementation options, such as discrete or unified memory, the identity of a compiler tool chain, such as Nvidia's nvcc compiler or the OpenCL compiler, etc., Paragraph 76) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Venkataramani et al. into the teaching of Diamos et al. and Eichenberger et al. to include wherein the compiler is to further obtain one or more configurations comprising parameters to help determine the one or more transformations in order to help automatically generate code that efficiently utilizes a device’s memory hierarchy as well as not having the need for a user to hand code kernels for GPU execution. 
(Venkataramani et al., Paragraph 59, lines 12-20) Diamos et al., Eichenberger et al. and Venkataramani et al. do not explicitly disclose: one or more configuration files comprising parameters However, Rani et al. disclose: one or more configuration files comprising parameters (BMC may store configuration files carrying configuration parameters in storage, Paragraph 23) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Rani et al. into the teaching of Diamos et al., Eichenberger et al. and Venkataramani et al. to include one or more configuration files comprising parameters in order to eliminate the need to manually configure parameters for a specific configuration setting each time it is needed. With respect to Claim 20, all the limitations of Claim 11 have been addressed above; and Diamos et al. and Eichenberger et al. do not disclose: wherein the compiler is to further obtain one or more configuration files comprising parameters to help determine the one or more transformations. However, Venkataramani et al. disclose: wherein the compiler is to further obtain one or more configurations comprising parameters to help determine the one or more transformations. (The parallel code generator 400 also may receive one or more settings, (one or more configurations) such as the code generation settings 428, for guiding or controlling the code generation process for the source program, as indicated at step 504. 
The options may indicate the target language of the generated code 326, such as CUDA code, OpenCL code, etc., memory implementation options, such as discrete or unified memory, the identity of a compiler tool chain, such as Nvidia's nvcc compiler or the OpenCL compiler, etc., Paragraph 76) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Venkataramani et al. into the teaching of Diamos et al. and Eichenberger et al. to include wherein the compiler is to further obtain one or more configurations comprising parameters to help determine the one or more transformations in order to help automatically generate code that efficiently utilizes a device’s memory hierarchy as well as not having the need for a user to hand code kernels for GPU execution. (Venkataramani et al., Paragraph 59, lines 12-20) Diamos et al., Eichenberger et al. and Venkataramani et al. do not explicitly disclose: one or more configuration files comprising parameters However, Rani et al. disclose: one or more configuration files comprising parameters (BMC may store configuration files carrying configuration parameters in storage, Paragraph 23) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Rani et al. into the teaching of Diamos et al., Eichenberger et al. and Venkataramani et al. to include one or more configuration files comprising parameters in order to eliminate the need to manually configure parameters for a specific configuration setting each time it is needed. Claims 8 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Diamos et al. (US 2014/0165049) in view of Eichenberger et al. (US 2009/0083724) in view of Venkataramani et al. (US 2018/0157471) and in further view of Leigh et al. (US 2018/0032117). 
With respect to Claim 8, all the limitations of Claim 1 have been addressed above; and Diamos et al., Eichenberger et al. and Venkataramani et al. do not disclose: further comprising a hub to interface with one or more GPU interconnects. However, Leigh et al. disclose: further comprising a hub to interface with one or more GPU interconnects. (a processing module includes circuitry to interface (hub) with a specialized GPU interconnect such as NVLink, Paragraph 21) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Leigh et al. into the teaching of Diamos et al., Eichenberger et al. and Venkataramani et al. to include a hub to interface with one or more GPU interconnects in order to enable faster communication between components which leads to improved performance and scalability. With respect to Claim 17, all the limitations of Claim 11 have been addressed above; and Diamos et al., Eichenberger et al. and Venkataramani et al. do not disclose: further comprising a hub to interface with one or more GPU interconnects. However, Leigh et al. disclose: further comprising a hub to interface with one or more GPU interconnects. (a processing module includes circuitry to interface (hub) with a specialized GPU interconnect such as NVLink, Paragraph 21) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Leigh et al. into the teaching of Diamos et al., Eichenberger et al. and Venkataramani et al. to include a hub to interface with one or more GPU interconnects in order to enable faster communication between components which leads to improved performance and scalability. Claims 9 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Diamos et al. (US 2014/0165049) in view of Eichenberger et al. (US 2009/0083724) in view of Venkataramani et al. 
(US 2018/0157471) and in further view of Ryan Smith (“NVIDIA’s GF100: Architected for Gaming”, Jan 2010). With respect to Claim 9, all the limitations of Claim 1 have been addressed above; and Diamos et al., Eichenberger et al. and Venkataramani et al. do not disclose: wherein the GPC further comprises a raster engine. However, Ryan Smith discloses: wherein the GPC further comprises a raster engine. (each GPC has its own Raster Engine, Paragraph 6) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Ryan Smith into the teaching of Diamos et al., Eichenberger et al. and Venkataramani et al. to include wherein the GPC further comprises a raster engine in order to perform edge/triangle setup, rasterization and z-culling in a pipelined manner. (Ryan Smith, Paragraph 6) With respect to Claim 18, all the limitations of Claim 11 have been addressed above; and Diamos et al., Eichenberger et al. and Venkataramani et al. do not disclose: wherein the GPC further comprises a raster engine. However, Ryan Smith discloses: wherein the GPC further comprises a raster engine. (each GPC has its own Raster Engine, Paragraph 6) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Ryan Smith into the teaching of Diamos et al., Eichenberger et al. and Venkataramani et al. to include wherein the GPC further comprises a raster engine in order to perform edge/triangle setup, rasterization and z-culling in a pipelined manner. (Ryan Smith, Paragraph 6) Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Diamos et al. (US 2014/0165049) in view of Eichenberger et al. (US 2009/0083724) in view of Venkataramani et al. (US 2018/0157471) and in further view of Geeks3D (“What is a Texture Processor Cluster or TPC”, Mar 2010). 
With respect to Claim 13, all the limitations of Claim 11 have been addressed above; and Diamos et al., Eichenberger et al. and Venkataramani et al. do not disclose: wherein the SMs further comprise one or more special function units (SFUs). However, Geeks3D discloses: wherein the SMs further comprise one or more special function units (SFUs). (a streaming multiprocessor is made up of several streaming processors, several special function units, lines 3-5) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Geeks3D into the teaching of Diamos et al., Eichenberger et al. and Venkataramani et al. to include wherein the SMs further comprise one or more special function units (SFUs) in order to perform transcendental functions such as sine or cosine. (Geeks3D, lines 3-5) Response to Arguments Applicant’s arguments with respect to claim(s) 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to LANNY N UNG whose telephone number is (571)270-7708. The examiner can normally be reached Mon-Thurs 6am-4pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bradley Teets can be reached at 571-272-3338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. 
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /LANNY N UNG/Examiner, Art Unit 2197
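The rejection above turns on the polyhedral compilation flow attributed to Eichenberger: extract a polyhedral view of the program, transform the statement schedule (e.g., loop interchange), then generate code in the new execution order. As a minimal illustrative sketch only (not code from the application or any cited reference), the flow can be modeled in a few lines, with the iteration domain as a set of points and the schedule as a mapping to execution time:

```python
# Toy model of the polyhedral flow discussed above. All names here are
# illustrative inventions for this sketch, not from the references.

def iteration_domain(n, m):
    """Iteration domain of `for i in range(n): for j in range(m): S(i, j)`."""
    return [(i, j) for i in range(n) for j in range(m)]

def interchange_schedule(point):
    """Loop-interchange transformation: iteration (i, j) runs at time (j, i)."""
    i, j = point
    return (j, i)

def generate_execution_order(domain, schedule):
    """'Code generation' step: execute statements in lexicographic schedule order."""
    return sorted(domain, key=schedule)

order = generate_execution_order(iteration_domain(2, 3), interchange_schedule)
# After interchange, j becomes the outer loop:
# order == [(0, 0), (1, 0), (0, 1), (1, 1), (0, 2), (1, 2)]
```

Architecture-specific transformations of the kind Venkataramani is cited for would amount to choosing the schedule function based on the target's memory hierarchy rather than fixing it in advance.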

Prosecution Timeline

Apr 18, 2023: Application Filed
Nov 21, 2024: Non-Final Rejection — §103
May 27, 2025: Response Filed
Jul 10, 2025: Final Rejection — §103
Oct 14, 2025: Request for Continued Examination
Oct 17, 2025: Response after Non-Final Action
Nov 28, 2025: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12547527: INTELLIGENT CUSTOMER SERVICE REQUEST PROCESSING MECHANISM (2y 5m to grant; granted Feb 10, 2026)
Patent 12481500: ACCELERATING LINEAR ALGEBRA KERNELS FOR ANY PROCESSOR ARCHITECTURE (2y 5m to grant; granted Nov 25, 2025)
Patent 12474919: FIRMWARE DISTRIBUTION METHOD FOR AN INFORMATION HANDLING SYSTEM (2y 5m to grant; granted Nov 18, 2025)
Patent 12468519: SYSTEMS AND METHODS FOR IN-PLACE APPLICATION UPGRADES (2y 5m to grant; granted Nov 11, 2025)
Patent 12461845: SYSTEM AND METHOD FOR DETECTING SOFTWARE TESTS THAT ARE SUSPECTED AS TESTS THAT ALWAYS PROVIDE FALSE POSITIVE (2y 5m to grant; granted Nov 04, 2025)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 71% (96% with interview, +25.4%)
Median Time to Grant: 3y 3m
PTA Risk: High
Based on 495 resolved cases by this examiner. Grant probability derived from career allow rate.
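The headline figures above can be reproduced from the raw counts reported for this examiner. As a sketch only: the 71% grant probability is the career allow rate (351 granted of 495 resolved), and the 96% "with interview" figure is consistent with treating the +25.4% lift as an additive percentage-point adjustment, which is an assumption about how the tool combines the numbers, not a documented formula:

```python
# Reproducing the dashboard arithmetic from the reported counts.
# The additive interview adjustment is an assumed formula, not documented.

granted, resolved = 351, 495
career_allow_rate = granted / resolved           # ~0.709, shown as 71%
interview_lift_pts = 25.4                        # percentage points

base_pct = round(career_allow_rate * 100)        # 71
with_interview_pct = round(base_pct + interview_lift_pts)  # 96
```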
