Prosecution Insights
Last updated: April 19, 2026
Application No. 18/064,251

FRAMEWORK FOR DEVELOPMENT AND DEPLOYMENT OF PORTABLE SOFTWARE OVER HETEROGENOUS COMPUTE SYSTEMS

Non-Final OA §103
Filed: Dec 10, 2022
Examiner: SOLTANZADEH, AMIR
Art Unit: 2191
Tech Center: 2100 — Computer Architecture & Software
Assignee: Saankhya Labs Pvt Ltd.
OA Round: 5 (Non-Final)
Grant Probability: 81% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 2y 6m
With Interview: 98%

Examiner Intelligence

Grants 81% — above average
Career Allow Rate: 81% (340 granted / 421 resolved; +25.8% vs TC avg)
Interview Lift: +16.9% (strong), resolved cases with vs. without interview
Typical Timeline: 2y 6m avg prosecution; 35 currently pending
Career History: 456 total applications across all art units

Statute-Specific Performance

§101: 17.7% (-22.3% vs TC avg)
§103: 60.4% (+20.4% vs TC avg)
§102: 3.4% (-36.6% vs TC avg)
§112: 10.1% (-29.9% vs TC avg)
Deltas are relative to the Tech Center average estimate. Based on career data from 421 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-2, 4-12, and 14-20 are presented for examination.

Claim Rejections - 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-2, 5-6, 11-12 and 15-16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Martin (US 20110252411 A1) in view of Zhu (US 20120317556 A1), further in view of Hassan (US 20230063568 A1) and Khanna (US 8296419 B1).

Regarding Claim 1, Martin (US 20110252411 A1) teaches A system, comprising: a processor; a memory storing instructions which when executed by the processor, perform operations to: upon parsing [an annotated code associated with an algorithmic routine], identify a first representation of the annotated code, wherein the first representation of the annotated code includes a plurality of tasks corresponding to the algorithmic routine (Para 0066, process 800 may include performing a static analysis of the TCE code to identify portions of the TCE code (block 830), and determining, prior to execution and based on the input size/type information, portion(s) of the TCE code that are more efficiently executed by a GPU (block 840); Fig 7; Para 0061, FIG. 7 is a diagram of example program code 700 that may be implemented by execution engine 440.
In one implementation, program code 700 may include portions of program code (e.g., TCE code 530) created using a TCE. As shown in FIG. 7, program code 700 may include a portion 710 (e.g., serial code) that may be more efficiently executed by CPU 140, and may include a portion 720 (e.g., parallel code) that may be more efficiently executed by GPU 130)

Examiner Comments: Fig. 7 shows code with a representation of tasks to be done on a CPU and tasks to be done on a GPU. The code in Fig. 7 is interpreted as the claimed algorithmic routine.

Based on the analysis: determine one or more computing resources from a plurality of computing resources (Para 0067, process 800 may include determining, prior to execution and based on the input size/type information, portion(s) of the TCE code that are more efficiently executed by a CPU (block 850), and compiling the portions of the TCE code that are executable by the GPU and the CPU (block 860)); and execute one or more tasks from the plurality of tasks on the determined one or more computing resources, [based on the scheduled execution] wherein the plurality of tasks are associated with the algorithmic routine (Para 0068, process 800 may include providing, to the GPU, the compiled portion(s) of the TCE code executable by the GPU (block 870), and providing, to the CPU, the compiled portion(s) of the TCE code executable by the CPU (block 880)).
Martin did not specifically teach: an annotated code associated with an algorithmic routine; transform the first representation of the annotated code associated with the software algorithmic routine into an intermediate form, wherein the intermediate form includes the plurality of tasks associated with the algorithmic routine; based on a plurality of constraint definitions, a hardware architecture description and a plurality of optimization metrics associated with the algorithmic routine, analyse the intermediate form of the algorithmic routine; schedule an execution of the one or more tasks from the plurality of tasks associated with the algorithmic routine on the determined one or more computing resources; and in response to determining that the one or more computing resources are added, removed or fail to operate, modifying the hardware architecture description and reconfigure the one or more computing resources for executing the one or more tasks.

However, Zhu (US 20120317556 A1) teaches an annotated code associated with an algorithmic routine (Para 0046, Higher level code 111 includes code annotation 112 identifying code portion 116 as a kernel for execution on a co-processor (e.g., a GPU or other accelerator)); transform the first representation of the annotated code associated with the software algorithmic routine into an intermediate form, wherein the intermediate form includes the plurality of tasks associated with the algorithmic routine (Para 0049, Parser/semantic checker 102 can create intermediate representation 181 from higher level code 111. Parser/semantic checker 102 can split kernel related code into stub routine 172 and calling context code into proxy routine 173 in accordance with code annotation 112 (i.e., code annotation 112 demarks the boundary between kernel code and other code)); based on a plurality of constraint definitions, a hardware architecture description and a plurality of optimization metrics associated with the algorithmic routine, analyse the intermediate form of the algorithmic routine (Para 0055, Act 304 includes an act of analyzing an intermediate representation of the stub code to derive usage information about the declared properties of the kernel (act 306). For example, analysis module 107 can analyze stub routine 172 to derive usage information 176 about declared properties 171. Usage information 176 can include an indication that declared properties 171 are used to a lesser extent than declared).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have combined Martin's teaching with Zhu's in order to optimize kernel execution at runtime in a computer system by linking the runtime optimization objects to provide the proxy code with access to derived usage information (Zhu [Summary]).

Martin and Zhu did not specifically teach: schedule an execution of the one or more tasks from the plurality of tasks associated with the algorithmic routine on the determined one or more computing resources; and in response to determining that the one or more computing resources are added, removed or fail to operate, modifying the hardware architecture description and reconfigure the one or more computing resources for executing the one or more tasks.
However, Hassan (US 20230063568 A1) teaches schedule an execution of the one or more tasks from the plurality of tasks associated with the algorithmic routine on the determined one or more computing resources (Para 0043, The scheduler unit 220 is coupled to a work distribution unit 225 that is configured to dispatch tasks for execution on the GPCs 250. The work distribution unit 225 may track a number of scheduled tasks received from the scheduler unit 220. In an embodiment, the work distribution unit 225 manages a pending task pool and an active task pool for each of the GPCs 250. The pending task pool may comprise a number of slots (e.g., 32 slots) that contain tasks assigned to be processed by a particular GPC 250. The active task pool may comprise a number of slots (e.g., 4 slots) for tasks that are actively being processed by the GPCs 250).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have combined Martin and Zhu's teaching with Hassan's in order to identify and avoid unsafe memory operations during implementation of the executable by a GPU, and to deliver more accurate results more quickly over time, by determining the safety of memory access requests during execution of compiled source code using memory safety checks (Hassan [Summary]).

Martin, Zhu, and Hassan do not specifically teach: and in response to determining that the one or more computing resources are added, removed or fail to operate, modifying the hardware architecture description and reconfigure the one or more computing resources for executing the one or more tasks.

However, Khanna teaches: and in response to determining that the one or more computing resources are added, removed or fail to operate, modifying the hardware architecture description and reconfigure the one or more computing resources for executing the one or more tasks (Column 17, lines 47-67: "The various status information of FIG. 2C may be used in various manners, including by the DPE service as part of automatically determining whether to modify ongoing distributed execution of one or more programs of the DPE service. For example, with respect to the ongoing distributed execution of Program X, the usage of disk J (by Node A, at 70% of the total disk I/O) and aggregate usage of disk L (by Nodes C and F, at an aggregate 95% of the total disk I/O) may exceed an allocation or other expected usage for shared disks, and thus may create a bottleneck for any other programs that are attempting to use those disks. As such, the DPE service may determine to take various actions, such as to throttle the usage of those disks by those computing nodes (e.g., of the usage by one or both of Nodes C and F of disk L), or to take other actions to accommodate the actual or potential bottleneck (e.g., temporarily prevent any other computing nodes from using disk L, so that the aggregate 95% total disk I/O that is being used by Program X does not create any actual bottlenecks for other programs)."; Column 26, lines 34-67: "The DPESSM module 340 may also dynamically monitor or otherwise interact with one or more of the computing nodes 360 to track use of those computing nodes, such as under control of the Dynamic Monitoring Manager module 348 of DPESSM module 340, and may further dynamically modify the ongoing distributed execution of programs on the computing nodes 360, such as under control of the Dynamic Modification Manager module 346 of DPESSM module 340.")

Examiner Comments: Khanna detects addition/removal/failure of nodes (computing resources), updates the cluster description (hardware architecture description), and reconfigures by redistributing jobs/tasks for continued execution.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Martin, Zhu, and Hassan with Khanna because Khanna's dynamic cluster modification would allow the combined system to handle runtime changes in resources adaptively, ensuring fault tolerance and scalability in heterogeneous environments by dynamically modifying the distributed execution of a program, such as by adding or removing computing nodes from a cluster executing the program, while the execution is in progress (Khanna [Background/Summary]).

Regarding Claim 2, Martin, Zhu, Hassan and Khanna teach The system of claim 1. Martin did not specifically teach wherein the intermediate form of the algorithmic routine eliminates a need of a static binding code for executing the one or more tasks on the determined one or more computing resources. However, Zhu (US 20120317556 A1) teaches wherein the intermediate form of the algorithmic routine eliminates a need of a static binding code for executing the one or more tasks on the determined one or more computing resources (Para 0023, An intermediate representation of the stub code is analyzed to derive usage information about the declared properties of the kernel. The stub code is generated in accordance with the derived usage information. The derived usage information is stored in one or more runtime optimization objects alongside the stub code; Para 0024, kernel execution is optimized at runtime).

Examiner Comments: Since the optimization is done at runtime, i.e., dynamically, one of ordinary skill in the art can argue that it eliminates the need for static binding code.

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have combined Martin's teaching with Zhu's in order to optimize kernel execution at runtime in a computer system by linking the runtime optimization objects to provide the proxy code with access to derived usage information (Zhu [Summary]).

Regarding Claim 5, Martin, Zhu, Hassan and Khanna teach The system of claim 1. Martin did not specifically teach wherein the algorithmic routine is associated with one or more operations executed in a plurality of varying complexity workloads including domain specific computationally intensive software applications. However, Zhu teaches wherein the algorithmic routine is associated with one or more operations executed in a plurality of varying complexity workloads including domain specific computationally intensive software applications (Para 0004, In the domain of technical computing, it is typical that computational intensive kernels are accelerated by special hardware or networks. Typically the developer will demarcate boundaries of such a computationally intensive kernel (hereinafter referred to simply as a "kernel"). The boundaries indicate to a compiler when the code for the kernel is to be compiled in special ways such as, for example, to a different instruction set (that of the accelerator) or to set up a call-return sequence to and from the GPU). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have combined Martin's teaching with Zhu's in order to optimize kernel execution at runtime in a computer system by linking the runtime optimization objects to provide the proxy code with access to derived usage information (Zhu [Summary]).
Regarding Claim 6, Martin, Zhu, Hassan and Khanna teach The system of claim 1, further comprises: determine the one or more computing resources on a hardware platform selected from a group consisting of general-purpose processors (GPPs), field programmable gate arrays (FPGAs), graphical processing units (GPUs), single core or multicore central processing units (CPUs), and network accelerator cards, and a combination thereof (Martin [Para 0020, Client device 110 may determine, prior to execution of the program code and based on the input size and type information, a second portion of the program code to be executed by CPU 140, and may compile the first portion of the program code and the second portion of the program code. Client device 110 may provide, to GPU 130 for execution, the compiled first portion of the program code, and may provide, to CPU 140 for execution, the compiled second portion of the program code]).

Regarding Claim 11, it is a method claim corresponding to the system claim above (Claim 1) and, therefore, is rejected for the same reasons set forth in the rejection of Claim 1.

Regarding Claim 12, it is a method claim corresponding to the system claim above (Claim 2) and, therefore, is rejected for the same reasons set forth in the rejection of Claim 2.

Regarding Claim 15, it is a method claim corresponding to the system claim above (Claim 5) and, therefore, is rejected for the same reasons set forth in the rejection of Claim 5.

Regarding Claim 16, it is a method claim corresponding to the system claim above (Claim 6) and, therefore, is rejected for the same reasons set forth in the rejection of Claim 6.

Claim(s) 4 and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Martin (US 20110252411 A1) in view of Zhu (US 20120317556 A1), Hassan (US 20230063568 A1) and Khanna (US 8296419 B1), further in view of Long (US 20240036957 A1).

Regarding Claim 4, Martin, Zhu, Hassan and Khanna teach The system of claim 1.
Martin, Zhu, Hassan and Khanna did not teach further comprises: create one or more binary executable files corresponding to the one or more tasks based on the schedule, wherein the one or more binary executable files are executed on the determined one or more computing resources at a runtime. However, Long (US 20240036957 A1) teaches further comprises: create one or more binary executable files corresponding to the one or more tasks based on the schedule, wherein the one or more binary executable files are executed on the determined one or more computing resources at a runtime (Para 00481, In at least one embodiment, at least one of host executable code 6002 or device executable code 6003 specified in source code 6000 is used to perform an application programming interface to indicate two or more blocks of threads to be scheduled in parallel).

Examiner Comments: It is known to one of ordinary skill in the art that executable code consists of binary files.

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have combined Martin, Zhu, Hassan and Khanna's teaching with Long's in order to accelerate and execute multiple multimedia applications efficiently by using the full width of a data bus for performing operations on packed data, thus eliminating the requirement to transfer smaller units of data across the data bus to perform multiple operations on a data element at a time (Long [Summary]).

Regarding Claim 14, it is a method claim corresponding to the system claim above (Claim 4) and, therefore, is rejected for the same reasons set forth in the rejection of Claim 4.

Claim(s) 7 and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Martin (US 20110252411 A1) in view of Zhu (US 20120317556 A1), Hassan (US 20230063568 A1) and Khanna (US 8296419 B1), further in view of Master (US 8276135 B2).

Regarding Claim 7, Martin, Zhu, Hassan and Khanna teach The system of claim 1. Martin, Zhu, Hassan and Khanna did not teach wherein the hardware architecture description comprises a plurality of definitions including the plurality of computing resources on the hardware platform and a plurality of network resources. However, Master (US 8276135 B2) teaches wherein the hardware architecture description comprises a plurality of definitions including the plurality of computing resources on the hardware platform and a plurality of network resources (Claim 1, receiving a plurality of hardware architecture descriptions of the sets of matrices, computation units and computational elements; based on the hardware architecture descriptions and the selected algorithmic elements, selecting one or more computational elements; selecting an interconnection network for causing the selected one or more computational elements to be connected together in a first architecture configuration in real time for performing the first function, and switching, when the ACE configuration code is executing, the interconnection network for causing the selected one or more computational elements to be connected together in a second architecture configuration for performing the second function).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have combined Martin, Zhu, Hassan and Khanna's teaching with Master's in order to improve memory throughput, reduce execution time and consume less power by measuring multiple data parameters for each function executed with the input data set; the data parameter comparative results are generated from the measured data parameters corresponding to the multiple functions and the input data set (Master [Summary]).

Regarding Claim 17, it is a method claim corresponding to the system claim above (Claim 7) and, therefore, is rejected for the same reasons set forth in the rejection of Claim 7.

Claim(s) 8 and 18 is/are rejected under 35 U.S.C.
103 as being unpatentable over Martin (US 20110252411 A1) in view of Zhu (US 20120317556 A1), Hassan (US 20230063568 A1) and Khanna (US 8296419 B1), further in view of Chkodrov (US 20120323941 A1) and Bansal (US 11442712 B2).

Regarding Claim 8, Martin, Zhu, Hassan and Khanna teach The system of claim 1. Martin did not specifically teach wherein the first representation of the annotated code comprises a plurality of annotations, a plurality of special markers, a plurality of abstract primitives, and a plurality of programming language specific intrinsic functions. However, Zhu (US 20120317556 A1) teaches wherein the first representation of the annotated code comprises a plurality of annotations, a plurality of special markers (Para 0041, statements and expressions of higher-level code include annotations and/or language extensions that are used to specify a section of program source code corresponding to a kernel).

Examiner Comments: Language extensions are interpreted as the claimed special markers.

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have combined Martin's teaching with Zhu's in order to optimize kernel execution at runtime in a computer system by linking the runtime optimization objects to provide the proxy code with access to derived usage information (Zhu [Summary]).

Martin, Zhu, Hassan and Khanna did not teach a plurality of abstract primitives, and a plurality of programming language specific intrinsic functions. However, Chkodrov (US 20120323941 A1) teaches a plurality of abstract primitives (Para 0052, the generated C# classes defines abstract primitive types, such as integers and strings, to which various foreign representations map).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have combined Martin, Zhu, Hassan and Khanna's teaching with Chkodrov's in order to process a query corresponding to event data and automatically generate the configuration information that defines an event structure associated with the query, thus ensuring an improved and efficient query processing method (Chkodrov [Summary]).

Martin, Zhu, Hassan, Khanna and Chkodrov did not specifically teach: and a plurality of programming language specific intrinsic functions. However, Bansal (US 11442712 B2) teaches and a plurality of programming language specific intrinsic functions (Col 4: ln 60-67, a dummy function call (e.g., intrinsic function call in LLVM IR) is used to encode these must-not-alias relationships).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have combined Martin, Zhu, Hassan, Khanna and Chkodrov's teaching with Bansal's in order to translate the expression into optimized code that involves optimized register usage, or optimized use of the underlying vectorization hardware, if the expression involves unsequenced evaluations involving side effects and optionally references, thus allowing better optimization during expression evaluation and allowing different parts of an expression to be evaluated in parallel on a parallel computing substrate (Bansal [Summary]).

Regarding Claim 18, it is a method claim corresponding to the system claim above (Claim 8) and, therefore, is rejected for the same reasons set forth in the rejection of Claim 8.

Claim(s) 9 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Martin (US 20110252411 A1) in view of Zhu (US 20120317556 A1), Hassan (US 20230063568 A1) and Khanna (US 8296419 B1), further in view of Sandanagobalane (US 20190146766 A1).
Regarding Claim 9, Martin, Zhu, Hassan and Khanna teach The system of claim 1. Martin and Zhu did not teach further comprises: generate a plurality of directed flow graphs corresponding to the plurality of tasks associated with the algorithmic routine. However, Sandanagobalane (US 20190146766 A1) teaches further comprises: generate a plurality of directed flow graphs corresponding to the plurality of tasks associated with the algorithmic routine (Claim 14, wherein the compiling the program the first time and the compiling the program the second time each comprise generating a representation of a Control Flow Graph (CFG) for the source code).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have combined Martin, Zhu, Hassan and Khanna's teaching with Sandanagobalane's in order to obtain significant performance benefits by using compilation optimizations based on reliable profile information, as performance of device code is critical to the high performance computing and machine learning communities (Sandanagobalane [Summary]).

Regarding Claim 19, it is a method claim corresponding to the system claim above (Claim 9) and, therefore, is rejected for the same reasons set forth in the rejection of Claim 9.

Claim(s) 10 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Martin (US 20110252411 A1) in view of Zhu (US 20120317556 A1), Hassan (US 20230063568 A1) and Khanna (US 8296419 B1), further in view of Scholz (US 20160299748 A1).

Regarding Claim 10, Martin, Zhu, Hassan and Khanna teach The system of claim 1. Martin and Zhu did not teach wherein transforming the first representation of the annotated code associated with the software algorithmic routine into an intermediate form includes substituting the plurality of declarative statements with the plurality of imperative statements. However, Scholz (US 20160299748 A1) teaches wherein transforming the first representation of the annotated code associated with the software algorithmic routine into an intermediate form includes substituting the plurality of declarative statements with the plurality of imperative statements (Para 0015, the declarative program is subjected to a staged compilation process in which the declarative program is first translated into a relational algebra machine (RAM), which is then translated into imperative language code).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have combined Martin, Zhu, Hassan and Khanna's teaching with Scholz's in order to enable providing the RAM with auxiliary data structures to accelerate relational algebra statement operations, thus allowing faster access to tuples of large relations (Scholz [Summary]).

Regarding Claim 20, it is a method claim corresponding to the system claim above (Claim 10) and, therefore, is rejected for the same reasons set forth in the rejection of Claim 10.

Response to Arguments

Applicant's arguments with respect to claims 1-2, 4-12, and 14-20 have been considered but are moot because the arguments do not apply to the previously cited sections of the references used in the previous Office action. The current Office action now cites additional references to address the newly added claim limitations.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMIR SOLTANZADEH whose telephone number is (571) 272-3451. The examiner can normally be reached M-F, 9am - 5pm ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Wei Mui, can be reached at (571) 272-3708. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AMIR SOLTANZADEH/
Examiner, Art Unit 2191

/WEI Y MUI/
Supervisory Patent Examiner, Art Unit 2191
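For orientation, the claimed framework that the references are combined against — parse annotated code into tasks, consult a hardware architecture description, schedule tasks onto heterogeneous resources, and reconfigure when a resource is added, removed, or fails — can be sketched roughly as follows. All names, types, and the annotation scheme here are invented for illustration; this is not code from the application or from any cited reference.

```python
# Hypothetical sketch of the claimed flow: annotated regions become tasks,
# a scheduler maps them onto resources from a hardware description, and a
# reconfiguration step updates that description when a resource fails.
from dataclasses import dataclass, field


@dataclass
class Task:
    name: str
    kind: str  # "serial" or "parallel", taken from the code annotation


@dataclass
class HardwareDescription:
    # resource id -> resource type (the "hardware architecture description")
    resources: dict = field(default_factory=lambda: {"cpu0": "CPU", "gpu0": "GPU"})


def parse_annotated(source: list[tuple[str, str]]) -> list[Task]:
    """First representation: one Task per annotated region (name, kind)."""
    return [Task(name, kind) for name, kind in source]


def schedule(tasks: list[Task], hw: HardwareDescription) -> dict:
    """Map parallel tasks to a GPU if one is present, serial tasks to a CPU."""
    plan = {}
    for t in tasks:
        want = "GPU" if t.kind == "parallel" else "CPU"
        candidates = [r for r, typ in hw.resources.items() if typ == want]
        # Fall back to any CPU when the preferred resource type is absent
        plan[t.name] = candidates[0] if candidates else next(
            r for r, typ in hw.resources.items() if typ == "CPU")
    return plan


def reconfigure(hw: HardwareDescription, failed: str) -> HardwareDescription:
    """Modify the hardware description when a resource is removed or fails."""
    hw.resources.pop(failed, None)
    return hw


tasks = parse_annotated([("fft", "parallel"), ("io", "serial")])
hw = HardwareDescription()
print(schedule(tasks, hw))                        # {'fft': 'gpu0', 'io': 'cpu0'}
print(schedule(tasks, reconfigure(hw, "gpu0")))   # {'fft': 'cpu0', 'io': 'cpu0'}
```

The second call illustrates the limitation the examiner maps to Khanna: after the GPU is removed from the hardware description, the same tasks are rescheduled onto the remaining resource without changing the annotated source.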

Prosecution Timeline

Dec 10, 2022: Application Filed
Dec 20, 2024: Non-Final Rejection — §103
Mar 25, 2025: Response Filed
May 30, 2025: Final Rejection — §103
Sep 02, 2025: Request for Continued Examination
Sep 05, 2025: Non-Final Rejection — §103
Sep 05, 2025: Response after Non-Final Action
Dec 09, 2025: Response Filed
Dec 23, 2025: Final Rejection — §103
Mar 11, 2026: Examiner Interview Summary
Mar 11, 2026: Applicant Interview (Telephonic)
Mar 23, 2026: Request for Continued Examination
Mar 25, 2026: Response after Non-Final Action
Apr 10, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602225: IDENTIFYING THE TRANLATABILITY OF HARD-CODED STRINGS IN SOURCE CODE VIA POS TAGGING
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12591414: CENTRALIZED INTAKE AND CAPACITY ASSESSMENT PLATFORM FOR PROJECT PROCESSES, SUCH AS WITH PRODUCT DEVELOPMENT IN TELECOMMUNICATIONS
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12561134: Function Code Extraction
Granted Feb 24, 2026 (2y 5m to grant)

Patent 12561136: METHOD, APPARATUS, AND SYSTEM FOR OUTPUTTING SOFTWARE DEVELOPMENT INSIGHT COMPONENTS IN A MULTI-RESOURCE SOFTWARE DEVELOPMENT ENVIRONMENT
Granted Feb 24, 2026 (2y 5m to grant)

Patent 12561118: SYSTEM AND METHOD FOR AUTOMATED TECHNOLOGY MIGRATION
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 81%
With Interview: 98% (+16.9%)
Median Time to Grant: 2y 6m
PTA Risk: High
Based on 421 resolved cases by this examiner. Grant probability derived from career allow rate.
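As a sanity check (not the tool's actual model), the headline percentages follow from simple arithmetic on the examiner's career data:

```python
# Reproduce the dashboard's headline figures from the underlying counts.
granted, resolved = 340, 421            # examiner's career history
allow_rate = granted / resolved * 100   # career allow rate, ≈ 80.8%
with_interview = allow_rate + 16.9      # add the measured interview lift
print(round(allow_rate), round(with_interview))  # 81 98
```

This matches the displayed 81% grant probability and 98% with-interview figure, assuming the lift is applied additively to the career allow rate.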
