Prosecution Insights
Last updated: April 19, 2026
Application No. 18/176,342

Inference Performance Using Divide-and-Conquer Techniques

Non-Final OA: §101, §102, §103

Filed: Feb 28, 2023
Examiner: NAULT, VICTOR ADELARD
Art Unit: 2124
Tech Center: 2100 — Computer Architecture & Software
Assignee: Oracle International Corporation
OA Round: 1 (Non-Final)

Grant Probability: 62% (Moderate)
Expected OA Rounds: 1-2
Median Time to Grant: 3y 11m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 62% (8 granted / 13 resolved; +6.5% vs TC avg)
Interview Lift: +83.3% for resolved cases with an interview (strong)
Typical Timeline: 3y 11m average prosecution; 30 applications currently pending
Career History: 43 total applications across all art units
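The dashboard does not publish its formula for these metrics. A plausible reading, sketched below, assumes the allow rate is granted over resolved cases and the interview lift is the relative increase in allow rate for resolved cases with at least one interview versus those without. The 54%/99% split used in the example is illustrative, not from the page.

```python
# Hypothetical sketch of the examiner metrics shown above. The with/without
# interview rates are invented to show how a +83.3% lift could arise; only
# the 8 granted / 13 resolved counts come from the page.

def allow_rate(granted: int, resolved: int) -> float:
    """Fraction of resolved cases that were granted."""
    return granted / resolved if resolved else 0.0

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Relative increase in allow rate for cases with an interview."""
    return rate_with / rate_without - 1.0

print(f"{allow_rate(8, 13):.0%}")              # -> 62% (career allow rate)
print(f"{interview_lift(0.99, 0.54):+.1%}")    # -> +83.3% (illustrative split)
```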

Statute-Specific Performance

§101: 29.1% (-10.9% vs TC avg)
§103: 40.4% (+0.4% vs TC avg)
§102: 7.5% (-32.5% vs TC avg)
§112: 21.4% (-18.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 13 resolved cases.
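Assuming each "vs TC avg" delta is simply the examiner's rate minus the Tech Center average (the page does not define it), the implied TC average works out to 40.0% for every statute above. A quick check:

```python
# Back-of-envelope check under the assumption that
# delta = examiner rate - Tech Center average (percentage points).
rates  = {"§101": 29.1, "§103": 40.4, "§102": 7.5, "§112": 21.4}
deltas = {"§101": -10.9, "§103": +0.4, "§102": -32.5, "§112": -18.6}
implied_tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(implied_tc_avg)
# -> {'§101': 40.0, '§103': 40.0, '§102': 40.0, '§112': 40.0}
```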

Office Action

Rejections: §101, §102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-5, 7-12, and 14-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to abstract ideas without significantly more.

Regarding claim 1,

Step 1 - “Is the claim to a process, machine, manufacture or composition of matter?” Yes, the claim is directed towards a process.

Step 2A, Prong 1 - “Is the claim directed to a law of nature, a natural phenomenon (product of nature) or an abstract idea?”: The limitation of evaluating an application to identify a plurality of opportunities to execute the application in parallel recites an evaluation of an application, which is a mental process, which is an abstract idea, regardless of whether it is performed on a generic computer. The limitation of dividing the application according to the identified plurality of opportunities into a plurality of independently executable tasks recites an evaluation of an application, which is a mental process, which is an abstract idea, regardless of whether it is performed on a generic computer. The limitation of determining respective weighting values for individual ones of the plurality of independently executable tasks according to respective expected computational intensity values of the individual ones of the plurality of independently executable tasks recites a judgement of tasks, weighting values, and computational intensity, which is a mental process, which is an abstract idea, regardless of whether it is performed on a generic computer.

Step 2A, Prong 2 - “Does the claim recite additional elements that integrate the judicial exception into a practical application?”: The limitation of distributing a plurality of computational resources among the individual ones of the plurality of independently executable tasks according to the respective weighting values recites mere instructions to apply weighting values to distribute resources, which does not integrate the recited abstract ideas into a practical application, MPEP 2106.05(d) and 2106.05(f). The limitation of executing the divided application using the distributed plurality of computational resources recites mere instructions to apply resources to execute an application, which does not integrate the recited abstract ideas into a practical application, MPEP 2106.05(d) and 2106.05(f).

Step 2B - “Does the claim recite additional elements that amount to significantly more than the judicial exception?”: The limitation of distributing a plurality of computational resources among the individual ones of the plurality of independently executable tasks according to the respective weighting values recites mere instructions to apply weighting values to distribute resources, which is not significantly more than any recited abstract ideas, MPEP 2106.05(f). The limitation of executing the divided application using the distributed plurality of computational resources recites mere instructions to apply resources to execute an application, which is not significantly more than any recited abstract ideas, MPEP 2106.05(f).

Therefore, claim 1 is found to be ineligible subject matter under 35 U.S.C. 101.
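For orientation, the method recited in claim 1, together with the core-assignment product recited later in claim 5, can be sketched in a few lines. This is a hypothetical illustration with invented names, phases, and numbers, not the application's actual implementation:

```python
# Minimal, hypothetical sketch of the claim 1 method. All names and
# intensity values are invented for illustration.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    intensity: float  # expected computational intensity (invented units)

def divide(phases: list[tuple[str, float]]) -> list[Task]:
    # "dividing the application ... into a plurality of independently
    # executable tasks"
    return [Task(name, intensity) for name, intensity in phases]

def weighting_values(tasks: list[Task]) -> dict[str, float]:
    # "determining respective weighting values ... according to respective
    # expected computational intensity values"
    total = sum(t.intensity for t in tasks)
    return {t.name: t.intensity / total for t in tasks}

def distribute(tasks: list[Task], total_cores: int) -> dict[str, int]:
    # claim 5: cores assigned "according to a product of respective
    # weighting values ... and a total number of computational cores"
    w = weighting_values(tasks)
    return {t.name: round(w[t.name] * total_cores) for t in tasks}

tasks = divide([("preprocess", 4.0), ("infer", 10.0), ("postprocess", 2.0)])
print(distribute(tasks, total_cores=16))
# -> {'preprocess': 4, 'infer': 10, 'postprocess': 2}
```

Note that rounding the products can over- or under-subscribe the core budget in general; claim 6, discussed below, addresses the oversubscribed case by deferring tasks.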
Regarding claim 2, Claim 2 adds the following limitations to claim 1: wherein the application comprises a multi-phase pipeline recites further detail on the application that is evaluated and divided, without changing that the evaluation and division amount to the abstract ideas of evaluation and judgement, regardless of whether they are performed on a generic computer. The limitation wherein individual ones of the plurality of independently executable tasks correspond to respective phases of the multi-phase pipeline recites further detail on the tasks that are produced via division of an evaluated application, without changing that the evaluation and division amount to the abstract ideas of evaluation and judgement, regardless of whether they are performed on a generic computer. Therefore, claim 2 is found to be ineligible subject matter under 35 U.S.C. 101.

Regarding claim 3, Claim 3 adds the following limitations to claim 2: wherein a particular phase of the multi-phase pipeline is a batch processing phase recites further detail on the application that is evaluated and divided, without changing that the evaluation and division amount to the abstract ideas of evaluation and judgement, regardless of whether they are performed on a generic computer. The limitation wherein the batch processing phase processes data in batches of a first size recites further detail on the application that is evaluated and divided, without changing that the evaluation and division amount to the abstract ideas of evaluation and judgement, regardless of whether they are performed on a generic computer. The limitation wherein an independently executable task corresponding to the batch processing phase processes data in batches of a second size less than the first size recites further detail on the tasks that are produced via division of an evaluated application, without changing that the evaluation and division amount to the abstract ideas of evaluation and judgement, regardless of whether they are performed on a generic computer. Therefore, claim 3 is found to be ineligible subject matter under 35 U.S.C. 101.

Regarding claim 4, Claim 4 adds the following limitations to claim 2: wherein a particular phase of the multi-phase pipeline is a batch processing phase processing individual elements of the data padded to a first element size recites further detail on the application that is evaluated and divided, without changing that the evaluation and division amount to the abstract ideas of evaluation and judgement, regardless of whether they are performed on a generic computer. The limitation wherein an independently executable task corresponding to the batch processing phase processes the individual elements of data padded to at least a second element size different from the first element size recites further detail on the tasks that are produced via division of an evaluated application, without changing that the evaluation and division amount to the abstract ideas of evaluation and judgement, regardless of whether they are performed on a generic computer. Therefore, claim 4 is found to be ineligible subject matter under 35 U.S.C. 101.
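The re-batching and padding details recited in claims 3 and 4 reduce to simple list operations. A hypothetical sketch (all names and sizes invented; not from the application or the references):

```python
# Hypothetical illustration of claims 3 and 4: a batch phase sized for
# batches of a first size is divided into tasks using a smaller batch
# size (claim 3), and elements are padded to a different target element
# size than the phase's nominal one (claim 4).

def rebatch(items, second_size):
    # claim 3: tasks process data "in batches of a second size less than
    # the first size"
    return [items[i:i + second_size] for i in range(0, len(items), second_size)]

def pad(element, target_size, fill=0):
    # claim 4: elements "padded to at least a second element size
    # different from the first element size"
    return element + [fill] * (target_size - len(element))

batch = list(range(8))       # one batch of the first size (8)
print(rebatch(batch, 2))     # -> four tasks with batches of size 2
print(pad([1, 2, 3], 6))     # -> [1, 2, 3, 0, 0, 0]
```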
Regarding claim 5, Claim 5 adds the following limitation to claim 1: assigning respective numbers of computational cores to the individual ones of the plurality of independently executable tasks according to a product of respective weighting values of the individual ones of the plurality of independently executable tasks and a total number of computational cores recites a mathematical formula of taking the product of weighting values and a number of computational cores, which is a mathematical concept, which is an abstract idea. Therefore, claim 5 is found to be ineligible subject matter under 35 U.S.C. 101.

Regarding claim 7, Claim 7 adds the following limitation to claim 1: wherein the application is a machine learning application recites mere instructions to apply abstract ideas to machine learning, which does not integrate the recited abstract ideas into a practical application nor amount to significantly more than any recited abstract ideas, MPEP 2106.05(d) and 2106.05(f). Therefore, claim 7 is found to be ineligible subject matter under 35 U.S.C. 101.

Regarding claims 8-12, Claims 8-12 recite non-transitory computer-accessible storage media storing instructions for performing the functions of the methods of claims 1-5, respectively, with substantially the same limitations. Therefore the same analysis and rejection applied to claims 1-5 applies to claims 8-12. Therefore, claims 8-12 are found to be ineligible subject matter under 35 U.S.C. 101.

Regarding claim 14, Claim 14 adds the following limitation to claim 8: wherein the application is a natural language processing application recites mere instructions to apply abstract ideas to natural language processing, which does not integrate the recited abstract ideas into a practical application nor amount to significantly more than any recited abstract ideas, MPEP 2106.05(d) and 2106.05(f). Therefore, claim 14 is found to be ineligible subject matter under 35 U.S.C. 101.

Regarding claims 15-19, Claims 15-19 recite a system comprising at least one processor and a memory for performing the functions of the methods of claims 1-5, respectively, with substantially the same limitations. Therefore the same analysis and rejection applied to claims 1-5 applies to claims 15-19. Therefore, claims 15-19 are found to be ineligible subject matter under 35 U.S.C. 101.

Prior Art

The following references are used for prior art claim rejections:

Zhou et al., “S3DNN: Supervised Streaming and Scheduling for GPU-Accelerated Real-Time DNN Workloads”
Yuan et al. (U.S. Patent Application Publication No. 2021/0334629)
Liu et al., “On Removing Algorithmic Priority Inversion from Mission-critical Machine Inference Pipelines”
Qian et al. (U.S. Patent Application Publication No. 2024/0256838)
Wang et al., “Energy-efficient Inference Service of Transformer-based Deep Learning Models on GPUs”

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 2, 5-9, 12, 13, 15, 16, 19, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Zhou et al., “S3DNN: Supervised Streaming and Scheduling for GPU-Accelerated Real-Time DNN Workloads”, hereinafter Zhou.
Regarding claim 1, Zhou teaches A method, comprising: evaluating an application to identify a plurality of opportunities to execute the application in parallel; ((Zhou Pg. 8) “Since DNN tasks fundamentally exhibit a staged computation pattern where later layers often require fewer resources (i.e., Insight 1), it is more likely that later kernels belonging to different DNN tasks can be executed concurrently”) dividing the application according to the identified plurality of opportunities into a plurality of independently executable tasks; ((Zhou Pg. 5) “our system is constructed from a set of frame processing requests, τ =τ1, ..., τn, in which τi is eventually assigned to a DNN instance to be processed (this DNN instance can be shared by multiple τi). For the sake of simplicity, we assume each layer contains only a single GPU kernel”, (Zhou Pg. 2) “In the CUDA programming model, a kernel is a piece of code executed on GPU hardware consisted of multiple CUDA threads executed in parallel”) determining respective weighting values for individual ones of the plurality of independently executable tasks ((Zhou Pg. 8) “We use a metric, tbRatio, to measure the proportion of the demanded thread blocks by a kernel to the total number of thread blocks provided by the GPU hardware”) according to respective expected computational intensity values of the individual ones of the plurality of independently executable tasks; ((Zhou Pg. 3) “Insight 1: The GPU usage pattern in DNN shows a staged GPU resource utilization pattern, where the earlier layers involve more intensive computations and larger input sizes, and the later layers incur lighter computations and smaller input sizes, which may under-utilize GPU hardware”) distributing a plurality of computational resources among the individual ones of the plurality of independently executable tasks according to the respective weighting values; ((Zhou Pg. 8) “the kernel h at the head of the scheduling queue is pushed into G (Lines 10-11). Then the scheduler checks whether h can fully occupy the GPU by calculating its tbRatio. If tbRatio of h is smaller than 1 (Line 12), indicating it may be concurrently executed with other kernels, then the scheduler will seek to put more kernels in Q whose tbRatio is also less than 1 (Lines 13-15); else h is directly submitted to GPU device for execution (Line 16). After all potential small kernels in Q are merged into G, the scheduler checks whether the tbRatio of G is still less than 1 (Line 17). If so, the scheduler looks ahead the successor kernels of the ones residing in Q in the order of priorities, in order to identify any such kernels that have not been released but with tbRatio < 1 (Lines 18-22)”, tbRatio is the ratio of the computational weight of a task to the total amount of computational resources) and executing the divided application using the distributed plurality of computational resources ((Zhou Pg. 10) “In this paper, we present S3DNN–a systemic solution that optimizes the execution of DNN workloads on GPU in a real-time multi-tasking environment. Experimental results show that S3DNN significantly outperforms state-of-the-art GPU-accelerated DNN processing frameworks in a real-time multi-tasking environment”)

Regarding claim 2, Zhou teaches The method of claim 1. Zhou further teaches: wherein the application comprises a multi-phase pipeline, (Zhou Pg. 7, Fig. 6 shows that the applications are pipelines of kernels) and wherein individual ones of the plurality of independently executable tasks correspond to respective phases of the multi-phase pipeline ((Zhou Pg. 2) “In the CUDA programming model, a kernel is a piece of code executed on GPU hardware consisted of multiple CUDA threads executed in parallel”, Zhou Pg. 7, Fig. 6 shows that the phases in the pipelines are kernels)

Regarding claim 5, Zhou teaches The method of claim 1, wherein distributing the plurality of computational resources among the individual ones of the plurality of independently executable tasks according to the respective weighting values comprises: Zhou further teaches: assigning respective numbers of computational cores to the individual ones of the plurality of independently executable tasks according to a product of respective weighting values of the individual ones of the plurality of independently executable tasks and a total number of computational cores ((Zhou Pg. 8) “We use a metric, tbRatio, to measure the proportion of the demanded thread blocks by a kernel to the total number of thread blocks provided by the GPU hardware (often constrained by either the hardware architecture or register/shared memory size)”, thread blocks correspond to computational cores, a proportion is a product, demanded thread blocks correspond to the weight of a task based on its computational intensity)

Regarding claim 6, Zhou teaches The method of claim 5, wherein executing the divided application using the distributed plurality of computational resources comprises: summing the respective numbers of computational cores to calculate a total number of assigned computational cores; ((Zhou Pg. 8) “the kernel h at the head of the scheduling queue is pushed into G (Lines 10-11). Then the scheduler checks whether h can fully occupy the GPU by calculating its tbRatio. If tbRatio of h is smaller than 1 (Line 12), indicating it may be concurrently executed with other kernels, then the scheduler will seek to put more kernels in Q whose tbRatio is also less than 1 (Lines 13-15); else h is directly submitted to GPU device for execution (Line 16). After all potential small kernels in Q are merged into G, the scheduler checks whether the tbRatio of G is still less than 1 (Line 17). If so, the scheduler looks ahead the successor kernels of the ones residing in Q in the order of priorities, in order to identify any such kernels that have not been released but with tbRatio < 1 (Lines 18-22)”, updating a ratio of demanded thread blocks to total thread blocks when more thread blocks must be assigned by additional kernels corresponds to summing computational cores to calculate a total number of assigned computational cores) scheduling the individual ones of the plurality of independently executable tasks to execute in parallel responsive to determining that the total number of assigned computational cores is not greater than the total number of computational cores; ((Zhou Pg. 8) “the kernel h at the head of the scheduling queue is pushed into G (Lines 10-11). Then the scheduler checks whether h can fully occupy the GPU by calculating its tbRatio. If tbRatio of h is smaller than 1 (Line 12), indicating it may be concurrently executed with other kernels, then the scheduler will seek to put more kernels in Q whose tbRatio is also less than 1 (Lines 13-15); else h is directly submitted to GPU device for execution (Line 16)”, adding more kernels for concurrent execution when the ratio of demanded thread blocks to total thread blocks does not exceed 1 corresponds to scheduling additional tasks for parallel execution when assigned cores are not greater than total cores) and scheduling a portion of the plurality of independently executable tasks to execute after completion of at least one of the plurality of independently executable tasks responsive to determining that the total number of assigned computational cores is greater than the total number of computational cores (Zhou Pg. 7, Fig. 6, specifically scheduling policy (d), shows that when a kernel or group of concurrently executing kernels has a thread block usage approaching or meeting 100%, later kernels are executed after completion of kernels scheduled earlier)

Regarding claim 7, Zhou teaches The method of claim 1, wherein the application is a machine learning application ((Zhou Abstract) “In this paper, we propose S3DNN, a system solution that optimizes the execution of DNN workloads on GPU in a real-time multi-tasking environment”, DNN workloads, or deep neural network workloads, are machine learning applications)

Regarding claims 8, 9, 12, and 13, Claims 8, 9, 12, and 13 recite non-transitory computer-accessible storage media storing instructions for performing the functions of the methods of claims 1, 2, 5, and 6, respectively. Specifically, claim 8 recites One or more non-transitory computer-accessible storage media storing program instructions that when executed on or across one or more computing devices cause the one or more computing devices to implement: [the method of claim 1]. Zhou recites: (Zhou Pg. 2) “We have fully implemented S3DNN on top of a system with a multi-core CPU and multiple GPUs. S3DNN is designed as a middleware between input videos and GPU hardware to optimize the execution of DNN-based object detection workloads on GPU. S3DNN is implemented as a frontend-backend framework”, with middleware inherently comprising computer media for its storage and execution. All other limitations in claims 8, 9, 12, and 13 are substantially the same as those in claims 1, 2, 5, and 6, respectively; therefore the same rationale for rejection applies.

Regarding claims 15, 16, 19, and 20, Claims 15, 16, 19, and 20 recite a system comprising at least one processor and a memory for performing the functions of the methods of claims 1, 2, 5, and 6, respectively. Specifically, claim 15 recites A system, comprising: one or more processors; and a memory storing program instructions that when executed by the one or more processors cause the one or more processors to implement an application deployment platform, configured to: [perform the method of claim 1]. Zhou recites: (Zhou Pg. 9-10) “We conduct our experiments in a system consisted of an NVIDIA Quadro 6000 GPU, which is based on the Fermi micro-architecture and features 480 cores and 6 GB of GDDR5 memory, and an Intel Core i7-4790k CPU and 16 GB of RAM”. All other limitations in claims 15, 16, 19, and 20 are substantially the same as those in claims 1, 2, 5, and 6, respectively; therefore the same rationale for rejection applies.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 3, 10, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou, in view of Yuan et al. (U.S. Patent Application Publication No. 2021/0334629), hereinafter Yuan.

Regarding claim 3, Zhou teaches The method of claim 2. Yuan teaches the following further limitations more explicitly than Zhou, or that Zhou does not teach: wherein a particular phase of the multi-phase pipeline ((Yuan [0036]) “The inferencing pipeline 130 may include…an intermediate module(s) 108”) is a batch processing phase, ((Yuan [0065]) “The intermediate module(s) 108 may also be configured to perform batching of the decoded streams 342, for example, by forming batches of one or more frames from each stream to generate batched multimedia data 344”) wherein the batch processing phase processes data in batches of a first size, ((Yuan [0065]) “The intermediate module(s) 108 may also be configured to perform batching…The batches may have a maximum batch size”) and wherein an independently executable task corresponding to the batch processing phase processes data in batches of a second size less than the first size ((Yuan [0065]) “The batches may have a maximum batch size, but a batch may be formed prior to reaching that size, for example, after a time threshold is exceeded depending on the timing of frames being received from the streams”) At the time of filing, one of ordinary skill in the art would have had motivation to combine Zhou and Yuan by taking the method of claim 2 of executing a pipeline application in parallel, taught by Zhou, and including in the pipeline a batch processing phase that processes batches of one size, and a task that processes batches of a smaller size, taught by Yuan, as doing so increases the flexibility of the batch processing phase with respect to the timing of other pipeline components executed in parallel. Such a combination would be obvious.

Regarding claim 10, Claim 10 recites non-transitory computer-accessible storage media storing instructions for performing the function of the method of claim 3. All other limitations in claim 10 are substantially the same as those in claim 3; therefore the same rationale for rejection applies.

Regarding claim 17, Claim 17 recites a system for performing the function of the method of claim 3. All other limitations in claim 17 are substantially the same as those in claim 3; therefore the same rationale for rejection applies.

Claims 4, 11, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou, in view of Liu et al. “On Removing Algorithmic Priority Inversion from Mission-critical Machine Inference Pipelines”, hereinafter Liu, further in view of Qian et al. (U.S. Patent Application Publication No. 2024/0256838), hereinafter Qian.
Regarding claim 4, Zhou teaches The method of claim 2. Liu teaches the following further limitation that Zhou does not teach: wherein a particular phase of the multi-phase pipeline is a batch processing phase ((Liu Pg. 5) “Stages of the neural networks are executed on a GPU. We are particularly interested in lower-priced GPUs. While such GPUs feature parallel execution, one way of exploiting their computation capabilities is to execute the same kernel on all GPU cores. This means that we can run different tasks concurrently on the GPU as long as we run the same kernel on all GPU cores. We call the assembly of such concurrently executable task sets, batching”) processing individual elements of the data padded to a first element size, ((Liu Pg. 9) “To store the arrived but not finished tasks, we define a feature buffer for each (image size, network stage) pair. For a given image size, the buffer for each stage is intrinsically a priority queue to store the tasks waiting to execute this stage…When a new frame arrives, it first goes through a data slicing step, assisted by the LIDAR input, to extract the partial frames…partial frames are padded to their closest target sizes with black borders”) At the time of filing, one of ordinary skill in the art would have had motivation to combine Zhou and Liu by taking the method of claim 2 of executing a pipeline application in parallel, taught by Zhou, and including in the pipeline a batch processing phase that processes batches padded to a first element size, taught by Liu, as both batch processing and padding of inputs are well known in the art, providing the predictable benefits of increased throughput for batch processing and, for padding, flexibility with input sizes without distortion that would increase classification difficulty. Such a combination would be obvious.

Qian teaches the following further limitation that Zhou does not teach and that Liu does not teach explicitly: and wherein an independently executable task [corresponding to the batch processing phase] processes the individual elements of data padded to at least a second element size different from the first element size ((Qian [0029]) “For example, the process engine having the process capacity of 16 may be used to calculate an inner product of the activation tensor of FIG. 1 and the weight tensor of FIG. 2. Traditionally, the input channel size need to be padded to 16, for example, with zeroes, so as to be an integer multiple of the process capacity of the process engine. As another example, when an input channel size is 24, the input channel size need to be padded to 32, to be 2 multiples of the process capacity of the process engine”, Liu teaches a batch processing phase) At the time of filing, one of ordinary skill in the art would have had motivation to combine Zhou, Liu, and Qian by taking the method of claim 2 of executing a pipeline application in parallel, including a batch processing phase with padding of inputs in the pipeline, jointly taught by Zhou and Liu, and including execution of a task using the batch processing phase with data padded to another size, taught by Qian, as doing so increases the flexibility of the batch processing phase to inputs of different sizes that could require separate amounts of padding. Such a combination would be obvious.

Regarding claim 11, Claim 11 recites non-transitory computer-accessible storage media storing instructions for performing the function of the method of claim 4.
All other limitations in claim 11 are substantially the same as those in claim 4; therefore the same rationale for rejection applies.

Regarding claim 18, Claim 18 recites a system for performing the function of the method of claim 4. All other limitations in claim 18 are substantially the same as those in claim 4; therefore the same rationale for rejection applies.

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou, in view of Wang et al. “Energy-efficient Inference Service of Transformer-based Deep Learning Models on GPUs”, hereinafter Wang.

Regarding claim 14, Zhou teaches The one or more non-transitory computer-accessible storage media of claim 8. Wang teaches the following further limitation that Zhou does not teach: wherein the application is a natural language processing application ((Wang Abstract) “Many natural language processing (NLP) services are based on the Transformer Sequence Transduction model. However, the inference process of the Transformer model consumes a significant amount of energy due to the large model size (e.g., billions of parameters) and tremendous computations…In this work, we conduct a comprehensive study on the inference performance and energy efficiency of a Transformer model trained for the language translation service”) At the time of filing, one of ordinary skill in the art would have had motivation to combine Zhou and Wang by taking the medium of claim 8 of executing an application in parallel, taught by Zhou, and including the application being a natural language processing application, taught by Wang, as natural language processing is a well-known area of computer applications that would benefit from the increased efficiency of parallelization recited in claim 8. Such a combination would be obvious.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. Liu et al. “Real-Time Task Scheduling for Machine Perception in Intelligent Cyber-Physical Systems” teaches use of a criticality measure for dynamically allocating machine learning inference resources to process the most relevant portions of an input first. Xiang and Kim “Pipelined Data-Parallel CPU/GPU Scheduling for Multi-DNN Real-Time Inference” teaches a pipeline-based scheduling framework for deep neural network inference on heterogeneous hardware.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to VICTOR A NAULT whose telephone number is (703) 756-5745. The examiner can normally be reached M - F, 12 - 8. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Miranda Huang, can be reached at (571) 270-7092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/V.A.N./
Examiner, Art Unit 2124

/Kevin W Figueroa/
Primary Examiner, Art Unit 2124
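To make the claim 6 logic mapped in the Office Action above concrete: the OA characterizes it as summing assigned cores and running tasks concurrently only while the sum stays within the total, deferring the overflow. A minimal sketch of that logic follows, with invented task names and numbers; it is loosely analogous to Zhou's tbRatio < 1 merging but is not an implementation of S3DNN or of the application.

```python
# Hypothetical sketch of the claim 6 scheduling decision as described in
# the OA: greedily admit tasks for parallel execution while the running
# total of assigned cores fits within the total; defer any overflow task
# until earlier tasks complete.

def schedule(assigned: dict[str, int], total_cores: int):
    parallel, deferred, used = [], [], 0
    for task, cores in assigned.items():
        if used + cores <= total_cores:  # total assigned not greater than total available
            parallel.append(task)
            used += cores
        else:                            # overflow runs after an earlier task completes
            deferred.append(task)
    return parallel, deferred

now, later = schedule({"decode": 6, "infer": 6, "render": 8}, total_cores=16)
print(now, later)  # -> ['decode', 'infer'] ['render']
```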

Prosecution Timeline

Feb 28, 2023: Application Filed
Feb 09, 2026: Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579429: DEEP LEARNING BASED EMAIL CLASSIFICATION (granted Mar 17, 2026; 2y 5m to grant)
Patent 12566953: AUTOMATED PROCESSING OF FEEDBACK DATA TO IDENTIFY REAL-TIME CHANGES (granted Mar 03, 2026; 2y 5m to grant)
Patent 12561563: AUTOMATED PROCESSING OF FEEDBACK DATA TO IDENTIFY REAL-TIME CHANGES (granted Feb 24, 2026; 2y 5m to grant)
Patent 12468939: OBJECT DISCOVERY USING AN AUTOENCODER (granted Nov 11, 2025; 2y 5m to grant)
Patent 12446600: TWO-STAGE SAMPLING FOR ACCELERATED DEFORMULATION GENERATION (granted Oct 21, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 62%
With Interview: 99% (+83.3%)
Median Time to Grant: 3y 11m
PTA Risk: Low

Based on 13 resolved cases by this examiner. Grant probability derived from career allow rate.

Free tier: 3 strategy analyses per month