Prosecution Insights
Last updated: April 19, 2026
Application No. 17/648,385

MACHINE LEARNING TECHNIQUES USING SEGMENT-WISE REPRESENTATIONS OF INPUT FEATURE REPRESENTATION SEGMENTS

Non-Final OA §101
Filed: Jan 19, 2022
Examiner: HAEFNER, KAITLYN RENEE
Art Unit: 2148
Tech Center: 2100 — Computer Architecture & Software
Assignee: Optum Services (Ireland) Limited
OA Round: 3 (Non-Final)

Grant Probability: 50% (Moderate)
OA Rounds: 3-4
To Grant: 4y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 50% (2 granted / 4 resolved; -5.0% vs TC avg)
Interview Lift: +66.7% (strong; based on resolved cases with interview)
Avg Prosecution: 4y 2m (typical timeline); 32 applications currently pending
Total Applications: 36 (career history, across all art units)

Statute-Specific Performance

§101: 32.6% (-7.4% vs TC avg)
§103: 31.1% (-8.9% vs TC avg)
§102: 13.8% (-26.2% vs TC avg)
§112: 22.2% (-17.8% vs TC avg)

Tech Center averages are estimates • Based on career data from 4 resolved cases

Office Action

§101
DETAILED ACTION

This action is in response to the amendment filed 01/21/2025. Claims 1-20 are pending and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. Applicant has not complied with one or more conditions for receiving the benefit of an earlier filing date under 35 U.S.C. 119(e) as follows: The later-filed application must be an application for a patent for an invention which is also disclosed in the prior application (the parent or original nonprovisional application or provisional application). The disclosure of the invention in the parent application and in the later-filed application must be sufficient to comply with the requirements of 35 U.S.C. 112(a) or the first paragraph of pre-AIA 35 U.S.C. 112, except for the best mode requirement. See Transco Products, Inc. v. Performance Contracting, Inc., 38 F.3d 551, 32 USPQ2d 1077 (Fed. Cir. 1994).

The disclosure of the prior-filed application, Application No. 63/246,092, fails to provide adequate support in the manner provided by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph for one or more claims of this application. The disclosure of Application No.
63/246,092 does not disclose the following limitations of claims 1, 15, and 20:

- “the initial input feature representation is a fixed-size representation of an input feature”
- “the input feature comprises g feature values”
- “each feature value corresponds to genetic variant identifier of g genetic variants”
- “the initial input feature representation comprises an ordered sequence of n input feature representation values”
- “each input feature representation segment comprises a defined subset of the n input feature representation values that begins with an initial input feature representation value having an initial value in-sequence position indicator and ends with a terminal input feature representation value having a terminal value in-sequence position indicator”
- “each input feature representation segment is associated with a segment length indicator that is determined based at least in part on the initial value in-sequence position indicator for the input feature representation segment and the terminal value in-sequence position indicator for the input feature representation segment”
- “each particular input feature representation segment is associated with a segment-wise feature processing machine learning model of m segment-wise feature processing machine learning models that is associated with an input dimensionality value that corresponds to the segment length indicator for the particular input feature representation segment”
- “for each input feature representation segment, generating, using the one or more processors and the segment-wise feature processing machine learning model for the input feature representation segment, and based at least in part on the input feature representation segment, a segment-wise representation of the input feature representation segment”
- “and performing, using the one or more processors, one or more prediction-based actions based at least in part on the multi-segment prediction”
Additionally, the examiner respectfully notes that the disclosure of the prior-filed application, Application No. 63/246,092, fails to provide adequate support for claims 2-14 and 16-19, as they depend on claims 1, 15, and 20, which are not supported by the prior-filed application.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding Claim 1:

Subject Matter Eligibility Analysis Step 1: Claim 1 recites a method and is thus a process, one of the four statutory categories of patentable subject matter.

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 1 recites generating… based at least in part on the ordered sequence of input feature representation values, a set of input feature representation segments (This limitation is a mental process as it encompasses a human mentally generating input feature representation segments) wherein an input feature representation segment of the set of input feature representation segments: (i) comprises a defined subset of the ordered sequence of input feature representation values that (a) begins with an initial input feature representation value having an initial value in-sequence position indicator and (b) ends with a terminal input feature representation value having a terminal value in-sequence position indicator, (ii) is associated with a segment length indicator that is determined based at least in part on the initial value in-sequence position indicator and the terminal value in-sequence position indicator, and (iii) is associated with a one-dimensional convolutional neural network, of a set of one-dimensional convolutional neural networks respectively associated with the set of input feature representation segments, that is associated with an input dimensionality value that corresponds to the segment length indicator for the particular input feature representation segment (This limitation is a mental process as it further modifies the mental process of “generating… m input feature representation segments” by defining “each input feature representation segment,” which a human can use to execute the mental process.); generating… based at least in part on the input feature representation segment, a segment-wise representation of the input feature representation segment (This limitation is a mental process as it encompasses a human mentally generating a segment-wise representation of the input feature representation segment.); generating… a multi-segment input feature representation of the input feature based at least in part on each segment-wise representation of the input feature representation segment (This limitation is a mental process as it encompasses a human mentally generating a feature representation of the input feature.); and generate a prediction for the input feature (This limitation is a mental process as it encompasses a human mentally generating a prediction.). Therefore, claim 1 recites an abstract idea.
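The segmentation-and-per-segment-processing flow recited in claim 1 can be sketched in code. The following is a minimal illustration, assuming NumPy; the fixed hand-picked kernels stand in for the claimed trained one-dimensional convolutional neural networks, and the segment positions and values are hypothetical:

```python
import numpy as np

def valid_conv1d(x, kernel):
    """'Valid' 1-D convolution (cross-correlation): output length len(x) - len(kernel) + 1."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) for i in range(len(x) - k + 1)])

def segment_wise_representations(values, segments, kernels):
    """For each (initial, terminal) segment, apply the kernel associated with
    that segment, yielding one segment-wise representation per segment."""
    reps = []
    for (start, end), kernel in zip(segments, kernels):
        segment = values[start:end + 1]      # defined subset of the ordered sequence
        length_indicator = end - start + 1   # segment length from the position indicators
        assert len(segment) == length_indicator
        reps.append(valid_conv1d(segment, kernel))
    return reps

# Fixed-size initial input feature representation (n = 8 ordered values).
values = np.arange(8.0)
# Two segments given by (initial, terminal) in-sequence position indicators.
segments = [(0, 3), (4, 7)]                  # segment length indicator 4 for each
kernels = [np.ones(2), np.ones(2)]           # one 1-D filter per segment (stand-in CNNs)
reps = segment_wise_representations(values, segments, kernels)
# Multi-segment input feature representation: stack the segment-wise outputs.
multi_segment = np.stack(reps)
```

A real implementation would learn one 1-D CNN per segment length and feed `multi_segment` to a multi-dimensional CNN for the final prediction; the stacking here merely shows how the segment-wise representations combine into a multi-segment representation.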
Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 1 further recites additional elements of:

receiving, by one or more processors, an initial input feature representation corresponding to an input feature of a plurality of input features defined for a neural network wherein: (i) the initial input feature representation is a fixed-size representation of the input feature, (ii) the input feature comprises a plurality of feature values, (iii) a feature value of the plurality of feature values corresponds to a genetic variant identifier of a plurality of genetic variants, and (iv) the initial input feature representation comprises an ordered sequence of input feature representation values (This element does not integrate the abstract idea into a practical application because it recites insignificant extra-solution activity of data gathering (see MPEP 2106.05(g)).);

by the one or more processors (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).);

by the one or more processors, using the one-dimensional convolutional neural network (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).);

by the one or more processors, and using a multi-dimensional convolutional neural network (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).);

training, by the one or more processors, the set of one-dimensional convolutional neural networks, the multi-dimensional convolutional neural network, and the neural network based on the multi-segment input feature representation (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).).

Therefore, claim 1 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: The additional elements of claim 1 do not provide significantly more than the abstract idea itself, taken alone and in combination, because:

receiving, by one or more processors, an initial input feature representation corresponding to an input feature of a plurality of input features defined for a neural network wherein: (i) the initial input feature representation is a fixed-size representation of the input feature, (ii) the input feature comprises a plurality of feature values, (iii) a feature value of the plurality of feature values corresponds to a genetic variant identifier of a plurality of genetic variants, and (iv) the initial input feature representation comprises an ordered sequence of input feature representation values is the well-understood, routine, and conventional activity of “transmitting or receiving data over a network” (see MPEP 2106.05(d)(II); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network));

by the one or more processors uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f));

by the one or more processors, using the one-dimensional convolutional neural network, uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f));

by the one or more processors, and using a multi-dimensional convolutional neural network, uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f));

training, by the one or more processors, the set of one-dimensional convolutional neural networks, the multi-dimensional convolutional neural network, and the neural network based on the multi-segment input feature representation uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).
Therefore, claim 1 is subject-matter ineligible.

Regarding Claim 2:

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 2 recites wherein the segment-wise representation comprises a unified segment-wise representation length that is common across a set of segment-wise representations (This limitation is a mental process as it further modifies the mental process of claim 1 by defining “each segment-wise representation,” which a human can use to execute the mental process.). Therefore, claim 2 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 2 does not recite any additional elements and therefore is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: Claim 2 does not recite any additional elements and therefore cannot provide significantly more than the abstract idea itself, taken alone and in combination. Therefore, claim 2 is subject-matter ineligible.

Regarding Claim 3:

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 3 recites wherein the segment-wise representation comprises a two-dimensional representation of the input feature representation segment (This limitation is a mental process as it further modifies the mental process of claim 1 by defining “each segment-wise representation,” which a human can use to execute the mental process.). Therefore, claim 3 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 3 does not recite any additional elements and therefore is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: Claim 3 does not recite any additional elements and therefore cannot provide significantly more than the abstract idea itself, taken alone and in combination. Therefore, claim 3 is subject-matter ineligible.
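Claim 2's “unified segment-wise representation length” can be illustrated simply: segments of different lengths are mapped to representations of one common length. A minimal sketch, assuming NumPy and using chunked mean-pooling as one hypothetical way to enforce the common length (the claim language does not specify the mechanism):

```python
import numpy as np

UNIFIED_LENGTH = 4  # representation length common across all segment-wise representations

def unified_representation(segment, unified_length=UNIFIED_LENGTH):
    """Map a variable-length segment to a fixed-length representation by
    mean-pooling over `unified_length` contiguous chunks."""
    chunks = np.array_split(np.asarray(segment, dtype=float), unified_length)
    return np.array([chunk.mean() for chunk in chunks])

short = unified_representation([1.0, 2.0, 3.0, 4.0, 5.0])   # 5 values in
long = unified_representation([1.0] * 10)                    # 10 values in
assert short.shape == long.shape == (UNIFIED_LENGTH,)        # common length out
```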
Regarding Claim 4:

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 4 recites wherein the multi-segment input feature representation is determined based at least in part on a three-dimensional tensor that is generated based at least in part on the two-dimensional representation (This limitation is a mental process as it encompasses a human mentally determining the multi-segment input feature and a human mentally generating a three-dimensional tensor.). Therefore, claim 4 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 4 does not recite any additional elements and therefore is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: Claim 4 does not recite any additional elements and therefore cannot provide significantly more than the abstract idea itself, taken alone and in combination. Therefore, claim 4 is subject-matter ineligible.

Regarding Claim 5:

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 5 recites wherein the set of input feature representation segments is determined based at least in part on a segmentation policy that requires that each pair of consecutive input feature representation segments share a number of input feature representation values (This limitation is a mental process as it encompasses a human mentally determining the m input feature representation segments.). Therefore, claim 5 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 5 does not recite any additional elements and therefore is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: Claim 5 does not recite any additional elements and therefore cannot provide significantly more than the abstract idea itself, taken alone and in combination. Therefore, claim 5 is subject-matter ineligible.
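The segmentation policy recited in claims 5 and 6 (consecutive segments sharing a number of values) is an overlapping sliding-window scheme. A short sketch in plain Python; the segment length and overlap values are hypothetical choices for illustration:

```python
def overlapping_segments(n, segment_length, overlap):
    """Segmentation policy: consecutive segments share `overlap` values.
    Returns (initial, terminal) in-sequence position indicators (inclusive)."""
    step = segment_length - overlap
    segments = []
    start = 0
    while start + segment_length <= n:
        segments.append((start, start + segment_length - 1))
        start += step
    return segments

# n = 10 values, segments of length 4, each consecutive pair sharing 2 values.
segs = overlapping_segments(10, 4, 2)
# → [(0, 3), (2, 5), (4, 7), (6, 9)]
```

Each (initial, terminal) pair doubles as the claimed in-sequence position indicators, and terminal - initial + 1 gives the segment length indicator.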
Regarding Claim 6:

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 6 recites wherein the set of input feature representation segments is determined based at least in part on a segmentation policy that requires that a pair of consecutive input feature representation segments share a number of input feature representation values (This limitation is a mental process as it encompasses a human mentally determining the m input feature representation segments.). Therefore, claim 6 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 6 does not recite any additional elements and therefore is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: Claim 6 does not recite any additional elements and therefore cannot provide significantly more than the abstract idea itself, taken alone and in combination. Therefore, claim 6 is subject-matter ineligible.

Regarding Claim 7:

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 7 recites the same abstract idea as in claim 4. Therefore, claim 7 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 7 further recites an additional element of wherein the one-dimensional convolutional neural network is configured to generate a two-dimensional output (This element does not integrate the abstract idea into a practical application because it recites a technological environment in which to apply a judicial exception (see MPEP 2106.05(h)).). Therefore, claim 7 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: The additional element of claim 7 does not provide significantly more than the abstract idea itself, taken alone and in combination, because wherein the one-dimensional convolutional neural network is configured to generate a two-dimensional output specifies a particular technological environment to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(h)). Therefore, claim 7 is subject-matter ineligible.

Regarding Claim 8:

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 8 recites wherein the feature value is associated with an input feature type designation of a plurality of input feature type designations (This limitation is a mental process as it encompasses a human mentally associating each feature value with an input feature type designation.); generating the initial input feature representation comprises: generating one or more image representations of the input feature (This limitation is a mental process as it encompasses a human mentally generating one or more image representations.) wherein: (i) an image representation count of the one or more image representations is based at least in part on the plurality of input feature type designations, (ii) an image representation of the one or more image representations comprises a plurality of image regions, (iii) an image region for the image representation corresponds to a genetic variant identifier, (iv) generating the image representation associated with a character category is performed based at least in part on the plurality of feature values of the input feature having the input feature type designation; (This limitation is a mental process as it further modifies the mental process of “generating…image representations” by defining “an image representation count,” which a human can use to execute the mental process.)
generating a tensor representation of the one or more image representations of the input feature; (This limitation is a mental process as it encompasses a human mentally generating a tensor representation of the image representations.) generating a plurality of positional encoding maps (This limitation is a mental process as it encompasses a human mentally generating positional encoding maps.) wherein: (i) a positional encoding map of the plurality of positional encoding maps comprises a plurality of positional encoding map regions, (ii) a positional encoding map region for the positional encoding map corresponds to the genetic variant identifier, and (iii) the genetic variant identifier is associated with a positional encoding map region set comprising one or more positional encoding map regions, from the plurality of positional encoding map regions, that are associated with the genetic variant identifier; (This limitation is a mental process as it further modifies the mental process of “generating… a plurality of positional encoding maps” by defining “each positional encoding map,” which a human can use to execute the mental process.) and generating the initial input feature representation based at least in part on the tensor representation and the plurality of positional encoding maps (This limitation is a mental process as it encompasses a human mentally generating the initial input feature representation.). Therefore, claim 8 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 8 does not further recite any additional elements. Therefore, claim 8 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: Since there are no additional elements, claim 8 does not provide significantly more than the abstract idea itself, taken alone and in combination. Therefore, claim 8 is subject-matter ineligible.
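Claim 8's image representations and positional encoding maps can be pictured as channels of one tensor. A toy sketch, assuming NumPy; the 2x2 grid, the two input feature type designations, and the one-hot positional maps are hypothetical choices for illustration only:

```python
import numpy as np

# Toy setup: g = 4 genetic variants laid out on a 2x2 image grid, two input
# feature type designations (hence two image representations), plus one
# positional encoding map per variant marking that variant's region.
g = 4
side = 2
feature_values = np.array([0.0, 1.0, 1.0, 0.0])   # one feature value per variant
type_mask = np.array([0, 1, 0, 1])                # type designation per value

# One image representation per type designation: a region per variant holds
# the value when the variant has that type, else 0.
images = np.stack([
    np.where(type_mask == t, feature_values, 0.0).reshape(side, side)
    for t in (0, 1)
])

# Positional encoding maps: one one-hot map per variant marking its region.
pos_maps = np.stack([
    (np.arange(g) == v).astype(float).reshape(side, side) for v in range(g)
])

# Tensor representation: channels = image representations + positional maps.
initial_representation = np.concatenate([images, pos_maps], axis=0)
assert initial_representation.shape == (2 + g, side, side)
```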
Regarding Claim 9:

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 9 recites wherein generating the one or more image representations of the input feature further comprises: generating a first image representation based at least in part on a first subset of the plurality of input features; (This limitation is a mental process as it encompasses a human mentally generating an image representation.) generating a second image representation generated based at least in part on a second subset of the plurality of input features; (This limitation is a mental process as it encompasses a human mentally generating an image representation.) and generating a differential image representation based at least in part on an image difference operation across the first image representation and the second image representation (This limitation is a mental process as it encompasses a human mentally generating an image representation.). Therefore, claim 9 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 9 does not recite any additional elements and therefore is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: Claim 9 does not recite any additional elements and therefore cannot provide significantly more than the abstract idea itself, taken alone and in combination. Therefore, claim 9 is subject-matter ineligible.

Regarding Claim 10:

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 10 recites wherein generating the one or more image representations of the input feature further comprises: generating a first allele image representation based at least in part on a first subset of the plurality of input features corresponding to a first allele; (This limitation is a mental process as it encompasses a human mentally generating an image representation.) generating a second allele image representation based at least in part on a second subset of the input feature corresponding to a second allele; (This limitation is a mental process as it encompasses a human mentally generating an image representation.) generating a dominant allele image representation based at least in part on a third subset of the plurality of input features corresponding to a dominant allele; (This limitation is a mental process as it encompasses a human mentally generating an image representation.) generating a minor allele image representation generated based at least in part on a fourth subset of the plurality of input features corresponding to a minor allele; (This limitation is a mental process as it encompasses a human mentally generating an image representation.) and generating a zygosity image representation of the one or more image representations based at least in part on performing one or more operations across the first allele image representation, the second allele image representation, the dominant allele image representation, and the minor allele image representation (This limitation is a mental process as it encompasses a human mentally generating an image representation.). Therefore, claim 10 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 10 does not recite any additional elements and therefore is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: Claim 10 does not recite any additional elements and therefore cannot provide significantly more than the abstract idea itself, taken alone and in combination. Therefore, claim 10 is subject-matter ineligible.
Regarding Claim 11:

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 11 recites wherein generating the one or more image representations of the input feature further comprises: identifying an initial image representation of the input feature; (This limitation is a mental process as it encompasses a human mentally identifying an image representation.) assigning an intensity value to the input feature type designation; (This limitation is a mental process as it encompasses a human mentally assigning one or more intensity values to each input feature type designation.) and generating an intensity image representation comprising a plurality of intensity image regions based on the intensity value (This limitation is a mental process as it encompasses a human mentally generating an image representation.), wherein an intensity image region of the plurality of intensity image regions corresponds to the genetic variant identifier (This limitation is a mental process as it further modifies the mental process of “generating an intensity image representation” by defining “an intensity image region,” which a human can use to execute the mental process.). Therefore, claim 11 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 11 does not recite any additional elements and therefore is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: Claim 11 does not recite any additional elements and therefore cannot provide significantly more than the abstract idea itself, taken alone and in combination. Therefore, claim 11 is subject-matter ineligible.

Regarding Claim 12:

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 12 recites wherein the prediction comprises a polygenic risk score for one or more diseases for one or more individuals associated with the input feature (This limitation is a mental process as it modifies the mental process of “generating a prediction” by defining “a prediction,” which a human can use to execute the mental process.). Therefore, claim 12 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 12 does not recite any additional elements and therefore is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: Claim 12 does not recite any additional elements and therefore cannot provide significantly more than the abstract idea itself, taken alone and in combination. Therefore, claim 12 is subject-matter ineligible.

Regarding Claim 13:

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 13 recites wherein the feature value corresponds to a categorical feature type or numerical feature type (This limitation is a mental process as it further modifies the mental process of claim 8 by defining “each feature value,” which a human can use to execute the mental process.). Therefore, claim 13 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 13 does not recite any additional elements and therefore is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: Claim 13 does not recite any additional elements and therefore cannot provide significantly more than the abstract idea itself, taken alone and in combination. Therefore, claim 13 is subject-matter ineligible.

Regarding Claim 14:

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 14 recites the same abstract idea as in claim 8. Therefore, claim 14 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 14 further recites wherein the feature value further corresponds to a chromosome number and locus.
(This element does not integrate the abstract idea into a practical application because it recites a technological environment in which to apply a judicial exception (see MPEP 2106.05(h)).) Therefore, claim 14 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: The additional elements of claim 14 do not provide significantly more than the abstract idea itself, taken alone and in combination, because wherein the feature value further corresponds to a chromosome number and locus specifies a particular technological environment to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(h)). Therefore, claim 14 is subject-matter ineligible.

Regarding Claim 15:

Subject Matter Eligibility Analysis Step 1: Claim 15 recites a method and is thus a process, one of the four statutory categories of patentable subject matter.

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 15 recites generating based at least in part on the ordered sequence of input feature representation values, a set of input feature representation segments (This limitation is a mental process as it encompasses a human mentally generating input feature representation segments) wherein an input feature representation segment of the set of input feature representation segments: (i) comprises a defined subset of the ordered sequence of input feature representation values that (a) begins with an initial input feature representation value having an initial value in-sequence position indicator and (b) ends with a terminal input feature representation value having a terminal value in-sequence position indicator, (ii) is associated with a segment length indicator that is determined based at least in part on the initial value in-sequence position indicator and the terminal value in-sequence position indicator, and (iii) is associated with a one-dimensional convolutional neural network, of a set of one-dimensional convolutional neural networks
respectively associated with the set of input feature representation segments, that is associated with an input dimensionality value that corresponds to the segment length indicator for the particular input feature representation segment (This limitation is a mental process as it further modifies the mental process of “generating… m input feature representation segments” by defining “each input feature representation segment,” which a human can use to execute the mental process.); generating… based at least in part on the input feature representation segment, a segment-wise representation of the input feature representation segment (This limitation is a mental process as it encompasses a human mentally generating a segment-wise representation of the input feature representation segment.); generating… a multi-segment input feature representation of the input feature based at least in part on each segment-wise representation of the input feature representation segment (This limitation is a mental process as it encompasses a human mentally generating a feature representation of the input feature.); and generate a prediction for the input feature (This limitation is a mental process as it encompasses a human mentally generating a prediction.). Therefore, claim 15 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 15 further recites additional elements of one or more processors (This element does not integrate the abstract idea into a practical application because it recites generic computing components on which to perform the abstract idea (see MPEP 2106.05(f)).); one or more memories storing processor-executable instructions that, when executed by the one or more processors, cause the one or more processors to perform operations (This element does not integrate the abstract idea into a practical application because it recites generic computing components on which to perform the abstract idea (see MPEP 2106.05(f)).);
receiving, by one or more processors, an initial input feature representation corresponding to an input feature of a plurality of input features defined for a neural network wherein: (i) the initial input feature representation is a fixed-size representation of the input feature, (ii) the input feature comprises a plurality of feature values, (iii) a feature value of the plurality of feature values corresponds to a genetic variant identifier of a plurality of genetic variants, and (iv) the initial input feature representation comprises an ordered sequence of input feature representation values (This element does not integrate the abstract idea into a practical application because it recites insignificant extra-solution activity of data gathering (see MPEP 2106.05(g)).) using the one-dimensional convolutional neural network, (This element does not integrate the abstract idea into a practical application because amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).) using a multi-dimensional convolutional neural network, (This element does not integrate the abstract idea into a practical application because amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).) training by the one or more processors, the set of one-dimensional convolutional neural networks, the multi-dimensional convolutional neural network, and the neural network based on the multi-segment input feature representation. (This element does not integrate the abstract idea into a practical application because amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).) Therefore, claim 15 is not integrated into a practical application. Subject Matter Eligibility Analysis Step 2B: The additional elements of claim 15 do not provide significantly more than the abstract idea itself, taken alone and in combination because one or more processors uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)). 
one or more memories storing processor-executable instructions that, when executed by the one or more processors, cause the one or more processors to perform operations uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)). receiving, by one or more processors, an initial input feature representation corresponding to an input feature of a plurality of input features defined for a neural network wherein: (i) the initial input feature representation is a fixed-size representation of the input feature, (ii) the input feature comprises a plurality of feature values, (iii) a feature value of the plurality of feature values corresponds to a genetic variant identifier of a plurality of genetic variants, and (iv) the initial input feature representation comprises an ordered sequence of input feature representation values is the well understood, routine, and conventional activity of “transmitting or receiving data over a network” (see MPEP 2106.05(d)(II); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network)). using the one-dimensional convolutional neural network, uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)). using a multi-dimensional convolutional neural network, uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)). training, by the one or more processors, the set of one-dimensional convolutional neural networks, the multi-dimensional convolutional neural network, and the neural network based on the multi-segment input feature representation uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)). Therefore, claim 15 is subject-matter ineligible. 
Regarding claim 16, claim 16 recites substantially similar limitations to claim 2 and is therefore rejected under the same analysis. Regarding claim 17, claim 17 recites substantially similar limitations to claim 3 and is therefore rejected under the same analysis. Regarding claim 18, claim 18 recites substantially similar limitations to claim 4 and is therefore rejected under the same analysis. Regarding claim 19, claim 19 recites substantially similar limitations to claim 5 and is therefore rejected under the same analysis. Regarding Claim 20: Subject Matter Eligibility Analysis Step 1: Claim 20 recites one or more non-transitory computer-readable media and is thus a manufacture, one of the four statutory categories of patentable subject matter. Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 20 recites generating based at least in part on the ordered sequence of input feature representation values, a set of input feature representation segments (This limitation is a mental process as it encompasses a human mentally generating input feature representation segments.) wherein an input feature representation segment of the set of input feature representation segments: (i) comprises a defined subset of the ordered sequence of input feature representation values that (a) begins with an initial input feature representation value having an initial value in-sequence position indicator and (b) ends with a terminal input feature representation value having a terminal value in-sequence position indicator, (ii) is associated with a segment length indicator that is determined based at least in part on the initial value in-sequence position indicator and the terminal value in-sequence position indicator, and (iii) is associated with a one-dimensional convolutional neural network, of a set of one-dimensional convolutional neural networks respectively associated with the set of input feature representation segments, that is associated with an input dimensionality value that corresponds to the segment length 
indicator for the particular input feature representation segment (This limitation is a mental process as it further modifies the mental process of “generating… m input feature representation segments” by defining the “each input feature representation segment,” which a human can use to execute the mental process.), generating… based at least in part on the input feature representation segment, a segment-wise representation of the input feature representation segment (This limitation is a mental process as it encompasses a human mentally generating a segment-wise representation of the input feature representation segment.) generating… a multi-segment input feature representation of the input feature based at least in part on each segment-wise representation of the input feature representation segment (This limitation is a mental process as it encompasses a human mentally generating a feature representation of the input feature.) generate a prediction for the input feature (This limitation is a mental process as it encompasses a human mentally generating a prediction.). Therefore, claim 20 recites an abstract idea. Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 20 further recites additional elements of One or more non-transitory computer-readable media storing processor-executable instructions that, when executed by one or more processors, cause the one or more processors to perform operations (This element does not integrate the abstract idea into a practical application because it recites generic computing components on which to perform the abstract idea (see MPEP 2106.05(f)).) 
receiving, by one or more processors, an initial input feature representation corresponding to an input feature of a plurality of input features defined for a neural network wherein: (i) the initial input feature representation is a fixed-size representation of the input feature, (ii) the input feature comprises a plurality of feature values, (iii) a feature value of the plurality of feature values corresponds to a genetic variant identifier of a plurality of genetic variants, and (iv) the initial input feature representation comprises an ordered sequence of input feature representation values (This element does not integrate the abstract idea into a practical application because it recites insignificant extra-solution activity of data gathering (see MPEP 2106.05(g)).) using the one-dimensional convolutional neural network, (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).) using a multi-dimensional convolutional neural network, (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).) training, by the one or more processors, the set of one-dimensional convolutional neural networks, the multi-dimensional convolutional neural network, and the neural network based on the multi-segment input feature representation. (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).) Therefore, claim 20 is not integrated into a practical application. 
Subject Matter Eligibility Analysis Step 2B: The additional elements of claim 20 do not provide significantly more than the abstract idea itself, taken alone and in combination because One or more non-transitory computer-readable media storing processor-executable instructions that, when executed by one or more processors, cause the one or more processors to perform operations uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)). receiving, by one or more processors, an initial input feature representation corresponding to an input feature of a plurality of input features defined for a neural network wherein: (i) the initial input feature representation is a fixed-size representation of the input feature, (ii) the input feature comprises a plurality of feature values, (iii) a feature value of the plurality of feature values corresponds to a genetic variant identifier of a plurality of genetic variants, and (iv) the initial input feature representation comprises an ordered sequence of input feature representation values is the well understood, routine, and conventional activity of “transmitting or receiving data over a network” (see MPEP 2106.05(d)(II); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network)). using the one-dimensional convolutional neural network, uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)). using a multi-dimensional convolutional neural network, uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)). 
training, by the one or more processors, the set of one-dimensional convolutional neural networks, the multi-dimensional convolutional neural network, and the neural network based on the multi-segment input feature representation uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)). Therefore, claim 20 is subject-matter ineligible. Allowable Subject Matter Claims 1-20 would be allowable over the prior art of record if the 101 rejections are overcome in light of the instant amendments. Specifically, regarding Claim 1, “(iii) is associated with a one-dimensional convolutional neural network, of a set of one-dimensional convolutional neural networks respectively associated with the set of input feature representation segments, that is associated with an input dimensionality value that corresponds to the segment length indicator;”, “generating, by the one or more processors, using the one-dimensional convolutional neural network, and based at least in part on the input feature representation segment, a segment-wise representation of the input feature representation segment;”, “generating, by the one or more processors and using a multi-dimensional convolutional neural network, a multi-segment input feature representation of the input feature based at least in part on the segment-wise representation of the input feature representation segment;”, and “training, by the one or more processors, the set of one-dimensional convolutional neural networks, the multi-dimensional convolutional neural network, and the neural network based on the multi-segment input feature representation to generate a prediction for the input feature” in conjunction with the other limitations of the claims are not taught by the prior art of record. The closest prior art of record is Gunasekaran et al. 
(“Analysis of DNA Sequence Classification Using CNN and Hybrid Models”) (hereafter referred to as Guna), Pascal (“K-mer analysis with python”) (hereafter referred to as Pascal), Zhang (“Intro to Distributed Deep Learning Systems”) (hereafter referred to as Zhang 1), Kandpal et al. (US 2022/0327322 A1) (hereafter referred to as Kandpal), and Krizhevsky (US 2015/0294219 A1) (hereafter referred to as Krizhevsky). Guna discloses receiving an initial input feature representation corresponding to an input feature of a plurality of input features defined for a neural network (Guna, page 3, Figure 3), wherein the initial input feature representation is a fixed-size representation of the input feature (Guna, page 3, Figure 3), the input feature comprises a plurality of feature values (Guna, page 1, 2nd column, last paragraph – page 2, 1st column, 1st paragraph), a feature value of the plurality of feature values corresponds to a genetic variant identifier of a plurality of genetic variants (Guna, page 1, 2nd column, last paragraph – page 2, 1st column, 1st paragraph; Guna, page 1, 1st column, 1st paragraph), the initial input feature representation comprises an ordered sequence of input feature representation values (Guna, page 3, Figure 3), generating a set of input feature representation segments (Guna, page 1, 1st column, 3rd paragraph), wherein an input feature representation segment of the set of input feature representation segments comprises a defined subset of the ordered sequence of input feature representation values that begins with an initial feature representation value and ends with a terminal input feature representation value (Guna, page 3, Figure 5), and the input feature representation is associated with a one-dimensional convolutional neural network (Guna, page 5, 2nd column). 
Guna does not disclose an initial value in-sequence position indicator, a terminal value in-sequence position indicator, a segment length indicator, a set of one-dimensional convolutional neural networks associated with the set of input feature representation segments that is associated with an input dimensionality value that corresponds to the segment length indicator, generating a segment-wise representation of the input feature representation segment, generating a multi-segment input feature representation of the input feature, or training neural networks based on the multi-segment input feature representation to generate a prediction for the input feature. Pascal discloses having an initial value in-sequence position indicator (Pascal, page 1, 3rd paragraph), having a terminal value in-sequence position indicator (Pascal, page 1, 3rd paragraph), and a segment length indicator (Pascal, page 1, 3rd paragraph). Pascal does not disclose a set of one-dimensional convolutional neural networks associated with the set of input feature representation segments that is associated with an input dimensionality value that corresponds to the segment length indicator, generating a segment-wise representation of the input feature representation segment, generating a multi-segment input feature representation of the input feature, or training neural networks based on the multi-segment input feature representation to generate a prediction for the input feature. Zhang 1 discloses neural networks respectively associated with the set of input feature representation segments, that is associated with an input dimensionality value that corresponds to the segment length indicator (Zhang 1, page 5, 2nd paragraph). 
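The in-sequence position indicators and segment length indicator attributed to Pascal's k-mer analysis can be pictured with a short sketch. The function name and dictionary keys below are illustrative assumptions, not language from the record:

```python
def kmers_with_positions(sequence, k):
    """Hypothetical k-mer style segmentation: each segment carries its
    initial and terminal in-sequence position indicators, from which a
    segment length indicator is derived."""
    out = []
    for start in range(len(sequence) - k + 1):
        end = start + k - 1  # terminal value position (inclusive)
        out.append({"segment": sequence[start:start + k],
                    "initial_pos": start,
                    "terminal_pos": end,
                    # length indicator determined from the two positions
                    "length": end - start + 1})
    return out

segments = kmers_with_positions("ATGCGT", 3)
```

Each entry records where the segment begins and ends in the ordered sequence, mirroring how the claims derive the segment length indicator from the two position indicators.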
Zhang 1 does not disclose a set of one-dimensional convolutional neural networks, generating a segment-wise representation of the input feature representation segment, generating a multi-segment input feature representation of the input feature, or training neural networks based on the multi-segment input feature representation to generate a prediction for the input feature. Kandpal discloses generating a segment-wise representation of the input feature representation segment (Kandpal, page 18, paragraph 0105), generating a multi-segment input feature representation of the input feature (Kandpal, page 18, paragraph 0105), and training neural networks based on the multi-segment input feature representation to generate a prediction for the input feature (Kandpal, page 10, paragraphs 0004-0008; Kandpal, page 18, paragraphs 0105-0107). Kandpal does not disclose a set of one-dimensional convolutional neural networks. Krizhevsky discloses a set of convolutional neural networks (Krizhevsky, page 2, Figure 1). Krizhevsky does not disclose a set of one-dimensional convolutional neural networks. When combined, the references described above do not disclose the data flow of generating using the one-dimensional convolutional neural network a segment-wise representation of the input feature representation segment, generating using a multi-dimensional convolutional neural network a multi-segment input feature representation of the input feature based at least in part on the segment-wise representation of the input feature representation segment, and training the set of one-dimensional convolutional neural networks, the multi-dimensional convolutional neural network, and the neural network based on the multi-segment input feature representation to generate a prediction for the input feature. Therefore, without the use of hindsight reasoning, the prior art of record does not disclose claim 1 as a whole. 
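The data flow the Examiner finds missing from the combined references — per-segment 1-D convolutions whose input dimensionality matches each segment's length, feeding a multi-dimensional stage that yields a prediction — can be sketched in plain Python. All names, sizes, and the stand-in operations here are illustrative assumptions, not the application's actual model:

```python
import random

random.seed(0)

# Fixed-size initial representation of one input feature (illustrative).
initial_repr = [random.random() for _ in range(12)]

# Segments defined by (initial, terminal) in-sequence position indicators;
# the segment length indicator is derived from the two positions.
segment_bounds = [(0, 3), (4, 7), (8, 11)]  # inclusive positions

def conv1d(segment, kernel):
    """'Valid' 1-D convolution standing in for a per-segment CNN.
    With len(kernel) == len(segment) the output is a single value,
    i.e. the network's input dimensionality matches the segment length."""
    k = len(kernel)
    return [sum(x * w for x, w in zip(segment[i:i + k], kernel))
            for i in range(len(segment) - k + 1)]

segments = [initial_repr[s:e + 1] for s, e in segment_bounds]
# One 1-D network per segment, sized to that segment's length indicator.
kernels = [[random.random() for _ in range(e - s + 1)] for s, e in segment_bounds]
segment_wise = [conv1d(seg, ker) for seg, ker in zip(segments, kernels)]

# Stand-in for the multi-dimensional CNN: combine the stacked segment-wise
# representations into a multi-segment representation, then reduce it to a
# prediction for the input feature.
multi_segment = [v for row in segment_wise for v in row]
prediction = sum(multi_segment) / len(multi_segment)
```

In a real implementation the averaging step would be a trained multi-dimensional convolutional network and the whole pipeline would be trained end to end; the sketch only shows the claimed shape of the data flow.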
Claims 2-14 are allowable at least due to their dependencies on claim 1 if the 101 rejections are overcome. Claim 15 recites substantially similar limitations as claim 1 and is therefore allowable under the same rationale if the 101 rejections are overcome. Claims 16-19 are allowable at least due to their dependencies on claim 15 if the 101 rejections are overcome. Claim 20 recites substantially similar limitations as claim 1 and is therefore allowable under the same rationale if the 101 rejections are overcome. Response to Arguments The objections to the drawings have been overcome in light of the instant amendments. The objections to the claims have been overcome in light of the instant amendments. On pages 13-14, Applicant argues: First, the Office Action notes on page 32 that the claims are allowable over prior art. Office Action p. 32. In Ex Parte Desjardines, recently designated as precedential by Director Squires, the Appeals Review Panel stated that "the traditional and appropriate tools to limit patent protection to its proper scope" are 35 U.S.C. §§ 102, 103, & 112, and that these statutory provisions should be the focus of examination. Ex Parte Desjardines, p. 10. These provisions were the focus of examination and, after amendments to the claims, the claims now comply with each of them. For at least this reason, Applicant respectfully requests reconsideration of the Office Action's rejection under 35 U.S.C. § 101. Regarding the Applicant’s argument that the 101 rejections should be withdrawn on the basis of complying with 35 U.S.C. 102, 103, and 112, Examiner respectfully disagrees. Examiner respectfully notes that Ex Parte Desjardines does not change the analysis of claims under 35 U.S.C. 101. On page 14, Applicant argues: Second, the Office Action fails to consider the training operations previously recited by the claims by dismissing them without explanation. Office Action p. 9. 
The Office Action's analysis runs counter to Ex Parte Desjardines, where the Appeals Review Panel acknowledged that "improvements in training the machine learning model itself" were sufficient to integrate an abstract idea into a practical application. Ex Parte Desjardines, p. 8. Just like the claims in Ex Parte Desjardines, the present claims recite an improved training technique for a machine learning model that enhances the performance of a computer. Compare Specification ¶¶ [0033] and [0157]-[0158] to Ex Parte Desjardines, p. 9. For example, in Ex Parte Desjardines, the claimed training technique improved the storage capacity of a computer. Here, the claims increase the speed and reduce the amount of computational resources required to perform large, data-intensive machine learning tasks. See Specification ¶¶ [0033] and [0157]-[0158]. In both cases, a training technique is applied to adjust the values of a machine learning model to improve how the machine learning model itself operates. Thus, like the claims in Ex Parte Desjardines, the present claims are directed to patent eligible subject matter under 35 U.S.C. § 101. Regarding the Applicant’s argument that training operations provide an improvement, Examiner respectfully disagrees. Examiner respectfully notes that the limitation of “training the set of one-dimensional convolutional neural networks, the multi-dimensional convolutional neural network, and the neural network based on the multi-segment input feature representation” amounts to mere “apply it on a computer” and uses a computer as a tool to perform the abstract idea, which cannot provide significantly more (see MPEP 2106.05(f)). 
Examiner further respectfully notes that while paragraphs [0033] and [0157]-[0158] provide an improvement, this improvement is directed to “process[ing] the m input feature representation segments in parallel” [0033] and “using a segmentation policy that requires that consecutive/neighboring input feature representation segment having a defined degree of shared input feature representation segments” [0157] which are not reflected in the claims. Examiner further notes that the claims reflect a segmentation policy and consecutive/neighboring input feature representation segments, but do not reflect the defined degree of shared input feature representation segments, the shared input feature representation segments, nor processing segments in parallel. Under broadest reasonable interpretation the limitation of “an input feature representation segment of the set of input feature representation segments:…(iii) is associated with a one-dimensional convolutional neural network, of a set of one-dimensional convolutional neural networks respectively associated with the set of input feature representation segments” is interpreted as a segment being associated with a one-dimensional convolutional neural network of a set of neural networks, where the set of neural networks is associated with a set of segments. This limitation, however, does not claim processing the segments on multiple neural networks at the same time nor in a parallel fashion. Thus, this limitation does not reflect the parallelization of the invention that is mentioned in the specification. Examiner recommends amending the claims to reflect the specific improvement described in the specification. On page 16, Applicant argues: As amended, claim 1 recites a machine learning process in which multiple neural networks are applied to reduce the overall computational load and time required to process large, data-intensive machine learning tasks. 
The human mind cannot practically (i) generate a set of input feature representation segments, (ii) generate, using the one-dimensional convolutional neural network, a segment-wise representation of the input feature representation segment, (iii) generate, using a multi-dimensional convolutional neural network, a multi-segment input feature representation of the input feature, or (iv) train the set of one-dimensional convolutional neural networks, the multi-dimensional convolutional neural network, and the neural network based on the multi-segment input feature representation. Accordingly, no element of claim 1, as amended, under its broadest reasonable interpretation may be considered a mental process as defined by the MPEP. For at least this reason, Applicant respectfully requests withdrawal of the rejection under 35 U.S.C. § 101 because the claimed invention is not directed to a judicial exception under prong one of Step 2A. Regarding the Applicant’s argument that claim 1 does not recite an abstract idea, Examiner respectfully disagrees. Specifically, Examiner respectfully notes that a human can mentally (i) generate a set of input feature representation segments, (ii) generate a segment-wise representation of the input feature representation segment, and (iii) generate a multi-segment input feature representation. Examiner further respectfully notes that “claims can recite a mental process even if they are claimed as being performed on a computer” (MPEP 2106.04(a)(2)(III)(C)). On page 17, Applicant argues: The MPEP states that "[l]imitations the courts have found indicative that an additional element (or combination of elements) may have integrated the exception into a practical application include an improvement in the functioning of a computer, or an improvement to other technology or technical field." MPEP § 2106.04(d). 
Ex Parte Desjardines found that improvements in training a machine learning model itself are sufficient to integrate an exception into a practical application. Ex Parte Desjardines, p. 8. Claim 1 recites an improved training technique for a machine learning model that improves the speed and reduces the computational load required for large machine learning tasks. See Specification, ¶¶ [0033] and [0157]-[0158]. For example, claim 1 recites, inter alia: training ... the set of one-dimensional convolutional neural networks, the multi-dimensional convolutional neural network, and the neural network based on the multi-segment input feature representation to generate a prediction for the input feature. (emphasis added). Claim 1 is directed to an improvement in computer technology that is directly tied to machine learning, specifically, "efficiently performing machine learning models on large datasets and/or on data-intensive datasets." Specification ¶ [0033]. "[I]nstead of performing the often excessively large computational task of processing [a large file] as a whole" the claimed machine learning framework "divide[s] the [file] into smaller computational tasks that can be more manageably performed by a machine learning model." Id. This allows for "faster and less-resource intensive processing of large machine learning tasks and/or data-intensive machine learning tasks by enabling parallelization of the machine learning tasks and/or data-intensive machine learning tasks." Id. Regarding the Applicant’s argument that training operations provide an improvement, Examiner respectfully disagrees. 
Examiner respectfully notes that the limitation of “training the set of one-dimensional convolutional neural networks, the multi-dimensional convolutional neural network, and the neural network based on the multi-segment input feature representation” amounts to mere “apply it on a computer” and uses a computer as a tool to perform the abstract idea, which cannot provide significantly more (see MPEP 2106.05(f)). Examiner further respectfully notes that while paragraphs [0033] and [0157]-[0158] provide an improvement, this improvement is directed to “process[ing] the m input feature representation segments in parallel” [0033] and “using a segmentation policy that requires that consecutive/neighboring input feature representation segment having a defined degree of shared input feature representation segments” [0157] which are not reflected in the claims. Examiner further notes that the claims reflect a segmentation policy and consecutive/neighboring input feature representation segments, but do not reflect the defined degree of shared input feature representation segments, the shared input feature representation segments, nor processing segments in parallel. Under broadest reasonable interpretation the limitation of “an input feature representation segment of the set of input feature representation segments:…(iii) is associated with a one-dimensional convolutional neural network, of a set of one-dimensional convolutional neural networks respectively associated with the set of input feature representation segments” is interpreted as a segment being associated with a one-dimensional convolutional neural network of a set of neural networks, where the set of neural networks is associated with a set of segments. This limitation, however, does not claim processing the segments on multiple neural networks at the same time nor in a parallel fashion. Thus, this limitation does not reflect the parallelization of the invention that is mentioned in the specification. 
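The two specification features the Examiner finds missing from the claims — processing the m segments in parallel, and a segmentation policy under which consecutive segments share a defined number of values — can be sketched as follows. The function names and parameters are illustrative, not the application's own:

```python
from concurrent.futures import ThreadPoolExecutor

def segment_with_overlap(values, seg_len, overlap):
    """Illustrative segmentation policy: consecutive segments of length
    seg_len share `overlap` values, standing in for the specification's
    'defined degree of shared input feature representation segments'."""
    stride = seg_len - overlap
    return [values[i:i + seg_len]
            for i in range(0, len(values) - seg_len + 1, stride)]

def segment_model(segment):
    # Stand-in for a per-segment one-dimensional CNN.
    return sum(segment) / len(segment)

values = list(range(10))
segments = segment_with_overlap(values, seg_len=4, overlap=2)

# The m segments can be dispatched to their m networks concurrently,
# the parallelization described in the specification.
with ThreadPoolExecutor() as pool:
    segment_wise = list(pool.map(segment_model, segments))
```

Each consecutive pair of segments shares two values, and the per-segment computations run independently, which is exactly the behavior the Examiner says the current claim language does not capture.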
Examiner recommends amending the claims to reflect the specific improvement described in the specification. On page 18, Applicant argues: The machine learning process recited by the claims, specifically "training the set of one-dimensional convolutional neural networks, the multi-dimensional convolutional neural network, and the neural network based on the multi-segment input feature representation," is a combination of additional elements that go beyond any abstract idea. Moreover, they "address technical challenges related to infusing attention-like behavior into non-attention-based machine learning models." Specification ¶ [0036]. For example, the machine learning framework may "infus[e] attention-like behavior into a non-attention-based machine learning model[s] without requiring extensive computational operations needed to train a non-attention-based machine learning model. In this way, claim 1 recites an improvement in machine learning training that improves computational efficiency of machine learning models - not an underlying abstract idea." Id. Regarding the Applicant’s argument that training operations provide an improvement, Examiner respectfully disagrees. Examiner respectfully notes that the limitation of “training the set of one-dimensional convolutional neural networks, the multi-dimensional convolutional neural network, and the neural network based on the multi-segment input feature representation” amounts to mere “apply it on a computer” and uses a computer as a tool to perform the abstract idea, which cannot provide significantly more (see MPEP 2106.05(f)). Examiner further respectfully notes that while paragraph [0036] provides an improvement, this improvement is directed to “using a segmentation policy that requires that consecutive/neighboring input feature representation segment having a defined degree of shared input feature representation segments” and is not reflected in the claims. 
Examiner further notes that the claims reflect a segmentation policy and consecutive/neighboring input feature representation segments, but do not reflect the defined degree of shared input feature representation segments nor the shared input feature representation segments. Examiner recommends amending the claims to reflect the specific improvement described in the specification. On page 26, Applicant argues: For at least the same reasons as set forth above, Applicant submits that the independent claims 15 and 20 recite patent eligible subject matter under 35 U.S.C. § 101 and requests withdrawal of the rejection to claims 15 and 20 (and the claims that depend therefrom) as well as allowance in due course. Regarding the Applicant’s argument that the dependent claims are allowable at least due in part to their dependency on the independent claims, the Examiner respectfully disagrees and notes the instant rejections and response to arguments regarding the independent claims above. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Hall et al. (US 2022/0344049) also describes methods to segment data to be learned with separate models. Any inquiry concerning this communication or earlier communications from the examiner should be directed to KAITLYN R HAEFNER whose telephone number is (571)272-1429. The examiner can normally be reached Monday - Thursday: 7:15 am - 5:15 pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle Bechtold can be reached at (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. 
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /K.R.H./Examiner, Art Unit 2148 /MICHELLE T BECHTOLD/Supervisory Patent Examiner, Art Unit 2148

Prosecution Timeline

Jan 19, 2022
Application Filed
Jun 02, 2025
Non-Final Rejection — §101
Aug 18, 2025
Applicant Interview (Telephonic)
Aug 18, 2025
Examiner Interview Summary
Sep 08, 2025
Response Filed
Oct 15, 2025
Final Rejection — §101
Jan 21, 2026
Request for Continued Examination
Jan 26, 2026
Response after Non-Final Action
Feb 09, 2026
Non-Final Rejection — §101 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602431
METHODS FOR PERFORMING INPUT-OUTPUT OPERATIONS IN A STORAGE SYSTEM USING ARTIFICIAL INTELLIGENCE AND DEVICES THEREOF
2y 5m to grant Granted Apr 14, 2026
Patent 12572828
METHOD FOR INDUSTRY TEXT INCREMENT AND ELECTRONIC DEVICE
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 2 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
50%
Grant Probability
99%
With Interview (+66.7%)
4y 2m
Median Time to Grant
High
PTA Risk
Based on 4 resolved cases by this examiner. Grant probability derived from career allow rate.
