Prosecution Insights
Last updated: April 19, 2026
Application No. 17/994,416

NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING DEVICE

Non-Final OA · §101, §103, §112
Filed
Nov 28, 2022
Examiner
MISIR, DAYWAYSHWAR D
Art Unit
2127
Tech Center
2100 — Computer Architecture & Software
Assignee
Osaka University
OA Round
1 (Non-Final)
Grant Probability: 84% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 84% (above average) · 451 granted / 538 resolved · +28.8% vs TC avg
Interview Lift: +47.8% (strong), measured across resolved cases with interview
Typical Timeline: 2y 9m average prosecution · 11 applications currently pending
Career History: 549 total applications across all art units

Statute-Specific Performance

§101: 22.1% (-17.9% vs TC avg)
§103: 32.5% (-7.5% vs TC avg)
§102: 11.8% (-28.2% vs TC avg)
§112: 22.5% (-17.5% vs TC avg)
Deltas are relative to the Tech Center average estimate · Based on career data from 538 resolved cases

Office Action

§101 §103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-7 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1 recites “classifying a plurality of linear models” into “groups”. However, it is unclear, and not defined in the claims, what makes the models differ from one another (the variables and/or the coefficients) so as to justify grouping them; the claim does not specify the characteristics of the linear models or of the groups, and is therefore also incomplete for omitting essential elements, such omission amounting to a gap between the elements. See MPEP § 2172.01.

Claim 1 further recites the step of “deciding that includes”, along with “training data” and “training”. However, it is unclear whether the linear models involved in the preceding classifying step were trained as defined in the “deciding” step. It is also unclear whether the recited “training” is performed before the “first question” is output.

Claim 1 further recites “when a linear model in which the degree of importance is reflected” (emphasis added).
First, “degree of importance” is a relative term; second, it is unclear in the claim how this importance is “reflected” in the model. The claim does not specify the characteristics of the linear model or how a degree of importance may be “reflected”, and is therefore also incomplete for omitting essential elements, such omission amounting to a gap between the elements. See MPEP § 2172.01.

Claim 1 further recites “based on extent of decrease in number of target groups for selection according to an answer to the first question”. It is unclear what is intended by “target groups” and how the decrease in the number of target groups may be quantified or calculated; the claim is therefore also incomplete for omitting essential elements. See MPEP § 2172.01. The above rejections also apply to independent Claims 6 and 7.

Additionally, Claim 2 recites “generating the plurality of linear models according to formulation”. It is unclear in the claim what this “formulation” represents or is defined to be. Further, it is unclear how linear models can be generated by combining “types” of “degree of importance”. The claim is therefore also incomplete for omitting essential elements. See MPEP § 2172.01.

Additionally, Claim 3 recites “linear models including the first-type degree of importance” and “linear models including the second-type degree of importance”. It is unclear, and not defined in the claim, how a linear model may include a “degree of importance”; the claim is therefore also incomplete for omitting essential elements. See MPEP § 2172.01.
Additionally, Claim 4 recites “the selection includes deleting, from the plurality of groups, a group that includes a linear model in which explanatory variable with nonidentical degree of importance to the answer is included” (emphasis added). It is unclear, and not defined in the claim, which entities are “selected”, and how a degree of importance is associated with or calculated for an explanatory variable of a linear model, if not via the answer, so as to be judged “nonidentical”; the claim is therefore also incomplete for omitting essential elements. See MPEP § 2172.01.

Additionally, Claim 5 recites “calculating the total value”, which lacks antecedent basis, and it is unclear how this value is used in the subsequent limitation “deciding on the target explanatory variable for asking the second question”; the claim is therefore also incomplete for omitting essential elements. See MPEP § 2172.01.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-7 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: All claims are directed to either a method, a device, or a non-transitory computer-readable recording medium, and thus satisfy Step 1 as falling into one of the statutory categories.
Step 2A, Prong One: Independent Claim 1 recites (the same analysis applies to similar independent Claims 6 and 7): classifying a plurality of linear models, each of which includes one or more variables, into a plurality of groups in such a way that linear models which include identical variables included in each of the plurality of linear models and which have identical coefficient encoding with respect to the variables are grouped in same group. This limitation, under its broadest reasonable interpretation, covers concepts that can be performed in the human mind and therefore falls under the “Mental Processes” grouping of abstract ideas. That is, a person is capable of sorting models into groups based on their variables and coefficients using observation, evaluation, and judgment.

Step 2A, Prong Two: Claim 1 recites the additional elements of (the same analysis applies to similar independent Claims 6 and 7): outputting a first question used in deciding degree of importance of each explanatory variable included in training data which is used in training, by using machine learning, of the plurality of linear models, and deciding on an explanatory variable about which a second question is to be asked, when a linear model in which the degree of importance is reflected is to be selected from the plurality of linear models, based on extent of decrease in number of target groups for selection according to an answer to the first question, the second question being a question to be outputted after the first question. This limitation is considered as adding insignificant extra-solution activity (outputting the first and second questions, and deciding on an explanatory variable) to the judicial exception; see MPEP 2106.05(g). The further additional elements of a “computer” and/or a “processor” as recited in these claims are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component.
Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are therefore directed to an abstract idea.

Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements are considered as adding insignificant extra-solution activity (outputting the first and second questions, and deciding on an explanatory variable) to the judicial exception; see MPEP 2106.05(g) and MPEP 2106.05(d). And the “computer” and/or “processor” recited in these claims amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims are therefore not patent eligible.

Dependent Claim 2 is also considered as adding insignificant extra-solution activity (finding models based on importance) to the judicial exception; see MPEP 2106.05(g).

Dependent Claim 3 is also considered as adding insignificant extra-solution activity (deciding a target explanatory variable for asking a question based on a total group count value) to the judicial exception; see MPEP 2106.05(g).

Dependent Claim 4 is also considered as adding insignificant extra-solution activity (feedback, or question and answering, for pruning groups of models) to the judicial exception; see MPEP 2106.05(g).

Dependent Claim 5 is also considered as adding insignificant extra-solution activity (outputting a model and deciding on a target explanatory variable for asking a question based on a value) to the judicial exception; see MPEP 2106.05(g).
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-7 are rejected under 35 U.S.C. 103 as being unpatentable over Motohashi, US 2020/0074486 A1, in view of Goto, US 2020/0279178 A1.

Regarding Claim 1, Motohashi teaches: A non-transitory computer-readable recording medium having stored therein an information processing program that causes a computer to execute a process comprising: classifying a plurality of linear models, each of which includes one or more variables, into a plurality of groups in such a way that linear models which include identical variables included in each of the plurality of linear models and which have identical coefficient encoding with respect to the variables are grouped in same group (paragraph 14: “a plurality of prediction models that are each identified by the plurality of classifications”, the classifications representing the groupings.
And paragraphs 90, 118: “when displaying a plurality of prediction models, the display control unit 40 preferably displays weights of the same variables in a manner as to make the weights aligned in the same column. Further, the display control unit 40 may receive explanatory variables designated by the user through the reception unit 10 and sort the prediction models in descending order of the weights of the explanatory variables thus designated”, the weights being the coefficients).

While Motohashi teaches the limitations pointed out above, it may not teach all of the following, which Goto shows: and deciding that includes outputting a first question used in deciding degree of importance of each explanatory variable included in training data which is used in training, by using machine learning, of the plurality of linear models, and deciding on an explanatory variable about which a second question is to be asked, when a linear model in which the degree of importance is reflected is to be selected from the plurality of linear models, based on extent of decrease in number of target groups for selection according to an answer to the first question, the second question being a question to be outputted after the first question (paragraphs 47, 50, 84-85: “The information on hypothesis 142 is information that associates a combination of an objective variable and conditions regarding one or more explanatory variables corresponding to the objective variable with an importance degree. FIG. 3 is a diagram illustrating an example of the information on hypothesis. Hereinafter, a combination in the information on hypothesis 142 is sometimes referred to as a hypothesis”, the hypothesis representing the question and its validity or invalidity representing the answer).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to use the teachings of Goto with those of Motohashi for the “deciding” step recited above. The ordinary artisan would have been motivated to modify Motohashi in the manner set forth above for the purpose of determining the importance degree of a hypothesis, or of a question and corresponding answer [Goto: paragraph 50].
Regarding Claim 2, with Motohashi teaching those limitations of the claim as previously pointed out, Goto further teaches: The non-transitory computer-readable recording medium according to claim 1, wherein the process further includes generating the plurality of linear models according to formulation by combining, regarding each explanatory variable included in the training data, a first-type degree of importance in case of assuming that concerned explanatory variable is important and a second-type degree of importance in case of assuming that concerned explanatory variable is not important (paragraph 54: “The extraction apparatus 10 generates, by training, a model combining a hypothesis and an importance degree” and “the extraction apparatus 10 combines the data items to extract a large number of hypotheses, and performs machine training (e.g., Wide Learning) that adjusts importance degrees of the hypotheses (knowledge chunks (hereinafter, sometimes simply described as “chunks”)) and constructs a classification model with high accuracy”). 
Regarding Claim 3, with Motohashi teaching those limitations of the claim as previously pointed out, Goto further teaches: The non-transitory computer-readable recording medium according to claim 2, wherein the deciding includes calculating, for the each explanatory variable, a first-type group count indicating number of groups to which linear models including the first-type degree of importance belong, and a second-type group count indicating number of groups to which linear models including the second-type degree of importance belong, and deciding, as target explanatory variable for asking the second question, explanatory variable for which total value of the first-type group count and the second-type group count is smallest (paragraph 6: “predicting the objective variable from the explanatory variables of the test data using the trained model for each of groups by which classification has been performed at the classifying; and calculating a predetermined resource amount to be allocated to each of the groups based on the objective variable for each of the groups predicted at the predicting”. Paragraph 101: “The extraction unit 153 extracts a specific combination from the combinations based on the conditions or the importance degree for each of groups by which classification has been performed according to a classification condition that is at least a part of the conditions. The extraction unit 153 refers to the information on group 144 and classifies the hypotheses in the information on hypothesis 142 into the groups”. And paragraph 107: “The extraction apparatus 10 extracts a specific combination from the combinations based on the conditions or the importance degree for each of groups by which classification has been performed according to a classification condition that is at least a part of the conditions. In this way, the extraction apparatus 10 can evaluate the importance degree of a condition combining a plurality of item values and further classify the combinations into the groups”).

Regarding Claim 4, with Goto teaching those limitations of the claim as previously pointed out, Motohashi further teaches: The non-transitory computer-readable recording medium according to claim 3, wherein the deciding includes asking a user the first question about whether or not the target explanatory variable is important, obtaining answer to the first question, and the selection includes deleting, from the plurality of groups, a group that includes a linear model in which explanatory variable with nonidentical degree of importance to the answer is included (paragraphs 105, 109: “The user can conduct an analysis such as ‘is there a commonality between a group of customers who visit the facility and a group of customers who like orange juice?’”. And paragraph 150: “Performing output under designated conditions makes it possible to narrow down prediction models in accordance with the user's viewpoint, as illustrated in FIG. 23 and FIG. 24. That is, the use of the information processing system of the present invention makes it possible to analyze factors that possibly contribute to the prediction target from various viewpoints”).
Regarding Claim 5, with Goto teaching those limitations of the claim as previously pointed out, Motohashi further teaches: The non-transitory computer-readable recording medium according to claim 4, wherein the deciding includes outputting, when group count after the selection is smaller than a threshold value, at least one linear model belonging to a group counted in the group count, and deciding that, when the group count is equal to or greater than the threshold value, includes, using a linear model belonging to a group counted in the group count, calculating the total value, and deciding on the target explanatory variable for asking the second question (paragraphs 63, 65: “a prediction model is identified by a classification rather than an ID. In a case where a prediction model is used for the purpose of factor analysis, this configuration makes it possible to provide an information processing system capable of conducting a factor analysis with high usability when there are a large number of prediction models”. And paragraphs 84, 87: “The extraction unit 20 makes a query used for extracting a prediction model based on the classification thus received, and extracts the prediction model from the storage unit 30 based on the query thus made”).

Claims 6-7 are similar to Claim 1 and are rejected under the same rationale as stated above for that claim.

Examiner's Note: The Examiner cites particular pages, sections, columns, line numbers, and/or paragraphs in the references as applied to the claims above for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well.
It is respectfully requested that, in preparing responses, the applicant fully consider each reference in its entirety as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or discussed by the examiner, along with the additional related prior art made of record that is considered pertinent to applicant's disclosure and further shows the general state of the art. The Examiner's interpretations in parentheses are provided with the cited references to assist the applicant in better understanding how the examiner interprets the prior art to read on the claims. Such comments are entirely consistent with the intent and spirit of compact prosecution.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See the PTO-892 for the relevant prior art, where, for example, Katoh, US 2020/0090076 A1, teaches creating a plurality of decision trees, using pieces of training data respectively including an explanatory variable and an objective variable, which are configured by a combination of the explanatory variables and respectively estimate the objective variable based on true or false of the explanatory variables.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVE MISIR, whose telephone number is (571) 272-5243. The examiner can normally be reached M-R 8-5, with some hours on Friday. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Abdullah Al Kawsar, can be reached at (571) 270-3169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/DAVE MISIR/
Primary Examiner, Art Unit 2127
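For orientation, the “classifying” step that both the §112 and §103 analyses center on can be sketched in a few lines. This is an illustrative reading only, not the applicant's implementation: it assumes a “linear model” is a mapping from variable names to coefficients, and that “identical coefficient encoding” means identical coefficient signs, which is one plausible interpretation of language the §112 rejection flags as undefined.

```python
# Illustrative sketch only (an assumed reading of Claim 1, not the
# applicant's disclosed implementation): group linear models that share
# the same variable set and the same coefficient signs.
from collections import defaultdict


def sign(x: float) -> int:
    """Encode a coefficient as -1, 0, or +1 (assumed meaning of 'coefficient encoding')."""
    return (x > 0) - (x < 0)


def classify_models(models: list[dict[str, float]]) -> dict[tuple, list[dict[str, float]]]:
    """Group models whose variables and coefficient signs are identical."""
    groups = defaultdict(list)
    for m in models:
        # The grouping key pairs each variable with its coefficient sign.
        key = tuple(sorted((var, sign(coef)) for var, coef in m.items()))
        groups[key].append(m)
    return dict(groups)


models = [
    {"x1": 0.8, "x2": -1.2},   # same variables, same signs as the next model
    {"x1": 2.0, "x2": -0.1},
    {"x1": -0.5, "x2": -1.2},  # x1's sign differs -> separate group
]
grouped = classify_models(models)  # two groups: {model 1, model 2} and {model 3}
```

On this reading, two models land in the same group exactly when they use the same variables and agree on the direction of each variable's effect; the claimed question-selection step would then operate over the resulting groups.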

Prosecution Timeline

Nov 28, 2022
Application Filed
Sep 21, 2025
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602619 · MACHINE LEARNING SYSTEM AND MACHINE LEARNING METHOD · 2y 5m to grant · Granted Apr 14, 2026
Patent 12585991 · DIGITAL RIGHTS MANAGEMENT OF MACHINE LEARNING MODELS · 2y 5m to grant · Granted Mar 24, 2026
Patent 12579475 · ARTIFICIAL INTELLIGENCE MODEL GENERATED USING AGENTIC WORKFLOW SYSTEM AND METHOD FOR ARTIFICIAL INTELLIGENCE MODEL ALIGNED WITH DOMAIN-SPECIFIC PRINCIPLES · 2y 5m to grant · Granted Mar 17, 2026
Patent 12572802 · METHODS AND DEVICES IN PERFORMING A VISION TESTING PROCEDURE ON A PERSON · 2y 5m to grant · Granted Mar 10, 2026
Patent 12562242 · DATA DRIVEN FEATURIZATION AND MODELING · 2y 5m to grant · Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 84%
With Interview (+47.8%): 99%
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 538 resolved cases by this examiner. Grant probability derived from career allow rate.
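The headline figures are internally consistent and easy to reproduce. The sketch below assumes the dashboard simply divides career grants by resolved cases; the variable names and the implied Tech Center average are reconstructions, not disclosed methodology.

```python
# Hedged reconstruction of the headline numbers from the examiner's career
# data shown above (the dashboard's exact formula is not disclosed).
granted, resolved = 451, 538            # figures from "Examiner Intelligence"
allow_rate = granted / resolved         # career allow rate
print(f"career allow rate: {allow_rate:.0%}")  # 84%, the quoted grant probability

# Assumption: the "+28.8% vs TC avg" delta is in percentage points, which
# would imply a Tech Center average around 55%.
tc_avg = allow_rate - 0.288
print(f"implied TC average: {tc_avg:.0%}")
```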
