Prosecution Insights
Last updated: April 19, 2026
Application No. 18/112,133

LEARNING METHOD OF VALUE CALCULATION MODEL AND SELECTION PROBABILITY ESTIMATION METHOD

Status: Non-Final OA (§101, §103)
Filed: Feb 21, 2023
Examiner: RYLANDER, BART I
Art Unit: 2124
Tech Center: 2100 — Computer Architecture & Software
Assignee: Fujitsu Limited
OA Round: 1 (Non-Final)

Grant Probability: 62% (Moderate)
Expected OA Rounds: 1-2
Expected Time to Grant: 3y 10m
Grant Probability with Interview: 77%

Examiner Intelligence

Career Allow Rate: 62% (grants 62% of resolved cases; 68 granted / 109 resolved; +7.4% vs TC avg)
Interview Lift: +15.0% (moderate lift across resolved cases with interview)
Typical Timeline: 3y 10m avg prosecution; 29 applications currently pending
Career History: 138 total applications across all art units

Statute-Specific Performance

§101: 19.8% (-20.2% vs TC avg)
§103: 62.8% (+22.8% vs TC avg)
§102: 7.4% (-32.6% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)

Based on career data from 109 resolved cases; Tech Center averages are estimates.

Office Action

Rejections: §101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office action is in response to the submission of the application on 2/21/2023. Claims 1-12 are presented for examination.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-12 are rejected under 35 U.S.C. §101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Is the claim to a process, machine, manufacture, or composition of matter?

Claims 1-7 are directed to a method (i.e., a process), and claims 8-12 are directed to a non-transitory, computer-readable recording medium (i.e., a manufacture); therefore, all claims are directed to one of the four statutory categories of invention.

Step 2A, Prong 1: Does the claim recite an abstract idea, law of nature, or natural phenomenon?

Claim 1 recites the limitation of: adjusting the value calculation model so that a relationship between values calculated when attribute values of the two options included in each combination are input to the value calculation model and a relationship between the selection probabilities corresponding to each combination are close to each other – a mental process (observation, evaluation, judgment), as a human mind can adjust a calculation model when attribute values of two options are input.

Step 2A, Prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application?
Claim 1 recites the additional elements of:

A learning method of a value calculation model for calculating a value of an option used when a person acts from an attribute value of the option, implemented by a computer – mere description of the intent of a generic model used to perform the abstract idea. See MPEP 2106.05(f)(1).

acquiring input data in which a selection probability indicating a rate at which each option is selected from a plurality of options and attribute values of the plurality of options when the selection probability is obtained are associated with each other – inputting data is insignificant, extra-solution activity. See MPEP 2106.05(g).

acquiring, for each combination of two options that can be extracted from the plurality of options, a relationship between selection probabilities of the two options included in each combination from the input data – inputting data is insignificant, extra-solution activity. See MPEP 2106.05(g).

These limitations do not integrate the judicial exception into a practical application. Therefore, no practical application is recited in the claim.

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?

The additional elements are:

A learning method of a value calculation model for calculating a value of an option used when a person acts from an attribute value of the option, implemented by a computer – mere description of the intent of a generic model used to perform the abstract idea. See MPEP 2106.05(f)(1).

acquiring input data in which a selection probability indicating a rate at which each option is selected from a plurality of options and attribute values of the plurality of options when the selection probability is obtained are associated with each other – inputting data is insignificant, extra-solution activity. See MPEP 2106.05(g). Transmitting data is well-understood, routine, and conventional. See MPEP 2106.05(d)(II)(i).
acquiring, for each combination of two options that can be extracted from the plurality of options, a relationship between selection probabilities of the two options included in each combination from the input data – inputting data is insignificant, extra-solution activity. See MPEP 2106.05(g). Transmitting data is well-understood, routine, and conventional. See MPEP 2106.05(d)(II)(i).

The additional elements do not amount to significantly more than the abstract idea. Therefore, the claim is not patent eligible.

Independent claim 8 recites the same significant limitations, and a similar analysis applies. Claim 8 recites the additional element of “a non-transitory computer-readable recording medium storing a learning program of a value calculation model that causes a computer to execute a process, the value calculation model being for calculating a value of an option used when a person acts from an attribute value of the option” – a high-level description of computer components used to implement a process is construed as generic computer components. See MPEP 2106.05(f)(2). As such, it does not integrate the abstract idea into a practical application, nor does it amount to significantly more. Therefore, the independent claims are not patent eligible.

The above analysis similarly applies to the dependent claims.

Claims 2 and 9 recite the additional limitation of “the value calculation model is a neural network having the attribute value as an input and the value as an output” – claiming a neural network without a description of the neural network is merely using a generic machine learning model to implement the abstract idea. See MPEP 2106.05(f)(1).

Claims 3 and 10 recite the additional elements of “acquiring, for all combinations of two options that can be extracted from the plurality of options, a difference between a relationship between values of the two options included in a combination” – inputting data is insignificant, extra-solution activity. See MPEP 2106.05(g). Transmitting data is well-understood, routine, and conventional. See MPEP 2106.05(d)(II)(i). They also recite “a relationship between the selection probabilities corresponding to the combination” – mere description of the data; as such, this merely identifies a field of use. See MPEP 2106.05(h). Finally, they recite “adjusting the value calculation model so that a sum of differences of the all combinations is smaller than a predetermined value” – a mental process (observation, evaluation, judgment), as a human mind can adjust a calculation model.

Claims 4 and 11 recite the additional elements of “the relationship between the values is a ratio of one value to another value, and the relationship between the selection probabilities is a ratio of one selection probability to another selection probability” – further details describing the result of the mental process. See MPEP 2106.05(f)(3).

Claims 5 and 12 recite the additional elements of “the relationship between the values is a difference between one value and another value, and the relationship between the selection probabilities is a difference between a numerical value of a natural logarithm of one selection probability and a numerical value of a natural logarithm of another selection probability” – further details describing the result of the mental process. See MPEP 2106.05(f)(3).

Claim 6 recites the additional elements of “calculating values of a plurality of options by inputting attribute values of the plurality of options to the value calculation model” – mathematical concepts (relationships, formulas or equations, calculations). See MPEP 2106.04(a)(2). Inputting data is insignificant, extra-solution activity. See MPEP 2106.05(g). Transmitting data is well-understood, routine, and conventional. See MPEP 2106.05(d)(II)(i). It also recites “estimating a selection probability of each option based on calculated values of the plurality of options” – a mental process (observation, evaluation, judgment), as a human mind can estimate a selection probability.

Claim 7 recites the additional element of “adjusting at least part of attribute values of the plurality of options so that the estimated selection probability of each option approaches a target selection probability” – a mental process (observation, evaluation, judgment), as a human mind can adjust attribute values.

The dependent claims do not integrate the abstract idea into a practical application, nor do they amount to significantly more than the abstract idea. Therefore, claims 1-12 are not patent eligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
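For context on the claimed technique at issue in both rejections: the independent claims describe fitting a "value calculation model" so that pairwise relationships between computed values track pairwise relationships between observed selection probabilities. A minimal sketch of that idea follows; the one-parameter linear model, the squared-error loss, the gradient updates, and all data values are assumptions for illustration, not the applicant's disclosed implementation.

```python
import itertools
import math

# Illustrative data only (not from the application): observed selection
# probabilities and a single attribute value per option.
probs = {"A": 0.5, "B": 0.3, "C": 0.2}
attrs = {"A": 2.0, "B": 1.0, "C": 0.5}

# Hypothetical "value calculation model": value = w * attribute.
w = 0.0
lr = 0.05

# Adjust w so that, for every pair of options, the difference of computed
# values approaches the difference of natural-log selection probabilities
# (cf. the claim 5 formulation of the "relationship").
for _ in range(2000):
    grad = 0.0
    for i, j in itertools.combinations(probs, 2):
        target = math.log(probs[i]) - math.log(probs[j])
        pred = w * attrs[i] - w * attrs[j]
        grad += 2.0 * (pred - target) * (attrs[i] - attrs[j])
    w -= lr * grad

# Each pairwise value difference is now as close as a one-parameter model
# allows to the corresponding log-probability difference.
for i, j in itertools.combinations(probs, 2):
    print(i, j, w * (attrs[i] - attrs[j]),
          math.log(probs[i]) - math.log(probs[j]))
```

The loop corresponds to the claimed "adjusting ... so that ... are close to each other"; with a richer model (e.g., the neural network of claim 2) the pairwise differences could be matched more tightly.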
Claims 1-12 are rejected under 35 U.S.C. §103 as being unpatentable over Osugami et al. (US 2015/0170170 A1, “Processing Apparatus, Processing Method, and Program,” herein Osugami) in view of Ikeda et al. (US 2014/0365250 A1, “Transportation Service Reservation Method and Apparatus,” herein Ikeda).

Regarding claim 1, Osugami teaches a learning method of a value calculation model for calculating a value of an option used when a person acts from an attribute value of the option, implemented by a computer (Osugami, FIG. 4, FIG. 8, and abstract, line 1, “A processing apparatus, a processing method, and a program that generates a selection model obtained by modeling selection behavior of a target to a given choice. The processing apparatus includes an acquiring unit configured to acquire learning data.” Paragraph [0002], line 1, “The present invention relates to a processing apparatus, a processing method, and a program.” And paragraph [0025], line 1, “FIG. 13 is an example of a hardware configuration of a computer 1900 functioning as the processing apparatus 100 according to an illustrative embodiment.” In other words, choice is option, method is method, from FIG. 4 learning is learning, the probability calculating unit calculating a probability of each choice is calculating a value of an option, selection model is model, and a computer functioning as the apparatus is implemented by a computer.),

the learning method comprising: acquiring input data in which a selection probability indicating a rate at which each option is selected from a plurality of options (Osugami, FIG. 8, abstract, line 3, “The processing apparatus includes an acquiring unit configured to acquire learning data including at least one selection behavior for learning in which choices given to the target are input choices and choices selected out of the input choices…” And paragraph [0020], line 1, “FIG. 8 illustrates an example of probabilities that choices calculated by a probability calculating unit 160 according to an illustrative embodiment are selected…” And paragraph [0028], line 3, “For example, when a plurality of commodities including a first commodity and a second commodity are presented to the consumer as choices, a ratio of probabilities that the respective first and second commodities are selected by the consumer is sometimes different according to the other commodities included in the presented choices.” In other words, acquire learning data is acquiring input data, and a ratio of probabilities is a selection probability indicating a rate at which each option is selected from a plurality of options.)

and [attribute values of the plurality of options when the selection probability is obtained are associated with each other]; and acquiring, for each combination of two options that can be extracted from the plurality of options, a relationship between selection probabilities of the two options included in each combination from the input data (Osugami, paragraph [0067], line 3, “Therefore, the learning processing unit 150 in this embodiment formularizes the selection behavior of the consumer as a problem for learning mapping from an input vector to an output vector and learns a selection model in which a ratio of selection probabilities of choices included in input choices is variable depending on a combination of the other choices included in the input choices.” In other words, input choices is acquiring options, a ratio is a relationship between two options, and a ratio of selection probabilities is a relationship between selection probabilities of the two options.)
, and adjusting the value calculation model so that a relationship between values calculated when attribute values of the two options included in each combination are input to the value calculation model (Osugami, paragraph [0047], line 1, “The probability calculating unit 160 calculates probabilities, on the basis of the learned selection model, the determined parameters, and the like, that the respective choices are selected according to input choices. The probability calculating unit 160 is connected to the storing unit 120 and reads out the learned selection model, the determined parameters, and the like from the storing unit 120.” And paragraph [0028], line 3, “For example, when a plurality of commodities including a first commodity and a second commodity are presented to the consumer as choices, a ratio of probabilities that the respective first and second commodities are selected by the consumer is sometimes different according to the other commodities included in the presented choices.” And paragraph [0083], line 1, “For example, the learning processing unit 150 updates the parameters to increase the simultaneous probability p(y, x) of input choices and output choices concerning each of the input and output sample vectors that indicate selection behavior for learning.” In other words, updating the parameters is adjusting the value calculation model, the first commodity and second commodity are two options, and a ratio of probabilities is a relationship between the selection probabilities corresponding to each combination.),

and a relationship between the selection probabilities corresponding to each combination are close to each other (Osugami, paragraph [0028], line 3, “For example, when a plurality of commodities including a first commodity and a second commodity are presented to the consumer as choices, a ratio of probabilities that the respective first and second commodities are selected by the consumer is sometimes different according to the other commodities included in the presented choices.” In other words, the first commodity and second commodity are a combination, and a ratio of probabilities is a relationship between the selection probabilities corresponding to each combination.)

Thus far, Osugami does not explicitly teach attribute values of the plurality of options when the selection probability is obtained are associated with each other (the remaining limitations were previously mapped – see office action page 8).

Ikeda teaches attribute values of the plurality of options when the selection probability is obtained are associated with each other (Ikeda, paragraph [0118], line 1, “In Eq. (5), Pi,m is the choice probability of the ride option pi,m in the case of having offered a combination of ride options corresponding to a certain X, and has been calculated by the choice probability calculation part 124 at step S302.” And paragraph [0109], line 1, “The kth attribute of the ride option pi,m is, for example, a fare, an access time from an origin to a destination, a waiting time for a ride, a travel time, or an egress time from a drop-off location to a destination. The value of each attribute may be determined or calculated based on the ride option pi,m and the ride request.” In other words, the ride option pi,m is one of a plurality of options, the value of each attribute is attribute values, and Pi,m, the choice probability of the ride option, is a selection probability.)

Both Osugami and Ikeda are directed to selecting from a plurality of choices, among other things.
Osugami teaches a learning method of a value calculation model for calculating a value of an option used when a person acts from an attribute value of the option, implemented by a computer, the learning method comprising acquiring, for each combination of two options that can be extracted from the plurality of options, a relationship between selection probabilities of the two options included in each combination from the input data, and adjusting the value calculation model so that a relationship between values calculated when attribute values of the two options included in each combination are input to the value calculation model and a relationship between the selection probabilities corresponding to each combination are close to each other; but Osugami does not explicitly teach attribute values of the plurality of options when the selection probability is obtained are associated with each other. Ikeda teaches attribute values of the plurality of options when the selection probability is obtained are associated with each other. In view of the teaching of Osugami, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Ikeda into Osugami.
This would result in a learning method of a value calculation model for calculating a value of an option used when a person acts from an attribute value of the option, implemented by a computer, the learning method comprising acquiring input data in which a selection probability indicating a rate at which each option is selected from a plurality of options and attribute values of the plurality of options when the selection probability is obtained are associated with each other, and acquiring, for each combination of two options that can be extracted from the plurality of options, a relationship between selection probabilities of the two options included in each combination from the input data, and adjusting the value calculation model so that a relationship between values calculated when attribute values of the two options included in each combination are input to the value calculation model and a relationship between the selection probabilities corresponding to each combination are close to each other.

One of ordinary skill in the art would be motivated to do this because, based on attributes, a user may wish to make different choices, and a system that models this can better predict user demand. (Ikeda, paragraph [0003], line 7, “The server, for example, assigns the ride request to a vehicle that can pick up the user earliest and notifies the mobile device of a scheduled pickup time. The reservation is finalized when the user's acceptance notice is transmitted from the mobile device to the server. It is not economical, however, to use taxis on a daily basis because taxi fares are high. Therefore, systems have been devised for finding matches for requests for ridesharing that is available at relatively low cost… A user transmits a ride request, specifying conditions such as an origin, a destination, a preferred departure time, and a preferred arrival time, to a server.”)

Regarding claim 2, the combination of Osugami and Ikeda teaches the learning method according to claim 1, wherein the value calculation model is a neural network having the attribute value as an input and the value as an output (Osugami, paragraph [0077], line 1, “For example, the learning processing unit 150 learns a selection model based on a Restricted Boltzmann Machine.” Examiner notes that the phrase “value calculation model” refers to the overall model. FIG. 3 of the instant application lists a number of units – a transportation data acquisition unit, selection probability calculation unit, learning data generation unit, model learning unit, etc. – that comprise the “value calculation model”. Further, the specification recites “In the present embodiment, the value calculation model is a model using a neural network called a Multi-Layer Perceptron (MLP).” (Specification, page 5, line 34.) Therefore, the examiner interprets the claim to mean that some unit of the value calculation model, such as the model learning unit, implements a neural network. In other words, the selection model is the value calculation model, a Restricted Boltzmann Machine is a neural network, and a selection model based on a Restricted Boltzmann Machine is a value calculation model that is a neural network.)
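The MLP characterization quoted from the specification can be illustrated with a toy value network mapping an option's attribute vector to a scalar value. Everything here – the layer sizes, the tanh activation, the random weights, and the attribute labels – is an assumption for illustration, not the disclosed architecture.

```python
import math
import random

random.seed(0)

# A tiny multi-layer perceptron mapping an option's attribute vector to a
# scalar "value" (cf. claim 2). Sizes and activation are illustrative.
N_IN, N_HID = 3, 4
W1 = [[random.uniform(-1, 1) for _ in range(N_IN)] for _ in range(N_HID)]
b1 = [0.0] * N_HID
W2 = [random.uniform(-1, 1) for _ in range(N_HID)]
b2 = 0.0

def value(attributes):
    # Hidden layer: tanh of each weighted sum of the attribute values.
    hidden = [math.tanh(sum(w * a for w, a in zip(row, attributes)) + b)
              for row, b in zip(W1, b1)]
    # Output layer: a single scalar value for the option.
    return sum(w * h for w, h in zip(W2, hidden)) + b2

# Example: value of an option with three attribute values (e.g. fare,
# waiting time, travel time -- the labels are hypothetical).
print(value([1.0, 0.5, 2.0]))
```

Training such a network against the pairwise loss of claim 3 would replace the hand-set weights with learned ones; the forward pass above is only the "attribute value in, value out" shape that claim 2 recites.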
Regarding claim 3, the combination of Osugami and Ikeda teaches the learning method according to claim 1, wherein the adjusting includes acquiring, for all combinations of two options that can be extracted from the plurality of options, a difference between a relationship between values of the two options included in a combination (Osugami, paragraph [0051], line 6, “In this embodiment, as an example, the acquiring unit 110 acquires five commodities (A, B, C, D, and S) as the commodities likely to be presented to the consumer.” And paragraph [0134], line 1, “By comparing FIG. 12 and FIG. 6, it is seen that the processing apparatus 100 in this modification can calculate a probability having a tendency substantially the same as the target learning data. It is also seen that a change in the ratio of the selection probabilities of the commodity A and the commodity B in the initial state according to choices presented to the consumer can be reproduced.” And paragraph [0063], line 1, “As an example, the input vector generating unit 130 generates an input vector X=(X1, X2, X3, X4, X5) corresponding to the five commodities (A, B, C, D, and S) according to the learning data shown in FIG. 6. Here, X1 corresponds to the commodity A, X2 corresponds to the commodity B, X3 corresponds to the commodity C, X4 corresponds to the commodity D, and X5 corresponds to the commodity S. Since the choice R1 of the learning data in the initial state is the choice for presenting the commodities A and B, the input vector generating unit 130 sets xR1=(1, 1, 0, 0, 0). Similarly, the input vector generating unit 130 generates input vectors corresponding to the choices R1 to R4 as indicated by the following expression.” And paragraph [0034], line 4, “In FIG. 3, commodities A, B, and D are choices presented to the consumer. In a graph of FIG. 3, as in FIG. 1, as an example of characteristics of the commodities, a price is plotted on the abscissa and the commodities A, B, and D are plotted on the ordinate as quality. That is, the commodity A is a commodity having a higher price and higher quality compared with the commodity B. The commodity D is a commodity having a slightly higher price and slightly lower quality compared with the commodity B.” In other words, inputting vector X is acquiring all combinations of two options, and comparing prices and quality of the options is a difference between values of the two options. Examiner notes the relative probability for each option as well as the ratio between options is previously mapped in claim 1. See office action, page 5.)

and a relationship between the selection probabilities corresponding to the combination, and adjusting the value calculation model so that a sum of differences of the all combinations is smaller than a predetermined value (Osugami, paragraph [0082], line 1, “The learning processing unit 150 updates the parameter vector θ such that at least one of p(y, x) and p(y|x) is higher for each of the input and output sample vectors. Here, p(y, x) indicates a simultaneous probability that an input vector is x and an output vector is y. Further, p(y|x) indicates a conditional probability that the output vector is y. Note that p(y, x) and p(y|x) are associated as p(y|x) = p(y, x)/p(x).” And paragraph [0137], line 1, “In the above explanation, the processing apparatus 100 in this modification reduces errors of selection probability using the selection model 10 in which the influence of the second weight value set between the output node and the intermediate node corresponding to the input node whose input value is 0 is reduced. The processing apparatus 100 may use a model for reducing the influence of the second weight value when the input node has a value equal to or smaller than a predetermined threshold instead of when the input node xi of the selection model 10 is 0. In this case, the processing apparatus 100 may calculate a plurality of output values from a plurality of output nodes corresponding to a plurality of input values to be equal to or smaller than the threshold.” In other words, y and x are sets of two options that can be extracted from the plurality of options; at least one of p(y, x) and p(y|x) being higher for each of the input and output sample vectors is a difference between a relationship between values of the two options in the combination; and calculating a plurality of output values from a plurality of output nodes corresponding to a plurality of input values to be equal to or smaller than the threshold is a relationship between the selection probabilities so that a sum of differences of the combinations is smaller than a predetermined threshold.)

Regarding claim 4, the combination of Osugami and Ikeda teaches the learning method according to claim 1, wherein the relationship between the values is a ratio of one value to another value, and the relationship between the selection probabilities is a ratio of one selection probability to another selection probability (Osugami, paragraph [0134], line 1, “By comparing FIG. 12 and FIG. 6, it is seen that the processing apparatus 100 in this modification can calculate a probability having a tendency substantially the same as the target learning data. It is also seen that a change in the ratio of the selection probabilities of the commodity A and the commodity B in the initial state according to choices presented to the consumer can be reproduced.” In other words, the ratio of the selection probabilities is a ratio of one selection probability to another selection probability.)

Regarding claim 5, the combination of Osugami and Ikeda teaches the learning method according to claim 1, wherein the relationship between the values is a difference between one value and another value (Ikeda, Equation (6), and paragraph [0119], line 1, “In the case where r is the profit obtained from the ride option pi,m, ri,m is calculated by Eq. (6) below: [equation image omitted] where fi,m is the fare of pi,m and ci,m is the cost of pi,m.” And paragraph [0121], line 1, “Based on the above, at step S303, the ride option selection part 125 selects a combination of ride options based on Eq. (6) in the case of giving preference to maximizing the expected profit.” In other words, selection based on maximizing the expected profit is selecting based on the relationship of the values being the difference between one value and another value.) and the relationship between the selection probabilities is a difference between a numerical value of a natural logarithm of one selection probability and a numerical value of a natural logarithm of another selection probability (Ikeda, Equation (7), and paragraph [0120], line 1, “On the other hand, in the case of maximizing the expected utility, Eq. (5) is rewritten to Eq. (7) below: [equation image omitted]” In other words, selecting based on maximizing expected utility is selecting based on a relationship of the difference between a numerical value of a natural logarithm (see Eq. (7)) of one selection probability and that of another.)
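The claim 4 (ratio of values / ratio of probabilities) and claim 5 (difference of values / difference of log-probabilities) formulations are two views of the same relationship when values map to probabilities through a softmax. The softmax link and the numeric values below are assumptions used only to illustrate that equivalence, not something either reference or the application is confirmed to use.

```python
import math

# Hypothetical computed values for three options.
values = {"A": 1.2, "B": 0.4, "C": -0.3}

# Softmax link: each probability is proportional to exp(value).
z = sum(math.exp(v) for v in values.values())
probs = {k: math.exp(v) / z for k, v in values.items()}

# Claim 5 view: the difference of two values equals the difference of the
# natural logs of the corresponding selection probabilities (the shared
# normalizer log(z) cancels).
assert abs((values["A"] - values["B"])
           - (math.log(probs["A"]) - math.log(probs["B"]))) < 1e-12

# Claim 4 view: the ratio of the probabilities equals the ratio of the
# exponentiated values.
assert abs(probs["A"] / probs["B"]
           - math.exp(values["A"]) / math.exp(values["B"])) < 1e-12
```

This is why a model trained to match log-probability differences (claim 5) also matches probability ratios (claim 4) under such a link.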
Regarding claim 6, the combination of Osugami and Ikeda teaches a selection probability estimation method implemented by a computer, comprising: using the learning method according to claim 1 to learn the value calculation model (Osugami, paragraph [0009], line 4, “The processing apparatus includes: an acquiring unit configured to acquire learning data including at least one selection behavior for learning in which choices given to the target are input choices and choices selected out of the input choices are output choices, an input vector generating unit configured to generate an input vector that indicates whether each of a plurality of kinds of choices is included in the input choices, and a learning processing unit configured to learn the selection model using the input vector corresponding to an input choice for learning and the output choices.” In other words, learning the selection model is learning the value calculation model.); calculating values of a plurality of options by inputting attribute values of the plurality of options to the value calculation model (Ikeda, paragraph [0055], line 1, “Next, at step S102, the request transmission part 22 of the user terminal 20 transmits a ride request including the input parameters to the transportation service reservation apparatus 10.” In other words, the input parameters are attribute values, and transmitting them is calculating values by inputting attribute values of the plurality of options.); and estimating a selection probability of each option based on calculated values of the plurality of options (Osugami, paragraph [0101], line 1, “In the above explanation, in the processing apparatus 100 in this embodiment, the learning processing unit 150 analytically calculates the conditional probability p(y|x) on the basis of the Restricted Boltzmann Machine and learns the selection model 10. Alternatively, the learning processing unit 150 may estimate the conditional probability p(y|x) using Gibbs sampling or the like and learn the selection model 10.” In other words, estimating the conditional probability p(y|x) is estimating a selection probability of each option based on calculated values of the options.)

Regarding claim 7, the combination of Osugami and Ikeda teaches the selection probability estimation method according to claim 6, further comprising adjusting at least part of attribute values of the plurality of options so that the estimated selection probability of each option approaches a target selection probability (Osugami, paragraph [0094], line 5, “A specific method of calculating a gradient of the conditional probability p(y|x) and updating the parameters in a gradient direction to increase the conditional probability p(y|x) in this way is known as ‘Gradient for discriminative training’.” And paragraph [0100], line 1, “By comparing FIG. 8 and FIG. 6, it is illustrated that the processing apparatus 100 in this embodiment can calculate a probability having a tendency substantially the same as the tendency of the target learning data.” In other words, updating parameters is adjusting at least part of attribute values of the plurality of options, and calculating a probability having substantially the same tendency as the target learning data is so that each option approaches a target selection probability.)

Claims 8-12 are non-transitory computer-readable recording medium claims, corresponding to learning method claims 1-5, respectively; otherwise, they are the same. The combination of Osugami and Ikeda teaches a non-transitory computer-readable medium comprising a computer-readable program (Osugami, claim 14, “non-transitory computer readable storage medium comprising a computer readable program which, when executed, causes the computer to function…”). Therefore, claims 8-12 are rejected for the same reasons as claims 1-5, respectively.
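The claim 6-7 sequence just mapped – estimate selection probabilities from computed values, then adjust attribute values until an option's estimated probability approaches a target – can be sketched as a short loop. The linear value model, the softmax link, the target probability, and the step size are all illustrative assumptions.

```python
import math

# Hypothetical attribute value per option and a learned model weight.
attrs = {"A": 2.0, "B": 1.0, "C": 0.5}
w = 1.0

def estimate_probs(attrs):
    # Claim 6 sketch: compute a value per option from its attribute, then
    # turn the values into selection probabilities via a softmax.
    vals = {k: w * a for k, a in attrs.items()}
    z = sum(math.exp(v) for v in vals.values())
    return {k: math.exp(v) / z for k, v in vals.items()}

# Claim 7 sketch: nudge option A's attribute until its estimated selection
# probability approaches a target selection probability.
target = 0.6
for _ in range(500):
    p = estimate_probs(attrs)["A"]
    attrs["A"] += 0.1 * (target - p)

print(round(estimate_probs(attrs)["A"], 3))  # close to the 0.6 target
```

In a realistic setting the attribute being nudged would be something actionable, such as a fare, and the update would use the model's gradient rather than this fixed-step rule.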
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Beaurepaire et al. (US 2022/0005140 A1), "Method, Apparatus, and System for Providing a Contextually Relevant Vehicle Comparison," discloses an approach for providing a contextually relevant vehicle comparison for a given trip, including determining contextual data associated with a trip request, a plurality of candidate vehicles available to complete the trip, a plurality of candidate modes of vehicle operation to complete the trip, or a combination thereof.

Ikeda et al. (US 2020/0012956 A1), "Action Selection Learning Device, Action Selection Learning Method, and Storage Medium," discloses an action selection learning device configured to generate a reference model that is a set of model parameter vectors indicating the influence level of each factor that influences selection of an action alternative, calculate a selection probability for each action alternative for each of the model parameter vectors, and calculate a model parameter vector for each user using a subset of model parameter vectors extracted from the reference model, based on the selection probability for each action alternative and each user's selection history of the action alternatives.

Kwatra et al. (US 2019/0171988 A1), "Cognitive Ride Scheduling," discloses various embodiments for facilitating ride scheduling, including a method for facilitating ride scheduling based on scheduling parameters and user preferences by a processor. An occurrence of an event associated with a user may be predicted based on user data, and one or more ride scheduling parameters relating to the event may be determined.

Park et al., "Recommendation of feeder bus routes using neural network embedding-based optimization," discloses a neural-network-based embedding methodology that extracts road name vectors considering the movement patterns of vehicles. Subsequently, k-means clustering analysis is applied to those vectors to identify the major taxi transit clusters during the commute hours. For each cluster, a feeder bus route is suggested that can best reflect the taxi trajectory patterns.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BART RYLANDER, whose telephone number is (571) 272-8359. The examiner can normally be reached Monday through Thursday, 8:00 to 5:30. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Miranda Huang, can be reached at 571-270-7092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/B.I.R./ Examiner, Art Unit 2124
/MIRANDA M HUANG/ Supervisory Patent Examiner, Art Unit 2124

Prosecution Timeline

Feb 21, 2023
Application Filed
Jan 10, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12555002
RULE GENERATION FOR MACHINE-LEARNING MODEL DISCRIMINATORY REGIONS
2y 5m to grant Granted Feb 17, 2026
Patent 12530572
Method for Configuring a Neural Network Model
2y 5m to grant Granted Jan 20, 2026
Patent 12530622
GENERATING NEW DATA BASED ON CLASS-SPECIFIC UNCERTAINTY INFORMATION USING MACHINE LEARNING
2y 5m to grant Granted Jan 20, 2026
Patent 12493826
AUTOMATIC MACHINE LEARNING FEATURE BACKWARD STRIPPING
2y 5m to grant Granted Dec 09, 2025
Patent 12488318
EARNING CODE CLASSIFICATION
2y 5m to grant Granted Dec 02, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
62%
Grant Probability
77%
With Interview (+15.0%)
3y 10m
Median Time to Grant
Low
PTA Risk
Based on 109 resolved cases by this examiner. Grant probability derived from career allow rate.
