Prosecution Insights
Last updated: April 19, 2026
Application No. 18/349,736

PAIRWISE INTERACTION DETECTION TOOL

Non-Final OA §103
Filed
Jul 10, 2023
Examiner
PHAM, KHANH B
Art Unit
2166
Tech Center
2100 — Computer Architecture & Software
Assignee
Fair Isaac Corporation
OA Round
1 (Non-Final)
72%
Grant Probability
Favorable
1-2
OA Rounds
3y 5m
To Grant
88%
With Interview

Examiner Intelligence

Grants 72% — above average
72%
Career Allow Rate
604 granted / 835 resolved
+17.3% vs TC avg
Strong +15% interview lift
+15.2%
Interview Lift
resolved cases with vs. without an interview
Typical timeline
3y 5m
Avg Prosecution
34 currently pending
Career history
869
Total Applications
across all art units
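The headline figures above are internally consistent, as a quick arithmetic check shows (assuming, as the projections note states, that the with-interview figure is the career allow rate plus the interview lift):

```python
# Figures shown on this card: 604 granted of 835 resolved, 34 pending.
granted, resolved, pending = 604, 835, 34

# Career allow rate, in percent.
allow_rate = granted / resolved * 100

# Interview lift shown above: +15.2 percentage points.
with_interview = allow_rate + 15.2

print(round(allow_rate))      # → 72, the displayed Career Allow Rate
print(round(with_interview))  # → 88, the displayed With Interview figure
print(resolved + pending)     # → 869, the displayed Total Applications
```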

Statute-Specific Performance

§101
10.3%
-29.7% vs TC avg
§103
38.9%
-1.1% vs TC avg
§102
30.7%
-9.3% vs TC avg
§112
9.2%
-30.8% vs TC avg
Black line = Tech Center average estimate • Based on career data from 835 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Han et al. (US 2022/0366439 A1), hereinafter “Han”, in view of Jordan et al. (US 2021/0182690 A1), hereinafter “Jordan”.
As per claim 1, Han teaches a system comprising: at least one data processor; and at least one memory storing instructions, which when executed by the at least one data processor result in operations comprising: “binning input samples in a first dimension associated with a first predictor of an outcome based at least on a sample minimum; binning the input samples in a second dimension associated with a second predictor of the outcome based at least on binning the input samples in the first dimension” at [0015], [0039]-[0043] and Fig. 3 (Han teaches segmenting a group of objects based on two sets of metrics. The group of objects is segmented based on the first metric (i.e., “first dimension”). Then the segmented group of objects is further segmented based on a second metric (i.e., “second dimension”). The first metric is associated with a first predictor to predict loan default risk for the objects in each segment, and the second metric is associated with a second predictor to predict loan default risk for each segment); “determining a two-dimensional risk pattern based at least on a first one-dimensional risk pattern associated with the first predictor along the first dimension and a second one-dimensional risk pattern associated with the second predictor along the second dimension” at [0016], [0029] (Han teaches obtaining a first risk metric and a second risk metric, and segmenting the entities based on the first risk metric and the second risk metric); “comparing a first divergence of a first machine learning model to a second divergence of a second machine learning model” at [0048]-[0051] (Han teaches determining a number of segments of the plurality of segments to maximize a divergence in credit risk by determining a divergence metric for each segment and comparing the divergence between neighboring segments); “wherein the first machine learning model is trained to generate a first output based at least on the first predictor, the first one-dimensional risk pattern associated with the first predictor, the second predictor, the second one-dimensional risk pattern associated with the second predictor, and a baseline score generated based on the input samples and wherein the second machine learning model is trained to generate a second output based at least on the baseline score and a cross-effect term including the two-dimensional risk pattern” at [0022], [0037], [0051]-[0080] (Han teaches a first machine learning model trained to generate a first output/prediction based on a first metric associated with a first predictor and a second metric associated with a second predictor, wherein the first and second metrics represent risk patterns associated with the first and the second predictor, and the divergence metric (i.e., “baseline score”) is generated based on the input samples).

Han does not teach “predicting a strength of an interaction effect between the first predictor and the second predictor based on the comparison, wherein the strength of the interaction effect indicates a marginal contribution of the interaction between the first predictor and the second predictor to at least the second output” as claimed. However, Jordan teaches a method for optimizing neural networks for generating predictive output including the steps of “predicting a strength of an interaction effect between the first predictor and the second predictor based on the comparison, wherein the strength of the interaction effect indicates a marginal contribution of the interaction between the first predictor and the second predictor to at least the second output” at [0029]-[0036]. Thus, it would have been obvious to one of ordinary skill in the art to combine Jordan with Han’s teaching in order to provide an “optimized neural network can be used both for accurately determining response variables using predictor variables, which indicates an effect or an amount of impact that a given predictor variable has on the response variable”, as suggested by Jordan at [0004].
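For orientation, the claim 1 pipeline (bin two predictors, fit one model with separate main effects and one with a cross-effect term, and take the divergence gap as the interaction strength) can be sketched roughly as below. This is an illustrative reconstruction from the claim language only; the bin-rate models, the squared mean-separation "divergence", and all function names are assumptions, not the applicant's or Han's actual method.

```python
import random
from collections import defaultdict

def bin_index(value, breaks):
    """Assign a value to a bin given sorted break points."""
    for i, b in enumerate(breaks):
        if value < b:
            return i
    return len(breaks)

def divergence(scores, labels):
    """Separation between mean scores of the two label classes; a simple
    stand-in for the divergence measure recited in the claim."""
    groups = defaultdict(list)
    for s, y in zip(scores, labels):
        groups[y].append(s)
    m0 = sum(groups[0]) / len(groups[0])
    m1 = sum(groups[1]) / len(groups[1])
    return (m1 - m0) ** 2

def fit_rate(keys, labels):
    """Empirical outcome rate per key (per bin, or per bin pair)."""
    total, count = defaultdict(float), defaultdict(int)
    for k, y in zip(keys, labels):
        total[k] += y
        count[k] += 1
    return {k: total[k] / count[k] for k in total}

def interaction_strength(x1, x2, labels, breaks1, breaks2):
    b1 = [bin_index(v, breaks1) for v in x1]
    b2 = [bin_index(v, breaks2) for v in x2]
    baseline = sum(labels) / len(labels)
    # First model: baseline plus separate one-dimensional effects.
    r1, r2 = fit_rate(b1, labels), fit_rate(b2, labels)
    main_scores = [r1[a] + r2[b] - baseline for a, b in zip(b1, b2)]
    # Second model: a two-dimensional cross-effect term over bin pairs.
    r12 = fit_rate(list(zip(b1, b2)), labels)
    cross_scores = [r12[(a, b)] for a, b in zip(b1, b2)]
    # Strength = marginal divergence contributed by the cross-effect term.
    return divergence(cross_scores, labels) - divergence(main_scores, labels)

# Synthetic XOR-style data: the outcome depends only on the *interaction*
# of the two predictors, so the cross-effect model separates the classes
# while the main-effects model cannot.
random.seed(0)
x1 = [random.random() for _ in range(2000)]
x2 = [random.random() for _ in range(2000)]
y = [int((a < 0.5) != (b < 0.5)) for a, b in zip(x1, x2)]
strength = interaction_strength(x1, x2, y, [0.5], [0.5])
print(round(strength, 3))  # clearly positive: a strong pairwise interaction
```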
As per claim 2, Han and Jordan teach the system of claim 1 discussed above. Han also teaches wherein “the operations further comprise: generating a visualization representing the strength of a plurality of interaction effects between a plurality of predictors, wherein the strength of the plurality of interaction effects includes the strength of the interaction effect between the first predictor and the second predictor, and wherein the plurality of predictors includes the first predictor and the second predictor” at [0044]-[0053] and Figs. 3-5.

As per claim 3, Han and Jordan teach the system of claim 2 discussed above. Han also teaches wherein “the visualization includes at least one of a paragraph, a heat map, a tabulation, and a matrix” at [0044]-[0053] and Figs. 3-5.

As per claim 4, Han and Jordan teach the system of claim 2 discussed above. Han also teaches wherein “the visualization includes a ranking of strength of the plurality of interaction effects” at [0044]-[0053] and Figs. 3-5.

As per claim 5, Han and Jordan teach the system of claim 1 discussed above. Han also teaches wherein “the sample minimum indicates a minimum quantity of samples associated with a corresponding label that are included in each bin in the first dimension, and wherein input samples are binned in the first dimension such that each bin in the first dimension includes a quantity of samples associated with the corresponding label that is greater than or equal to the minimum quantity” at [0039]-[0045].

As per claim 6, Han and Jordan teach the system of claim 5 discussed above. Han also teaches wherein “the input samples are binned in the second dimension according to bin breaks separating each bin determined during binning the input samples in the first dimension” at [0039]-[0049].

As per claim 7, Han and Jordan teach the system of claim 1 discussed above. Han also teaches wherein “the first one-dimensional risk pattern includes at least one of increasing, decreasing, concave, and convex, and wherein the second one-dimensional risk pattern includes at least one of increasing, decreasing, concave, and convex” at [0039]-[0049].

As per claim 8, Han and Jordan teach the system of claim 1 discussed above. Han also teaches wherein “the first one-dimensional risk pattern and the second one-dimensional risk pattern are applied as separate constraints on the first machine learning model, wherein the two-dimensional risk pattern is applied as a single constraint on the second machine learning model” at [0016], [0029].

As per claim 9, Han and Jordan teach the system of claim 1 discussed above. Han also teaches wherein “the two-dimensional risk pattern represents the first one-dimensional risk pattern and the second one-dimensional risk pattern of a bin at an intersection between the first predictor and the second predictor” at [0016], [0029] and Fig. 3.

As per claim 10, Han and Jordan teach the system of claim 2 discussed above. Han also teaches wherein “the first machine learning model is at least one of a neural network, a generalized additive model, and a scorecard model, and wherein the second machine learning model is at least one of a neural network, a generalized additive model, and a scorecard model” at [0022], [0037].

Claims 11-20 recite similar limitations as claims 1-10 and are therefore rejected for the same reasons.

Conclusion

Examiner's Note: Examiner has cited particular columns and line numbers in the references applied to the claims above for the convenience of the applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claim, other passages and figures may apply as well.
Applicant is respectfully requested, in preparing responses, to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. In the case of amending the claimed invention, Applicant is respectfully requested to indicate the portion(s) of the specification which dictate(s) the structure relied on for proper interpretation and also to verify and ascertain the metes and bounds of the claimed invention.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KHANH B PHAM, whose telephone number is (571)272-4116. The examiner can normally be reached Monday - Friday, 8am to 4pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sanjiv Shah, can be reached at (571)272-4098. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KHANH B PHAM/
Primary Examiner, Art Unit 2166
February 11, 2026
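Claims 5 and 6 (the sample-minimum binning that the examiner maps to Han's segmentation at [0039]-[0049]) describe a concrete procedure: bin the first dimension so every bin holds at least a minimum count of samples carrying a given label, then reuse the resulting bin breaks for the second dimension. A minimal greedy sketch, assuming a scalar predictor and binary labels (the function name and the trailing-bin merge rule are illustrative, not from the application):

```python
def bin_with_minimum(values, labels, target_label, sample_min):
    """Form bins over the sorted values so that every bin contains at
    least `sample_min` samples carrying `target_label` (claim 5). The
    returned break points can then be reused to bin a second dimension
    (claim 6)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    breaks, hits = [], 0
    for pos, i in enumerate(order):
        hits += labels[i] == target_label
        if hits >= sample_min and pos + 1 < len(order):
            # Close the bin midway between this value and the next one.
            breaks.append((values[i] + values[order[pos + 1]]) / 2)
            hits = 0
    if hits < sample_min and breaks:
        breaks.pop()  # merge an undersized trailing bin into its neighbor
    return breaks

values = [1, 2, 3, 4, 5, 6, 7, 8]
labels = [1, 0, 1, 0, 1, 0, 1, 0]
print(bin_with_minimum(values, labels, target_label=1, sample_min=2))  # → [3.5]
```

With `sample_min=2` the single break at 3.5 yields bins {1, 2, 3} and {4, 5, 6, 7, 8}, each holding two label-1 samples; the trailing merge is what keeps the last bin from falling short of the minimum.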

Prosecution Timeline

Jul 10, 2023
Application Filed
Feb 11, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602358
DATABASE AND DATA STRUCTURE MANAGEMENT SYSTEMS
2y 5m to grant Granted Apr 14, 2026
Patent 12585915
TRAINING METHOD AND APPARATUS FOR A NEURAL NETWORK MODEL, DEVICE AND STORAGE MEDIUM
2y 5m to grant Granted Mar 24, 2026
Patent 12579116
DATABASE AND DATA STRUCTURE MANAGEMENT SYSTEMS
2y 5m to grant Granted Mar 17, 2026
Patent 12579163
SYSTEMS AND METHODS FOR DETECTING PERFORMANCE DEGRADATION IN DISTRIBUTED DATABASE DEPLOYMENTS
2y 5m to grant Granted Mar 17, 2026
Patent 12579161
ETL JOB DISTRIBUTED PROCESSING SYSTEM AND METHOD BASED ON DYNAMIC CLUSTERING
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
72%
Grant Probability
88%
With Interview (+15.2%)
3y 5m
Median Time to Grant
Low
PTA Risk
Based on 835 resolved cases by this examiner. Grant probability derived from career allow rate.
