DETAILED ACTION
1. The present application, filed on or after March 13, 2013, is being examined under the first-inventor-to-file provisions of the AIA.
This is a CIP application claiming priority to a provisional application filed January 25, 2022, and to the parent application, Application No. 17/824,688, filed May 25, 2022.
Response to Amendment
2. An Amendment was filed July 9, 2025 (hereinafter “Amendment”) and has been entered into the record and fully considered. The Amendment was filed in response to a Non-Final Rejection dated April 10, 2025.
Despite the Amendment to the Claims and Applicant’s remarks, the Rejections set forth in the Non-Final Rejection are hereby maintained.
An explanation of the maintained Rejections and a response to Applicant’s arguments are set forth below. The previous Non-Final Rejection is repeated below for completeness of the record. Please see the “Conclusion” section of this Action below for important information regarding responding to this Action.
The IDS dated June 10, 2025 has been considered in this application.
Claims 1 – 4, 6 – 8, 10, 12 – 14, 16 – 17, and 19 – 25 are pending and examined herein. Claims 21 – 25 are new.
The independent claims were amended in substantially identical fashion, making it unnecessary to address each claim separately. The dependent Claims were not amended in any substantive manner. The cancelled Claims were essentially incorporated into the respective independent claims and therefore were addressed in the Non-Final Rejection. The new Claims are considered obvious on the grounds that they are essentially identical to other dependent Claims already of record.
Therefore, the following explanation of the maintained rejections with regard to Claim 1 is considered explanatory of the Rejection as a whole.
OFFICE NOTE: Interviews are always welcome at any stage of prosecution. Please use the AIR form for scheduling an interview if such is desired. The link for the AIR form is found at the end of this Action.
With regard to the Amendment:
OBJECTION TO THE CLAIMS:
Claim 1, and other claims, use the phrase “address information address.” In Claim 1 this phrase lacks antecedent basis. Further, the phrase is vague and confusing and requires clarification.
Correction is required.
Claim 1 was amended as follows:
[Image: amended Claim 1 as reproduced from the Amendment]
Summary of the Amendment and Broadest Reasonable Interpretation:
Claim terminology is to be given its plain and ordinary meaning to a person of ordinary skill in the art, consistent with the specification, unless the terms are given a special meaning. See MPEP § 2111.01.
Here, no special meaning is detected. As noted in the Amendment, the changes to Claim 1 relate generally to the types of training data used in training the model, including labeled training data, and to “weighted values” of certain flagged behaviors.
Some labeled data is “flagged” as indicative of fraud while other labeled training data is “confirmed” to be indicative of fraud. As to the former usage, this term appears to mean simply labeled training data, i.e., the behavior is labeled as indicative of fraud. See at least para. 0005. As to confirmed behavior, this appears to mean flagged fraudulent behavior that has been confirmed to be fraudulent. This term is used only once, in para. 0017.
Weighted values are assigned to each type of one or more flagged fraudulent behaviors and non-fraudulent behaviors. See at least para. 0005.
With regard to §101:
Respectfully, the Amendment does not advance prosecution substantially.
Thus, the amendments to the Claim do not alter the analysis set forth in the Non-Final Rejection regarding §101. The only changes are summarized above. The above-quoted recitations relate primarily to how to label and give weights to the training data. No specific details are recited about the labels, how the labels are applied, how the behavior is or is not flagged, or how the weights are applied. These are very common and typical functions in machine learning techniques, applied here to a very common economic activity. These limitations are recited at an extremely high level of generality. There is nothing concrete or substantive about these recitations.
No special functionality is recited. No new computerized components are recited.
These limitations recite results or “outcome” of computer processing without specifying “how” a technical problem is solved. That is, the solution of a technical problem is not reflected in the Claim.
Taking the claim elements separately, the functions performed by the computer elements at each step of the process are purely typical: obtaining training data, deriving “behaviors” or attributes or features that are indicative of possible fraud, labeling those behaviors, giving weight to the respective features, and then using the thus-trained machine learning model to generate a risk score. These are among the most basic functions of a computer. Without greater specificity as to “how” certain functions solve a technical problem, the currently recited limitations can be achieved by any general-purpose computer without special programming. In short, each step does no more than require a generic computer to perform generic computer functions. Considered as an ordered combination, the computer components of the Claim add nothing that is not already present when the steps are considered separately.
Claim 1 does not, for example, purport to improve the functioning of the computer elements, nor does the claim reflect how an improvement in any other technology or technical field is achieved. Thus, Claim 1 amounts to nothing significantly more than instructions to “apply” the abstract idea of training an ML model to generate a risk score when certain behaviors relating to a physical or mailing address are involved.
Accordingly, the Rejection is maintained.
With regard to §103:
It is respectfully submitted that the primary reference to Lagneaux teaches the features that were added by way of the Amendment. For example, Lagneaux teaches the generation of a “confidence score” with respect to a physical or mailing address, which score is considered to constitute the recited “risk score”:
“[0004] In one aspect, a method of confirming identity of an entity is disclosed. The method may comprises receiving a plurality of items for delivery to an address, obtaining, from the plurality of items, information regarding an entity associated with the items and the address, and delivering the plurality of items to the address. The method may also comprise identifying, based on the obtained information, an expected identity of the entity, receiving a request to confirm an identity of the entity using third-party identity verification via a user interface, and determining, based on the information regarding the entity, a confidence score for the expected identity, wherein the confidence score is a measure of a confidence that the expected identity accurately identifies the entity. The method may further comprise comparing the confidence score to a threshold value, determining whether the confidence score is greater than or equal to the threshold value, and generating a response to the request, the response including the confidence score and a result of the determining whether the confidence score is greater than or equal to the threshold value. The method may additionally comprises displaying the response via the user interface.” (Emphasis Added)
Clearly, Lagneaux teaches that a “confirmed” identity, or lack thereof, is indicative of fraud. A person of ordinary skill in the art would readily understand that each behavior would fall on a scale or spectrum from indicating fraud to indicating non-fraud. This is evident from the following teachings of Lagneaux relating to a “threshold.” A threshold is a measure of a value along a scale or a spectrum. A threshold is arbitrary; it is selected by the user based on experience or other metrics. Thus, any given behavior or attribute may be indicative of fraud or of non-fraud:
“[0005] In some aspects, determining the confidence score for the expected identity comprises calculating a total number of items delivered to the address, a number of items delivered to the entity, and a number of items delivered to each other entity associated with the address. Additionally, determining the confidence score for the expected identity further comprises generating a probability score for the entity by dividing the number of items delivered to the entity by the total number of items delivered to the address. In some aspects, the method further comprises applying probabilistic modeling to the probability score for the entity to generate the confidence score for the entity and, when the confidence score is greater than or equal to the threshold value, applying the third party identity verification to confirm the identity of the entity.” (Emphasis Added)
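For illustration only, the computation described in the above-quoted [0005] (dividing the number of items delivered to the entity by the total number of items delivered to the address, then comparing the resulting score to a threshold) can be sketched as follows. The function names and the 0.75 threshold are hypothetical choices for this sketch, not recitations of Lagneaux:

```python
# Illustrative sketch of the probability-score computation characterized in
# Lagneaux [0005]. All names and the example threshold are hypothetical.

def probability_score(items_to_entity: int, total_items_to_address: int) -> float:
    """Share of items at an address that were delivered to a given entity."""
    if total_items_to_address == 0:
        return 0.0
    return items_to_entity / total_items_to_address

def exceeds_threshold(confidence_score: float, threshold: float) -> bool:
    """Per [0005], third-party identity verification applies when the
    confidence score is greater than or equal to the threshold."""
    return confidence_score >= threshold

# Example: 40 of 50 items delivered to the address went to the entity.
score = probability_score(40, 50)       # 0.8
verify = exceeds_threshold(score, 0.75) # True
```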
See [0006] for explicit teachings of a “risk score.”
Labeling and training of data indicative of fraud/non-fraud is taught throughout Lagneaux: see for example [0075], Table 1, [0086], [0091], and [0101] – [0103]. In fact, Table 1 of Lagneaux is very illustrative of the summarized teachings of that publication:
[Image: Table 1 of Lagneaux]
The analysis of the “relationship” between various behavior attributes and fraud/non-fraud is summarized well in Table 1 and elsewhere in Lagneaux. The use of weights is taught throughout the secondary reference to Goshen. (See at least [0063].)
Therefore, the existing Rejection under §103 must be maintained.
Response to Arguments
3. Applicant's arguments set forth in the Remarks section of the Amendment have been fully considered but they are not persuasive.
With regard to section 101 rejection, Applicant argues as follows:
[Image: Applicant’s arguments regarding the §101 rejection, reproduced from the Remarks]
These arguments may all be true. Assuming they are, the problem remains that there are no specific recitations of “how” address data is used to generate a risk score and to be indicative of fraud. These arguments, like the claim, are presented at a high level of generality, too high to constitute a practical application.
Applicant references para. 0036 of the specification. However, perhaps it should reference para. 0037 et seq., which explains in much more detail “how” labeled behavior relating to a physical address can be used to detect fraud based on a risk score. These details need to be reflected in the Claim to bring it into the realm of eligibility.
Such is not yet the case. The Rejection is maintained.
With regard to the §103 Rejection, Applicant argues:
[Image: Applicant’s arguments regarding the §103 rejection, reproduced from the Remarks]
However, it is respectfully submitted that no such “agreement” is reflected in the interview summary dated July 8, 2025. Indeed, the focus of the interview, as reflected in the interview summary, was on §101.
The amended features are taught by the current combination of references as noted above. Thus, Applicant’s arguments are not persuasive.
The Rejection must be maintained.
For completeness of the record, the following explanation of the previous rejections is repeated:
Claim Rejections – 35 USC § 101
2. 35 USC § 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
A. Rejection Based on Abstract Idea
Claims 1 - 20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more. Furthermore, this rejection is based on the 2019 Revised Patent Subject Matter Eligibility Guidance (2019 PEG).
B. Statutory Categories
Independent Claim 1 is a method claim and therefore falls into the statutory category of a “process.” Claim 14 is a system claim which recites various computer hardware components, such as a processor and a memory, and therefore falls into the statutory category of “machine/manufacture.” Claim 20 recites a non-transitory computer-readable medium and therefore falls into the category of “machine/manufacture.”
C. The Claim Recites an Abstract Idea
Claim 1 is illustrative of the rejection of all claims on the grounds of abstract idea.
Claim 1 recites the limitation:
“determining, using the address risk machine learning model, an address risk score of a first address of the one or more addresses based on the labeled data of the first address.”
This limitation, as drafted, is a process that, under its broadest reasonable interpretation, constitutes a method of organizing human activity, specifically, fundamental economic principles or practices. That is, analyzing this limitation in the context of the claim as a whole, it recites a process that falls within the grouping of abstract ideas comprising certain methods of organizing human activity. Fundamental economic principles or practices are examples of such methods. In this case, the fundamental economic principle or practice is the common practice of determining a fraud risk score.
This practice occurs millions of times every day.
Furthermore, the mere nominal recitation of a “processor” or “memory” does not remove the claim from the category of common or abstract methods of organizing human activity. These terms are recited at such a high level of generality as to not alter the designation of reciting an abstract idea.
Thus, Claim 1 recites a judicial exception, namely, an abstract idea.
D. The Claim Does Not Integrate the Abstract Idea into a Practical Application
Moreover, this judicial exception is not integrated into a practical application. The possible “additional limitations” recited in the Claim that must be considered are as follows:
A method of fraud risk assessment using one or more processors,
receiving labeled data that includes address information for one or more addresses and labels corresponding to the one or more addresses;
training an address risk machine learning model capable of predicting a risk of fraud for an address by determining relationships among the labeled data;
determining, using the address risk machine learning model, an address risk score of a first address of the one or more addresses based on the labeled data of the first address.
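To illustrate the level of generality at which these limitations are recited, the three steps could be performed by entirely generic code. The following sketch is hypothetical; the weighting scheme, behavior names, and data are assumptions for illustration only, not the claimed method or any reference’s teaching:

```python
# Hypothetical sketch of the recited steps at their stated level of
# generality: receive labeled address data, "train" a model by deriving
# relationships (here, simple per-behavior weights), and score an address.

def train_weights(labeled_data):
    """Weight each behavior type by the fraction of fraud-labeled
    addresses that exhibit it (an illustrative assumption)."""
    counts, fraud_total = {}, 0
    for behaviors, is_fraud in labeled_data:
        if is_fraud:
            fraud_total += 1
            for b in behaviors:
                counts[b] = counts.get(b, 0) + 1
    return {b: c / fraud_total for b, c in counts.items()} if fraud_total else {}

def address_risk_score(weights, behaviors):
    """Sum the learned weights of the behaviors observed at an address."""
    return sum(weights.get(b, 0.0) for b in behaviors)

labeled = [
    ({"returned_check", "many_entities"}, True),  # labeled fraudulent
    ({"timely_payments"}, False),                 # labeled non-fraudulent
    ({"returned_check"}, True),                   # labeled fraudulent
]
weights = train_weights(labeled)
score = address_risk_score(weights, {"returned_check"})  # 1.0
```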
1. Lack of Computer Components and Interaction Among Same
No additional computerized components are mentioned in these limitations. The few computer terms are recited only at a high level of generality. No other particular computer functions or computer-component interactions within this system are recited. The few computer-related limitations are wholly generic in nature and are recited at such a high level of generality as to not provide any meaningful limitations on the claim.
Furthermore, the claim lacks concrete assignments of specific functions among these various components. One example of such concrete assignment is to assign, in the claim, certain functions to specific components and recite them as interacting in specific ways. This is not the case with this Claim.
2. No Technical Solution to a Technical Problem
Analyzing these additional limitations individually, and taking the claim as a whole and as an ordered combination, it is clear that these additional limitations do not serve to integrate the abstract idea into a practical application. They do not recite a technological solution to a technological problem. They do not improve the functioning of the computer system itself or represent an improvement to any technology or technical field. In fact, there are very few computerized system components or functions recited.
Thus, these limitations fail to recite with specificity any technical function or any improvement to the functioning of the computer system itself. Therefore, the claim lacks the specificity required to transform it from one claiming only an outcome or result (determining a fraud risk score based on an address) to one claiming a specific way of achieving that outcome or result. See MPEP §§ 2106.04(d)(1) and 2106.05(a).
3. The Claim Recites Mere Instructions to Apply the Abstract Idea
The recitation of these generic components amounts to no more than mere instructions “to apply” the abstract idea exception using generic computer components. It is clear that the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished. Furthermore, the claim invokes computers or other machinery merely as a tool to perform an existing process.
As noted above, the only possible computer components are recited at a high level of generality. This means that the abstract idea can be applied to an extremely general field of devices and systems. A claim having broad applicability across many fields of endeavor does not provide meaningful limitations that integrate a judicial exception into a practical application or amount to significantly more. For instance, a claim that generically recites an effect of the abstract idea exception, or claims every mode of accomplishing that idea, amounts to a claim that is merely adding the words "apply it" to the abstract idea. See MPEP §2106.05(f)
Accordingly, the additional elements or limitations listed above do not integrate the abstract idea into a practical application because they do not impose any meaningful limitations on practicing the abstract idea. That is, the additional elements recited in the claim beyond the judicial exception(s) have been evaluated to determine whether those additional elements, considered individually and in combination, integrate the judicial exception(s) into a practical application. They do not.
E. Step 2B: The Claim Does Not Recite Significantly More than the Abstract Idea
This step involves the search for an “inventive concept.” However, it is clear from the case law and the MPEP that the considerations at issue are the same as those considered above with respect to the analysis of a practical application. See MPEP 2106.05(a) – (c) and (e). In other words, these analyses substantially overlap.
Therefore, based on the above analysis, the identified additional limitations do not provide “significantly more” than the abstract idea. The claim is therefore ineligible under §101. The other independent claims are, likewise, ineligible for the same reasons as they are virtually identical to Claim 1.
F. The Dependent Claims Do Not Recite Meaningful Additional Limitations
Similarly, Claim 2 recites the same abstract idea as Claim 1 by virtue of its dependency on Claim 1. Like Claim 1, this claim does not recite sufficient additional elements to integrate the abstract idea into a practical application. Claim 2 merely recites the abstract concept of a notification.
Claim 3 merely recites the abstract concept of a threshold for a score.
Claim 4 merely recites the abstract concept of how the threshold value is determined.
Claim 5 merely recites the abstract concept of labeled data.
Claim 6 merely recites the abstract concept of types of flagged behavior.
Claim 7 merely recites the abstract concept of types of address fraud.
Claim 8 merely recites the abstract concept of non-fraudulent behavior.
Claim 9 merely recites the abstract concept of deriving relationships.
Claim 10 merely recites the abstract concept of a weighted value.
Claim 11 merely recites the abstract concept of variations in the weighted value.
Claim 12 merely recites the abstract concept of using historical fraud data.
Claim 13 merely recites the abstract concept of minimizing data.
Claims 14 - 20 are virtually identical to, or analogous variations of, the aforementioned claims and are ineligible for the same reasons set forth above.
None of these claims provides any additional meaningful limitations, non-generic computer components, or specific assignments of functionality among those components. To the extent these claims recite computer-related limitations at all, those limitations are generic and recited at such a high level of generality as to be devoid of any meaningful limitations. These limitations do not recite improvements to the functioning of the computer or to any other technology or technical field.
Therefore, these claims do not include additional elements that are sufficient to integrate the abstract idea into a practical application, nor do they amount to significantly more than the recited abstract idea because the additional elements, when considered both individually and as an ordered combination, constitute only a mere instruction to “apply” the abstract idea.
Thus, Claims 1 - 20 constitute ineligible subject matter under 35 USC § 101 as being directed to an abstract idea without more.
Claim Rejections - 35 USC § 103
3. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 – 20 are rejected under 35 U.S.C. §103 as being unpatentable over U.S. Patent Publication No. 2021/0110343 to Lagneaux et al. (hereinafter “Lagneaux”) in view of U.S. Patent Publication No. 2021/0158356 to Goshen (hereinafter “Goshen”).
The Lagneaux reference is in the same field of endeavor as the claimed invention – using address variations to detect fraud.
The title is: Methods and systems for generating address score information
Lagneaux teaches as follows in the Abstract:
“In one aspect, a method of confirming identity of an entity is disclosed. The method comprises receiving a plurality of items for delivery to an address, obtaining, from the items, information regarding an entity associated with the items and the address, and delivering the items to the address. The method may also comprise identifying an expected identity of the entity, receiving a request to confirm an identity of the entity using third-party identity verification via a user interface, and determining, based on the information regarding the entity, a confidence score for the expected identity. The method may further comprise determining whether the confidence score is greater than or equal to the threshold value and generating a response to the request. The method may additionally comprises displaying the response via the user interface.” (Emphasis Added)
Thus, Lagneaux is directly on point with the claimed invention.
Accordingly, with regard to Claim 1, as outlined above, Lagneaux teaches:
Claim 1. A method of fraud risk assessment using one or more processors, comprising: (See at least [0008], which reads as follows.)
“[0008] In some aspects, the risk score assigned for the specific address exceeds the threshold value when the specific address is determined to be one of the identified anomalous addresses or does not exceed the threshold value when the specific address is determined to not be one of the identified anomalous addresses. In some aspects, the specific behavior comprises one or more of fraud or criminal activity.” (Emphasis Added)
Using a machine learning model and labelled training data, the model of Lagneaux generates an address score and presents it to a user in an interface as illustrated in Fig. 5:
[Image: Fig. 5 of Lagneaux, illustrating presentation of an address score in a user interface]
receiving labeled data that includes address information for one or more addresses and labels corresponding to the one or more addresses; (See at least [0101].)
training an address risk machine learning model capable of predicting a risk of fraud for an address by determining relationships among the labeled data; and (See at least [0010] – [0013] and [0046] and [0071] – [0079].)
determining, using the address risk machine learning model, an address risk score of a first address of the one or more addresses based on the labeled data of the first address. (See at least Fig. 5 above.)
Therefore, Lagneaux appears to teach all of the essential limitations of Claim 1; however, out of an abundance of caution, Goshen is cited for its teachings related to detecting fraud based on variations in physical addresses, such as an order placement address, a shipping address, and a billing address.
Goshen is in the same field of endeavor as the claimed invention and Lagneaux – detecting fraud using machine learning.
The title of Goshen is: Fraud Mitigation Using One or More Enhanced Spatial Features
The Abstract is as follows:
“Techniques are provided for fraud mitigation using enhanced spatial features. One method comprises obtaining transaction data associated with a transaction; obtaining a machine learning module trained using training transaction data for multiple geographic areas to learn a correlation of the training transaction data with fraudulent activity for each geographic area; extracting a transaction address from the transaction data; determining a given geographic area for the transaction using the transaction address; determining values for a predefined spatial feature for a predefined region that includes the transaction address in the given geographic area using a query of an external online data source; applying the determined values for the predefined spatial feature to the machine learning module to obtain an anomaly score for the transaction; and initiating a predefined remedial step and/or a predefined mitigation step when the transaction is determined to be a predefined anomaly based on the anomaly score.” (Emphasis Added)
Goshen teaches the use of address data as well as other transaction related data as illustrated in Fig. 1:
[Image: Fig. 1 of Goshen, illustrating the use of address data and other transaction-related data]
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the machine-learning-based address risk system of Lagneaux to add the teachings of Goshen related to other features used in the detection of fraud. The motivation to make this modification comes from Lagneaux itself, which teaches, as illustrated above, that various address data can be used to train an ML model to generate a fraud risk score. It would greatly enhance the efficiency and accuracy of the system of Lagneaux to add the feature-creation and model-training techniques of Goshen.
With regard to Claim 2, Lagneaux teaches:
2. The method of claim 1, further comprising transmitting a notification to an entity involved in an interaction with the first address regarding a likelihood of fraud of the first address. (See at least Fig. 5 which provides a notification of the risk of fraud in the interface.)
3. The method of claim 2, wherein transmitting the notification is based on the address risk score being greater than a threshold value. (See at least [0004])
4. The method of claim 3, wherein the threshold value is determined by a user. (See at least [0049])
5. The method of claim 1, wherein the labeled data comprises flagged behavior, non-fraudulent behavior, and fraudulent behavior corresponding to the one or more addresses. (See at least [0101] – [0103], especially with respect to “non-anomalous addresses.”)
6. The method of claim 5, wherein the flagged behavior for the one or more addresses comprises at least one of: a number of entities greater than a certain threshold value; entities writing checks that return; entities depositing counterfeit checks; accounts being forced to shut down by a bank; a sudden ending of payroll checks for an entity associated with the one or more addresses; a number of personal identifying information (PII) associated with the one or more addresses; one or more instances of fraud or suspected fraud committed by an entity using the one or more addresses; when the last instance of fraud or suspected fraud was committed by an entity using the one or more addresses; a number of entities that have used the one or more addresses to commit fraud or suspected fraud; a number of accounts that have used the one or more addresses to commit fraud or suspected fraud; one or more of an input speed, consistency, and variety for completing an interaction involved with the one or more addresses; the one or more addresses being associated with a fraud network; and a number of bank accounts being opened for the one or more addresses across multiple banks. (See at least [0103] and [0116])
7. The method of claim 5, wherein the fraudulent behavior for the one or more addresses comprises at least one of: causing or being involved in prior fraud; and causing or being involved in loss. (See at least [0008].)
8. The method of claim 5, wherein the non-fraudulent behavior for the one or more addresses comprise: entities having a credit score higher than a minimum threshold credit score; entities making timely payments towards their outstanding bills; a number of years for the one or more addresses has been free of flagged or fraudulent behavior; and an income of entities associated with this address. (See at least [0103] and [0052] relative to credit agencies.)
9. The method of claim 5, wherein training the address risk machine learning model comprises deriving relationships between one or more of the flagged, non-fraudulent behavior, and fraudulent behavior. (See at least [0092] relative to attributes that could indicate fraud. See also [0083] and Tables 1-3.)
10. The method of claim 9, wherein deriving relationships further comprises assigning a weighted value to each type of the one or more of flagged and non-fraudulent behavior corresponding to how likely that type of the one or more of flagged and non-fraudulent behavior is associated with fraudulent behavior. (See at least [0103] – [0105] relative to hierarchical factors. Clustering is a form of weighting.)
11. The method of claim 10, wherein the weighted value varies based on a number of instances of each type of behavior. (See at least [0103] – [0105] relative to hierarchical factors and volume of instances.)
12. The method of claim 1, further comprising testing the address risk machine learning model by comparing the address risk score with a historical data of fraud. (See at least [0091].)
13. The method of claim 12, further comprising re-training the address risk machine learning model to minimize a difference between the address risk score and the historical data of fraud. (See at least [0091] and [0075]. See also [0103] relative to labelled data which a person of ordinary skill in the art would readily understand is used for testing a model’s performance.)
With regard to Claim 14, this claim is essentially identical to Claim 1 and is obvious for the same reasons as set forth in that claim.
With regard to Claim 15, this claim is essentially identical to Claim 5 and is obvious for the same reasons as set forth in that claim.
With regard to Claim 16, this claim is essentially identical to Claim 6 and is obvious for the same reasons as set forth in that claim.
With regard to Claim 17, this claim is essentially identical to Claim 8 and is obvious for the same reasons as set forth in that claim.
With regard to Claim 18, this claim is essentially identical to Claim 9 and is obvious for the same reasons as set forth in that claim.
With regard to Claim 19, this claim is essentially identical to Claim 10 and is obvious for the same reasons as set forth in that claim.
With regard to Claim 20, this claim is essentially identical to Claim 1 and is obvious for the same reasons as set forth in that claim.
Conclusion
4. Applicant should carefully consider the following in connection with this Office Action:
A. Finality
THIS ACTION IS MADE FINAL. See MPEP § 706.07. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
B. Search and Prior Art
The search conducted in connection with this Office Action, as well as any previous Actions, encompassed the inventive concepts as defined in the Applicant’s specification. That is, the search(es) included concepts and features defined by the pending claims as well as those pertinent to significant, although unclaimed, subject matter. Accordingly, such search(es) were directed to the defined invention as well as the general state of the art, including references in the same field of endeavor as the present application and in related fields (e.g., using labeled training data to train ML models to detect fraud). Indeed, there is a plethora of prior art in these fields.
Therefore, in addition to prior art references cited and applied in connection with this and any previous Office Actions, the following prior art is also made of record but not relied upon in the current rejection:
U.S. Patent Publication No. 2020/0274894 to Argoeti et al. This reference relates to the concept of detecting fraud using IP addresses.
U.S. Patent Publication No. 2023/0206372 to Ordorica et al. This reference relates to the concept of using IP addresses.
U.S. Patent Publication No. 2014/0222631 to Love et al. This reference relates to the concept of a risk factor.
U.S. Patent Publication No. 2012/0226590 to Love et al. This reference relates to the concept of risk with respect to addresses.
PCT Patent Publication No. WO 2020/149790 to Kim et al. This reference relates to the concept of detecting fraud using physical addresses.
C. Responding to this Office Action
In view of the foregoing explanation of the scope of the searches conducted in connection with the examination of this application, Applicant is encouraged, in preparing any response to this Action, to carefully review the entire disclosures of the above-cited, unapplied references, as well as any previously cited references. It is likely that one or more such references disclose or suggest features which Applicant may seek to claim. Moreover, for the same reasons, Applicant is encouraged to review the entire disclosures of the references applied in the foregoing rejections, not just the sections mentioned.
D. Interviews and Compact Prosecution
The Office strongly encourages interviews as an important aspect of compact prosecution. Statistics and studies have shown that prosecution can be greatly advanced by way of interviews. Indeed, in many instances, during the course of one or more interviews, the Examiner and Applicant may reach an agreement on eligible and allowable subject matter that is supported by the specification.
Interviews are especially welcomed by this examiner at any stage of the prosecution process. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool (e.g., TEAMS). To facilitate the scheduling of an interview, the Examiner requests the use of the AIR form as follows:
USPTO Automated Interview Request http://www.uspto.gov/interviewpractice.
Other forms of interview requests filed in this application may result in a delay in scheduling the interview because of the time required to appear on the Examiner's docket. Thus, the use of the AIR form is strongly encouraged.
E. Communicating with the Office
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM BUNKER whose telephone number is (571)272-0017. The examiner can normally be reached on M - F 8:30AM - 5:30PM, Pacific.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abhishek Vyas, can be reached at 571-270-1836. Information regarding the status of an application, whether published or unpublished, may be obtained from the “Patent Center” system. For more information about the Patent Center system, see https://patentcenter.uspto.gov/.
/William (Bill) Bunker/
U.S. Patent Examiner
AU 3691
(571) 272-0017 - office
william.bunker@uspto.gov
August 1, 2025
/ABHISHEK VYAS/Supervisory Patent Examiner, Art Unit 3691