Prosecution Insights
Last updated: April 19, 2026
Application No. 16/777,601

METHOD AND SYSTEM FOR DETERMINING RETURN OPTIONS FOR INVENTORY ITEMS

Status: Non-Final OA (§103)
Filed: Jan 30, 2020
Examiner: EVANS, KIMBERLY L
Art Unit: 3629
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Target Brands Inc.
OA Round: 9 (Non-Final)
Grant Probability: 12% (At Risk)
OA Rounds: 9-10
To Grant: 7y 0m
With Interview: 26%

Examiner Intelligence

Career Allow Rate: 12% (44 granted / 362 resolved; -39.8% vs TC avg)
Interview Lift: +13.4% on resolved cases with interview (moderate lift)
Avg Prosecution: 7y 0m
Total Applications: 389 across all art units (27 currently pending)

Statute-Specific Performance

§101: 30.6% (-9.4% vs TC avg)
§103: 39.8% (-0.2% vs TC avg)
§102: 9.3% (-30.7% vs TC avg)
§112: 16.6% (-23.4% vs TC avg)

Deltas are relative to the Tech Center average estimate; based on career data from 362 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This Non-Final action is in reply to the request for continued examination filed 7/14/2025. Claims 1, 5, 6, and 13 have been amended. Claims 3, 7, 12, 14, 15, and 17-21 were previously cancelled. Claims 1, 2, 4-6, 8-11, 13, 16, and 22 are pending.

Request for Continued Examination

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 7/14/2025 has been entered.

Response to Arguments/Amendments

As to the 35 USC 103 rejection, applicant argues that the amended independent claims recite limitations similar to those of dependent claim 5. However, as noted in the Examiner's Interview Summary (7/16/2025), Flores teaches various return options. Further, the Examiner does not consider the prior art references to be as limiting as applicant avers. The Examiner has modified the rejection to further explain how the limitations are being interpreted and has addressed each of applicant's limitations below in this Non-Final rejection.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 2, 4-6, 8-11, 13, 16, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Flores et al., US Patent Application Publication No. US 2019/0244214 A1, in view of Karmakar et al., US Patent Application Publication No. US 2020/0349575 A1.
With respect to Claims 1 and 13, Flores discloses, storing historical data relating to product orders associated with a plurality of inventory items, wherein the product orders are fulfilled by the retail enterprise and the retail enterprise is configured to fulfill product orders through at least one of shipping, in-store pickup and in-store shopping (Fig 1, ¶48: “The transaction history data 130 is data generated by one or more point-of-sale (POS) devices. The transaction history data 130 can include data associated with online item purchases, online item returns, in-store item purchases, and/or in-store item returns. The transaction history data 130 can be obtained directly from a plurality of POS devices, obtained from a local data storage device, such as the data storage device 132, and/or obtained from a remote data storage, such as, but not limited to, the cloud storage 120”; ¶75: “Each type of item in a plurality of items carried within an inventory of a store (online or brick-and-mortar) receives a unique return value customized for that type of item”; ¶135: “FIG. 16 is an exemplary block diagram illustrating a screenshot 1600 of a user device displaying a previous purchase history 1602 for the user. In this example, the portion of the purchase history 1602 displayed on the screen 1604 includes a record 1606 for a set of items purchased on May 5, 2017 and another record 1608 for a set of one or more items purchased on Apr. 15, 2017. The examples are not limited to two item orders in transaction history data. 
The previous purchase history 1602 is part of the transaction history data for the user associated with the user device”) storing historical data relating to product returns associated with the plurality of inventory items including customer attributes associated with the product orders and returns, product attributes associated with items from among the plurality of inventory items included in the product orders and returns (Abstract: “A customized returns manager component calculates a customized return-trust score and a per-item return value based on analysis of item data and transaction history data”; ¶2: A calculation component analyzes item data associated with the selected item and transaction history data associated with the first user using a set of score generation rules. The calculation component calculates a return-trust score for a user attempting to return a selected item based on results of the analysis. A return authorization component analyzes the per-user trust score and the item data, including a value of the selected item and a per-item return value, using a set of authorization criteria”; ¶47: “The user initiates the unassisted self-return process via the self-returns application 126 in some examples. In these examples, the user 122 scans a barcode or other marker on the item 124 using a scanning device, image capture device, or other sensor device associated with the user device 116. The self-returns application 126 utilizes the scan data and/or image data to obtain item return data associated with the item. The self-returns application 126 sends the item return data to a customized returns manager component 128. The customized returns manager component 128 provides the unassisted item self-return services to the user 122 on behalf of the provider of the item 124”; ¶48: “The transaction history data 130 is data generated by one or more point-of-sale (POS) devices. 
The transaction history data 130 can include data associated with online item purchases, online item returns, in-store item purchases, and/or in-store item returns”; ¶75: “The calculation component 302 calculates a per-item return value 316 for the item being returned based on an analysis of the item data 306 and a set of item-value parameters 318… The set of item-value parameters 318 includes one or more parameters for determining the item-value for a selected item. Each type of item in a plurality of items carried within an inventory of a store (online or brick-and-mortar) receives a unique return value customized for that type of item.”; ¶79: “the user's return-trust score is updated 320 based on current returns transactions in real-time. In these non-limiting examples, each time a user completes a return transaction successfully, the return-trust score 314 is updated 320 to increase the score. Likewise, if the user attempts a return transaction that is rejected or fails to complete due to a problem with the item, receipt, or other issue associated with the return, the return-trust is updated to reflect a lower level of returns-related trust for the user”; Fig 8, ¶101: “The per-item returns history 804 can also indicate the number of instances of a given item returned within a predetermined time-period. If the number is unusually high, it can indicate user-supervision/assistance during item return is advisable”) and an indication of fraudulent or non-fraudulent activity associated with the product orders and returns within a database (¶65: “an authorization component utilizing authorization rules and/or item disposition rules to complete unassisted, self-return of items with fraud avoidance based on the user's past purchase history.
The kiosk 208 establishes a return-trust value for the user to determine if the user can complete a return without associate assistance or if assistance is required from returns manager or other personnel to complete the transaction”; ¶72: “The return-trust score 314 is a score or ranking indicating a degree of trust or item return experience/qualifications for the user 315… if a user has returned many items in the past without difficulty and without any issues arising with regard to the item returns, the return-trust score is higher than a score for a user that has attempted one or more item returns associated with an issue, such as, a missing receipt, a fraudulent return, etc”) receiving a particular customer log-in and an indication of an initiation of a particular product return associated with a previously purchased inventory item at a user-interface of a retail website (Fig 1, ¶39: “The applications can communicate with counterpart applications or services such as web services accessible via a network 112. In an example, the applications can represent downloaded client-side applications that correspond to server-side services executing in a cloud”; ¶43: “The user device 116 and/or the user device 118 can also include a user interface component for providing input to the user device and/or receiving output from the user device”; ¶44: “user device 116 is a computing device associated with a user 122 attempting to return an item 124 to a provider of the item 124 via a self-returns application 126 executing on the user device 116. The self-returns application 126 can be downloaded from the computing device 102, the cloud storage 120, an applications server, or other application provider via the network 112”; ¶45: “The user initiates the unassisted self-return process via the self-returns application 126 in some examples.
In these examples, the user 122 scans a barcode or other marker on the item 124 using a scanning device, image capture device, or other sensor device associated with the user device 116. The self-returns application 126 utilizes the scan data and/or image data to obtain item return data associated with the item. The self-returns application 126 sends the item return data to a customized returns manager component 128. The customized returns manager component 128 provides the unassisted item self-return services to the user 122 on behalf of the provider of the item 124”; ¶55: “user-provided data 138 in some non-limiting examples includes data provided by the user 122, such as a user's returns account 140 information, login (username and/or password), user-provided reason for return of the item 124, and/or any other information provided by the user via the self-returns application 126.”) the customer log-in being submitted from the customer via a computing device, the computing device being separated from the retail enterprise by a network (Fig 1, ¶42: “Communication between the computing device 102 and other devices, such as but not limited to a user device 116, a user device 118 and/or a cloud storage 120, can occur using any protocol or mechanism over any wired or wireless connection”; ¶44: “user device 116 is a computing device associated with a user 122 attempting to return an item 124 to a provider of the item 124 via a self-returns application 126 executing on the user device 116.
The self-returns application 126 can be downloaded from the computing device 102, the cloud storage 120, an applications server, or other application provider via the network 112”) responsively, retrieving internal reputation information for the customer based on a history of fraudulent activity of the customer within the retail enterprise (Abstract: “A customized returns manager component calculates a customized return-trust score and a per-item return value based on analysis of item data and transaction history data”; ¶2: “A calculation component analyzes item data associated with the selected item and transaction history data associated with the first user using a set of score generation rules. The calculation component calculates a return-trust score for a user attempting to return a selected item based on results of the analysis. A return authorization component analyzes the per-user trust score and the item data, including a value of the selected item and a per-item return value, using a set of authorization criteria”; ¶72: “If a user has returned many items in the past without difficulty and without any issues arising with regard to the item returns, the return-trust score is higher than a score for a user that has attempted one or more item returns associated with an issue, such as, a missing receipt, a fraudulent return, etc”; ¶98: “The set of score generation rules 312 can include a number of items returned per a time-period 704 by the user and/or an indication of a score adjustment amount based on the number of successful returns of items and/or unsuccessful attempted returns (fraudulent returns)”) Applicant’s disclosure teaches at ¶80: “The risk application 802 also receives information from a reputation application 812. Reputation application 812 includes both internal reputation information 814 and external reputation information 816. Internal reputation information 814 includes past fraud history and a customer risk score”.
Giving the broadest reasonable interpretation of the claim limitations in light of the specification, Examiner interprets the per-user return trust score as taught by Flores as teaching applicant’s internal reputation score. retrieving external reputation information for the customer based on credential information of the customer external to the retail enterprise (Fig 1, ¶157: “factors utilized to determine whether to allow a user to complete a return via a self-return system and keep the item includes restocking costs of an item, return-trust score of the user, value of the item, type of the item, category of the item, whether the user has a returns account (user opt-in registration data available), whether the user returning the item is identified or unidentifiable, whether this is a user's first item return, previous purchases, credit score, payment method, etc.”; ¶158: “The rules can be customized to a selected store. For example, at a first store, there may be very few fraudulent returns of digital video disks (DVDs), however, at a second store there may be a much higher ratio of fraudulent DVD returns. Therefore, the per-item return value for a DVD item at the first store can be significantly higher than at the second store”; ¶160: “Two independent third-party sources of information are utilized by the system to verify the returns account owner's identity to build a trust model for the user in another example. The first source is a similar financial credit/services data provider, such as EXPERIAN®. The second is a provider which tracks personal email accounts to popular sites, such as social media”) Applicant’s disclosure teaches at ¶80: “The risk application 802 also receives information from a reputation application 812. Reputation application 812 includes both internal reputation information 814 and external reputation information 816 … External reputation information 816 includes compromise credential information, device reputation, and IP reputation”.
Giving the broadest reasonable interpretation of the claim limitations in light of the specification, Examiner interprets at least the trust model including third party sources of information, return-trust score and per-item return value as taught by Flores as teaching applicant’s external reputation score. retrieving a customer baseline login procedure associated with the customer (¶55: “The user-provided data 138 in some non-limiting examples includes data provided by the user 122, such as a user's returns account 140 information, login (username and/or password), user-provided reason for return of the item 124, and/or any other information provided by the user via the self-returns application 126”; ¶160: “Two independent third-party sources of information are utilized by the system to verify the returns account owner's identity to build a trust model for the user in another example. The first source is a similar financial credit/services data provider, such as EXPERIAN®. The second is a provider which tracks personal email accounts to popular sites, such as social media. The user signs up for a returns account and provides an email account. The system utilizes third party data to verify that the user-provided name and email combination have been used for a predetermined time-period by this person, such as, but not limited to, a four-year time-period. 
This provides another level of verifying/authorizing self-returns”) based on the risk score, a value of the previously purchased inventory item, and a frequency of fraudulent activity associated with the previously purchased inventory item (¶55: “The user-provided data 138 in some non-limiting examples includes data provided by the user 122, such as a user's returns account 140 information, login (username and/or password), user-provided reason for return of the item 124, and/or any other information provided by the user via the self-returns application 126”; ¶160: “Two independent third-party sources of information are utilized by the system to verify the returns account owner's identity to build a trust model for the user in another example. The first source is a similar financial credit/services data provider, such as EXPERIAN®. The second is a provider which tracks personal email accounts to popular sites, such as social media. The user signs up for a returns account and provides an email account. The system utilizes third party data to verify that the user-provided name and email combination have been used for a predetermined time-period by this person, such as, but not limited to, a four-year time-period. This provides another level of verifying/authorizing self-returns”; ¶143: “the item is to be returned to a store. The refund amount is to be credited back to a credit card of the user. 
When complete, the user selects an icon 2210 to accept and finish the online return process prior to delivering the item to the store”) automatically determining a plurality of appropriate return processing options for physically returning the previously purchased inventory item (Figs 17-22; Fig 18, Fig 19 “Choose Items you wish to return”; #1902 “Reason for return”; Fig 20, #2002 Changed Mind; Fig 22, Item #2202; Return to Store, #2204, “Refund Total” #2206, Fig 19, ¶140: “a screenshot 1900 of a user device displaying an item selected for return and providing an option permitting a user to select a reason 1904 for returning the selected item”; ¶107: “The return authorization component is a component that analyzes a per-user return-trust score and/or a per-item return value using authorization rules to determine whether to authorize an unassisted self-return of an item, such as, but not limited to, the return authorization component 322 in FIG. 3”; Fig 21, ¶142: “a screenshot 2100 of a user device displaying return options for returning the item to a designated return location. In this non-limiting example, the user clicks on an option to return the item at a store 2102 or return the item via mail 2104”; ¶143: “FIG. 22 is an exemplary block diagram illustrating a screenshot 2200 of a user device displaying a selected item 2202 for return, method of return 2204, an amount to be refunded 2206, and a method of providing the refund 2208. In this non-limiting example, the item is to be returned to a store”; ¶161: “The system instructs the user to place the item in a bin or keep the item in accordance with disposition rules. Rules determine when the user's account receives a refund for the return transaction.
For example, the user's account can receive the refund when the self-return is approved, when the item is received at the designated return location, after the item is inspected, etc.”) wherein the plurality of appropriate return processing options are selected from among: a regular refund option in which a regular refund is provided only after physically receiving, by the retail enterprise, the previously purchased inventory item associated with the product return from the customer via return mail or in store (Figs 17-22; Fig 18, Fig 19 “Choose Items you wish to return”; #1902 “Reason for return”; Fig 20, #2002 Changed Mind; Fig 22, Item #2202; Return to Store, #2204, “Refund Total” #2206, ¶107: “The return authorization component is a component that analyzes a per-user return-trust score and/or a per-item return value using authorization rules to determine whether to authorize an unassisted self-return of an item, such as, but not limited to, the return authorization component 322 in FIG. 3”; ¶110: “instructions are output via a user interface”; ¶140: “FIG. 19 is an exemplary block diagram illustrating a screenshot 1900 of a user device displaying an item selected for return and providing an option permitting a user to select a reason 1904 for returning the selected item. In some examples, the application provides a list of reasons for the item return, such as, but not limited to, item arrived late, the item was ordered by mistake, a part was missing and/or the user changed their mind about purchasing the item. In this example, the user has indicated that the user wants to return the item because the user has changed their mind. Other options which may be provided include, without limitation, that an item was the wrong color, wrong size, damaged in the mail, non-operational, or any other reason”; ¶142: “the user clicks on an option to return the item at a store 2102 or return the item via mail 2104”; ¶143: “FIG.
22 is an exemplary block diagram illustrating a screenshot 2200 of a user device displaying a selected item 2202 for return, method of return 2204, an amount to be refunded 2206, and a method of providing the refund 2208. In this non-limiting example, the item is to be returned to a store”; ¶148: “the system authorizes a user to return an item remotely via a self-returns application on a user device. Implementing this “keep it” logic, including evaluating re-shelving costs and item disposition cost thresholds, enables users (customers) to immediately receive refunds on select “qualifying” items based on dollar thresholds, transaction history data, and risk models”; ¶149: “the system allows a first user (customer) to make an item self-return by scanning and leaving the item to be returned at a designated location without associate (second user) intervention. The second user (returns manager/associate) can approach the first user entering a store with the item for return. The second user can use a mobile device to facilitate the returns process. The second user scans in the first user's receipt and/or scans the item to be returned. The system outputs instructions to the second user”; ¶151: “the system can allow the first user to complete the return process and will instruct the first user to place the item in bin “X” at the store or print out a return shipping label to mail the item back to the store or other provider/seller of the item”; ¶154: “a user enters a store and utilizes a self-returns application on the user's mobile device or on a store's kiosk to retrieve transaction details from a cloud storage, retrieve a refund amount or the item, and automatically refund payment to user after the user returns the item to a designated bin or other receptacle. 
In these examples, the user returns an item at a different location than the location at which the item was originally purchased”; ¶161: “The system instructs the user to place the item in a bin or keep the item in accordance with disposition rules. Rules determine when the user's account receives a refund for the return transaction. For example, the user's account can receive the refund when the self-return is approved, when the item is received at the designated return location, after the item is inspected, etc.”) a regular exchange option, wherein the regular exchange option is an exchange order processed only after physically receiving, by the retail enterprise, the previously purchased inventory item associated with the product return from the customer via return mail or in store (¶1: “If the item return is approved, the amount the user paid for the item may be refunded, in whole or in part, to the user in cash, credit back, a gift card or replacement item. If the item was ordered via an online source, the user may have to repackage and mail the item to a returns location”; ¶107: “The return authorization component is a component that analyzes a per-user return-trust score and/or a per-item return value using authorization rules to determine whether to authorize an unassisted self-return of an item, such as, but not limited to, the return authorization component 322 in FIG. 3”; ¶151: “the system can allow the first user to complete the return process and will instruct the first user to place the item in bin “X” at the store or print out a return shipping label to mail the item back to the store or other provider/seller of the item”; ¶161: “The system instructs the user to place the item in a bin or keep the item in accordance with disposition rules. Rules determine when the user's account receives a refund for the return transaction. 
For example, the user's account can receive the refund when the self-return is approved, when the item is received at the designated return location, after the item is inspected, etc.”) an advance exchange option, wherein a replacement inventory item is physically mailed or physically provided to the customer before the previously purchased inventory item associated with the product return is physically received by the retail enterprise (Abstract: “An item disposition component determines in real-time whether to permit the first user to keep the selected item or instruct the first user to return the selected item to a designated item return area prior to completion of the item return based on a set of item disposition criteria”; ¶1: “If the item return is approved, the amount the user paid for the item may be refunded, in whole or in part, to the user in cash, credit back, a gift card or replacement item. If the item was ordered via an online source, the user may have to repackage and mail the item to a returns location”; ¶148: “the system authorizes a user to return an item remotely via a self-returns application on a user device. Implementing this “keep it” logic, including evaluating re-shelving costs and item disposition cost thresholds, enables users (customers) to immediately receive refunds on select “qualifying” items based on dollar thresholds, transaction history data, and risk models”; ¶161: “The system instructs the user to place the item in a bin or keep the item in accordance with disposition rules. Rules determine when the user's account receives a refund for the return transaction. For example, the user's account can receive the refund when the self-return is approved, when the item is received at the designated return location, after the item is inspected, etc.”) an issue refund now option, wherein the customer is not required to physically return the previously purchased inventory item associated with the product return (¶146: “FIG. 
25 is an exemplary block diagram illustrating a screenshot 2500 of a user device displaying a notification to the user that the return transaction is complete without physically returning the item to an item return location. In other words, the return is completed without taking the item to a store or returning the item by mail. The user receives the refund while maintaining possession of the item. This improves user satisfaction, reduces time consumed by the returns process, and avoids incurring item disposal or restocking costs”; ¶148: “the system authorizes a user to return an item remotely via a self-returns application on a user device. Implementing this “keep it” logic, including evaluating re-shelving costs and item disposition cost thresholds, enables users (customers) to immediately receive refunds on select “qualifying” items based on dollar thresholds, transaction history data, and risk models”) a customer can keep option, wherein the replacement inventory item is physically mailed or physically provided to the customer even though the customer is not required to return the previously purchased inventory item associated with the product return (Abstract: “An item disposition component determines in real-time whether to permit the first user to keep the selected item or instruct the first user to return the selected item to a designated item return area prior to completion of the item return based on a set of item disposition criteria”; ¶1: “If the item return is approved, the amount the user paid for the item may be refunded, in whole or in part, to the user in cash, credit back, a gift card or replacement item. If the item was ordered via an online source, the user may have to repackage and mail the item to a returns location”; ¶60: “The item disposition instructions 150 instruct the user 122 in disposition of the item 124 following authorization of the unassisted return of the item 124. 
The item disposition instructions 150 can instruct the user to retain (keep) the item 124 or leave the item 124 at the designated return location 148”; ¶92: “The returns management component 500 can provide approval 510 for the user to complete the proposed item return without item return verification 512 where the user is authorized to retain possession of the selected item 410 being returned”; ¶146: “FIG. 25 is an exemplary block diagram illustrating a screenshot 2500 of a user device displaying a notification to the user that the return transaction is complete without physically returning the item to an item return location. In other words, the return is completed without taking the item to a store or returning the item by mail. The user receives the refund while maintaining possession of the item. This improves user satisfaction, reduces time consumed by the returns process, and avoids incurring item disposal or restocking costs”; ¶150: “Depending upon a set of disposition rules, the system can indicate the first user can keep the item or continue to process the return and place the item in a bin “X”. If the system authorizes completion of the return process, the second user can take the item and place it in the designated bin. The system applies credit to the first user's account if a credit card was used to complete the transaction”; ¶157: “factors utilized to determine whether to allow a user to complete a return via a self-return system and keep the item includes restocking costs of an item, return-trust score of the user, value of the item, type of the item, category of the item, whether the user has a returns account (user opt-in registration data available), whether the user returning the item is identified or unidentifiable, whether this is a user's first item return, previous purchases, credit score, payment method, etc.”; ¶161: “The system instructs the user to place the item in a bin or keep the item in accordance with disposition rules.
Rules determine when the user's account receives a refund for the return transaction. For example, the user's account can receive the refund when the self-return is approved, when the item is received at the designated return location, after the item is inspected, etc.”) Flores discloses that if an item return is approved, the amount the user paid for the item may be refunded, in whole or in part, to the user in cash, credit back, a gift card or replacement item. Flores further discloses an item disposition component which determines in real-time whether to permit the first user to keep a selected item prior to completion of the item return based on a set of item disposition criteria. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the selected (return) item as taught by Flores for applicant’s replacement inventory item for the predictable result of a user receiving a refund while maintaining possession of the item prior to completion of the return. wherein the plurality of appropriate return processing options are determined based on a level of trust of the customer and the particular product and attributes of a product order corresponding to the particular product (Fig 3, ¶80: “The set of score generation rules 312 can include a number of items returned per a time-period 704 by the user and/or an indication of a score adjustment amount based on the number of successful returns of items and/or unsuccessful attempted returns (fraudulent returns)”; Fig 7, ¶97: “FIG. 7 is an exemplary block diagram illustrating a set of score generation rules 312. The set of score generation rules 312 includes one or more rules for generating a return-trust score for a user.
The set of score generation rules 312 can include a threshold ratio of purchases to returns 702 made by the user and/or an amount of adjustment up (number of points added) or adjusted downward (points subtracted) from the score based on ration and/or changes in the ratio”; ¶98: “The set of score generation rules 312 can include a number of items returned per a time-period 704 by the user and/or an indication of a score adjustment amount based on the number of successful returns of items and/or unsuccessful attempted returns (fraudulent returns).”; Fig 8, ¶100: “FIG. 8 is an exemplary block diagram illustrating a set of item-value parameters 318. The set of item-value parameters 318 can include a ratio of purchases to returns of the selected item by a plurality of users within a predetermined time-period 802. If an item has an unusually large number of item returns when compared with purchases of the item, the ratio can indicate returns of the item should be supervised by an associate”; ¶103: “A set of self-return ineligible items 808 can include one or more items which do not qualify for unassisted self-return due to their value, high-rate of fraudulent returns of the item, or other attributes. In one example, the set of self-return ineligible items 808 includes a watch. If a user attempts to return the watch via the self-return system, the authorization component does not authorize the unassisted return of the item based on the set of self-return ineligible items 808 in the set of item-value parameters in this example”; ¶107: “The return authorization component is a component that analyzes a per-user return-trust score and/or a per-item return value using authorization rules to determine whether to authorize an unassisted self-return of an item, such as, but not limited to, the return authorization component 322 in FIG. 
3”) the plurality of appropriate return options for the particular customer excluding at least one of a collection of return processing options based on the risk score exceeding a predetermined threshold (Fig 8, ¶102: “set of self-return ineligible categories 806 is a set of one or more categories of items which are ineligible for unassisted self-return. In some examples, but without limitation, the set of self-return ineligible categories includes high-end categories of items, such as, but not limited to, televisions, jewelry, video game consoles, smart phones, etc. If a user attempts to return a television in this example, the system identifies a category of the item as an ineligible category based on the set of item-value parameters. The system directs the user to wait for a returns manager to assist the user with completion of the transaction”; ¶103: “A set of self-return ineligible items 808 can include one or more items which do not qualify for unassisted self-return due to their value, high-rate of fraudulent returns of the item, or other attributes. In one example, the set of self-return ineligible items 808 includes a watch. If a user attempts to return the watch via the self-return system, the authorization component does not authorize the unassisted return of the item based on the set of self-return ineligible items 808 in the set of item-value parameters in this example”) wherein the attributes of the product order include the total number of items in the product order, the total cost of the product order, and how the product order was received by the customer, and wherein how the product order was received by the customer is one of delivery, in-store pickup, and in-store shopping (¶135: “Delivered”, FIG. 16 is an exemplary block diagram illustrating a screenshot 1600 of a user device displaying a previous purchase history 1602 for the user. 
In this example, the portion of the purchase history 1602 displayed on the screen 1604 includes a record 1606 for a set of items purchased on May 5, 2017 and another record 1608 for a set of one or more items purchased on Apr. 15, 2017”; “FIG. 17 is an exemplary block diagram illustrating a screenshot 1700 of a user device displaying item data associated with a previous transaction”; ¶137: “the transaction of May 5, 2017 includes two items, item 1704 and item 1706. The user can select either item 1704 or item 1706. The user can alternatively select both items 1704 and 1706 for return”; ¶138: “FIG. 18 is an exemplary block diagram illustrating a screenshot 1800 of a user device displaying a set of items purchased in a single purchase transaction”) determining a corresponding refund for the particular product prior to the retail enterprise receiving the particular product (¶143: “FIG. 22 is an exemplary block diagram illustrating a screenshot 2200 of a user device displaying a selected item 2202 for return, method of return 2204, an amount to be refunded 2206, and a method of providing the refund 2208. In this non-limiting example, the item is to be returned to a store. The refund amount is to be credited back to a credit card of the user. When complete, the user selects an icon 2210 to accept and finish the online return process prior to delivering the item to the store”; ¶146: “FIG. 25 is an exemplary block diagram illustrating a screenshot 2500 of a user device displaying a notification to the user that the return transaction is complete without physically returning the item to an item return location. In other words, the return is completed without taking the item to a store or returning the item by mail. The user receives the refund while maintaining possession of the item. 
This improves user satisfaction, reduces time consumed by the returns process, and avoids incurring item disposal or restocking costs”; ¶154: “a user enters a store and utilizes a self-returns application on the user's mobile device or on a store's kiosk to retrieve transaction details from a cloud storage, retrieve a refund amount or the item, and automatically refund payment to user after the user returns the item to a designated bin or other receptacle. In these examples, the user returns an item at a different location than the location at which the item was originally purchased”) presenting via the user-interface of the retail website: the particular product; a reason for the return; the plurality of appropriate return processing options for the particular product and the corresponding refund for the particular product according to a selected one of the plurality of appropriate return options, the corresponding refund including a refund amount (Fig 1, 2, ¶34: “The system analyzes item return data using a set of disposition criteria to determine disposition of an item approved for self-return by the identified user. The disposition criteria are utilized to identify a most suitable disposition of each item returned by a user via the self-return system on a per-item basis for increased item return efficiency and improved user return system interactions”; ¶35: “the computing device 102 represents any device executing computer-executable instructions 104 (e.g., as application programs, operating system functionality, or both) to implement the operations and functionality associated with the computing device 102. The computing device 102 can include a mobile computing device or any other portable device”; ¶36: “The computing device 102 can also include a user interface component 110”; ¶39: “The applications can communicate with counterpart applications or services such as web services accessible via a network 112. 
In an example, the applications can represent downloaded client-side applications that correspond to server-side services executing in a cloud”; Figs 17-22; Figs 18,19 “Choose Items you wish to return”; #1902 “Reason for return”; Fig 20, #2002 Changed Mind; Fig 22, Item #2202; Return to Store, #2204, “Refund Total” #2206, ¶107: “The return authorization component is a component that analyzes a per-user return-trust score and/or a per-item return value using authorization rules to determine whether to authorize an unassisted self-return of an item, such as, but not limited to, the return authorization component 322 in FIG. 3”; ¶110: “instructions are output via a user interface”; ¶140: “FIG. 19 is an exemplary block diagram illustrating a screenshot 1900 of a user device displaying an item selected for return and providing an option permitting a user to select a reason 1904 for returning the selected item. In some examples, the application provides a list of reasons for the item return, such as, but not limited to, item arrived late, the item was ordered by mistake, a part was missing and/or the user changed their mind about purchasing the item. In this example, the user has indicated that the user wants to return the item because the user has changed their mind. Other options which may be provided include, without limitation, that an item was the wrong color, wrong size, damaged in the mail, non-operational, or any other reason”; ¶142: “the user clicks on an option to return the item at a store 2102 or return the item via mail 2104”; ¶143: “FIG. 22 is an exemplary block diagram illustrating a screenshot 2200 of a user device displaying a selected item 2202 for return, method of return 2204, an amount to be refunded 2206, and a method of providing the refund 2208. In this non-limiting example, the item is to be returned to a store”; ¶148: “the system authorizes a user to return an item remotely via a self-returns application on a user device.
Implementing this “keep it” logic, including evaluating re-shelving costs and item disposition cost thresholds, enables users (customers) to immediately receive refunds on select “qualifying” items based on dollar thresholds, transaction history data, and risk models”) present to the particular customer via a user-interface of a retail website: the particular product; a reason for return; the plurality of appropriate return processing options for the particular product, and the corresponding refund for the particular product according to a selected one of the plurality of appropriate return options, the corresponding refund including a refund amount (Fig 1, 2, ¶34: “The system analyzes item return data using a set of disposition criteria to determine disposition of an item approved for self-return by the identified user. The disposition criteria are utilized to identify a most suitable disposition of each item returned by a user via the self-return system on a per-item basis for increased item return efficiency and improved user return system interactions”; ¶35: “the computing device 102 represents any device executing computer-executable instructions 104 (e.g., as application programs, operating system functionality, or both) to implement the operations and functionality associated with the computing device 102. The computing device 102 can include a mobile computing device or any other portable device”; ¶36: “The computing device 102 can also include a user interface component 110”; ¶39: “The applications can communicate with counterpart applications or services such as web services accessible via a network 112. 
In an example, the applications can represent downloaded client-side applications that correspond to server-side services executing in a cloud”; Figs 17-22; Fig 18, Fig 19 “Choose Items you wish to return”; #1902 “Reason for return”; Fig 20, #2002 Changed Mind; Fig 22, Item #2202; Return to Store, #2204, “Refund Total #2206”, ¶107: “The return authorization component is a component that analyzes a per-user return-trust score and/or a per-item return value using authorization rules to determine whether to authorize an unassisted self-return of an item, such as, but not limited to, the return authorization component 322 in FIG. 3”; ¶110: “instructions are output via a user interface”; ¶140: “FIG. 19 is an exemplary block diagram illustrating a screenshot 1900 of a user device displaying an item selected for return and providing an option permitting a user to select a reason 1904 for returning the selected item. In some examples, the application provides a list of reasons for the item return, such as, but not limited to, item arrived late, the item was ordered by mistake, a part was missing and/or the user changed their mind about purchasing the item. In this example, the user has indicated that the user wants to return the item because the user has changed their mind. Other options which may be provided include, without limitation, that an item was the wrong color, wrong size, damaged in the mail, non-operational, or any other reason”; ¶142: “the user clicks on an option to return the item at a store 2102 or return the item via mail 2104”; ¶143: “FIG. 22 is an exemplary block diagram illustrating a screenshot 2200 of a user device displaying a selected item 2202 for return, method of return 2204, an amount to be refunded 2206, and a method of providing the refund 2208. In this non-limiting example, the item is to be returned to a store”; ¶148: “the system authorizes a user to return an item remotely via a self-returns application on a user device. 
Implementing this “keep it” logic, including evaluating re-shelving costs and item disposition cost thresholds, enables users (customers) to immediately receive refunds on select “qualifying” items based on dollar thresholds, transaction history data, and risk models”) a computing system including one or more enterprise computing devices, the computing system including at least one processor and a memory subsystem including at least one memory device, the memory subsystem communicatively coupled to the at least one processor, (¶35-¶37; ¶35: “FIG. 1, an exemplary block diagram illustrates a system 100 for customized self-returns… computing device 102 represents any device executing computer-executable instructions 104 (e.g., as application programs, operating system functionality, or both) to implement the operations and functionality associated with the computing device 102… the computing device 102 can represent a group of processing units or other computing devices”; ¶36: “ the computing device 102 has at least one processor 106 and a memory 108. The computing device 102 can also include a user interface component 110”; ¶38: “the computing device 102 further has one or more computer readable media such as the memory 108. The memory 108 includes any quantity of media associated with or accessible by the computing device 102”) the memory subsystem storing a customer attribute database and instructions executable to provide a customer risk assessment tool and a returns processing service tool, the instructions, when executed by the at least one processor, causing the computing system to (Fig 1, ¶39: “The applications, when executed by the processor 106, operate to perform functionality on the computing device 102. 
The applications can communicate with counterpart applications or services such as web services accessible via a network 112”; ¶50: “The data storage device 132 in this example is included within the computing device 102 or associated with the computing device 102. In other examples, the data storage device 132 includes a remote data storage accessed by the computing device via the network 112, such as a remote data storage device, a data storage in a remote data center, or the cloud storage 120”; ¶51: “160: Two independent third-party sources of information are utilized by the system to verify the returns account owner's identity to build a trust model for the use”; ¶54: “The item data 136, item return data 134, and/or user-provided data 138 in this example is stored on the local data storage device 132. In other examples, item data 136, item return data 134, and/or user-provided data 138 is stored on a remote data storage, such as the cloud storage 120”; ¶56: “The item data 136, item return data 134, and/or user-provided data 138 in this example is stored on the local data storage device 132. In other examples, item data 136, item return data 134, and/or user-provided data 138 is stored on a remote data storage, such as the cloud storage 120”; ¶80: “A return authorization component 322 determines whether to authorize 324 an unassisted self-return 326 of the item 124 by the user 315 based on the per-user return-trust score 314 calculated for the user 315 and the per-item return value 316 associated with the item 124 to be returned”) Flores discloses a customized returns manager component for calculating a customized return-trust score and a per-item return value based on analysis of item data and transaction history data. 
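For illustration only, the score-based authorization scheme attributed to Flores above (a per-user return-trust score and per-item return value gating an unassisted self-return, with ineligible categories per ¶102-¶103) can be sketched as follows. All function names, thresholds, weights, and rules in this sketch are hypothetical placeholders, not values from the reference or the claims.

```python
# Hypothetical sketch of score-based self-return authorization (cf. Flores
# ¶80, ¶97-¶98, ¶102-¶103). Thresholds and weights are illustrative only.

def return_trust_score(purchases: int, returns: int, fraudulent_returns: int) -> float:
    """Score a user from purchase/return history: a lower returns-to-purchases
    ratio raises the score; each fraudulent return attempt subtracts points."""
    if purchases == 0:
        return 0.0
    ratio = returns / purchases
    score = 100.0 * (1.0 - ratio)       # fewer returns per purchase -> higher trust
    score -= 10.0 * fraudulent_returns  # penalty per fraudulent attempt (illustrative)
    return max(score, 0.0)

def authorize_self_return(score: float, item_value: float,
                          ineligible_categories: set, category: str,
                          score_threshold: float = 50.0,
                          value_threshold: float = 200.0) -> bool:
    """Authorize an unassisted self-return only when the user's trust score
    clears a threshold, the item is not high-value, and its category is not
    on the self-return-ineligible list (e.g., televisions, jewelry)."""
    if category in ineligible_categories:
        return False
    return score >= score_threshold and item_value <= value_threshold
```

Under this sketch, a user with 4 purchases, 1 return, and no fraudulent attempts scores 75.0 and would be authorized to self-return a low-value apparel item, while the same user returning a television would be routed to assisted return because its category is ineligible.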
Flores further teaches a return management and authorization component and return-trust score for a user attempting to return an item, based on various rules associated with the analysis of item data, user-provided item return data, and transaction history data associated with the identified user. A person of ordinary skill in the art would have been motivated to modify the known score-based item returns authorization techniques/rules as taught by Flores to achieve the claimed invention (a regular exchange option, an advanced exchange option…) with a reasonable expectation of success in doing so (DyStar Textilfarben GmbH & Co. Deutschland KG v. C.H. Patrick Co., 464 F.3d 1356, 1360, 80 USPQ2d 1641, 1645 (Fed. Cir. 2006)); and the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such customized return techniques/rules into similar systems, resulting in increased item return efficiency and improved user return system interactions (¶140-¶156). Flores discloses all of the above limitations. Flores does not distinctly describe the following limitations; Karmakar, however, discloses: training a computerized model to establish a trained model to identify a likelihood of fraud occurring using the historical data including the customer attributes and the product attributes, and customer behavioral data (¶22: “FIG. 2A illustrates an exemplary high level process flow 200 for fraud risk scoring. Aggregation of transaction metrics occurs during operation 202. Some various transaction metrics are illustrated in FIG.
3, and include: [0023] high refund amount and/or frequency; [0024] high risky cancellations; [0025] high refund rate; high rate of goods not returned (GNR) refunds; [0026] high refunds through web, call center or doorstep; [0027] high rate of repetition; [0028] high-refunding retail facilities, cities, postal codes; [0029] refunds made in high value items and in high risk retail facilities; [0030] repeatedly refunding with the same driver with a high refund amount; [0031] repeatedly cancelling orders by the same sales representative after pickup completed; [0032] repeated refunds of same item; [0033] a spike in refunds; [0034] refunds at a higher price than paid; [0035] claims of damaged goods; [0036] claims of lost or undelivered goods; and [0037] others. [0038] Transaction metrics are aggregated at the customer level and then across the customer base. For example, all transactions for a particular customer ID are aggregated to identify all transactions and amounts associated with each customer ID”; ¶39: “Weights for the transaction metrics are determined in operation 206, which has multiple phases… data is split into training, validation, and test data sets as part of machine learning (ML) component training. Modeling occurs in phase 210, for example using an ML component”; ¶40: “the fraud risk score is based at least on transaction data that includes refund and cancellation data. Additional data is included based on the transaction metrics available, such as a spike in refunds, price arbitrage, substitution fraud, repeated refund of the same items, and others. In some examples, the fraud risk score is calculated as a weighted sum of multiple transaction metrics”; Fig 6A, 6B, ¶41: “Risks are categorized into risk priorities in operation 214, and reasons (e.g., significantly contributing factors) for the risk scores are provided. Risk scores are binned into risk categories. 
In some examples, determining risk priority involves assigning each customer ID to one of a predetermined set of risk categories”; Fig 3, ¶56: “Common risk KPIs 302, secondary risk factors 304, multi-party-collusion risk factors 306, and other suspicious behaviors 308, described above in relation to FIGS. 2A-3, are included within aggregated transaction metric data sets 750, and are thus sets of aggregated transaction metric”; ¶58: “A scoring component 768 determines, for each customer ID within plurality of customer IDs 716, a risk score 772 based at least on the scoring weights 770 applied to corresponding transaction metrics (in aggregated transaction metric data sets 750) and also determines, based at least on risk scores 772 for each customer ID within plurality of customer IDs 716, a risk priority”; ¶59: “information is saved in transaction feedback data 784. ML component 790 then uses ML training component 794 on transaction feedback data 784 to train ML model 792, for example, as described above in relation to FIG. 2A”) train a computerized model of the customer risk assessment tool with the historical data and customer behavioral data to identify a likelihood of fraud occurring based on both the customer attributes and the product attributes to establish a trained model (¶22: “FIG. 2A illustrates an exemplary high level process flow 200 for fraud risk scoring. Aggregation of transaction metrics occurs during operation 202. Some various transaction metrics are illustrated in FIG. 
3, and include: [0023] high refund amount and/or frequency; [0024] high risky cancellations; [0025] high refund rate; high rate of goods not returned (GNR) refunds; [0026] high refunds through web, call center or doorstep; [0027] high rate of repetition; [0028] high-refunding retail facilities, cities, postal codes; [0029] refunds made in high value items and in high risk retail facilities; [0030] repeatedly refunding with the same driver with a high refund amount; [0031] repeatedly cancelling orders by the same sales representative after pickup completed; [0032] repeated refunds of same item; [0033] a spike in refunds; [0034] refunds at a higher price than paid; [0035] claims of damaged goods; [0036] claims of lost or undelivered goods; and [0037] others. [0038] Transaction metrics are aggregated at the customer level and then across the customer base. For example, all transactions for a particular customer ID are aggregated to identify all transactions and amounts associated with each customer ID”; ¶39: “Weights for the transaction metrics are determined in operation 206, which has multiple phases… data is split into training, validation, and test data sets as part of machine learning (ML) component training. Modeling occurs in phase 210, for example using an ML component”; ¶40: “the fraud risk score is based at least on transaction data that includes refund and cancellation data. Additional data is included based on the transaction metrics available, such as a spike in refunds, price arbitrage, substitution fraud, repeated refund of the same items, and others. In some examples, the fraud risk score is calculated as a weighted sum of multiple transaction metrics”; Fig 6A, 6B, ¶41: “Risks are categorized into risk priorities in operation 214, and reasons (e.g., significantly contributing factors) for the risk scores are provided. Risk scores are binned into risk categories. 
In some examples, determining risk priority involves assigning each customer ID to one of a predetermined set of risk categories”; Fig 3, ¶56: “Common risk KPIs 302, secondary risk factors 304, multi-party-collusion risk factors 306, and other suspicious behaviors 308, described above in relation to FIGS. 2A-3, are included within aggregated transaction metric data sets 750, and are thus sets of aggregated transaction metric”; ¶58: “A scoring component 768 determines, for each customer ID within plurality of customer IDs 716, a risk score 772 based at least on the scoring weights 770 applied to corresponding transaction metrics (in aggregated transaction metric data sets 750) and also determines, based at least on risk scores 772 for each customer ID within plurality of customer IDs 716, a risk priority”; ¶59: “information is saved in transaction feedback data 784. ML component 790 then uses ML training component 794 on transaction feedback data 784 to train ML model 792, for example, as described above in relation to FIG. 2A”) wherein how the product order was received by the customer is one of delivery, in-store pickup and in-store shopping (Fig 1, ¶19: “A customer 102 orders some items in an e-commerce sales transaction, for example by using a computer 104 visiting an online sales site 150 (e.g., a website) over the internet 152, or using an in-store terminal 154 in a retail facility 156. In some situations, delivery vehicle 158 delivers goods 120 (at least some of the ordered items) to customer location 106 (e.g., a residence or office). Each of delivery vehicle 158, online sales site 150, and in-store terminal 154 acts as a sales transaction node, because each can take part in a sales transaction and/or be a source of information. For example, delivery vehicle 158 (or equipment thereon) may collect an electronic record of deliveries and retrievals. 
Alternatively, customer 102 can elect to forego delivery to customer location 106 and pick up goods 120 at retail facility 156”; Fig 7, ¶54: “Fraud risk scoring tool 700 connects to online sales site 150, in-store terminal 154, delivery vehicle 158, and call center 160 over network 930 (collectively, transaction nodes) to receive and store sales data 710 and refund data 730 locally and/or in a data store 702. Sales data 710 is indexed with one of a plurality of customer IDs 716 and also includes amounts 712, item lists 714 (e.g., items sold), sales representative IDs 718, transaction IDs 720, cancellation flags 722 (indicating whether an order was canceled), pickup flags 724 (indicating whether goods for an order were picked up), and other data 726, such as a retail facility ID, and other information”; ¶58: “A scoring component 768 determines, for each customer ID within plurality of customer IDs 716, a risk score 772 based at least on the scoring weights 770 applied to corresponding transaction metrics (in aggregated transaction metric data sets 750) and also determines, based at least on risk scores 772 for each customer ID within plurality of customer IDs 716, a risk priority… Based on the criteria selected, identified risk transactions 780 are reported, for a selected risk priority. The reported data includes at least one customer ID associated with the selected risk priority”) providing customer attributes including the customer baseline login procedure, the internal reputation information, and the external reputation information of the particular customer, and product attributes of the particular product to the trained model with the trained model responsively determining and outputting a risk score representative of the particular customer and the particular product being involved in fraudulent activity (Figs 1, 2A, 3, ¶19: FIG. 1 illustrates an exemplary environment 100 that can advantageously employ fraud risk scoring. 
A customer 102 orders some items in an e-commerce sales transaction, for example by using a computer 104 visiting an online sales site 150 (e.g., a website) over the internet 152, or using an in-store terminal 154 in a retail facility 156. In some situations, delivery vehicle 158 delivers goods 120 (at least some of the ordered items) to customer location 106 (e.g., a residence or office)”; ¶22: “FIG. 2A illustrates an exemplary high level process flow 200 for fraud risk scoring. Aggregation of transaction metrics occurs during operation 202. Some various transaction metrics are illustrated in FIG. 3, and include: [0023] high refund amount and/or frequency; [0024] high risky cancellations; [0025] high refund rate; high rate of goods not returned (GNR) refunds; [0026] high refunds through web, call center or doorstep; [0027] high rate of repetition; [0028] high-refunding retail facilities, cities, postal codes; [0029] refunds made in high value items and in high risk retail facilities; [0030] repeatedly refunding with the same driver with a high refund amount; [0031] repeatedly cancelling orders by the same sales representative after pickup completed; [0032] repeated refunds of same item; [0033] a spike in refunds; [0034] refunds at a higher price than paid; [0035] claims of damaged goods; [0036] claims of lost or undelivered goods; and [0037] others …¶38: “ all transactions for a particular customer ID are aggregated to identify all transactions and amounts associated with each customer ID … The transaction metrics are normalized so that a typical customer baseline can be determined.”; ¶39: “Weights for the transaction metrics are determined in operation 206, which has multiple phases… data is split into training, validation, and test data sets as part of machine learning (ML) component training. 
Modeling occurs in phase 210, for example using an ML component”; ¶40: “a particular customer ID may be associated with multiple different risky transactions …the fraud risk score is based at least on transaction data that includes refund and cancellation data. Additional data is included based on the transaction metrics available, such as a spike in refunds, price arbitrage, substitution fraud, repeated refund of the same items, and others. In some examples, the fraud risk score is calculated as a weighted sum of multiple transaction metrics”; Fig 6A, 6B, ¶41-¶49; ¶41: “Risks are categorized into risk priorities in operation 214, and reasons (e.g., significantly contributing factors) for the risk scores are provided. Risk scores are binned into risk categories. In some examples, determining risk priority involves assigning each customer ID to one of a predetermined set of risk categories… This enables proactive responses for risky transactions. Providing the reasons for the risk scores facilitates remedial measures. Example reasons include: [0042] refund for high value items, claimed items were damaged, GNR; [0043] high rate of refunds through driver, high risk postal code, GNR; [0044] cancellation after pickup, high risk sales representative; [0045] high rate of refunds through driver, high rate of refunds at retail facility; [0046] unwanted substitutes, GNR; and [0047] others”; Fig 2B, ¶48: “operation 222 includes observation of transaction metrics to identify potential fraud detection options. Operation 224 seeks to interpret the observations to generate the risk scores and reasons. Operation 226 includes discovery of new and better detections, such as by ongoing ML training. Operation 228 is prevention of new fraud attempts, for example by suspending risky transactions until remedial actions can be taken to ensure that the transactions are legitimate”; FIG. 
3, ¶49: “various transaction metrics include metrics based on common risk key performance indicators (KPIs) 301, secondary risk factors 304, multi-party-collusion risk factors 306, and other suspicious behaviors 308. Common risk KPIs 302 include high refund amount and/or frequency; high risky cancellations; high refund rate; high rate of GNR refunds; high refunds through web, call center or doorstep (via driver); and high rate of repetition. Secondary risk factors 304 include high-refunding retail facilities, cities, postal code; and refunds made in high value items and in high-risk retail facilities. Multi-party collusion risk factors 306 (e.g., when a sales representative or delivery driver colludes with a customer) includes repeatedly refunding with the same driver with a high refund amount; and repeatedly cancelling orders by the same sales representative after pickup completed. Other suspicious behaviors 308 include repeated refunds of the same item; a spike in refunds; refunds at a higher price than what was paid; claims of damaged goods; and claims of lost or undelivered goods” Fig 7, ¶54: “Fraud risk scoring tool 700 connects to online sales site 150, in-store terminal 154, delivery vehicle 158, and call center 160 over network 930 (collectively, transaction nodes) to receive and store sales data 710 and refund data 730 locally and/or in a data store 702. Sales data 710 is indexed with one of a plurality of customer IDs 716 and also includes amounts 712, item lists 714 (e.g., items sold), sales representative IDs 718, transaction IDs 720, cancellation flags 722 (indicating whether an order was canceled), pickup flags 724 (indicating whether goods for an order were picked up), and other data 726, such as a retail facility ID, and other information. 
Refund data 730 is also indexed with one of the plurality of customer IDs 716, and has similar information: Amounts 732, item lists 734 (e.g., items refunded), sales representative IDs 738, transaction IDs 740, links 742 to the corresponding transaction IDs 720 in sales data 710, return flags 744 (indicating whether the goods were returned), and other data 746, such as a retail facility ID, and other information”; ¶56: “Common risk KPIs 302, secondary risk factors 304, multi-party-collusion risk factors 306, and other suspicious behaviors 308, described above in relation to FIGS. 2A-3, are included within aggregated transaction metric data sets 750, and are thus sets of aggregated transaction metric”; ¶58: “A scoring component 768 determines, for each customer ID within plurality of customer IDs 716, a risk score 772 based at least on the scoring weights 770 applied to corresponding transaction metrics (in aggregated transaction metric data sets 750) and also determines, based at least on risk scores 772 for each customer ID within plurality of customer IDs 716, a risk priority… Based on the criteria selected, identified risk transactions 780 are reported, for a selected risk priority. The reported data includes at least one customer ID associated with the selected risk priority”; ¶59: “Upon an investigation (triggered by the reporting) the finding may be that the risk transactions included fraud (and so should be or should have been suspended), or else were legitimate and suspension is not warranted. This information is saved in transaction feedback data 784. ML component 790 then uses ML training component 794 on transaction feedback data 784 to train ML model 792, for example, as described above in relation to FIG. 
2A”; ¶75: “determining, for each customer ID within the plurality of customer IDs, a risk score based at least on the scoring weights applied to corresponding transaction metrics; determining, based at least on the risk score for each customer ID within the plurality of customer IDs, a risk priority”) Applicant’s disclosure teaches at ¶64: “The data analysis application 602 receives information from customer history 510, attributes from the order 512, fraud and security history 516, and the number of customer contacts 514 to train a model for identification of potentially fraudulent activity with respect to a particular customer, item, or both. The model can output parameters that may be used in generation of a customer risk score”, ¶80: “The risk application 802 also receives information from a reputation application 812. Reputation application 812 includes both internal reputation information 814 and external reputation information 816. Internal reputation information 814 includes past fraud history and a customer risk score. External reputation information 816 includes compromise credential information, device reputation, and IP reputation”) Examiner interprets the various transaction metrics used for determining a customer baseline, common risk key performance indicators, secondary risk factors, multi-party-collusion risk factors, and other suspicious behaviors as taught by Karmakar as teaching applicant’s internal and external reputation information; while at least the sales and refund data including item lists and corresponding transaction ID as taught by Karmakar teaches applicant’s product attributes. Flores discloses a customized returns manager component for calculating a customized return-trust score and a per-item return value based on analysis of item data and transaction history data.
Flores further discloses a return management and authorization component and return-trust score for a user attempting to return an item, based on various rules associated with the analysis of item data, user-provided item return data, and transaction history data associated with the identified user. Karmakar discloses a fraud risk scoring tool based on a weighted sum of multiple metrics. Karmakar further discloses various fraud risk factors used by the fraud risk scoring tool and various transaction metrics based on common risk key performance indicators, secondary risk factors, multi-party collusion risk factors and other suspicious behaviors. Karmakar teaches that customer risk transaction (metric) information is saved in a transaction feedback data set which is aggregated and used to train an ML model. Karmakar further teaches that a risk score is based at least on scoring weights applied to corresponding transaction metrics; and determining a risk priority based at least on the risk score for each customer ID within the plurality of customer IDs. Flores and Karmakar are directed to the same field of endeavor since they are related to method/system/techniques for analyzing customer return/refund transaction history for managing fraud avoidance in a computing environment. Therefore, it would have been obvious to one of ordinary skill in the art to combine the machine learning component/models and fraud risk scoring tool as taught by Karmakar with the customized return techniques/rules as taught by Flores to provide data-driven identification of potential fraud cases (risk score) of a customer based on scoring weights applied to corresponding transaction metrics and a risk priority.
The known machine learning techniques/models and fraud risk scoring tool as taught by Karmakar would have predictably resulted in identifying potential user fraud, generating user risk scores and the prevention of new fraud attempts to ensure transactions are legitimate (Fig 1, 2A, 2B, Fig 3, Fig 6B, Figs 7-9, ¶22-¶38, ¶41-¶59; ¶75). Independent claim 13 recites substantially similar limitations as independent claim 1 therefore, it is also rejected based on the same rationale above. With respect to Claim 2, Flores and Karmakar disclose all of the above limitations, Flores further discloses, wherein the historical data includes total orders, total spent, total purchases, and percentage of returns to purchases (¶135: FIG. 16 is an exemplary block diagram illustrating a screenshot 1600 of a user device displaying a previous purchase history 1602 for the user. In this example, the portion of the purchase history 1602 displayed on the screen 1604 includes a record 1606 for a set of items purchased on May 5, 2017 and another record 1608 for a set of one or more items purchased on Apr. 15, 2017”; FIG. 17 is an exemplary block diagram illustrating a screenshot 1700 of a user device displaying item data associated with a previous transaction”; ¶137: “the transaction of May 5, 2017 includes two items, item 1704 and item 1706. The user can select either item 1704 or item 1706. The user can alternatively select both items 1704 and 1706 for return”; ¶138: “FIG.
18 is an exemplary block diagram illustrating a screenshot 1800 of a user device displaying a set of items purchased in a single purchase transaction”; Figs 17-22; Fig 18, Fig 19, “Choose Items you wish to return”; #1902 “Reason for return”; Fig 20, #2002 Changed Mind; Fig 22, Item #2202; Return to Store, #2204, “Refund Total” #2206, Fig 19, ¶140: “a screenshot 1900 of a user device displaying an item selected for return and providing an option permitting a user to select a reason 1904 for returning the selected item”; ¶165: “the set of item-value parameters comprising at least one of a maximum threshold value of the selected item, a threshold number of instances of a returned item per time-period, a threshold number of item returns in a single transaction, a set of self-return ineligible categories, a set of self-return ineligible items”; ¶171: “wherein the transaction history data comprises at least one of the number of items returned by the selected user within a predetermined time-period, the number of items obtained by the selected user within the predetermine time-period, an identification of items previously returned, or a value of each item previously returned”; ¶179: “calculating the per-item return value for the selected item based on a set of item attributes and a set of item-value parameters, the set of item-value parameters comprising at least one of a maximum threshold value of the selected item, a threshold number of instances of a returned item per time-period, a threshold number of item returns in a single transaction, a set of self-return ineligible categories, a set of self-return ineligible items”; ¶180: “calculating the per-user return-trust score for the first user based on number of items returned by the selected user within a predetermined time-period, the number of items obtained by the selected user within the predetermine time-period, an identification of items previously returned, or a value of each item previously returned”) With respect
to Claim 4, Flores and Karmakar disclose all of the above limitations, Flores further discloses, wherein the historical data includes total returns, total replacement amount, total advance replacement amount, total refund amount, (¶1: “If the item return is approved, the amount the user paid for the item may be refunded, in whole or in part, to the user in cash, credit back, a gift card or replacement item”; Fig 7, Fig 8, ¶98: “the set of score generation rules 312 can include a number of items returned per a time-period 704 by the user and/or an indication of a score adjustment amount based on the number of successful returns of items and/or unsuccessful attempted returns (fraudulent returns). A number of items obtained via purchase within a given time-period 706 and/or an amount of adjustment of the score up or down based on the number of items purchased within the time-period. The time-period can be any configurable time-period, such as, but not limited to, a one-year time period, a four-year time period, a seven-year time period, or any other amount of item.”; ¶99: “the set of score generation rules 312 can also include threshold returns value 708, threshold purchase value 710, and/or weight(s) 712”; ¶100: “FIG. 8 is an exemplary block diagram illustrating a set of item-value parameters 318. The set of item-value parameters 318 can include a ratio of purchases to returns of the selected item by a plurality of users within a predetermined time-period 802”; ¶101: “per-item returns history 804 can also indicate the number of instances of a given item returned within a predetermined time-period. If the number is unusually high, it can indicate user-supervision/assistance during item return is advisable. In other words, an unusually high number of returns can indicate suspicious returns activity associated with this type of item. 
For example, if a large number of memory sticks are being returned, it indicates increased monitoring (assistance) recommended”; ¶104: “FIG. 9 is an exemplary block diagram illustrating transaction history data 130. The transaction history data 130 includes item return history 902 and/or item purchase history 904 of a selected user. The transaction history data can optionally also include a method of payment used during a transaction, a credit score of the user, and/or any issues associated with attempted item returns. Return issues can include lack of a receipt, lack of packaging associated with an item, loss of the item to be returned, damaged/broken item, etc.”; ¶105: “the item purchase history 904 is used to infer a returns profile for a user. The item purchase history 904 is utilized as first factor of building a return-trust value for the user”; ¶143: “FIG. 22 is an exemplary block diagram illustrating a screenshot 2200 of a user device displaying a selected item 2202 for return, method of return 2204, an amount to be refunded 2206, and a method of providing the refund 2208. In this non-limiting example, the item is to be returned to a store. The refund amount is to be credited back to a credit card of the user.
When complete, the user selects an icon 2210 to accept and finish the online return process prior to delivering the item to the store”; ¶164: “calculation component, implemented on the at least one processor, that calculates the per-item return value for the selected item based on a set of item attributes and a set of item-value parameters”;¶166: “wherein the set of item attributes comprises at least one of a category of the selected item, a value of the selected item, an item returns history associated with the selected item”; ¶171: “ wherein the transaction history data comprises at least one of the number of items returned by the selected user within a predetermined time-period, the number of items obtained by the selected user within the predetermine time-period, an identification of items previously returned, or a value of each item previously returned”; ¶179: “calculating the per-item return value for the selected item based on a set of item attributes and a set of item-value parameters, the set of item-value parameters comprising at least one of a maximum threshold value of the selected item, a threshold number of instances of a returned item per time-period, a threshold number of item returns in a single transaction, a set of self-return ineligible categories, a set of self-return ineligible items”; ¶181: “the calculation component, implemented on the at least one processor, that calculates the per-item return value for the set of items based on a set of item attributes and a set of item-value parameters”) Flores discloses a customized returns manager component for calculating a customized return-trust score and a per-item return value based on analysis of item data and transaction history data. Flores teaches that if the item return is approved, the amount the user paid for the item may be refunded, in whole or in part, to the user in cash, credit back, a gift card or replacement item. 
Flores further teaches a return management and authorization component and return-trust score for a user attempting to return an item, based on various rules associated with the analysis of item data, user-provided item return data, and transaction history data associated with the identified user. Flores discloses a set of score generation rules including one or more rules for generating a return-trust score for a user. The set of score generation rules 312 can include a threshold ratio of purchases to returns 702 made by the user and/or an amount of adjustment up (number of points added) or adjusted downward (points subtracted) from the score based on the ratio and/or changes in the ratio. Flores further discloses that the set of score generation rules 312 can include a number of items returned per a time-period 704 by the user and/or an indication of a score adjustment amount based on the number of successful returns of items and/or unsuccessful attempted returns (fraudulent returns). A person of ordinary skill in the art would have been motivated to modify the known score-based item returns authorization techniques/rules as taught by Flores to achieve the claimed invention (historical data includes…total replacement amount, total advance replacement amount) with a reasonable expectation of success in doing so (DyStar Textilfarben GmbH & Co. Deutschland KG v. C.H. Patrick Co., 464 F.3d 1356, 1360, 80 USPQ2d 1641, 1645 (Fed. Cir. 2006)); and the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such customized return techniques/rules into similar systems, hence resulting in improved score generation rules for calculating the return-trust score for a user and indicating suspicious returns activity associated with an item (Fig 7, Fig 8, ¶1, ¶98-¶105, ¶143, ¶171).
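The rule-based score adjustment described above (a threshold ratio of purchases to returns, per-period return counts, and weighted adjustments up or down) can be illustrated with a minimal sketch. All names, weights, and thresholds below are hypothetical assumptions for illustration; they are not values taken from Flores.

```python
# Hypothetical sketch of rule-weighted return-trust scoring in the style of the
# score generation rules discussed above. Weights and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class UserHistory:
    purchases: int        # items obtained within the time-period
    returns: int          # items returned within the time-period
    flagged_returns: int  # unsuccessful/fraudulent return attempts

def return_trust_score(h: UserHistory,
                       ratio_threshold: float = 0.25,
                       w_ratio: float = 40.0,
                       w_flagged: float = 25.0,
                       base: float = 100.0) -> float:
    """Adjust a base score up or down per weighted rules; higher means more trusted."""
    ratio = h.returns / h.purchases if h.purchases else 1.0
    score = base
    if ratio > ratio_threshold:
        # Rule: subtract points when the returns-to-purchases ratio exceeds the threshold.
        score -= w_ratio * (ratio - ratio_threshold)
    # Rule: subtract points for each flagged (fraudulent) return attempt.
    score -= w_flagged * h.flagged_returns
    return max(score, 0.0)

print(return_trust_score(UserHistory(purchases=20, returns=2, flagged_returns=0)))  # → 100.0
print(return_trust_score(UserHistory(purchases=4, returns=3, flagged_returns=1)))   # → 55.0
```

The sketch mirrors how weight(s) 712 would give some rules greater precedence than others when calculating the score, though the actual rule set and weighting in Flores are not specified at this level of detail.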
With respect to Claim 5, Flores and Karmakar disclose all of the above limitations, Flores further discloses, wherein the set of return processing options includes: a regular refund option in which the corresponding refund is a full refund that is provided only after receiving the particular product from a customer via return mail (¶1: “If the item return is approved, the amount the user paid for the item may be refunded, in whole or in part, to the user in cash, credit back, a gift card or replacement item. If the item was ordered via an online source, the user may have to repackage and mail the item to a returns location”; ¶107: “The return authorization component is a component that analyzes a per-user return-trust score and/or a per-item return value using authorization rules to determine whether to authorize an unassisted self-return of an item, such as, but not limited to, the return authorization component 322 in FIG. 3”;¶142 “user clicks on an option to return the item at a store 2102 or return the item via mail 2104”; ¶143: “the item is to be returned to a store. The refund amount is to be credited back to a credit card of the user. When complete, the user selects an icon 2210 to accept and finish the online return process prior to delivering the item to the store”; ¶161: “The system instructs the user to place the item in a bin or keep the item in accordance with disposition rules. Rules determine when the user's account receives a refund for the return transaction. 
For example, the user's account can receive the refund when the self-return is approved, when the item is received at the designated return location, after the item is inspected, etc.”) a regular exchange option in which the corresponding refund is an exchange order that is processed only after receiving the particular product from the customer via mail; (¶1: “If the item return is approved, the amount the user paid for the item may be refunded, in whole or in part, to the user in cash, credit back, a gift card or replacement item. If the item was ordered via an online source, the user may have to repackage and mail the item to a returns location”; ¶107: “The return authorization component is a component that analyzes a per-user return-trust score and/or a per-item return value using authorization rules to determine whether to authorize an unassisted self-return of an item, such as, but not limited to, the return authorization component 322 in FIG. 3”; Fig 21, ¶142: “a screenshot 2100 of a user device displaying return options for returning the item to a designated return location. In this non-limiting example, the user clicks on an option to return the item at a store 2102 or return the item via mail 2104”; ¶143: “the item is to be returned to a store. The refund amount is to be credited back to a credit card of the user. When complete, the user selects an icon 2210 to accept and finish the online return process prior to delivering the item to the store”; ¶151: “the system can allow the first user to complete the return process and will instruct the first user to place the item in bin “X” at the store or print out a return shipping label to mail the item back to the store or other provider/seller of the item”; ¶161: “The system instructs the user to place the item in a bin or keep the item in accordance with disposition rules. Rules determine when the user's account receives a refund for the return transaction.
For example, the user's account can receive the refund when the self-return is approved, when the item is received at the designated return location, after the item is inspected, etc.”) an advance exchange option in which the corresponding refund is an exchange order that is processed regardless of whether the customer has returned the particular product; (¶1: “If the item return is approved, the amount the user paid for the item may be refunded, in whole or in part, to the user in cash, credit back, a gift card or replacement item. If the item was ordered via an online source, the user may have to repackage and mail the item to a returns location”; ¶107: “The return authorization component is a component that analyzes a per-user return-trust score and/or a per-item return value using authorization rules to determine whether to authorize an unassisted self-return of an item, such as, but not limited to, the return authorization component 322 in FIG. 3…”;¶146 “FIG. 25 is an exemplary block diagram illustrating a screenshot 2500 of a user device displaying a notification to the user that the return transaction is complete without physically returning the item to an item return location. In other words, the return is completed without taking the item to a store or returning the item by mail.”; ¶148: “the system authorizes a user to return an item remotely via a self-returns application on a user device. Implementing this “keep it” logic, including evaluating re-shelving costs and item disposition cost thresholds, enables users (customers) to immediately receive refunds on select “qualifying” items based on dollar thresholds, transaction history data, and risk models”; ¶161: “The system instructs the user to place the item in a bin or keep the item in accordance with disposition rules. Rules determine when the user's account receives a refund for the return transaction. 
For example, the user's account can receive the refund when the self-return is approved, when the item is received at the designated return location, after the item is inspected, etc.”)) an issue refund now option in which the corresponding refund is processed regardless of whether the particular product has been returned (¶1: “If the item return is approved, the amount the user paid for the item may be refunded, in whole or in part, to the user in cash, credit back, a gift card or replacement item”;¶148: “the system authorizes a user to return an item remotely via a self-returns application on a user device. Implementing this “keep it” logic, including evaluating re-shelving costs and item disposition cost thresholds, enables users (customers) to immediately receive refunds on select “qualifying” items based on dollar thresholds, transaction history data, and risk models”; ¶161: “The system instructs the user to place the item in a bin or keep the item in accordance with disposition rules. Rules determine when the user's account receives a refund for the return transaction. For example, the user's account can receive the refund when the self-return is approved, when the item is received at the designated return location, after the item is inspected, etc.”) a customer-can-keep option in which the corresponding refund is a full refund and the particular product need not be returned. (¶146 “FIG. 25 is an exemplary block diagram illustrating a screenshot 2500 of a user device displaying a notification to the user that the return transaction is complete without physically returning the item to an item return location. In other words, the return is completed without taking the item to a store or returning the item by mail”; ¶148: “the system authorizes a user to return an item remotely via a self-returns application on a user device. 
Implementing this “keep it” logic, including evaluating re-shelving costs and item disposition cost thresholds, enables users (customers) to immediately receive refunds on select “qualifying” items based on dollar thresholds, transaction history data, and risk models”; ¶161: “The system instructs the user to place the item in a bin or keep the item in accordance with disposition rules. Rules determine when the user's account receives a refund for the return transaction. For example, the user's account can receive the refund when the self-return is approved, when the item is received at the designated return location, after the item is inspected, etc.”)) With respect to Claim 6, Flores and Karmakar disclose all of the above limitations, Karmakar further discloses, wherein each corresponding refund of the set of return processing options includes a time at which the corresponding refund is issued (¶54: “Fraud risk scoring tool 700 connects to online sales site 150, in-store terminal 154, delivery vehicle 158, and call center 160 over network 930 (collectively, transaction nodes) to receive and store sales data 710 and refund data 730 locally and/or in a data store 702… Refund data 730 is also indexed with one of the plurality of customer IDs 716, and has similar information: Amounts 732, item lists 734 (e.g., items refunded), sales representative IDs 738, transaction IDs 740, links 742 to the corresponding transaction IDs 720 in sales data 710, return flags 744 (indicating whether the goods were returned), and other data 746, such as a retail facility ID, and other information”; ¶63: “Flow chart 800 should be run for continual fraud monitoring when transactions are ongoing in operation 802. Therefore, operation 830 refreshes the metric data and repeats the foregoing, by returning to operation 804, for example weekly, or another time period, or based on events (e.g., holiday transaction surges).
The metrics will contain time dependencies, so risk thresholds will be set according to the time period included within the metric data. For example, the risk threshold for the return amount metric will be different when time period included is a week versus two weeks”) Flores and Karmakar are directed to the same field of endeavor since they are related to method/system/techniques for analyzing customer return/refund transaction history for managing fraud avoidance in a computing environment. Therefore, it would have been obvious to one of ordinary skill in the art to combine the customized return techniques/rules of Flores with the machine learning component/models and fraud risk scoring tool as taught by Karmakar since it allows for continual fraud monitoring when transactions are ongoing via training an ML model with transaction metrics (¶59-¶63). With respect to Claim 8, Flores and Karmakar disclose all of the above limitations, Flores further discloses, wherein the risk score exceeding a predetermined threshold represents potential fraudulent customer behavior (Abstract: “If a per-user return-trust score is within an unacceptable threshold range or an item value is within an unacceptable threshold value range, a second user is assigned to assist a first user with completion of the proposed return of the selected item”; ¶4: “A task assignment component assigns a second user to assist the first user with completion of the proposed return of the selected item on condition a per-user return-trust score associated with the user is within an unacceptable return-trust score threshold range or a per-item return value associated with the selected item is within an unacceptable return value threshold range”; ¶58: “If either the return trust score or the item return value is outside the acceptable threshold range, the unassisted return is unauthorized (assistance is requested)”; ¶72: “if a user has returned many items in the past without difficulty and without any 
issues arising with regard to the item returns, the return-trust score is higher than a score for a user that has attempted one or more item returns associated with an issue, such as, a missing receipt, a fraudulent return, etc”; ¶98: “The set of score generation rules 312 can include a number of items returned per a time-period 704 by the user and/or an indication of a score adjustment amount based on the number of successful returns of items and/or unsuccessful attempted returns (fraudulent returns)”; ¶99: “The set of score generation rules 312 can also include threshold returns value 708, threshold purchase value 710, and/or weight(s) 712. The weight(s) 712 indicate which rules have greater precedence/weight when calculating the return-trust score for a user”) With respect to Claim 9, Flores and Karmakar disclose all of the above limitations, Flores further discloses, wherein the customer attributes are obtained from at least a prior six months of data describing interactions between the customer and the retail enterprise. (Fig 7, Fig 8, ¶98: “the set of score generation rules 312 can include a number of items returned per a time-period 704 by the user and/or an indication of a score adjustment amount based on the number of successful returns of items and/or unsuccessful attempted returns (fraudulent returns). A number of items obtained via purchase within a given time-period 706 and/or an amount of adjustment of the score up or down based on the number of items purchased within the time-period. The time-period can be any configurable time-period, such as, but not limited to, a one-year time period, a four-year time period, a seven-year time period, or any other amount of item.”; ¶99: “the set of score generation rules 312 can also include threshold returns value 708, threshold purchase value 710, and/or weight(s) 712”; ¶100: “FIG. 8 is an exemplary block diagram illustrating a set of item-value parameters 318. 
The set of item-value parameters 318 can include a ratio of purchases to returns of the selected item by a plurality of users within a predetermined time-period 802”; ¶101: “per-item returns history 804 can also indicate the number of instances of a given item returned within a predetermined time-period. If the number is unusually high, it can indicate user-supervision/assistance during item return is advisable. In other words, an unusually high number of returns can indicate suspicious returns activity associated with this type of item. For example, if a large number of memory sticks are being returned, it indicates increased monitoring (assistance) recommended.”) Flores discloses a set of score generation rules including one or more rules for generating a return-trust score for a user. The set of score generation rules 312 can include a threshold ratio of purchases to returns 702 made by the user and/or an amount of adjustment up (number of points added) or adjusted downward (points subtracted) from the score based on the ratio and/or changes in the ratio. Flores further discloses that the set of score generation rules 312 can include a number of items returned per a time-period 704 by the user and/or an indication of a score adjustment amount based on the number of successful returns of items and/or unsuccessful attempted returns (fraudulent returns). Flores also teaches that the set of score generation rules 312 can include threshold returns value 708, threshold purchase value 710, and/or weight(s) 712. A person of ordinary skill in the art would have been motivated to modify the known score-based item returns authorization techniques/rules as taught by Flores to achieve the claimed invention (wherein the customer attributes are obtained from at least a prior six months of data) with a reasonable expectation of success in doing so (DyStar Textilfarben GmbH & Co. Deutschland KG v. C.H. Patrick Co., 464 F.3d 1356, 1360, 80 USPQ2d 1641, 1645 (Fed. Cir.
2006)); and the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such customized return techniques/rules into similar systems, hence resulting in generating improved score generation rules indicating which rules (threshold returns value, threshold purchase value and/or weight(s)) have greater precedence/weight when calculating the return-trust score for a user. (Fig 7, Fig 8, ¶98-¶101). With respect to Claim 10, Flores and Karmakar disclose all of the above limitations, Flores further discloses, wherein the customer attributes include customer profiles that include past calculated risk scores, (¶98: “the set of score generation rules 312 can include a number of items returned per a time-period 704 by the user and/or an indication of a score adjustment amount based on the number of successful returns of items and/or unsuccessful attempted returns (fraudulent returns). A number of items obtained via purchase within a given time-period 706 and/or an amount of adjustment of the score up or down based on the number of items purchased within the time-period. The time-period can be any configurable time-period, such as, but not limited to, a one-year time period, a four-year time period, a seven-year time period, or any other amount of item.”) Karmakar further discloses, wherein the customer attributes include customer profiles that include past fraudulent activity and an identification of whether the customer is a known reseller of items purchased from the retail enterprise (Fig 2, Fig 3, ¶22: “FIG. 2A illustrates an exemplary high level process flow 200 for fraud risk scoring. Aggregation of transaction metrics occurs during operation 202. Some various transaction metrics are illustrated in FIG. 
3, and include: [0023] high refund amount and/or frequency; [0024] high risky cancellations; [0025] high refund rate; high rate of goods not returned (GNR) refunds; [0026] high refunds through web, call center or doorstep; [0027] high rate of repetition; [0028] high-refunding retail facilities, cities, postal codes; [0029] refunds made in high value items and in high risk retail facilities; [0030] repeatedly refunding with the same driver with a high refund amount; [0031] repeatedly cancelling orders by the same sales representative after pickup completed; [0032] repeated refunds of same item; [0033] a spike in refunds; [0034] refunds at a higher price than paid; [0035] claims of damaged goods; [0036] claims of lost or undelivered goods; and [0037] others. [0038] Transaction metrics are aggregated at the customer level and then across the customer base. For example, all transactions for a particular customer ID are aggregated to identify all transactions and amounts associated with each customer ID”; Fig 3, ¶49: “As indicated in FIG. 3, the various transaction metrics include metrics based on common risk key performance indicators (KPIs) 301, secondary risk factors 304, multi-party-collusion risk factors 306, and other suspicious behaviors 308”; Fig 7, #700 Fraud Risk Scoring Tool; ¶55: “computation engine 760 performs computations described herein, such as determining, based at least on sales data 710 and refund data 730, a plurality of aggregated transaction metric data sets 750 indexed with one of the plurality of customer IDs 716, wherein plurality of aggregated transaction metric data sets 750 includes at least a return amount data set 752, a return frequency data set 754, and a return rate data set 756”) Flores and Karmakar are directed to the same field of endeavor since they are related to method/system/techniques for analyzing customer return/refund transaction history for managing fraud avoidance in a computing environment. 
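As an aside for readers tracing the cited Karmakar passages: the flow quoted above (transaction metrics aggregated per customer ID in operation 202, then scored with weights) can be sketched roughly as follows. The metric names, weights, and sample transactions are hypothetical illustrations, not values taken from the reference.

```python
from collections import defaultdict

# Hypothetical transaction records: (customer_id, metric_name, value).
# The metric names and weights below are illustrative stand-ins for
# Karmakar's KPIs (refund amount, refund frequency, refund rate, etc.).
TRANSACTIONS = [
    ("cust-1", "refund_amount", 120.00),
    ("cust-1", "refund_amount", 300.00),
    ("cust-1", "goods_not_returned", 1),
    ("cust-2", "refund_amount", 15.00),
]
WEIGHTS = {"refund_amount": 0.01, "goods_not_returned": 5.0}

def aggregate_by_customer(transactions):
    """Aggregate each metric at the customer level (cf. operation 202)."""
    agg = defaultdict(lambda: defaultdict(float))
    for cust_id, metric, value in transactions:
        agg[cust_id][metric] += value
    return agg

def risk_score(metrics, weights):
    """Apply scoring weights to a customer's aggregated metrics."""
    return sum(weights.get(m, 0.0) * v for m, v in metrics.items())

scores = {cust: risk_score(m, WEIGHTS)
          for cust, m in aggregate_by_customer(TRANSACTIONS).items()}
```

Under these made-up weights, cust-1 scores 0.01 × 420 + 5.0 × 1 = 9.2 and cust-2 scores 0.15; a real system would, per Karmakar, also aggregate across the customer base to calibrate the weights.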
Therefore, it would have been obvious to one of ordinary skill in the art to combine the customized return techniques/rules of Flores with the machine learning component/models and fraud risk scoring tool as taught by Karmakar since it allows for determining a customer risk score based at least on the scoring weights applied to corresponding transaction metrics (Fig 2, Fig 3, ¶22-¶40, ¶54-¶56).

With respect to Claim 11, Flores and Karmakar disclose all of the above limitations. Karmakar further discloses wherein the product attributes include a value of the product and a frequency of fraudulent activity associated with the product (Fig 2, Fig 3, ¶22: “FIG. 2A illustrates an exemplary high level process flow 200 for fraud risk scoring. Aggregation of transaction metrics occurs during operation 202. Some various transaction metrics are illustrated in FIG. 3, and include: [0023] high refund amount and/or frequency; [0024] high risky cancellations; [0025] high refund rate; high rate of goods not returned (GNR) refunds; [0026] high refunds through web, call center or doorstep; [0027] high rate of repetition; [0028] high-refunding retail facilities, cities, postal codes; [0029] refunds made in high value items and in high risk retail facilities; [0030] repeatedly refunding with the same driver with a high refund amount; [0031] repeatedly cancelling orders by the same sales representative after pickup completed; [0032] repeated refunds of same item; [0033] a spike in refunds; [0034] refunds at a higher price than paid; [0035] claims of damaged goods; [0036] claims of lost or undelivered goods; and [0037] others. [0038] Transaction metrics are aggregated at the customer level and then across the customer base. For example, all transactions for a particular customer ID are aggregated to identify all transactions and amounts associated with each customer ID”; Fig 3, ¶49: “As indicated in FIG.
3, the various transaction metrics include metrics based on common risk key performance indicators (KPIs) 301, secondary risk factors 304, multi-party-collusion risk factors 306, and other suspicious behaviors 308”; Fig 7, #700 Fraud Risk Scoring Tool; ¶55: “computation engine 760 performs computations described herein, such as determining, based at least on sales data 710 and refund data 730, a plurality of aggregated transaction metric data sets 750 indexed with one of the plurality of customer IDs 716, wherein plurality of aggregated transaction metric data sets 750 includes at least a return amount data set 752, a return frequency data set 754, and a return rate data set 756”) Flores and Karmakar are directed to the same field of endeavor since they are related to method/system/techniques for analyzing customer return/refund transaction history for managing fraud avoidance in a computing environment. Therefore, it would have been obvious to one of ordinary skill in the art to combine the customized return techniques/rules of Flores with the machine learning component/models and fraud risk scoring tool as taught by Karmakar since it allows for determining a customer risk score based at least on the scoring weights applied to corresponding transaction metrics (Fig 2, Fig 3, ¶22-¶40, ¶54-¶56).

With respect to Claims 16 and 22, Flores and Karmakar disclose all of the above limitations. Flores further discloses wherein the risk score output is updated in real time based on the customer attributes of the particular customer (¶79: “In some examples, the user's return-trust score is updated 320 based on current returns transactions in real-time. In these non-limiting examples, each time a user completes a return transaction successfully, the return-trust score 314 is updated 320 to increase the score.
Likewise, if the user attempts a return transaction that is rejected or fails to complete due to a problem with the item, receipt of other issue associated with the return, the return-trust is updated to reflect a lower level of returns-related trust for the user”) Flores also discloses wherein the customer attributes include a customer profile, historical sales order metrics, and historical return metrics from a customer attribute database. (Fig 9, ¶104: “FIG. 9 is an exemplary block diagram illustrating transaction history data 130. The transaction history data 130 includes item return history 902 and/or item purchase history 904 of a selected user. The transaction history data can optionally also include a method of payment used during a transaction, a credit score of the user, and/or any issues associated with attempted item returns. Return issues can include lack of a receipt, lack of packaging associated with an item, loss of the item to be returned, damaged/broken item, etc.”; ¶105: “the item purchase history 904 is used to infer a returns profile for a user. The item purchase history 904 is utilized as first factor of building a return-trust value for the user”)

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Stashluk et al., US Patent Application Publication No. US 2006/0149577 A1, “System and Method for the Customized Processing of Returned Merchandise”, relating to a method and system for the customized processing of returned merchandise.

Brow, US Patent Application Publication No. US 2021/0383319 A1, “Device for Use with an Automated Secured Package Delivery System”, relating to an improved device for use in conjunction with a parcel receptacle apparatus in an automated secured package delivery system which can provide simultaneous confirmation and data to the purchaser, shipper and others.
Petri et al., US Patent No. 9,785,988 B2, “In-Application Commerce System and Method with Fraud Prevention, Management and Control”, relating to an in-application solution which features fraud detection with user behavior tracking and fraud controls that limit the features that are offered to a user.

Conclusion

Any inquiry of a general nature or relating to the status of this application or concerning this communication or earlier communications from the Examiner should be directed to Kimberly L. Evans, whose telephone number is 571.270.3929. The Examiner can normally be reached on Monday-Friday, 9:30am-5:00pm. If attempts to reach the examiner by telephone are unsuccessful, the Examiner’s supervisor, Lynda Jasmin, can be reached at 571.272.6782. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://portal.uspto.gov/external/portal/pair <http://pair-direct.uspto.gov >. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866.217.9197 (toll-free). Any response to this action should be mailed to: Commissioner of Patents and Trademarks, P.O. Box 1450, Alexandria, VA 22313-1450, or faxed to 571-273-8300. Hand-delivered responses should be brought to the United States Patent and Trademark Office Customer Service Window: Randolph Building, 401 Dulany Street, Alexandria, VA 22314.

/KIMBERLY L EVANS/
Examiner, Art Unit 3629

/LYNDA JASMIN/
Supervisory Patent Examiner, Art Unit 3629
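For readers mapping the cited Flores passages (¶79, ¶98-¶99) to the claim language, the real-time return-trust score update can be sketched roughly as follows. The point values, threshold, and function names are hypothetical, not taken from the reference.

```python
# Minimal sketch of Flores-style real-time return-trust scoring (¶79):
# the score rises after each successful return and falls after a
# rejected/failed one. Point values and the threshold are hypothetical.
SUCCESS_POINTS = 5        # added when a return completes successfully
FAILURE_POINTS = 10       # subtracted when a return is rejected/fails
ASSIST_THRESHOLD = 50     # below this, route the user to assisted returns

def update_trust_score(score, return_succeeded):
    """Adjust the return-trust score after a single return transaction."""
    if return_succeeded:
        return score + SUCCESS_POINTS
    return max(0, score - FAILURE_POINTS)

def return_option(score):
    """Map the current score to a supervision level for the next return."""
    return "self-service" if score >= ASSIST_THRESHOLD else "assisted"

score = update_trust_score(48, return_succeeded=True)  # 48 -> 53
option = return_option(score)                          # "self-service"
```

The threshold/weight interplay Flores describes (¶98-¶99) would replace the flat adjustments here with rule-specific weights; the sketch only shows the real-time update loop the rejection cites for Claims 16 and 22.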

Prosecution Timeline

Jan 30, 2020
Application Filed
Dec 04, 2021
Non-Final Rejection — §103
Feb 23, 2022
Interview Requested
Mar 03, 2022
Applicant Interview (Telephonic)
Mar 07, 2022
Examiner Interview Summary
Mar 09, 2022
Response Filed
Jun 04, 2022
Final Rejection — §103
Aug 15, 2022
Applicant Interview (Telephonic)
Aug 16, 2022
Examiner Interview Summary
Sep 13, 2022
Request for Continued Examination
Sep 22, 2022
Response after Non-Final Action
Sep 29, 2022
Non-Final Rejection — §103
Jan 06, 2023
Response Filed
Jan 06, 2023
Applicant Interview (Telephonic)
Jan 10, 2023
Examiner Interview Summary
May 20, 2023
Final Rejection — §103
Aug 29, 2023
Applicant Interview (Telephonic)
Aug 31, 2023
Request for Continued Examination
Sep 01, 2023
Response after Non-Final Action
Sep 09, 2023
Non-Final Rejection — §103
Jan 16, 2024
Interview Requested
Jan 16, 2024
Response Filed
May 16, 2024
Final Rejection — §103
Aug 02, 2024
Applicant Interview (Telephonic)
Aug 13, 2024
Examiner Interview Summary
Aug 19, 2024
Request for Continued Examination
Aug 20, 2024
Response after Non-Final Action
Sep 23, 2024
Non-Final Rejection — §103
Dec 26, 2024
Response Filed
Apr 05, 2025
Final Rejection — §103
Jun 25, 2025
Interview Requested
Jul 09, 2025
Applicant Interview (Telephonic)
Jul 12, 2025
Examiner Interview Summary
Jul 14, 2025
Request for Continued Examination
Jul 17, 2025
Response after Non-Final Action
Oct 18, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602661
SYSTEM FOR SEARCHING AND CORRELATING ONLINE ACTIVITY WITH INDIVIDUAL CLASSIFICATION FACTORS
2y 5m to grant · Granted Apr 14, 2026
Patent 12277615
DETECTING AND VALIDATING IMPROPER RESIDENCY STATUS THROUGH DATA MINING, NATURAL LANGUAGE PROCESSING, AND MACHINE LEARNING
2y 5m to grant · Granted Apr 15, 2025
Patent 12118558
ESTIMATING QUANTILE VALUES FOR REDUCED MEMORY AND/OR STORAGE UTILIZATION AND FASTER PROCESSING TIME IN FRAUD DETECTION SYSTEMS
2y 5m to grant · Granted Oct 15, 2024
Patent 12056745
Machine-Learning Driven Data Analysis and Reminders
2y 5m to grant · Granted Aug 06, 2024
Patent 11990213
METHODS AND SYSTEMS FOR VISUALIZING PATIENT POPULATION DATA
2y 5m to grant · Granted May 21, 2024
Study what changed in these cases to get past this examiner. Based on the examiner's 5 most recent grants.


Prosecution Projections

9-10
Expected OA Rounds
12%
Grant Probability
26%
With Interview (+13.4%)
7y 0m
Median Time to Grant
High
PTA Risk
Based on 362 resolved cases by this examiner. Grant probability derived from career allow rate.
