Prosecution Insights
Last updated: April 19, 2026
Application No. 17/811,128

REAL TIME VISUAL FEEDBACK FOR AUGMENTED REALITY MAP ROUTING AND ITEM SELECTION

Final Rejection: §101, §103, §112
Filed
Jul 07, 2022
Examiner
RAMPHAL, LATASHA DEVI
Art Unit
3688
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Capital One Services LLC
OA Round
4 (Final)
Grant Probability: 34% (At Risk)
Expected OA Rounds: 5-6
Time to Grant: 3y 11m
Grant Probability With Interview: 83%

Examiner Intelligence

Grants only 34% of cases
Career Allow Rate: 34% (65 granted / 193 resolved; -18.3% vs TC avg)
Strong +49% interview lift
Interview Lift: +49.0% among resolved cases with interview
Typical timeline: 3y 11m avg prosecution; 30 currently pending
Career history: 223 total applications across all art units

Statute-Specific Performance

§101: 31.7% (-8.3% vs TC avg)
§103: 32.0% (-8.0% vs TC avg)
§102: 13.4% (-26.6% vs TC avg)
§112: 18.3% (-21.7% vs TC avg)
Tech Center average estimate shown for comparison • Based on career data from 193 resolved cases
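The dashboard figures above are simple ratios over the examiner's resolved cases. As a sketch of the arithmetic (only the 65 granted / 193 resolved counts come from the page; the Tech Center baseline and the interview-split rates are assumed values chosen to match the displayed deltas):

```python
# Reproducing the examiner-statistics arithmetic shown above.
# Counts (65 granted / 193 resolved) are from the page; the TC 3600
# baseline of 52.0% and the with/without-interview rates are assumptions.
granted, resolved = 65, 193

career_allow_rate = 100 * granted / resolved        # ~33.7, displayed as 34%
tc_avg = 52.0                                       # assumed Tech Center average
delta_vs_tc = round(career_allow_rate - tc_avg, 1)  # points below TC average

# Interview lift: relative change in allowance rate for resolved cases
# with an interview vs without (these two rates are hypothetical).
rate_without, rate_with = 0.29, 0.432
interview_lift = round(100 * (rate_with - rate_without) / rate_without, 1)

print(round(career_allow_rate), delta_vs_tc, interview_lift)  # prints: 34 -18.3 49.0
```

The "With Interview: 83%" tile is a modeled grant probability rather than a raw ratio, so it is not derivable from these counts alone.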

Office Action

§101 §103 §112
DETAILED ACTION

This rejection is in response to Amendments filed 07/11/2025. Claims 1-2, 4-5, 7-12, 14-17, and 19-24 are currently pending and have been examined. Claims 3, 6, 13, and 18 are cancelled.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments, see page 15, filed 07/11/2025, with respect to 35 U.S.C. 112(b) have been fully considered and are persuasive. The 35 U.S.C. 112(b) rejection has been withdrawn. Applicant's arguments filed 07/11/2025 have been fully considered but they are not persuasive. With respect to applicant’s arguments on pages 16-18 of remarks filed 07/11/2025 that the claims do not recite certain methods of organizing human activity because the claims recite non-human activities (e.g., an augmented reality map and a system comprising processors coupled to memories, a device, and computer vision techniques), Examiner respectfully disagrees. The “certain methods of organizing human activity” abstract idea grouping is defined as fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions). The "mental processes" abstract idea grouping is defined as concepts performed in the human mind, and examples of mental processes include observations, evaluations, judgments, and opinions. See MPEP § 2106.04(a)(2)(II & III).
The augmented reality map and a system comprising processors coupled to memories, device, and computer vision techniques are not considered as part of the abstract idea under the grouping of certain methods of organizing human activity; they are instead considered as additional elements. The enumerated grouping under certain methods of organizing human activity is not defined as concepts performed in the human mind or human activities. The enumerated grouping of mental processes is defined as concepts performed in the human mind. However, the claims have not been grouped under mental processes. Therefore, the claims are directed to certain methods of organizing human activity rather than mental processes. With respect to applicant’s arguments on pages 18-23 of remarks filed 07/11/2025 that the claims are directed to improvements in the technical field of real time visual feedback for AR by making identification of items quicker for users and conserving the time of routing the user throughout the location to locate the items, Examiner respectfully disagrees. In determining patent eligibility, examiners should consider whether the claim "purport(s) to improve the functioning of the computer itself" or "any other technology or technical field." If it is asserted that the invention improves upon conventional functioning of a computer, or upon conventional technology or technological processes, a technical explanation as to how to implement the invention should be present in the specification. That is, the disclosure must provide sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. The specification need not explicitly set forth the improvement, but it must describe the invention such that the improvement would be apparent to one of ordinary skill in the art.
Conversely, if the specification explicitly sets forth an improvement but in a conclusory manner (i.e., a bare assertion of an improvement without the detail necessary to be apparent to a person of ordinary skill in the art), the examiner should not determine the claim improves technology. An indication that the claimed invention provides an improvement can include a discussion in the specification that identifies a technical problem and explains the details of an unconventional technical solution expressed in the claim, or identifies technical improvements realized by the claim over the prior art. See MPEP § 2106.05(a). To show that the involvement of a computer assists in improving the technology, the claims must recite the details regarding how a computer aids the method, the extent to which the computer aids the method, or the significance of a computer to the performance of the method. Merely adding generic computer components to perform the method is not sufficient. Thus, the claim must include more than mere instructions to perform the method on a generic component or machinery to qualify as an improvement to an existing technology. See MPEP § 2106.05(f). The claims are directed to solving a commercial problem of making it faster and quicker for a user to identify items and route users to items in a location rather than solving a problem rooted in technology. Applicant’s specification in paragraph [0016] describes that using the server device conserves the time it takes to perform the task of locating a replacement item within a location. The specification fails to describe technical details necessary to improve technology. Merely adding the server device to perform the identification of the items and route a user to a location is not sufficient detail to show how the claims improve technology.
Therefore, the claims are not integrated into a practical application because the claims do not recite an improvement to technology but rather use the computer as a tool to recommend items. With respect to applicant’s arguments on pages 23-24 of remarks filed 07/11/2025 that Chachek and Franey do not teach the amendments to independent claims based on reasons from the interview, Examiner respectfully disagrees. Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. Applicant’s arguments are unpersuasive because applicant does not explain how the claims distinguish from the prior art references.

Claim Rejections - 35 USC § 112(b)

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention. The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention. Claims 1-2, 4-5, 7-12, 14-17, and 19-24 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Independent claims 1, 9, and 17 recite the limitation "the one or more features of the other item depicted in the image." There is insufficient antecedent basis for this limitation in the claim. Appropriate correction or clarification is required.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title. Claims 1-2, 4-5, 7-12, 14-17, and 19-24 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (an abstract idea) without significantly more. Under Step 1 of the Subject Matter Eligibility Test, it must be considered whether the claims are directed to one of the four statutory classes of invention. See MPEP § 2106. In the instant case, claims 1-2, 4-5, 7-8, 21, and 23 are directed to a system, claims 9-12, 14-16 and 24 are directed to a method, and claims 17, 19-20, and 22 are directed to a non-transitory medium, each of which falls within one of the four statutory categories of invention (process/apparatus). Accordingly, the claims will be further analyzed under revised Step 2. Under Step 2A (prong 1) of the Subject Matter Eligibility Test, it must be considered whether the claims recite a judicial exception; if so, it must then be determined in Prong Two whether the recited judicial exception is integrated into a practical application of that exception. If the claim recites a judicial exception (i.e., an abstract idea), the claim requires further analysis in Prong Two. One of the enumerated groupings of abstract ideas is defined as certain methods of organizing human activity that includes fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions). See MPEP § 2106.04(a)(2).
Representative independent claim 1 recites the abstract idea of: receive an indication of one or more items associated with a task, wherein the one or more items are associated with an entity location; determine a route through the entity location to an item location for at least one item included in the one or more items; transmit, …, routing AR information associated with the route …, wherein the routing AR information includes an indication of where AR content is to be inserted and overlayed in visual media of the entity location; receive, …, an image captured by the device that is associated with a first item of the one or more items; analyze, …, the image to determine one or more recommended items, associated with the first item, depicted in the image and one or more probability scores associated with each item of the one or more recommended items, wherein the one or more probability scores indicate a likelihood that each of the one or more recommended items would be accepted as a replacement for the first item based on an output … generate AR feedback information associated with the visual media, wherein the AR feedback information includes colors of respective probability scores located proximate to the one or more recommended items in the visual media, wherein a first color surrounds a first recommended item, of the one or more recommended items, associated with a first probability score, of the one or more probability scores, satisfying a first threshold, and wherein a second color surrounds a second recommended item, of the one or more recommended items, associated with a second probability score, of the one or more probability scores, satisfying a second threshold; and transmit, …, item AR information associated with the image.
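Mechanically, the color-coding limitation recited above is a threshold comparison on each recommended item's probability score. A minimal sketch of that behavior (the threshold values, color names, and item names here are hypothetical; the claim recites no specific values):

```python
# Hypothetical sketch of the claimed color-coding limitation: each
# recommended item's acceptance probability is compared against two
# thresholds, and a color is chosen to surround the item in the AR view.
FIRST_THRESHOLD = 0.75   # assumed value; the claim names no numbers
SECOND_THRESHOLD = 0.40  # assumed value

def feedback_color(probability: float) -> str:
    """Pick the border color for one recommended item."""
    if probability >= FIRST_THRESHOLD:
        return "green"   # first color: high likelihood of acceptance
    if probability >= SECOND_THRESHOLD:
        return "yellow"  # second color: moderate likelihood
    return "none"        # below both thresholds: no highlight

recommended = {"item_a": 0.91, "item_b": 0.55, "item_c": 0.2}
ar_feedback = {item: feedback_color(p) for item, p in recommended.items()}
print(ar_feedback)  # {'item_a': 'green', 'item_b': 'yellow', 'item_c': 'none'}
```

The eligibility dispute turns on whether this kind of scoring-and-display step is an abstract sales recommendation or a technical AR improvement, not on the arithmetic itself.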
This arrangement amounts to certain methods of organizing human activity associated with sales activities and commercial interactions involving item recommendations by receiving a task for an item, determining a route to find the item within an entity location, receiving and analyzing an image of the item to determine the items within the image and the probability of the replacement item being accepted, generating information including colors of probability scores satisfying a threshold, and transmitting item information from the image to identify the items or recommended items. Such concepts have been considered ineligible certain methods of organizing human activity by the Courts. See MPEP § 2106. Step 2A (prong 2) of the Subject Matter Eligibility Test is the next step in the eligibility analysis and looks at whether the abstract idea is integrated into a practical application. This requires an additional element or combination of additional elements in the claims to apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the exception. See MPEP § 2106. In this instance, the claims recite the additional elements such as: A system for providing real time visual feedback for augmented reality (AR) map routing and item selection, the system comprising: one or more memories; and one or more processors, coupled to the one or more memories, configured to:… to a device... to cause an AR view of the route to be displayed by the device… from the device;… using a computer vision technique, … of a machine learning model…;… to the device… to cause AR feedback information to be displayed by the device in connection with the image.
(Claim 1); the one or more processors (Claims 2, 4-5, 8, and 23); from the device (Claims 5, 8, and 16); real time visual feedback for augmented reality (AR) map routing…by a device…; by the device…;…by the device and to a client device … to be displayed by the client device;… by the device and from the client device…; … by the device…, …a machine learning model,…; by the device…; by the device and from the client device… to cause AR feedback information to be displayed by the client device in connection with the visual media (Claim 9); to the client device,… to cause the client device to display an indication of the next waypoint associated with the route (Claims 11 and 21); A non-transitory computer-readable medium storing a set of instructions, the set of instructions comprising: one or more instructions that, when executed by one or more processors of a client device, cause the client device to:… for display by the client device; …by the client device…;…a machine learning model,…(Claim 17); wherein the one or more instructions, that cause the client device to provide the AR view of the route for display, cause the client device to:… (Claim 19); wherein the one or more instructions further cause the client device to: transmit, to a user device … and receive, from the user device,…(Claim 20); wherein the one or more instructions further cause the client device to: (Claim 22). However, these elements do not amount to an improvement in the functioning of a computer or any other technology or technical field, apply the judicial exception with, or by use of, a particular machine, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.
Independent claims and dependent claims also fail to recite elements which amount to an improvement in the functioning of a computer or any other technology or technical field, apply the judicial exception with, or by use of, a particular machine, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception. For example, independent claims and dependent claims are directed to the abstract idea itself and do not amount to an integration according to any one of the considerations above. Step 2B is the next step in the eligibility analysis and evaluates whether the claims recite additional elements that amount to an inventive concept (i.e., “significantly more”) than the recited judicial exception. According to Office procedure, revised Step 2A overlaps with Step 2B, and thus, many of the considerations need not be re-evaluated in Step 2B because the answer will be the same. See MPEP § 2106. In Step 2A, several additional elements were identified as additional limitations: A system for providing real time visual feedback for augmented reality (AR) map routing and item selection, the system comprising: one or more memories; and one or more processors, coupled to the one or more memories, configured to:… to a device... to cause an AR view of the route to be displayed by the device… from the device;… using a computer vision technique, … of a machine learning model…;… to the device… to cause AR feedback information to be displayed by the device in connection with the image.
(Claim 1); the one or more processors (Claims 2, 4-5, 8, and 23); from the device (Claims 5, 8, and 16); real time visual feedback for augmented reality (AR) map routing…by a device…; by the device…;…by the device and to a client device … to be displayed by the client device;… by the device and from the client device…; … by the device…, …a machine learning model,…; by the device…; by the device and from the client device… to cause AR feedback information to be displayed by the client device in connection with the visual media (Claim 9); to the client device,… to cause the client device to display an indication of the next waypoint associated with the route (Claims 11 and 21); A non-transitory computer-readable medium storing a set of instructions, the set of instructions comprising: one or more instructions that, when executed by one or more processors of a client device, cause the client device to:… for display by the client device; …by the client device…;…a machine learning model,…(Claim 17); wherein the one or more instructions, that cause the client device to provide the AR view of the route for display, cause the client device to:… (Claim 19); wherein the one or more instructions further cause the client device to: transmit, to a user device … and receive, from the user device,…(Claim 20); wherein the one or more instructions further cause the client device to: (Claim 22).
These additional limitations, including the limitations in the independent claims and dependent claims, do not amount to an inventive concept because the recitations above do not amount to an improvement in the functioning of a computer or any other technology or technical field, apply the judicial exception with, or by use of, a particular machine, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception. In addition, they were already analyzed under Step 2A and did not amount to a practical application of the abstract idea. For these reasons, the claims are rejected under 35 U.S.C. 101.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim(s) 1-2, 4-5, 7-12, 14-17, 19-22 and 24 are rejected under 35 U.S.C.
103 as being unpatentable over Chachek (US Pub. No. 20230118119 A1, hereinafter “Chachek”) in view of Franey et al. (US Pub. No. 2022/0237530 A1, hereinafter “Franey”) in further view of Belkin et al. (US 10817981 B1, hereinafter “Belkin”).

Regarding claims 1 and 9: Chachek discloses a system for providing real time visual feedback for augmented reality (AR) map routing and item selection, the system comprising: one or more memories; and one or more processors, coupled to the one or more memories, configured to (Chachek, [0004]: systems, devices, and methods for Augmented Reality (AR) based mapping of a venue and its real-time inventory): receive an indication of one or more items associated with a task, wherein the one or more items are associated with an entity location (Chachek, [0061]: a user may be inside the store (or other mapped environment) and may utilize a smartphone (or other portable electronic device) to request assistance with in-store navigation or wayfinding or route guidance); determine a route through the entity location to an item location for at least one item included in the one or more items; transmit, to a device, routing AR information associated with the route to cause an AR view of the route to be displayed by the device, wherein the routing AR information includes an indication of where AR content is to be inserted and overlayed in visual media of the entity location (Chachek, [0062]: (b) extract or deduce the precise current in-store location of the user; (c) determine a walking route from the current location to the destination, based on the Store Map and by utilizing a suitable route guidance algorithm; (d) generate turn-by-turn walking instructions for such route, and convey them to the user by animation; [0063]: the AR-based navigation instructions that are generated, displayed and/or conveyed to the user, may include AR-based arrows or indicators that are shown as an overlay on top of an aisle or shelf of products, which guide the
user to walk or move or turn to a particular direction in order to find a particular product; [0082]: an AR route indicates to the user where to turn based on store map); receive, from the device, an image captured by the device that is associated with a first item of the one or more items; analyze, using a computer vision technique, the image to determine one or more recommended items, associated with the first item, depicted in the image…(Chachek, [0063]: perform rapid real-time recognition of the products that are currently imaged in the field-of-view of the imager of the end-user device, to fetch or obtain or lookup or download product data for each recognized product to determine which sub-set of the imaged products; [0109]: image analysis and/or computerized vision algorithms; [0135]: recommended products); and transmit, to the device, item AR information associated with the image to cause AR feedback information to be displayed by the device in connection with the image (Chachek, [0063]: generate and display AR-based or VR-based emphasis of such sub-set of products by adding a textual label or a graphical indicator or an animated content; [0109]: image analysis and/or computerized vision algorithms; [0135]: recommended products; [0011]: identify products by analyzing image to detect all items within image).
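The Chachek workflow the examiner maps here is: recognize products in the current camera frame, keep the sub-set satisfying the user's query constraints, and mark that sub-set for AR emphasis. A hedged sketch of that flow (the product data, the constraint name, and the stub recognizer are all invented for illustration; the real reference uses computer-vision algorithms, not a hard-coded list):

```python
# Illustrative-only sketch of the recognize/filter/emphasize loop cited
# from Chachek [0063]. recognize_frame() stands in for the CV step.
def recognize_frame():
    # pretend CV output: products detected in the field of view
    return [
        {"name": "corn_flakes", "gluten_free": False},
        {"name": "rice_cereal", "gluten_free": True},
        {"name": "granola", "gluten_free": True},
    ]

def ar_emphasis(constraint):
    """Return AR overlay labels for recognized products meeting the constraint."""
    matches = [p["name"] for p in recognize_frame() if p[constraint]]
    return {name: "highlight" for name in matches}

overlay = ar_emphasis("gluten_free")
print(overlay)  # {'rice_cereal': 'highlight', 'granola': 'highlight'}
```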
Chachek does not teach: and one or more probability scores associated with each item of the one or more recommended items, wherein the one or more probability scores indicate a likelihood that each of the one or more recommended items would be accepted as a replacement for the first item based on an output of a machine learning model; generate AR feedback information associated with the visual media, wherein the AR feedback information includes colors of respective probability scores located proximate to the one or more recommended items in the visual media, wherein a first color surrounds a first recommended item, of the one or more recommended items, associated with a first probability score, of the one or more probability scores, satisfying a first threshold, and a second color surrounds a second recommended item, of the one or more recommended items, associated with a second probability score, of the one or more probability scores, satisfying a second threshold. However, Franey teaches: and one or more probability scores associated with each item of the one or more recommended items, wherein the one or more probability scores indicate a likelihood that each of the one or more recommended items would be accepted as a replacement for the first item based on an output of a machine learning model (Franey, [0016]: retrieve a set of possible available substitute product options for the missing desired product based in part on an acceptance probability score assigned or attached to each of the possible available product options; [0041]: machine learning to generate acceptance probability scores based on previous substitutions accepted by user; [0075]: apply the acceptance probability algorithm (e.g., machine learning algorithm) to the one or more substitute products to determine acceptance probability score for the potential substitute products identified for the missing product; [0014]: substitute product includes a replacement product). 
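The acceptance-probability idea the examiner cites from Franey [0041] is scoring substitutes from a user's prior accepted substitutions. As a hedged illustration of that concept only (this is not Franey's actual algorithm; the history data and item names are invented), a frequency-based scorer might look like:

```python
# Sketch of acceptance-probability scoring for substitute products:
# score each candidate by its share of the user's past accepted
# substitutions, then rank candidates by score.
from collections import Counter

def acceptance_scores(candidates, accepted_history):
    """Score each candidate by its share of past accepted substitutions."""
    counts = Counter(accepted_history)
    total = max(len(accepted_history), 1)
    return {c: counts[c] / total for c in candidates}

history = ["oat_milk", "soy_milk", "oat_milk", "oat_milk"]  # hypothetical user history
scores = acceptance_scores(["oat_milk", "soy_milk", "almond_milk"], history)
best = max(scores, key=scores.get)  # "oat_milk"
print(scores, best)
```

In Franey as cited, the scoring is produced by a machine learning model rather than a raw frequency count, but the output plays the same role: a per-candidate probability used to pick substitutes.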
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the recommending of items of Chachek so that a probability value is associated with the recommended item, as taught by Franey, because the results of such a modification would be predictable. Specifically, Chachek would continue to teach recommending items except that now a probability value is associated with the recommended item according to the teachings of Franey in order to efficiently and quickly select substitute items based on a probability score. This is a predictable result of the combination. (Franey, [0003] and [0016]). However, Belkin teaches: generate AR feedback information associated with the visual media, wherein the AR feedback information includes colors of respective probability scores located proximate to the one or more recommended items in the visual media, wherein a first color surrounds a first recommended item, of the one or more recommended items, associated with a first probability score, of the one or more probability scores, satisfying a first threshold, and a second color surrounds a second recommended item, of the one or more recommended items, associated with a second probability score, of the one or more probability scores, satisfying a second threshold (Belkin, C14, L30-62: determine score for each color that indicates a probability the color is selected from a plurality of colors to be used as accent color based on highest score; C6, L5-67: display content item based on user affinity (e.g. interest in object) and a border surrounding image of content item having the accent color is displayed; C7, L1-55: accent color refers to any color selected for an interface element to be displayed in conjunction with the content item and accent color selected based on having threshold).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the recommended items and probability of Chachek and Franey with generating AR feedback information including colors of respective probability scores located proximate to the one or more recommended items in the visual media and a first and second color surrounds a first and second recommended item associated with a first and second probability score satisfying a first and second threshold as taught by Belkin because the results of such a modification would be predictable. Specifically, Chachek would continue to teach the recommended items and probability except that now AR feedback information is generated including colors of respective probability scores located proximate to the one or more recommended items in the visual media and a first and second color surrounds a first and second recommended item associated with a first and second probability score satisfying a first and second threshold according to the teachings of Belkin in order to easily differentiate between certain types of items. This is a predictable result of the combination. (Belkin, C1, L25-45). 
Regarding claim 2: The combination of Chachek, Franey, and Belkin teaches the system of claim 1, wherein the one or more processors are further configured to: determine, for the first item, the one or more recommended items based on item information associated with the first item (Chachek, [0063]: fetch or obtain or lookup or download product data for each recognized product to determine which sub-set of the imaged products fulfill the constraints that the user provided based on product data for each recognized item; [0106]: indicating a user request to show only items that meet a particular filtering criterion or constraint or set of characteristics; [0096]: generate, for the particular user, an in-store smart or efficient or short shopping route based on purchase(s) history of the user, community suggestions (e.g., at least P persons have recommended today to purchase a particular product)).

Regarding claims 4 and 14: The combination of Chachek, Franey, and Belkin teaches the system of claim 1, wherein the one or more processors, to analyze the image, are configured to: identify one or more features of the first item depicted in the image; determine whether the one or more features match one or more recommended attributes associated with the first item, wherein the one or more recommended attributes are based on at least one of: the user information, or the exchange history information; and determine whether the first item depicted in the image is the at least one item based on whether the one or more features match the one or more recommended attributes (Chachek, [0059]: determine that it shows three boxes of “Corn Flakes”; and may optionally detect a logo or a brand-name which may be compared to or match with a list of product makers or product manufacturing; [0109]: image analysis and/or computerized vision algorithms and/or OCR operations of one or more images or frames captured by the camera; [0063]: perform rapid real-time recognition of the products that are
currently imaged in the field-of-view of the imager, to fetch or obtain product data for each recognized product to determine which sub-set of the imaged products fulfill the constraints that the user provided based on his Visual Search query; [0096]: recommend products based on past purchases and user limits or constraints; [0171]: image comparison (e.g., isolating a logo or a slogan that is shown on a box of cereal, and comparing it to a set of logos or slogans of various products as pre-defined in a database of products); [0121]: invariant feature matching of images).

Regarding claim 5: The combination of Chachek, Franey, and Belkin teaches the system of claim 1, wherein the one or more processors are further configured to: receive, from the device, an indication that the first item is unavailable, wherein the image is associated with an expected location associated with the first item (Chachek, [0112]: the Augmented Reality (AR) image 1802, which depicts stock images or stock photos of the missing products, optionally with a label or tag of their name and/or the fact that they are currently missing or out-of-stock.
In some embodiments, system may automatically perform computer vision analysis of the original image 1801, and may recognize or detect that there is an empty shelf space which is greater than a pre-defined threshold), and wherein the one or more processors, to analyze the image, are configured to: identify the one or more recommended items in the image, wherein the one or more recommended items are alternative items to the first item; and determine, based on at least one of user information or the exchange history information that is associated with the task, …each item of the one or more recommended items identified in the image (Chachek, [0135]: it may highlight or point to the user's favorite products, and/or may recommended and point to similar products or related products; [0063]: perform rapid real-time recognition of the products that are currently imaged in the field-of-view of the imager of the end-user device, to fetch or obtain or lookup or download product data for each recognized product to determine which sub-set of the imaged products fulfill the constraints that the user provided based on his Visual Search query; [0106]: indicating a user request to show only items that meet a particular filtering criterion or constraint or set of characteristics; [0126]: replacement item to item of interest). Chachek does not teach: the one or more probability scores associated with each item of the one or more recommended items (emphasis added). However, Franey teaches: the one or more probability scores associated with each item of the one or more recommended items (emphasis added) (Franey, [0016]: retrieve a set of possible available product options for the missing desired product based in part on an acceptance probability score assigned or attached to each of the possible available product options). The motivation to combine Chachek, Franey, and Belkin is the same as set forth above in claim 1. 
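The claim 5 rejection leans on Franey's acceptance probability score attached to each possible substitute product. A minimal sketch of that idea, filtering and ranking candidate replacements by a model-produced score, might look like the following; the `Substitute` type, the 0.5 threshold, and the function name are illustrative assumptions, not anything recited in the application or the references:

```python
from dataclasses import dataclass

@dataclass
class Substitute:
    name: str
    acceptance_probability: float  # hypothetically produced by a trained ML model


def rank_substitutes(candidates: list[Substitute],
                     threshold: float = 0.5) -> list[Substitute]:
    """Keep candidates whose acceptance probability meets the threshold,
    highest-scoring first (the filtering described in Franey [0016])."""
    viable = [c for c in candidates if c.acceptance_probability >= threshold]
    return sorted(viable, key=lambda c: c.acceptance_probability, reverse=True)
```

In this sketch, an out-of-stock item would yield a ranked list of viable replacements that the AR layer could then annotate.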
Regarding claim 7

The combination of Chachek, Franey, and Belkin teaches the system of claim 5, wherein the AR feedback information includes visual representations of respective …scores located proximate to the one or more recommended items in the image (Chachek, [0063]: proceeds to generate and displayed AR-based or VR-based emphasis of such sub-set of products by adding a textual label or a graphical indicator or an animated content; [0139]: Augmented with additional information or visual elements (graphics, animation, text, video, labels, tags, prices, filters, emphasis or highlighting of specific products or features, or the like). This may be performed by an AR Element Adding Unit 134, which may generate such additional elements and add them or overlay them onto a real-time imaging output of the venue or region based on items purchased in the past; [0086]: FIG. 9, which is an illustration of (AR) image 900 of another frame, which may be generated and/or utilized to demonstrate an AR overlay of content that indicates, for example, additional information about products, wherein the additional information is displayed as “popping out” perpendicularly relative to the shelf or the aisle with product; [0133]: percent of users that purchase items used to place AR promotions in a hot spot). Chachek does not teach: probability scores. However, Franey teaches: probability scores (Franey, [0016]: probability scores). The motivation to combine Chachek, Franey, and Belkin is the same as set forth above in claim 1.

Regarding claim 8

The combination of Chachek, Franey, and Belkin teaches the system of claim 1, wherein the one or more processors are further configured to: receive, from the device, an indication that the first item is unavailable (Chachek, [0112]: the Augmented Reality (AR) image 1802, which depicts stock images or stock photos of the missing products, optionally with a label or tag of their name and/or the fact that they are currently missing or out-of-stock.
In some embodiments, system may automatically perform computer vision analysis of the original image 1801, and may recognize or detect that there is an empty shelf space which is greater than a pre-defined threshold); and determine, for a recommended item of the one or more recommended items, at least one of: a quantity of the recommended item, one or more attributes of the recommended item, or a brand associated with the recommended item, wherein the one or more recommended items are replacement items for the first item (Chachek, [0135]: it may highlight or point to the user's favorite products, and/or may recommended and point to similar products or related products; [0063]: perform rapid real-time recognition of the products that are currently imaged in the field-of-view of the imager of the end-user device, to fetch or obtain or lookup or download product data for each recognized product to determine which sub-set of the imaged products fulfill the constraints that the user provided based on his Visual Search query; [0106]: indicating a user request to show only items that meet a particular filtering criterion or constraint or set of characteristics; [0096]: generate, for the particular user, an in-store smart or efficient or short shopping route; based on purchase(s) history of the user, community suggestions (e.g., at least P persons have recommended today to purchase a particular product based on predictions of what user is interested in buying; [0069]: percentage frequency that user travels to a product within the store to determine products of interest to the particular user; [0124]: persuading a user to purchase additional items (or additional quantities) relative to the original product that he intends to purchase; [0127]: provide the user with more information about brand products and content around the consumer). 
Regarding claims 10 and 22

The combination of Chachek, Franey, and Belkin teaches the method of claim 9, wherein determining the route comprises: obtaining layout information associated with the entity location; determining the item location for each item included in the one or more items based on the layout information; determining one or more waypoints, associated with the route, corresponding to the item location for each item included in the one or more items; and determining an order of the one or more waypoints based on the layout information (Chachek, [0057]: Store Map is created automatically or at least semi-automatically, by a computerized unit that collects the captured images or video frame of the environment; [0058]: The captured data enables the system to correctly stitch together the Store Map 101, optionally represented as a planogram, representing therein not only the aisles and corridors in which users walk but also the actual real-time inventory and placement of items and products on the shelves of the store based on geo-spatial locations; [0082]: the system looks up this particular product in the Store Map; and generates a walking route from the current location of the user to the destination product; and conveys this route to the user; [0083]: determine the walking route to the desired product destination, and generate and display on the user device an overlay AR element, or a set of AR elements, indicating the walking path).
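The routing steps mapped to claims 10 and 22 (item locations from layout information, waypoints, and an ordering of those waypoints) can be sketched with a simple greedy nearest-neighbor ordering over 2-D layout coordinates. This is one plausible stand-in for the unspecified route-ordering logic; the coordinate scheme and function name are assumptions for illustration only:

```python
import math


def order_waypoints(start: tuple[float, float],
                    item_locations: dict[str, tuple[float, float]]) -> list[str]:
    """Order item waypoints by repeatedly visiting the nearest remaining
    location, starting from the user's current position (a greedy
    approximation of the in-store shopping route)."""
    remaining = dict(item_locations)  # item name -> (x, y) in the store layout
    route: list[str] = []
    current = start
    while remaining:
        nearest = min(remaining, key=lambda name: math.dist(current, remaining[name]))
        route.append(nearest)
        current = remaining.pop(nearest)
    return route
```

A real system would route along aisles (e.g., with a graph search over the planogram) rather than straight-line distance, but the waypoint-ordering structure is the same.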
Regarding claims 11 and 21

The combination of Chachek, Franey, and Belkin teaches the method of claim 9, further comprising: determining, based on analyzing the visual media, that the first item depicted by the visual media is the at least one item; updating the routing AR information associated with the route to indicate that the first item has been obtained and to indicate a next waypoint associated with the route, wherein the next waypoint corresponds to a second item of the one or more items; and transmitting, to the client device, updated routing AR information to cause the client device to display an indication of the next waypoint associated with the route (Chachek, [0185]: taking images and recognizing their content, then inferring the indoor location based on the visual content analysis, and then constructing and updating product inventory maps and planograms based on the real-time data that was learned from captured images in the dynamically-changing environment of a retail store; [0135]: A user may create a shopping list or a wish-list, and the system generates a shopping route that leads the user to all the products on the list, navigating by following the highlighted areas on the on-screen compass; [0082]: the system looks up this particular product in the Store Map; and generates a walking route from the current location of the user to the destination product; and conveys this route to the user; [0083]: determine the walking route to the desired product destination, and generate and display on the user device an overlay AR element, or a set of AR elements, indicating the walking path).
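The route-update step of claims 11 and 21 (mark an item obtained, then surface the next waypoint) reduces to a small state transition over the ordered waypoint list. A minimal sketch, with hypothetical names and return conventions:

```python
from typing import Optional


def advance_route(route: list[str], obtained: str) -> Optional[str]:
    """Given an ordered waypoint list and the item just confirmed obtained
    (e.g., via image analysis), return the next waypoint to display in the
    AR view, or None when the item is last on the route or not on it."""
    if obtained not in route:
        return None
    idx = route.index(obtained)
    return route[idx + 1] if idx + 1 < len(route) else None
```

The client would then re-render the routing AR overlay pointing at the returned waypoint.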
Regarding claim 12

The combination of Chachek, Franey, and Belkin teaches the method of claim 9, wherein the visual media includes at least one of: an image, a video, or live stream media (Chachek, [0009]: analysis of an image that was just captured or streamed by the end-user device; [0091]: capturing images and/or video together with enablement of in-store localization data, to collect frames or images or videos of the store with their corresponding locations, and to stitch them and analyze them in order to automatically generate an AR/VR based map of the store and its entire structure and products).

Regarding claim 15

The combination of Chachek, Franey, and Belkin teaches the method of claim 14, wherein the one or more recommended attributes include at least one of: a size of the first item, a quantity of pieces associated with the first item, a ripeness level associated with the first item, or a color associated with the first item (Chachek, [0096]: suggest particular products based on parameters; [0080]: Other suitable parameters may be utilized for queries, and for constructing or tailoring results based on queries; for example, prices, price range, being on sale or at a discount, being part of a promotion (“buy one and get one free”), being a clearance item, being a discontinued item, being a new or just-received item or recently-launched product, being at a certain size or dimensions or weight).
Regarding claim 16

The combination of Chachek, Franey, and Belkin teaches the method of claim 9, further comprising: receiving, from the device, an indication that the first item is unavailable, wherein the visual media is associated with an expected location associated with the first item, and wherein analyzing the visual media comprises: identifying the one or more recommended items in the visual media, wherein the one or more recommended items are alternative items to the first item; determining, based on at least one of the user information or the exchange history information associated with a user that is associated with the task (Chachek, [0135]: it may highlight or point to the user's favorite products, and/or may recommended and point to similar products or related products; [0063]: perform rapid real-time recognition of the products that are currently imaged in the field-of-view of the imager of the end-user device, to fetch or obtain or lookup or download product data for each recognized product to determine which sub-set of the imaged products fulfill the constraints that the user provided based on his Visual Search query; [0106]: indicating a user request to show only items that meet a particular filtering criterion or constraint or set of characteristics; [0096]: generate, for the particular user, an in-store smart or efficient or short shopping route; based on purchase(s) history of the user, community suggestions (e.g., at least P persons have recommended today to purchase a particular product based on predictions of what user is interested in buying; [0069]: percentage frequency that user travels to a product within the store to determine products of interest to the particular user), … scores associated with each item of the one or more recommended items identified in the visual media; and generating the AR feedback information to include visual representations of respective … scores located proximate to the one or more recommended items in the
visual media (Chachek, [0063]: proceeds to generate and displayed AR-based emphasis of such sub-set of products by adding a textual label or a graphical indicator or an animated content; [0139]: Augmented with additional information or visual elements (e.g. text); [0086]: Reference is made to FIG. 9, which is an illustration of an Augmented Reality (AR) image 900 of another frame, which may be generated and/or utilized to demonstrate an AR overlay of content that indicates, for example, additional information about products, wherein the additional information is displayed as “popping out” perpendicularly relative to the shelf or the aisle). Chachek does not teach: …, the one or more probability scores, …, respective one or more probability scores… However, Franey teaches: …, the one or more probability scores, …, respective one or more probability scores… (Franey, [0016]: probability scores). The motivation to combine Chachek, Franey, and Belkin is the same as set forth above in claim 1.

Regarding claim 17

Chachek discloses a non-transitory computer-readable medium storing a set of instructions, the set of instructions comprising: one or more instructions that, when executed by one or more processors of a client device, cause the client device to (Chachek, [0194]: non-transitory storage medium; [0004]: systems, devices, and methods for Augmented Reality (AR) based mapping of a venue and its real-time inventory): receive an indication of one or more items associated with a task, wherein the one or more items are associated with an entity location (Chachek, [0061]: A user may be inside the store (or other mapped environment) and may utilize a smartphone (or other portable electronic device) to request assistance with in-store navigation or wayfinding or route guidance.
In a first example, the user inputs to his smartphone the query “I am looking right now at the Frosted Flakes shelf; how do I get from here to the Milk?”); obtain routing augmented reality (AR) information associated with a route through the entity location to an item location for at least one item included in the one or more items, wherein the routing AR information includes an indication of where AR content is to be inserted and overlayed in an image of the entity location; provide, based on the routing AR information, an AR view of the route for display by the client device (Chachek, [0062]: (b) extract or deduce the precise current in-store location of the user; (c) determine a walking route from the current location to the destination, based on the Store Map and by utilizing a suitable route guidance algorithm; (d) generate turn-by-turn walking instructions for such route, and convey them to the user by animation; [0063]: the AR-based navigation instructions that are generated, displayed and/or conveyed to the user, may include AR-based arrows or indicators that are shown as an overlay on top of an aisle or shelf of products, which guide the user to walk or move or turn to a particular direction in order to find a particular product; [0082]: an AR route indicates to the user where to turn based on store map); obtain item AR information associated with visual media captured by the client device that identifies one or more recommended items, associated with the at least one item, depicted by the visual media…(Chachek, [0063]: perform rapid real-time recognition of the products that are currently imaged in the field-of-view of the imager of the end-user device, to fetch or obtain or lookup or download product data for each recognized product to determine which sub-set of the imaged products fulfill the constraints that the user provided based on his Visual Search query; [0109]: image analysis and/or computerized vision algorithms); and provide, based on the item
AR information, AR feedback information for display in connection with the visual media (Chachek, [0063]: generate and displayed AR-based or VR-based emphasis of such sub-set of products by adding a textual label or a graphical indicator or an animated content; [0109]: image analysis and/or computerized vision algorithms). Chachek does not teach: and one or more probability scores associated with each of the one or more recommended items, wherein the one or more probability scores indicate a likelihood that each of one or more recommended items would be accepted as a replacement for an item, of the one or more items, based on an output of a machine learning model; generate AR feedback information associated with the visual media, wherein the AR feedback information includes colors of respective probability scores located proximate to the one or more recommended items in the visual media, wherein a first color surrounds a first recommended item, of the one or more recommended items, associated with a first probability score, of the one or more probability scores, satisfying a first threshold, and a second color surrounds a second recommended item, of the one or more recommended items, associated with a second probability score, of the one or more probability scores, satisfying a second threshold.
However, Franey teaches: and one or more probability scores associated with each of the one or more recommended items, wherein the one or more probability scores indicate a likelihood that each of one or more recommended items would be accepted as a replacement for an item, of the one or more items, based on an output of a machine learning model (Franey, [0016]: retrieve a set of possible available substitute product options for the missing desired product based in part on an acceptance probability score assigned or attached to each of the possible available product options; [0041]: machine learning to generate acceptance probability scores based on previous substitutions accepted by user; [0075]: apply the acceptance probability algorithm (e.g., machine learning algorithm) to the one or more substitute products to determine acceptance probability score for
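The two-threshold color scheme recited in claim 17 (a first color for recommended items whose probability score satisfies a first threshold, a second color for a second threshold) can be sketched as a simple mapping from score to overlay color. The specific colors and threshold values below are illustrative assumptions; the claim does not recite them:

```python
def overlay_color(probability: float,
                  first_threshold: float = 0.75,
                  second_threshold: float = 0.40) -> str:
    """Map an acceptance probability to a border color for the AR overlay,
    following the two-threshold scheme of claim 17 (values illustrative)."""
    if probability >= first_threshold:
        return "green"   # first color: likely accepted as a replacement
    if probability >= second_threshold:
        return "yellow"  # second color: a possible replacement
    return "gray"        # below both thresholds: no emphasis
```

The AR layer would draw the returned color around each recommended item detected in the visual media, proximate to its on-screen location.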

Prosecution Timeline

Jul 07, 2022
Application Filed
May 17, 2024
Non-Final Rejection — §101, §103, §112
Jul 16, 2024
Interview Requested
Jul 30, 2024
Applicant Interview (Telephonic)
Jul 30, 2024
Examiner Interview Summary
Aug 02, 2024
Response Filed
Nov 14, 2024
Final Rejection — §101, §103, §112
Dec 09, 2024
Interview Requested
Jan 15, 2025
Applicant Interview (Telephonic)
Jan 15, 2025
Examiner Interview Summary
Jan 17, 2025
Response after Non-Final Action
Feb 11, 2025
Request for Continued Examination
Feb 12, 2025
Response after Non-Final Action
Apr 16, 2025
Non-Final Rejection — §101, §103, §112
Jun 16, 2025
Interview Requested
Jul 03, 2025
Examiner Interview Summary
Jul 03, 2025
Applicant Interview (Telephonic)
Jul 11, 2025
Response Filed
Oct 10, 2025
Final Rejection — §101, §103, §112
Nov 17, 2025
Interview Requested
Dec 03, 2025
Applicant Interview (Telephonic)
Dec 03, 2025
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572964
NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUM AND SYSTEM PERFORMING SPECIFIC PROCESS WHICH ENABLES PAYMENT OF CHARGE OF ARTICLE
2y 5m to grant Granted Mar 10, 2026
Patent 12572934
SYSTEM AND METHOD FOR IMPLEMENTING AN EDGE QUEUING PLATFORM
2y 5m to grant Granted Mar 10, 2026
Patent 12561750
Barmaster Drink Delivery System
2y 5m to grant Granted Feb 24, 2026
Patent 12555149
QUEUE MANAGEMENT DEVICE FOR PROVIDING INFORMATION ABOUT ACCESS WAITING SCREEN AND METHOD THEREOF
2y 5m to grant Granted Feb 17, 2026
Patent 12548058
RETAIL STORE MOTION SENSOR METHODS
2y 5m to grant Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
34%
Grant Probability
83%
With Interview (+49.0%)
3y 11m
Median Time to Grant
High
PTA Risk
Based on 193 resolved cases by this examiner. Grant probability derived from career allow rate.
