Prosecution Insights
Last updated: April 19, 2026
Application No. 18/813,550

NAVIGATION PATHS FOR DIRECTING USERS TO FOOD ITEMS BASED ON MEAL PLANS

Non-Final Office Action: §103, Double Patenting

Filed: Aug 23, 2024
Examiner: LI, GRACE Q
Art Unit: 2618
Tech Center: 2600 (Communications)
Assignee: Micron Technology, Inc.
OA Round: 1 (Non-Final)
Grant Probability: 77% (Favorable)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 2y 5m
Grant Probability with Interview: 90%

Examiner Intelligence

Career Allow Rate: 77% (270 granted / 351 resolved), +14.9% vs Tech Center average (above average)
Interview Lift: +12.8% (moderate), comparing resolved cases with vs. without an interview
Typical Timeline: 2y 5m average prosecution; 35 applications currently pending
Career History: 386 total applications across all art units

Statute-Specific Performance

§101: 5.3% (-34.7% vs TC avg)
§103: 63.9% (+23.9% vs TC avg)
§102: 9.8% (-30.2% vs TC avg)
§112: 11.8% (-28.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 351 resolved cases.
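As a rough sanity check on the figures above, the short script below reproduces the dashboard's headline arithmetic. It assumes the dashboard's apparent definitions (allow rate = granted / resolved; each "vs TC avg" delta = examiner's per-statute figure minus the Tech Center average); the variable names are illustrative, not from the source tool.

```python
# Reproduce the dashboard's headline arithmetic.
# Assumed definitions: allow rate = granted / resolved;
# "vs TC avg" delta = examiner rate - Tech Center average.
granted, resolved = 270, 351

allow_rate = granted / resolved  # career allow rate
print(f"Career allow rate: {allow_rate:.0%}")

# Per-statute figures from the table: (examiner rate %, delta vs TC avg).
statutes = {
    "101": (5.3, -34.7),
    "103": (63.9, 23.9),
    "102": (9.8, -30.2),
    "112": (11.8, -28.2),
}
for statute, (rate, delta) in statutes.items():
    tc_avg = rate - delta  # implied Tech Center average
    print(f"\u00a7{statute}: examiner {rate}% vs TC avg {tc_avg:.1f}%")
```

Under these assumed definitions, 270/351 rounds to the reported 77%, and each per-statute delta implies the same Tech Center baseline, which suggests the deltas in the table are internally consistent.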

Office Action

Rejections: §103 and nonstatutory double patenting
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “one or more components configured to…” in claims 1-14, and “an extended reality (XR) device comprising one or more components configured to…” in claims 19-20.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. A review of the specification of the PG-Pub US 20240420431A1 of the instant application shows that the corresponding structure of the one or more components corresponds to “[0078] The XR device 805 may include various types of hardware, such as processors, sensors, cameras, input devices, and/or displays. [0091] one or more components of the server (e.g., processor 920, memory 930, input component 940, output component 950, and/or communication component 960) may perform or may be configured to perform one or more process blocks of FIG. 11”.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159.
See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA.

A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/forms/. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claim(s) 1, 15 is/are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 8, 18 of patent US 12141928. Although the claims at issue are not identical, they are not patentably distinct from each other because the pending claim(s) is/are an obvious variation of the patented claims, or entirely covered by the patented claims.
For example, claim 1 of the instant application discloses: An extended reality (XR) device, comprising: one or more components configured to receive, via an interface of the XR device, an input associated with meals of a user associated with the XR device; determine, based on the input, a meal plan for the user of the XR device, wherein the meal plan is associated with target meals; determine, based on recipes for the target meals, a list of food items for preparing the target meals associated with the meal plan; determine an additional target meal that is attainable using the food items included in the list of food items, wherein the additional target meal is aligned with the input; provide, via the interface, the list of food items, the target meals, and a recommendation that indicates the additional target meal; and provide, via the interface, an in-store navigation path to direct the user of the XR device via overlayed audio-visual cues to locations within a physical retail store to pick up the food items.

These limitations are all disclosed by claim 8 of the patent US 12141928. Therefore, claim 1 of the instant application is covered by claim 8 of the patent US 12141928 and is not patentably distinct from that patent claim. The following table illustrates a comparative mapping between the limitations of claim 1 of the instant application and claim 8 of patent US 12141928.
Claim 1 of the Instant Application 18813550 mapped to Claim 8 of the Patent 12141928:

Instant claim 1: An extended reality (XR) device, comprising: one or more components configured to:
Patent claim 8: An extended reality (XR) device, comprising: one or more components configured to:

Instant claim 1: receive, via an interface of the XR device, an input associated with meals of a user associated with the XR device;
Patent claim 8: receive, via an interface of the XR device, an input associated with meals of a user associated with the XR device;

Instant claim 1: determine, based on the input, a meal plan for the user of the XR device, wherein the meal plan is associated with target meals;
Patent claim 8: determine, based on the input, a meal plan for the user of the XR device, wherein the meal plan is associated with target meals;

Instant claim 1: determine, based on recipes for the target meals, a list of food items for preparing the target meals associated with the meal plan;
Patent claim 8: determine, based on recipes for the target meals, a list of food items for preparing the target meals associated with the meal plan;

Instant claim 1: determine an additional target meal that is attainable using the food items included in the list of food items, wherein the additional target meal is aligned with the input;
Patent claim 8: determine additional target meals that are attainable using the food items included in the list of food items, wherein the additional target meals are aligned with the input; and

Instant claim 1: provide, via the interface, the list of food items, the target meals, and a recommendation that indicates the additional target meal; and
Patent claim 8: provide, via the interface, the list of food items and the target meals; provide, via the interface, a recommendation that indicates the additional target meals;

Instant claim 1: provide, via the interface, an in-store navigation path to direct the user of the XR device via overlayed audio-visual cues to locations within a physical retail store to pick up the food items.
Patent claim 8: provide, via the interface, an in-store navigation path to direct the user of the XR device via overlayed audio-visual cues to locations within a physical retail store to pick up the food items; detect, using a camera of the XR device, a food item added to a shopping cart, wherein the food item is not included in the list of food items; determine a complexity level associated with updating the list of food items based on the food item added to the shopping cart; transmit, to a server, an indication of the list of food items and the food item added to the shopping cart based on the complexity level satisfying a threshold; and receive, from the server, an updated list of food items.

The following is a complete listing of the correspondence between the claims of the instant application 18813550 and the patent 12141928: instant claim 1 corresponds to patent claim 8; instant claim 15 corresponds to patent claim 18.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 14, 19, 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over McDevitt (US 20220021923) in view of KIM et al. (US 20220319666), and further in view of SHAH et al. (US 20240013287).

Regarding claim 1, McDevitt discloses An extended reality (XR) device, comprising: one or more components configured to (McDevitt, abstract, “A system and method is disclosed herein that provides a multi-device, multi-screen experience where original content, other content, and associated data can interact and flow between a primary display device and one or more secondary devices”. Claim 16, “wherein the primary content comprises at least one of: streaming content, on-demand content, live television content, audio content, video content, augmented reality content, or virtual reality content”. Therefore, for example, a primary display device displaying augmented reality content corresponds to an extended reality device):

receive, via an interface of the XR device, an input associated with meals of a user associated with the XR device (McDevitt, “[0066] this additional embodiment contemplates that a cooking show is selected by the user to watch on the TV as the primary source content, and when the cooking show passes through the data communication hub 106, the primary source content is recognized as the cooking show. [0068] the user utilizes the tablet to change the primary source content to programming on another TV channel”);

determine, based on the input, a meal plan for the user of the XR device, wherein the meal plan is associated with target meals (McDevitt, “[0062] Next, at step 420, the data communication hub 106 then communicates with the Internet and secondary systems 101, via Wi-Fi Connection 204 and router 105, and identifies information relating to the cooking show, such as a recipe for the item being prepared on the cooking show. Then, at step 425, the content processor 208 of the data communication hub 106 causes the identified recipe to be displayed on the display device 107 and/or one or more of the secondary devices 108, 109 and/or 110, as described above. The recipe, which is considered the secondary source content, can be displayed on top of (i.e., as an overlay), adjacent to, or in place of the primary source content (i.e. the cooking show)”);

determine, based on recipes for the target meals, a list of food items for preparing the target meals associated with the meal plan (McDevitt, “[0064] As further shown in FIG. 4, at step 430 the processor 205 of the data communication hub 106 accesses the local storage 207 to identify user profile data relating to the secondary source content being presented to the user. In the cooking recipe example, the processor 205 may determine what ingredients are necessary to make the recipe and also determine those ingredients that are currently in the user's possession as well as those ingredients that the user would need to complete the recipe”);

provide, via the interface, the list of food items and the target meals (McDevitt, “[0067] If more information is selected then a list of required ingredients and a detailed recipe is presented”).

On the other hand, McDevitt fails to explicitly disclose but KIM discloses determine an additional target meal that is attainable using the food items included in the list of food items, wherein the additional target meal is aligned with the input; provide, via the interface, a recommendation that indicates the additional target meal (KIM, “[0086] The item recommender 330 according to an embodiment may identify items usable to create the plan from among the items 331 repeatedly collected from outside through search or the like, based on the user information, e.g., the user profile 311.
For example, the item recommender 330 may identify recipes using foods edible by the user as the items usable to create the plan by excluding recipes using foods inedible by the user, based on the allergy information or the health status of the user which is included in the user profile 311. Alternatively, the item recommender 330 may identify recipes suitable for the user as the items usable to create the plan, based on a condition included in the user profile 311 and preset by the user to create the plan. In this case, the user profile 311 is information about the condition input by the user and preset to create the plan, and may further include, for example, the information obtained through the interfaces 210 and 220 of FIG. 2. [0139] The display 1210 displays data processed in the electronic device 1000. According to an embodiment, the display 1210 may display the plan created based on the input of the user”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined KIM and McDevitt; that is, adding the recipes of KIM in addition to the target meals of McDevitt. The motivation/suggestion would have been to provide a method capable of providing a plan suitable for a user in various circumstances (KIM, [0004]).

On the other hand, McDevitt in view of KIM fails to explicitly disclose but SHAH discloses provide, via the interface, an in-store navigation path to direct the user of the XR device via overlayed audio-visual cues to locations within a physical retail store to pick up the food items (SHAH, “[0001] In augmented reality, elements of the real-world environment are “augmented” by computer-generated or extracted input, such as sound, video, graphics, haptics, and/or global positioning system (GPS) data, among other examples. [0040] For example, AR content may be overlayed in the image indicating the item (e.g., “flour”) that is to be retrieved at the waypoint indicated by the AR waypoint. In this way, the user of the client device may quickly and easily identify the route (e.g., that is determined by the server device to be an efficient route for retrieving the one or more items) and/or locations to stop along the route to retrieve the one or more items”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined McDevitt, KIM and SHAH, to include all limitations of claim 1; that is, adding the route overlay of SHAH to the display system of McDevitt and KIM. The motivation/suggestion would have been to provide a system for providing real time visual feedback for augmented reality (AR) map routing and item selection (SHAH, [0002]).

Regarding claim(s) 19, 20, they recite similar limitations as claim 1, except they further disclose a system, comprising: an extended reality (XR) device and a server, and limitations in claim 1 are distributed to one of the XR device and the server, and data is transmitted between the XR device and the server. McDevitt further discloses “abstract, A system and method is disclosed herein that provides a multi-device, multi-screen experience where original content, other content, and associated data can interact and flow between a primary display device and one or more secondary devices.
[0037] In either embodiment, through this architecture the various devices are configured to work together to create a unified multi-device, multi-screen experience where primary source content, primary source content metadata, secondary source(s) content, secondary source(s) metadata and device control commands can interact and flow between the display device 107 (i.e., the primary display device) and the secondary devices (e.g., PC's 108, tablets 109, smartphones 110, and the like) as well as to the Internet and secondary systems 101 and back. Claim 16, wherein the primary content comprises at least one of: streaming content, on-demand content, live television content, audio content, video content, augmented reality content, or virtual reality content”. Therefore, as shown in fig. 1, McDevitt discloses an XR-server system and data transmission between the XR device and the server.

Regarding claim 14, McDevitt in view of KIM and SHAH discloses The XR device of claim 1. On the other hand, McDevitt in view of KIM fails to explicitly disclose but SHAH discloses wherein the in-store navigation path is a shortest path for visiting the locations within the physical retail store at which the food items are held (SHAH, “[0019] As shown in FIG. 1A, a first user (e.g., a customer, an employer, and/or another user) may use the user device to initiate a task. For example, the task may be associated with acquiring one or more items (e.g., food, clothing, cleaning supplies, home goods, and/or any other item). [0035] For example, as shown in FIG. 1A, the server device may determine to order the waypoints such that the route is a shortest length possible (e.g., such that the route does not double back on itself, or does not cross the entire entity layout multiple times)”). The same motivation as for claim 1 applies here.

Claim(s) 2 is/are rejected under 35 U.S.C. 103 as being unpatentable over McDevitt (US 20220021923) in view of KIM et al. (US 20220319666) and SHAH et al. (US 20240013287), and further in view of Leifer et al. (US 20190228856).

Regarding claim 2, McDevitt in view of KIM and SHAH discloses The XR device of claim 1. On the other hand, McDevitt in view of KIM and SHAH fails to explicitly disclose but Leifer discloses wherein the input includes one or more of: a calorie requirement, a meal composition, food preferences, or links to electronic pages describing potential target meals, and wherein the food preferences indicate one or more of: food allergies, food sensitivities, foods to be avoided, foods associated with an increased priority level, preferred food characteristics, or preferred cuisines (Leifer, fig. 1, “[0008] As shown in FIGS. 1-2, embodiments of a method 100 for improving food-related personalized for a user can include: determining user food-related preferences (e.g., health goals, taste preferences, dietary restrictions, etc.) associated with one or more users S110; collecting one or more dietary inputs (e.g., selections of meal options suitable for satisfying different food-related preferences; etc.) from one or more subject matter experts (SMEs) and/or other entities (e.g., human entities, non-human entities, entities associated with machine learning techniques and/or other computational processing methods, etc.) associated with the food-related personalization S120; determining personalized food parameters (e.g., personalize alternative meal options accommodating the user food-related preferences, etc.) for the user based on the user food-related preferences and the dietary inputs S130”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Leifer into the combination of McDevitt, KIM, and SHAH, to include all limitations of claim 2; that is, applying the input of user food-related preferences of Leifer to select the TV show of SHAH and McDevitt, KIM.
The motivation/suggestion would have been to provide methods and systems for improving food-related personalization to user needs (Leifer, [0004]).

Claim(s) 3 is/are rejected under 35 U.S.C. 103 as being unpatentable over McDevitt (US 20220021923) in view of KIM et al. (US 20220319666) and SHAH et al. (US 20240013287), and further in view of Kim2 et al. (US 20230196126).

Regarding claim 3, McDevitt in view of KIM and SHAH discloses The XR device of claim 1. On the other hand, McDevitt in view of KIM and SHAH fails to explicitly disclose but Kim2 discloses wherein the one or more components, to determine the list of food items, are configured to: generate a directed graph that includes a first set of nodes corresponding to a plurality of food items, a second set of nodes corresponding to a plurality of target meals, and a third set of nodes corresponding to intermediate steps between the plurality of food items and the plurality of target meals (Kim2, fig. 2, “[0004] Recipes typically start by specifying a set of ingredients used in the recipe, along with a number of steps describing how to combine and modify those ingredients to form the final dish…some recipes may instruct the user to perform certain tasks in parallel. [0021] A flow graph can represent the instructions as a “rooted” knowledge graphs, with the root node representing the result of following the instructions (e.g., a dish produced by a recipe), leaf nodes capturing the entities used in the instructions (e.g., the ingredients and equipment used in a recipe), and intermediates nodes and edges capturing information about the actions taking place to produce intermediate results (e.g., mixing together flour and water to form a batter). [0022] we present our approach to extract information from domain-specific procedural instructions—particularly, recipes from the domain of cooking—to convert them from natural language into flow graphs. [0027] It can also be seen how intermediate results in the recipe are used in subsequent actions, such as how the result of cranberry sauce being mashed in a bowl (labeled here as node A) is participating in the next action of stirring 214, 216, 218 in the other two ingredients to achieve the recipe output (node B)”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Kim2 into the combination of McDevitt, KIM and SHAH, to include all limitations of claim 3; that is, adding the knowledge graphs with nodes of Kim2 to the system of SHAH and McDevitt, KIM. The motivation/suggestion would have been that resources like ontologies or knowledge graphs, which are typically manually curated by domain experts, can provide authoritative knowledge about entities (e.g., ‘beef’ and ‘chicken’ are both a type of ‘meat’); this knowledge in turn can be useful to inform the process of extracting information from the instruction text as well as augmenting the information available in the resulting flow graph (Kim2, [0022]).

Claim(s) 4 is/are rejected under 35 U.S.C. 103 as being unpatentable over McDevitt (US 20220021923) in view of KIM et al. (US 20220319666) and SHAH et al. (US 20240013287), and further in view of Geisner et al. (US 20130085345).

Regarding claim 4, McDevitt in view of KIM and SHAH discloses The XR device of claim 1.
On the other hand, McDevitt in view of KIM and SHAH fails to explicitly disclose but Geisner discloses wherein the one or more components are configured to: detect, using a camera of the XR device, a food item in a field of view of the camera, wherein the food item is not included in the list of food items; identify nutritional information associated with the food item; determine, based on the nutritional information, whether the food item is aligned with food preferences associated for the user; and provide, via the interface, an alert indicating that the food item does not align with food preferences (Geisner, abstract, “A system provides a recommendation of food items to a user based on nutritional preferences of the user, using a head-mounted display device (HMDD) worn by the user. In a store, a forward-facing camera of the HMDD captures an image of a food item. The food item can be identified by the image, such as based on packaging of the food item. Nutritional parameters of the food item are compared to nutritional preferences of the user to determine whether the food item is recommended. The HMDD displays an augmented reality image to the user indicating whether the food item is recommended. If the food item is not recommended, a substitute food item can be identified. The nutritional preferences can indicate food allergies, preferences for low calorie foods and so forth. In a restaurant, the HMDD can recommend menu selections for a user”. Therefore, determining the food item is not recommended indicates the food item is not included in the list of food items).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Geisner into the combination of McDevitt, KIM and SHAH, to include all limitations of claim 4; that is, applying the determining whether a food item is recommended of Geisner to the system of SHAH and McDevitt, KIM. The motivation/suggestion would have been to assist a user in managing a food supply in one's home, and selecting foods such as when shopping in a store or while dining in a restaurant (Geisner, [0003]).

Claim(s) 5 is/are rejected under 35 U.S.C. 103 as being unpatentable over McDevitt (US 20220021923) in view of KIM et al. (US 20220319666) and SHAH et al. (US 20240013287), and further in view of Chaubard et al. (US 20180218351).

Regarding claim 5, McDevitt in view of KIM and SHAH discloses The XR device of claim 1, wherein the XR device has been disclosed. On the other hand, McDevitt in view of KIM and SHAH fails to explicitly disclose but Chaubard discloses wherein the one or more components are configured to: detect, using a camera of the device, a food item added to a shopping cart, wherein the food item is included in the list of food items; and provide, via the interface, an indication of remaining food items (Chaubard, abstract, “If the mobile shopping unit detects a change, the mobile shopping unit captures image data of the contents of the shopping cart using one or more cameras mounted to the shopping cart. The mobile shopping unit uses the image data to identify the item added to or removed from the cart. The mobile shopping unit applies a machine-learned item identification model to the image data received from the cameras to determine an item identifier for the added or removed item. When the mobile shopping unit determines the identifier for the added or removed item, the mobile shopping unit updates a contents list associated with the customer that stores the items currently collected by the customer. [0004] An automated checkout system maintains a contents list for a customer that describes items collected by the customer in a shopping cart”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Chaubard into the combination of McDevitt, KIM and SHAH, to include all limitations of claim 5. That is, adding the updating of the contents list of Chaubard to the XR system of SHAH and McDevitt, KIM. The motivation/suggestion would have been that items are automatically added to the contents list without direct user interaction with the automated checkout system, meaning that customers can more easily use such an automated checkout system (Chaubard, [0007]). Claim(s) 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over McDevitt in view of KIM and SHAH, and further in view of Geisner et al. (US 20130085345) and Lin et al. (CN 107358508 B). Regarding claim 6, McDevitt in view of KIM and SHAH discloses The XR device of claim 1, wherein the XR device has been disclosed. On the other hand, McDevitt in view of KIM and SHAH fails to explicitly disclose but Geisner discloses detect, using a camera of the XR device, a food item (Geisner, abstract, “A system provides a recommendation of food items to a user based on nutritional preferences of the user, using a head-mounted display device (HMDD) worn by the user. In a store, a forward-facing camera of the HMDD captures an image of a food item. The food item can be identified by the image, such as based on packaging of the food item”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Geisner into the combination of McDevitt, KIM and SHAH. That is, applying the determining of a food item based on an image captured by a camera of Geisner to the system of SHAH and McDevitt, KIM. The motivation/suggestion would have been to assist a user in managing a food supply in one's home, and selecting foods such as when shopping in a store or while dining in a restaurant (Geisner, [0003]). 
On the other hand, McDevitt in view of KIM, SHAH and Geisner fails to explicitly disclose but Lin discloses wherein the one or more components are configured to: detect a food item added to a shopping cart, wherein the food item is not included in the list of food items; and remove other food items from the list of food items based on the food item that is added to the shopping cart (Lin, “The invention claims a similar commodity item deleting method and device, by associating the similar or similar commodity, can through any one commodity item selected in the shopping vehicle and click deleting function; can display the list of a plurality of commodity items similar to or similar to the commodity item added in the shopping cart, for the user to selectively delete. The large class of the commodity can be divided according to the industry of the commodity circulation field. For example: Everybody is electricity, small household electricity, daily goods, food and so on”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Lin into the combination of McDevitt, KIM and SHAH, Geisner, to include all limitations of claim 6. That is, adding the deleting commodity items when similar items have been added to the shopping cart of Lin to the system of SHAH and McDevitt, KIM, Geisner. The motivation/ suggestion would have been to assist a user in managing a food supply in one's home, and selecting foods such as when shopping in a store or while dining in a restaurant (Lin). Claim(s) 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over McDevitt in view of KIM and SHAH, and further in view of Donnels et al. (US 20220309557). Regarding claim 8, McDevitt in view of KIM and SHAH discloses The XR device of claim 1, wherein the list of food items has been disclosed. 
On the other hand, McDevitt in view of KIM and SHAH fails to explicitly disclose but Donnels discloses wherein the one or more components are configured to: select, using store information, the physical retail store based on the list of items, wherein the store information indicates a plurality of physical retail stores and real-time item inventory information for each of the plurality of physical retail stores (Donnels, “[0041] The store—item mapper 108 may then programmatically call or communicate with the geolocation store mapper 106 to determine which suitable inventory facilities or physical retailer locations are within a threshold distance of the user device or user. Responsively, the store—item mapper 108 can determine if such inventory facilities or physical retailer locations have such items of interest in stock (or for sale) at the inventory facilities. a database of inventory information in near real-time to determine if the item of interest is currently in stock (or for sale) and the particular prices that the items are offered for sale at. [0085] FIG. 7 is a flow diagram of an example process 700 for generating a user interface element that indicates a recommendation of at least one physical retailer location to purchase an item at, according to some embodiments”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Donnels into the combination of McDevitt, KIM and SHAH. That is, applying the store selection based on inventory information of Donnels to the food stores of KIM, SHAH and McDevitt. The motivation/ suggestion would have been to improve the user experience and computing resource consumption (e.g., disk I/O) relative to other technologies (Donnels, [0002]). Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over McDevitt in view of KIM and SHAH, and further in view of O'Brien et al. (US 11410218). 
Regarding claim 9, McDevitt in view of KIM and SHAH discloses The XR device of claim 1, wherein the list of food items, and audio-visual cues, have been disclosed. On the other hand, McDevitt in view of KIM and SHAH fails to explicitly disclose but O'Brien discloses wherein the one or more components are configured to: retrieve, from a server, store mapping information that indicates a map of store aisles and corresponding items that are available for sale in the store aisles; determine, using a camera of the XR device, a current location within the physical retail store; determine a next item to be purchased from the list of items; determine, based on the store mapping information, a next location in the physical retail store that is associated with the next item; and provide, via the interface, the in-store navigation path to direct the user via the overlayed audio-visual cues from the current location to the next location associated with the next item (O'Brien, “col.8, lines 34-37, The digital assistant may perform operations to access the requested information, and provide (e.g., audio) output describing the requested information. Col.11, lines 48-61, the application 430 is shown providing a map 750 on the display of mobile device 440. The map 750 includes a layout 770 of the store in which user is currently shopping, including a first symbol 730 indicating the user's present location in the store and a second symbol 740 indicating the general location of the selected item, such as the shelves with relevant items (here Aisle K, Row 4). In addition, a suggested route or path 760 superimposed on the map from the first symbol 730 to the second symbol 740 is shown to guide the user to the destination. In other embodiments, the map 750 can also include landmarks such as the entrance to the store, the checkout locations and any other obstacles. The path 760 is automatically updated as the user's position changes.”). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined O'Brien into the combination of McDevitt, KIM and SHAH. That is, adding the navigation path of O'Brien to the XR system of SHAH and McDevitt, KIM. The motivation/suggestion would have been to provide users with real-time intelligent guidance and navigation, and specifically to improve the user's shopping experience by providing supplemental information about a product while the user is browsing a retail environment (O'Brien, col.1, lines 17-19). Claim(s) 10, 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over McDevitt in view of KIM and SHAH, and further in view of Zhao et al. (US 20230326574). Regarding claim 10, McDevitt in view of KIM and SHAH discloses The XR device of claim 1, wherein the list of food items, and audio-visual cues, have been disclosed. On the other hand, McDevitt in view of KIM and SHAH fails to explicitly disclose but Zhao discloses wherein the one or more components are configured to: identify additional food items that, when combined with certain food items included in the list of food items, make additional target meals; provide, via the interface, a notification of the additional food items, wherein the notification includes an option to accept the additional food items (Zhao, “[0016] A step of recommending may comprise generating a user-perceptible output, such as a visual, audio and/or haptic output (e.g. via a display, speaker or vibrating element respectively) identifying whether an additional ingredient is recommended and/or the identity of the additional ingredients. [0023] The method may be adapted such that, during the first ingredient recommendation process, the step of recommending whether and which one or more additional ingredients should be used comprises identifying a recipe defining the group of one or more ingredients. 
[0025] a selected additional ingredient being one of the plurality of different potential additional ingredients that has been selected by a user”). On the other hand, McDevitt in view of KIM and Zhao fails to explicitly disclose but SHAH discloses provide, via the interface, the in-store navigation path to direct the user of the XR device via the overlayed audio-visual cues to additional locations within the physical retail store to pick up the additional food items (SHAH, “[0001] In augmented reality, elements of the real-world environment are “augmented” by computer-generated or extracted input, such as sound, video, graphics, haptics, and/or global positioning system (GPS) data, among other examples. [0040] For example, AR content may be overlayed in the image indicating the item (e.g., “flour”) that is to be retrieved at the waypoint indicated by the AR waypoint. In this way, the user of the client device may quickly and easily identify the route (e.g., that is determined by the server device to be an efficient route for retrieving the one or more items) and/or locations to stop along the route to retrieve the one or more items”). The same motivation of claim 1 applies here. Claim(s) 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over McDevitt in view of KIM and SHAH, and further in view of Chaubard et al. (US 20180218351) and Whitehurst et al. (US 9640088). Regarding claim 11, McDevitt in view of KIM and SHAH discloses The XR device of claim 1. On the other hand, McDevitt in view of KIM and SHAH fails to explicitly disclose but Chaubard discloses detect, using a camera of the device, a food item added to a shopping cart (Chaubard, abstract, “If the mobile shopping unit detects a change, the mobile shopping unit captures image data of the contents of the shopping cart using one or more cameras mounted to the shopping cart. The mobile shopping unit uses the image data to identify the item added to or removed from the cart. 
The mobile shopping unit applies a machine-learned item identification model to the image data received from the cameras to determine an item identifier for the added or removed item”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Chaubard into the combination of McDevitt, KIM and SHAH. That is, adding the use of cameras to identify items of Chaubard to the XR system of SHAH and McDevitt, KIM. The motivation/suggestion would have been that items are automatically added to the contents list without direct user interaction with the automated checkout system, meaning that customers can more easily use such an automated checkout system (Chaubard, [0007]). On the other hand, McDevitt in view of KIM, SHAH and Chaubard fails to explicitly disclose but Whitehurst discloses wherein the input includes a calorie requirement, and wherein the one or more components are configured to: look up nutritional information associated with the food item; determine a remaining number of allowed calories based on the nutritional information associated with the food item and the calorie requirement; and provide, via the interface, a notification of the remaining number of allowed calories (Whitehurst, “col.3, lines 7-20, The electronic vendor device can encode an electronic tag (e.g., an NFC tag) to store data that includes an identification of each ordered item (e.g., hamburger plus cheese minus ketchup, small fries and water), a calorie count for each ordered item and a calorie count for the order as a whole. The vendor employee can stick the electronic tag (which can include an adhesive surface) to a food order package (e.g., bag, box, plate or lid). A user can then tap the electronic tag with a electronic user device, such that power is provided to the tag and/or a transmission of the data from the tag to the electronic user device is initiated. 
The electronic user device can present the data and/or update one or more user-associated variables (e.g., remaining calories in a daily calorie budget) based on the data. Col.13, line 55 - col.14, line 5, The target nutrition variable can be set to a defined value, identified in user input and/or can be automatically determined by electronic user device 400. The automatic determination can be based on one or more target results (e.g., target weight, target weight loss, target cholesterol, target blood pressure, target glucose level and/or target vitamin level) as identified based on or in user input and/or based on one or more user characteristics (e.g., current or historical: weight, cholesterol, blood pressure, glucose level and/or vitamin level; current or historical medical condition and/or disease, such as pre-diabetes, diabetes, cardiac arrest or stroke; height; sex; race; genetic or familial risk factors; geographical location; current or past eating patterns (e.g., calories consumed per day, types of food preferences, etc.)”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Whitehurst into the combination of McDevitt, KIM and SHAH, Chaubard. That is, adding the determining of remaining calories of Whitehurst to the XR device of SHAH and McDevitt, KIM, Chaubard. The motivation/suggestion would have been to provide a computer-implemented method for using communications from RFID electronic tags to dynamically update cumulative nutrition variables (Whitehurst, col.1, lines 55-57). Claim(s) 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over McDevitt in view of KIM and SHAH, and further in view of Chaubard et al. (US 20180218351), REICHERT (US 20150278849), and King (US 20110264554). Regarding claim 12, McDevitt in view of KIM and SHAH discloses The XR device of claim 1. 
On the other hand, McDevitt in view of KIM and SHAH fails to explicitly disclose but Chaubard discloses detect, using a camera of the device, a food item added to a shopping cart (Chaubard, abstract, “If the mobile shopping unit detects a change, the mobile shopping unit captures image data of the contents of the shopping cart using one or more cameras mounted to the shopping cart. The mobile shopping unit uses the image data to identify the item added to or removed from the cart. The mobile shopping unit applies a machine-learned item identification model to the image data received from the cameras to determine an item identifier for the added or removed item”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Chaubard into the combination of McDevitt, KIM and SHAH. That is, adding the using cameras to identify items of Chaubard to the XR system of SHAH and McDevitt, KIM. The motivation/ suggestion would have been Items are automatically added to the contents list without direct user interaction with the automated checkout system, meaning that customers can more easily use such an automated checkout system (Chaubard, [0007]). On the other hand, McDevitt in view of KIM, SHAH and Chaubard fails to explicitly disclose but REICHERT discloses detect, using the camera, a cost associated with the item (REICHERT, “[0041] an image capture device, such as a camera (e.g., a security camera) 114a (deployed, in the example system 100 of FIG. 1, near the POS device 110a) may be used to capture images of items selected for purchase, and to identity based on the captured images the selected items and retrieve associated data (e.g., price, inventory levels, etc.) for the identified items”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined REICHERT into the combination of McDevitt, KIM and SHAH, Chaubard. 
That is, applying the price detection based on captured images to the food items of Chaubard, SHAH and McDevitt, KIM. The motivation/ suggestion would have been to enable distributed processing of transaction data at individual retail points (REICHERT, [0041]). On the other hand, McDevitt in view of KIM, SHAH and Chaubard, REICHERT fails to explicitly disclose but King discloses wherein the input includes a budget requirement, and wherein the one or more components are configured to: detect a cost associated with the food item; determine a remaining budget based on the cost associated with the food item and the budget requirement; and provide, via the interface, a notification of the remaining budget (King, “[0019] Further contained within the handset are memory means. The memory means are adapted to store the particular pricing and identification of data of each of the items designated for purchase by the shopper. Further, the memory means adds all of the prices of the purchase items entered therein to provide a total purchase amount displayed on the display means, thereby providing a shopper with a constant update of how much they are spending. Alternatively, the memory means subtracts all the prices of the purchase items entered therein to provide a total budget amount remaining to display on the display means, providing a shopper with a constant update of how much of their budget remains. Abstract, The device further includes an alphanumeric keypad allowing the shopper to input budget information. Claim 6, a module to allow the input of a total budget, a module to allow the addition of purchases items to indicate a running total or the subtraction of items from the budget to indicate the amount remaining”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined King into the combination of McDevitt, KIM and SHAH, Chaubard, REICHERT. 
That is, adding the determining remaining budget of King to the XR device of SHAH and McDevitt, KIM, Chaubard, REICHERT. The motivation/ suggestion would have been providing an effective means of enabling practitioners to be aware of article pricing and maintain a budget (King, [0003]). Claim(s) 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over McDevitt in view of KIM and SHAH, and further in view of Beltran (US 20220036432). Regarding claim 13, McDevitt in view of KIM and SHAH discloses The XR device of claim 1. On the other hand, McDevitt in view of KIM and SHAH fails to explicitly disclose but Beltran discloses wherein the one or more components are configured to: detect, using a camera of the device, an electronic page that indicates a recipe for a potential target meal, wherein the electronic page indicates food items for preparing the potential target meal, and wherein the meal plan is based on the recipe indicated on the electronic page (Beltran, “[0078] The system optionally may utilize the optical sensor or camera in a personal hand-held device to scan optical codes or other wireless data connections. For example, a user may scan a code in a magazine for a recipe causing the system to populate the list with the ingredients for the recipe. [0082] Optionally, individual items or an entire shopping list may be populated by entering or scanning a code with the computing device. For example, a recipe code can be scanned to add all the ingredients for making that recipe”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Beltran into the combination of McDevitt, KIM and SHAH. That is, adding the recipe with ingredient list via a camera of Beltran to the XR device of SHAH and McDevitt, KIM. 
The motivation/ suggestion would have been to provide portable electronic device applications, and more particularly, to an application to make shopping more efficient and cost effective (Beltran, [0006]). Claim(s) 15, 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over McDevitt (US 20220021923) in view of KIM et al. (US 20220319666), and further in view of Donnels et al. (US 20220309557). Regarding claim 15, McDevitt discloses a server comprising one or more components configured to (McDevitt, abstract, “A system and method is disclosed herein that provides a multi-device, multi-screen experience where original content, other content, and associated data can interact and flow between a primary display device and one or more secondary devices”. “[0037] In either embodiment, through this architecture the various devices are configured to work together to create a unified multi-device, multi-screen experience where primary source content, primary source content metadata, secondary source(s) content, secondary source(s) metadata and device control commands can interact and flow between the display device 107 (i.e., the primary display device) and the secondary devices (e.g., PC's 108, tablets 109, smartphones 110, and the like) as well as to the Internet and secondary systems 101 and back”. Claim 16, “wherein the primary content comprises at least one of: streaming content, on-demand content, live television content, audio content, video content, augmented reality content, or virtual reality content”. 
Therefore, as shown in fig.1, McDevitt discloses an XR-server system): receive, from an extended reality (XR) device, an input associated with meals of a user associated with the XR device (McDevitt, “[0066] this additional embodiment contemplates that a cooking show is selected by the user to watch on the TV as the primary source content, and when the cooking show passes through the data communication hub 106, the primary source content is recognized as the cooking show. [0068] the user utilizes the tablet to change the primary source content to programming on another TV channel”); determine, based on the input, a meal plan for the user of the XR device, wherein the meal plan is associated with target meals (McDevitt, “[0062] Next, at step 420, the data communication hub 106 then communicates with the Internet and secondary systems 101, via Wi-Fi Connection 204 and router 105, and identifies information relating to the cooking show, such as a recipe for the item being prepared on the cooking show. Then, at step 425, the content processor 208 of the data communication hub 106 causes the identified recipe to be displayed on the display device 107 and/or one or more of the secondary devices 108, 109 and/or 110, as described above. The recipe, which is considered the secondary source content, can be displayed on top of (i.e., as an overlay), adjacent to, or in place of the primary source content (i.e. the cooking show)”); determine, based on recipes for the target meals, a list of food items for preparing the target meals associated with the meal plan (McDevitt, “[0064] As further shown in FIG. 4, at step 430 the processor 205 of the data communication hub 106 accesses the local storage 207 to identify user profile data relating to the secondary source content being presented to the user. 
In the cooking recipe example, the processor 205 may determine what ingredients are necessary to make the recipe and also determine those ingredients that are currently in the user's possession as well as those ingredients that the user would need to complete the recipe”); transmit, to the XR device, an indication that indicates: the meal plan including the target meals, the list of food items (McDevitt, “[0067] Next, notification to the user of available secondary source content can be sent to the user by a variety of means including, but not limited to, a notification to the secondary device (e.g., a connected mobile device, such as a phone or tablet that is connected to the data communication hub 106 by any of a variety of pairing methodologies “All Share” or “AllJoyn” via Wi-Fi or Bluetooth, etc.) or by a graphic overlay on the primary display device (e.g., “Click OK on your TV or Tablet for more information”). If more information is selected then a list of required ingredients and a detailed recipe is presented”). On the other hand, McDevitt fails to explicitly disclose but KIM discloses determine an additional target meal that is attainable using the list of food items, wherein the additional target meal is aligned with the input (KIM, “[0086] The item recommender 330 according to an embodiment may identify items usable to create the plan from among the items 331 repeatedly collected from outside through search or the like, based on the user information, e.g., the user profile 311. For example, the item recommender 330 may identify recipes using foods edible by the user as the items usable to create the plan by excluding recipes using foods inedible by the user, based on the allergy information or the health status of the user which is included in the user profile 311. 
Alternatively, the item recommender 330 may identify recipes suitable for the user as the items usable to create the plan, based on a condition included in the user profile 311 and preset by the user to create the plan. In this case, the user profile 311 is information about the condition input by the user and preset to create the plan, and may further include, for example, the information obtained through the interfaces 210 and 220 of FIG. 2”); transmit, to the electronic device, an indication that indicates: a recommendation identifying the additional target meal (KIM, “[0044] Without being limited to the above-described example, the server 2000 may transmit, to the electronic device 1000, various types of data required by the electronic device 1000 to create the plan. [0139] The display 1210 displays data processed in the electronic device 1000. According to an embodiment, the display 1210 may display the plan created based on the input of the user”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined KIM and McDevitt. That is, adding the recipes and the recipe transmission of KIM in addition to the target meals of McDevitt. The motivation/suggestion would have been to provide a method capable of providing a plan suitable for a user in various circumstances (KIM, [0004]). On the other hand, McDevitt in view of KIM fails to explicitly disclose but Donnels discloses select a physical retail store that carries the items on the list (Donnels, “[0041] The store—item mapper 108 may then programmatically call or communicate with the geolocation store mapper 106 to determine which suitable inventory facilities or physical retailer locations are within a threshold distance of the user device or user. 
Responsively, the store—item mapper 108 can determine if such inventory facilities or physical retailer locations have such items of interest in stock (or for sale) at the inventory facilities. a database of inventory information in near real-time to determine if the item of interest is currently in stock (or for sale) and the particular prices that the items are offered for sale at. [0085] FIG. 7 is a flow diagram of an example process 700 for generating a user interface element that indicates a recommendation of at least one physical retailer location to purchase an item at, according to some embodiments”); transmit, to the user device, an indication that indicates: the physical retail store that carries the items on the list (Donnels, “[0020] In response to such determination that the particular item is within the geographical vicinity, some embodiments cause display of a page (e.g., a personalized page) that identifies the items of interest and/or other information (e.g., an address) of the local retailers that offer the items for sale. [0094] In some embodiments, a user issues a query on the one or more user devices 802, after which the user device(s) 802 communicate, via the network(s) 110, to the one or more servers 804 and the one or more servers 804 executes the query (e.g., via one or more components of FIG. 1) and causes or provides for display information back to the user device(s) 802”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Donnels into the combination of McDevitt and KIM, to include all limitations of claim 15. That is, applying the store selection based on inventory information and local retailer transmission of Donnels to the food stores of the XR-server system of McDevitt and KIM. 
The motivation/suggestion would have been to improve the user experience and computing resource consumption (e.g., disk I/O) relative to other technologies (Donnels, [0002]). Regarding claim 17, McDevitt in view of KIM and Donnels discloses The server of claim 15. Claim 17 further recites similar limitations as claim 8, thus is rejected under similar rationale set forth in claim 8. Claim(s) 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over McDevitt (US 20220021923) in view of KIM et al. (US 20220319666) and Donnels et al. (US 20220309557), and further in view of Kim2 et al. (US 20230196126). Regarding claim 16, McDevitt in view of KIM and Donnels discloses The server of claim 15. Claim 16 further recites similar limitations as claim 3, thus is rejected under similar rationale set forth in claim 3. Claim(s) 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over McDevitt (US 20220021923) in view of KIM et al. (US 20220319666) and Donnels et al. (US 20220309557), and further in view of Zhao et al. (US 20230326574). Regarding claim 18, McDevitt in view of KIM and Donnels discloses The server of claim 15. Claim 18 further recites similar limitations as claim 10, thus is rejected under similar rationale set forth in claim 10. Allowable Subject Matter Claim(s) 7 is/are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. 
The following is a statement of reasons for the indication of allowable subject matter: Regarding claim 7, it recites, wherein the one or more components are configured to: detect, using a camera of the XR device, a food item added to a shopping cart, wherein the food item is not included in the list of food items; determine a complexity level associated with updating the list of food items based on the food item added to the shopping cart; transmit, to a server, an indication of the list of food items and the food item added to the shopping cart based on the complexity level satisfying a threshold; and receive, from the server, an updated list of food items. None of the prior arts on the record or any of the prior arts searched, alone or in combination, renders obvious the combination of elements recited in the claim(s) as a whole. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to GRACE Q LI whose telephone number is (571)270-0497. The examiner can normally be reached Monday - Friday, 8:00 am-5:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, DEVONA FAULK can be reached at 571-272-7515. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /GRACE Q LI/Primary Examiner, Art Unit 2618 2/6/2026

Prosecution Timeline

Aug 23, 2024
Application Filed
Feb 07, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602880
Controlling Augmented Reality Content Via Selection of Real-World Locations or Objects
2y 5m to grant Granted Apr 14, 2026
Patent 12602942
MODEL FINE-TUNING FOR AUTOMATED AUGMENTED REALITY DESCRIPTIONS
2y 5m to grant Granted Apr 14, 2026
Patent 12597217
METHODS AND SYSTEMS FOR AUGMENTED REALITY IN AUTOMOTIVE APPLICATIONS
2y 5m to grant Granted Apr 07, 2026
Patent 12579762
OVERLAY ADAPTATION FOR VISUAL DISCRIMINATION
2y 5m to grant Granted Mar 17, 2026
Patent 12561922
CAPTURE AND DISPLAY OF POINT CLOUDS USING AUGMENTED REALITY DEVICE
2y 5m to grant Granted Feb 24, 2026
Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
77%
Grant Probability
90%
With Interview (+12.8%)
2y 5m
Median Time to Grant
Low
PTA Risk
Based on 351 resolved cases by this examiner. Grant probability derived from career allow rate.
