Prosecution Insights
Last updated: April 19, 2026
Application No. 18/122,013

VIRTUAL PRICE TAG FOR AUGMENTED REALITY AND VIRTUAL REALITY

Non-Final OA: §101, §103
Filed: Mar 15, 2023
Examiner: GAVIN, KRISTIN ELIZABETH
Art Unit: 3624
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Inter IKEA Systems B.V.
OA Round: 3 (Non-Final)
Grant Probability: 14% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 3y 8m
With Interview: 29%

Examiner Intelligence

Career Allow Rate: 14% (21 granted / 154 resolved; -38.4% vs TC avg). Grants only 14% of cases.
Interview Lift: +15.2% (strong lift, based on resolved cases with interview).
Avg Prosecution: 3y 8m (typical timeline; 50 applications currently pending).
Total Applications: 204 (career history, across all art units).
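
The headline figures above are consistent with plain ratio arithmetic. A quick sanity check, assuming the dashboard rounds to whole percentage points and reports deltas in percentage points (the vendor's exact methodology is an assumption here):

    # Sanity-check the examiner stat cards (assumed methodology: simple
    # ratios, additive percentage-point deltas, rounding to whole percents).
    granted, resolved = 21, 154
    allow_rate = granted / resolved
    print(f"Career allow rate: {allow_rate:.1%}")       # 13.6%, displayed as 14%

    # The card shows -38.4% vs the Tech Center average, so the implied baseline is:
    implied_tc_avg = allow_rate + 0.384
    print(f"Implied TC average: {implied_tc_avg:.1%}")  # about 52.0%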

Statute-Specific Performance

§101: 38.5% (-1.5% vs TC avg)
§103: 39.9% (-0.1% vs TC avg)
§102: 7.9% (-32.1% vs TC avg)
§112: 9.2% (-30.8% vs TC avg)
Tech Center averages are estimates • Based on career data from 154 resolved cases
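
Assuming each "vs TC avg" delta is expressed in percentage points against a common baseline, the four statute cards back-compute to the same Tech Center average estimate of about 40%:

    # Back-compute the Tech Center baseline implied by each statute card
    # (assumption: delta = examiner rate minus TC average, in percentage points).
    cards = {"§101": (38.5, -1.5), "§103": (39.9, -0.1),
             "§102": (7.9, -32.1), "§112": (9.2, -30.8)}
    for statute, (rate, delta) in cards.items():
        print(f"{statute}: implied TC avg = {rate - delta:.1f}%")  # 40.0% for all four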

Office Action

§101, §103
DETAILED ACTION

This non-final Office action is responsive to the amendments filed October 29, 2025. Claims 1-3, 8-9, 20, 22-23, 25-26, and 28 have been amended. Claim 4 has been cancelled. Claims 1-3 and 5-28 are presented for examination.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/29/25 has been entered.

Response to Arguments

Applicant's arguments regarding the claim rejections under 35 USC 101, filed 10/29/25, have been fully considered but are not persuasive. On pages 9-11 of the remarks, Applicant argues that the amended claims present statutory subject matter. Beginning on page 9, Applicant argues that "these limitations cannot practically be performed in the human mind, as the human mind is not equipped to make these determinations and present a virtual price tag in the small amount of time before the view changes as the user traverses the extended reality scene." Examiner respectfully disagrees and asserts that the claims' recitation of "select a point from the list of predetermined points based on the view of the virtual scene, wherein the point is located on one of the visible surfaces" and "determine a size for the virtual price tag based on the view of the virtual reality scene" are mental evaluations based on an observation of the virtual scene and the retrieved list of predetermined points. This evaluation remains a mental process even if the function is performed for each view of the virtual scene. Applicant's arguments are not persuasive.

Continuing on pages 9-10 of the remarks, regarding the Step 2A Prong 2 analysis, Applicant argues that independent claim 1 provides a technical solution for improving extended reality devices. Applicant argues that "by placing the virtual price tag in this way, the virtual price tag can be generated to look realistic – e.g., by placing the virtual price tag where a physical price tag would typically be placed." However, Examiner asserts that the presentation of a virtual tag in an extended reality scene made to "look realistic" is not a technical solution to a technical problem. Additionally, the claim has been amended to recite "generating, on a display of an extended reality device, a virtual price tag". This high-level recitation of generating the display is not sufficient to demonstrate an improvement to the operation of the extended reality device. This presentation of price tags does not supply additional elements that integrate the judicial exception (e.g., the abstract idea) into a practical application, because receiving/storing data and displaying data merely add insignificant extra-solution activity and merely add the words "apply it" to the judicial exception. Applicant's arguments are not persuasive.
Continuing on pages 10-11 of the remarks, Applicant argues that the amended claims are analogous to Example 23 of the Subject Matter Eligibility Guidance, in which textual information is relocated within a graphical user interface to prevent the overlap of textual information. Examiner respectfully disagrees and asserts that the amended claims' high-level recitation of "determine a size for the virtual price tag based on the view of the virtual reality scene" is not analogous to eligible claim 4 of the cited example, which included specific detail regarding the use of a scaling factor to relocate textual information in overlapping windows of the GUI. Additionally, the placement of the virtual tags within the virtual reality scene does not perform the relocation of content within the GUI of Example 23. Therefore, the argued analysis regarding Example 23 is not applicable. The 35 USC 101 rejection is maintained. Applicant's arguments are not persuasive.

Applicant's arguments, see pages 11-13, filed 10/29/25, with respect to the rejection of claims 1-3 and 5-28 under 35 USC 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Werner (U.S. 2014/0224867 A1) in view of Adato (U.S. 2019/0213546 A1) and further in view of Schweiger (U.S. 2014/0279126 A1).

Claim Objections

Claim 25 is objected to because of the following informalities: the limitation beginning "wherein the point" recites "the current view", which lacks antecedent basis and should recite "the view". Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-3 and 5-28 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. When considering subject matter eligibility under 35 U.S.C. 101, it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter. If the claim does fall within one of the statutory categories, it must then be determined whether the claim is directed to a judicial exception (i.e., a law of nature, natural phenomenon, or abstract idea), and if so, it must additionally be determined whether the claim is a patent-eligible application of the exception. If an abstract idea is present in the claim, any element or combination of elements in the claim must be sufficient to ensure that the claim amounts to significantly more than the abstract idea itself.

Step 1: Independent claims 1 (method), 23 (device), and 26 (device) and dependent claims 2-3, 5-22, 24-25, and 27-28, respectively, fall within at least one of the four statutory categories of 35 U.S.C. 101: (i) process; (ii) machine; (iii) manufacture; or (iv) composition of matter. Claim 1 is directed to a method (i.e., a process), claim 23 is directed to a device (i.e., a machine), and claim 26 is directed to a device (i.e., a machine).
Step 2A Prong 1: The independent claims recite presenting virtual price tags, the method comprising: identifying a product visible in an extended reality scene; retrieving a list of predetermined points for the product, wherein each point in the list of predetermined points is a candidate location for placement of a virtual price tag; retrieving price data for the product; and, for each view of the extended reality scene in a plurality of views as a user traverses the extended reality scene: identifying visible surfaces of the product from the view of the extended reality scene; selecting a point from the list of predetermined points based on the view of the extended reality scene, wherein the point is located on one of the visible surfaces; determining a size for the virtual price tag based on the view of the extended reality scene; and generating, on a display of an extended reality device, the virtual price tag attached to an image of the product at the selected point and at the determined size within the extended reality scene, the virtual price tag displaying the price data for the product (Certain Methods of Organizing Human Activity & Mental Process), which are considered to be abstract ideas (see PEG 2019 and MPEP 2106.05). [Examiner notes the underlined limitations above recite the abstract idea.]

The steps/functions above recite the abstract idea of Certain Methods of Organizing Human Activity because the claimed limitations of generating a virtual price tag attached to an image of a product, displaying the price data for the product, describe a commercial interaction in the form of sales activity. The steps/functions also recite the abstract idea of Mental Process because the claimed limitations of identifying a product in a visible extended reality scene; identifying visible surfaces of the product from a current view of the extended reality scene; selecting a point from the list of predetermined points based on the view of the extended reality scene, wherein the point is located on one of the visible surfaces; and determining a size for the virtual price tag based on the view of the extended reality scene are functions of the human mind in the form of observation, judgment, and evaluation.
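
For orientation, the per-view placement loop recited above reduces to logic along the following lines. This is a minimal runnable sketch under assumed geometry; CandidatePoint, visible, tag_size, and place_price_tag are illustrative names invented here, not the applicant's implementation or code from any cited reference:

    # Illustrative sketch of the claimed per-view loop: pick a predetermined
    # point on a viewer-facing surface, then size the tag from the view.
    from dataclasses import dataclass
    import math

    @dataclass
    class CandidatePoint:
        position: tuple   # (x, y, z) location on the product
        normal: tuple     # outward surface normal at that location
        priority: int     # rank, e.g., likelihood a physical tag would hang here

    def visible(point: CandidatePoint, view_pos: tuple) -> bool:
        # Treat a point as on a "visible surface" if its normal faces the
        # viewer (a stand-in for the claimed visible-surface identification).
        to_viewer = tuple(v - p for v, p in zip(view_pos, point.position))
        return sum(n * t for n, t in zip(point.normal, to_viewer)) > 0

    def tag_size(point: CandidatePoint, view_pos: tuple, base: float = 0.1) -> float:
        # One plausible reading of "determining a size based on the view":
        # shrink the tag as the viewer moves away.
        return base / max(math.dist(view_pos, point.position), 1e-6)

    def place_price_tag(candidates, view_pos):
        facing = [c for c in candidates if visible(c, view_pos)]
        if not facing:
            return None
        best = min(facing, key=lambda c: c.priority)  # rank 1 = most tag-like spot
        return best.position, tag_size(best, view_pos)

    # One "view" of a unit-cube product with two predetermined tag points:
    points = [
        CandidatePoint((0.5, 0.5, 1.0), (0, 0, 1), priority=1),   # front face
        CandidatePoint((0.5, 0.5, 0.0), (0, 0, -1), priority=2),  # back face
    ]
    print(place_price_tag(points, view_pos=(0.5, 0.5, 3.0)))
    # -> front-face point selected; the size term shrinks with distance

Rerunning the selection for each view as the user moves mirrors the claimed "for each view of the extended reality scene" structure.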
In addition, dependent claims 2-3, 5-7, 14-22, 25, and 28 further narrow the abstract idea and recite further defining: the assessment of the extended reality scene to determine a lighting condition; detecting objects occluding the product from the current view of the extended reality scene; calculating a distance between the current view and the product; the determination of the predetermined point; the identified extended reality scene; placing the product in a shopping cart; initiating a checkout process; the determined price data for the product; identifying visible surfaces; determining the position of a user relative to the product; determining an order of the two or more virtual price tags based on the determined position of the user relative to two products; and selecting a virtual price tag for the product. These processes are similar to the abstract idea noted in the independent claims because they further the limitations of the independent claims, which recite a certain method of organizing human activity (including commercial interactions such as sales activity) as well as a mental process. Accordingly, these claim elements do not serve to confer subject matter eligibility to the claims, since they recite abstract ideas. Dependent claims 8-13, 24, and 27 will be discussed in the Prong 2 analysis below.

Step 2A Prong 2: In this application, the above "retrieving a list of predetermined points for the product, wherein each point in the list of predetermined points is a candidate location for placement of a virtual price tag; retrieving price data for the product; the virtual price tag displaying the price data for the product" steps/functions of the independent claims do not account for additional elements that integrate the judicial exception (e.g., the abstract idea) into a practical application, because receiving/storing data and displaying data merely add insignificant extra-solution activity and merely add the words "apply it" to the judicial exception. Also, the claimed "a display of an extended reality device; a camera on the computing device; an e-commerce server; An augmented reality device comprising: a camera; a processor; and a memory storing instructions which, when executed by the processor cause the augmented reality device; a smart glasses device; A virtual reality device comprising: a processor; and a memory storing instructions which, when executed by the processor cause the virtual reality device; a furnishing planner application" do not account for additional elements that integrate the judicial exception into a practical application, because the claimed structure merely adds the words "apply it" to the judicial exception and amounts to mere instructions to implement an abstract idea on a computer (see PEG 2019 and MPEP 2106.05).
In addition, dependent claims 2-3, 5-7, 14-22, 25, and 28 further narrow the abstract idea, and dependent claims 3, 8-15, 20-22, and 27 additionally recite:
- "wherein the list of predetermined points on the product is ordered based on a likelihood that a physical price tag would be attached to each point";
- "the virtual price tag is visualized with a virtual string connecting the virtual price tag to the selected point on the product";
- "when the virtual price tag is first placed at the selected point it is visualized with a dangling motion animation from the virtual string";
- "visualizing the virtual price tag to appear magnified on the display of the extended reality device with the virtual string remaining attached to the visible surface";
- "visualizing the second virtual price tag to appear magnified adjacent to the virtual price tag";
- "displaying an animation of the virtual price tag being snapped back to the product by the virtual string";
- "wherein the virtual price tag is visualized with light conditions at a position in the extended reality scene where it is placed";
- "receiving inputs to place the product in a shopping cart";
- "wherein the price data for the product is retrieved";
- "receiving selections to view two or more virtual price tags corresponding to two or more products";
- "presenting a visualization of the two or more virtual price tags adjacent to each other according to the order of the two or more virtual price tags";
- "displaying a first magnified view of the virtual price tag; receiving an indication to store the virtual price tag in an inventory of an associated user; receiving an input selecting an option to present the inventory of the associated user on the display; and displaying a second magnified view of the virtual price tag with additional information about the product";
- "receiving a selection for the virtual price tag; recording a location of the user at a time of the selection; store the virtual price tag and the recorded location in an inventory associated with the user; receiving an input selecting an option to present the inventory of an associated user on the display; displaying the virtual price tag; and receiving a transport selection to virtually transport the user to the recorded location by displaying a virtual view at the recorded location where the virtual price tag was selected"; and
- "receive inputs to create the virtual scene including selecting the at least one product".

These limitations do not account for additional elements that integrate the judicial exception (e.g., the abstract idea) into a practical application, because receiving/storing data and displaying data merely add insignificant extra-solution activity. Likewise, the claimed "extended reality device", "e-commerce server", and "virtual reality device" do not account for additional elements that integrate the judicial exception into a practical application, because the claimed structure merely adds the words "apply it" to the judicial exception and amounts to mere instructions to implement an abstract idea on a computer (see PEG 2019 and MPEP 2106.05). Dependent claim 24 recites "wherein the augmented reality device is a smart glasses device". The claimed "smart glasses device" does not account for additional elements that integrate the judicial exception into a practical application for the same reasons (see PEG 2019 and MPEP 2106.05).
These limitations can also be viewed as nothing more than an attempt to generally link the use of the judicial exception to the technological environment of a computer. Even when viewed in combination, the additional elements in the claims do no more than use the computer components as a tool. There is no change to the computers and other technology recited in the claims, and thus the claims do not improve computer functionality or other technology (see PEG 2019). The claimed "a display of an extended reality device; a camera on the computing device; an e-commerce server; An augmented reality device comprising: a camera; a processor; and a memory storing instructions which, when executed by the processor cause the augmented reality device; a smart glasses device; A virtual reality device comprising: a processor; and a memory storing instructions which, when executed by the processor cause the virtual reality device; a furnishing planner application" are recited so generically (no details are provided other than that they are general purpose computing components and regular office supplies) that they represent no more than mere instructions to apply the judicial exception on a computer.

Step 2B: When analyzing the additional element(s) and/or combination of elements in the claims other than the abstract idea per se, the claim limitations amount to no more than a general link of the use of an abstract idea to a particular technological environment and merely amount to the application of, or instructions to apply, the abstract idea on a computer (see MPEP 2106.05 and PEG 2019). Further, method claims 1-3 and 5-22, augmented reality device claims 23-25, and virtual reality device claims 26-28 recite "a display of an extended reality device; a camera on the computing device; an e-commerce server; An augmented reality device comprising: a camera; a processor; and a memory storing instructions which, when executed by the processor cause the augmented reality device; a smart glasses device; A virtual reality device comprising: a processor; and a memory storing instructions which, when executed by the processor cause the virtual reality device; a furnishing planner application"; however, these elements merely facilitate the claimed functions at a high level of generality, perform conventional functions, and are considered to be general purpose computer components, which is supported by Applicant's specification at Paragraphs 0050 and 0052 and Figures 1-3. The claimed additional elements are mere instructions to implement the abstract idea on a general purpose computer and generally link the use of the abstract idea to a particular technological environment.
Also, the above "retrieving a list of predetermined points for the product, wherein each point in the list of predetermined points is a candidate location for placement of a virtual price tag; retrieving price data for the product; the virtual price tag displaying the price data for the product" steps/functions of the independent claims do not account for significantly more than the abstract idea, because receiving data and displaying/presenting data (see MPEP 2106.05) have been identified as well-known, routine, and conventional steps/functions to one of ordinary skill in the art. When viewed as a whole, these additional claim elements do not provide meaningful limitations that transform the abstract idea into a patent-eligible application such that the claims amount to significantly more than the abstract idea itself.

In addition, claims 2-3, 5-7, 14-22, 25, and 28 further narrow the abstract idea identified in the independent claims; the Examiner notes that these dependent claims merely further define the data being analyzed and how it is being analyzed. Similarly, the additional limitations of claims 3, 8-15, 20-22, and 27 quoted in the Step 2A Prong 2 analysis above do not account for additional elements that amount to significantly more than the abstract idea, because receiving data and displaying/presenting data (see MPEP 2106.05) have been identified as well-known, routine, and conventional steps/functions to one of ordinary skill in the art, and the claimed "extended reality device", "e-commerce server", and "virtual reality device" merely amount to the application of, or instructions to apply, the abstract idea on a computer and do not move beyond a general link of the use of the abstract idea to a particular technological environment (see MPEP 2106.05). Dependent claim 24 recites "wherein the augmented reality device is a smart glasses device"; the claimed "smart glasses device" likewise merely amounts to instructions to apply the abstract idea on a computer and does not move beyond a general link to a particular technological environment (see MPEP 2106.05).

The additional limitations of the independent and dependent claims, considered individually and as an ordered combination, do not amount to significantly more than the abstract idea. The Examiner has considered the dependent claims in a full analysis, including the additional limitations individually and in combination as analyzed in the independent claims. Therefore, the claims are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-2, 5-9, 13-17, and 21-28 are rejected under 35 U.S.C. 103 as being unpatentable over Werner (U.S. 2014/0224867 A1) in view of Adato (U.S. 2019/0213546 A1) and further in view of Schweiger (U.S. 2014/0279126 A1).
Claim 1

Regarding Claim 1, Werner discloses the following:

A method for presenting virtual price tags, the method comprising [see at least Paragraph 0006 for reference to a method for transmitting to a user a virtual price tag for an item being offered for sale, the item associated with a smart tag encoded with an item identifier that identifies the item, the smart tag configured to transmit the item identifier to an electronic device of the user when the electronic device is positioned proximate to the smart tag; Figures 12A-B and related text regarding the process for sending a request to obtain a virtual price tag for an item being offered for sale]

identifying a product visible in an extended reality scene [see at least Paragraph 0057 for reference to the device including a scanner that may be used to obtain item identifying information from a tag or label associated with an item; Paragraph 0057 for reference to the camera being used as a part of the overall system to identify items, such as consumer products; Paragraph 0058 for reference to the NFC interface being used to identify an item such as a consumer product]

retrieving price data for the product [see at least Paragraph 0072 for reference to the application information including applications that interact with the virtual price tag module to send item prices to users' electronic devices while the users are shopping in brick-and-mortar stores; Paragraph 0108 for reference to the virtual price tag module searching the item information data store to identify pricing information for the item; Figure 5 and related text regarding item 522 'virtual price tag module'; Figure 12A and related text regarding item 1216 'USER SELECTS OPTION TO GET VIRTUAL PRICE TAG FOR ITEM'; Figure 14 and related text regarding item 1408 'SEARCH DATABASE TO FIND THE PRICE OF THE ITEM']

for each view of the extended reality scene in a plurality of views as a user traverses the extended reality scene: identifying visible surfaces of the product from the view of the extended reality scene [see at least Paragraph 0049 for reference to the electronic device using a short range wireless connection to communicate with a smart tag; Paragraph 0105 for reference to the user causing the electronic device to "scan" the smart tag of the item for which the user desires to receive a virtual price; Figure 1 and related text regarding the electronic device using a short range wireless connection to communicate with a smart tag; Figure 12A and related text regarding item 1204 'USER CAUSES MOBILE DEVICE TO "SCAN" THE SMART TAG OF THE ITEM AND OBTAIN THE ITEM IDENTIFIER']

selecting a point based on the view of the extended reality scene, wherein the point is located on one of the visible surfaces [see at least Paragraph 0059 for reference to the smart tag being attached to the appropriate location on an item; Paragraph 0105 for reference to the user causing the electronic device to "scan" the smart tag of the item for which the user desires to receive a virtual price; Figure 12A and related text regarding item 1204 'USER CAUSES MOBILE DEVICE TO "SCAN" THE SMART TAG OF THE ITEM AND OBTAIN THE ITEM IDENTIFIER'; Examiner notes the "smart tag" as analogous to the predetermined point]

generating, on a display of a device, the virtual price tag attached to an image of the product at the selected point, the virtual price tag displaying the price data for the product [see at least Paragraph 0059 for reference to the tags being attached to, embedded in, or otherwise associated with several items; Paragraph 0107 for reference to the obtained virtual price tag being automatically displayed via the web browser or app, for example, the requested webpage having the virtual price tag being displayed in the web browser; Figure 12B and related text regarding item 1234 'AUTOMATICALLY DISPLAY VIRTUAL PRICE TAG']

While Werner discloses the limitations above, it does not disclose retrieving a list of predetermined points for the product, wherein each point in the list of predetermined points is a candidate location for placement of a virtual price tag; selecting a point from the list of predetermined points based on the view of the extended reality scene; determining a size for the virtual price tag based on the view of the extended reality scene; or generating, on a display of an extended reality device, a virtual price tag attached to an image of the product at the determined size. However, Adato discloses the following:

identifying a product visible in an extended reality scene [see at least Paragraph 0114 for reference to the system processing images captured in a store regarding virtually shopping within the establishment; Paragraph 0337 for reference to identifying the at least one product based on the analysis of the representation of the at least one product depicted in the at least one image using the product model subset; Figure 17 and related text regarding items 1711 & 1717 'Identify the Product'; Figure 18B and related text regarding item 1854 'Analyze image to detect a bottle'; Figure 31A and related text regarding item 3112 'Analyze image to detect the products'; Figure 41 and related text regarding item 4116 'virtual store']

for each view of the extended reality scene in a plurality of views as a user traverses the extended reality scene: identifying visible surfaces of the product from the view of the extended reality scene [see at least Paragraph 0117 for reference to the identification being made based at least in part on visual characteristics of the product (e.g., size, shape, logo, text, color, etc.); Paragraph 0158 for reference to multiple image acquisition systems for acquiring images of products; Paragraph 0356 for reference to the detection being based at least in part on visual characteristics of the bottle and/or contextual information of the bottle as described previously; Figure 6A and related text regarding multiple image acquisition systems for acquiring images of products; Figure 19 and related text regarding item 1906 'Identify visual characteristics of first bottle and second bottle'; Figure 41 and related text regarding item 4116 'virtual store']

selecting a point on one of the visible surfaces based on the view of the extended reality scene [see at least Paragraph 0117 for reference to the identification being made based at least in part on visual characteristics of the product (e.g., size, shape, logo, text, color, etc.); Paragraph 0356 for reference to the detection being based at least in part on visual characteristics of the bottle and/or contextual information of the bottle as described previously; Figure 19 and related text regarding item 1906 'Identify visual characteristics of first bottle and second bottle'; Figure 41 and related text regarding item 4116 'virtual store']

generating, on a display of an extended reality device, the virtual price tag attached to an image of the product at the selected point within the extended reality scene, the virtual price tag displaying the price data for the product [see at least Paragraph 0133 for reference to the I/O system including an augmented reality device and/or virtual reality device; Paragraph 0266 for reference to the image processing unit recognizing the price tag on the products in the image; Paragraph 0321 for reference to the image processing unit recognizing the price tag on the products in the image; Paragraph 0399 for reference to price labels being attached to the product itself; Paragraph 0402 for reference to the price label being physically attached to the product; Figure 23 and related text regarding item 2339 'price indicator on label']

Before the effective filing date, it would have been obvious to one of ordinary skill in the art to modify the method of Werner to include the extended reality device and scene view of Adato. Doing so would assist in capturing, collecting, and automatically analyzing images of products displayed in retail stores for purposes of providing one or more functions associated with the identified products, as stated by Adato (Paragraph 0002).

While the combination of Werner and Adato discloses the limitations above, it does not disclose retrieving a list of predetermined points for the product, wherein each point in the list of predetermined points is a candidate location for placement of a virtual price tag; selecting a point from the list of predetermined points based on the view of the scene; determining a size for the virtual price tag based on the view of the scene; or generating, on a display of a device, a virtual price tag attached to an image of the product at the determined size. However, Schweiger discloses the following:

retrieving a list of predetermined points for the product, wherein each point in the list of predetermined points is a candidate location for placement of a virtual price tag [see at least Paragraph 0037 for reference to the store server retrieving a list of all sections for the POG ID from the store POG application, which includes how an item should be arranged in the block, aisle, or section location dictated by the POG section; Paragraph 0041 for reference to the position identifier indicating a section of the POG, a shelf, or a peg; Figure 4 and related text regarding item 310 'RETRIEVE AND SEND TO THE HANDHELD A LIST OF ALL SECTION IDS FOR THE POG ID'; Figure 8 and related text regarding the rendering of a list of selectable section identifiers (IDs) provided by the server for the user to select]

selecting a point from the list of predetermined points based on the view of the scene [see at least Paragraph 0038 for reference to the store server receiving the selected ID from the handheld device; Figure 4 and related text regarding item 312 'RETRIEVE SELECT SECTION ID FROM THE HANDHELD'; Figure 8 and related text regarding the rendering of a list of selectable section identifiers (IDs) provided by the server for the user to select]

determining a size for the virtual price tag based on the view of the scene [see at least Paragraph 0024 for reference to electronic price label placement information being rendered on a display of the electronic price label and providing instructions related to a distance the electronic price label is to be placed; Paragraph 0039 for reference to the server retrieving a count of EPLs that are needed for each item on the selected shelf ID or peg; Paragraph 0043 for reference to the system determining that, at this position in the POG, the EPL should be placed on the shelf at a distance that is 47 and 3/8 inches from the ridge edge of the section; Figure 4 and related text regarding item 318 'RETRIEVE AND SEND A COUNT OF EPLs NEEDED FOR EACH ITEM ON THE SELECT SHELF/PEG'; Figure 13 and related text regarding item 366 and the front view of the display fixture]

generating, on a display of a device, a virtual price tag attached to an image of the product at the selected point and the determined size [see at least Paragraph 0024 for reference to electronic price label placement information being rendered on a display of the electronic price label and providing instructions related to a distance the electronic price label is to be placed; Paragraph 0025 for reference to electronic price labels being an electronic display module that can be attached to a front of a display structure in a retail store, such as a front of a shelf, a peg hook, or the like; Paragraph 0039 for reference to the EPL ID being printed as a barcode on a label that is attached to the back of an EPL; Paragraph 0041 for reference to the POG screen conveying specific instructions on placement of the EPL, wherein all placement information and instructions on where the EPL should be placed are conveyed by the display; Figure 12 and related text regarding a front view of an exemplary price label showing a planogram screen on its display; Figure 13 and related text regarding a front view of a display fixture within a store]

Before the effective filing date, it would have been obvious to one of ordinary skill in the art to modify the method of Werner to include the electronic price label method, including size determination, of Schweiger. Doing so would reduce costs related to managing and displaying accurate product information, as stated by Schweiger (Paragraph 0001).

Claim 2

While the combination of Werner, Adato, and Schweiger discloses the limitations above, Werner does not disclose assessing the scene to determine a lighting condition of the scene; detecting objects occluding the product from the view of the scene; or calculating a distance between the view and the product, wherein the point is further selected based on the lighting conditions of the scene, the objects occluding the product from the view of the scene, and the distance between the view and the product.
Regarding Claim 2, Adato discloses the following:

assessing the extended reality scene to determine a lighting condition of the extended reality scene [see at least Paragraph 0170 for reference to the analysis of image data associated with a retail shelving unit including a lighting condition; Paragraph 0293 for reference to the image processing unit identifying shelves in the image based on the lighting of the shelf]

detecting objects occluding the product from the view of the extended reality scene [see at least Paragraph 0146 for reference to the server determining an existence of an occlusion event (such as by a person, or by store equipment such as a ladder, cart, etc.); Paragraph 0161 for reference to a selected image viewing area containing an occlusion of a region of interest; Figure 29 and related text regarding item 2903 'Detect first occlusion event']

calculating a distance between the view and the product, wherein the point is further selected based on the lighting conditions of the extended reality scene, the objects occluding the product from the view of the extended reality scene, and the distance between the view and the product [see at least Paragraph 0150 for reference to distances and angles of the image capturing devices relative to the captured products being selected so as to enable adequate product identification, especially when considered in view of image sensor resolution and/or optics specifications]

Before the effective filing date, it would have been obvious to one of ordinary skill in the art to modify the method of Werner to include the lighting, occlusion, and distance determination of the extended reality scene view of Adato. Doing so would assist in capturing, collecting, and automatically analyzing images of products displayed in retail stores for purposes of providing one or more functions associated with the identified products, as stated by Adato (Paragraph 0002).
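
Claim 2's added selection criteria (lighting, occlusion, distance) amount to a multi-factor ranking over the candidate points. A hypothetical scoring sketch, with weights and names invented here purely for illustration and not drawn from the application or the cited references:

    # Hypothetical claim 2-style ranking: reward well-lit points, penalize
    # occluded and distant ones (weights are illustrative assumptions).
    def score(lighting: float, occluded: float, distance: float,
              w_light=1.0, w_occ=2.0, w_dist=0.5) -> float:
        return w_light * lighting - w_occ * occluded - w_dist * distance

    candidates = {
        "front-left":  (0.9, 0.0, 1.2),   # (lighting, occluded fraction, distance in m)
        "front-right": (0.4, 0.5, 1.0),
    }
    print(max(candidates, key=lambda k: score(*candidates[k])))  # -> front-left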
Claim 5

While the combination of Werner, Adato, and Schweiger discloses the limitations above, regarding Claim 5, Werner discloses the following: wherein the extended reality scene is a virtual reality scene [see at least Paragraph 0070 for reference to the digital experience manager being launched, viewed, and/or controlled using an electronic device; Paragraph 0071 for reference to the digital experience manager providing digital content related to the item]

Claim 6

While the combination of Werner, Adato, and Schweiger discloses the limitations above, regarding Claim 6, Werner discloses the following:

capturing, from a camera on the device, one or more images of a physical scene [see at least Paragraph 0057 for reference to the device including a camera to identify items such as consumer products by capturing an image of a barcode or a QR-code, which then may be processed by the device to extract the encoded product-identifying information; Figure 1 and related text regarding item 224 'camera']

generating the extended reality scene as an augmented reality scene, on the display of the device, using the one or more images [see at least Paragraph 0070 for reference to the digital experience manager being launched, viewed, and/or controlled using an electronic device; Paragraph 0071 for reference to the digital experience manager providing digital content related to the item]

While Werner discloses the limitations above, it does not disclose capturing, from a camera on the extended reality device, one or more images of a physical scene; and generating the extended reality scene as an augmented reality scene, on the display of the extended reality device, using the one or more images. However, Adato discloses the following:

capturing, from a camera on the extended reality device, one or more images of a physical scene [see at least Paragraph 0260 for reference to contextual information being determined by the image processing unit through captured image data; Figure 1 and related text regarding item 125 'capturing device']

generating the extended reality scene as an augmented reality scene, on the display of the extended reality device, using the one or more images [see at least Paragraph 0147 for reference to the capturing device providing a real-time video stream augmented with markings; Figure 11D and related text regarding the GUI display presentation of an augmented real-time video stream of the display area]

Before the effective filing date, it would have been obvious to one of ordinary skill in the art to modify the method of Werner to include the extended reality device and scene view of Adato. Doing so would assist in capturing, collecting, and automatically analyzing images of products displayed in retail stores for purposes of providing one or more functions associated with the identified products, as stated by Adato (Paragraph 0002).
Claim 7

While the combination of Werner, Adato, and Schweiger discloses the limitations above, regarding Claim 7, Werner discloses the following: wherein the augmented reality scene includes a virtual product [see at least Paragraph 0049 for reference to the item being any consumer product or good purchased or otherwise acquired by the user; Figure 1 and related text regarding item 124 'item']

Claim 8

While the combination of Werner, Adato, and Schweiger discloses the limitations above, regarding Claim 8, Werner discloses the following: wherein the virtual price tag is visualized with a virtual string connecting the virtual price tag to the selected point on the product [see at least Paragraph 0059 for reference to the smart tag being capable of being integrated into or attached to hangtags that are attached to or otherwise associated with the item]

Claim 9

While the combination of Werner, Adato, and Schweiger discloses the limitations above, regarding Claim 9, Werner discloses the following: wherein when the virtual price tag is first placed at the selected point it is visualized with a dangling motion animation from the virtual string [see at least Paragraph 0059 for reference to the smart tag being attached to the appropriate location on an item; Paragraph 0059 for reference to the smart tag being capable of being integrated into or attached to hangtags that are attached to or otherwise associated with the item]

Claim 13

While the combination of Werner, Adato, and Schweiger discloses the limitations above, Werner does not disclose wherein the virtual price tag is visualized with light conditions at a position in the scene where it is placed. Regarding Claim 13, Adato discloses the following: wherein the virtual price tag is visualized with light conditions at a position in the extended reality scene where it is placed [see at least Paragraph 0170 for reference to the analysis of image data associated with a retail shelving unit including a lighting condition; Paragraph 0266 for reference to the image processing unit recognizing the price tag on the products in the image; Paragraph 0293 for reference to the image processing unit identifying shelves in the image based on the lighting of the shelf]

Before the effective filing date, it would have been obvious to one of ordinary skill in the art to modify the method of Werner to include the updated view determination of the extended reality scene view of Adato. Doing so would assist in capturing, collecting, and automatically analyzing images of products displayed in retail stores for purposes of providing one or more functions associated with the identified products, as stated by Adato (Paragraph 0002).
Claim 14

While the combination of Werner, Adato, and Schweiger discloses the limitations above, regarding Claim 14, Werner discloses the following: receiving inputs to place the product in a shopping cart and initiating a checkout process [see at least Paragraph 0103 for reference to the selection of the purchase button enabling the user to purchase the item using the electronic device and to avoid having to wait in line at a point-of-sale terminal; Paragraph 0103 for reference to, upon leaving the store, the store scanning the smart tag of the item to get notice that the item has been purchased; Figure 9D and related text regarding item 924 'PURCHASE']

Claim 15

While the combination of Werner, Adato, and Schweiger discloses the limitations above, regarding Claim 15, Werner discloses the following: wherein the price data for the product is retrieved from an e-commerce server [see at least Paragraph 0072 for reference to the item information data store including a table of item identifiers and corresponding item descriptions, prices, model numbers/names, sizes, colors, links/references to related digital content in the digital content, identifiers of registered owners, and links/references to user accounts of registered owners in the user information; Paragraph 0072 for reference to the application information including applications that interact with the virtual price tag module to send item prices to users' electronic devices while the users are shopping in brick-and-mortar stores; Paragraph 0108 for reference to the virtual price tag module searching the item information data store to identify pricing information for the item; Figure 5 and related text regarding item 522 'virtual price tag module'; Figure 12A and related text regarding item 1216 'USER SELECTS OPTION TO GET VIRTUAL PRICE TAG FOR ITEM'; Figure 14 and related text regarding item 1408 'SEARCH DATABASE TO FIND THE PRICE OF THE ITEM'] the method further comprising: transmitting product identifying data to the e-commerce server [see at least Paragraph 0108 for reference to the request to provide a price for the item being received; Figure 14 and related text regarding item 1404 'RECEIVE FROM MOBILE DEVICE OF SHOPPER A REQUEST TO PROVIDE PRICE OF THE ITEM']

Claim 16

While the combination of Werner, Adato, and Schweiger discloses the limitations above, regarding Claim 16, Werner discloses the following: wherein upon the product being a physical product, the identifying data is extracted from pixel data of the physical product in one or more images depicting the extended reality scene and the physical product [see at least Paragraph 0057 for reference to the camera capturing an image of a barcode or a QR-code, which then may be processed by the device to extract the encoded product-identifying information]

Claim 17

While the combination of Werner, Adato, and Schweiger discloses the limitations above, regarding Claim 17, Werner discloses the following: wherein upon the product being a virtual product, the identifying data is data representing the virtual product [see at least Paragraph 0072 for reference to the item information data store including a table of item identifiers and corresponding item descriptions, prices, model numbers/names,
Read full office action

Prosecution Timeline

Mar 15, 2023: Application Filed
Apr 04, 2025: Non-Final Rejection — §101, §103
Jul 02, 2025: Response Filed
Jul 25, 2025: Final Rejection — §101, §103
Sep 29, 2025: Response after Non-Final Action
Oct 29, 2025: Request for Continued Examination
Nov 07, 2025: Response after Non-Final Action
Nov 26, 2025: Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591899
CASINO PATRON ENGAGEMENT SYSTEM
2y 5m to grant • Granted Mar 31, 2026
Patent 12586089
METHOD AND SYSTEM FOR PROCESSING EXPERIENCE DIGITAL CONTENTS
2y 5m to grant • Granted Mar 24, 2026
Patent 12555138
SYSTEMS AND METHODS FOR TRACKED ELECTRONIC COMMUNICATIONS APPORTIONMENT
2y 5m to grant • Granted Feb 17, 2026
Patent 12443911
APPARATUS AND METHODS FOR DETERMINING DELIVERY ROUTES AND TIMES BASED ON GENERATED MACHINE LEARNING MODELS
2y 5m to grant • Granted Oct 14, 2025
Patent 12443966
DISTRIBUTED TRACING TECHNIQUES FOR ACQUIRING BUSINESS INSIGHTS
2y 5m to grant • Granted Oct 14, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 14%
With Interview: 29% (+15.2%)
Median Time to Grant: 3y 8m
PTA Risk: High
Based on 154 resolved cases by this examiner. Grant probability derived from career allow rate.
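
Assuming the interview lift is additive in percentage points on the baseline grant probability, the "With Interview" figure follows directly:

    # How the interview-adjusted projection is presumably derived
    # (assumption: additive percentage-point lift on the 14% baseline).
    baseline, lift = 0.14, 0.152
    print(f"With interview: {baseline + lift:.0%}")  # -> 29%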
