Prosecution Insights
Last updated: April 19, 2026
Application No. 18/298,101

AUGMENTED REALITY COMPARATOR

Status: Non-Final OA (§103)
Filed: Apr 10, 2023
Examiner: BARHAM, RYAN ALLEN
Art Unit: 2613
Tech Center: 2600 — Communications
Assignee: Shopify Inc.
OA Round: 4 (Non-Final)

Grant Probability: 54% (Moderate)
OA Rounds: 4-5
To Grant: 2y 8m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 54% (grants 54% of resolved cases; 7 granted / 13 resolved; -8.2% vs TC avg)
Interview Lift: +60.0% (strong; allow rate with vs. without an interview among resolved cases)
Avg Prosecution: 2y 8m (typical timeline); 19 applications currently pending
Total Applications: 32 across all art units (career history)
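As a sanity check, the card figures above are mutually consistent. A minimal sketch, assuming the allow rate is simply grants over resolved cases and the "vs TC avg" delta is in percentage points (neither formula is stated by the dashboard):

```python
# Quick check of the examiner-card arithmetic. Both formulas are
# assumptions; the dashboard does not define how it computes them.
granted, resolved = 7, 13
allow_rate = granted / resolved      # 0.538... -> displayed as 54%
tc_avg = allow_rate + 0.082          # "-8.2% vs TC avg" -> TC avg ~62%
print(f"allow rate: {allow_rate:.1%}, implied TC avg: {tc_avg:.1%}")
```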

Statute-Specific Performance

§101: 2.1% (-37.9% vs TC avg)
§103: 48.2% (+8.2% vs TC avg)
§102: 45.4% (+5.4% vs TC avg)
§112: 2.8% (-37.2% vs TC avg)
Deltas are relative to the estimated Tech Center average. Based on career data from 13 resolved cases.
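Subtracting each delta from the examiner's rate recovers the estimated Tech Center baseline, which comes out to a uniform 40.0% for all four statutes. A minimal check, assuming the deltas are percentage points:

```python
# (examiner rate, delta vs TC avg) pairs from the chart above.
# Subtracting the delta recovers the estimated TC average; all four
# rows imply the same 40.0% baseline.
rows = {
    "101": (2.1, -37.9),
    "103": (48.2, +8.2),
    "102": (45.4, +5.4),
    "112": (2.8, -37.2),
}
for statute, (rate, delta) in rows.items():
    print(f"§{statute}: implied TC avg = {rate - delta:.1f}%")
```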

Office Action: Non-Final Rejection (§103)
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4-12, 15, and 17-21 are rejected under 35 U.S.C. 103 as being unpatentable over Ma (US 11113893 B1) in view of Ravichandran (US 10776417 B1).

Regarding claim 1, Ma teaches a processor-implemented method comprising: obtaining, by a processor associated with an augmented reality device (col. 5, lines 3-14: “The processors 110 can communicate with a hardware controller for devices, such as for a display 130…. Examples of display devices are: an LCD display screen, an LED display screen, a projected, holographic, or augmented reality display (such as a heads-up display device or a head-mounted device), and so on.”), attribute data associated with a current object in a view of the augmented reality device (col. 12, lines 12-15: “Object detection engine 440 can process image data of detectable objects in an environment and detect an object type or other features (e.g., shape, associated labels, orientation, etc.) of the object.”); responsive to receiving an instruction indicating an attribute for a comparison of the current object with a comparison object (col. 2, lines 53-57: “Responsive to a selection of a glint (e.g., manual user selection or inferred selection based on a determined direction of user attention), the glint may be modified to present additional information, such as social content relating to the glint or a preview of content.”; col. 2, lines 18-21: “A “glint,” as used herein, is an actionable virtual object, anchored to another object, that can provide access to additional information or another application.”), generating, by the processor, a similarity score for a potential comparison object (col. 17, lines 7-14: “In various implementations, search results can have an associated similarity score (i.e., score for how well the result matches the identified object parameters, user, or other search criteria) or other ranking metric (e.g., relevance to features identified in the artificial reality environment, how popular the result is with other users, an estimation of value of the content item to the current user, etc.)”); and identifying, by the processor, the potential comparison object as the comparison object based upon the similarity score of the potential comparison object for the attribute data associated with the current object and the attribute indicated by the instruction (col. 17, lines 14-16: “The similarity score or ranking metric can indicate a confidence of the match to the content type and can be used to increase accuracy in detection of the object.”);
and generating, by the processor, an augmented reality graphical user interface displaying a virtual representation of the attribute of the comparison object in the view of the current object (col. 2, lines 15-28: “Aspects of the present disclosure are directed to a glint management system that can display glints on a display of an XR device and control how glints respond to an artificial environment and user interaction…. A glint displayed in the environment can be actionable for a user to view additional information, open an application, etc.”).

Ma fails to teach the similarity score as based upon comparing a first feature vector extracted using the attribute data of the current object against a second feature vector of the potential comparison object for the attribute indicated by the instruction.

Ravichandran teaches a processor-implemented method comprising: responsive to receiving an instruction indicating an attribute for a comparison of the current object with a comparison object (col. 2, line 63 – col. 3, line 1: “The feature values of the desirable visual attributes are determined and used to query the electronic catalog of items, in which items having visual attributes of similar feature values are selected and returned as search results. For example, a user may select the neckline of a first dress and the color of a second dress as the desirable visual attributes.”), generating, by the processor, a similarity score for a potential comparison object based upon comparing a first feature vector extracted using the attribute data of the current object against a second feature vector of the potential comparison object for the attribute indicated by the instruction (col. 8, lines 55-62: “The feature vectors and an item identifier may be stored in respective attribute databases 312a-c. In some embodiments, the feature vectors 310a-c of all the visual attributes may be stored in one database and associated with the item identifier. Thus, feature values of parts-based visual attributes of items in the electronic catalog can be determined and used to select or rank the items in response to a search query based on desirable visual attributes.”); and identifying, by the processor, the potential comparison object as the comparison object based upon the similarity score of the potential comparison object for the attribute data associated with the current object and the attribute indicated by the instruction (col. 8, lines 55-62, as above).

It would have been obvious to one of ordinary skill in the art to combine the feature-based similarity scoring method of Ravichandran with the visual glint method of Ma, as both inventions are in the same field of endeavor of product comparison via electronic commerce. Enabling similarity scoring for particular features would allow one using the artificial reality device of Ma to find items more closely aligned with the particular elements they desire.
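For orientation, the limitation Ravichandran is cited for amounts to per-attribute feature-vector comparison. The sketch below illustrates that general technique only; the cosine metric, the `score_candidates` helper, and the toy vectors are assumptions, not anything disclosed by Ma, Ravichandran, or the application:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Score how closely two attribute feature vectors align (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def score_candidates(current_vec: np.ndarray,
                     candidates: dict[str, np.ndarray]) -> list[tuple[str, float]]:
    """Rank potential comparison objects by similarity to the current object's
    feature vector for the user-indicated attribute (e.g., 'neckline', 'color')."""
    scores = [(item_id, cosine_similarity(current_vec, vec))
              for item_id, vec in candidates.items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)

# Hypothetical usage: in practice the vectors would come from an
# attribute-specific feature extractor applied to image data.
current = np.array([0.9, 0.1, 0.3])
catalog = {"dress-a": np.array([0.8, 0.2, 0.4]),
           "dress-b": np.array([0.1, 0.9, 0.2])}
best_id, best_score = score_candidates(current, catalog)[0]
print(best_id, round(best_score, 3))   # dress-a scores highest
```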
Regarding claim 4, Ma and Ravichandran teach the method according to claim 1. Ma further teaches wherein the attribute data includes one or more attributes, including at least one of: a dimension attribute, a text attribute, or an object type (col. 16, lines 49-53: “Identifying the real-world object can include identifying an object label or ID; an object type or category; and/or other object characteristics (e.g., size, shape, colors, text markings, user association, location or physical relationship to other objects, etc.)”).

Regarding claim 5, Ma and Ravichandran teach the method according to claim 1. Ma further teaches wherein the instruction comprises a user gesture indicating the current object for the comparison against the comparison object (col. 16, lines 17-22: “In some implementations, a glint group indicator can be maximized by a user, (e.g., ungroups so the glints the group indicator replaces are shown), such as when the user focuses on the group indicator as described in process 600 or the user directs a gesture (e.g., an “air click”) or otherwise selects the group indicator.”).

Regarding claim 6, Ma and Ravichandran teach the method according to claim 1. Ma further teaches: identifying, by the processor, a text attribute of the current object in image data for the current object by applying an object recognition engine (col. 16, lines 34-37: “At block 802, process 800 can identify a real-world object, corresponding XR tags, and/or object properties. This can be based on a suitable objection technique or a code detection technique (e.g., QR code detection).”); and recognizing, by the processor, the text in the text attribute of the current object (col. 16, lines 49-53, as in the claim 4 rejection).

Regarding claim 7, Ma and Ravichandran teach the method according to claim 1. Ma further teaches extracting, by the processor, one or more attributes of the attribute data for the current object by applying an object recognition engine on image data for the current object (col. 16, lines 34-55, as in the claim 4 and claim 6 rejections).

Regarding claim 8, Ma and Ravichandran teach the method according to claim 1. Ma further teaches: identifying, by the processor, one or more attributes of the attribute data of the current object (col. 16, lines 49-53: “Identifying the real-world object can include identifying an object label or ID; an object type or category; and/or other object characteristics (e.g., size, shape, colors, text markings, user association, location or physical relationship to other objects, etc.)”); and determining, by the processor, the comparison object having the one or more attributes in the attribute data of the comparison object (col. 16, lines 53-55: “This can include comparing features of the object with known objects to identify a content type of the object.”).

Regarding claim 9, Ma and Ravichandran teach the method according to claim 8. Ma further teaches wherein the processor compares the one or more attributes of the current object against the one or more attributes of the comparison object in response to identifying the one or more attributes of the current object (col. 12, lines 15-18: “The object detection engine 440 can utilize a suitable object detection technique to compare features of the detectable object with known objects to derive object features.”).
Regarding claim 10, Ma and Ravichandran teach the method according to claim 1. Ma further teaches wherein identifying the comparison object includes: applying, by the processor, an object recognition engine on image data of the current object to extract one or more attributes of the attribute data for the current object (col. 16, lines 34-55, as in the claim 4, 6, and 7 rejections); and applying, by the processor, a selection engine on the attribute of the current object to extract a first feature vector representing the attribute data for the current object satisfying a threshold similarity score to a second feature vector representing the attribute data for the comparison object (col. 17, lines 7-16: “In various implementations, search results can have an associated similarity score (i.e., score for how well the result matches the identified object parameters, user, or other search criteria) or other ranking metric (e.g., relevance to features identified in the artificial reality environment, how popular the result is with other users, an estimation of value of the content item to the current user, etc.) The similarity score or ranking metric can indicate a confidence of the match to the content type and can be used to increase accuracy in detection of the object.”).

Regarding claim 11, Ma and Ravichandran teach the method according to claim 1. Ma further teaches querying, by the processor, a database of comparison objects previously viewed by the augmented reality device (col. 16, line 65 – col. 17, line 5: “the search can be performed against additional or alternative external sources (e.g., wiki entries, news stores, content items in other applications, etc.) The results of the search can be one or more content items related to the object. This can be any manner of content item such as user tag/comments, database entries, shopping options, associated social media content, images or other media, links to websites, applications with associated content, etc.”).

Regarding claim 12, Ma and Ravichandran teach the method according to claim 1. Ma further teaches wherein the processor continually captures image data for a plurality of current objects and continually applies an object recognition engine on the image data for the plurality of current objects (col. 16, lines 38-40: “the system can process image data of the environment to identify objects in the environment and/or XR tags in the environment.”).

Claim 15 is functionally identical to claim 1, save that it recites a system rather than a method; it is therefore rejected on the same basis as claim 1. Claim 17 is functionally identical to claim 5, save that it depends on claim 15 rather than claim 1; it is rejected on the same basis as claim 5. Claim 18 is functionally identical to claim 6, save that it depends on claim 15 rather than claim 1; it is rejected on the same basis as claim 6. Claim 19 is functionally identical to claim 8, save that it depends on claim 15 rather than claim 1; it is rejected on the same basis as claim 8. Claim 20 is functionally identical to claim 9, save that it depends on claim 15 rather than claim 1; it is rejected on the same basis as claim 9. Claim 21 is functionally identical to claim 1, save that it recites a non-transitory machine-readable storage medium rather than a method; it is rejected on the same basis as claim 1.
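Claims 10 and 11 add a threshold similarity score and a query over objects previously viewed by the device. A sketch of how those two limitations compose, under the same assumptions as the snippet above; the threshold value and storage layout are invented for illustration:

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.85   # invented value; claim 10 only requires *a* threshold

# Claim 11's database of previously-viewed objects, modeled here as a
# plain dict of item id -> attribute feature vector.
previously_viewed: dict[str, np.ndarray] = {
    "lamp-1": np.array([0.7, 0.3, 0.1]),
    "lamp-2": np.array([0.2, 0.8, 0.5]),
}

def find_comparison_objects(current_vec: np.ndarray) -> list[str]:
    """Return previously-viewed objects whose feature vectors meet the
    threshold similarity to the current object's extracted vector."""
    def cos(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return [item_id for item_id, vec in previously_viewed.items()
            if cos(current_vec, vec) >= SIMILARITY_THRESHOLD]

print(find_comparison_objects(np.array([0.8, 0.25, 0.15])))  # ['lamp-1']
```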
Claims 2-3, 13-14, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Ma (US 11113893 B1) and Ravichandran (US 10776417 B1) as applied to claims 1 and 15 above, and further in view of Greenberger (US 11282133 B2).

Regarding claim 2, Ma and Ravichandran teach the method according to claim 1. Greenberger further teaches wherein generating the augmented reality graphical user interface includes generating, by the processor, an augmented reality overlay for the virtual representation of the attribute to be displayed in the augmented reality graphical user interface, wherein the processor generates the augmented reality overlay based upon the attribute indicated by the instruction (col. 16, lines 9-23). It would have been obvious to incorporate Greenberger’s method into Ma’s method, as both are in the same field of endeavor of augmented reality product comparison. A graphical user interface such as that taught by Greenberger would be a natural and intuitive way to properly display the comparisons being made as taught by Ma.

Regarding claim 3, Ma and Ravichandran teach the method according to claim 2. Greenberger further teaches: generating, by the processor, a second augmented reality overlay for the virtual representation of a second attribute of at least one of the comparison object or the current object; and updating, by the processor, the augmented reality graphical user interface to display a second virtual representation of the second attribute using the second augmented reality overlay for the second attribute (col. 8, lines 47-67). It would have been obvious to incorporate Greenberger’s method into Ma’s method, as both are in the same field of endeavor of augmented reality product comparison. A graphical user interface such as that taught by Greenberger would be a natural and intuitive way to properly display the comparisons being made as taught by Ma.

Regarding claim 13, Ma and Ravichandran teach the method according to claim 1. Greenberger further teaches wherein the augmented reality graphical user interface includes an augmented reality overlay of the virtual representation of the attribute of the comparison object in proximity to the attribute of the current object (col. 5, lines 28-45). It would have been obvious to incorporate Greenberger’s method into Ma’s method, as both are in the same field of endeavor of augmented reality product comparison. A graphical user interface such as that taught by Greenberger would be a natural and intuitive way to properly display the comparisons being made as taught by Ma.

Regarding claim 14, Ma and Ravichandran teach the method according to claim 1. Greenberger further teaches wherein the instruction indicating the attribute for the comparison includes a verbal instruction (col. 4, line 63 – col. 5, line 9). It would have been obvious to incorporate Greenberger’s method into Ma’s method, as both are in the same field of endeavor of augmented reality product comparison. A verbal instruction can be a convenient way to facilitate product comparison in situations where the user’s hands are otherwise occupied, such as in handling the object for comparison.

Claim 16 is functionally identical to claims 2 and 3, save that it depends on claim 15 rather than claim 1; it is rejected on the same basis as claims 2 and 3.

Response to Arguments

Applicant’s arguments with respect to claims 1 and 15 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RYAN A BARHAM, whose telephone number is (571) 272-4338. The examiner can normally be reached Mon-Fri, 8:30am-5pm EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu, can be reached at (571) 272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RYAN ALLEN BARHAM/
Examiner, Art Unit 2613

/XIAO M WU/
Supervisory Patent Examiner, Art Unit 2613
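Stepping back from the OA text: the Greenberger-based claims (2-3 and 13) reduce to generating attribute overlays anchored near the current object. A minimal data-structure sketch; the class, fields, and values are hypothetical, not taken from any cited reference:

```python
from dataclasses import dataclass

@dataclass
class AttributeOverlay:
    """One AR overlay: a virtual representation of a single attribute,
    anchored near the corresponding attribute of the current object
    (the claim 13 'in proximity' limitation)."""
    attribute: str                    # e.g., "dimension", "price"
    value: str                        # rendered text/graphic content
    anchor_xy: tuple[float, float]    # screen-space anchor near the attribute

def build_overlays(comparison_attrs: dict[str, str],
                   anchors: dict[str, tuple[float, float]]) -> list[AttributeOverlay]:
    """Generate one overlay per requested attribute; claim 3's 'second
    overlay' falls out of simply requesting a second attribute."""
    return [AttributeOverlay(name, value, anchors[name])
            for name, value in comparison_attrs.items()]

overlays = build_overlays({"width": "64 cm", "price": "$129"},
                          {"width": (0.42, 0.58), "price": (0.42, 0.66)})
print(overlays[0])
```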

Prosecution Timeline

Apr 10, 2023: Application Filed
Mar 31, 2025: Non-Final Rejection — §103
Jun 25, 2025: Applicant Interview (Telephonic)
Jun 26, 2025: Examiner Interview Summary
Jul 22, 2025: Response Filed
Aug 21, 2025: Final Rejection — §103
Sep 04, 2025: Interview Requested
Sep 11, 2025: Examiner Interview Summary
Sep 11, 2025: Applicant Interview (Telephonic)
Oct 23, 2025: Request for Continued Examination
Nov 03, 2025: Response after Non-Final Action
Nov 13, 2025: Non-Final Rejection — §103
Jan 27, 2026: Interview Requested
Feb 03, 2026: Examiner Interview Summary
Feb 03, 2026: Applicant Interview (Telephonic)
Feb 17, 2026: Response Filed
Mar 19, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12564345: MEDICAL APPARATUS, AND IMAGE GENERATION METHOD FOR VISUALIZING TEMPORAL TRENDS OF BIOMAGNETIC DATA ON AN ORGAN MODEL
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12548109: Preserving Tumor Volumes for Unsupervised Medical Image Registration
Granted Feb 10, 2026 (2y 5m to grant)

Patent 12530836: OBJECT TRANSITION BETWEEN DEVICE-WORLD-LOCKED AND PHYSICAL-WORLD-LOCKED
Granted Jan 20, 2026 (2y 5m to grant)
Study what changed in these cases to get past this examiner. Based on the 3 most recent grants.

Prosecution Projections

Expected OA Rounds: 4-5
Grant Probability: 54% (99% with interview, +60.0%)
Median Time to Grant: 2y 8m
PTA Risk: High

Based on 13 resolved cases by this examiner. Grant probability derived from career allow rate.
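One plausible reading of the "with interview" projection is the base grant probability plus the interview lift, capped at the 99% the dashboard displays; this is a guess at the arithmetic, not a documented formula:

```python
# Assumption: with-interview probability = base + lift, capped at 99%
# (the dashboard never shows 100%). Not a documented formula.
base = 0.54          # career allow rate
lift = 0.60          # interview lift, in percentage points
with_interview = min(base + lift, 0.99)
print(f"{with_interview:.0%}")   # 99%
```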
