Prosecution Insights
Last updated: April 19, 2026
Application No. 18/535,771

MENU HIERARCHY NAVIGATION ON ELECTRONIC MIRRORING DEVICES

Final Rejection — §102, §103
Filed: Dec 11, 2023
Examiner: NUNEZ, JORDANY
Art Unit: 2145
Tech Center: 2100 — Computer Architecture & Software
Assignee: Snap Inc.
OA Round: 2 (Final)

Grant Probability: 60% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 4y 0m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 60% (284 granted / 474 resolved; +4.9% vs TC avg)
Interview Lift: strong, +33.1% for resolved cases with interview
Typical Timeline: 4y 0m avg prosecution; 8 currently pending
Career History: 482 total applications across all art units
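The headline figures above are simple derived quantities. As a sanity check, here is a minimal sketch recomputing them from the raw counts; it assumes "With Interview" is just the career allow rate plus the interview lift in percentage points, which is an assumption on our part since the report does not state its exact model.

```python
# Recompute the examiner metrics shown above from the raw counts.
# Assumption (not stated by the report): interview-adjusted probability
# = career allow rate + interview lift, in percentage points.
granted, resolved = 284, 474

allow_rate = granted / resolved                 # career allow rate, ~0.60
interview_lift = 0.331                          # +33.1 points, per the report
with_interview = allow_rate + interview_lift    # interview-adjusted estimate

print(f"allow rate: {allow_rate:.1%}")          # rounds to the 60% shown
print(f"with interview: {with_interview:.1%}")  # rounds to the 93% shown
```

Both printed values round to the figures the dashboard displays, which suggests the "With Interview" number is an additive adjustment rather than a separately fitted probability.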

Statute-Specific Performance

§101: 10.9% (-29.1% vs TC avg)
§103: 57.5% (+17.5% vs TC avg)
§102: 18.3% (-21.7% vs TC avg)
§112: 6.3% (-33.7% vs TC avg)
Deltas are relative to the Tech Center average estimate • Based on career data from 474 resolved cases

Office Action

§102 §103
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 2, 13, 16, 17, 19, 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Wigdor et al. (US20110134047, Wigdor).
As to claims 1, 16, 20, Wigdor shows: A method, corresponding system, and corresponding non-transitory machine-readable storage medium comprising: after detecting overlap between a first body part of a user (e.g., hand 802) and a given menu option (e.g., modal region 800) of one or more menu options displayed on a video (¶ [0037]; [0032]) (e.g., modal region 800 [is] invoked by a hand posture of the user's first hand 802; there may be more than one modal region on the multi-touch display displayed at a time, and they may spatially overlap in location), detecting a gesture performed by the user using a combination of the first body part and a second body part of the user (¶ [0037]) (e.g., Upon invoking the modal region, a user's second hand 804 may direct a touch input toward the display within the modal region); and in response to detecting the gesture, displaying a set of options related to the given menu option (¶ [0037]) (e.g., direct[ing] a touch input toward the display within the modal region, upon which a menu 806 of possible modes may be displayed).

As to claim 2, Wigdor shows: The method of claim 1, the first body part comprising a first hand and the second body part comprising a second hand, the one or more menu options relating to a first level in a hierarchy of levels (fig. 8).

As to claim 13, Wigdor shows: The method of claim 1, further comprising enabling selection of the given menu option after selecting the given menu option from the set of options (¶ [0038]) (e.g., At subsequent time t.sub.3, the user may choose to, for example, email object 906 by dragging object 906 to the corresponding heading).

As to claim 17, Wigdor shows: The system of claim 16, the first body part comprising a first hand and the second body part comprising a second hand (fig. 8).
As to claim 19, Wigdor shows: The system of claim 16, the operations further comprising: accessing the video from a camera; displaying the video that depicts the user; and detecting overlap between the first body part of the user and the given menu option.

It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C.
102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art.

Note: In order to better show what is and is not taught by the references, Examiner shows some words underlined. Words that are underlined indicate teachings of the cited reference, and may not specifically be claimed.

Claims 3-6, 18, 19 are rejected under 35 U.S.C. 103 as being unpatentable over Wigdor et al. (US20110134047, Wigdor) in view of Anderson et al. (US20150098143, Anderson).

As to claims 3, 18: Wigdor shows a method, system substantially as claimed, as specified above. Wigdor further shows: any suitable touch gesture that the computing system is configured to recognize may be used (¶ [0025]). Wigdor fails to specifically show: wherein the combination comprises at least one of the first or second hands being in a closed fist configuration.

In the same field of invention, Anderson teaches: reflection based target selection. Anderson further teaches: wherein the combination comprises at least one of the first or second hands being in a closed fist configuration (¶ [0065]) (e.g., the user 350 could perform a grasping gesture, either with the hand that pre-selects the GUI element or with the other).

Thus, it would have been obvious to one of ordinary skill in the art, having the teachings of Wigdor and Anderson before the effective filing date of the invention, to have combined the teachings of Anderson with the method, system as taught by Wigdor. One would have been motivated to make such combination because a way to interact with large displays without being constrained by traditional input devices would have been obtained and desired, as expressly taught by Anderson (¶ [0006]).

As to claims 4, 19: Wigdor shows a method, and corresponding system, substantially as claimed, as specified above.
Wigdor fails to specifically show: further comprising: accessing the video from a camera; displaying the video that depicts the user; and detecting overlap in the video between the first body part of the user and the given menu option.

In the same field of invention, Anderson teaches: reflection based target selection. Anderson further teaches: accessing the video from a camera (¶ [0021]) (e.g., input devices include still or video cameras); displaying the video that depicts the user (¶ [0052]) (e.g., the user video window 482 simultaneously shows video of the user 350 performing the target movement, enabling the user 350 to quickly assess his or her movements as compared with the target movement); and detecting overlap in the video between the first body part of the user and the given menu option (¶ [0060]) (e.g., The user 350 may select a GUI element via any technically feasible technique, including, without limitation Immediate Intersection selection, crossing selection, dwelling selection, reach selection, gesture selection, and physical button selection).

Thus, it would have been obvious to one of ordinary skill in the art, having the teachings of Wigdor and Anderson before the effective filing date of the invention, to have combined the teachings of Anderson with the method, and corresponding system, as taught by Wigdor. One would have been motivated to make such combination because a way to interact with large displays without being constrained by traditional input devices would have been obtained and desired, as expressly taught by Anderson (¶ [0006]).

As to claims 5, 6: Wigdor shows a method substantially as claimed, as specified above.
Wigdor fails to specifically show: further comprising selecting an option from the set of options in response to determining that a position of the first body part of the user continues to overlap the position of the given option for an entire duration of a timer; further comprising: performing an adjustment to the given menu option based on the position of the first body part that overlaps the position of the given menu option.

In the same field of invention, Anderson teaches: reflection based target selection. Anderson further teaches: further comprising selecting an option from the set of options in response to determining that a position of the first body part of the user continues to overlap the position of the given option for an entire duration of a timer (e.g., During the dwell period, the GUI element visually expands, providing feedback to the user that the GUI element is pre-selected) (¶ [0063]); further comprising: performing an adjustment to the given menu option based on the position of the first body part that overlaps the position of the given menu option (¶ [0063]) (e.g., The activation area associated with the GUI element increases during the dwell period, allowing the hand of the user 350 to drift without losing the pre-selection. After the expiration of the dwell period, the selection is confirmed).

Thus, it would have been obvious to one of ordinary skill in the art, having the teachings of Wigdor and Anderson before the effective filing date of the invention, to have combined the teachings of Anderson with the method as taught by Wigdor. One would have been motivated to make such combination because a way to interact with large displays without being constrained by traditional input devices would have been obtained and desired, as expressly taught by Anderson (¶ [0006]).

Claims 7-10 are rejected under 35 U.S.C. 103 as being unpatentable over Wigdor et al. (US20110134047, Wigdor) in view of Anderson et al.
(US20150098143, Anderson) further in view of Marlin et al. (US20200404161, Marlin).

As to claim 7: Wigdor, Anderson show a method substantially as claimed, as specified above. Wigdor, Anderson fail to specifically show: wherein the given menu option comprises a video clip generation option that causes capture of a video clip of the user of a specified duration, and wherein the adjustment to the given menu option comprises changing the specified duration of the video clip.

In the same field of invention, Marlin teaches: method for photo and video capture. Marlin further teaches: A given menu option comprises a video clip generation option that causes capture of a video clip of the user of a specified duration, and wherein an adjustment to the given menu option comprises changing the specified duration of the video clip (abstract, fig. 11).

Thus, it would have been obvious to one of ordinary skill in the art, having the teachings of Wigdor, Anderson, Marlin before the effective filing date of the invention, to have combined the teachings of Marlin with the method as taught by Wigdor, Anderson. One would have been motivated to make such combination because a way to enable a user to not miss capture of a desired photo or video would have been obtained and desired, as expressly taught by Marlin (¶ [0009]).

As to claim 8: Wigdor, Anderson show a method substantially as claimed, as specified above. Wigdor, Anderson fail to specifically show: wherein the one or more menu options comprise an undo previously captured video clip option, a video clip generation option, and a filter option. In the same field of invention, Marlin teaches: method for photo and video capture. Marlin further teaches: one or more menu options comprise an undo previously captured video clip option, a video clip generation option, and a filter option (abstract, fig. 11).
Thus, it would have been obvious to one of ordinary skill in the art, having the teachings of Wigdor, Anderson, Marlin before the effective filing date of the invention, to have combined the teachings of Marlin with the method as taught by Wigdor, Anderson. One would have been motivated to make such combination because a way to enable a user to not miss capture of a desired photo or video would have been obtained and desired, as expressly taught by Marlin (¶ [0009]).

As to claims 9, 10: Wigdor, Anderson show a method substantially as claimed, as specified above. Anderson further teaches: and wherein the given menu option is a first option comprising a first augmented reality experience (¶ [0058]) (e.g., the user 350 interacts with the augmented reality environment 300 to directly activate GUI elements of the user interface 245 that are projected onto the augmented reality mirror 310. Such GUI elements may include, without limitation, the home screen navigation button 472, the annotation button 490, repeat module button 492, next module button 494, and keyframe navigation buttons 496.); wherein the set of options comprises a second option relating to a second augmented reality experience (¶ [0058]) (e.g., the user 350 interacts with the augmented reality environment 300 to directly activate GUI elements of the user interface 245 that are projected onto the augmented reality mirror 310. Such GUI elements may include, without limitation, the home screen navigation button 472, the annotation button 490, repeat module button 492, next module button 494, and keyframe navigation buttons 496.).

One would have been motivated to make such combination of Wigdor and Anderson because a way to interact with large displays without being constrained by traditional input devices would have been obtained and desired, as expressly taught by Anderson (¶ [0006]).
Wigdor, Anderson fail to specifically show: The method of claim 1, wherein the given menu option comprises a video clip generation option that causes capture of a video clip. In the same field of invention, Marlin teaches: method for photo and video capture. Marlin further teaches: A given menu option comprises a video clip generation option that causes capture of a video clip (abstract, fig. 11).

Thus, it would have been obvious to one of ordinary skill in the art, having the teachings of Wigdor, Anderson, Marlin before the effective filing date of the invention, to have combined the teachings of Marlin with the method as taught by Wigdor, Anderson. One would have been motivated to make such combination because a way to enable a user to not miss capture of a desired photo or video would have been obtained and desired, as expressly taught by Marlin (¶ [0009]).

Claims 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over Wigdor et al. (US20110134047, Wigdor) in view of Anderson et al. (US20150098143, Anderson) in view of Marlin et al. (US20200404161, Marlin), further in view of Fretwell et al. (US20130185679, Fretwell).

As to claim 11: Wigdor, Anderson, Marlin show a method substantially as claimed, as specified above. Wigdor, Anderson, Marlin fail to specifically show: wherein the first augmented reality experience comprises application of one or more augmented reality whole body outfits to the user, and wherein the second augmented reality experience comprises addition of one or more augmented reality elements to the video. In the same field of invention, Fretwell teaches: system on selecting object on display.
Fretwell further teaches: A first augmented reality experience comprises application of one or more augmented reality whole body outfits to the user, and wherein the second augmented reality experience comprises addition of one or more augmented reality elements to the video (¶ [0041]) (e.g., The user may select a recommended article of clothing to virtually try on the recommended article of clothing with the previously selected article of clothing. In other words, the user may virtually try on multiple articles of clothing simultaneously as an outfit. Additionally, or alternatively, the user may remove an article of clothing that the user is virtually trying on by selecting a remove selectable object on the display device 150.).

Thus, it would have been obvious to one of ordinary skill in the art, having the teachings of Wigdor, Anderson, Marlin, Fretwell before the effective filing date of the invention, to have combined the teachings of Fretwell with the method as taught by Wigdor, Anderson, Marlin. One would have been motivated to make such combination because a way to enable a user to add and/or remove articles of clothing from a virtual outfit would have been obtained and desired, as expressly taught by Fretwell (¶ [0009]).

As to claim 12, Wigdor shows: The method of claim 10, wherein the second augmented reality experience comprises launching one or more applications (¶ [0038]) (e.g., At subsequent time t.sub.3, the user may choose to, for example, email object 906 by dragging object 906 to the corresponding heading).

Claims 14, 15 are rejected under 35 U.S.C. 103 as being unpatentable over Wigdor et al. (US20110134047, Wigdor) in view of Ferens et al. (US20170285345, Ferens).

As to claims 14, 15: Wigdor shows a method substantially as claimed, as specified above.
Wigdor fails to specifically show: wherein the one or more menu options are displayed by a display of an eyewear device at a position in three-dimensional space that overlaps a static mirror; wherein the user sees a reflection of the user in the static mirror through lenses of the eyewear device and the one or more menu options appear as augmented reality elements on the lenses of the eyewear device on the static mirror.

In the same field of invention, Ferens teaches: controls and interfaces for user interactions in virtual spaces. Ferens further teaches: wherein the one or more menu options are displayed by a display of an eyewear device at a position in three-dimensional space that overlaps a static mirror; wherein the user sees a reflection of the user in the static mirror through lenses of the eyewear device and the one or more menu options appear as augmented reality elements on the lenses of the eyewear device on the static mirror (¶ [0014]) (e.g., a real object in a field of view of the user 12 may be augmented by the AR object 38 (e.g., a GUI for a menu, a 2D notation such as user weight, steps taken, etc.)).

Thus, it would have been obvious to one of ordinary skill in the art, having the teachings of Wigdor, Ferens before the effective filing date of the invention, to have combined the teachings of Ferens with the method as taught by Wigdor. One would have been motivated to make such combination because a way to provide a low cost, smart mirror functionality for reality augmentation would have been obtained and desired, as expressly taught by Ferens (¶ [0002]).

It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art.
In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Anvaripour et al. [U.S. 20210303077], navigating through augmented reality content
Schimke [U.S. 20210055838], augmented reality appearance enhancement
Ng et al. [U.S. 20210335043], augmented reality interaction and contextual menu system

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jordany Núñez whose telephone number is (571) 272-2753. The examiner can normally be reached M-F 8:30 AM - 5 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Cesar Paula, can be reached at 571-272-4128. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JORDANY NUNEZ/
Primary Examiner, Art Unit 2145
2/26/2026
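The dwell-selection behavior the rejection attributes to Anderson (¶ [0063]) — pre-selection while a tracked hand overlaps an option, an activation area that grows during the dwell period to tolerate drift, and confirmation once the timer expires — can be sketched as follows. This is an illustrative reconstruction for readers unfamiliar with the mechanism, not code from any cited reference; every name and constant here is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class MenuOption:
    name: str
    x: float
    y: float
    radius: float  # base activation radius (hypothetical units)

DWELL_SECONDS = 1.0  # hypothetical dwell period before selection confirms
GROWTH = 1.5         # activation area expands up to 1.5x during the dwell

def update(option: MenuOption, hand_xy, overlap_time: float, dt: float):
    """Advance the dwell timer by dt; return (new_overlap_time, selected)."""
    hx, hy = hand_xy
    # The activation radius grows with elapsed dwell, so a drifting hand
    # does not lose the pre-selection (cf. Anderson's expanding area).
    progress = min(overlap_time / DWELL_SECONDS, 1.0)
    r = option.radius * (1.0 + (GROWTH - 1.0) * progress)
    inside = (hx - option.x) ** 2 + (hy - option.y) ** 2 <= r ** 2
    overlap_time = overlap_time + dt if inside else 0.0  # reset on exit
    return overlap_time, overlap_time >= DWELL_SECONDS
```

Holding the hand over an option for the full dwell period confirms the selection; leaving the (expanded) activation area at any point resets the timer to zero, which is the distinction the claims draw with "for an entire duration of a timer."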

Prosecution Timeline

Dec 11, 2023
Application Filed
Sep 06, 2025
Non-Final Rejection — §102, §103
Oct 27, 2025
Response Filed
Feb 26, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579455
ANALYZING MESSAGE FLOWS TO SELECT ACTION CLAUSE PATHS FOR USE IN MANAGEMENT OF INFORMATION TECHNOLOGY ASSETS
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12578835
Devices, Methods, and Graphical User Interfaces for Interacting with Three-Dimensional Environments
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12530430
Detecting a User's Outlier Days Using Data Sensed by the User's Electronic Devices
Granted Jan 20, 2026 (2y 5m to grant)
Patent 12481723
Intelligent Data Ranking System Based on Multi-Facet Intra and Inter-Data Correlation and Data Pattern Recognition
Granted Nov 25, 2025 (2y 5m to grant)
Patent 12430533
NEURAL NETWORK PROCESSING APPARATUS, NEURAL NETWORK PROCESSING METHOD, AND NEURAL NETWORK PROCESSING PROGRAM
Granted Sep 30, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

3-4
Expected OA Rounds
60%
Grant Probability
93%
With Interview (+33.1%)
4y 0m
Median Time to Grant
Moderate
PTA Risk
Based on 474 resolved cases by this examiner. Grant probability derived from career allow rate.
