Prosecution Insights
Last updated: April 19, 2026
Application No. 18/819,587

CURATED CONTEXTUAL OVERLAYS FOR AUGMENTED REALITY EXPERIENCES

Office Action: Non-Final (§ 102, § 103, nonstatutory double patenting)
Filed: Aug 29, 2024
Examiner: GODDARD, TAMMY
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Snap Inc.
OA Round: 1 (Non-Final)

Grant Probability: 30% (At Risk)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 5y 4m
Grant Probability with Interview: 49%

Examiner Intelligence

Career Allow Rate: 30% (41 granted / 138 resolved; -32.3% vs TC avg)
Interview Lift: +19.5% for resolved cases with interview (a strong, roughly +20% lift over cases without an interview)
Avg Prosecution: 5y 4m (typical timeline)
Currently Pending: 10 applications
Total Applications: 148 across all art units
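As a quick, illustrative sanity check (not part of the examiner's record), the headline figures above can be reproduced from the raw counts. The sketch below assumes the 49% "With Interview" estimate is simply the 30% baseline grant probability plus the +19.5% interview lift expressed in percentage points, which is consistent with how the numbers line up:

    # Illustrative recomputation of the dashboard figures; the additive
    # "with interview" assumption is ours, not stated by the report.
    granted, resolved, pending = 41, 138, 10
    career_allow_rate = granted / resolved            # 0.297 -> displayed as 30%
    total_applications = resolved + pending           # 148, matching "Total Applications"
    baseline_grant_probability = 0.30                 # application-level estimate
    interview_lift = 0.195                            # +19.5 percentage points
    with_interview = baseline_grant_probability + interview_lift  # 0.495 -> displayed as 49%
    print(f"Career allow rate: {career_allow_rate:.1%}")
    print(f"Total applications: {total_applications}")
    print(f"Grant probability with interview: {with_interview:.1%}")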

Statute-Specific Performance

§101: 3.3% (-36.7% vs TC avg)
§103: 59.4% (+19.4% vs TC avg)
§102: 19.9% (-20.1% vs TC avg)
§112: 14.1% (-25.9% vs TC avg)
Baseline: Tech Center average estimate • Based on career data from 138 resolved cases
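Reading the "vs TC avg" deltas as percentage-point differences from the examiner's own rate (an assumption; the report does not define them), the implied Tech Center averages can be backed out as a quick check, and each statute works out to roughly 40%:

    # Hypothetical back-calculation of the implied Tech Center averages,
    # assuming each delta is a percentage-point difference from the examiner's rate.
    examiner_rate = {"§101": 3.3, "§103": 59.4, "§102": 19.9, "§112": 14.1}
    delta_vs_tc = {"§101": -36.7, "§103": 19.4, "§102": -20.1, "§112": -25.9}
    for statute, rate in examiner_rate.items():
        implied_tc_avg = rate - delta_vs_tc[statute]   # e.g., 3.3 - (-36.7) = 40.0
        print(f"{statute}: examiner {rate:.1f}%, implied TC avg {implied_tc_avg:.1f}%")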

Office Action

Rejections under 35 U.S.C. § 102, 35 U.S.C. § 103, and nonstatutory double patenting
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This application is a Continuation of U.S. Application Serial No. 17/736,142, filed on May 4, 2022, now U.S. Patent No. 12,100,066, which claims priority to U.S. Provisional Application No. 63/184,448, filed on May 5, 2021.

Claim Objections

Claim 1 is objected to because of the following informalities: In the first line of the claim, the word "overlay" should be added after the word "contextual". Appropriate correction is required.

Claim 16 is objected to because of the following informalities: In the third line of the claim, the word "couple" should be the word "coupled". Appropriate correction is required.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/forms/. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claim 1 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 7 of U.S. Patent No. 12,100,066.
Although the claims at issue are not identical, they are not patentably distinct from each other: as the table below shows, the like-lettered elements of claim 1 of the instant application correspond across the columns to the like-lettered elements of claim 7 of patent 12,100,066, including all of the elements of independent claim 1 from which claim 7 depends. It is clear to one of ordinary skill in the art prior to the effective filing date of the invention that all the elements of application claim 1 are to be found in patent claim 7, as application claim 1 fully encompasses patent claim 7. The difference between application claim 1 and patent claim 7 lies in the fact that the patent claim includes many more elements and is thus much more specific. Thus, the invention of claim 7 of the patent is in effect a "species" of the "generic" invention of application claim 1. It has been held that the generic invention is "anticipated" by the "species". See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993). Since application claim 1 is anticipated by claim 7 of the patent, it is not patentably distinct from claim 7 of the patent.

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over U.S. Patent No. 12,100,066, as the instant application has at least one examined application claim that is not patentably distinct from a reference claim of U.S. Patent No. 12,100,066.

Application 18/819,587, claim 1:

1. A method of presenting a contextual {overlay} using an (a) eyewear device comprising a memory, a camera, and a display, the method comprising: (b) maintaining in the memory a profile comprising an activity and a locale; (c) capturing frames of video data at a frame rate with the camera; (d) detecting in the frames of video data a food item in a physical environment at a current item position; (e) determining the current item position relative to the display based on the frames of video data; (f) determining a current eyewear location relative to the current item position based on the frames of video data and the frame rate; (g) retrieving data associated with the food item; (h) curating a contextual overlay based on the activity and the current eyewear location relative to the locale, (i) wherein curating comprises populating the contextual overlay with at least a portion of the data; and (j) presenting on the display the contextual overlay at an overlay position relative to the display, (k) wherein the overlay position is persistently associated with the current item position.

Patent 12,100,066 (Application 17/736,142), claims 1 and 7:

1. A method of presenting a contextual overlay in response to items detected with an (a) eyewear device in a physical environment, the eyewear device comprising a memory, a camera, a microphone, a loudspeaker, a contextual overlay application, an image processing system, a localization system, and a display, the method comprising: (b) maintaining in the memory a profile comprising an activity and a locale; (c) perceiving a start command with the microphone; playing through the loudspeaker a confirming message in response to the perceived start command; (c) capturing frames of video data at a frame rate and within a field of view of the camera; with the image processing system, (d) detecting in the captured frames of video data a food item at a current item position relative to the display; (e) determining with the localization system a current eyewear location relative to the physical environment based exclusively on the captured frames of video data; updating the current eyewear location in accordance with the frame rate; (g) retrieving data associated with the detected food item with the contextual overlay application; (h) curating a contextual overlay based on the retrieved data and the profile, (i) populating the contextual overlay in accordance with the current eyewear location relative to the locale; and (j) presenting on the display the contextual overlay at an overlay position relative to the display.

7. The method of claim 1, wherein the method further comprises: (f) determining, with the localization system, the current eyewear location relative to the current item position; (j) calculating a correlation between the current item position and the display, in accordance with the current eyewear location; and (j) presenting the contextual overlay in accordance with the calculated correlation, (k) such that the contextual overlay is persistently presented adjacent the current item position as the eyewear moves through the physical environment.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-7, 9-14 and 16-19 are rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Geisner et al. (U. S. Patent Application Publication 2013/0085345 A1, already of record, hereafter ‘345).

Regarding claim 1, Geisner teaches a method of presenting a contextual (‘345; figs. 7C, 7E, 9B, 10A, 14G and 14H; ¶ 0167; In a composite image 790 of the storage location, an augmented reality image 791 in the form of an arrow is depicted to highlight the food item 652. The augmented reality image 654 which outlines the box of vegan pasta 652 could also be provided. The augmented reality image 791 is projected to the user at a defined location relative to the food item 652, e.g., above the food item and pointing at the food item, in the field of view of the user.
This is a field of view of the augmented reality system. Similarly, the augmented reality image 654 is projected to the user at a defined location relative to the food item 652, e.g., around a border of the food item 652 and conforming to the shape of the food item 652, in the field of view of the user) using an eyewear device (‘345; figs. 2A and 3A, element 2; Abstract) comprising a memory (‘345; figs. 2A, and 3A; ¶ 0100; element 214; memory 214 (e.g., DRAM)), a camera (‘345; figs. 2A, and 3A; ¶ 0083; one or more forward-facing cameras 33), and a display (‘345; fig. 2A, element 120, microdisplay units; ¶ 0097; ¶ 0102), the method comprising: maintaining in the memory (‘345; figs. 2A, and 3A; ¶ 0100; element 214; memory 214 (e.g., DRAM) a profile (‘345; fig. 5B; ¶ 0084; ¶ 0136; ¶ 0149; A user computing device 13 can also be used by the user who receives food recommendations, and includes a communication interface 15, control circuits 17, a memory 19, a display screen 21 and an input device 23. For example, the computing device may be used to set up profiles, and preferences/restrictions.) comprising an activity (‘345; ¶ 0131; ¶ 0134; A user wearing an at least partially see-through, head mounted display can register (passively or actively) their presence at an event or location and a desire to receive information about the event or location) and a locale (‘345; ¶ 0150; the system determines the location of the user. For example, the location could be detected based on image recognition of a room the user is in, GPS or cell phone positioning data, and/or by sensing wireless signals from a WI-FI.RTM. network, BLUETOOTH.RTM. network, RF or infrared beacon, or a wireless point-of-sale terminal, for instance); capturing frames of video data at a frame rate with the camera (‘345; figs. 2A, 3A and 6A; ¶ 0083; one or more forward-facing cameras 33; ¶ 0099; Images from forward facing cameras can be used to identify a physical environment of the user, including a scene which is viewed by the user, e.g., including food items, people and other objects in a field of view of the user, as well as gestures such as a hand gesture of the user; ¶ 0143; capture video (and/or still images and/or depth images) of the user's surroundings; ¶ 0128; images or frames of images captured by the 3-D camera and/or RGB camera, or any other suitable information, images, or the like); detecting in the frames of video data a food item in a physical environment at a current item position (‘345; figs. 7C-7E, 9B, 9C and 10A; ¶ 0162; ¶ 0201-0205; ¶ 0247-0251; as the user moves around the user's various food storage locations, the personal A/V apparatus will view these food storage locations (in step 706 of FIG. 7B) and capture still, video and/or depth images of the food storage locations. As the personal A/V apparatus views images of food storage locations, it will automatically recognize items on the food inventory using one or more image recognition processes in conjunction with knowing its three-dimensional location and orientation); determining the current item position relative to the display based on the frames of video data (‘345; figs. 7C-7E, 9B, 9C and 10A; ¶ 0162; ¶ 0201-0205; ¶ 0247-0251; as the user moves around the user's various food storage locations, the personal A/V apparatus will view these food storage locations (in step 706 of FIG. 7B) and capture still, video and/or depth images of the food storage locations. 
As the personal A/V apparatus views images of food storage locations, it will automatically recognize items on the food inventory using one or more image recognition processes in conjunction with knowing its three-dimensional location and orientation); determining a current eyewear location relative to the current item position based on the frames of video data and the frame rate (‘345; figs. 6A-6F; ¶ 0149-0151; ¶ 0128; images or frames of images captured by the 3-D camera and/or RGB camera, or any other suitable information, images, or the like); retrieving data associated with the food item (‘345; figs. 7C-7E, 9B, 9C and 10A; ¶ 0202-0205; ¶ 0247-0251; retrieving data associated with the detected food item with the contextual overlay application); curating a contextual overlay based on the activity (‘345; ¶ 0131; ¶ 0134; A user wearing an at least partially see-through, head mounted display can register (passively or actively) their presence at an event or location and a desire to receive information about the event or location) and the current eyewear location relative to the locale (‘345 figs. 9B, 9C; 14 G and 14H; ¶ 0202-0205; ¶ 0247-0251; curating a contextual overlay based on the retrieved data; as one example, the augmented reality image 921 is projected to the user at a defined location relative to the food item 668, e.g., centered in front of the food item 668, in the field of view of the user), wherein curating comprises populating the contextual overlay with at least a portion of the data (‘345; figs. 7C, 7E, 9B, 10A, 14G and 14H; ¶ 0167; In a composite image 790 of the storage location, an augmented reality image 791 in the form of an arrow is depicted to highlight the food item 652. The augmented reality image 654 which outlines the box of vegan pasta 652 could also be provided. The augmented reality image 791 is projected to the user at a defined location relative to the food item 652, e.g., above the food item and pointing at the food item, in the field of view of the user. This is a field of view of the augmented reality system. Similarly, the augmented reality image 654 is projected to the user at a defined location relative to the food item 652, e.g., around a border of the food item 652 and conforming to the shape of the food item 652, in the field of view of the user); and presenting on the display the contextual overlay at an overlay position (‘345; figs. 7C, 7E, 9B, 10A, 14G and 14H; ¶ 0167; In a composite image 790 of the storage location, an augmented reality image 791 in the form of an arrow is depicted to highlight the food item 652. The augmented reality image 654 which outlines the box of vegan pasta 652 could also be provided. The augmented reality image 791 is projected to the user at a defined location relative to the food item 652, e.g., above the food item and pointing at the food item, in the field of view of the user. This is a field of view of the augmented reality system. Similarly, the augmented reality image 654 is projected to the user at a defined location relative to the food item 652, e.g., around a border of the food item 652 and conforming to the shape of the food item 652, in the field of view of the user) relative to the display (‘345 figs. 9B, 9C; 14 G and 14H; ¶ 0202-0205; ¶ 0247-0251; presenting on the display the contextual overlay at an overlay position relative to the display), wherein the overlay position is persistently associated with the current item position (‘345 figs. 
9B, 9C; 14 G and 14H; ¶ 0202-0205; ¶ 0247-0251; presenting on the display the contextual overlay at an overlay position relative to the display; The augmented reality image 1464 is projected to the user at a defined location relative to the food item 1011). Regarding claim 2, Geisner teaches the method of claim 1 and further teaches wherein presenting the contextual overlay comprises: presenting the contextual overlay as an overlay relative to the physical environment (‘345; figs. 2A, 3A and 6A; ¶ 0083; one or more forward-facing cameras 33; ¶ 0099; Images from forward facing cameras can be used to identify a physical environment of the user, including a scene which is viewed by the user, e.g., including food items, people and other objects in a field of view of the user, as well as gestures such as a hand gesture of the user; ¶ 0143; capture video (and/or still images and/or depth images) of the user's surroundings); and presenting the contextual overlay as an overlay relative to the food item (‘345 figs. 9B, 9C; 14 G and 14H; ¶ 0202-0205; ¶ 0247-0251; curating a contextual overlay based on the retrieved data; as one example, the augmented reality image 921 is projected to the user at a defined location relative to the food item 668, e.g., centered in front of the food item 668, in the field of view of the user). Regarding claim 3, Geisner teaches the method of claim 1 and further teaches wherein curating the contextual overlay comprises: populating the contextual overlay in accordance with the activity (‘345; figs. 7C, 7E, 9B, 10A, 14G and 14H; ¶ 0167; In a composite image 790 of the storage location, an augmented reality image 791 in the form of an arrow is depicted to highlight the food item 652. The augmented reality image 654 which outlines the box of vegan pasta 652 could also be provided. The augmented reality image 791 is projected to the user at a defined location relative to the food item 652, e.g., above the food item and pointing at the food item, in the field of view of the user. This is a field of view of the augmented reality system. Similarly, the augmented reality image 654 is projected to the user at a defined location relative to the food item 652, e.g., around a border of the food item 652 and conforming to the shape of the food item 652, in the field of view of the user), wherein the activity comprises a purpose selected from an activity group consisting of cooking (‘345; ¶ 0171, If the user is cooking, then in step 774, the system will check a database of menus), nutrition (‘345; ¶ 0004, a food item may be recommended if the food item is compatible with nutritional goals of the user), dietary restrictions (‘345; ¶ 0004, recommendations in terms of pursuing a low salt, heart healthy, or vegan diet, or in avoiding an allergen), shopping (‘345; ¶ 0003, Technology described herein provides various embodiments for implementing an augmented reality system that can assist a user in managing a food supply in one's home, and selecting foods such as when shopping in a store or while dining in a restaurant), a recent purpose (‘345; ¶ 0170, computing devices can monitor one or more calendars to determine that a holiday is approaching, a birthday is approaching or the special family event marked on the calendar is approaching. 
In step 772, the user will be provided with a query asking if the user will be cooking for this holiday or special event), and a frequent purpose (‘345; ¶ 0003, Technology described herein provides various embodiments for implementing an augmented reality system that can assist a user in managing a food supply in one's home, and selecting foods such as when shopping in a store or while dining in a restaurant). Regarding claim 4, Geisner teaches the method of claim 1 and further teaches wherein curating the contextual overlay comprises populating the contextual overlay in accordance with the locale (‘345; ¶ 0150; the system determines the location of the user. For example, the location could be detected based on image recognition of a room the user is in, GPS or cell phone positioning data, and/or by sensing wireless signals from a WI-FI.RTM. network, BLUETOOTH.RTM. network, RF or infrared beacon, or a wireless point-of-sale terminal, for instance), wherein the locale (‘345; ¶ 0150; the system determines the location of the user. For example, the location could be detected based on image recognition of a room the user is in, GPS or cell phone positioning data, and/or by sensing wireless signals from a WI-FI.RTM. network, BLUETOOTH.RTM. network, RF or infrared beacon, or a wireless point-of-sale terminal, for instance) comprises an environment selected from a location group consisting of an outdoor market (‘345; ¶ 0197; For example, the user could be looking at food in a supermarket or other type of store.), a grocery store (‘345; ¶ 0197; For example, the user could be looking at food in a supermarket or other type of store.), a refrigerator (‘345; ¶ 0141), a pantry (‘345; figs. 6E and 6F), and a countertop (‘345; ¶ 0164). Regarding claim 5, Geisner teaches the method of claim 1 and further teaches wherein retrieving data comprises gathering food information associated with the food item from at least one of a food data library (‘345; ¶ 0188-0191; food recommendation server 50), a recipe library (‘345; ¶ 0165; recipe database), or the Internet (‘345; fig. 1; ¶ 0135; ¶ 0190; Supplemental Information Provider 504 is located remotely from personal A/V apparatus 502 so that they communication over the Internet, cellular network or other longer range communication means). Regarding claim 6, Geisner teaches the method of claim 1 and further teaches the method as further comprising: detecting in the frames of video data a subsequent food item (‘345; figs. 7C-7E, 9B, 9C and 10A; ¶ 0162; ¶ 0201-0205; ¶ 0247-0251; as the user moves around the user's various food storage locations, the personal A/V apparatus will view these food storage locations (in step 706 of FIG. 7B) and capture still, video and/or depth images of the food storage locations. As the personal A/V apparatus views images of food storage locations, it will automatically recognize items on the food inventory using one or more image recognition processes in conjunction with knowing its three-dimensional location and orientation - detecting a subsequent food item); retrieving subsequent data associated with the detected subsequent food item (‘345; figs. 7C-7E, 9B, 9C and 10A; ¶ 0202-0205; ¶ 0247-0251; retrieving subsequent data associated with the detected subsequent food item); and curating further the contextual overlay based on the retrieved subsequent data (‘345 figs. 
7C-7E, 9B, 9C; 14 G and 14H; ¶ 0202-0205; ¶ 0247-0251; curating a contextual overlay based on the retrieved data for the subsequent food item; as one example, the augmented reality image 921 is projected to the user at a defined location relative to the food item 668, e.g., centered in front of the food item 668 and or to subsequent data associated with the detected subsequent food item as further detailed in the cited paragraphs). Regarding claim 7, Geisner teaches the method of claim 1 and further teaches wherein presenting the contextual overlay comprises at least one of: presenting on the display a graphical element based on the data (‘345 figs. 9B, 9C; 14 G and 14H; ¶ 0202-0205; ¶ 0247-0251; presenting on the display the contextual overlay at an overlay position relative to the display), presenting on the display adjacent the current item position a label based on the data (‘345 figs. 7C-7E, 9B, 9C; 14 G and 14H; ¶ 0202-0205; ¶ 0247-0251; presenting on the display the contextual overlay at an overlay position relative to the display), and does not teach, or playing through a loudspeaker (‘345; ¶ 0097; two ear phones 130 of the eyewear device) a presentation message based on the data (‘345; ¶ 0209, The user can be informed of the recommendation in other ways as well, such as by an audio message or tone ( e.g. a pleasing bell for a positive recommendation or a warning horn for a negative recommendation). Regarding claim 9, Geisner teaches the method of claim 1 and further teaches wherein the eyewear device comprises a localization system (‘345; fig. 3A, elements 144, 132B, 132C, 132A; ¶ 0103; ¶ 0150; the system determines the location of the user via various means; the system determines the location of the user. For example, the location could be detected based on image recognition of a room the user is in, GPS or cell phone positioning data, and/or by sensing wireless signals from a WI-FI.RTM. network, BLUETOOTH.RTM. network, RF or infrared beacon, or a wireless point-of-sale terminal, for instance – global and various interrelated local reference frames may be determined as needed for particular segments of applications in real-time in response to the user’ activity at any one time), wherein determining the current eyewear location comprises updating the current eyewear location using the localization system (‘345; ¶ 0150), and wherein presenting the contextual overlay comprises: calculating a correlation between the current item position and the display (‘345; ¶ 0162), in accordance with the current eyewear location (‘345; ¶ 0150; ¶ 0250); and presenting the contextual overlay in accordance with the correlation (‘345 figs. 7C-7E, 9B, 9C; 14 G and 14H; ¶ 0202-0205; ¶ 0247-0251; presenting on the display the contextual overlay at an overlay position relative to the display). Regarding claim 10, Geisner teaches a contextual overlay system (‘345 figs. 2A, 2B, 9B, 9C and 10A; Abstract; A system provides a recommendation of food items to a user based on nutritional preferences of the user, using a head-mounted display device (HMDD), comprising: an eyewear device (‘345; figs. 2A and 3A, element 2; Abstract) comprising a camera (‘345; figs. 2A, and 3A; ¶ 0083; one or more forward-facing cameras 33), a memory (‘345; figs. 2A, and 3A; ¶ 0100; element 214; memory 214 (e.g., DRAM)), a processor (‘345; fig. 3A, processor 210; ¶ 0100), a localization system (‘345; fig. 
3A, elements 144, 132B, 132C, 132A; ¶ 0103; ¶ 0150; the system determines the location of the user via various means; the system determines the location of the user. For example, the location could be detected based on image recognition of a room the user is in, GPS or cell phone positioning data, and/or by sensing wireless signals from a WI-FI.RTM. network, BLUETOOTH.RTM. network, RF or infrared beacon, or a wireless point-of-sale terminal, for instance – global and various interrelated local reference frames may be determined as needed for particular segments of applications in real-time in response to the user’ activity at any one time), and a display (‘345; fig. 2A, element 120, microdisplay units; ¶ 0097; ¶ 0102); programming in the memory (‘345; figs. 2A, and 3A; ¶ 0100; element 214; memory 214 (e.g., DRAM), ¶ 0133; one or more tangible, non-transitory processor-readable storage devices, or other non-volatile or volatile storage devices. The storage device, as a computer-readable media, can be provided, e.g., by components 7, 19, 35, 55, 63, 73, 214, 326, 330, 334, 403, 406, 410, 412, 440(1)-440(6) and 476), wherein execution of the programming by the processor configures the eyewear device to perform functions (‘345; ¶ 0086), including functions to: maintain in the memory (‘345; figs. 2A, and 3A; ¶ 0100; element 214; memory 214 (e.g., DRAM)) a profile (‘345; fig. 5B; ¶ 0084; ¶ 0136; ¶ 0149; A user computing device 13 can also be used by the user who receives food recommendations, and includes a communication interface 15, control circuits 17, a memory 19, a display screen 21 and an input device 23. For example, the computing device may be used to set up profiles, and preferences/restrictions.) comprising an activity (‘345; ¶ 0131; ¶ 0134; A user wearing an at least partially see-through, head mounted display can register (passively or actively) their presence at an event or location and a desire to receive information about the event or location) and a locale (‘345; ¶ 0150; the system determines the location of the user. For example, the location could be detected based on image recognition of a room the user is in, GPS or cell phone positioning data, and/or by sensing wireless signals from a WI-FI.RTM. network, BLUETOOTH.RTM. network, RF or infrared beacon, or a wireless point-of-sale terminal, for instance); capture frames of video data at a frame rate with the camera (‘345; figs. 2A, 3A and 6A; ¶ 0083; one or more forward-facing cameras 33; ¶ 0099; Images from forward facing cameras can be used to identify a physical environment of the user, including a scene which is viewed by the user, e.g., including food items, people and other objects in a field of view of the user, as well as gestures such as a hand gesture of the user; ¶ 0143; capture video (and/or still images and/or depth images) of the user's surroundings; ¶ 0128; images or frames of images captured by the 3-D camera and/or RGB camera, or any other suitable information, images, or the like); detect in the frames of video data a food item in a physical environment at a current item position (‘345; figs. 7C-7E, 9B, 9C and 10A; ¶ 0162; ¶ 0201-0205; ¶ 0247-0251; as the user moves around the user's various food storage locations, the personal A/V apparatus will view these food storage locations (in step 706 of FIG. 7B) and capture still, video and/or depth images of the food storage locations. 
As the personal A/V apparatus views images of food storage locations, it will automatically recognize items on the food inventory using one or more image recognition processes in conjunction with knowing its three-dimensional location and orientation); determine the current item position relative to the display based on the frames of video data (‘345; figs. 7C-7E, 9B, 9C and 10A; ¶ 0162; ¶ 0201-0205; ¶ 0247-0251; as the user moves around the user's various food storage locations, the personal A/V apparatus will view these food storage locations (in step 706 of FIG. 7B) and capture still, video and/or depth images of the food storage locations. As the personal A/V apparatus views images of food storage locations, it will automatically recognize items on the food inventory using one or more image recognition processes in conjunction with knowing its three-dimensional location and orientation); determine, using the localization system (‘345; fig. 3A, elements 144, 132B, 132C, 132A; ¶ 0103; ¶ 0150; the system determines the location of the user via various means; the system determines the location of the user. For example, the location could be detected based on image recognition of a room the user is in, GPS or cell phone positioning data, and/or by sensing wireless signals from a WI-FI.RTM. network, BLUETOOTH.RTM. network, RF or infrared beacon, or a wireless point-of-sale terminal, for instance – global and various interrelated local reference frames may be determined as needed for particular segments of applications in real-time in response to the user’ activity at any one time), a current eyewear location relative to the current item position based on the frames of video data and the frame rate (‘345; figs. 6A-6F; ¶ 0149-0151; ¶ 0128; images or frames of images captured by the 3-D camera and/or RGB camera, or any other suitable information, images, or the like); retrieve data associated with the food item (‘345; figs. 7C-7E, 9B, 9C and 10A; ¶ 0202-0205; ¶ 0247-0251; retrieving data associated with the detected food item with the contextual overlay application); curate a contextual overlay based on the activity (‘345; ¶ 0131; ¶ 0134; A user wearing an at least partially see-through, head mounted display can register (passively or actively) their presence at an event or location and a desire to receive information about the event or location) and the current eyewear location relative to the locale (‘345; ¶ 0150; ¶ 0250), wherein the function curate comprises a function to populate the contextual overlay with at least a portion of the data (‘345; figs. 7C, 7E, 9B, 10A, 14G and 14H; ¶ 0167; In a composite image 790 of the storage location, an augmented reality image 791 in the form of an arrow is depicted to highlight the food item 652. The augmented reality image 654 which outlines the box of vegan pasta 652 could also be provided. The augmented reality image 791 is projected to the user at a defined location relative to the food item 652, e.g., above the food item and pointing at the food item, in the field of view of the user. This is a field of view of the augmented reality system. Similarly, the augmented reality image 654 is projected to the user at a defined location relative to the food item 652, e.g., around a border of the food item 652 and conforming to the shape of the food item 652, in the field of view of the user); and present on the display the contextual overlay at an overlay position relative to the display (‘345 figs. 
9B, 9C; 14 G and 14H; ¶ 0202-0205; ¶ 0247-0251; presenting on the display the contextual overlay at an overlay position relative to the display), wherein the overlay position is persistently associated with the current item position (‘345 figs. 9B, 9C; 14 G and 14H; ¶ 0202-0205; ¶ 0247-0251; presenting on the display the contextual overlay at an overlay position relative to the display; The augmented reality image 1464 is projected to the user at a defined location relative to the food item 1011). Regarding claim 11, Geisner teaches the contextual overlay system of claim 10 and further teaches wherein the function to present the contextual overlay comprises functions to: present the contextual overlay as an overlay relative to the physical environment (‘345; figs. 2A, 3A and 6A; ¶ 0083; one or more forward-facing cameras 33; ¶ 0099; Images from forward facing cameras can be used to identify a physical environment of the user, including a scene which is viewed by the user, e.g., including food items, people and other objects in a field of view of the user, as well as gestures such as a hand gesture of the user; ¶ 0143; capture video (and/or still images and/or depth images) of the user's surroundings); and present the contextual overlay as an overlay relative to the food item (‘345 figs. 9B, 9C; 14 G and 14H; ¶ 0202-0205; ¶ 0247-0251; curating a contextual overlay based on the retrieved data; as one example, the augmented reality image 921 is projected to the user at a defined location relative to the food item 668, e.g., centered in front of the food item 668, in the field of view of the user). Regarding claim 12, Geisner teaches the contextual overlay system of claim 10 and further teaches wherein the function to curate the contextual overlay comprises a function to: populate the contextual overlay in accordance with the activity (‘345; figs. 7C, 7E, 9B, 10A, 14G and 14H; ¶ 0167; In a composite image 790 of the storage location, an augmented reality image 791 in the form of an arrow is depicted to highlight the food item 652. The augmented reality image 654 which outlines the box of vegan pasta 652 could also be provided. The augmented reality image 791 is projected to the user at a defined location relative to the food item 652, e.g., above the food item and pointing at the food item, in the field of view of the user. This is a field of view of the augmented reality system. 
Similarly, the augmented reality image 654 is projected to the user at a defined location relative to the food item 652, e.g., around a border of the food item 652 and conforming to the shape of the food item 652, in the field of view of the user), wherein the activity comprises a purpose selected from an activity group consisting of cooking (‘345; ¶ 0171, If the user is cooking, then in step 774, the system will check a database of menus), nutrition (‘345; ¶ 0004, a food item may be recommended if the food item is compatible with nutritional goals of the user), dietary restrictions (‘345; ¶ 0004, recommendations in terms of pursuing a low salt, heart healthy, or vegan diet, or in avoiding an allergen), shopping (‘345; ¶ 0003, Technology described herein provides various embodiments for implementing an augmented reality system that can assist a user in managing a food supply in one's home, and selecting foods such as when shopping in a store or while dining in a restaurant), a recent purpose (‘345; ¶ 0170, computing devices can monitor one or more calendars to determine that a holiday is approaching, a birthday is approaching or the special family event marked on the calendar is approaching. In step 772, the user will be provided with a query asking if the user will be cooking for this holiday or special event), and a frequent purpose (‘345; ¶ 0003, Technology described herein provides various embodiments for implementing an augmented reality system that can assist a user in managing a food supply in one's home, and selecting foods such as when shopping in a store or while dining in a restaurant). Regarding claim 13, Geisner teaches the contextual overlay system of claim 10 and further teaches wherein the function to curate the contextual overlay comprises a function to: populate the contextual overlay in accordance with the locale (‘345; ¶ 0150; the system determines the location of the user. For example, the location could be detected based on image recognition of a room the user is in, GPS or cell phone positioning data, and/or by sensing wireless signals from a WI-FI.RTM. network, BLUETOOTH.RTM. network, RF or infrared beacon, or a wireless point-of-sale terminal, for instance), wherein the locale comprises an environment selected from a location group consisting of an outdoor market (‘345; ¶ 0197; For example, the user could be looking at food in a supermarket or other type of store.), a grocery store (‘345; ¶ 0197; For example, the user could be looking at food in a supermarket or other type of store.), a refrigerator (‘345; ¶ 0141), a pantry (‘345; figs. 6E and 6F), and a countertop (‘345; ¶ 0164). Regarding claim 14, Geisner teaches the contextual overlay system of claim 10 and further teaches wherein the execution configures the eyewear device to perform further functions to: detect in the frames of video data a subsequent food item (‘345; figs. 7C-7E, 9B, 9C and 10A; ¶ 0162; ¶ 0201-0205; ¶ 0247-0251; as the user moves around the user's various food storage locations, the personal A/V apparatus will view these food storage locations (in step 706 of FIG. 7B) and capture still, video and/or depth images of the food storage locations. 
As the personal A/V apparatus views images of food storage locations, it will automatically recognize items on the food inventory using one or more image recognition processes in conjunction with knowing its three-dimensional location and orientation - detecting a subsequent food item); retrieve subsequent data associated with the detected subsequent food item (‘345; figs. 7C-7E, 9B, 9C and 10A; ¶ 0202-0205; ¶ 0247-0251; retrieving subsequent data associated with the detected subsequent food item); and curate further the contextual overlay based on the retrieved subsequent data (‘345 figs. 7C-7E, 9B, 9C; 14 G and 14H; ¶ 0202-0205; ¶ 0247-0251; curating a contextual overlay based on the retrieved data for the subsequent food item; as one example, the augmented reality image 921 is projected to the user at a defined location relative to the food item 668, e.g., centered in front of the food item 668 and or to subsequent data associated with the detected subsequent food item as further detailed in the cited paragraphs). Regarding claim 16, Geisner teaches a non-transitory computer-readable medium storing program code (‘345; ¶ 0086; The control circuits provide control of hardware and/or software of the respective computing devices. For example, the control circuits can include one or more processors which execute instructions stored on one or more tangible, non-transitory processor-readable storage devices for performing processor- or computer-implemented methods as described herein. The memories can store the instructions as code, and can provide the processor-readable storage devices. The databases and/or memories can provide data stores or sources which contain data which is accessed to perform the techniques described herein) which, when executed, is operative to cause an electronic processor to perform the steps (‘345; ¶ 0086; The control circuits provide control of hardware and/or software of the respective computing devices. For example, the control circuits can include one or more processors which execute instructions stored on one or more tangible, non-transitory processor-readable storage devices for performing processor- or computer-implemented methods as described herein. The memories can store the instructions as code, and can provide the processor-readable storage devices. The databases and/or memories can provide data stores or sources which contain data which is accessed to perform the techniques described herein) of: maintaining in a memory (‘345; figs. 2A, and 3A; ¶ 0100; element 214; memory 214 (e.g., DRAM) couple to an eyewear device (‘345; figs. 2A and 3A, element 2; Abstract) a profile (‘345; fig. 5B; ¶ 0084; ¶ 0136; ¶ 0149; A user computing device 13 can also be used by the user who receives food recommendations, and includes a communication interface 15, control circuits 17, a memory 19, a display screen 21 and an input device 23. For example, the computing device may be used to set up profiles, and preferences/restrictions.) comprising an activity (‘345; ¶ 0131; ¶ 0134; A user wearing an at least partially see-through, head mounted display can register (passively or actively) their presence at an event or location and a desire to receive information about the event or location) and a locale (‘345; ¶ 0150; the system determines the location of the user. For example, the location could be detected based on image recognition of a room the user is in, GPS or cell phone positioning data, and/or by sensing wireless signals from a WI-FI.RTM. network, BLUETOOTH.RTM. 
network, RF or infrared beacon, or a wireless point-of-sale terminal, for instance), wherein the eyewear device further comprises a camera (‘345; figs. 2A, and 3A; ¶ 0083; one or more forward-facing cameras 33) and a display (‘345; fig. 2A, element 120, microdisplay units; ¶ 0097; ¶ 0102); capturing frames of video data at a frame rate with the camera (‘345; figs. 2A, 3A and 6A; ¶ 0083; one or more forward-facing cameras 33; ¶ 0099; Images from forward facing cameras can be used to identify a physical environment of the user, including a scene which is viewed by the user, e.g., including food items, people and other objects in a field of view of the user, as well as gestures such as a hand gesture of the user; ¶ 0143; capture video (and/or still images and/or depth images) of the user's surroundings; ¶ 0128; images or frames of images captured by the 3-D camera and/or RGB camera, or any other suitable information, images, or the like); detecting in the frames of video data a food item in a physical environment at a current item position (‘345; figs. 7C-7E, 9B, 9C and 10A; ¶ 0162; ¶ 0201-0205; ¶ 0247-0251; as the user moves around the user's various food storage locations, the personal A/V apparatus will view these food storage locations (in step 706 of FIG. 7B) and capture still, video and/or depth images of the food storage locations. As the personal A/V apparatus views images of food storage locations, it will automatically recognize items on the food inventory using one or more image recognition processes in conjunction with knowing its three-dimensional location and orientation); determining the current item position relative to the display based on the frames of video data (‘345; figs. 7C-7E, 9B, 9C and 10A; ¶ 0162; ¶ 0201-0205; ¶ 0247-0251; as the user moves around the user's various food storage locations, the personal A/V apparatus will view these food storage locations (in step 706 of FIG. 7B) and capture still, video and/or depth images of the food storage locations. As the personal A/V apparatus views images of food storage locations, it will automatically recognize items on the food inventory using one or more image recognition processes in conjunction with knowing its three-dimensional location and orientation); determining a current eyewear location relative to the current item position based on the frames of video data and the frame rate (‘345; figs. 6A-6F; ¶ 0149-0151; ¶ 0128; images or frames of images captured by the 3-D camera and/or RGB camera, or any other suitable information, images, or the like); retrieving data associated with the food item (‘345; figs. 7C-7E, 9B, 9C and 10A; ¶ 0202-0205; ¶ 0247-0251; retrieving data associated with the detected food item with the contextual overlay application); curating a contextual overlay based on the activity (‘345; ¶ 0131; ¶ 0134; A user wearing an at least partially see-through, head mounted display can register (passively or actively) their presence at an event or location and a desire to receive information about the event or location) and the current eyewear location relative to the locale (‘345 figs.
9B, 9C; 14 G and 14H; ¶ 0202-0205; ¶ 0247-0251; curating a contextual overlay based on the retrieved data; as one example, the augmented reality image 921 is projected to the user at a defined location relative to the food item 668, e.g., centered in front of the food item 668, in the field of view of the user), wherein curating comprises populating the contextual overlay with at least a portion of the data (‘345; figs. 7C, 7E, 9B, 10A, 14G and 14H; ¶ 0167; In a composite image 790 of the storage location, an augmented reality image 791 in the form of an arrow is depicted to highlight the food item 652. The augmented reality image 654 which outlines the box of vegan pasta 652 could also be provided. The augmented reality image 791 is projected to the user at a defined location relative to the food item 652, e.g., above the food item and pointing at the food item, in the field of view of the user. This is a field of view of the augmented reality system. Similarly, the augmented reality image 654 is projected to the user at a defined location relative to the food item 652, e.g., around a border of the food item 652 and conforming to the shape of the food item 652, in the field of view of the user); and presenting on the display the contextual overlay at an overlay position relative to the display (‘345 figs. 9B, 9C; 14 G and 14H; ¶ 0202-0205; ¶ 0247-0251; presenting on the display the contextual overlay at an overlay position relative to the display), wherein the overlay position is persistently associated with the current item position (‘345 figs. 9B, 9C; 14 G and 14H; ¶ 0202-0205; ¶ 0247-0251; presenting on the display the contextual overlay at an overlay position relative to the display; The augmented reality image 1464 is projected to the user at a defined location relative to the food item 1011). Regarding claim 17, Geisner teaches the non-transitory computer-readable medium storing program code of claim 16, wherein presenting the contextual overlay comprises: presenting the contextual overlay as an overlay relative to the physical environment (‘345; figs. 2A, 3A and 6A; ¶ 0083; one or more forward-facing cameras 33; ¶ 0099; Images from forward facing cameras can be used to identify a physical environment of the user, including a scene which is viewed by the user, e.g., including food items, people and other objects in a field of view of the user, as well as gestures such as a hand gesture of the user; ¶ 0143; capture video (and/or still images and/or depth images) of the user's surroundings); and presenting the contextual overlay as an overlay relative to the food item (‘345 figs. 9B, 9C; 14 G and 14H; ¶ 0202-0205; ¶ 0247-0251; curating a contextual overlay based on the retrieved data; as one example, the augmented reality image 921 is projected to the user at a defined location relative to the food item 668, e.g., centered in front of the food item 668, in the field of view of the user). Regarding claim 18, Geisner teaches the non-transitory computer-readable medium storing program code of claim 16 and further teaches wherein curating the contextual overlay comprises: populating the contextual overlay in accordance with the activity (‘345; figs. 7C, 7E, 9B, 10A, 14G and 14H; ¶ 0167; In a composite image 790 of the storage location, an augmented reality image 791 in the form of an arrow is depicted to highlight the food item 652. The augmented reality image 654 which outlines the box of vegan pasta 652 could also be provided. 
The augmented reality image 791 is projected to the user at a defined location relative to the food item 652, e.g., above the food item and pointing at the food item, in the field of view of the user. This is a field of view of the augmented reality system. Similarly, the augmented reality image 654 is projected to the user at a defined location relative to the food item 652, e.g., around a border of the food item 652 and conforming to the shape of the food item 652, in the field of view of the user), wherein the activity comprises a purpose selected from an activity group consisting of cooking (‘345; figs. 7C, 7E, 9B, 10A, 14G and 14H; ¶ 0167; In a composite image 790 of the storage location, an augmented reality image 791 in the form of an arrow is depicted to highlight the food item 652. The augmented reality image 654 which outlines the box of vegan pasta 652 could also be provided. The augmented reality image 791 is projected to the user at a defined location relative to the food item 652, e.g., above the food item and pointing at the food item, in the field of view of the user. This is a field of view of the augmented reality system. Similarly, the augmented reality image 654 is projected to the user at a defined location relative to the food item 652, e.g., around a border of the food item 652 and conforming to the shape of the food item 652, in the field of view of the user); and populating the contextual overlay in accordance with the locale (‘345; ¶ 0150; the system determines the location of the user. For example, the location could be detected based on image recognition of a room the user is in, GPS or cell phone positioning data, and/or by sensing wireless signals from a WI-FI.RTM. network, BLUETOOTH.RTM. network, RF or infrared beacon, or a wireless point-of-sale terminal, for instance), wherein the locale (‘345; ¶ 0150; the system determines the location of the user. For example, the location could be detected based on image recognition of a room the user is in, GPS or cell phone positioning data, and/or by sensing wireless signals from a WI-FI.RTM. network, BLUETOOTH.RTM. network, RF or infrared beacon, or a wireless point-of-sale terminal, for instance) comprises an environment selected from a location group consisting of an outdoor market (‘345; ¶ 0197; For example, the user could be looking at food in a supermarket or other type of store.), a grocery store (‘345; ¶ 0197; For example, the user could be looking at food in a supermarket or other type of store.), a refrigerator (‘345; ¶ 0141), a pantry (‘345; figs. 6E and 6F), and a countertop (‘345; ¶ 0164). Regarding claim 19, Geisner teaches the non-transitory computer-readable medium storing program code of claim 16 and further teaches wherein the program code, when executed, is operative to cause the electronic processor to perform the steps of: detecting in the frames of video data a subsequent food item (‘345; figs. 7C-7E, 9B, 9C and 10A; ¶ 0162; ¶ 0201-0205; ¶ 0247-0251; as the user moves around the user's various food storage locations, the personal A/V apparatus will view these food storage locations (in step 706 of FIG. 7B) and capture still, video and/or depth images of the food storage locations. 
As the personal A/V apparatus views images of food storage locations, it will automatically recognize items on the food inventory using one or more image recognition processes in conjunction with knowing its three-dimensional location and orientation - detecting a subsequent food item); retrieving subsequent data associated with the detected subsequent food item (‘345; figs. 7C-7E, 9B, 9C and 10A; ¶ 0202-0205; ¶ 0247-0251; retrieving subsequent data associated with the detected subsequent food item); and curating further the contextual overlay based on the retrieved subsequent data (‘345 figs. 7C-7E, 9B, 9C; 14 G and 14H; ¶ 0202-0205; ¶ 0247-0251; curating a contextual overlay based on the retrieved data for the subsequent food item; as one example, the augmented reality image 921 is projected to the user at a defined location relative to the food item 668, e.g., centered in front of the food item 668 and or to subsequent data associated with the detected subsequent food item as further detailed in the cited paragraphs). Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claims 8, 15 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Geisner et al. (U. S. Patent Application Publication 2013/0085345 A1, already of record, hereafter ‘345) as applied to claims 1-7, 9-14 and 16-19 above, and in view of Burns et al. (U. S. Patent Application Publication 2018/0101986 A1, already of record, hereafter ‘986). Regarding claim 8, Geisner teaches the method of claim 1 and further teaches, presenting on the display a plurality of graphical elements (‘345 figs. 9B, 9C; 14 G and 14H; ¶ 0202-0205; ¶ 0247-0251; presenting on the display the contextual overlay at an overlay position relative to the display), each associated with additional data associated with the food item (‘345 figs. 
9B, 9C; 14 G and 14H; ¶ 0202-0205; ¶ 0247-0251; presenting on the display the contextual overlay information associating the plurality of graphical elements with additional data associated with the food item at an overlay position relative to the display), wherein executing comprises populating the contextual overlay based on the select graphical element (‘345; figs. 7C, 7E, 9B, 10A, 14G and 14H; ¶ 0167; In a composite image 790 of the storage location, an augmented reality image 791 in the form of an arrow is depicted to highlight the food item 652. The augmented reality image 654 which outlines the box of vegan pasta 652 could also be provided. The augmented reality image 791 is projected to the user at a defined location relative to the food item 652, e.g., above the food item and pointing at the food item, in the field of view of the user. This is a field of view of the augmented reality system. Similarly, the augmented reality image 654 is projected to the user at a defined location relative to the food item 652, e.g., around a border of the food item 652 and conforming to the shape of the food item 652, in the field of view of the user) and does not teach, wherein the eyewear device further comprises a touchpad, and wherein the method comprises: detecting a current fingertip location relative to the touchpad; presenting a movable element at a current element position on the display in accordance with the detected current fingertip location; detecting a tapping gesture relative to the touchpad; identifying on the display a select graphical element positioned nearest to the current element position; and executing a selecting action relative to the select graphical element in accordance with the tapping gesture. Burns, working in the same field of endeavor, however, teaches wherein the eyewear device further comprises a touchpad (‘986; ¶ 0043; The touch-sensitive interfaces can include components that are built into or are independent of input device 120. For example, a touchpad component or device can be associated with input device 120. The touchpad includes a tactile sensor that can specifically operate as a specialized surface that can translate motion and position relative to the surface. A touchscreen electronic visual display can also operate with input device 120 to receive inputs via the touchscreen. A user can use their fingers on the touch-sensitive interfaces to provide touch input. Any of the various inputs and/or combinations thereof can be used to provide specifically defined controls for a graphical user interface on HMD 104), detecting a current fingertip location relative to the touchpad (‘986; ¶ 0042-0043; Input device 120 can also operate with touch-sensitive interfaces that either replace or operate in combination with the trigger buttons. The touch-sensitive interfaces can include components that are built into or are independent of input device 120. For example, a touchpad component or device can be associated with input device 120. The touchpad includes a tactile sensor that can specifically operate as a specialized surface that can translate motion and position relative to the surface. A touchscreen electronic visual display can also operate with input device 120 to receive inputs via the touchscreen. A user can use their fingers on the touch-sensitive interfaces to provide touch input. 
Any of the various inputs and/or combinations thereof can be used to provide specifically defined controls for a graphical user interface on HMD 104; This can include processing input based on pressure associated with input device 120 (e.g., the triggers thereof) and can be associated with a pressure sensor(s) that measures and reports pressure data); presenting a movable element at a current element position on the display in accordance with the detected current fingertip location (‘986; ¶ 0046-0047; In some implementations, HDM 104 identifies the object based on a user selecting the object using HDM 104 and/or at least one input device such as input device 120. The user may select an object using any suitable mechanism. As an example, the user may select the object using a graphical interface associated with display of the object (i.e., by providing inputs to the interface). This can include, as examples, any combination of the user gazing at the object, moving a physical and/or virtual cursor towards the object, and pointing a physical and/or virtual cursor towards the object. It is noted that as used herein, a user can refer to a user wearing a HMD, contacting the virtual reality device, and/or detectable by the virtual reality device. Any determinations made by 3D graphical visualization mechanism 102 described herein as being based on a user can be based on one or more of any combination of these types of users (e.g., gaze direction of one or more users, inputs from one or more users, etc.). Furthermore, in methods described herein determinations may be based on a different user(s) for different portions of the method. Where a cursor is employed, HDM 104 can identify an object based on a position of the cursor (in real and/or virtual space) with respect to the object. An example of a physical cursor includes input device 120 when a position of input device 120 in space corresponds to a point or region of the 3D virtual reality environment that will be affected by input from the user. For example, a user may select one object by pointing input device 120 at or at least proximate the object (as perceived by or from a perspective of the user), or select another object by pointing input device 120 at or at least proximate the other object (as perceived by or from a perspective of the user). As another example, a finger, hand, arm, or other physical portion of a person or persons could be used as a physical cursor similar to input device 120); detecting a tapping gesture relative to the touchpad (‘986; ¶ 0042-0043; Input device 120 can also operate with touch-sensitive interfaces that either replace or operate in combination with the trigger buttons. The touch-sensitive interfaces can include components that are built into or are independent of input device 120. For example, a touchpad component or device can be associated with input device 120. The touchpad includes a tactile sensor that can specifically operate as a specialized surface that can translate motion and position relative to the surface. A touchscreen electronic visual display can also operate with input device 120 to receive inputs via the touchscreen. A user can use their fingers on the touch-sensitive interfaces to provide touch input. 
Any of the various inputs and/or combinations thereof can be used to provide specifically defined controls for a graphical user interface on HMD 104; This can include processing input based on pressure associated with input device 120 (e.g., the triggers thereof) and can be associated with a pressure sensor(s) that measures and reports pressure data); identifying on the display a select graphical element positioned nearest to the current element position (‘986; ¶ 0044; Feedback generator 128 is responsible for generating feedback for a user on input device 120, which can include haptic feedback, audible feedback, visual feedback, or any combination thereof. Haptic feedback can refer to the application of forces, vibrations or motions at input device 120 to recreate a sense of touch. Feedback generator 128 may generate the feedback at the direction of I/O manager 106); and executing a selecting action relative to the select graphical element in accordance with the tapping gesture (‘986; ¶ 0046-0047; In some implementations, HDM 104 identifies the object based on a user selecting the object using HDM 104 and/or at least one input device such as input device 120. The user may select an object using any suitable mechanism. As an example, the user may select the object using a graphical interface associated with display of the object (i.e., by providing inputs to the interface). This can include, as examples, any combination of the user gazing at the object, moving a physical and/or virtual cursor towards the object, and pointing a physical and/or virtual cursor towards the object. It is noted that as used herein, a user can refer to a user wearing a HMD, contacting the virtual reality device, and/or detectable by the virtual reality device. Any determinations made by 3D graphical visualization mechanism 102 described herein as being based on a user can be based on one or more of any combination of these types of users (e.g., gaze direction of one or more users, inputs from one or more users, etc.). Furthermore, in methods described herein determinations may be based on a different user(s) for different portions of the method. Where a cursor is employed, HDM 104 can identify an object based on a position of the cursor (in real and/or virtual space) with respect to the object. An example of a physical cursor includes input device 120 when a position of input device 120 in space corresponds to a point or region of the 3D virtual reality environment that will be affected by input from the user. For example, a user may select one object by pointing input device 120 at or at least proximate the object (as perceived by or from a perspective of the user), or select another object by pointing input device 120 at or at least proximate the other object (as perceived by or from a perspective of the user). As another example, a finger, hand, arm, or other physical portion of a person or persons could be used as a physical cursor similar to input device 120.) for the benefit enabling direct tactile user input to a real physical device for interacting with real or virtual objects displayed within a 3D virtual reality environment. 
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the application to have combined the teachings of Burns where the eyewear device further comprises a touchpad, and wherein the method comprises: detecting a current fingertip location relative to the touchpad; presenting a movable element at a current element position on the display in accordance with the detected current fingertip location; detecting a tapping gesture relative to the touchpad; identifying on the display a select graphical element positioned nearest to the current element position; and executing a selecting action relative to the select graphical element in accordance with the tapping gesture with the methods of presenting a contextual overlay in response to items detected with an eyewear device in a physical environment as taught by Geisner for the benefit enabling direct tactile user input to a real physical device for interacting with real or virtual objects displayed within a 3D virtual reality environment. Regarding claim 15, Geisner teaches the contextual overlay system of claim 10 and further teaches, wherein the execution configures the eyewear device to perform further functions to: present on the display a plurality of graphical elements (‘345 figs. 9B, 9C; 14 G and 14H; ¶ 0202-0205; ¶ 0247-0251; presenting on the display the contextual overlay at an overlay position relative to the display), each associated with additional data associated with the food item (‘345 figs. 9B, 9C; 14 G and 14H; ¶ 0202-0205; ¶ 0247-0251; presenting on the display the contextual overlay information associating the plurality of graphical elements with additional data associated with the food item at an overlay position relative to the display); wherein the function to execute comprises a function to populate the contextual overlay based on the select graphical element (‘345; figs. 7C, 7E, 9B, 10A, 14G and 14H; ¶ 0167; In a composite image 790 of the storage location, an augmented reality image 791 in the form of an arrow is depicted to highlight the food item 652. The augmented reality image 654 which outlines the box of vegan pasta 652 could also be provided. The augmented reality image 791 is projected to the user at a defined location relative to the food item 652, e.g., above the food item and pointing at the food item, in the field of view of the user. This is a field of view of the augmented reality system. Similarly, the augmented reality image 654 is projected to the user at a defined location relative to the food item 652, e.g., around a border of the food item 652 and conforming to the shape of the food item 652, in the field of view of the user) and does not teach wherein the eyewear device further comprises a touchpad; detect a current fingertip location relative to the touchpad; present a movable element at a current element position on the display in accordance with the detected current fingertip location; detect a tapping gesture relative to the touchpad; identify on the display a select graphical element positioned nearest to the current element position; and execute a selecting action relative to the select graphical element in accordance with the tapping gesture. Burns, working in the same field of endeavor, however, teaches wherein the eyewear device further comprises a touchpad (‘986; ¶ 0043; The touch-sensitive interfaces can include components that are built into or are independent of input device 120. 
For example, a touchpad component or device can be associated with input device 120. The touchpad includes a tactile sensor that can specifically operate as a specialized surface that can translate motion and position relative to the surface. A touchscreen electronic visual display can also operate with input device 120 to receive inputs via the touchscreen. A user can use their fingers on the touch-sensitive interfaces to provide touch input. Any of the various inputs and/or combinations thereof can be used to provide specifically defined controls for a graphical user interface on HMD 104); detect a current fingertip location relative to the touchpad (‘986; ¶ 0042-0043; Input device 120 can also operate with touch-sensitive interfaces that either replace or operate in combination with the trigger buttons. The touch-sensitive interfaces can include components that are built into or are independent of input device 120. For example, a touchpad component or device can be associated with input device 120. The touchpad includes a tactile sensor that can specifically operate as a specialized surface that can translate motion and position relative to the surface. A touchscreen electronic visual display can also operate with input device 120 to receive inputs via the touchscreen. A user can use their fingers on the touch-sensitive interfaces to provide touch input. Any of the various inputs and/or combinations thereof can be used to provide specifically defined controls for a graphical user interface on HMD 104; This can include processing input based on pressure associated with input device 120 (e.g., the triggers thereof) and can be associated with a pressure sensor(s) that measures and reports pressure data); present a movable element at a current element position on the display in accordance with the detected current fingertip location (‘986; ¶ 0046-0047; In some implementations, HDM 104 identifies the object based on a user selecting the object using HDM 104 and/or at least one input device such as input device 120. The user may select an object using any suitable mechanism. As an example, the user may select the object using a graphical interface associated with display of the object (i.e., by providing inputs to the interface). This can include, as examples, any combination of the user gazing at the object, moving a physical and/or virtual cursor towards the object, and pointing a physical and/or virtual cursor towards the object. It is noted that as used herein, a user can refer to a user wearing a HMD, contacting the virtual reality device, and/or detectable by the virtual reality device. Any determinations made by 3D graphical visualization mechanism 102 described herein as being based on a user can be based on one or more of any combination of these types of users (e.g., gaze direction of one or more users, inputs from one or more users, etc.). Furthermore, in methods described herein determinations may be based on a different user(s) for different portions of the method. Where a cursor is employed, HDM 104 can identify an object based on a position of the cursor (in real and/or virtual space) with respect to the object. An example of a physical cursor includes input device 120 when a position of input device 120 in space corresponds to a point or region of the 3D virtual reality environment that will be affected by input from the user. 
For example, a user may select one object by pointing input device 120 at or at least proximate the object (as perceived by or from a perspective of the user), or select another object by pointing input device 120 at or at least proximate the other object (as perceived by or from a perspective of the user). As another example, a finger, hand, arm, or other physical portion of a person or persons could be used as a physical cursor similar to input device 120); detect a tapping gesture relative to the touchpad (‘986; ¶ 0042-0043; Input device 120 can also operate with touch-sensitive interfaces that either replace or operate in combination with the trigger buttons. The touch-sensitive interfaces can include components that are built into or are independent of input device 120. For example, a touchpad component or device can be associated with input device 120. The touchpad includes a tactile sensor that can specifically operate as a specialized surface that can translate motion and position relative to the surface. A touchscreen electronic visual display can also operate with input device 120 to receive inputs via the touchscreen. A user can use their fingers on the touch-sensitive interfaces to provide touch input. Any of the various inputs and/or combinations thereof can be used to provide specifically defined controls for a graphical user interface on HMD 104; This can include processing input based on pressure associated with input device 120 (e.g., the triggers thereof) and can be associated with a pressure sensor(s) that measures and reports pressure data); identify on the display a select graphical element positioned nearest to the current element position (‘986; ¶ 0044; Feedback generator 128 is responsible for generating feedback for a user on input device 120, which can include haptic feedback, audible feedback, visual feedback, or any combination thereof. Haptic feedback can refer to the application of forces, vibrations or motions at input device 120 to recreate a sense of touch. Feedback generator 128 may generate the feedback at the direction of I/O manager 106); and execute a selecting action relative to the select graphical element in accordance with the tapping gesture (‘986; ¶ 0046-0047; In some implementations, HDM 104 identifies the object based on a user selecting the object using HDM 104 and/or at least one input device such as input device 120. The user may select an object using any suitable mechanism. As an example, the user may select the object using a graphical interface associated with display of the object (i.e., by providing inputs to the interface). This can include, as examples, any combination of the user gazing at the object, moving a physical and/or virtual cursor towards the object, and pointing a physical and/or virtual cursor towards the object. It is noted that as used herein, a user can refer to a user wearing a HMD, contacting the virtual reality device, and/or detectable by the virtual reality device. Any determinations made by 3D graphical visualization mechanism 102 described herein as being based on a user can be based on one or more of any combination of these types of users (e.g., gaze direction of one or more users, inputs from one or more users, etc.). Furthermore, in methods described herein determinations may be based on a different user(s) for different portions of the method. Where a cursor is employed, HDM 104 can identify an object based on a position of the cursor (in real and/or virtual space) with respect to the object. 
An example of a physical cursor includes input device 120 when a position of input device 120 in space corresponds to a point or region of the 3D virtual reality environment that will be affected by input from the user. For example, a user may select one object by pointing input device 120 at or at least proximate the object (as perceived by or from a perspective of the user), or select another object by pointing input device 120 at or at least proximate the other object (as perceived by or from a perspective of the user). As another example, a finger, hand, arm, or other physical portion of a person or persons could be used as a physical cursor similar to input device 120.) for the benefit enabling direct tactile user input to a real physical device for interacting with real or virtual objects displayed within a 3D virtual reality environment. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the application to have combined the teachings of Burns where the eyewear device further comprises a touchpad, and wherein the method comprises: detecting a current fingertip location relative to the touchpad; presenting a movable element at a current element position on the display in accordance with the detected current fingertip location; detecting a tapping gesture relative to the touchpad; identifying on the display a select graphical element positioned nearest to the current element position; and executing a selecting action relative to the select graphical element in accordance with the tapping gesture with the methods of presenting a contextual overlay in response to items detected with an eyewear device in a physical environment as taught by Geisner for the benefit enabling direct tactile user input to a real physical device for interacting with real or virtual objects displayed within a 3D virtual reality environment. Regarding claim 20, Geisner teaches the non-transitory computer-readable medium storing program code of claim 16 and further teaches wherein the program code, when executed, is operative to cause the electronic processor to perform the steps of: presenting on the display a graphical element based on the data (‘345 figs. 9B, 9C; 14 G and 14H; ¶ 0202-0205; ¶ 0247-0251; presenting on the display the contextual overlay at an overlay position relative to the display), each associated with additional data associated with the food item (‘345 figs. 9B, 9C; 14 G and 14H; ¶ 0202-0205; ¶ 0247-0251; presenting on the display the contextual overlay information associating the plurality of graphical elements with additional data associated with the food item at an overlay position relative to the display); presenting on the display adjacent the current item position a label based on the data (‘345 figs. 9B, 9C; 14 G and 14H; ¶ 0202-0205; ¶ 0247-0251; presenting on the display the contextual overlay at an overlay position relative to the display); playing through a loudspeaker (‘345; ¶ 0097; two ear phones 130 of the eyewear device) a presentation message based on the data (‘345; ¶ 0209, The user can be informed of the recommendation in other ways as well, such as by an audio message or tone ( e.g. a pleasing bell for a positive recommendation or a warning horn for a negative recommendation); presenting on the display a plurality of graphical elements (‘345 figs. 
9B, 9C; 14 G and 14H; ¶ 0202-0205; ¶ 0247-0251; presenting on the display the contextual overlay at an overlay position relative to the display), each associated with additional data associated with the food item (‘345 figs. 9B, 9C; 14 G and 14H; ¶ 0202-0205; ¶ 0247-0251; presenting on the display the contextual overlay information associating the plurality of graphical elements with additional data associated with the food item at an overlay position relative to the display), wherein executing comprises populating the contextual overlay based on the select graphical element (‘345; figs. 7C, 7E, 9B, 10A, 14G and 14H; ¶ 0167; In a composite image 790 of the storage location, an augmented reality image 791 in the form of an arrow is depicted to highlight the food item 652. The augmented reality image 654 which outlines the box of vegan pasta 652 could also be provided. The augmented reality image 791 is projected to the user at a defined location relative to the food item 652, e.g., above the food item and pointing at the food item, in the field of view of the user. This is a field of view of the augmented reality system. Similarly, the augmented reality image 654 is projected to the user at a defined location relative to the food item 652, e.g., around a border of the food item 652 and conforming to the shape of the food item 652, in the field of view of the user); and does not teach detecting a current fingertip location relative to a touchpad coupled to the eyewear device; presenting a movable element at a current element position on the display in accordance with the detected current fingertip location; detecting a tapping gesture relative to the touchpad; identifying on the display a select graphical element positioned nearest to the current element position; and executing a selecting action relative to the select graphical element in accordance with the tapping gesture. Burns, working in the same field of endeavor, however, teaches detecting a current fingertip location relative to a touchpad (‘986; ¶ 0042-0043; Input device 120 can also operate with touch-sensitive interfaces that either replace or operate in combination with the trigger buttons. The touch-sensitive interfaces can include components that are built into or are independent of input device 120. For example, a touchpad component or device can be associated with input device 120. The touchpad includes a tactile sensor that can specifically operate as a specialized surface that can translate motion and position relative to the surface. A touchscreen electronic visual display can also operate with input device 120 to receive inputs via the touchscreen. A user can use their fingers on the touch-sensitive interfaces to provide touch input. Any of the various inputs and/or combinations thereof can be used to provide specifically defined controls for a graphical user interface on HMD 104; This can include processing input based on pressure associated with input device 120 (e.g., the triggers thereof) and can be associated with a pressure sensor(s) that measures and reports pressure data) coupled to the eyewear device (‘986; ¶ 0043; The touch-sensitive interfaces can include components that are built into or are independent of input device 120. For example, a touchpad component or device can be associated with input device 120. The touchpad includes a tactile sensor that can specifically operate as a specialized surface that can translate motion and position relative to the surface. 
A touchscreen electronic visual display can also operate with input device 120 to receive inputs via the touchscreen. A user can use their fingers on the touch-sensitive interfaces to provide touch input. Any of the various inputs and/or combinations thereof can be used to provide specifically defined controls for a graphical user interface on HMD 104); presenting a movable element at a current element position on the display in accordance with the detected current fingertip location (‘986; ¶ 0046-0047; In some implementations, HDM 104 identifies the object based on a user selecting the object using HDM 104 and/or at least one input device such as input device 120. The user may select an object using any suitable mechanism. As an example, the user may select the object using a graphical interface associated with display of the object (i.e., by providing inputs to the interface). This can include, as examples, any combination of the user gazing at the object, moving a physical and/or virtual cursor towards the object, and pointing a physical and/or virtual cursor towards the object. It is noted that as used herein, a user can refer to a user wearing a HMD, contacting the virtual reality device, and/or detectable by the virtual reality device. Any determinations made by 3D graphical visualization mechanism 102 described herein as being based on a user can be based on one or more of any combination of these types of users (e.g., gaze direction of one or more users, inputs from one or more users, etc.). Furthermore, in methods described herein determinations may be based on a different user(s) for different portions of the method. Where a cursor is employed, HDM 104 can identify an object based on a position of the cursor (in real and/or virtual space) with respect to the object. An example of a physical cursor includes input device 120 when a position of input device 120 in space corresponds to a point or region of the 3D virtual reality environment that will be affected by input from the user. For example, a user may select one object by pointing input device 120 at or at least proximate the object (as perceived by or from a perspective of the user), or select another object by pointing input device 120 at or at least proximate the other object (as perceived by or from a perspective of the user). As another example, a finger, hand, arm, or other physical portion of a person or persons could be used as a physical cursor similar to input device 120); detecting a tapping gesture relative to the touchpad (‘986; ¶ 0042-0043; Input device 120 can also operate with touch-sensitive interfaces that either replace or operate in combination with the trigger buttons. The touch-sensitive interfaces can include components that are built into or are independent of input device 120. For example, a touchpad component or device can be associated with input device 120. The touchpad includes a tactile sensor that can specifically operate as a specialized surface that can translate motion and position relative to the surface. A touchscreen electronic visual display can also operate with input device 120 to receive inputs via the touchscreen. A user can use their fingers on the touch-sensitive interfaces to provide touch input. 
Any of the various inputs and/or combinations thereof can be used to provide specifically defined controls for a graphical user interface on HMD 104; This can include processing input based on pressure associated with input device 120 (e.g., the triggers thereof) and can be associated with a pressure sensor(s) that measures and reports pressure data); identifying on the display a select graphical element positioned nearest to the current element position (‘986; ¶ 0044; Feedback generator 128 is responsible for generating feedback for a user on input device 120, which can include haptic feedback, audible feedback, visual feedback, or any combination thereof. Haptic feedback can refer to the application of forces, vibrations or motions at input device 120 to recreate a sense of touch. Feedback generator 128 may generate the feedback at the direction of I/O manager 106); and executing a selecting action relative to the select graphical element in accordance with the tapping gesture (‘986; ¶ 0046-0047; In some implementations, HDM 104 identifies the object based on a user selecting the object using HDM 104 and/or at least one input device such as input device 120. The user may select an object using any suitable mechanism. As an example, the user may select the object using a graphical interface associated with display of the object (i.e., by providing inputs to the interface). This can include, as examples, any combination of the user gazing at the object, moving a physical and/or virtual cursor towards the object, and pointing a physical and/or virtual cursor towards the object. It is noted that as used herein, a user can refer to a user wearing a HMD, contacting the virtual reality device, and/or detectable by the virtual reality device. Any determinations made by 3D graphical visualization mechanism 102 described herein as being based on a user can be based on one or more of any combination of these types of users (e.g., gaze direction of one or more users, inputs from one or more users, etc.). Furthermore, in methods described herein determinations may be based on a different user(s) for different portions of the method. Where a cursor is employed, HDM 104 can identify an object based on a position of the cursor (in real and/or virtual space) with respect to the object. An example of a physical cursor includes input device 120 when a position of input device 120 in space corresponds to a point or region of the 3D virtual reality environment that will be affected by input from the user. For example, a user may select one object by pointing input device 120 at or at least proximate the object (as perceived by or from a perspective of the user), or select another object by pointing input device 120 at or at least proximate the other object (as perceived by or from a perspective of the user). As another example, a finger, hand, arm, or other physical portion of a person or persons could be used as a physical cursor similar to input device 120.) for the benefit enabling direct tactile user input to a real physical device for interacting with a real or virtual object displayed within a 3D virtual reality environment. 
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the application to have combined the teachings of Burns where the eyewear device further comprises a touchpad, and wherein the method comprises: detecting a current fingertip location relative to the touchpad; presenting a movable element at a current element position on the display in accordance with the detected current fingertip location; detecting a tapping gesture relative to the touchpad; identifying on the display a select graphical element positioned nearest to the current element position; and executing a selecting action relative to the select graphical element in accordance with the tapping gesture with the methods of presenting a contextual overlay in response to items detected with an eyewear device in a physical environment as taught by Geisner for the benefit enabling direct tactile user input to a real physical device for interacting with real or virtual objects displayed within a 3D virtual reality environment. Conclusion The following prior art, made of record, was not relied upon but is considered pertinent to applicant's disclosure: US 20220179665 A1 Displaying User Related Contextual Keywords and Controls For User Selection And Storing And Associating Selected Keywords And User Interaction With Controls Data With User – System and method for displaying contextual keywords with corresponding associated or related or contextual one or more types of user actions, reactions, call-to-actions, relations, activities and interactions for user selection, wherein displaying contextual keywords based on plurality of factors and in the event of selection of keywords store and associate keywords with unique identity of user and in the event of selection of keyword associated action, associate keywords with unique identity of user and store data related to selected type of user actions, reactions, call-to-actions, relations, activities and interactions. Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDWARD MARTELLO whose telephone number is (571)270-1883. The examiner can normally be reached on M-F from 9AM to 5PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tammy Goddard, can be reached at telephone number (571) 272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /EDWARD MARTELLO/Primary Examiner, Art Unit 2611
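For context on the §103 combination above: the touchpad limitation mapped to Burns in claims 8, 15, and 20 amounts to a cursor-plus-nearest-element selection routine, i.e., track a fingertip on the touchpad, move a corresponding cursor ("movable element") on the display, and on a tap select whichever graphical element sits closest to the cursor. The sketch below is a minimal, hypothetical illustration of that general technique in Python; it is not drawn from Geisner '345 or Burns '986, and the names (GraphicalElement, cursor_from_touchpad, select_nearest) and the proportional touchpad-to-display mapping are assumptions.

    from dataclasses import dataclass
    from math import hypot

    # Hypothetical sketch of the claimed touchpad flow: (1) map a fingertip
    # location on the touchpad to a cursor position on the display, and
    # (2) on a tap gesture, select the graphical element nearest the cursor.

    @dataclass
    class GraphicalElement:
        label: str
        x: float  # display coordinates
        y: float

    def cursor_from_touchpad(tx, ty, pad_w, pad_h, disp_w, disp_h):
        """Proportionally map a fingertip location on the touchpad to the display."""
        return tx / pad_w * disp_w, ty / pad_h * disp_h

    def select_nearest(cursor, elements):
        """Identify the graphical element positioned nearest the current cursor position."""
        cx, cy = cursor
        return min(elements, key=lambda e: hypot(e.x - cx, e.y - cy))

    if __name__ == "__main__":
        overlay = [GraphicalElement("recipes", 120, 80),
                   GraphicalElement("nutrition", 300, 80),
                   GraphicalElement("expiration", 210, 220)]
        cursor = cursor_from_touchpad(0.42, 0.18, 1.0, 1.0, 640, 360)
        # A tap gesture would trigger the selecting action on this element:
        print(select_nearest(cursor, overlay).label)   # -> "nutrition"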

Prosecution Timeline

Aug 29, 2024
Application Filed
Feb 10, 2026
Non-Final Rejection — §102, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573004
GENERATIVE IMAGE FILLING USING A REFERENCE IMAGE
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12548257
Systems and Methods for 3D Facial Modeling
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12530839
RELIGHTABLE NEURAL RADIANCE FIELD MODEL
Granted Jan 20, 2026 (2y 5m to grant)
Patent 12462480
IMAGE PROCESSING METHOD
Granted Nov 04, 2025 (2y 5m to grant)
Patent 10140972
TEXT TO SPEECH PROCESSING SYSTEM AND METHOD, AND AN ACOUSTIC MODEL TRAINING SYSTEM AND METHOD
Granted Nov 27, 2018 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
30%
Grant Probability
49%
With Interview (+19.5%)
5y 4m
Median Time to Grant
Low
PTA Risk
Based on 138 resolved cases by this examiner. Grant probability derived from career allow rate.
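
The headline projections reconcile with simple arithmetic: the grant probability is the examiner's career allow rate over resolved cases, and the with-interview figure appears to add the 19.5-point interview lift to that base rate (treating the lift as additive percentage points is an assumption, but it matches the numbers shown). A minimal sketch:

    # Minimal sketch, assuming the interview lift is additive in percentage points.
    grant_probability = 0.30   # career allow rate over 138 resolved cases
    interview_lift = 0.195     # +19.5 points observed when an interview is held

    with_interview = grant_probability + interview_lift
    print(f"{with_interview:.1%}")   # 49.5%, consistent with the 49% shown above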
