Prosecution Insights
Last updated: April 19, 2026
Application No. 18/797,312

HYPER-CONNECTED AND SYNCHRONIZED AR GLASSES

Status: Non-Final Office Action (§102 and nonstatutory double patenting)
Filed: Aug 07, 2024
Examiner: ROBINSON, TERRELL M
Art Unit: 2614
Tech Center: 2600 — Communications
Assignee: Snap Inc.
OA Round: 1 (Non-Final)

Grant Probability: 83% (Favorable); 90% with examiner interview
Expected OA Rounds: 1-2
Expected Time to Grant: 2y 3m

Examiner Intelligence

Career Allow Rate: 83% (403 granted / 486 resolved), above average: +20.9% vs Tech Center average
Interview Lift: +7.5% allowance lift (moderate) across resolved cases with an interview
Typical Timeline: 2y 3m average prosecution; 27 applications currently pending
Career History: 513 total applications across all art units

Statute-Specific Performance

§101:  7.0%  (-33.0% vs TC avg)
§103: 54.5%  (+14.5% vs TC avg)
§102: 11.7%  (-28.3% vs TC avg)
§112: 17.2%  (-22.8% vs TC avg)
Baseline: Tech Center average estimate. Based on career data from 486 resolved cases.
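The headline figures above follow directly from the raw career counts. A minimal sketch of the arithmetic (note: the Tech Center baseline is inferred from the reported +20.9% delta, not stated directly in the source):

```python
# Reproduce the examiner-analytics headline figures from raw career counts.
granted = 403    # career grants
resolved = 486   # career resolved cases

allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.1f}%")  # prints 82.9, displayed as 83%

# The dashboard reports +20.9 points vs the Tech Center average,
# which implies a TC 2600 baseline of roughly:
tc_avg = allow_rate - 20.9
print(f"Implied TC average: {tc_avg:.1f}%")  # prints 62.0
```

The same subtraction recovers the per-statute baselines (e.g. §103: 54.5% - 14.5% = 40.0% implied TC average).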

Office Action

Rejections: 35 U.S.C. § 102 and nonstatutory double patenting
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Double Patenting (Non-Statutory)

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/forms/. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Double Patenting between App. 18/797,312 and U.S. Patent No. 12,088,781 B2

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 4, 6, 8, 14, 15, and 19 of U.S. Patent No. 12,088,781 B2 in view of Haddick (WO 2013/049248 A1). Although the claims at issue are not identical, they are not patentably distinct from each other because it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of U.S. Patent No. 12,088,781 B2 to use the synchronized AR glasses of Haddick to allow the user functions for enabling selective sharing of video streams for manipulation of objects, improving the user experience by allowing connected wearers to be in sync with their physical senses.

Application 18/797,312    U.S. Patent No. 12,088,781 B2
Claim 1                   Claim 1 + Haddick
Claim 2                   Claim 1 + Haddick
Claim 3                   Claim 6 + Haddick
Claim 4                   Claim 4 + Haddick
Claim 5                   Claim 8 + Haddick
Claim 6                   Claim 1 + Haddick
Claim 7                   Claim 7 + Haddick
Claim 8                   Claim 14 + Haddick
Claim 9                   Claim 15 + Haddick
Claim 10                  Claim 15 + Haddick
Claim 11                  Claim 15 + Haddick
Claim 12                  Claim 15 + Haddick
Claim 13                  Claim 15 + Haddick
Claim 14                  Claim 15 + Haddick
Claim 15                  Claim 15 + Haddick
Claim 16                  Claim 15 + Haddick
Claim 17                  Claim 19 + Haddick
Claim 18                  Claim 19 + Haddick
Claim 19                  Claim 19 + Haddick
Claim 20                  Claim 19 + Haddick

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by Haddick (WO 2013/049248 A1, hereinafter referenced “Haddick”).

In regards to claim 1.
Haddick discloses an electronic eyewear device adapted to be worn on a head of a first user (Haddick, Abstract), comprising:

- at least one camera arranged to capture a point of view (POV) video stream in an environment of the first user (Haddick, Fig. 21 and para [00462] and [00466]; Reference at [00462] discloses augmented reality piece 2100 includes a frame 2102 and left and right earpieces or temple pieces 2104. Protective lenses 2106, such as ballistic lenses, are mounted on the front of the frame 2102 to protect the eyes of the user or to correct the user's view of the surrounding environment if they are prescription lenses. The front portion of the frame may also be used to mount a camera or image sensor 2130. Paragraph [00466] discloses the eyepiece may be able to do context-aware capture of video that adjusts video capture parameters based on the motion of the viewer, where a parameter may be image resolution, video compression, frames per second rate, and the like.);

- a microphone arranged to capture an audio stream in the environment of the first user (Haddick, Fig. 21 and para [00462] and [00609]; Reference at [00462] discloses augmented reality piece 2100 includes a frame 2102 and left and right earpieces or temple pieces 2104. Protective lenses 2106, such as ballistic lenses, are mounted on the front of the frame 2102 to protect the eyes of the user or to correct the user's view of the surrounding environment if they are prescription lenses. The front portion of the frame may also be used to mount a camera or image sensor 2130 and one or more microphones 2132. Paragraph [00609] discloses the eyepiece may utilize digital CMOS image sensors and directional microphones (e.g. microphone arrays) as described herein, such as for visible imaging for monitoring the visible scene (e.g. for biometric recognition, gesture control, coordinated imaging with 2D/3D projected maps), IR/UV imaging for scene enhancement (e.g. seeing through haze, smoke, in the dark), sound direction sensing (e.g. the direction of a gunshot or explosion, voice detection), and the like (i.e. capturing audio stream in environment of the user);

- a display (Haddick, Fig. 21 and para [00464]; Reference discloses in an embodiment, a digital signal processor (DSP) may be programmed and/or configured to receive video feed information and configure the video feed to drive whatever type of image source is being used with the optical display);

- a memory that stores instructions (Haddick, para [00464]; Reference discloses the DSP may include a memory, such as a random access memory (RAM) or other dynamic storage device (e.g., dynamic RAM (DRAM), static RAM (SRAM), and synchronous DRAM (SDRAM)), coupled to the bus for storing information and instructions to be executed);

- and a processor that executes the instructions to perform operations (Haddick, para [00464]; Reference discloses the DSP may include a memory, such as a random access memory (RAM) or other dynamic storage device (e.g., dynamic RAM (DRAM), static RAM (SRAM), and synchronous DRAM (SDRAM)), coupled to the bus for storing information and instructions to be executed) including:

- activating a social media application to identify and select at least a second user to participate in a communication session (Haddick, para [00744]; Reference discloses using other applications, such as photo identifying software from Flickr, one can then identify the particular nearby person, and one can then download information from social networking sites with information about the person. This information may include the person's name and the profile the person has made available on sites such as Facebook, Twitter, and the like. This application may be used to refresh a user's memory of a person or to identify a nearby person, as well as to gather information about the person);

- enabling the first user to select permissions for at least the second user to access at least one of the at least one camera, the microphone, or the display of the electronic eyewear device of the first user (Haddick, para [00627]; Reference discloses there may be an authentication facility associated with accessing functionality of the eyepiece, such as access to displayed or projected content, access to restricted projected content, enabling functionality of the eyepiece itself (e.g. as through a login to access functionality of the eyepiece) either in whole or in part, and the like. Authentication allows for access to eyepiece functions i.e. camera, microphone, or display);

- establishing an always-on communication session with a second electronic eyewear device adapted to be worn on a head of the second user (Haddick, para [00651]; Reference discloses the user of one eyepiece may be able to synchronize their view of a projected image or video with at least the view of a second user of an eyepiece or other video display device…Communications between the eyepieces may be direct, through an Internet network, through the cell-network, through a satellite network, and the like. Processing of position information contributing to the synchronization may be executed in a master processor in a single eyepiece, collectively amongst a group of eyepieces, in remote server system, and the like, or any combination thereof);

- sharing the permissions with the second user upon establishment of the communication session (Haddick, para [00627] and [00744]; Reference at [00744] discloses in another example, a person may be able to post that comment at the location of the place such that the comment is available when another person comes to that location. In this way, a wearer may be able to access comments left by others when they come to the location (i.e. shared permission as content is left behind for second user via the authentications described in [00627]);

- selectively sharing at least one of the audio stream or the POV video stream with the second electronic eyewear device of the second user during the communication session (Haddick, para [00651]; Reference discloses the user of one eyepiece may be able to synchronize their view of a projected image or video with at least the view of a second user of an eyepiece or other video display device (i.e. selective sharing of at least one of audio or video between eyewear devices for two users)… For example, a group of concertgoers may synchronize their eyepieces with a feed from the concert producers such that visual effects or audio may be pushed to people with eyepieces by the concert producer, performers, other audience members, and the like. In an example, the performer may have a master eyepiece and may control sending content to audience members.),

- wherein the second user selectively accesses the at least one of the camera, the microphone, or the display during the communication session in accordance with the selected permissions (Haddick, para [00651]; Reference discloses the user of one eyepiece may be able to synchronize their view of a projected image or video with at least the view of a second user of an eyepiece or other video display device (i.e. selective sharing of at least one of audio or video between eyewear devices for two users)… For example, a group of concertgoers may synchronize their eyepieces with a feed from the concert producers such that visual effects or audio may be pushed to people with eyepieces by the concert producer, performers, other audience members, and the like. In an example, the performer may have a master eyepiece and may control sending content to audience members. The content of the performers or audience members being accessed is interpreted as secondary users selectively accessing the camera, microphone, or display during the communication session in accordance with the selected permissions).

In regards to claim 2. Haddick discloses the electronic eyewear device of claim 1. Haddick further discloses:

- wherein enabling the first user to select permissions for at least the second user includes the processor executing instructions to cause the processor to perform additional operations including enabling the first user to set permissions for at least the second user to access at least one of the audio stream or the POV video stream (Haddick, para [00627]; Reference discloses there may be an authentication facility associated with accessing functionality of the eyepiece, such as access to displayed or projected content (i.e. audio or POV stream), access to restricted projected content, enabling functionality of the eyepiece itself (e.g. as through a login to access functionality of the eyepiece) either in whole or in part, and the like. Authentication allows for access to eyepiece functions i.e. camera, microphone, or display).

In regards to claim 3. Haddick discloses the electronic eyewear device of claim 2.
Haddick further discloses -wherein the at least one camera comprises at least two cameras that capture respective POV video streams in the environment of the first user, and wherein enabling the first user to select permissions includes the processor executing instructions to cause the processor to perform additional operations including at least one of determining which POV video stream from the at least two cameras the second user may access and for how long or determining whether the audio stream may be accessed by the second user and for how long (Haddick, para [00627] and [00743]; Reference at [00627] discloses there may be an authentication facility associated with accessing functionality of the eyepiece, such as access to displayed or projected content (i.e. audio or POV stream), access to restricted projected content, enabling functionality of the eyepiece itself (e.g. as through a login to access functionality of the eyepiece) either in whole or in part, and the like (i.e. authentication allows for access to eyepiece functions i.e. camera, microphone, or display interpreted as permissions). Para [00743] discloses in an embodiment, when the wearer points the eyepiece in a target user's direction, they may indicate interest in the user if the eyepiece is pointed for a duration of time and/or a gesture, head, eye, or audio control is activated. The target user may receive an indication of interest on their phone or in their glasses. If the target user had marked the wearer as interesting but was waiting on the wearer to show interest first, an indication may immediately pop up in the eyepiece of the target user's interest). In regards to claim 4. Haddick discloses the electronic eyewear device of claim 1. 
Haddick further discloses -wherein enabling the first user to select permissions for at least the second user includes the processor executing instructions to cause the processor to perform additional operations including enabling the first user to select to receive a notification to the display when at least the second user has opted to view the POV video stream, listen to the audio stream, or both during the always-on communication session (Haddick, para [00743]; Reference discloses in an embodiment, when the wearer points the eyepiece in a target user's direction, they may indicate interest in the user if the eyepiece is pointed for a duration of time and/or a gesture, head, eye, or audio control is activated. The target user may receive an indication of interest on their phone or in their glasses. If the target user had marked the wearer as interesting but was waiting on the wearer to show interest first, an indication may immediately pop up in the eyepiece of the target user's interest (i.e. providing a notification to the display when the second user has opted to view the video stream)). In regards to claim 5. Haddick discloses the electronic eyewear device of claim 1. Haddick further discloses -wherein selectively sharing at least one of the audio stream or the POV video stream with the second electronic eyewear device of the second user during the communication session comprises the processor executing the instructions to cause the processor to share the at least one of the audio stream or the POV video stream with the second electronic eyewear device of the second user for a duration of sharing time specified by the first user (Haddick, para [00743]; Reference discloses in an embodiment, when the wearer points the eyepiece in a target user's direction, they may indicate interest in the user if the eyepiece is pointed for a duration of time and/or a gesture, head, eye, or audio control is activated. 
The target user may receive an indication of interest on their phone or in their glasses. If the target user had marked the wearer as interesting but was waiting on the wearer to show interest first, an indication may immediately pop up in the eyepiece of the target user's interest).

In regards to claim 6. Haddick discloses the electronic eyewear device of claim 5. Haddick further discloses -wherein selectively sharing at least one of the audio stream or the POV video stream with the second electronic eyewear device of the second user during the communication session comprises the processor executing the instructions to end the sharing when the first user or the second user in the always-on communication session taps the first or second electronic eyewear devices or provides a gesture to end the sharing (Haddick, para [00684]; Reference discloses control of the eyepiece may be enabled through gestures by the wearer. For instance, the eyepiece may have a camera that views outward (e.g. forward, to the side, down) and interprets gestures or movements of the hand of the wearer as control signals. Hand signals may include passing the hand past the camera, hand positions or sign language in front of the camera, pointing to a real-world object (such as to activate augmentation of the object), and the like. Hand motions may also be used to manipulate objects displayed on the inside of the translucent lens, such as moving an object, rotating an object, deleting an object, opening-closing a screen or window in the image, and the like (i.e. reference discloses gesture control as the specific tap gesture to end a session would be a design choice)).

In regards to claim 7. Haddick discloses the electronic eyewear device of claim 6.
Haddick further discloses -wherein selectively sharing at least one of the audio stream or the POV video stream with the another electronic eyewear device of the second user during the communication session comprises the processor executing the instructions to initiate an audio stream, POV video stream, or both to share with the second user or to receive a request from the second user to drop in on at least one of the audio stream or POV video stream that is currently being captured by at least one of the camera or the microphone (Haddick, para [00651]; Reference discloses in embodiments, the user of one eyepiece may be able to synchronize their view of a projected image or video with at least the view of a second user of an eyepiece or other video display device (interpreted as initiating an audio stream, video stream, or both to share with at least the second user)). In regards to claim 8. Haddick discloses the electronic eyewear device of claim 1. Haddick further discloses -wherein establishing the always-on communication session with another electronic eyewear device comprises the processor executing the instructions to establish a livestreaming session of at least one of audio or video to a server for selection by the second user (Haddick, para [00466]; Reference discloses the eyepiece may be able to do context-aware capture of video that adjusts video capture parameters based on the motion of the viewer, where a parameter may be image resolution, video compression, frames per second rate, and the like. The eyepiece may be used for a plurality of video applications, such as recording video taken through an integrated camera or as transmitted from an external video device, playing back video to the wearer through the eyepiece (by methods and systems as described herein), streaming live video either from an external source (e.g. a conference call, a live news feed, a video stream from another eyepiece) or from an integrated camera (e.g. 
from an integrated non-line-of-sight camera), and the like). In regards to claim 9. Haddick discloses a method of providing hyper-connectivity between a first user of a first electronic eyewear device and a second user of a second electronic eyewear device (Haddick, Abstract), including: -activating a social media application to enable the first user to identify and select at least the second user to participate in a communication session (Haddick, para [00744]; Reference discloses using other applications, such as photo identifying software from Flickr, one can then identify the particular nearby person, and one can then download information from social networking sites with information about the person. This information may include the person's name and the profile the person has made available on sites such as Facebook, Twitter, and the like. This application may be used to refresh a user's memory of a person or to identify a nearby person, as well as to gather information about the person); -enabling the first user to select permissions for at least the second user to access at least one of a camera, a microphone, or a display of the first electronic eyewear device (Haddick, para [00627]; Reference discloses there may be an authentication facility associated with accessing functionality of the eyepiece, such as access to displayed or projected content, access to restricted projected content, enabling functionality of the eyepiece itself (e.g. as through a login to access functionality of the eyepiece) either in whole or in part, and the like. Authentication allows for access to eyepiece functions i.e. 
camera, microphone, or display); -establishing an always-on communication session between the first electronic eyewear device and the second electronic eyewear device (Haddick, para [00651]; Reference discloses the user of one eyepiece may be able to synchronize their view of a projected image or video with at least the view of a second user of an eyepiece or other video display device…Communications between the eyepieces may be direct, through an Internet network, through the cell-network, through a satellite network, and the like. Processing of position information contributing to the synchronization may be executed in a master processor in a single eyepiece, collectively amongst a group of eyepieces, in remote server system, and the like, or any combination thereof); -enabling the first user to share permissions with the second user upon establishment of the communication session (Haddick, para [00627] and [00744]; Reference at [00744] discloses in another example, a person may be able to post that comment at the location of the place such that the comment is available when another person comes to that location. In this way, a wearer may be able to access comments left by others when they come to the location (i.e. shared permission as content is left behind for second user via the authentications described in [00627]); -selectively sharing at least one of an audio stream or a point of view (POV) video stream from the first electronic eyewear device with the second electronic eyewear device during the communication session (Haddick, para [00651]; Reference discloses the user of one eyepiece may be able to synchronize their view of a projected image or video with at least the view of a second user of an eyepiece or other video display device (i.e. 
selective sharing of at least one of audio or video between eyewear devices for two users)… For example, a group of concertgoers may synchronize their eyepieces with a feed from the concert producers such that visual effects or audio may be pushed to people with eyepieces by the concert producer, performers, other audience members, and the like. In an example, the performer may have a master eyepiece and may control sending content to audience members), -wherein the second user selectively accesses at least one of the camera, the microphone, or the display of the electronic eyewear device of the first electronic eyewear device during the communication session in accordance with the selected permissions (Haddick, para [00651]; Reference discloses the user of one eyepiece may be able to synchronize their view of a projected image or video with at least the view of a second user of an eyepiece or other video display device (i.e. selective sharing of at least one of audio or video between eyewear devices for two users)… For example, a group of concertgoers may synchronize their eyepieces with a feed from the concert producers such that visual effects or audio may be pushed to people with eyepieces by the concert producer, performers, other audience members, and the like. In an example, the performer may have a master eyepiece and may control sending content to audience members. The content of the performers or audience members being accessed is interpreted as secondary users selectively accessing the camera, microphone, or display during the communication session in accordance with the selected permissions). In regards to claim 10. Haddick discloses the method of claim 9. 
Haddick further discloses wherein enabling the first user to select permissions for at least the second user includes enabling the first user to select permissions for at least the second user to access at least one of the audio stream or the POV video stream from the electronic eyewear device of the first user (Haddick, para [00627]; Reference discloses there may be an authentication facility associated with accessing functionality of the eyepiece, such as access to displayed or projected content (i.e. audio or POV stream), access to restricted projected content, enabling functionality of the eyepiece itself (e.g. as through a login to access functionality of the eyepiece) either in whole or in part, and the like. Authentication allows for access to eyepiece functions i.e. camera, microphone, or display). In regards to claim 11. Haddick discloses the method of claim 10. Haddick further discloses -wherein enabling the first user to select permissions for at least the second user includes at least one of determining which POV video stream from at least two cameras of the first electronic eyewear device the second user may access and for how long or determining whether the audio stream may be accessed by the second user and for how long (Haddick, para [00627] and [00743]; Reference at [00627] discloses there may be an authentication facility associated with accessing functionality of the eyepiece, such as access to displayed or projected content (i.e. audio or POV stream), access to restricted projected content, enabling functionality of the eyepiece itself (e.g. as through a login to access functionality of the eyepiece) either in whole or in part, and the like (i.e. authentication allows for access to eyepiece functions i.e. camera, microphone, or display interpreted as permissions). 
Para [00743] discloses in an embodiment, when the wearer points the eyepiece in a target user's direction, they may indicate interest in the user if the eyepiece is pointed for a duration of time and/or a gesture, head, eye, or audio control is activated. The target user may receive an indication of interest on their phone or in their glasses. If the target user had marked the wearer as interesting but was waiting on the wearer to show interest first, an indication may immediately pop up in the eyepiece of the target user's interest). In regards to claim 12. Haddick discloses the method of claim 9. Haddick further discloses -wherein enabling the first user to select permissions for at least the second user comprises giving the first user an option of receiving a notification to the display when at least the second user has opted to view the POV video stream, listen to the audio stream, or both during the always-on communication session (Haddick, para [00743]; Reference discloses in an embodiment, when the wearer points the eyepiece in a target user's direction, they may indicate interest in the user if the eyepiece is pointed for a duration of time and/or a gesture, head, eye, or audio control is activated. The target user may receive an indication of interest on their phone or in their glasses. If the target user had marked the wearer as interesting but was waiting on the wearer to show interest first, an indication may immediately pop up in the eyepiece of the target user's interest (i.e. providing a notification to the display when the second user has opted to view the video stream)). In regards to claim 13. Haddick discloses the method of claim 9. 
Haddick further discloses -further comprising the first user sharing at least one of the audio stream or the POV video stream with the second electronic eyewear device of the second user during the communication session for a duration of sharing time specified by the first user (Haddick, para [00743]; Reference discloses in an embodiment, when the wearer points the eyepiece in a target user's direction, they may indicate interest in the user if the eyepiece is pointed for a duration of time and/or a gesture, head, eye, or audio control is activated. The target user may receive an indication of interest on their phone or in their glasses. If the target user had marked the wearer as interesting but was waiting on the wearer to show interest first, an indication may immediately pop up in the eyepiece of the target user's interest).

In regards to claim 14. Haddick discloses the method of claim 13. Haddick further discloses -further comprising ending the selectively sharing of at least one of the audio stream or the POV video stream with the second electronic eyewear device of the second user during the communication session in response to at least one of the first user or the second user in the always-on communication session tapping at least one of the first or second electronic eyewear devices or providing a gesture to end the sharing (Haddick, para [00684]; Reference discloses control of the eyepiece may be enabled through gestures by the wearer. For instance, the eyepiece may have a camera that views outward (e.g. forward, to the side, down) and interprets gestures or movements of the hand of the wearer as control signals. Hand signals may include passing the hand past the camera, hand positions or sign language in front of the camera, pointing to a real-world object (such as to activate augmentation of the object), and the like.
Hand motions may also be used to manipulate objects displayed on the inside of the translucent lens, such as moving an object, rotating an object, deleting an object, opening-closing a screen or window in the image, and the like (i.e. reference discloses gesture control as the specific tap gesture to end a session would be a design choice)).

In regards to claim 15. Haddick discloses the method of claim 14. Haddick further discloses -wherein selectively sharing at least one of the audio stream or the POV video stream with the second electronic eyewear device of the second user during the communication session includes initiating an audio stream, POV video stream, or both to share with the second user or receiving a request from the second user to drop in on at least one of the audio stream or POV video stream that is currently being captured by at least one of the camera or the microphone of the first electronic eyewear device (Haddick, para [00651]; Reference discloses in embodiments, the user of one eyepiece may be able to synchronize their view of a projected image or video with at least the view of a second user of an eyepiece or other video display device (interpreted as initiating an audio stream, video stream, or both to share with at least the second user)).

In regards to claim 16. Haddick discloses the method of claim 9. Haddick further discloses -wherein establishing the always-on communication session with the second electronic eyewear device includes establishing a livestreaming session of at least one of audio or video to a server for selection by the second user (Haddick, para [00466]; Reference discloses the eyepiece may be able to do context-aware capture of video that adjusts video capture parameters based on the motion of the viewer, where a parameter may be image resolution, video compression, frames per second rate, and the like.
The eyepiece may be used for a plurality of video applications, such as recording video taken through an integrated camera or as transmitted from an external video device, playing back video to the wearer through the eyepiece (by methods and systems as described herein), streaming live video either from an external source (e.g. a conference call, a live news feed, a video stream from another eyepiece) or from an integrated camera (e.g. from an integrated non-line-of-sight camera), and the like).

In regards to claim 17. Haddick discloses a non-transitory computer-readable storage medium that stores instructions that when executed by at least one processor cause the at least one processor to provide hyper-connectivity between a first user of a first electronic eyewear device and a second user of a second electronic eyewear device by performing operations (Haddick, Abstract and para [00465]) including:

-activating a social media application to enable the first user to identify and select at least the second user to participate in a communication session (Haddick, para [00744]; Reference discloses using other applications, such as photo identifying software from Flickr, one can then identify the particular nearby person, and one can then download information from social networking sites with information about the person. This information may include the person's name and the profile the person has made available on sites such as Facebook, Twitter, and the like.
This application may be used to refresh a user's memory of a person or to identify a nearby person, as well as to gather information about the person);

-enabling the first user to select permissions for at least the second user to access at least one of a camera, a microphone, or a display of the first electronic eyewear device (Haddick, para [00627]; Reference discloses there may be an authentication facility associated with accessing functionality of the eyepiece, such as access to displayed or projected content, access to restricted projected content, enabling functionality of the eyepiece itself (e.g. as through a login to access functionality of the eyepiece) either in whole or in part, and the like. Authentication allows for access to eyepiece functions i.e. camera, microphone, or display);

-establishing an always-on communication session between the first electronic eyewear device and the second electronic eyewear device (Haddick, para [00651]; Reference discloses the user of one eyepiece may be able to synchronize their view of a projected image or video with at least the view of a second user of an eyepiece or other video display device…Communications between the eyepieces may be direct, through an Internet network, through the cell-network, through a satellite network, and the like. Processing of position information contributing to the synchronization may be executed in a master processor in a single eyepiece, collectively amongst a group of eyepieces, in remote server system, and the like, or any combination thereof);

-enabling the first user to share permissions with the second user upon establishment of the communication session (Haddick, para [00627] and [00744]; Reference at [00744] discloses in another example, a person may be able to post that comment at the location of the place such that the comment is available when another person comes to that location.
In this way, a wearer may be able to access comments left by others when they come to the location (i.e. shared permission as content is left behind for second user via the authentications described in [00627]);

-selectively sharing at least one of an audio stream or a point of view (POV) video stream from the first electronic eyewear device with the second electronic eyewear device during the communication session (Haddick, para [00651]; Reference discloses the user of one eyepiece may be able to synchronize their view of a projected image or video with at least the view of a second user of an eyepiece or other video display device (i.e. selective sharing of at least one of audio or video between eyewear devices for two users)… For example, a group of concertgoers may synchronize their eyepieces with a feed from the concert producers such that visual effects or audio may be pushed to people with eyepieces by the concert producer, performers, other audience members, and the like. In an example, the performer may have a master eyepiece and may control sending content to audience members),

-wherein the second user selectively accesses at least one of the camera, the microphone, or the display of the first electronic eyewear device during the communication session in accordance with the selected permissions (Haddick, para [00651]; Reference discloses the user of one eyepiece may be able to synchronize their view of a projected image or video with at least the view of a second user of an eyepiece or other video display device (i.e. selective sharing of at least one of audio or video between eyewear devices for two users)… For example, a group of concertgoers may synchronize their eyepieces with a feed from the concert producers such that visual effects or audio may be pushed to people with eyepieces by the concert producer, performers, other audience members, and the like.
In an example, the performer may have a master eyepiece and may control sending content to audience members. The content of the performers or audience members being accessed is interpreted as secondary users selectively accessing the camera, microphone, or display during the communication session in accordance with the selected permissions).

In regards to claim 18. Haddick discloses the medium of claim 17. Haddick further discloses -further storing instructions that when executed by the at least one processor cause the at least one processor to enable the first user to select permissions for at least the second user by enabling the first user to select permissions for at least the second user to access at least one of the audio stream or the POV video stream from the electronic eyewear device of the first user (Haddick, para [00627]; Reference discloses there may be an authentication facility associated with accessing functionality of the eyepiece, such as access to displayed or projected content (i.e. audio or POV stream), access to restricted projected content, enabling functionality of the eyepiece itself (e.g. as through a login to access functionality of the eyepiece) either in whole or in part, and the like. Authentication allows for access to eyepiece functions i.e. camera, microphone, or display).

In regards to claim 19. Haddick discloses the medium of claim 18.
Haddick further discloses -further storing instructions that when executed by the at least one processor cause the at least one processor to enable the first user to select permissions by at least one of determining which POV video stream from at least two cameras of the first electronic eyewear device the second user may access and for how long or determining whether the audio stream may be accessed by the second user and for how long (Haddick, para [00627] and [00743]; Reference at [00627] discloses there may be an authentication facility associated with accessing functionality of the eyepiece, such as access to displayed or projected content (i.e. audio or POV stream), access to restricted projected content, enabling functionality of the eyepiece itself (e.g. as through a login to access functionality of the eyepiece) either in whole or in part, and the like (i.e. authentication allows for access to eyepiece functions i.e. camera, microphone, or display interpreted as permissions). Para [00743] discloses in an embodiment, when the wearer points the eyepiece in a target user's direction, they may indicate interest in the user if the eyepiece is pointed for a duration of time and/or a gesture, head, eye, or audio control is activated. The target user may receive an indication of interest on their phone or in their glasses. If the target user had marked the wearer as interesting but was waiting on the wearer to show interest first, an indication may immediately pop up in the eyepiece of the target user's interest).

In regards to claim 20. Haddick discloses the medium of claim 17.
Haddick further discloses -further storing instructions that when executed by the at least one processor cause the at least one processor to enable the first user to select permissions for at least the second user by giving the first user an option of receiving a notification to the display when at least the second user has opted to view the POV video stream, listen to the audio stream, or both during the always-on communication session (Haddick, para [00743]; Reference discloses in an embodiment, when the wearer points the eyepiece in a target user's direction, they may indicate interest in the user if the eyepiece is pointed for a duration of time and/or a gesture, head, eye, or audio control is activated. The target user may receive an indication of interest on their phone or in their glasses. If the target user had marked the wearer as interesting but was waiting on the wearer to show interest first, an indication may immediately pop up in the eyepiece of the target user's interest (i.e. providing a notification to the display when the second user has opted to view the video stream)).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: see the Notice of References Cited (PTO-892).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TERRELL M ROBINSON whose telephone number is (571)270-3526. The examiner can normally be reached 8am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, KENT CHANG, can be reached at 571-272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TERRELL M ROBINSON/
Primary Examiner, Art Unit 2614

Prosecution Timeline

Aug 07, 2024
Application Filed
Jan 24, 2026
Non-Final Rejection — §102, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602852
DYNAMIC GRAPHIC EDITING METHOD AND DEVICE
2y 5m to grant Granted Apr 14, 2026
Patent 12572196
MANAGING AN INDUSTRIAL ENVIRONMENT HAVING MACHINERY OPERATED BY REMOTE WORKERS AND PHYSICALLY PRESENT WORKERS
2y 5m to grant Granted Mar 10, 2026
Patent 12573124
PROGRESSIVE REAL-TIME DIFFUSION OF LAYERED CONTENT FILES WITH ANIMATED FEATURES
2y 5m to grant Granted Mar 10, 2026
Patent 12573111
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING SYSTEM, AND INFORMATION PROCESSING METHOD FOR APPROPRIATE DISPLAY OF PRESENTER AND PRESENTATION ITEM
2y 5m to grant Granted Mar 10, 2026
Patent 12561904
IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD FOR CORRECTING COMPUTER GRAPHICS IMAGE IN MIXED REALITY
2y 5m to grant Granted Feb 24, 2026
Based on the examiner's 5 most recent grants with similar technology.


Prosecution Projections

1-2
Expected OA Rounds
83%
Grant Probability
90%
With Interview (+7.5%)
2y 3m
Median Time to Grant
Low
PTA Risk
Based on 486 resolved cases by this examiner. Grant probability derived from career allow rate.
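The headline projection figures above are simple arithmetic on the examiner's career data shown elsewhere on this page (403 granted of 486 resolved, +7.5 point interview lift). A minimal Python sketch that reproduces the displayed numbers; treating the interview lift as additive and capping at 100% is an assumption about how the tool derives its figures, not a confirmed method:

```python
# Reproduce the displayed projections from the examiner's career data.
granted = 403          # applications allowed by this examiner
resolved = 486         # total resolved cases (allowed + abandoned)
interview_lift = 7.5   # percentage-point lift observed with interviews

# Career allow rate in percent (~82.9, displayed as 83%).
career_allow_rate = 100 * granted / resolved

# Assumed additive lift, capped at 100% (~90.4, displayed as 90%).
with_interview = min(career_allow_rate + interview_lift, 100)

print(round(career_allow_rate))  # 83
print(round(with_interview))     # 90
```

Note that these are career-wide averages; the statute-specific rates above (e.g. 11.7% for §102) suggest the per-rejection outlook can differ substantially from the headline number.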
