Prosecution Insights
Last updated: April 19, 2026
Application No. 18/597,233

Asynchronous Shared Virtual Experiences

Non-Final OA — §102, §103
Filed: Mar 06, 2024
Examiner: GRAY, RYAN M
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Niantic, Inc.
OA Round: 1 (Non-Final)
Grant Probability: 88% — Favorable
Predicted OA Rounds: 1-2
Predicted Time to Grant: 2y 2m
Grant Probability With Interview: 98%

Examiner Intelligence

Career Allow Rate: 88% — above average (589 granted / 672 resolved; +25.6% vs TC avg)
Interview Lift: +10.9% — moderate lift among resolved cases with interview
Typical Timeline: 2y 2m average prosecution; 18 applications currently pending
Career History: 690 total applications across all art units

Statute-Specific Performance

§101: 7.4% (-32.6% vs TC avg)
§103: 68.4% (+28.4% vs TC avg)
§102: 8.3% (-31.7% vs TC avg)
§112: 3.5% (-36.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 672 resolved cases

Office Action

§102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1-5, 7 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Skidmore (US 2019/0244431).

Claim 1

Examiner’s Interpretation: The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art.

Claim Mapping: Skidmore discloses a computer-implemented method comprising: receiving an item of visual media content from a first client device, the visual media content depicting one or more real-world objects (Fig. 5; “FIG. 5 depicts a selection view 500 of a real world environment. For example, the user can see such a selection view 500 by putting on a head mounted display (HMD) with a camera and providing input to place the device into selection mode.
Once the user has entered the selection mode, the user is able to provide input to select one or more of the real world objects to be included in the user experience”); receiving user input selecting an object of the one or more real-world objects (Skidmore, ¶ 59: “As a concrete example, an image of a real world view (i.e., a real world image) may include within its field of view a building with a typical rectangular shape. The building has a particular GPS location. More specifically, each of the four corners of the building that touch the ground has their own GPS coordinates. In a corresponding virtual world, a virtual object in the form of a rectangular prism exists at coordinates which align with the real world GPS coordinates. The virtual object (in this case the rectangular prism) if displayed in an augmented reality would align with the real building in any augmented view so that the two objects—the real world object and the virtual object, align, one superimposed on the other”); generating one or more virtual representations of the selected object using the visual media content (Skidmore, ¶ 59: “As a concrete example, an image of a real world view (i.e., a real world image) may include within its field of view a building with a typical rectangular shape. The building has a particular GPS location. More specifically, each of the four corners of the building that touch the ground has their own GPS coordinates. In a corresponding virtual world, a virtual object in the form of a rectangular prism exists at coordinates which align with the real world GPS coordinates. 
The virtual object (in this case the rectangular prism) if displayed in an augmented reality would align with the real building in any augmented view so that the two objects—the real world object and the virtual object, align, one superimposed on the other”); and providing at least one virtual representation of the selected object for display at the first client device (Skidmore, ¶ 59: “As a concrete example, an image of a real world view (i.e., a real world image) may include within its field of view a building with a typical rectangular shape. The building has a particular GPS location. More specifically, each of the four corners of the building that touch the ground has their own GPS coordinates. In a corresponding virtual world, a virtual object in the form of a rectangular prism exists at coordinates which align with the real world GPS coordinates. The virtual object (in this case the rectangular prism) if displayed in an augmented reality would align with the real building in any augmented view so that the two objects—the real world object and the virtual object, align, one superimposed on the other”).

Claim 2

Skidmore discloses further comprising: providing a plurality of virtual representations of the selected object for display at the first client device (Skidmore, ¶ 42: “The modification may include the selection and addition of specific pre-determined AR content. The AR content may be one or more of visual, audial, and tactile. The modification may include the selection and addition of a virtual marker M2 or multiple additional virtual markers. In the latter case, the virtual marker M2 has the capability of triggering some further downstream AR engine to perform some other modification.”); receiving user input comprising a selection of a virtual representation of the plurality of virtual representations (Skidmore, ¶ 42: “The modification may include the selection and addition of specific pre-determined AR content.
The AR content may be one or more of visual, audial, and tactile. The modification may include the selection and addition of a virtual marker M2 or multiple additional virtual markers. In the latter case, the virtual marker M2 has the capability of triggering some further downstream AR engine to perform some other modification.”); and providing the selected virtual representation to the first client device (Skidmore, ¶ 42: “The modification may include the selection and addition of specific pre-determined AR content. The AR content may be one or more of visual, audial, and tactile. The modification may include the selection and addition of a virtual marker M2 or multiple additional virtual markers. In the latter case, the virtual marker M2 has the capability of triggering some further downstream AR engine to perform some other modification.”).

Claim 3

Skidmore discloses further comprising: receiving user input comprising a selection of one or more other users of a location-based application (e.g. as part of the adding of virtual markers (See claim 2 above); ¶ 41: “The real world content may include metadata like GPS coordinates, time of capture information (e.g., time stamps), perspective data (e.g., orientation, position, field of view),”); and providing the at least one virtual representation of the selected object for display at a client device of each of the selected one or more users (Skidmore, ¶ 42: “The modification may include the selection and addition of specific pre-determined AR content. The AR content may be one or more of visual, audial, and tactile. The modification may include the selection and addition of a virtual marker M2 or multiple additional virtual markers. In the latter case, the virtual marker M2 has the capability of triggering some further downstream AR engine to perform some other modification.”).
Claim 4

Skidmore discloses further comprising: receiving user input comprising an identification of a location in a virtual world paralleling the real world; and placing the at least one virtual representation of the selected object at the identified location in the virtual world (Skidmore, ¶ 59: “As a concrete example, an image of a real world view (i.e., a real world image) may include within its field of view a building with a typical rectangular shape. The building has a particular GPS location. More specifically, each of the four corners of the building that touch the ground has their own GPS coordinates. In a corresponding virtual world, a virtual object in the form of a rectangular prism exists at coordinates which align with the real world GPS coordinates. The virtual object (in this case the rectangular prism) if displayed in an augmented reality would align with the real building in any augmented view so that the two objects—the real world object and the virtual object, align, one superimposed on the other”).

Claim 5

Skidmore discloses wherein generating the one or more virtual representations of the selected object comprises: generating a two-dimensional (2D) or three-dimensional (3D) model of the selected object using the visual media content; and instantiating the one or more virtual representations of the object using the 2D or 3D model (Skidmore, ¶ 59: “As a concrete example, an image of a real world view (i.e., a real world image) may include within its field of view a building with a typical rectangular shape. The building has a particular GPS location. More specifically, each of the four corners of the building that touch the ground has their own GPS coordinates. In a corresponding virtual world, a virtual object in the form of a rectangular prism exists at coordinates which align with the real world GPS coordinates.
The virtual object (in this case the rectangular prism) if displayed in an augmented reality would align with the real building in any augmented view so that the two objects—the real world object and the virtual object, align, one superimposed on the other”).

Claim 7

Skidmore discloses wherein the one or more virtual representations of the selected object are stored in association with a location at which the item of visual media content was captured (Skidmore, ¶ 15: “captures a video showing storefronts of an empty stripmall. The original video containing only real world content is processed by a first AR engine. This first engine modifies the video by adding a virtual marker above each store front.”)

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Use of indicates a limitation is not explicitly disclosed by the reference alone.

Claim(s) 8-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Skidmore (US 2019/0244431) in view of Heizer (US Patent 9,645,221).

Claim 8

Skidmore discloses a computer-implemented method comprising: receiving, from a first client device, media content (image, video, augmentations), an indication of a first user of a location-based application, and an indication of a geographic location (e.g.
GPS coordinates and the like) depicted in the media content (Skidmore, ¶ 59: “As a concrete example, an image of a real world view (i.e., a real world image) may include within its field of view a building with a typical rectangular shape. The building has a particular GPS location. More specifically, each of the four corners of the building that touch the ground has their own GPS coordinates. In a corresponding virtual world, a virtual object in the form of a rectangular prism exists at coordinates which align with the real world GPS coordinates. The virtual object (in this case the rectangular prism) if displayed in an augmented reality would align with the real building in any augmented view so that the two objects—the real world object and the virtual object, align, one superimposed on the other”). storing the media content in association with the indication of the first user and the indication of the geographic location depicted in the media content (¶ 71: “Associations among virtual markers and the responses they trigger (e.g., the augmentations they cause to be applied to real world content) may be stored”); receiving an indication of a current location of a second client device associated with a second user of the location-based application (Skidmore, ¶ 64: “When the AR engine detects that the user has visibility of these coordinates in the real world, it accesses the data geocoded with the matching coordinates in the virtual model. 
The engine then takes the virtual content and uses it to generate an augmentation superimposed on or otherwise incorporated with a user's real world view at coordinates 40.6892° N, 74.0445° W.”); responsive to the current location being proximate to the geographic location, providing the media content to the second client device for presentation to the second user (Skidmore, ¶ 64: “When the AR engine detects that the user has visibility of these coordinates in the real world, it accesses the data geocoded with the matching coordinates in the virtual model. The engine then takes the virtual content and uses it to generate an augmentation superimposed on or otherwise incorporated with a user's real world view at coordinates 40.6892° N, 74.0445° W.”); and Skidmore does not explicitly disclose, but Heizer discloses providing, in response to a request from the second client device indicating the second user interacted with the media content, a communication channel between the second client device and the first client device (Heizer, “In an embodiment, the user profile interface 1700 includes an input icon 1724 that triggers the electronic device 200 to send a request to connect or “start a conversation,” as described above. The user interface of the electronic device 200 in one example provides a chat function. The chat function may be specific to a booie or to users in proximity to each other (e.g., within 100 meters, in a same town). For example, a user may restrict incoming chat messages to those with similar booies by using permissions or filters.”) Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to open a communication channel. One of ordinary skill in the art would have motivation to facilitate interaction between users. One of ordinary skill in the art would have had a reasonable expectation of success because both references are directed to augmented reality.
Claim 9

Examiner’s Interpretation: Display parameters: covers subject matter such as included in dependent claim 10 (e.g. permissions, specific users, etc.)

Claim Mapping: Skidmore does not explicitly disclose, but Heizer discloses further comprising: receiving, from the first client device, one or more content display parameters associated with the media content; and storing the content display parameters in association with the media content, wherein the media content is provided for display to the second client device subject to the one or more content display parameters (Heizer, “In an embodiment, the user profile interface 1700 includes an input icon 1724 that triggers the electronic device 200 to send a request to connect or “start a conversation,” as described above. The user interface of the electronic device 200 in one example provides a chat function. The chat function may be specific to a booie or to users in proximity to each other (e.g., within 100 meters, in a same town). For example, a user may restrict incoming chat messages to those with similar booies by using permissions or filters.”) Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to open a communication channel based on permissions and the like. One of ordinary skill in the art would have motivation to facilitate interaction between users. One of ordinary skill in the art would have had a reasonable expectation of success because both references are directed to augmented reality.
Claim 10

Skidmore does not explicitly disclose, but Heizer discloses wherein the one or more content display parameters include one or more of: an identification of one or more other users of the location-based application to whom the media content may be displayed, an indication that the media content may be displayed to any other users of the location-based application, and an expiration date or other time-based restriction for display of the media content (Heizer, “In an embodiment, the user profile interface 1700 includes an input icon 1724 that triggers the electronic device 200 to send a request to connect or “start a conversation,” as described above. The user interface of the electronic device 200 in one example provides a chat function. The chat function may be specific to a booie or to users in proximity to each other (e.g., within 100 meters, in a same town). For example, a user may restrict incoming chat messages to those with similar booies by using permissions or filters.”) Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to open a communication channel based on permissions and the like. One of ordinary skill in the art would have motivation to facilitate interaction between users. One of ordinary skill in the art would have had a reasonable expectation of success because both references are directed to augmented reality.
Claim 11

Skidmore does not disclose, but Heizer discloses further comprising: receiving, from the first client device, an identification of the second user as a recipient of the media content and a message from the first user to the second user; and responsive to determining that the second client device is proximate to the geographic location depicted in the media content, providing the media content and the message to the second client device for presentation to the second user (Heizer, “In an embodiment, the user profile interface 1700 includes an input icon 1724 that triggers the electronic device 200 to send a request to connect or “start a conversation,” as described above. The user interface of the electronic device 200 in one example provides a chat function. The chat function may be specific to a booie or to users in proximity to each other (e.g., within 100 meters, in a same town). For example, a user may restrict incoming chat messages to those with similar booies by using permissions or filters.”) Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to open a communication channel based on permissions and the like. One of ordinary skill in the art would have motivation to facilitate interaction between users. One of ordinary skill in the art would have had a reasonable expectation of success because both references are directed to augmented reality.

Claim 12

Skidmore does not disclose, but Heizer discloses further comprising: providing for display on the second client device a map interface including indications of media content captured at geographic locations within a threshold distance of the second client device; and responsive to receiving user input comprising a selection of a displayed indication of media content, providing the media content to the second client device for presentation to the second user (Heizer, Fig. 4: “Turning to FIG.
4, the electronic device 200 in one example provides a user interface 400 on the display 210. The user interface 400 allows interaction with a visible booie to view additional user information, as described above. The user interface 400 may include a two-dimensional display or a three-dimensional display, such as a map, captured image, or virtual environment. The 2D/3D map function allows a user to scan their surroundings using the electronic device 200. Locations of other users are displayed with respective booies around them. In one example, a user is displayed as an anchor with that user's booie attached to the anchor. The user interface 400 also allows the user to select filters to show only desired users or booie.”) Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to use a map interface. One of ordinary skill in the art would have motivation to facilitate interaction between users. One of ordinary skill in the art would have had a reasonable expectation of success because both references are directed to augmented reality.

Claim 13

Skidmore does not disclose, but Heizer discloses wherein the second user provides input to initiate a communication with the first user and wherein the communication channel is opened responsive to the first user providing input to acknowledge the communication (Heizer, “In an embodiment, the user profile interface 1700 includes an input icon 1724 that triggers the electronic device 200 to send a request to connect or “start a conversation,” as described above. The user interface of the electronic device 200 in one example provides a chat function. The chat function may be specific to a booie or to users in proximity to each other (e.g., within 100 meters, in a same town).
For example, a user may restrict incoming chat messages to those with similar booies by using permissions or filters.”) Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to consider a permission request and approval. One of ordinary skill in the art would have motivation to facilitate interaction between users. One of ordinary skill in the art would have had a reasonable expectation of success because both references are directed to augmented reality.

Claim 14

Skidmore does not disclose, but Heizer discloses wherein the communication channel remains open until terminated by the first user or the second user (e.g. a change of permission would correspond to terminating chat; “For example, a user may restrict incoming chat messages to those with similar booies by using permissions or filters.”), until a threshold amount of time has passed since a most recent communication between the first user and the second user, or until the second client device is no longer proximate to the geographic location (Heizer, “The chat function may be specific to a booie or to users in proximity to each other (e.g., within 100 meters, in a same town).”) Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to consider termination conditions such as when to end the chat. One of ordinary skill in the art would have motivation to facilitate interaction between users. One of ordinary skill in the art would have had a reasonable expectation of success because both references are directed to augmented reality.
Claim 15 Skidmore discloses a computer-implemented method comprising: receiving, from a first client device of a first user of a location-based application, media content, an indication of a geographic location associated with the media content (Skidmore, ¶ 59: “As a concrete example, an image of a real world view (i.e., a real world image) may include within its field of view a building with a typical rectangular shape. The building has a particular GPS location. More specifically, each of the four corners of the building that touch the ground has their own GPS coordinates. In a corresponding virtual world, a virtual object in the form of a rectangular prism exists at coordinates which align with the real world GPS coordinates. The virtual object (in this case the rectangular prism) if displayed in an augmented reality would align with the real building in any augmented view so that the two objects—the real world object and the virtual object, align, one superimposed on the other”), storing the media content in association with the first user and the geographic location associated with the media content (¶ 71: “Associations among virtual markers and the responses they trigger (e.g., the augmentations they cause to be applied to real world content) may be stored”); Skidmore does not disclose, but Heizer discloses indication of a second user intended as a recipient of the media content (Heizer, “The identifiers in one example are permission-based augmented reality identifiers. 
The communication system 100 allows the user to select which identifiers are displayed or made available to other users.”) receiving an indication of a current location of a second client device associated with the second user (Heizer, “The chat function may be specific to a booie or to users in proximity to each other (e.g., within 100 meters, in a same town).”); and responsive to the current location being proximate to the geographic location, providing the media content to the second client device for presentation to the second user (Heizer, Fig. 4: “Turning to FIG. 4, the electronic device 200 in one example provides a user interface 400 on the display 210. The user interface 400 allows interaction with a visible booie to view additional user information, as described above. The user interface 400 may include a two-dimensional display or a three-dimensional display, such as a map, captured image, or virtual environment. The 2D/3D map function allows a user to scan their surroundings using the electronic device 200. Locations of other users are displayed with respective booies around them. In one example, a user is displayed as an anchor with that user's booie attached to the anchor. The user interface 400 also allows the user to select filters to show only desired users or booie.”) Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to consider user-associated location-based content. One of ordinary skill in the art would have motivation to facilitate interaction between users based on filters, permissions, and the like. One of ordinary skill in the art would have had a reasonable expectation of success because both references are directed to augmented reality.
Claim 16

Skidmore does not disclose, but Heizer discloses wherein the media content is a message from the first user to the second user (Heizer, “The electronic device 200 may send identifiers upon receipt of a message (e.g., a discovery request from another electronic device), upon a status change or event, or at pre-selected times, such as a time received in a message or on a schedule for the electronic devices 200 of the communication system 100.”) Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to consider user messaging. One of ordinary skill in the art would have motivation to facilitate interaction between users based on filters, permissions, and the like. One of ordinary skill in the art would have had a reasonable expectation of success because both references are directed to augmented reality.

Claim 17

Examiner’s Interpretation: Display parameters: covers subject matter such as included in dependent claim 10 (e.g. permissions, specific users, etc.)

Claim Mapping: Skidmore does not disclose, but Heizer discloses further comprising: receiving, from the first client device, one or more content display parameters associated with the media content; and storing the content display parameters in association with the media content, wherein the media content is provided for display to the second client device subject to the one or more content display parameters (Heizer, “In an embodiment, the user profile interface 1700 includes an input icon 1724 that triggers the electronic device 200 to send a request to connect or “start a conversation,” as described above. The user interface of the electronic device 200 in one example provides a chat function. The chat function may be specific to a booie or to users in proximity to each other (e.g., within 100 meters, in a same town).
For example, a user may restrict incoming chat messages to those with similar booies by using permissions or filters.”) Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to open a communication channel based on permissions and the like. One of ordinary skill in the art would have motivation to facilitate interaction between users. One of ordinary skill in the art would have had a reasonable expectation of success because both references are directed to augmented reality.

Claim 18

Skidmore does not disclose, but Heizer discloses further comprising: providing, in response to a request from the second client device indicating the second user interacted with the media content, a communication channel between the second client device and the first client device (Heizer, “In an embodiment, the user profile interface 1700 includes an input icon 1724 that triggers the electronic device 200 to send a request to connect or “start a conversation,” as described above. The user interface of the electronic device 200 in one example provides a chat function. The chat function may be specific to a booie or to users in proximity to each other (e.g., within 100 meters, in a same town). For example, a user may restrict incoming chat messages to those with similar booies by using permissions or filters.”) Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to open a communication channel based on permissions and the like. One of ordinary skill in the art would have motivation to facilitate interaction between users. One of ordinary skill in the art would have had a reasonable expectation of success because both references are directed to augmented reality.
Claim 19

Skidmore does not disclose, but Heizer discloses wherein the second user provides input to initiate a communication with the first user and wherein the communication channel is opened responsive to the first user providing input to acknowledge the communication (Heizer, “In an embodiment, the user profile interface 1700 includes an input icon 1724 that triggers the electronic device 200 to send a request to connect or “start a conversation,” as described above. The user interface of the electronic device 200 in one example provides a chat function. The chat function may be specific to a booie or to users in proximity to each other (e.g., within 100 meters, in a same town). For example, a user may restrict incoming chat messages to those with similar booies by using permissions or filters.”) Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to open a communication channel based on permissions and the like. One of ordinary skill in the art would have motivation to facilitate interaction between users. One of ordinary skill in the art would have had a reasonable expectation of success because both references are directed to augmented reality.

Claim 20

Skidmore does not disclose, but Heizer discloses wherein the communication channel remains open until terminated by the first user or the second user (e.g.
a change of permission would correspond to terminating chat; “For example, a user may restrict incoming chat messages to those with similar booies by using permissions or filters.”), until a threshold amount of time has passed since a most recent communication between the first user and the second user, or until the second client device is no longer proximate to the geographic location (Heizer, “The chat function may be specific to a booie or to users in proximity to each other (e.g., within 100 meters, in a same town).”) Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to consider termination conditions. One of ordinary skill in the art would have motivation to facilitate interaction between users. One of ordinary skill in the art would have had a reasonable expectation of success because both references are directed to augmented reality.

Claim(s) 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Skidmore (US 2019/0244431) in view of Smith (US Patent 10,474,336).

Claim 6

Skidmore does not disclose, but Smith discloses further comprising: instructing a user of the first client device to capture one or more additional photos or videos of the selected object; and generating the 2D or 3D model using the one or more additional photos or videos (Smith, “3D model of the objects in the room and then identifies the corresponding real world objects to be included based on the identified boundaries. The user may be prompted to repeat this process from different viewing perspectives, i.e., different locations”) Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to capture from different angles. One of ordinary skill in the art would have motivation to capture all sides of the virtual content.
One of ordinary skill in the art would have had a reasonable expectation of success because Smith is also directed to augmented reality object creation. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to RYAN M GRAY whose telephone number is (571)272-4582. The examiner can normally be reached on Monday through Friday, 9:00am-5:30pm (EST). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung can be reached on (571)272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /RYAN M GRAY/Primary Examiner, Art Unit 2611

Prosecution Timeline

Mar 06, 2024
Application Filed
Mar 06, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597216
ARTIFICIAL INTELLIGENCE VIRTUAL MAKEUP METHOD AND DEVICE USING MULTI-ANGLE IMAGE RECOGNITION
2y 5m to grant Granted Apr 07, 2026
Patent 12586252
METHOD FOR ENCODING THREE-DIMENSIONAL VOLUMETRIC DATA
2y 5m to grant Granted Mar 24, 2026
Patent 12572892
SYSTEMS AND METHODS FOR VISUALIZATION OF UTILITY LINES
2y 5m to grant Granted Mar 10, 2026
Patent 12561928
SYSTEMS AND METHODS FOR CALCULATING OPTICAL MEASUREMENTS AND RENDERING RESULTS
2y 5m to grant Granted Feb 24, 2026
Patent 12542946
REMOTE PRESENTATION WITH AUGMENTED REALITY CONTENT SYNCHRONIZED WITH SEPARATELY DISPLAYED VIDEO CONTENT
2y 5m to grant Granted Feb 03, 2026
Study what changed in these cases to get past this examiner. Based on the examiner's 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
88%
Grant Probability
98%
With Interview (+10.9%)
2y 2m
Median Time to Grant
Low
PTA Risk
Based on 672 resolved cases by this examiner. Grant probability derived from career allow rate.
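The projected figures above follow directly from the examiner statistics: a minimal sketch of the arithmetic, assuming the "With Interview" figure is simply the career allow rate plus the reported interview lift (the page does not state its exact formula, so that combination is an assumption):

```python
# Examiner career data shown above.
granted = 589           # applications allowed
resolved = 672          # total resolved cases
interview_lift = 0.109  # reported lift when an interview is held (+10.9%)

# Career allow rate: 589 / 672 ≈ 0.876, displayed as 88%.
career_allow_rate = granted / resolved

# Assumed combination: allow rate plus interview lift, capped at 100%.
# Yields ≈ 0.985, consistent with the displayed 98%.
with_interview = min(career_allow_rate + interview_lift, 1.0)

print(f"Career allow rate: {career_allow_rate:.1%}")
print(f"With interview:    {with_interview:.1%}")
```

The cap at 1.0 matters only for examiners whose base rate plus lift would exceed 100%; here the sum stays below it.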