Prosecution Insights
Last updated: April 19, 2026
Application No. 18/785,875

VR ENVIRONMENT FOR ACCIDENT RECONSTRUCTION

Non-Final OA: §103, §DP
Filed
Jul 26, 2024
Examiner
GRAY, RYAN M
Art Unit
2611
Tech Center
2600 — Communications
Assignee
State Farm Mutual Automobile Insurance Company
OA Round
1 (Non-Final)
Grant Probability: 88% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 2y 2m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 88% (above average; 589 granted of 672 resolved; +25.6% vs TC avg)
Interview Lift: +10.9% (moderate) for resolved cases with interview
Typical Timeline: 2y 2m avg prosecution; 18 currently pending
Career History: 690 total applications across all art units
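
For readers who want to sanity-check the panel above, here is a minimal sketch of the arithmetic, assuming the displayed deltas are simple percentage-point differences; the Tech Center average used below is back-solved from the displayed +25.6% figure, not taken from any published source:

```python
# Hypothetical reconstruction of the examiner stats shown above.
# Assumption: "vs TC avg" deltas are percentage-point differences, and the
# TC average (~62%) is back-solved from "+25.6%", not published data.

granted, resolved = 589, 672          # career totals shown above
allow_rate = granted / resolved       # 0.876 -> displayed as "88%"
tc_avg = allow_rate - 0.256           # ~0.62, implied by "+25.6% vs TC avg"

print(f"Career allow rate: {allow_rate:.1%}")            # 87.6%
print(f"Delta vs TC avg:   {allow_rate - tc_avg:+.1%}")  # +25.6%
```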

Statute-Specific Performance

§101: 7.4% (-32.6% vs TC avg)
§103: 68.4% (+28.4% vs TC avg)
§102: 8.3% (-31.7% vs TC avg)
§112: 3.5% (-36.5% vs TC avg)
Deltas are relative to the Tech Center average estimate • Based on career data from 672 resolved cases

Office Action

§103 §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

Claims 1-5, 7-12, and 14-19 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 12,073,010 in view of Yang (US 2019/0043351).

Regarding Claims 1, 8, 15: Although the claims at issue are not identical, they are not patentably distinct from each other because, while '875 differs from '010 in the use of real-time VR feeds and additionally includes the limitation "in response to obtaining the indication of the event," these features are disclosed by Yang:

Yang, ¶ 426: "Once an incident has been detected, that will trigger local data collection by the detecting device (block 5604) along with nearby data collection by any surrounding devices (block 5608), and the incident will also be given a name (block 5610)"

Yang, ¶ 137: "For example, vision analytics API 860 may perform inline (e.g. real-time)"

Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to substitute real-time feeds and incident responsiveness in the context of '875. The functions are disclosed in Yang as usable in combination or as alternatives (see Fig. 9 below), and one of ordinary skill in the art would have recognized that each may be substituted depending on the desired response. One of ordinary skill in the art could have made the substitution, and the results would have been predictable, because Yang covers the same field of endeavor and suggests different applications depending on context (such as live incident response in emergencies).

[Figure: Yang, Fig. 9 (media_image1.png)]

Regarding Claims 7, 14: Although the claims at issue are not identical, they are not patentably distinct from each other because, while '875 differs from '010 in the use of light detection and ranging (LIDAR), these features are disclosed by Yang:

Yang, ¶ 51: "Visual sensors 120 may include any type of visual or optical sensors, such as cameras, ultraviolet (UV) sensors, laser rangefinders (e.g., light detection and ranging (LIDAR)), infrared (IR) sensors, electro-optical/infrared (EO/IR) sensors, and so forth."

Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to substitute LIDAR-based light detection in the context of '875. The functions are disclosed in Yang as usable in combination or as alternatives (see ¶ 51), and one of ordinary skill in the art would have recognized that each may be substituted depending on the desired response. One of ordinary skill in the art could have made the substitution, and the results would have been predictable, because Yang covers the same field of endeavor and suggests different applications depending on context (such as live incident response in emergencies).

Claims 6, 13, and 20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 12,073,010 in view of Yang (US 2019/0043351) and Ackerson (US 2021/0133929).

Although the claims at issue are not identical, they are not patentably distinct from each other because, while '875 differs from '010 in the detection of hail events, these features are disclosed by Ackerson:

Ackerson, ¶ 193: "In other variations, a first system 1A01 is being used by a client that is desirous of capturing what might be considered a 'local' vs. 'global' scene such as their car that received damage in a recent hail storm, or perhaps their house that was damaged in a storm or simply is being readied for sale"

Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to generate a VR feed based on collected event data as claimed. Ackerson suggests application to the same field of endeavor as Yang (Ackerson, ¶ 193, quoted above). Therefore, one of ordinary skill in the art would recognize that Yang could be improved by incorporating enhanced visual reconstruction techniques from Ackerson.

Table 1 below shows an example claim mapping between the present application and U.S. Patent No. 12,073,010. Table 2 below lists corresponding claims between the present application and U.S. Patent No. 12,073,010.

Table 1. Example Claim Mapping

18/785,875, claim 1: A computer-implemented method for generating a virtual reality (VR) feed corresponding to an event, the method comprising: obtaining, via one or more processors, an indication of an event occurring in a geographic area, wherein the event is at least one of: a vehicle collision, a crime, a weather event, or a natural disaster; in response to obtaining the indication of the event, generating, via the one or more processors, a real-time VR feed of the geographic area at a time of the event based upon real-time condition data from the geographic area, the real-time VR feed including a virtual representation of the geographic area at the time of the event to reflect the real-time conditions at the geographic area; and providing, via the one or more processors, the real-time VR feed for presentation to a user within a virtual reality display for the user to experience the geographic area where the event occurred within a VR environment.

12,073,010, claim 1: A computer-implemented method for generating a virtual reality (VR) feed corresponding to an event, the method comprising: obtaining, via one or more processors, an indication of an event occurring in a geographic area, wherein the event is at least one of: a vehicle collision, a crime, a weather event, or a natural disaster; (See Yang Rationale Above) generating, via the one or more processors, a VR feed of the geographic area at a time of the event based upon real-time condition data from the geographic area, the VR feed including a virtual representation of the geographic area at the time of the event to reflect the real-time conditions at the geographic area; and providing, via the one or more processors, the generated VR feed for presentation to a user within a virtual reality display for the user to experience the geographic area where the event occurred within a VR environment; wherein the indication is a first indication, the event is a first event, and the method further includes: obtaining, via the one or more processors, a second indication of a second event occurring in the geographic area; and interrupting, via the one or more processors, the providing of the generated VR feed by providing an option to the user to experience the second event.

18/785,875, claim 8: A computer system configured to generate a virtual reality (VR) feed corresponding to an event, the computer system comprising one or more local or remote processors, transceivers, and/or sensors configured to: obtain an indication of an event occurring in a geographic area, wherein the event is at least one of: a vehicle collision, a crime, a weather event, or a natural disaster; in response to obtaining the indication of the event, generate a real-time VR feed of the geographic area at a time of the event based upon real-time condition data from the geographic area, the real-time VR feed including a virtual representation of the geographic area at the time of the event to reflect the real-time conditions at the geographic area; and provide the real-time VR feed for presentation to a user within a virtual reality display for the user to experience the geographic area where the event occurred within a VR environment.

12,073,010, claim 8: A computer system configured to generate a virtual reality (VR) feed corresponding to an event, the computer system comprising one or more local or remote processors, transceivers, and/or sensors configured to: obtain an indication of an event occurring in a geographic area, wherein the event is at least one of: a vehicle collision, a crime, a weather event, or a natural disaster; (See Yang Rationale Above) generate a VR feed of the geographic area at a time of the event based upon real-time condition data from the geographic area, the VR feed including a virtual representation of the geographic area at the time of the event to reflect the real-time conditions at the geographic area; and provide the generated VR feed for presentation to a user within a virtual reality display for the user to experience the geographic area where the event occurred within a VR environment; wherein the indication is a first indication, the event is a first event, and the one or more local or remote processors, transceivers, and/or sensors are further configured to: obtain a second indication of a second event occurring in the geographic area; and interrupt the providing of the generated VR feed by providing an option to the user to experience the second event.

18/785,875, claim 15: A computer device for generating a virtual reality (VR) feed corresponding to an event, the computer device comprising: one or more processors; and one or more non-transitory memories coupled to the one or more processors; the one or more non-transitory memories including computer executable instructions stored therein that, when executed by the one or more processors, cause the one or more processors to: obtain an indication of an event occurring in a geographic area, wherein the event is at least one of: a vehicle collision, a crime, a weather event, or a natural disaster; in response to obtaining the indication of the event, generate a real-time VR feed of the geographic area at a time of the event based upon real-time condition data from the geographic area, the real-time VR feed including a virtual representation of the geographic area at the time of the event to reflect the real-time conditions at the geographic area; and provide the real-time VR feed for presentation to a user within a virtual reality display for the user to experience the geographic area where the event occurred within a VR environment.

12,073,010, claim 13: A computer device for generating a virtual reality (VR) feed corresponding to an event, the computer device comprising: one or more processors; and one or more non-transitory memories coupled to the one or more processors; the one or more non-transitory memories including computer executable instructions stored therein that, when executed by the one or more processors, cause the one or more processors to: obtain an indication of an event occurring in a geographic area, wherein the event is at least one of: a vehicle collision, a crime, a weather event, or a natural disaster; (See Yang Rationale Above) generate a VR feed of the geographic area at a time of the event based upon real-time condition data from the geographic area, the VR feed including a virtual representation of the geographic area at the time of the event to reflect the real-time conditions at the geographic area, wherein the generate the VR feed further includes identifying a representation of an individual in the virtual representation of the geographic area, and replacing the representation of the individual with an avatar; and provide the generated VR feed for presentation to a user within a virtual reality display for the user to experience the geographic area where the event occurred within a VR environment; wherein the one or more memories including computer executable instructions stored therein that, when executed by the one or more processors, further cause the one or more processors to send, to an emergency response entity, the virtual representation of the geographic area including the representation of the individual.

Table 2. Corresponding Claims

18/785,875 claim → 12,073,010 claim
1 → 1
2 → 2
3 → 3
4 → 4
5 → 7
6 → see Ackerson rationale above
7 → see Yang rationale above
8 → 8
9 → 9
10 → 10
11 → 11
12 → 12
13 → see Ackerson rationale above
14 → see Yang rationale above
15 → 13
16 → 14
17 → 15
18 → 17
19 → 18
20 → see Ackerson rationale above

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Use of indicates a limitation is not explicitly disclosed by the reference alone.

Claims 1-4, 6-11, 13-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Yang (US 2019/0043351) in view of Ackerson (US 2021/0133929).

Claim 1

Yang discloses a computer-implemented method for generating a virtual reality (VR) feed corresponding to an event, the method comprising:

obtaining, via one or more processors, an indication of an event occurring in a geographic area (Yang, ¶¶ 88, 421: "an emergency vehicle 424 may be alerted by an automated teller machine 420 that a burglary is in progress…In particular, the ubiquitous witness provides real-time data collection using visual fog computing. For example, when an interesting event (e.g., anomalous, unusual, rare) occurs, a snapshot of local data is locked (e.g., securely stored) by the subject device that detected the event, thus preventing the data from being overwritten. Further, the subject that detected the event notifies other relevant subjects (e.g., nearby subjects in many cases) in real time to lock their respective counterpart data snapshots…"),

wherein the event is at least one of: a vehicle collision (Yang, ¶ 443: "collision may be detected and recorded as an anomalous incident (e.g., with details of time and location) by any of the vehicles involved in the collision"), a crime (Yang, ¶ 443: "preventing crime and vandalism…a neighbor's camera may capture a much clearer view of an incident in or around a nearby house"), a weather event (Yang, ¶ 427: "weather, and/or any other contextual or circumstantial information associated with the incident, among other example"), or a natural disaster;

in response to obtaining the indication of the event (Yang, ¶ 426: "Once an incident has been detected, that will trigger local data collection by the detecting device (block 5604) along with nearby data collection by any surrounding devices (block 5608), and the incident will also be given a name (block 5610)"),

generating, via the one or more processors, a real-time VR feed of the geographic area at a time of the event based upon real-time condition data from the geographic area (Yang, ¶ 212: "In some embodiments, storage architecture 1800 may store visual metadata as a property graph to identify relationships between visual data, such as images that contain the same object or person, images taken in the same location, and so forth."),

the real-time VR feed including a virtual representation of the geographic area at the time of the event to reflect the real-time conditions at the geographic area (Yang, ¶¶ 442-43: "FIG. 57 illustrates an example use case 5700 for automotive anomaly detection and event reconstruction. The illustrated use case 5700 includes a plurality of cars 5710 driving on a road, along with multiple roadside units (RSUs) 5720 on the side of the road (e.g., traffic lights, lampposts, road signs, and/or other roadside infrastructure). The cars 5710 and RSUs 5720 are each equipped with a collection of sensors and/or cameras for capturing data associated with their respective operating environments, along with communication interface(s) to facilitate communication with each other and/or other networks…Moreover, the illustrated example portrays a snapshot in time and space of an automotive anomaly that involves a collision between two vehicles."); and

providing, via the one or more processors, the real-time VR feed (Yang, ¶ 137: "For example, vision analytics API 860 may perform inline (e.g. real-time)") for presentation to a user within a virtual reality display for the user to experience the geographic area where the event occurred within a VR environment (Yang, ¶ 414: "As another example, the described embodiments could be used for video playback, such as user-centric video rendering, focused replays, and so forth. For example, user-centric video rendering could be used to perform focused rendering on 360-degree video by analyzing what the user is focusing on, and performing no or low-resolution processing on portions of the video that are outside the focus area of the user (e.g., for virtual-reality (VR) and/or augmented-reality (AR) applications).").

Yang does not explicitly disclose, but Ackerson makes obvious: generating a virtual reality feed (Ackerson, ¶¶ 5, 451: "present inventors have proposed significant advancements in scene reconstruction…improvements in the means for experiencing types of data include higher resolution and better performing 2D and 3D displays, autostereoscopic displays, holographic display and extended reality devices such as virtual reality (VR) headsets and augmented reality (AR) headsets and methods…FIG. 78 shows an example case 7800 of subscene extraction for purposes of image generation.").

Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to generate a VR feed based on collected event data as claimed. Yang suggests this application (Yang, ¶ 414, quoted above). Additionally, Ackerson suggests application to the same field of endeavor as Yang: "In other variations, a first system 1A01 is being used by a client that is desirous of capturing what might be considered a 'local' vs. 'global' scene such as their car that received damage in a recent hail storm, or perhaps their house that was damaged in a storm or simply is being readied for sale" (Ackerson, ¶ 193). Therefore, one of ordinary skill in the art would recognize that Yang could be improved by incorporating enhanced visual reconstruction techniques from Ackerson.

Claim 2

Yang discloses wherein the real-time VR feed is generated based upon data generated by: (i) a camera of a vehicle in the geographic area (Yang, ¶ 433: "With respect to the automotive industry, for example, vehicles currently come equipped with an array of sensors designed to sense and record a multitude of data (e.g., speed, direction, fuel levels). These sensors are often present internally within a vehicle as well as mounted externally on the vehicle. Externally mounted sensors, for example, may include visual/audio sensors such as cameras"), (ii) a drone in the geographic area (Yang, ¶ 55: "IoT devices 114 can also be mobile, such as devices in vehicles or aircrafts, drones"), and/or (iii) an infrastructure camera in the geographic area (Yang, ¶ 447: "For example, roadside infrastructure may include various types of roadside units (RSUs) with edge and fog computing capabilities (e.g., storage, processing, communication/routing, sensors/cameras), such as traffic lights, street lights, lampposts, road signs, and so forth. In this manner, roadside infrastructure within the anomaly coverage area 5730 may detect, witness, or otherwise be alerted to an anomalous incident, and thus may trigger an alert or response to the incident.").
Claim 3

Yang discloses wherein: the event is the vehicle collision (Yang, ¶ 443: "collision may be detected and recorded as an anomalous incident (e.g., with details of time and location) by any of the vehicles involved in the collision (either directly involved or indirectly involved as witnesses) and/or the roadside infrastructure or RSUs"); the indication is obtained from a vehicle in the geographic area (Yang, ¶ 443, quoted above); and the real-time VR feed is generated based upon data received from a vehicle camera of the vehicle in the geographic area (Yang, ¶ 442: "The cars 5710 and RSUs 5720 are each equipped with a collection of sensors and/or cameras for capturing data associated with their respective operating environments, along with communication interface(s) to facilitate communication with each other and/or other networks.").

Claim 4

Yang discloses wherein generating the real-time VR feed further comprises blurring out: a face of an individual, identifying information of an individual, and/or a license plate (Yang, ¶ 333: "for example, vision workloads may be scheduled and executed across visual fog nodes based on specified privacy constraints. As an example, privacy constraints for an MTMCT and/or ReID workload may require tasks that output pictures with faces to remain on-premises (e.g., neither the tasks nor their output are assigned or transmitted beyond the premise or to the cloud), be anonymized (e.g., face-blurred)").

Claim 6

Yang does not disclose, but Ackerson discloses, wherein the event comprises the weather event, and the weather event comprises a hail event (Ackerson, ¶ 193: "In other variations, a first system 1A01 is being used by a client that is desirous of capturing what might be considered a 'local' vs. 'global' scene such as their car that received damage in a recent hail storm, or perhaps their house that was damaged in a storm or simply is being readied for sale").

Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to generate a VR feed based on collected event data as claimed. Yang suggests this application (Yang, ¶ 414, quoted above), and Ackerson suggests application to the same field of endeavor as Yang (Ackerson, ¶ 193, quoted above). Therefore, one of ordinary skill in the art would recognize that Yang could be improved by incorporating enhanced visual reconstruction techniques from Ackerson.
Claim 7

Yang discloses wherein the real-time condition data includes light detection and ranging (LIDAR) data (Yang, ¶ 51: "Visual sensors 120 may include any type of visual or optical sensors, such as cameras, ultraviolet (UV) sensors, laser rangefinders (e.g., light detection and ranging (LIDAR)), infrared (IR) sensors, electro-optical/infrared (EO/IR) sensors, and so forth.").

Claim 8

The same teachings and rationales in claim 1 are applicable to claim 8, with Yang disclosing a computer system configured to generate a virtual reality (VR) feed corresponding to an event, the computer system comprising one or more local or remote processors, transceivers, and/or sensors (Fig. 1).

[Figure: Yang, Fig. 1 (media_image2.png)]

Claim 9: The same teachings and rationales in claim 2 are applicable to claim 9.
Claim 10: The same teachings and rationales in claim 3 are applicable to claim 10.
Claim 11: The same teachings and rationales in claim 4 are applicable to claim 11.
Claim 13: The same teachings and rationales in claim 6 are applicable to claim 13.
Claim 14: The same teachings and rationales in claim 7 are applicable to claim 14.

Claim 15

The same teachings and rationales in claim 1 are applicable to claim 15, with Yang disclosing a computer device for generating a virtual reality (VR) feed corresponding to an event, the computer device comprising: one or more processors; and one or more non-transitory memories coupled to the one or more processors; the one or more non-transitory memories including computer executable instructions stored therein that, when executed by the one or more processors (Fig. 1).

Claim 16: The same teachings and rationales in claim 2 are applicable to claim 16.
Claim 17: The same teachings and rationales in claim 3 are applicable to claim 17.
Claim 18: The same teachings and rationales in claim 4 are applicable to claim 18.
Claim 20: The same teachings and rationales in claim 6 are applicable to claim 20.

Claims 5, 12, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Yang (US 2019/0043351) in view of Ackerson (US 2021/0133929) and Rutschman (US 2018/0239948).

Claim 5

Yang does not disclose, but Rutschman discloses, wherein the event comprises the natural disaster event, and the natural disaster event comprises a forest fire or a hurricane (Rutschman, ¶ 327: "detect any one of a flood, earthquake, hurricane, typhoon, tornado, tsunami, windstorm, fire, explosion, or other man-made or natural disaster event").

Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to consider disasters. As suggested by Yang, emergency support is considered, and Rutschman provides another type of sensing system for providing video capture and analysis.

Claim 12: The same teachings and rationales in claim 5 are applicable to claim 12.
Claim 19: The same teachings and rationales in claim 5 are applicable to claim 19.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RYAN M GRAY, whose telephone number is (571) 272-4582. The examiner can normally be reached Monday through Friday, 9:00am-5:30pm (EST). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee Tung, can be reached at (571) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RYAN M GRAY/
Primary Examiner, Art Unit 2611

Prosecution Timeline

Jul 26, 2024
Application Filed
Mar 05, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597216
ARTIFICIAL INTELLIGENCE VIRTUAL MAKEUP METHOD AND DEVICE USING MULTI-ANGLE IMAGE RECOGNITION
2y 5m to grant • Granted Apr 07, 2026
Patent 12586252
METHOD FOR ENCODING THREE-DIMENSIONAL VOLUMETRIC DATA
2y 5m to grant • Granted Mar 24, 2026
Patent 12572892
SYSTEMS AND METHODS FOR VISUALIZATION OF UTILITY LINES
2y 5m to grant • Granted Mar 10, 2026
Patent 12561928
SYSTEMS AND METHODS FOR CALCULATING OPTICAL MEASUREMENTS AND RENDERING RESULTS
2y 5m to grant • Granted Feb 24, 2026
Patent 12542946
REMOTE PRESENTATION WITH AUGMENTED REALITY CONTENT SYNCHRONIZED WITH SEPARATELY DISPLAYED VIDEO CONTENT
2y 5m to grant • Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 88%
With Interview: 98% (+10.9%)
Median Time to Grant: 2y 2m
PTA Risk: Low
Based on 672 resolved cases by this examiner. Grant probability derived from career allow rate.
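
A minimal sketch of how these projection figures fit together, assuming the interview lift is applied as an additive percentage-point adjustment to the base grant probability (the tool's actual model is not disclosed):

```python
# Hypothetical sketch: combining base grant probability with interview lift.
# Assumption: additive percentage-point model; not the tool's actual method.

base = 589 / 672                 # career allow rate, shown above as "88%"
interview_lift = 0.109           # "+10.9%" lift observed with interviews

with_interview = base + interview_lift   # ~0.985, shown rounded as "98%"
print(f"Base: {base:.1%}  With interview: {with_interview:.1%}")
```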

Free tier: 3 strategy analyses per month