Prosecution Insights
Last updated: April 19, 2026
Application No. 18/757,988

VIRTUAL MIRROR GRAPHICAL REPRESENTATION FOR PREDICTIVE COLLISION ANALYSIS

Status: Non-Final Office Action (§102)
Filed: Jun 28, 2024
Examiner: LETT, THOMAS J
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Faro Technologies Inc.
OA Round: 1 (Non-Final)
Grant Probability: 83% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 8m
Grant Probability With Interview: 47%

Examiner Intelligence

Career Allow Rate: 83% (599 granted / 719 resolved), +21.3% vs TC avg (above average)
Interview Lift: -36.0% (allow rate for resolved cases with an interview vs. without; minimal lift)
Typical Timeline: 2y 8m avg prosecution; 26 applications currently pending
Career History: 745 total applications across all art units
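The headline figures above can be reproduced from the stated counts. A minimal sketch (not the dashboard's own code; the simple percentage-point model for interview lift is an assumption):

```python
# Sketch only: reproduce the dashboard's headline figures from the stated
# counts. Treating "interview lift" as a flat percentage-point adjustment
# is an assumption, not the product's documented methodology.

granted, resolved = 599, 719

allow_rate_pct = 100 * granted / resolved        # career allow rate
interview_lift_pts = -36.0                       # lift, percentage points
with_interview_pct = allow_rate_pct + interview_lift_pts

print(f"Career allow rate: {allow_rate_pct:.1f}%")     # 83.3%
print(f"With interview: {with_interview_pct:.0f}%")    # 47%
```

This matches the 83% career allow rate and the 47% with-interview figure quoted elsewhere on the page.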

Statute-Specific Performance

§101: 11.1% (-28.9% vs TC avg)
§103: 27.4% (-12.6% vs TC avg)
§102: 47.6% (+7.6% vs TC avg)
§112: 11.6% (-28.4% vs TC avg)
Tech Center averages are estimates; figures based on career data from 719 resolved cases.
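The per-statute deltas are internally consistent: subtracting each "vs TC avg" delta from the examiner's rate yields the same implied Tech Center baseline. A quick check (a sketch, with variable names assumed):

```python
# Sketch: back out the implied Tech Center baseline from each statute's
# examiner rate and its "vs TC avg" delta (rate = TC avg + delta).

stats = {
    "101": (11.1, -28.9),
    "103": (27.4, -12.6),
    "102": (47.6, +7.6),
    "112": (11.6, -28.4),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta
    print(f"Section {statute}: implied TC avg {tc_avg:.1f}%")
# Every statute implies the same ~40.0% Tech Center baseline.
```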

Office Action

§102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-11 and 13-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Leipner et al., “Simulation of mirror surfaces for virtual estimation of visibility lines for 3D motor vehicle collision reconstruction” (2017).

Regarding claim 1, Leipner et al. discloses a computer-implemented method for performing a predictive collision analysis, the method comprising: initiating, on a processing system, the predictive collision analysis to be performed on the processing system, the predictive collision analysis is based on an environment of a vehicle collision (reconstructions of motor vehicle collisions are used to identify the causes of events and to identify potential violations of traffic regulations; possibility of preventing an event is another important factor related to motor vehicle collision reconstruction; recognition of potential danger by a vehicle driver, see Introduction, para. 1); defining a virtual mirror property for a virtual mirror (e.g., aspheric, convex curvature or flatness of mirrors, see Introduction, paras. 3, 5, 6) for the predictive collision analysis, the virtual mirror depicting rear views of a vehicle in the vehicle collision; and performing, by the processing system, the predictive collision analysis, wherein performing the predictive collision analysis includes generating a virtual mirror graphical representation comprising the virtual mirror based on the virtual mirror property (during the evaluation of all mirrors with the rendered virtual scenes, the greatest measured deviation between an original photo and the corresponding virtual image was 20 pixels in the transverse direction for an image width of 4256 pixels, see Results, para. 3).

Regarding claim 2, Leipner et al. discloses the computer-implemented method of claim 1, wherein defining the virtual mirror property comprises defining a plurality of virtual mirror properties (e.g., aspheric, convex curvature or flatness of mirrors, see Introduction, paras. 3, 5, 6; also analyzed the performances of virtual mirror surfaces based on structured light scans using real mirror surfaces and their reflections as references, see Methods, para. 1).

Regarding claim 3, Leipner et al. discloses the computer-implemented method of claim 1, wherein the virtual mirror property comprises a position of the virtual mirror within a digital representation of an environment comprising at least one of: the virtual mirror graphical representation; a mirror offset that defines the position of a virtual camera sensor used to generate the virtual mirror; an orientation of the virtual mirror (see Reconstruction and visualization in 3ds Max, para. 1); a zoom of the virtual mirror (the curvature or flatness of mirrors used inherently affects zoom); and a field of view of the virtual mirror (field of view (FOV) of a driver through the front and side windows of a vehicle can also be visualized in 3D, see Introduction, para. 3).

Regarding claim 4, Leipner et al. discloses the computer-implemented method of claim 1, further comprising defining prediction properties (virtual estimation of visibility lines, field-of-views and lines of sight for the driver, recognition of potential danger) for the predictive collision analysis.

Regarding claim 5, Leipner et al. discloses the computer-implemented method of claim 4, wherein defining the prediction properties comprises defining a number and type of virtual mirrors (field of view (FOV) of a driver through the front and side windows of a vehicle can also be visualized in 3D, see Introduction, para. 3; also see Fig. 1: “Mirror selection” of interior or exterior mirror types).

Regarding claim 6, Leipner et al. discloses the computer-implemented method of claim 5, wherein the type of virtual mirrors is selected from a group comprising: a left side-view mirror; a right side-view mirror; and a rear-view mirror (see Fig. 1: “Mirror selection” of interior or exterior mirror types).

Regarding claim 7, Leipner et al. discloses the computer-implemented method of claim 5, wherein defining the virtual mirror property comprises defining a plurality of virtual mirror properties for each of the number and the type of virtual mirrors (e.g., aspheric, convex curvature or flatness of mirrors, see Introduction, paras. 3, 5, 6; also analyzed the performances of virtual mirror surfaces based on structured light scans using real mirror surfaces and their reflections as references, see Methods, para. 1).

Regarding claim 8, Leipner et al. discloses the computer-implemented method of claim 1, further comprising generating a digital representation of the predictive collision analysis, wherein the digital representation includes the virtual mirror graphical representation (structured light scans were processed using ATOS Professional V7.5 (GOM mbH, Braunschweig, Germany) to create polygonal models. In Geomagic Wrap (3D Systems Inc., Rock Hill, South Carolina, USA), each mirror scan (Fig. 3a) was separated into parts containing the table and the wall, the mirror frame (Fig. 3b) and the mirror surface itself (Fig. 3c), see section 2.2: “Data Processing”, para. 2).

Regarding claim 9, Leipner et al. discloses the computer-implemented method of claim 8, wherein the digital representation including the virtual mirror graphical representation is displayed on a display of the processing system (structured light scans were processed using ATOS Professional V7.5 (GOM mbH, Braunschweig, Germany) to create polygonal models. In Geomagic Wrap (3D Systems Inc., Rock Hill, South Carolina, USA), each mirror scan (Fig. 3a) was separated into parts containing the table and the wall, the mirror frame (Fig. 3b) and the mirror surface itself (Fig. 3c). See also Figure 8).

Regarding claim 10, Leipner et al. discloses the computer-implemented method of claim 8, wherein the digital representation includes a plurality of virtual mirror graphical representations (see Figure 8).

Regarding claim 11, Leipner et al. discloses the computer-implemented method of claim 1, further comprising collecting the 3D data of an environment using a 3D coordinate measurement device, wherein the 3D data of the environment is used to perform the predictive collision analysis (the camera match function of 3ds Max was used; at least 11 points were defined on the 3D model of the setup as well as on the background image. Hence, the exterior orientation of the virtual camera was generated at a position comparable to the real one. Because the focal length of the virtual camera cannot be determined automatically, it has to be manually adjusted until the 11 paired points match. As the background images are rectified, no further calculations are necessary. For better comparison, black-and-white coloration was applied to the virtual checkered pattern parts. The material editor was used to assign each mirror an Autodesk mirror material. In the last step, the rendering function of 3ds Max was used to generate an image of the virtual scene (Fig. 5b), see section 2.4).

Regarding claim 13, Leipner et al. discloses the computer-implemented method of claim 1, further comprising collecting images of an environment using a camera and generating 3D data of the environment based at least in part on the images by using a photogrammetry (For scene reconstruction and to generate virtual images, we used 3ds Max. We compared the simulated virtual images and photographs of real scenes using Adobe Photoshop. Different 3D scanning techniques are used, such as laser scanning or structured light scanning [3]. The resulting data can be used later to generate 3D reconstructions and visualizations of events, see Introduction, para. 2) or videogrammetry technique, wherein the 3D data of the environment is used to perform the predictive collision analysis.

Regarding claim 14, Leipner et al. discloses a processing system comprising: a memory (e.g., processed using ATOS Professional V7.5, which implies use of memory) comprising computer readable instructions; and a processing device (e.g., processed using ATOS Professional V7.5, which implies use of a processor) for executing the computer readable instructions, the computer readable instructions controlling the processing device to perform operations for performing a predictive collision analysis, the operations comprising: initiating the predictive collision analysis to be performed (reconstructions of motor vehicle collisions are used to identify the causes of events and to identify potential violations of traffic regulations; possibility of preventing an event is another important factor related to motor vehicle collision reconstruction; recognition of potential danger by a vehicle driver, see Introduction, para. 1); defining a virtual mirror property for a virtual mirror for the predictive collision analysis (e.g., aspheric, convex curvature or flatness of mirrors, see Introduction, paras. 3, 5, 6); and performing the predictive collision analysis, wherein performing the predictive collision analysis includes generating a virtual mirror graphical representation comprising the virtual mirror based on the virtual mirror property (during the evaluation of all mirrors with the rendered virtual scenes, the greatest measured deviation between an original photo and the corresponding virtual image was 20 pixels in the transverse direction for an image width of 4256 pixels, see Results, para. 3).

Regarding claim 15, Leipner et al. discloses the processing system of claim 14, wherein the virtual mirror property comprises a position of the virtual mirror within a digital representation of an environment comprising at least one of: the virtual mirror graphical representation; a mirror offset that defines the position of a virtual camera sensor used to generate the virtual mirror; an orientation of the virtual mirror (see Reconstruction and visualization in 3ds Max, para. 1); a zoom of the virtual mirror (the curvature or flatness of mirrors used inherently affects zoom); and a field of view of the virtual mirror (field of view (FOV) of a driver through the front and side windows of a vehicle can also be visualized in 3D, see Introduction, para. 3).

Regarding claim 16, Leipner et al. discloses the processing system of claim 14, further comprising defining prediction properties for the predictive collision analysis (virtual estimation of visibility lines, field-of-views and lines of sight for the driver, recognition of potential danger).

Regarding claim 17, Leipner et al. discloses the processing system of claim 16, wherein defining the prediction properties comprises defining a number and type of virtual mirrors (field of view (FOV) of a driver through the front and side windows of a vehicle can also be visualized in 3D, see Introduction, para. 3; also see Fig. 1: “Mirror selection” of interior or exterior mirror types).

Regarding claim 18, Leipner et al. discloses the processing system of claim 17, wherein the type of virtual mirrors is selected from a group comprising: a left side-view mirror; a right side-view mirror; and a rear-view mirror (for each, field of view (FOV) of a driver through the front and side windows of a vehicle can also be visualized in 3D, see Introduction, para. 3; also see Fig. 1: “Mirror selection” of interior or exterior mirror types).

Regarding claim 19, Leipner et al. discloses the processing system of claim 17, wherein defining the virtual mirror property comprises defining a plurality of virtual mirror properties for each of the number and the type of virtual mirrors (e.g., aspheric, convex curvature or flatness of mirrors, see Introduction, paras. 3, 5, 6; also analyzed the performances of virtual mirror surfaces based on structured light scans using real mirror surfaces and their reflections as references, see Methods, para. 1).

Regarding claim 20, Leipner et al. discloses the processing system of claim 14, wherein the operations further comprise generating a digital representation of the predictive collision analysis, wherein the digital representation includes the virtual mirror graphical representation (structured light scans were processed using ATOS Professional V7.5 (GOM mbH, Braunschweig, Germany) to create polygonal models. In Geomagic Wrap (3D Systems Inc., Rock Hill, South Carolina, USA), each mirror scan (Fig. 3a) was separated into parts containing the table and the wall, the mirror frame (Fig. 3b) and the mirror surface itself (Fig. 3c), see section 2.2: “Data Processing”, para. 2).

Allowable Subject Matter

Claim 12 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to THOMAS J LETT, whose telephone number is (571) 272-7464. The examiner can normally be reached Mon-Fri 9-6 ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tammy Goddard, can be reached at (571) 272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/THOMAS J LETT/
Primary Examiner, Art Unit 2611
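The rejection repeatedly maps the claimed "virtual mirror property" (curvature, orientation, field of view) onto the mirror geometry in Leipner et al. As background only, here is a minimal sketch of that geometry, not the reference's 3ds Max pipeline and not the claimed method: a view ray reflected about a mirror's surface normal, where convex curvature changes the normal and so widens the rendered view.

```python
# Illustrative only: basic specular-reflection geometry underlying virtual
# mirror rendering. Function names and the spherical-mirror model are
# assumptions for this sketch, not taken from Leipner et al.
import math

def reflect(d, n):
    """Reflect direction d about unit normal n: r = d - 2(d.n)n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2 * dot * ni for di, ni in zip(d, n))

def sphere_normal(point, center):
    """Outward unit normal of a spherical (convex) mirror at `point`."""
    v = tuple(p - c for p, c in zip(point, center))
    length = math.sqrt(sum(x * x for x in v))
    return tuple(x / length for x in v)

# A flat mirror facing +x sends a head-on ray straight back:
print(reflect((-1.0, 0.0, 0.0), (1.0, 0.0, 0.0)))  # (1.0, 0.0, 0.0)

# The same ray hitting a convex mirror off-axis is deflected outward,
# which is why convex side mirrors show a wider field of view:
n = sphere_normal(point=(0.0, 0.5, 0.0), center=(-2.0, 0.0, 0.0))
print(reflect((-1.0, 0.0, 0.0), n))
```

In a full renderer, a virtual camera placed at the mirror offset traces such rays per pixel; varying the curvature property varies the normals and hence the zoom and field of view attributed to the mirror in claims 3 and 15.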

Prosecution Timeline

Jun 28, 2024
Application Filed
Dec 20, 2025
Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602714
LIGHTING AND INTERNET OF THINGS DESIGN USING AUGMENTED REALITY
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12570401
Robot and Unmanned Aerial Vehicle (UAV) Systems for Cell Sites and Towers
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12567217
SMART CONTENT RENDERING ON AUGMENTED REALITY SYSTEMS, METHODS, AND DEVICES
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12561867
SYSTEMS AND METHODS FOR AUTOMATICALLY ADDING TEXT CONTENT TO GENERATED IMAGES
Granted Feb 24, 2026 (2y 5m to grant)

Patent 12555276
Image Generation Method and Apparatus
Granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83%
With Interview: 47% (-36.0%)
Median Time to Grant: 2y 8m
PTA Risk: Low
Based on 719 resolved cases by this examiner. Grant probability derived from career allow rate.
