Prosecution Insights
Last updated: April 19, 2026
Application No. 18/543,502

Systems and Methods for Lean Ortho Correction for Computer Models of Structures

Final Rejection — §103

Filed: Dec 18, 2023
Examiner: WU, YANNA
Art Unit: 2615
Tech Center: 2600 — Communications
Assignee: Insurance Services Office Inc.
OA Round: 4 (Final)

Grant Probability: 81% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 2y 4m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 81% (354 granted / 438 resolved; +18.8% vs TC avg — above average)
Interview Lift: +35.3% (allow rate among resolved cases with an interview vs. without)
Typical Timeline: 2y 4m average prosecution; 20 applications currently pending
Career History: 458 total applications across all art units

Statute-Specific Performance

§101: 8.2% (-31.8% vs TC avg)
§103: 65.1% (+25.1% vs TC avg)
§102: 6.3% (-33.7% vs TC avg)
§112: 11.3% (-28.7% vs TC avg)

Based on career data from 438 resolved cases; Tech Center averages are estimates.
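As a sanity check, the headline allow-rate figures reported above follow directly from the raw counts (354 granted of 438 resolved). A minimal sketch, assuming the "+18.8% vs TC avg" delta is expressed in absolute percentage points:

```python
# Reproduce the headline allow-rate figures from the raw counts shown above.
granted, resolved = 354, 438                 # career totals for this examiner
career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")   # 80.8%, displayed as 81%

# Implied Tech Center average, assuming the +18.8% delta is measured in
# absolute percentage points rather than relative terms.
tc_avg = career_allow_rate - 0.188
print(f"Implied TC average: {tc_avg:.1%}")             # 62.0%
```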

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This is in response to applicant's amendment/response filed on 01/30/2026, which has been entered and made of record. Claims 1 and 10 are amended. Claims 7 and 16 are canceled. Claims 1-6, 8-15 and 17-18 are pending in the application.

Response to Arguments

Applicant's arguments regarding the claim rejections under §103 have been considered, but are not persuasive.

Applicant argues: The references used do not teach the amended limitations of claims 1 and 10.

Examiner disagrees: Ciarcia teaches a lean ortho correction algorithm that matches points of an oblique image with points of an orthorectified image. FIG. 9 gives an example of the point matching: the first square matches its circle position, the second square does not match its circle position, and the third square matches. Specifically, Ciarcia teaches the amended limitations of "the lean ortho correction algorithm processing a first point of the plurality of world 3D points that matches a first corresponding image pixel in the orthorectified image, a second point of the plurality of world 3D points that does not match a second corresponding image pixel in the orthorectified image, and a third point of the plurality of world 3D points that matches the second corresponding image pixel in the orthorectified image;" ([0057], "An example of the result of this fitting process is found in FIG. 9, which shows diagram 66 that shows a point pattern plot in oblique image space. The vertical axis 218 represents the "v" coordinate in the oblique image space; the horizontal axis 216 represents the "u" coordinate in the oblique image space. The squares 220 show the position of the defined points in the oblique image space.
The circles 222 represent the location of the subset of ortho points that were transformed into the oblique image space using the current candidate arguments for the Transform Function." As shown in FIG. 9 [figure image omitted], the first square matches its circle position, the second square does not match its circle position, and the third square matches.)

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6, 8-15 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Fathi et al. (US 2018/0053347 A1, "Fathi") in view of Ciarcia (US 2014/0099035 A1, "Ciarcia").
Regarding claim 1, Fathi teaches:

A system for lean ortho correction for computer models of structures, comprising: a processor for processing a structure model; ([0015], "In another aspect, a system comprises at least one computing device comprising a processor; and a wireframe verification application stored in memory, where execution of the wireframe verification application by the processor causes the at least one computing device to:")

a user interface in communication with the processor; (Abstract: "The wireframe can be adjusted by a user and/or a computer to align the 2D images and/or 3D representations thereto, thereby generating a verified wireframe including at least a portion of the structure or element of interest.")

and computer system code executed by the processor, the computer system code causing the processor ([0015], "In another aspect, a system comprises at least one computing device comprising a processor; and a wireframe verification application stored in memory, where execution of the wireframe verification application by the processor causes the at least one computing device to:") to:

display an image of a structure on the user interface; ([0031], "As set forth in more detail herein, such user confidence can result from projecting or overlaying the unverified wireframe on one or more 2D image representations and/or a 3D representation of the structure or element of interest, and facilitating navigation by the user through and around the unverified wireframe during a verification process. Such navigation can include zooming and panning to provide multiple angles for the unverified wireframe to allow the user to inspect the unverified wireframe vertices, whereby the positioning of such vertices can be adjusted against the 2D image representation(s) and/or 3D representation.")

project the structure model onto the image; ([0163], "The unverified wireframe can be projected on 2D images, 3D representations, or a combination of both.
The 2D images used for verification of the wireframe can include 2D images that were used to generate the unverified wireframe. In some cases, the 2D images used for verification of the wireframe are separate from the 2D images used to generate the wireframe (or a point cloud used to generate the wireframe). With the wireframe projected over the image or point cloud at 112, endpoints and edges can be compared to features of the underlying 2D image or 3D representation to ensure confidence in the final verified wireframe.")

identify, via user input, a plurality of features in the image; and adjust the structure model by transforming coordinates of the structure model using the plurality of features and a lean ortho correction algorithm; ([0163], "With the wireframe projected over the image or point cloud at 112, endpoints and edges can be compared to features of the underlying 2D image or 3D representation to ensure confidence in the final verified wireframe. If adjustments are made at 115, then the changes can be propagated through all of the 2D images used to verify the unverified wireframe at 118. Confirmation of whether additional 2D images are available for verification can then be checked at 121. If there is not adjustment, then the workflow passes to 121." [0048], "This could be even extended to using an as-built BIM model to verify the unverified wireframe. The only criterion for this functionality is to register the 3D representation, unverified wireframe, and imagery in the same coordinate system. This can be as simple as having the 3D representation, unverified wireframe, and different sets of imagery in a global coordinate system such as Geodetic or Geocentric coordinate system or different but known local coordinate systems. A more complex scenario happens when the 3D representation, unverified wireframe, and different sets of imagery are in different unknown coordinate systems.
In this scenario, a set of 2D and/or 3D features could be extracted and matched automatically among the different datasets or provided by a user. Corresponding 2D features can be converted into 3D features using visual triangulation techniques. Having at least three corresponding 3D features between two unknown coordinate systems allows computing a transform matrix that maps the two coordinate systems. Such a transform matrix or a set of transform matrices could be used to bring the 3D representation, unverified wireframe, and different sets of 2D imagery into a known coordinate system." [0013] teaches that a user is involved in the process.)

However, Fathi does not explicitly teach, but Ciarcia teaches:

the image can be an orthorectified image ([0022], "House 106 is located substantially below airplane 102, and the image captured from this angle is called an orthogonal ("ortho") image.")

The image features can be: world three-dimensional ("3D") points in the orthorectified image ([0041], "For example, in FIG. 4 image 56, 18 points were chosen in the ortho image to identify features of a roof. These points include the peak edge of a gable 136, an eave edge 134, an opposing eave edge 138, and another peak edge of a gable 132 at the opposite side of the house.")

project the adjusted structure model onto the orthorectified image. ([0070]-[0071], "The roof modeling engine 402 performs at least some of the functions described with reference to FIGS. 1-13 above. In particular, the roof modeling engine 402 generates a model based on one or more images of a building that are obtained from the Roof Estimation System data repository 416 or directly from the image source computing system 465. As noted, model generation may be performed semi-automatically, based on at least some inputs received from the operator computing system 475. In addition, at least some aspects of the model generation may be performed automatically.
In particular, to generate the 3D model, the point-to-point registration and elevation computation engine 478 simulates the perspective change between an orthogonal and oblique view of a roof provided in the different acquired images by applying a convoluted vanishing point perspective projection VPPP model. In some embodiments, the point-to-point registration and elevation computation engine 478 performs registration and computation of point elevations within orthogonal (overhead) roof images based on the use of determining a proper Transformation Function by iterating through Gaussian Mixture Model (GMM) and evaluation using Graphical Form Fitting.")

the lean ortho correction algorithm processing a first point of the plurality of world 3D points that matches a first corresponding image pixel in the orthorectified image, a second point of the plurality of world 3D points that does not match a second corresponding image pixel in the orthorectified image, and a third point of the plurality of world 3D points that matches the second corresponding image pixel in the orthorectified image; ([0057], "An example of the result of this fitting process is found in FIG. 9, which shows diagram 66 that shows a point pattern plot in oblique image space. The vertical axis 218 represents the "v" coordinate in the oblique image space; the horizontal axis 216 represents the "u" coordinate in the oblique image space. The squares 220 show the position of the defined points in the oblique image space. The circles 222 represent the location of the subset of ortho points that were transformed into the oblique image space using the current candidate arguments for the Transform Function." As shown in FIG. 9, the first square matches its circle position, the second square does not match its circle position, and the third square matches.
[Figure image omitted.])

Fathi teaches ortho correction for computer models, in which at least three features are selected to be used for the transformation. Ciarcia explicitly teaches how to select three points as the three features, and a specific transformation process that does not need previously stored metadata. It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine the teachings of Fathi with the specific teachings of Ciarcia. The three points selected by Ciarcia effectively reflect the features of the model, and Ciarcia's transformation method does not require pre-existing metadata, which provides a more flexible transformation. The benefit would be to align the two images and correct the model more efficiently.

Regarding claim 2, Fathi in view of Ciarcia teaches: The system of Claim 1, wherein the structure model comprises a wireframe model or polygonal model of the structure. (Fathi [0014], "In one or more aspects, the structure or element of interest can comprise all or part of a roof, and the one or more structural aspects of interest comprises a roof area, a roof corner, a roof pitch, a roof edge, a roof gutter, a roof gable, a dormer, or a skylight" FIG. 1, 109)

Regarding claim 3, Fathi in view of Ciarcia teaches: The system of Claim 2, wherein the structure is a three-dimensional model of a house or a building. (Fathi [0014], quoted above for claim 2; FIG. 1, 109)

Regarding claim 4, Fathi in view of Ciarcia teaches: The system of Claim 1, wherein one point of the plurality of world 3D points corresponds to a corner of the structure. (Ciarcia, [0041], "For example, in FIG. 4 image 56, 18 points were chosen in the ortho image to identify features of a roof. These points include the peak edge of a gable 136, an eave edge 134, an opposing eave edge 138, and another peak edge of a gable 132 at the opposite side of the house." The combination of claim 1 is incorporated here.)

Regarding claim 5, Fathi in view of Ciarcia teaches: The system of Claim 4, wherein a second point of the plurality of world 3D points corresponds to a point on the structure model. (Ciarcia, [0041], quoted above for claim 4. The combination of claim 1 is incorporated here.)

Regarding claim 6, Fathi in view of Ciarcia teaches: The system of Claim 5, wherein the second point of the plurality of world 3D points has an elevation greater than the first world 3D point. (Ciarcia, [0041], quoted above for claim 4. The combination of claim 1 is incorporated here.)

Regarding claim 8, Fathi in view of Ciarcia teaches: The system of Claim 1, wherein the lean ortho correction algorithm transforms image coordinates to model coordinates. (Fathi [0048], "A more complex scenario happens when the 3D representation, unverified wireframe, and different sets of imagery are in different unknown coordinate systems. In this scenario, a set of 2D and/or 3D features could be extracted and matched automatically among the different datasets or provided by a user. Corresponding 2D features can be converted into 3D features using visual triangulation techniques.
Having at least three corresponding 3D features between two unknown coordinate systems allows computing a transform matrix that maps the two coordinate systems. Such a transform matrix or a set of transform matrices could be used to bring the 3D representation, unverified wireframe, and different sets of 2D imagery into a known coordinate system." Ciarcia further teaches that the transformation can transform points from one image to another image to see whether they match each other. It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have combined the teachings of Fathi with the specific teachings of Ciarcia to transform points from 2D image coordinates into the model coordinates to obtain predictable results.)

Regarding claim 9, Fathi in view of Ciarcia teaches: The system of Claim 1, wherein the lean ortho correction algorithm transforms model coordinates to image coordinates. (Fathi [0048], quoted above for claim 8. Ciarcia further teaches that the transformation can transform points from one image to another image to see whether they match each other.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have combined the teachings of Fathi with the specific teachings of Ciarcia to transform points from model coordinates into the image coordinates to obtain predictable results.)

Claims 10-15 and 17-18 recite limitations similar to those of claims 1-6 and 8-9, respectively, in method form, and are therefore rejected using the same rationale.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to YANNA WU whose telephone number is (571) 270-0725. The examiner can normally be reached Monday-Thursday 8:00-5:30 ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alicia Harrington, can be reached at (571) 272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/YANNA WU/
Primary Examiner, Art Unit 2615
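The registration step the examiner relies on from Fathi ¶[0048] (at least three corresponding 3D features between two coordinate systems yield a transform mapping one into the other) can be sketched with a standard least-squares rigid fit (the Kabsch algorithm). This is an illustrative sketch with hypothetical point values, not code from either cited reference:

```python
import numpy as np

def rigid_transform_3d(src, dst):
    """Kabsch: least-squares rotation R and translation t with dst ≈ src @ R.T + t."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Hypothetical example: three non-collinear roof corners expressed in a local
# model frame (src) and the same corners in a world frame (dst).
src = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 6.0, 3.0]])
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([100.0, 50.0, 2.0])

R, t = rigid_transform_3d(src, dst)
mapped = src @ R.T + t
print(np.allclose(mapped, dst))  # True: three correspondences fix the transform
```

Three non-collinear correspondences are exactly enough to pin down a rigid transform, which is why both the claim language and Fathi's ¶[0048] center on "at least three" features.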

Prosecution Timeline

Dec 18, 2023: Application Filed
Jul 13, 2024: Non-Final Rejection — §103
Dec 13, 2024: Response Filed
Jan 03, 2025: Final Rejection — §103
Jul 08, 2025: Request for Continued Examination
Jul 15, 2025: Response after Non-Final Action
Jul 28, 2025: Non-Final Rejection — §103
Jan 30, 2026: Response Filed
Feb 24, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602850: GENERATIVE AI VIRTUAL CLOTHING TRY-ON (granted Apr 14, 2026; 2y 5m to grant)
Patent 12579664: EYE TRACKING METHOD, APPARATUS AND SENSOR FOR DETERMINING SENSING COVERAGE BASED ON EYE MODEL (granted Mar 17, 2026; 2y 5m to grant)
Patent 12573106: INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD FOR PROCESSING OVERLAY IMAGES (granted Mar 10, 2026; 2y 5m to grant)
Patent 12573108: HEAD-POSE AND GAZE REDIRECTION (granted Mar 10, 2026; 2y 5m to grant)
Patent 12555187: CLIENT-SERVER MEDICAL IMAGE STACK RETRIEVAL AND DISPLAY (granted Feb 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 81%
With Interview: 99% (+35.3%)
Median Time to Grant: 2y 4m
PTA Risk: High

Based on 438 resolved cases by this examiner. Grant probability derived from career allow rate.
