Prosecution Insights
Last updated: April 19, 2026
Application No. 18/675,273

SYSTEM FOR GENERATION OF FLOOR PLANS AND THREE-DIMENSIONAL MODELS

Non-Final OA §102
Filed: May 28, 2024
Examiner: MUSHAMBO, MARTIN
Art Unit: 2615
Tech Center: 2600 — Communications
Assignee: Occipital Inc.
OA Round: 1 (Non-Final)
Grant Probability: 85% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 85% (690 granted / 816 resolved; +22.6% vs TC avg, above average)
Interview Lift: +14.1% (moderate), across resolved cases with interview
Avg Prosecution: 2y 5m (typical timeline); 15 applications currently pending
Total Applications: 831 across all art units (career history)
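
As a sanity check on the figures above: the allow rate is simply granted over resolved, and the 99% "with interview" figure is consistent with adding the lift as percentage points. A minimal Python sketch, assuming that additive reading (the dashboard's exact formula is not disclosed):

    # Reproducing the examiner stats above, assuming:
    #   allow rate = granted / resolved
    #   interview lift is additive in percentage points (hypothetical reading)
    granted, resolved = 690, 816
    allow_rate = granted / resolved                # 0.8456 -> shown as 85%
    with_interview = min(allow_rate + 0.141, 1.0)  # 0.9866 -> shown as 99%
    print(f"allow rate {allow_rate:.1%}, with interview {with_interview:.1%}")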

Statute-Specific Performance

§101: 12.7% (-27.3% vs TC avg)
§103: 48.5% (+8.5% vs TC avg)
§102: 23.7% (-16.3% vs TC avg)
§112: 8.6% (-31.4% vs TC avg)
Deltas are relative to the Tech Center average estimate. Based on career data from 816 resolved cases.
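
Read as examiner rate minus Tech Center average, the four deltas above all imply the same baseline, suggesting the TC average estimate is a flat figure across statutes. A short sketch under that assumed reading:

    # Recover the implied TC average from each statute's delta, assuming
    # delta = examiner rate - TC average estimate (an assumed reading).
    examiner = {"101": 12.7, "103": 48.5, "102": 23.7, "112": 8.6}  # percent
    delta    = {"101": -27.3, "103": 8.5, "102": -16.3, "112": -31.4}
    for s in examiner:
        print(f"§{s}: implied TC avg {examiner[s] - delta[s]:.1f}%")  # 40.0% for all four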

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 10/08/2024 and 05/28/2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 2, and 4 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Moreno et al. (US 20190279420 A1), hereinafter referred to as Moreno.

Claim 1. A method comprising: receiving image data associated with a physical environment (Moreno, Fig. 2: receiving point cloud data from a LIDAR system that is scanning an environment); generating, based at least in part on the image data, a semantic segmentation (Moreno, [0051]-[0054]: a segmentation or classification algorithm can be applied in order to distinguish surfaces; [0073]: color segmentation is also used to distinguish, for example, roofs from surrounding objects); generating, based at least in part on the image data, a line segment segmentation (Moreno, [0067]: a line segment can be drawn from one of the two free end points of the intersection lines in the principal direction); determining an intersection based at least in part on the semantic segmentation and the line segment segmentation (Moreno, [0056], [0059], [0062], [0064]: the processing module can receive a list of surfaces that includes all points in each surface; intersection lines defining the internal edges of the roof can be calculated and refined by identifying intersections between the surfaces, thus defining the internal topology of the roof); and determining, based at least in part on the intersection, a planar surface (roof surface) associated with the physical environment (Moreno, [0055]: roof surfaces are identified and grouped).

Claim 2. The method as recited in claim 1, further comprising: generating, based at least in part on the image data, a segmentation by normal (Moreno, [0012]: computing the normal of each LiDAR point); and wherein determining the intersection is based at least in part on the segmentation by normal (Moreno, [0055]: after the normals are calculated, the points can be clustered into groups according to the direction of their respective normals, with such groups corresponding to roof surfaces; [0058]: a plurality of connected geometric shapes or a polygon mesh defined by line segments, with the plurality of surfaces making up the roof interconnected by intersecting segments).

Claim 4. The method as recited in claim 1, further comprising generating a three-dimensional model of the physical environment based at least in part on the planar surface (Moreno, [0057]: generating a 3D model of the roof using the identified surfaces).

Allowable Subject Matter

Claims 5-20 are allowed. The following is a statement of reasons for the indication of allowable subject matter: no prior art teaches, alone or in combination, the bolded and italicized features.

Claim 5. A method comprising: receiving a model associated with a physical environment, the physical environment representing an interior of a room; identifying, based at least in part on the model, two or more unconnected endpoints associated with the model; generating, based at least in part on the two or more unconnected endpoints, pairs of unconnected endpoints; generating, for each pair of unconnected endpoints, one or more variant line segments; generating, for each of the one or more variant line segments, a response value score; selecting, based at least in part on the response value score for each of the one or more variant line segments, a first variant line segment; and completing, based at least in part on the first variant line segment, the model associated with the physical environment.

Claims 6-13 depend on allowable claim 5 and are therefore allowable for the same reasons as claim 5.

Claim 14. A method comprising: receiving a model associated with a physical environment, the physical environment representing an interior of a room; determining a trusted segment of the model; identifying a first segment of the model, the first segment different than the trusted segment; generating, for the first segment, one or more variant segments, each of the one or more variant segments either parallel or orthogonal to the trusted segment; determining a selected variant segment of the one or more variant segments for use in a surface segment of the model; and generating the surface segment of the model based at least in part on the selected variant segment.

Claims 15-20 depend on allowable claim 14 and are therefore allowable for the same reasons as claim 14.

Claim 3 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. For claim 3, no prior art teaches, alone or in combination, the features: "The method as recited in claim 2, wherein: generating the semantic segmentation further comprises inputting the image data into one or more first machine learned models and receiving as an output the semantic segmentation, the one or more first machine learned models trained on first image data of interiors of physical environments including various surfaces and objects; generating the line segment segmentation further comprises inputting the image data into one or more second machine learned models and receiving as an output the line segment segmentation, the one or more second machine learned models trained on second image data of interiors of physical environments including various surfaces and objects; and generating the segmentation by normals further comprises inputting the image data into one or more third machine learned models and receiving as an output the segmentation by normals, the one or more third machine learned models trained on third image data of interiors of physical environments including various surfaces and objects."
Conclusion

The prior art made of record and not relied upon, which is considered pertinent to applicant's disclosure, is as follows:

US 20080130996 A1: A method for classifying a line segment of a handwritten line into a reference feature set, wherein the handwritten line comprises one or several curves representing a plurality of symbols. First, sample data representing the handwritten line is received. Next, a sample line segment in the received sample data is identified by detecting a sample line segment start point (SLSSP) and a sample line segment end point (SLSEP). Then, a sample feature set of the identified sample line segment is determined. Finally, the determined sample feature set is matched to a reference feature set among a plurality of reference feature sets.

US 20170193829 A1: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for an unmanned aerial system inspection system. One of the methods is performed by a UAV and includes receiving, by the UAV, flight information describing a job to perform an inspection of a rooftop. A particular altitude is ascended to, and an inspection of the rooftop is performed, including obtaining sensor information describing the rooftop. Location information identifying a damaged area of the rooftop is received. The damaged area of the rooftop is traveled to. An inspection of the damaged area of the rooftop is performed, including obtaining detailed sensor information describing the damaged area. A safe landing location is traveled to.

US 20190205485 A1: A computer-implemented method for generating a 3D model representing a building. The method comprises providing a 2D floor plan representing a layout of the building, determining a semantic segmentation of the 2D floor plan, and determining the 3D model based on the semantic segmentation. Such a method provides an improved solution for processing a 2D floor plan.

US 20200282929 A1: This disclosure is directed to validating a calibration of and/or calibrating sensors using semantic segmentation information about an environment. For example, the semantic segmentation information can identify bounds of objects, such as invariant objects, in the environment. Techniques described herein may determine sensor data associated with the invariant objects and compare that data to a feature known from the invariant object. Misalignment of sensor data with the known feature can be indicative of a calibration error. In some implementations, the calibration error can be determined as a distance between the sensor data and a line or plane representing a portion of the invariant object.

US 20200302686 A1: A method for determining a visual scene virtual representation and a highly accurate visual scene-aligned geometric representation for virtual interaction.

US 20210150805 A1: Techniques are provided for determining one or more environmental layouts. For example, one or more planes can be detected in an input image of an environment, the one or more planes corresponding to one or more objects in the input image. One or more three-dimensional parameters of the one or more planes can be determined. One or more polygons can be determined using the one or more planes and their three-dimensional parameters. A three-dimensional layout of the environment can be determined based on the one or more polygons.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARTIN MUSHAMBO, whose telephone number is (571) 270-3390. The examiner can normally be reached Monday-Friday, 8:00 AM-5:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alicia Harrington, can be reached at (571) 272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/MARTIN MUSHAMBO/
Primary Examiner, Art Unit 2615
02/07/2026
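
Allowed claim 5 outlines a geometric gap-closing loop: pair up the model's unconnected endpoints, generate candidate ("variant") line segments for each pair, score each candidate, and complete the model with the best one. Below is a toy Python sketch of that flow; the candidate generation (direct plus two L-shaped routes) and the scoring (shorter and axis-aligned preferred) are invented for illustration and are not the claimed "response value score":

    import itertools, math

    # Toy sketch of the claim-5 flow: complete a partial floor-plan model by
    # joining unconnected endpoints with the best-scoring candidate segments.

    def variant_segments(a, b):
        """Candidate routes joining endpoint a to endpoint b."""
        (ax, ay), (bx, by) = a, b
        yield [(a, b)]                          # direct segment
        yield [(a, (bx, ay)), ((bx, ay), b)]    # horizontal, then vertical
        yield [(a, (ax, by)), ((ax, by), b)]    # vertical, then horizontal

    def score(segments):
        """Hypothetical response value: shorter and axis-aligned is better."""
        total = 0.0
        for (x1, y1), (x2, y2) in segments:
            length = math.hypot(x2 - x1, y2 - y1)
            aligned = x1 == x2 or y1 == y2
            total += length if aligned else 2.0 * length  # penalize oblique walls
        return -total

    def complete_model(endpoints):
        """Pick the best variant over all pairs of unconnected endpoints."""
        candidates = (v for a, b in itertools.combinations(endpoints, 2)
                      for v in variant_segments(a, b))
        return max(candidates, key=score)

    # Two dangling wall endpoints, nearly collinear:
    print(complete_model([(0.0, 0.0), (3.0, 0.1)]))
    # -> the axis-aligned L-shaped completion beats the oblique direct segment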

Prosecution Timeline

May 28, 2024: Application Filed
Feb 07, 2026: Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602892: WALLPAPER DISPLAY METHOD AND APPARATUS, AND ELECTRONIC DEVICE
2y 5m to grant; granted Apr 14, 2026
Patent 12598282: IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
2y 5m to grant; granted Apr 07, 2026
Patent 12586331: SYSTEM AND METHOD FOR CHANGING OVERALL STYLE OF PUBLIC AREA BASED ON VIRTUAL SCENE
2y 5m to grant; granted Mar 24, 2026
Patent 12579754: INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD
2y 5m to grant; granted Mar 17, 2026
Patent 12573146: PRODUCT PLACEMENT SYSTEMS AND METHODS FOR 3D PRODUCTIONS
2y 5m to grant; granted Mar 10, 2026
Study what changed in these cases to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 85%
With Interview: 99% (+14.1%)
Median Time to Grant: 2y 5m
PTA Risk: Low
Based on 816 resolved cases by this examiner. Grant probability derived from career allow rate.
