Prosecution Insights
Last updated: April 19, 2026
Application No. 17/887,436

REALITY MODEL OBJECT RECOGNITION USING CROSS-SECTIONS

Final Rejection (§103)
Filed: Aug 13, 2022
Examiner: TRAN, DUY ANH
Art Unit: 2674
Tech Center: 2600 (Communications)
Assignee: Skyyfish LLC
OA Round: 2 (Final)

Grant Probability: 81% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 3y 1m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 81% (104 granted / 128 resolved), +19.3% vs Tech Center average. Above average.
Interview Lift: +17.5% among resolved cases with an interview. Strong (headline figure: +18%).
Typical Timeline: 3y 1m average prosecution; 29 applications currently pending.
Career History: 157 total applications across all art units.
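
These headline figures reduce to simple ratios over the examiner's resolved cases. Below is a minimal sketch of that arithmetic; the helper names and the display-rounding conventions are assumptions for illustration, not the product's actual code.

    # Hypothetical sketch of the dashboard arithmetic (names and rounding
    # conventions are assumptions, not the product's code).

    def allow_rate(granted: int, resolved: int) -> float:
        """Career allow rate as a percentage of resolved cases."""
        return 100.0 * granted / resolved

    def with_interview(base: float, lift: float) -> float:
        """Apply the observed interview lift to the base grant probability."""
        return min(base + lift, 100.0)

    base = allow_rate(104, 128)            # 81.25 -> displayed as 81%
    tc_avg = base - 19.3                   # implied Tech Center average, ~62%
    boosted = with_interview(base, 17.5)   # 98.75 -> displayed as 99%

    print(f"{base:.0f}% career, ~{tc_avg:.0f}% TC avg, {boosted:.0f}% with interview")

Run as-is, this prints "81% career, ~62% TC avg, 99% with interview", matching the card values above.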

Statute-Specific Performance

§101: 12.9% (-27.1% vs TC avg)
§103: 42.0% (+2.0% vs TC avg)
§102: 26.7% (-13.3% vs TC avg)
§112: 11.3% (-28.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 128 resolved cases.
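
The exact definition of each per-statute figure is not spelled out here, but the "vs TC avg" deltas read as simple differences in percentage points. Backing the baseline out of each row is a quick consistency check; the code below is hypothetical and assumes exactly that definition. Notably, all four rows imply a Tech Center baseline near 40%.

    # Hypothetical check on the statute table (assumes "vs TC avg" means
    # examiner rate minus the Tech Center estimate, in percentage points).

    examiner = {"§101": 12.9, "§103": 42.0, "§102": 26.7, "§112": 11.3}
    delta    = {"§101": -27.1, "§103": 2.0, "§102": -13.3, "§112": -28.7}

    for statute, rate in examiner.items():
        tc_estimate = rate - delta[statute]  # back out the TC baseline
        print(f"{statute}: {rate:.1f}% vs TC ~{tc_estimate:.1f}% ({delta[statute]:+.1f} pts)")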

Office Action

§103
DETAILED ACTION

This Action is in response to Applicant's response filed on 05/10/2025. Claims 1-14 are still pending in the present application. This Action is made FINAL.

Response to Amendment

Claim Objection: The amended claims filed on 05/10/2025 overcome the Claim Objection in the previous office action.

Response to Arguments

Applicant's arguments filed on 05/10/2025 have been fully considered, but they are not persuasive. Applicant argues that neither Godwin nor Fathi, either alone or in combination, discloses the invention as currently claimed and amended: "For instance, the prior art of record does not disclose that the drone generates, in real time, 'a plurality of finer 2D slices at or proximate to the one or more dense centers' and 'adding said plurality of finer 2D slices to said library.' In the claimed invention, the drone detects one or more dense centers around the 3D object, then generates finer 2D slices at or around the dense centers. The prior art does not teach adjusting the resolution of the 2D slices when a dense center is detected. For at least the reasons above, Applicant submits that independent claim 1 is patentable over Godwin in view of Fathi." (Remarks, pages 6-7)

Examiner respectfully disagrees. With respect to Applicant's argument that the prior art does not disclose generating "a plurality of finer 2D slices at or proximate to the one or more dense centers" and "adding said plurality of finer 2D slices to said library" (Remarks, pages 6-7): Fathi discloses that the segmented/abstracted 2D image information including the one or more objects 240, the 3D information including all or part of the one or more objects of interest 230, and the combined 2D/3D information including the one or more objects of interest 220 are processed in a plurality of cross-referencing steps in 245 until a consensus about the one or more objects is reached, whereupon the cross-reference generates a set of validated 2D image information and 3D information about the one or more objects in 250; this is interpreted as "generating a plurality of finer 2D slices at or proximate to the one or more dense centers" (Paragraph 151). Fathi also discloses that the output of the cross-referencing procedure between the 2D and 3D information can be characterized as validated 2D and 3D information, where such validated 2D and 3D information is suitable to provide geometric information, either or both geodesic or Euclidean, about the one or more objects present in the scene; this is likewise interpreted as "generating a plurality of finer 2D slices at or proximate to the one or more dense centers" (Paragraph 98).

For the purpose of examination, "a set of validated 2D image information" is considered as "a plurality of finer 2D slices"; in fact, the applicant does not specifically define what "a plurality of finer 2D slices" is. Also, the "adjusting the resolution of the 2D slices" that applicant argues is not recited in the claim limitations. Thus, for compact prosecution purposes, it is suggested that the applicant at least amend the claims to define "finer 2D slices" in detail.
Furthermore, Fathi discloses that the validated 2D and 3D information about the one or more objects can now be processed in an object recognition engine, as illustrated by 255-280 in FIG. 2B, to determine their location in both 3D and 2D scene representations (Paragraph 152), and that an object library for the location or environment becomes populated with new relevant object information provided by the method and the machine learning algorithms are further trained as to the object content of the location or the type of environment … This can enable the capture of a variety of validated object information for use with the machine learning algorithms so as to generate a higher quality object library and, as a result, a higher quality object labeling output; this is interpreted as "adding said plurality of finer 2D slices to said library" (Paragraph 113). Additionally, Fathi discloses that geometric information, topological information, etc. can be verified, and such verified information incorporated into the object libraries for subsequent use. For example, if a measurement for an object is generated from the validated 2D image information and the validated 3D information, and that object is labeled as a "window," the returned measurement can be verified as being likely to be correct; this is read as "adding said plurality of finer 2D slices to said library" (Paragraph 123).

The Examiner states that, in light of MPEP 2111, the Examiner has interpreted the claims properly. Specifically, during patent prosecution, the pending claims must be "given their broadest reasonable interpretation consistent with the specification." The Examiner has interpreted the claim language in reference to the specification. Because applicant has the opportunity to amend the claims during prosecution, giving a claim its broadest reasonable interpretation reduces the possibility that the claim, once issued, will be interpreted more broadly than is justified. Although the cited reference is different from the invention disclosed, the language of Applicant's claims is sufficiently broad to reasonably read on the cited reference. A broad reading does not constitute "teaching away."

Further, it has been held that nonpreferred embodiments failing to assert discovery beyond that known in the art do not constitute a "teaching away" unless such disclosure criticizes, discredits, or otherwise discourages the solution claimed. In re Susi, 440 F.2d 442, 169 USPQ 423 (CCPA 1971); In re Gurley, 27 F.3d 551, 554, 31 USPQ2d 1130, 1132 (Fed. Cir. 1994); In re Fulton, 391 F.3d 1195, 1201, 73 USPQ2d 1141, 1146 (Fed. Cir. 2004); see MPEP §2124. Disclosed examples and preferred embodiments do not constitute a teaching away from a broader disclosure or nonpreferred embodiments. In re Susi, 440 F.2d 442, 169 USPQ 423 (CCPA 1971). "A known or obvious composition does not become patentable simply because it has been described as somewhat inferior to some other product for the same use." In re Gurley, 27 F.3d 551, 554, 31 USPQ2d 1130, 1132 (Fed. Cir. 1994) (the invention was directed to an epoxy-impregnated fiber-reinforced printed circuit material; the applied prior art reference taught a printed circuit material similar to that of the claims but impregnated with polyester-imide resin instead of epoxy; the reference, however, disclosed that epoxy was known for this use, and that epoxy-impregnated circuit boards have "relatively acceptable dimensional stability" and "some degree of flexibility" but are inferior to circuit boards impregnated with polyester-imide resins; the court upheld the rejection, concluding that applicant's argument that the reference teaches away from using epoxy was insufficient to overcome the rejection since "Gurley asserted no discovery beyond what was known in the art." 27 F.3d at 554, 31 USPQ2d at 1132). Furthermore, "[t]he prior art's mere disclosure of more than one alternative does not constitute a teaching away from any of these alternatives because such disclosure does not criticize, discredit, or otherwise discourage the solution claimed…." In re Fulton, 391 F.3d 1195, 1201, 73 USPQ2d 1141, 1146 (Fed. Cir. 2004). (MPEP §2124)

In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). The Examiner states that Godwin et al. (U.S. 20180075649 A1; Godwin), in view of Fathi et al. (U.S. 20170220887 A1; Fathi), does teach "a plurality of finer 2D slices at or proximate to the one or more dense centers" and "adding said plurality of finer 2D slices to said library." Fathi is only used to disclose "generating a plurality of finer 2D slices at or proximate to the one or more dense centers; adding said plurality of finer 2D slices to said library; applying pattern matching to identify the critical components from said library; identifying one or more pieces of equipment in said 3D model by correlating said identified one or more critical components with real world standard objects"; the other limitations have been disclosed by Godwin.

The Examiner made a proper determination of obviousness under 35 U.S.C. §103 and provided an appropriate supporting rationale in view of the decision by the Supreme Court in KSR International Co. v. Teleflex Inc. (KSR), 550 U.S. 398, 82 USPQ2d 1385 (2007). The Examiner's rationale is based on the Office's current understanding of the law and is believed to be fully consistent with the binding precedent of the Supreme Court. Furthermore, the Examiner supported the rejection under 35 U.S.C. §103 by clearly articulating the reason(s) why the claimed invention would have been obvious and by citing the specific areas in the prior art references. Further, by clearly stating the modification of the inventions, the Examiner supported the rejection under 35 U.S.C. §103 by making the analysis explicit. Last, the Examiner did not make conclusory statements. The Court, quoting In re Kahn, 441 F.3d 977, 988, 78 USPQ2d 1329, 1336 (Fed. Cir. 2006), stated that "'[R]ejections on obviousness cannot be sustained by mere conclusory statements; instead, there must be some articulated reasoning with some rational underpinning to support the legal conclusion of obviousness.'" KSR, 550 U.S. at ___, 82 USPQ2d at 1396. Therefore, the Examiner has established a proper 35 U.S.C. §103 rejection, as detailed below.

Claim Status

Claims 1-14 are rejected under 35 U.S.C. 103 as being unpatentable over Godwin et al. (U.S. 20180075649 A1; Godwin) in view of Fathi et al. (U.S. 20170220887 A1; Fathi).
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-14 are rejected under 35 U.S.C. 103 as being unpatentable over Godwin et al. (U.S. 20180075649 A1; Godwin) in view of Fathi et al. (U.S. 20170220887 A1; Fathi).

Regarding claim 1, Godwin discloses an object recognition method using reality-based modelling (Paragraph 9: "Systems and methods using augmented reality to visualize a telecommunications site for planning, engineering, and installing equipment includes creating a three-dimensional (3D) model of a virtual object representing the equipment;") comprising: locating a 3-Dimensional (3D) model generated by photogrammetry, lidar or other scanning techniques that generate 3D models (Paragraph 235: "The 3D digital objects can be created via photogrammetry or created as a 3D model."), including point clouds, meshes and reality models (Fig. 26; Paragraph 143: "The 3D model creation process 1700 performs initial processing on the input data (step 1702). An output of the initial processing includes a sparse point cloud, a quality report, and an output file can be camera outputs. The sparse point cloud is processed into a point cloud and mesh (step 1704) providing a densified point cloud and 3D outputs."); defining one or more reference points or planes in said 3D model (Paragraph 151: "In the method 1800, the point of interest can be the cell tower 12. The point of interest can be selected at an appropriate altitude and once selected, the UAV 50 circles in flight about the point of interest."); generating outputs comprising a plurality of 2-Dimensional (2D) slices of the 3D model at various elevations (Fig. 8; Paragraph 101: "the UAV 50 is configured to take various photos during flight, at different angles, orientations, heights, etc. … It is important for accurate correlation between photos to enable construction of a 3D model from a plurality of 2D photos. The photos can all include multiple location identifiers (i.e., where the photo was taken from, height and exact location)."); adding said plurality of 2D slices at various angles to a library (Paragraphs 108-109: "The photos are stored locally in the UAV 50 and/or transmitted wirelessly to a mobile device, controller, server, etc. … post-processing occurs to combine the photos or 'stitch' them together to construct the 3D model."); identifying one or more dense centers with critical components on the 3D model (Paragraph 174: "a method 2100 for verifying equipment and structures at the cell site 10 using 3D modeling. As described herein, an intermediate step in the creation of a 3D model includes a point cloud, e.g., a sparse or dense point cloud. A point cloud is a set of data points in some coordinate system, e.g., in a three-dimensional coordinate system, these points are usually defined by X, Y, and Z coordinates, and can be used to represent the external surface of an object."); and generating a report of said 3D model comprising said one or more pieces of equipment (Paragraph 145: "The method 1750 can further include remotely performing a site survey of the cell site utilizing a Graphical User Interface (GUI) of the 3D model to collect and obtain information about the cell site, the cell tower, one or more buildings, and interiors thereof").

However, Godwin does not disclose generating a plurality of finer 2D slices at or proximate to the one or more dense centers; adding said plurality of finer 2D slices to said library; applying pattern matching to identify the critical components from said library; or identifying one or more pieces of equipment in said 3D model by correlating said identified one or more critical components with real world standard objects.

Fathi discloses locating a 3-Dimensional (3D) model generated by photogrammetry, lidar or other scanning techniques that generate 3D models including point clouds, meshes and reality models (Paragraph 34: "systems and methods to generate information about one or more objects of interest in a scene relates to associating 3D information for the one or more objects with 2D image information for the one or more objects, … 3D information can include information from sources such as point clouds, wireframes, CAD drawings, GeoJSON data, 3D vector models, polygon meshes, 3D models and surfaces, etc."); identifying one or more dense centers with critical components on the 3D model (Fig. 1: 3D information for Scene/Object(s); Paragraph 146: "3D information is provided at 125, wherein the 3D information comprises information of a scene that includes all or part of the selected object(s) of interest. As discussed above, the 3D information can comprise a plurality of point clouds, wireframes, or other sources of 3D information"); generating a plurality of finer 2D slices at or proximate to the one or more dense centers (Paragraph 151: "the segmented/abstracted 2D image information including the one or more objects 240, the 3D information including all or part of the one or more objects of interest 230, and the combined 2D/3D information including the one or more objects of interest 220 are processed in a plurality of cross-referencing steps in 245 until a consensus about the one or more objects is reached, whereupon the cross-reference generates a set of validated 2D image information and 3D information about the one or more objects in 250"; Paragraph 98: "the output of the cross referencing procedure between the 2D and 3D information can also be characterized as validated 2D and 3D information, where such validated 2D and 3D information is suitable to provide geometric information, either or both geodesic or Euclidean, about the one or more objects present in the scene"); adding said plurality of finer 2D slices to said library (Paragraph 152: "the validated 2D and 3D information about the one or more objects can now be processed in an object recognition engine, as illustrated by 255-280 in FIG. 2B, to determine their location in both 3D and 2D scene representations"; Paragraph 113: "an object library for the location or environment becomes populated with new relevant object information provided by the method and the machine learning algorithms are further trained as to the object content of the location or the type of environment … This can enable the capture of a variety of validated object information for use with the machine learning algorithms so as to generate a higher quality object library and, as a result, a higher quality object labeling output"; Paragraph 123: "geometric information, topological information, etc. can be verified, and such verified information incorporated into the object libraries for subsequent use. For example, if a measurement for an object is generated from the validated 2D image information and the validated 3D information, and that object is labeled as a 'window,' the returned measurement can be verified as being likely to be correct"); applying pattern matching to identify the critical components from said library (Paragraph 26: "Machine learning algorithms used in object recognition generally rely on matching, learning, or pattern recognition techniques applied on the detected objects using either or both of appearance-based or feature-based techniques."; Fig. 1: the 2D image information 120; Paragraph 145: "The object(s) of interest selection can be selected by the machine or computing device, … the 2D image information 120 includes information about the selected object(s)."); identifying one or more pieces of equipment in said 3D model by correlating said identified one or more critical components with real world standard objects (Paragraph 23: "'object of interest' can encompass a wide variety of objects that may be present in a scene such as: components of a building … landscape components"; Fig. 1 step 130 and Paragraph 146: "In 130, the 3D information 125 and 2D image information 120 is processed to generate projective geometry information that combines the 3D information and 2D image information in 135. This projective geometry information includes information about all or part of the selected object(s) and establishes relationships between either or both of the 3D information and 2D image information incorporates all or part of the selected object(s)."); and generating a report of said 3D model comprising said one or more pieces of equipment (Fig. 1: step 135; Paragraph 146: "This projective geometry information includes information about all or part of the selected object(s) and establishes relationships between either or both of the 3D information and 2D image information incorporates all or part of the selected object(s).").

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Godwin by including the cross-validation methodologies between the projective geometry information, clustered 3D information, and/or segmented 2D image information taught by Fathi, to improve the extraction of information about objects from scene information; one of ordinary skill in the art would have been motivated to combine the references since this would improve the nature and quality of the information that can be obtained about the one or more objects of interest in a scene, as well as the quality of the virtual reality environment (Fathi: Paragraphs 34 and 134). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

Regarding claim 2, Godwin, as modified by Fathi, discloses all the limitations of the claimed invention. Godwin further discloses that said one or more reference points comprise a base metal or some well-known reference point or plane, center of top and the bottom of said 3D object (Paragraph 151: "In the method 1800, the point of interest can be the cell tower 12. The point of interest can be selected at an appropriate altitude and once selected, the UAV 50 circles in flight about the point of interest. Further, the radius, altitude, direction, and speed can be set for the point of interest flight as well as a number of repetitions of the circle").

Regarding claim 3, Godwin, as modified by Fathi, discloses all the limitations of the claimed invention. Godwin further discloses that said library comprises 2D cross section snapshots of computer aided design (CAD) outputs of various real-world objects, the reality outputs, and/or original photographs of said 3D model (Paragraph 9: "The multiple files can include an object file, a material library file, and a texture file. The 3D model can be created through steps of creating the virtual object utilizing Computer Aided Design (CAD) software."; Paragraph 113: "One useful aspect of the 3D model GUI is an ability to click anywhere on the 3D model and bring up corresponding 2D photos.").

Regarding claim 4, Godwin, as modified by Fathi, discloses all the limitations of the claimed invention. Godwin further discloses that said 3D model is a cellular tower or other piece of infrastructure (Paragraph 60: "UAV-based systems and methods for 3D modeling and representing of cell sites and cell towers.").

Regarding claim 5, Godwin, as modified by Fathi, discloses all the limitations of the claimed invention. Godwin further discloses that said report comprises real-world identity, dimensional information, material of said one or more mounted equipment, and their location on said 3D model
(Paragraph 185: "The user performing the cell site audit or survey can include determining a down tilt angle of one or more antennas of the cell site components … determining dimensions of the cell site components; determining equipment type and serial number of the cell site components; and determining connections between the cell site components.").

Regarding claim 6, Godwin, as modified by Fathi, discloses all the limitations of the claimed invention. Godwin further discloses recognizing anomaly in at least one of said one or more mounted equipment and ordering remedial action (Paragraph 185: "The user performing the cell site audit or survey can include determining a down tilt angle of one or more antennas of the cell site components … determining dimensions of the cell site components; determining equipment type and serial number of the cell site components; and determining connections between the cell site components."; Paragraph 186: "The 3D model can be used for a cell site audit, survey, site inspection, etc.").

Regarding claim 7, Godwin, as modified by Fathi, discloses all the limitations of the claimed invention. Godwin further discloses that said photogrammetry, lidar scan, or other capture technique (or a combination of techniques) is performed by one or more drones (Paragraph 58: "systems and methods for obtaining three-dimensional (3D) modeling data using Unmanned Aerial Vehicles (UAVs) (also referred to as 'drones') or the like at cell sites, cell towers, etc., to obtain data, i.e., pictures and/or video, used to create a 3D model of a cell site subsequently.").

Regarding claim 8, Godwin discloses a computer program product for reality-based object recognition (Paragraph 9: "systems and methods using augmented reality to visualize a telecommunications site for planning, engineering, and installing equipment includes creating a three-dimensional (3D) model of a virtual object representing the equipment;"), the computer program product comprising non-transitory computer-readable media encoded with instructions for execution by a processor to perform a method (Paragraph 75: "a non-transitory computer-readable storage medium having computer readable code stored thereon for programming a computer, server, appliance, device, etc. each of which may include a processor to perform methods") comprising: locating a 3-Dimensional (3D) model generated by photogrammetry, lidar or other scanning techniques that generate 3D models (Paragraph 235: "The 3D digital objects can be created via photogrammetry or created as a 3D model."), including point clouds, meshes and reality models (Fig. 26; Paragraph 143: "The 3D model creation process 1700 performs initial processing on the input data (step 1702). An output of the initial processing includes a sparse point cloud, a quality report, and an output file can be camera outputs. The sparse point cloud is processed into a point cloud and mesh (step 1704) providing a densified point cloud and 3D outputs."); defining one or more reference points or planes in said 3D model (Paragraph 151: "In the method 1800, the point of interest can be the cell tower 12. The point of interest can be selected at an appropriate altitude and once selected, the UAV 50 circles in flight about the point of interest."); generating outputs comprising a plurality of 2-Dimensional (2D) slices of the 3D model at various elevations (Fig. 8; Paragraph 101: "the UAV 50 is configured to take various photos during flight, at different angles, orientations, heights, etc. … It is important for accurate correlation between photos to enable construction of a 3D model from a plurality of 2D photos. The photos can all include multiple location identifiers (i.e., where the photo was taken from, height and exact location)."); adding said plurality of 2D slices at various angles to a library (Paragraph 108: "The photos are stored locally in the UAV 50 and/or transmitted wirelessly to a mobile device, controller, server, etc. … post-processing occurs to combine the photos or 'stitch' them together to construct the 3D model."); identifying one or more dense centers with critical components on the 3D model (Paragraph 174: "a method 2100 for verifying equipment and structures at the cell site 10 using 3D modeling. As described herein, an intermediate step in the creation of a 3D model includes a point cloud, e.g., a sparse or dense point cloud. A point cloud is a set of data points in some coordinate system, e.g., in a three-dimensional coordinate system, these points are usually defined by X, Y, and Z coordinates, and can be used to represent the external surface of an object."); and generating a report of said 3D model comprising said one or more pieces of equipment and spatial information using a computer aided design (CAD) model (Paragraph 59: "From the 3D model, any aspect of the site survey can be performed remotely including determinations of equipment location, accurate spatial rendering, planning through drag and drop placement of equipment, access to actual photos through a Graphical User Interface, indoor texture mapping, and equipment configuration visualization mapping the equipment in a 3D view of a rack."; Paragraph 9: "The 3D model can be created through steps of creating the virtual object utilizing Computer Aided Design (CAD) software.").

However, Godwin does not disclose generating a plurality of finer 2D slices at or proximate to the one or more dense centers; adding said plurality of finer 2D slices to said library; applying pattern matching to identify the critical components from said library; or identifying one or more pieces of equipment in said 3D model by correlating said identified one or more critical components with real world standard objects.

Fathi discloses locating a 3-Dimensional (3D) model generated by photogrammetry, lidar or other scanning techniques that generate 3D models including point clouds, meshes and reality models (Paragraph 34: "systems and methods to generate information about one or more objects of interest in a scene relates to associating 3D information for the one or more objects with 2D image information for the one or more objects, … 3D information can include information from sources such as point clouds, wireframes, CAD drawings, GeoJSON data, 3D vector models, polygon meshes, 3D models and surfaces, etc."); identifying one or more dense centers with critical components on the 3D model (Fig. 1: 3D information for Scene/Object(s); Paragraph 146: "3D information is provided at 125, wherein the 3D information comprises information of a scene that includes all or part of the selected object(s) of interest. As discussed above, the 3D information can comprise a plurality of point clouds, wireframes, or other sources of 3D information"); generating a plurality of finer 2D slices at or proximate to the one or more dense centers (Paragraph 151: "the segmented/abstracted 2D image information including the one or more objects 240, the 3D information including all or part of the one or more objects of interest 230, and the combined 2D/3D information including the one or more objects of interest 220 are processed in a plurality of cross-referencing steps in 245 until a consensus about the one or more objects is reached, whereupon the cross-reference generates a set of validated 2D image information and 3D information about the one or more objects in 250"; Paragraph 98: "the output of the cross referencing procedure between the 2D and 3D information can also be characterized as validated 2D and 3D information, where such validated 2D and 3D information is suitable to provide geometric information, either or both geodesic or Euclidean, about the one or more objects present in the scene"); adding said plurality of finer 2D slices to said library (Paragraph 152: "the validated 2D and 3D information about the one or more objects can now be processed in an object recognition engine, as illustrated by 255-280 in FIG. 2B, to determine their location in both 3D and 2D scene representations"; Paragraph 113: "an object library for the location or environment becomes populated with new relevant object information provided by the method and the machine learning algorithms are further trained as to the object content of the location or the type of environment … This can enable the capture of a variety of validated object information for use with the machine learning algorithms so as to generate a higher quality object library and, as a result, a higher quality object labeling output"; Paragraph 123: "geometric information, topological information, etc. can be verified, and such verified information incorporated into the object libraries for subsequent use. For example, if a measurement for an object is generated from the validated 2D image information and the validated 3D information, and that object is labeled as a 'window,' the returned measurement can be verified as being likely to be correct"); applying pattern matching to identify the critical components from said library (Paragraph 26: "Machine learning algorithms used in object recognition generally rely on matching, learning, or pattern recognition techniques applied on the detected objects using either or both of appearance-based or feature-based techniques."; Fig. 1: the 2D image information 120; Paragraph 145: "The object(s) of interest selection can be selected by the machine or computing device, … the 2D image information 120 includes information about the selected object(s)."); identifying one or more pieces of equipment in said 3D model by correlating said identified one or more critical components with real world standard objects (Paragraph 23: "'object of interest' can encompass a wide variety of objects that may be present in a scene such as: components of a building … landscape components"; Fig. 1 step 130 and Paragraph 146: "In 130, the 3D information 125 and 2D image information 120 is processed to generate projective geometry information that combines the 3D information and 2D image information in 135. This projective geometry information includes information about all or part of the selected object(s) and establishes relationships between either or both of the 3D information and 2D image information incorporates all or part of the selected object(s)."); and generating a report of said 3D model comprising said one or more pieces of equipment (Fig. 1: step 135; Paragraph 146: "This projective geometry information includes information about all or part of the selected object(s) and establishes relationships between either or both of the 3D information and 2D image information incorporates all or part of the selected object(s).").

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Godwin by including the cross-validation methodologies between the projective geometry information, clustered 3D information, and/or segmented 2D image information taught by Fathi, to improve the extraction of information about objects from scene information; one of ordinary skill in the art would have been motivated to combine the references since this would improve the nature and quality of the information that can be obtained about the one or more objects of interest in a scene, as well as the quality of the virtual reality environment (Fathi: Paragraphs 34 and 134). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

Regarding claim 9, Godwin, as modified by Fathi, discloses all the limitations of the claimed invention. Godwin further discloses that said one or more reference points comprise a base metal, center of top and the bottom of said 3D model (Paragraph 151: "In the method 1800, the point of interest can be the cell tower 12. The point of interest can be selected at an appropriate altitude and once selected, the UAV 50 circles in flight about the point of interest. Further, the radius, altitude, direction, and speed can be set for the point of interest flight as well as a number of repetitions of the circle").

Regarding claim 10, Godwin, as modified by Fathi, discloses all the limitations of the claimed invention. Godwin further discloses that said library comprises 2D snapshots of computer aided design (CAD) outputs of various real-world objects, the reality outputs, and/or original photographs of said 3D model (Paragraph 9: "The multiple files can include an object file, a material library file, and a texture file. The 3D model can be created through steps of creating the virtual object utilizing Computer Aided Design (CAD) software."; Paragraph 113: "One useful aspect of the 3D model GUI is an ability to click anywhere on the 3D model and bring up corresponding 2D photos.").

Regarding claim 11, Godwin, as modified by Fathi, discloses all the limitations of the claimed invention. Godwin further discloses that said 3D model is a cellular tower (Paragraph 60: "UAV-based systems and methods for 3D modeling and representing of cell sites and cell towers.").

Regarding claim 12, Godwin, as modified by Fathi, discloses all the limitations of the claimed invention. Godwin further discloses that said report comprises real-world identity of said one or more mounted equipment and their location (x, y, z) and angle on said 3D object and all dimensional information
(Paragraph 185: "The user performing the cell site audit or survey can include determining a down tilt angle of one or more antennas of the cell site components … determining dimensions of the cell site components; determining equipment type and serial number of the cell site components;").

Regarding claim 13, Godwin, as modified by Fathi, discloses all the limitations of the claimed invention. Godwin further discloses that said method further comprises recognizing anomaly in at least one of said one or more mounted equipment and ordering remedial action (Paragraph 185: "The user performing the cell site audit or survey can include determining a down tilt angle of one or more antennas of the cell site components … determining dimensions of the cell site components; determining equipment type and serial number of the cell site components; and determining connections between the cell site components."; Paragraph 186: "The 3D model can be used for a cell site audit, survey, site inspection, etc.").

Regarding claim 14, Godwin, as modified by Fathi, discloses all the limitations of the claimed invention. Godwin further discloses that said model is generated from photogrammetry, lidar, or combinations thereof performed by one or more drones (Paragraph 58: "systems and methods for obtaining three-dimensional (3D) modeling data using Unmanned Aerial Vehicles (UAVs) (also referred to as 'drones') or the like at cell sites, cell towers, etc., to obtain data, i.e., pictures and/or video, used to create a 3D model of a cell site subsequently.").

Relevant Prior Art Directed to the State of the Art

Qiu et al. (U.S. 20140192050 A1), "Three-Dimensional Point Processing and Model Generation," teaches a method for three-dimensional point processing and model generation that includes providing data comprising a three-dimensional point cloud representing a scene, applying primitive extraction to the data to associate primitive shapes with points within the three-dimensional point cloud, projecting each segment of the segmented point cloud onto respective planes to produce respective two-dimensional point clouds, detecting circle patterns in each two-dimensional point cloud, processing the circle patterns to determine cylinder parameters for each of a plurality of candidate cylinders, and assembling the candidate cylinders into a three-dimensional surface model of the scene.

Taylor et al. (U.S. 20150347872 A1), "System And Method For Detecting Features In Aerial Images Using Disparity Mapping And Segmentation Techniques," teaches a system that includes an object detection pre-processing engine for object detection and classification using one or more aerial images. The object detection pre-processing engine includes disparity map generation, segmentation, and classification to identify various objects and types of objects in an aerial image. The information derived from these pre-processed images can then be used by the mass production engine for the manual and/or automated production of drawings, sketches, and models.

Wekel et al. (U.S. 20210063578 A1), "Object Detection and Classification Using Lidar Range Images for Autonomous Machine Applications," teaches that a deep neural network (DNN) may be used to detect and classify animate objects and/or parts of an environment. The DNN may be trained using camera-to-LiDAR cross injection to generate reliable ground truth data for LiDAR range images. Once trained, the DNN may output instance segmentation masks, class segmentation masks, and/or bounding shape proposals corresponding to two-dimensional (2D) LiDAR range images, and the outputs may be fused together to project the outputs into three-dimensional (3D) LiDAR point clouds.

Warner et al. (U.S. 20200371494 A1), "System and Methods for 3D Model Evaluation," teaches an apparatus, method, or program with capabilities to search for an object similar to a predetermined shape from inputted shape models (CAD models or mesh models) and to change the search scope in terms of size and/or proportion. It also teaches a method that, in each of three orthogonal orientations, obtains dimensional layers of triangular mesh data of the 3D object from a slicer program. Perimeter length values for each layer of each of the three orthogonal orientations are obtained and compared to stored perimeter length values for a reference object to determine a degree of matching. Matching with the reference object is made based on the assigned total perimeter values.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Duy A Tran, whose telephone number is (571) 272-4887. The examiner can normally be reached Monday-Friday, 8:00 am - 5:00 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, ONEAL R MISTRY, can be reached at (313) 446-4912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/DUY TRAN/
Examiner, Art Unit 2674

/ONEAL R MISTRY/
Supervisory Patent Examiner, Art Unit 2674
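
For orientation, the pipeline recited in claim 1 (slice the 3D model into coarse 2D cross-sections at various elevations, identify dense centers, generate finer slices around those centers, add everything to a library, then pattern-match) can be sketched in a few lines. This is an illustrative reconstruction from the claim language only; the function names, thresholds, and point-cloud representation are assumptions, not code from the application or the cited references.

    import numpy as np

    # Illustrative reconstruction of the claim 1 pipeline from its language
    # alone; names, thresholds, and slice geometry are invented, and this is
    # not code from the application or the cited references.

    def slice_cloud(points: np.ndarray, z: float, thickness: float) -> np.ndarray:
        """2D cross-section: the (x, y) of points within a band around elevation z."""
        mask = np.abs(points[:, 2] - z) < thickness / 2
        return points[mask][:, :2]

    def dense_centers(points: np.ndarray, step: float, min_pts: int) -> list:
        """Elevations whose coarse slice is unusually dense (candidate equipment)."""
        zs = np.arange(points[:, 2].min(), points[:, 2].max(), step)
        return [z for z in zs if len(slice_cloud(points, z, step)) > min_pts]

    def build_library(points: np.ndarray, coarse=1.0, fine=0.1, min_pts=500) -> list:
        """Coarse slices everywhere, plus finer slices around each dense center."""
        zs = np.arange(points[:, 2].min(), points[:, 2].max(), coarse)
        library = [slice_cloud(points, z, coarse) for z in zs]
        for c in dense_centers(points, coarse, min_pts):
            for z in np.arange(c - coarse, c + coarse, fine):
                library.append(slice_cloud(points, z, fine))  # finer 2D slices
        return library  # next step per the claim: pattern matching against this library

The dispute above turns on whether Fathi's "validated 2D image information" reads on the finer-slice step of this pipeline; the examiner's suggested amendment is to define "finer 2D slices" expressly.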

Prosecution Timeline

Aug 13, 2022: Application Filed
Oct 10, 2024: Non-Final Rejection (§103)
May 01, 2025: Response after Non-Final Action
May 10, 2025: Response Filed
Mar 04, 2026: Final Rejection (§103) (current)
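
The Final Rejection of Mar 04, 2026 starts the reply clock described in the office action's conclusion: a three-month shortened statutory period, extendable to a six-month statutory maximum under 37 CFR 1.136(a). A sketch of that date arithmetic follows; it is illustrative only, ignores the advisory-action wrinkle and weekend/holiday rollover under 37 CFR 1.7, and is not legal advice.

    from datetime import date

    # Worked example of the reply-period rules quoted in the office action
    # (general 37 CFR 1.136(a) arithmetic only; not legal advice, and it
    # ignores the advisory-action rule and weekend/holiday rollover).

    def add_months(d: date, months: int) -> date:
        m = d.month - 1 + months
        return date(d.year + m // 12, m % 12 + 1, min(d.day, 28))  # clamp day

    mailed = date(2026, 3, 4)              # Final Rejection mailing date
    shortened = add_months(mailed, 3)      # reply due without extension fees
    statutory_max = add_months(mailed, 6)  # absolute bar on the reply period

    print(f"Shortened statutory period ends {shortened}; statutory maximum {statutory_max}")

For this action that yields 2026-06-04 without extension fees and 2026-09-04 at the statutory maximum.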

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573024: IMAGE AUGMENTATION FOR MACHINE LEARNING BASED DEFECT EXAMINATION (granted Mar 10, 2026; 2y 5m to grant)
Patent 12561934: AUTOMATIC ORIENTATION CORRECTION FOR CAPTURED IMAGES (granted Feb 24, 2026; 2y 5m to grant)
Patent 12548284: METHOD FOR ANALYZING ONE OR MORE ELEMENT(S) OF ONE OR MORE PHOTOGRAPHED OBJECT(S) IN ORDER TO DETECT ONE OR MORE MODIFICATION(S), AND ASSOCIATED ANALYSIS DEVICE (granted Feb 10, 2026; 2y 5m to grant)
Patent 12530798: LEARNED FORENSIC SOURCE SYSTEM FOR IDENTIFICATION OF IMAGE CAPTURE DEVICE MODELS AND FORENSIC SIMILARITY OF DIGITAL IMAGES (granted Jan 20, 2026; 2y 5m to grant)
Patent 12505539: CELL BODY SEGMENTATION USING MACHINE LEARNING (granted Dec 23, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 81%
With Interview (+17.5%): 99%
Median Time to Grant: 3y 1m
PTA Risk: Moderate

Based on 128 resolved cases by this examiner. Grant probability derived from career allow rate.
