Prosecution Insights
Last updated: April 19, 2026
Application No. 18/746,494

METHOD FOR CREATING 3D OBJECTS FOR AERIAL VIEW VIDEO BASED MAP, AND COMPUTER PROGRAM RECORDED ON RECORDING MEDIUM FOR EXECUTING METHOD THEREFOR

Non-Final OA: §101, §103, §112
Filed: Jun 18, 2024
Examiner: WU, MING HAN
Art Unit: 2618
Tech Center: 2600 (Communications)
Assignee: Mobiltech
OA Round: 1 (Non-Final)
Grant Probability: 76% (Favorable)
OA Rounds: 1-2
To Grant: 2y 8m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% (above average; 282 granted / 370 resolved; +14.2% vs TC avg)
Interview Lift: +23.3% (strong), among resolved cases with interview
Typical Timeline: 2y 8m avg prosecution; 35 currently pending
Career History: 405 total applications across all art units

Statute-Specific Performance

§101: 7.8% (-32.2% vs TC avg)
§103: 68.3% (+28.3% vs TC avg)
§102: 2.1% (-37.9% vs TC avg)
§112: 12.6% (-27.4% vs TC avg)
Tech Center average estimate shown for comparison • Based on career data from 370 resolved cases
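The per-statute deltas above are all consistent with a single Tech Center baseline. A minimal sketch, assuming "vs TC avg" is simply the statute rate minus the Tech Center average (the exact methodology is not stated on this page), recovers that baseline:

```python
# Per-statute rates and their deltas vs. the Tech Center average, as listed above.
# Assumption: delta = rate - tc_avg, so tc_avg = rate - delta.
rates = {"101": (7.8, -32.2), "103": (68.3, +28.3),
         "102": (2.1, -37.9), "112": (12.6, -27.4)}

tc_avgs = {s: round(rate - delta, 1) for s, (rate, delta) in rates.items()}
# Every statute implies the same Tech Center baseline of 40.0%.
```

If the assumption holds, the "black line" on the original chart sat at roughly 40% for all four statutes.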

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 9 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Claim 9 is directed to non-statutory subject matter because it claims software per se as a computer program recorded on a recording medium.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 9 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant, regards as the invention.
Claim 9 recites “… wherein the computer program is coupled to a computing device comprising …”. This creates ambiguity: how can a computer program be coupled to a computing device, a physical device? Dependent claims not mentioned specifically above inherit the deficiencies of the claims on which they depend.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 2, 3, 6, 8, 9, and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Bockem et al. (Publication: US 2009/0079761 A1) in view of Dane et al. (Publication: US 2017/0084038 A1).

Regarding claim 1, Bockem discloses a method for creating a 3D object for an [[aerial view video]] based map comprising ([0185], Fig.
5 – generated local 3D models in a 3D map aerial view.): designating a processing region for generating at least one 3D object model on a map generated based on an [[aerial view video]], by a map generating device ([0187] - By selecting buildings, these buildings 21 within the 3D map view, e.g. by moving a mouse cursor or a touchscreen input means 13 over the respective areas, an area corresponding to the extent of the selected surveying data set (providing a local 3D model) is marked with its borders 22 and text with additional information is overlaid, e.g. naming the used surveying device 23, a creation date 24, and quality indication 25, e.g. in the form of a one-dimensional quality index. The selected area is a polygon connecting points with a border. [0028] – the methods above were executed by the computer program product.); selecting a type of an object model that is to be generated in the designated processing region, by the map generating device ([0036], [0187] - a user may initially choose whether to start the registering process close to the ground or close to a certain height, e.g. when registering a 3D model associated to a certain story of a building. By selecting buildings, these buildings within the 3D map view, e.g. by moving a mouse cursor or a touchscreen input means over the respective areas, an area corresponding to the extent of the preselected surveying data set (providing a local 3D model). [0028] – the methods above were executed by the computer program product.); and generating a pre-stored object model corresponding to the selected type within the designated processing region, by the map generating device ([0081] - upon selection of one of the local 3D models, a 3D main visualization of the selected local 3D model is provided, e.g. a full screen 45° aerial view, wherein the 3D main visualization is different to, e.g. more detailed than, the 3D thumbnail visualization corresponding to the selected local 3D model item. Furthermore, the main visualization has a viewing direction which (initially) corresponds to a current viewing direction provided by the 3D thumbnail visualization corresponding to the selected local 3D model item. Fig. 4 - The representation is a pre-stored object in the list as shown in the display. [0199] - The middle part of FIG. 8 shows a transition step wherein the 3D environment visualization corresponding to the 3D item visualization 2 and the flat ground section 32 is vertically projected onto a horizontal flat plane 33. For illustrative purposes the outer border 36 of the horizontal flat plane 33 is depicted in the top and the middle part of the figure. [0028] – the methods above were executed by the computer program product.). Bockem does not disclose, but Dane does disclose: creating an object for an aerial view video ([0042] - an unmanned aerial vehicle (UAV) may generate a non-obstacle map 126 from video recorded while in flight, and may navigate based on detected objects (e.g., buildings, signs, people, packages, etc.). See Figs. 12A and 12B; the map includes objects.); a map generated based on an aerial view video ([0042], cited above); and an aerial view video ([0042], cited above).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Bockem with creating an object for an aerial view video, a map generated based on an aerial view video, and an aerial view video, as taught by Dane. The motivation for doing so is to improve image processing.

Regarding claim 2, Bockem in view of Dane discloses all the limitations of claim 1. Bockem discloses wherein the designating selects a plurality of points on the map, and designates a polygon, made by connecting the plurality of points based on a selected order, as the processing region ([0187] - By selecting buildings, these buildings 21 within the 3D map view, e.g. by moving a mouse cursor or a touchscreen input means 13 over the respective areas, an area corresponding to the extent of the preselected surveying data set (providing a local 3D model) is marked with its borders 22 and text with additional information is overlaid, e.g. naming the used surveying device 23, a creation date 24, and quality indication 25, e.g. in the form of a one-dimensional quality index. The selected area is a polygon connecting points with a border.).

Regarding claim 3, Bockem in view of Dane discloses all the limitations of claim 1. Bockem discloses wherein the designating selects at least one object on the map ([0036], [0187] - a user may initially choose whether to start the registering process close to the ground or close to a certain height, e.g. when registering a 3D model associated to a certain story of a building. By selecting buildings, these buildings within the 3D map view, e.g. by moving a mouse cursor or a touchscreen input means over the respective areas, an area corresponding to the extent of the preselected surveying data set (providing a local 3D model).), identifies a set of similar objects whose similarity to the object is higher than a preset value based on the selected object, and designates a region including all the identified similar objects as the processing region ([0017], [0081], [0138] - upon selection of one of the local 3D models, a 3D main visualization of the selected local 3D model is provided. Matching the local building model and the translocal city model typically requires information of a rough alignment and/or orientation of the two models with respect to each other in order that, for example, a feature extraction and matching algorithm can precisely align the two models, e.g. to generate a common 3D model wherein the data of the translocal 3D model corresponding to the area 3 represented by the local 3D model is replaced by data of the local 3D model. [0199], Fig. 8 - As shown in Fig. 8, a region is designated, wherein the 3D environment visualization corresponding to the 3D item visualization 2 and the flat ground section is vertically projected onto a horizontal flat plane.).

Regarding claim 6, Bockem in view of Dane discloses all the limitations of claim 1. Bockem discloses wherein the designating identifies an actual location on the map of the designated processing region ([0187] - By selecting buildings, these buildings 21 within the 3D map view, e.g. by moving a mouse cursor or a touchscreen input means 13 over the respective areas, an area corresponding to the extent of the selected surveying data set (providing a local 3D model) is marked with its borders 22 and text with additional information is overlaid, e.g. naming the used surveying device 23, a creation date 24, and quality indication 25, e.g. in the form of a one-dimensional quality index. The selected area is a polygon connecting points with a border.)
extracts altitude restriction information about a building on a land corresponding to the identified actual location, and designates the height of the processing region based on the extracted altitude restriction information ( [0022] - identify and extract sections from the existing 3D models corresponding to certain items in the environment, e.g. cars or particular kinds of building roofs. [0170] The 3D item visualization 2 is moveable within the 3D environment visualization 1 by means of touchscreen input or mouse input, wherein different input modes are provided to position and orient the 3D item visualization 2 within the 3D environment visualization 1. [0171] For example, as depicted from top to bottom of the figure, the 3D item visualization 2 has already been rotated into its correct orientation, wherein for finally arranging the represented building it is switched between two different input modes 4A,4B restricting movement of the 3D item visualization 2 to different subsets of translational degrees of freedom each. By way of example, [0172] in a first input mode 4A, movement of the 3D item visualization 2 is restricted to translations along horizontal (orthogonal) x and y axes, wherein any rotation and the height 5 above ground level are kept fixed, and [0173] in a second input mode 4B, movement of the 3D item visualization 2 is restricted to adapting the height 5 (along a z axis orthogonal to the x and y axes). [0174] For example, the switch between input modes 4A,4B may be based on a keystroke combo or a multi-touch gesture such as sweeping with one finger for x-y-movement and sweeping with two fingers for the height adjustment. [0176] After placing the 3D item visualization 2 to an end position 3, the relative configuration 6 between the 3D environment visualization 1 and the 3D item visualization 2 is locked and used, e.g. 
by an automatic feature extraction and matching algorithm, to precisely align the two models in order to generate a common 3D model visualized in the bottom frame of the figure.).

Regarding claim 8, Bockem in view of Dane discloses all the limitations of claim 7. Bockem discloses the selecting extracts an edge existing in the processing region, extracts one or more enclosures by the extracted edge, and identifies a plurality of objects existing in the processing region through the extracted enclosures ([0187] - By selecting buildings, these buildings 21 within the 3D map view, e.g. by moving a mouse cursor or a touchscreen input means 13 over the respective areas, an area corresponding to the extent of the selected surveying data set (providing a local 3D model) is marked with its borders 22 and text with additional information is overlaid, e.g. naming the used surveying device 23, a creation date 24, and quality indication 25, e.g. in the form of a one-dimensional quality index. The selected area is a polygon connecting points with a border. [0089] - the local 3D models are provided by aerial or ground based reality capture devices, e.g. of similar kind as described above with respect to providing a translocal 3D model and/or local 3D models. The 3D models may further be provided as output of an algorithm, e.g. a machine learning algorithm, configured for analyzing and/or combining existing 3D models, e.g. to identify and extract sections from the existing 3D models corresponding to certain items in the environment, e.g. cars or particular kinds of building roofs.).

Regarding claim 9, the method of claim 8: Bockem discloses wherein the selecting displays a pre-stored object model list corresponding to the type of the selected object, and selects one of the displayed object model lists ([0081] - upon selection of one of the local 3D models, a 3D main visualization of the selected local 3D model is provided, e.g. a full screen 45° aerial view, wherein the 3D main visualization is different to, e.g. more detailed than, the 3D thumbnail visualization corresponding to the selected local 3D model. Furthermore, the main visualization has a viewing direction which (initially) corresponds to a current viewing direction provided by the 3D thumbnail visualization corresponding to the selected local 3D model. [0086] - at least one of the 3D thumbnail visualizations provides a representation of its corresponding local 3D model such that the corresponding area within the environment is viewed embedded in part of the environment around an acquisition location of surveying data that provided the local 3D model corresponding to said at least one 3D thumbnail visualization. For example, the environment may be provided by a translocal 3D model as described above and the acquisition location may be roughly known from metadata of the corresponding local 3D model, or the embedding position may be known because the corresponding local 3D model is already registered. By way of example, the 3D thumbnail visualization is a white plaster model view wherein the area corresponding to the local 3D model is highlighted by a color. Fig. 4 - The representation is a pre-stored object in the list as shown in the display.).
Regarding claim 10, Bockem discloses the computer program with a computing device comprising ([0028] - The invention further relates to a computer program product comprising program code, which, when executed by a computer, causes the computer to carry out the following described method:): designating a processing region for generating at least one 3D object model on a map generated based on an [[aerial view video]], by the processor ([0187] - By selecting buildings, these buildings 21 within the 3D map view, e.g. by moving a mouse cursor or a touchscreen input means 13 over the respective areas, an area corresponding to the extent of the selected surveying data set (providing a local 3D model) is marked with its borders 22 and text with additional information is overlaid, e.g. naming the used surveying device 23, a creation date 24, and quality indication 25, e.g. in the form of a one-dimensional quality index. The selected area is a polygon connecting points with a border. [0028] – the methods above were executed by the computer program product.); selecting a type of an object model that is to be generated in the designated processing region, by the processor ([0036], [0187] - a user may initially choose whether to start the registering process close to the ground or close to a certain height, e.g. when registering a 3D model associated to a certain story of a building. By selecting buildings, these buildings within the 3D map view, e.g. by moving a mouse cursor or a touchscreen input means over the respective areas, an area corresponding to the extent of the preselected surveying data set (providing a local 3D model). [0028] – the methods above were executed by the computer program product.); and generating at least one object model corresponding to the selected type within the designated processing region, by the processor ([0081] - upon selection of one of the local 3D models, a 3D main visualization of the selected local 3D model is provided, e.g. a full screen 45° aerial view, wherein the 3D main visualization is different to, e.g. more detailed than, the 3D thumbnail visualization corresponding to the selected local 3D model item. Furthermore, the main visualization has a viewing direction which (initially) corresponds to a current viewing direction provided by the 3D thumbnail visualization corresponding to the selected local 3D model item. Fig. 4 - The representation is a pre-stored object in the list as shown in the display. [0199] - The middle part of FIG. 8 shows a transition step wherein the 3D environment visualization corresponding to the 3D item visualization 2 and the flat ground section 32 is vertically projected onto a horizontal flat plane 33. For illustrative purposes the outer border 36 of the horizontal flat plane 33 is depicted in the top and the middle part of the figure. [0028] – the methods above were executed by the computer program product.). Bockem does not disclose, but Dane does disclose: a computer program recorded on a recording medium, wherein the computer program is coupled to a computing device comprising ([0164] - Data 1721a and instructions 1741a may be stored in the memory 1739. The instructions 1741a may be executable by the processor 1728 to implement one or more of the methods described herein. Executing the instructions 1741a may involve the use of the data that is stored in the memory 1739. When the processor 1728 executes the instructions 1741, various portions of the instructions 1741b may be loaded onto the processor 1728, and various pieces of data 1721b may be loaded onto the processor 1728.): a memory ([0164], cited above); a transceiver ([0165] – transceiver); and a processor processing a command loaded in the memory, whereby the computer program executes ([0164], cited above): a map generated based on an aerial view video ([0042] - an unmanned aerial vehicle (UAV) may generate a non-obstacle map 126 from video recorded while in flight, and may navigate based on detected objects (e.g., buildings, signs, people, packages, etc.). See Figs. 12A and 12B; the map includes objects.); and an aerial view video ([0042], cited above).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Bockem with a computer program recorded on a recording medium, wherein the computer program is coupled to a computing device comprising: a memory; a transceiver; and a processor processing a command loaded in the memory, whereby the computer program executes a map generated based on an aerial view video and an aerial view video, as taught by Dane. The motivation for doing so is to improve image processing.

Claims 4 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Bockem et al. (Publication: US 2009/0079761 A1) in view of Dane et al. (Publication: US 2017/0084038 A1) and Dal Mutto et al. (Publication: US 2020/0372626 A1).

Regarding claim 4, Bockem in view of Dane discloses all the limitations of claim 1. Bockem discloses wherein the designating identifies the set of similar objects by determining the similarity ([0056] - The identification may also be based on matching or connecting surfaces inside the subarea associated to the local 3D model with immediately adjoining surfaces outside the subarea. Accordingly, in a further embodiment the identification is based on analyzing a part of the local 3D model corresponding to an inside part of the subarea and a part of the translocal 3D model corresponding to an outside part of the subarea, wherein the inside part and the outside part immediately adjoin each other. [0058] - A further aspect of the invention relates to a computer-implemented method, comprising [0059] reading input data providing a translocal 3D model of an environment and a local 3D model of an item within the environment, e.g. wherein the input data are of similar kind as described above for the snapping-in aspect.).
Bockem in view of Dane does not disclose, but Dal Mutto discloses, determining based on RGB (Red, Green, Blue) values of the selected object ([0100] – the object of interest will be determined based on the pixels of the RGB-D frame.). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Bockem in view of Dane with determining based on RGB (Red, Green, Blue) values of the selected object, as taught by Dal Mutto. The motivation for doing so is to help find the minimal-size bounding box, thus reducing time consumption.

Regarding claim 7, Bockem in view of Dane discloses all the limitations of claim 1. Bockem discloses wherein the selecting identifies a plurality of objects included in the processing region, classifies a type of the identified objects based on a [[shape]] of the identified objects ([0054] - the identification involves an assignment of different surfaces within the local 3D model and of different surfaces within the translocal 3D model, respectively, into different surface classes by semantic and/or geometric classification, and a comparison of the local 3D model with the translocal 3D model in order to match surfaces assigned to corresponding classes.), and selects a type of an object that exists in the processing region as the type of the object model ([0181] - FIG. 4 schematically depicts a task list according to a further embodiment of the inventive computer-implemented method, the top part of the figure showing an initial state of the task list, wherein 3D thumbnail visualizations 11 are at rest, and the bottom part showing a state of the task list, wherein one of the 3D thumbnail visualizations 12 is automatically rotating upon preselection by touchscreen input 13. Therefore, a user may pre-view the data set corresponding to a list entry 14 based on the 3D thumbnail visualization 11, 12, which, for example, simplifies data selection and interpretation of additional data information in a text section 15 of the list entry 14 (each list entry 14 comprising one of the 3D thumbnail visualizations and a corresponding text section).).

Bockem in view of Dane does not disclose, but Dal Mutto discloses, classifying based on a shape of the identified objects ([0161] - Particular heuristic rules are specific to the various different classes of objects. As another example, the heuristics may include a canonical general shape for objects of the class, then scale the canonical shape in accordance with the dimensions of the partial 3D model. For example, while reusable coffee filters may differ in appearance, most reusable coffee filters have the same general shape, and therefore scaling the canonical shape to the size of the partial 3D model will extrapolate an approximately accurately sized model for computing a minimum (or tightly) enclosing bounding box for the object.) and selecting a type of an object that exists most frequently ([0161], cited above. In this case, the objects that exist most frequently are the objects that share the same feature: scaling the canonical shape to the size of the partial 3D model extrapolates an approximately accurately sized model for computing a minimum (or tightly) enclosing bounding box for the object.). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Bockem in view of Dane with classifying based on a shape of the identified objects and selecting a type of an object that exists most frequently, as taught by Dal Mutto. The motivation for doing so is to help find the minimal-size bounding box, thus reducing time consumption.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Bockem et al. (Publication: US 2009/0079761 A1) in view of Dane et al. (Publication: US 2017/0084038 A1) and Pugh et al. (Publication: US 2021/0142497 A1).

Regarding claim 5, Bockem in view of Dane discloses all the limitations of claim 1, including the aerial view video. Bockem in view of Dane does not disclose, but Pugh discloses, wherein the designating identifies a plurality of objects included in the processing region based on depth information contained in point cloud data acquired by a lidar as well as the view, estimates an average height of the plurality of identified objects, and designates a height of the processing region based on the estimated average height ([0051] - S200 can include determining three-dimensional features (S210). The three-dimensional features can be determined based on: 3D features from visual-inertial odometry and/or SLAM, from multiple view triangulation of points or lines, from active depth sensors (e.g., depth data from time-of-flight sensors, structured light, LIDAR, range sensors, etc.), from stereo or multi-lens optics, from photogrammetry, from neural networks, and any other suitable method for extracting 3D features.
[0100] In an eighth example of S440, global scale can be determined by determining the height of the camera from the floor plane the photographer is standing on based on the heights of known objects in the image calculated using single-view odometry using gravity (see FIG. 7), an average camera height (e.g., 1.43 meters, 4.7 feet, 5 feet, etc.), and/or determined in any other suitable manner; determining planes or parameters thereof (e.g., height) based on user input (e.g., fine tuning) where the user adjusts a floor height to define the height (e.g., based on visual cues) or drags a virtual marker to define the corners and/or edges of the floor or wall; and/or determining planes based on user input (e.g., manual measures) where the user can mark a vertical floor height for a known height in the image; but can additionally or alternatively include any other suitable process. The process can be a single process, a set of chained processes (e.g., executed sequentially), and/or any other suitable process.). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Bockem in view of Dane with wherein the designating identifies a plurality of objects included in the processing region based on depth information contained in point cloud data acquired by a lidar as well as the view, estimates an average height of the plurality of identified objects, and designates a height of the processing region based on the estimated average height, as taught by Pugh. The motivation for doing so is to improve the editing functionality available to the user.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MING WU, whose telephone number is (571) 270-0724. The examiner can normally be reached Monday - Thursday and alternate Fridays, 9:30am - 6:00pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Devona Faulk, can be reached at 571-272-7515. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MING WU/ Primary Examiner, Art Unit 2618

Prosecution Timeline

Jun 18, 2024
Application Filed
Jan 10, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597109
SYSTEMS AND METHODS FOR GENERATING THREE-DIMENSIONAL MODELS USING CAPTURED VIDEO
2y 5m to grant • Granted Apr 07, 2026
Patent 12579702
METHOD AND SYSTEM FOR ADAPTING A DIFFUSION MODEL
2y 5m to grant • Granted Mar 17, 2026
Patent 12579623
IMAGE PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND READABLE STORAGE MEDIUM
2y 5m to grant • Granted Mar 17, 2026
Patent 12567185
Method and system of creating and displaying a visually distinct rendering of an ultrasound image
2y 5m to grant • Granted Mar 03, 2026
Patent 12548202
TEXTURE COORDINATE COMPRESSION USING CHART PARTITION
2y 5m to grant • Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 76%
With Interview (+23.3%): 99%
Median Time to Grant: 2y 8m
PTA Risk: Low
Based on 370 resolved cases by this examiner. Grant probability derived from career allow rate.
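The headline projections above can be reproduced from the examiner's career record. A minimal sketch, assuming the dashboard's grant probability is simply the career allow rate (282 granted of 370 resolved) and the with-interview figure adds the stated +23.3 percentage-point lift, capped at 100%:

```python
# Examiner's career record and interview lift, as reported above.
# Assumption: grant probability = career allow rate; interview adds the lift verbatim.
granted, resolved = 282, 370
interview_lift = 23.3  # percentage points

grant_prob = round(100 * granted / resolved)              # 76
with_interview = min(grant_prob + interview_lift, 100.0)  # 99.3, displayed as 99%
```

The actual model behind the dashboard may weight art-unit or statute-level factors; this only shows that the displayed numbers are arithmetically consistent with the career data.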
