Prosecution Insights
Last updated: April 19, 2026
Application No. 18/560,920

NAVIGATION MAPPING METHOD FOR CONSTRUCTING EXTERNAL IMAGES OF MACHINE

Non-Final OA (§101, §103, §112)

Filed: Nov 14, 2023
Examiner: YANG, JIANXUN
Art Unit: 2662
Tech Center: 2600 — Communications
Assignee: Shenzhen Seauto Technology Co. Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 74% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 74%, above average (472 granted / 635 resolved; +12.3% vs TC avg)
Interview Lift: +18.6% for resolved cases with interview (strong)
Avg Prosecution: 2y 9m typical timeline (45 currently pending)
Total Applications: 680 across all art units (career history)

Statute-Specific Performance

§101: 3.8% (-36.2% vs TC avg)
§103: 56.1% (+16.1% vs TC avg)
§102: 16.7% (-23.3% vs TC avg)
§112: 17.1% (-22.9% vs TC avg)

TC averages are estimates. Based on career data from 635 resolved cases.

Office Action

Rejections at issue: §101, §103, §112
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. Claims 1-10 are pending.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-10 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim 1, the following two-step analysis is applied for evaluating the 35 U.S.C. § 101 subject matter eligibility of the claims.

Step 1: Statutory Category. Claim 1 recites a "navigation mapping method," which falls within the statutory category of a process.

Step 2A, Prong 1: recites an abstract idea. Claim 1 is directed to mathematical concepts and mental processes. The steps of "processing" images to determine coverage and "obtaining... positions" are mathematical calculations performed on data. Furthermore, a human could perform the mental equivalent of interpreting a 2D image to estimate a location. Thus, the claim recites a judicial exception.

Step 2A, Prong 2: not integrated into a practical application. The claim does not integrate the judicial exception into a practical application. The step of "processing" is recited at a high level of generality without providing specific details on how the image is processed or how the position is calculated. The claim merely uses a generic computer environment to perform the abstract calculations.

Step 2B: does not amount to significantly more. The additional elements in the claim do not amount to an inventive concept. Data acquisition: "obtaining real-time images" is mere data gathering. Real-time: the constraint of "real-time" processing does not add significantly more to the abstract idea. Conventionality: the remaining steps represent well-understood, routine, and conventional activities.

Conclusion: Claim 1 is directed to an abstract idea and lacks an inventive concept. Claim 1 is rejected as ineligible subject matter under 35 U.S.C. § 101.

Regarding dependent claims 2-10: the limitations in all dependent claims have been examined in the same way as the independent claim. All dependent claims 2-10 are patent-ineligible under 35 U.S.C. § 101: each merely adds further mathematical details or data-gathering steps to the abstract idea without a technological improvement or an unconventional implementation.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 6 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA applications, the applicant) regards as the invention.
Claim 6 recites the limitation "speed computing formulas of the machine in the coordinate system are as follows: P(x,y); ...". P(x,y) is a symbol, not a formula by itself. It is not clear how the recited P(x,y) relates to a formula.

Claim Rejections - 35 USC § 103

The following is a quotation of pre-AIA 35 U.S.C. 103(a), which forms the basis for all obviousness rejections set forth in this Office action: (a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102 of this title, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negatived by the manner in which the invention was made.

Claims 1-2 and 7-10 are rejected under 35 U.S.C. 103 as being unpatentable over Behzadan (US20230029573) in view of Nadir et al. (US20120200703).

Regarding claim 1, Behzadan teaches a navigation mapping method for constructing external images of a machine, the method comprising: obtaining real-time images of a target area (Behzadan discloses a method for obtaining aerial video footage via cameras installed in UAVs for identifying, locating, and mapping targets, with cameras actively collecting images in real time across multiple UAVs: "a method for identifying, locating, and mapping targets of interest using UAV camera footage... comprising obtaining UAV visual data, wherein the UAV visual data comprises aerial video footage comprising a plurality of frames", [0006]; "UAVs (e.g., drones) equipped with RGB cameras may be deployed to automatically recognize and map target objects", [0014]; "the UAV visual data in Step 104 may be any images or videos collected by UAVs during operation. The images or videos may comprise perspective view footage of landscapes or seascapes scattered with various targets of interest. ..., the targets of interest may be, without limitation, ... cars, trucks...", [0015]; car => "machine").

Behzadan does not expressly disclose, but Nadir teaches, processing the real-time images and obtaining a two-dimensional (2D) image of the target area (Nadir stitches multiple simultaneous camera views into a single, planar, wide-FOV image, i.e., a 2D stitched ground-plane image, and does so in real time: "The cameras are directed towards different zones in a scene and acquire on sensors thereof synchronized different pictures which are amended or corrected for distortions (deformations) such as perspective and seamlessly stitched to form in the sensors a continuous image corresponding to a common plane on or over the scene", [0014]; "handling, storing and transmission of the sub-images or video stream are performed in real-time during the operation of the imaging system", [0012]).

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the teachings of Nadir into the system or method of Behzadan in order to stitch images from a UAV to help track a vehicle by creating a larger, more complete view of an area, which offers greater context and enables more reliable long-term tracking. The combination of Behzadan and Nadir also teaches other enhanced capabilities.
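For illustration, the kind of multi-camera stitching Nadir describes can be prototyped with OpenCV's high-level stitcher. This is a minimal sketch, not the applicant's or the references' implementation; the file names are placeholders, and nothing in the record ties the method to OpenCV.

```python
import cv2

# Synchronized views of the target area (file names are placeholders).
views = [cv2.imread(p) for p in ("cam0.jpg", "cam1.jpg", "cam2.jpg")]

# OpenCV's high-level stitcher estimates the per-camera transforms,
# warps the views onto a common plane, and blends the overlapping seams.
stitcher = cv2.Stitcher_create()
status, panorama = stitcher.stitch(views)

if status == 0:  # cv2.Stitcher_OK
    cv2.imwrite("target_area_2d.jpg", panorama)
else:
    print(f"stitching failed with status {status}")
```

The high-level API hides the per-pair alignment (x' = H · x) that the office action cites explicitly for claim 10 below.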
The combination of Behzadan and Nadir further teaches: based on a coverage of the machine in the 2D image, obtaining real-time positions of the machine in the target area (Behzadan enables object (vehicle) identification and localization by using CNNs on UAV-captured imagery, mapping detected targets, including vehicles, by projecting them onto an orthogonal (map) coordinate system and thus yielding real-time position data: "passing the UAV visual data through a convolutional neural network (CNN) to detect targets of interest... wherein the CNN defines pixel coordinates for the targets of interest... applying a geometric transformation to the known and defined pixel coordinates to obtain real-world orthogonal positions; and projecting the detected targets of interest onto an orthogonal map based on the obtained real-world orthogonal positions", [0006]; "the UAV visual data may be passed through a CNN to identify and localize targets of interest. In embodiments, particularly those in which the UAV visual data may be video footage, the identification and localization of the targets of interest may be performed for each frame in the video footage. The CNN, when trained on relevant datasets, may effectively detect objects based on visual features and output the pixel coordinates of those detected objects. In embodiments, the detected objects may be the targets of interest. As such, the CNN may identify the pixel coordinates for substantially all the targets of interest in each frame of the video footage", [0018]; the target identification and localization may be carried out in either individual frames or the stitched (combined) frames).

Regarding claim 2, the combination of Behzadan and Nadir teaches the respective base claim. The combination further teaches the navigation mapping method for constructing the external images of the machine according to claim 1, wherein the acquisition of the real-time images and the real-time positions comprises: S1: at time T, taking photos facing the target area by covering the target area, so as to obtain a real-time image outside the machine at the time T (Nadir, "the image is generated by a simultaneously triggering a plurality of sensors oriented (aimed) at different directions relative to a scene and simultaneously acquiring image data of different zones of a scene ('pictures') along at least one direction of the scene", [0007]; the sensors all point to the scene); S2: processing the real-time image at the time T to obtain a 2D image; S3: identifying a coverage of the machine in the 2D image at the time T, and determining a position of the machine in the 2D image at the time T based on the coverage; S4: after time t, obtaining the following equation: T = T + t, and repeating S1 to S4 (Behzadan, Nadir, S1 to S3: see the comments on claim 1, with T interpreted as a time period; Nadir, the procedure may be repeated for tracking the target object, "on-going repeating imaging course (e.g. acquisition, processing, transmission and/or storing)", [0012]; "tracking the object as long as the target is in the field of view of system 100", [0185]).

Regarding claim 7, the combination of Behzadan and Nadir teaches the respective base claim. The combination further teaches the navigation mapping method for constructing the external images of the machine according to claim 2, wherein, in S1, taking photos of the target area by using one or more cameras, wherein the one or more cameras are set exterior to the machine (Behzadan, Nadir, see the comments on claim 1).
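A minimal sketch of the claim-2 loop combined with the geometric transformation Behzadan describes, assuming a calibrated pixel-to-map homography H; capture_frame and detect_machine are hypothetical stand-ins (the references describe a CNN detector and a geometric transformation, not a specific API), and the numbers in H are illustrative.

```python
import time
import numpy as np

# Illustrative 3x3 homography from pixel to map (orthogonal) coordinates;
# in practice it would come from camera calibration, not these numbers.
H = np.array([[0.02, 0.00, -5.0],
              [0.00, 0.02, -3.0],
              [0.00, 0.00,  1.0]])

def pixel_to_map(u, v):
    """Apply the geometric transformation: pixel -> real-world position."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]  # dehomogenize

def track_machine(capture_frame, detect_machine, t=0.5):
    """Claim-2 loop: S1 capture at time T, S2-S3 locate the machine from
    its coverage in the 2D image, S4 set T = T + t and repeat."""
    while True:
        frame = capture_frame()        # S1: real-time image at time T
        u, v = detect_machine(frame)   # S2-S3: pixel position from coverage
        x, y = pixel_to_map(u, v)      # real-time position in target area
        print(f"machine at ({x:.2f}, {y:.2f})")
        time.sleep(t)                  # S4: T = T + t, repeat S1-S4
```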
Regarding claim 8, the combination of Behzadan and Nadir teaches the respective base claim. The combination further teaches the navigation mapping method for constructing the external images of the machine according to claim 7, wherein: obtaining the real-time images by using at least two cameras, the at least two cameras being mounted at a high point of the target area; the photographing heights of the plurality of cameras are the same; the plurality of cameras form photographing areas by photographing, and each photographing area comprises at least one other photographing area for forming an intersection area with the photographing area; and fusing the photographing areas based on the intersection area to obtain the 2D image (Behzadan, Nadir, see the comments on claim 1; Nadir, "As the UAV is flying, the sensors of the plurality of cameras 102 acquire a plurality of high resolution pictures of a scene at different directions, possibly with some overlapping at adjoining margins, collectively covering a high resolution large field-of-view of the scene", [0143]; when flying, all on-board cameras have the same height relative to the target of interest (TOI) on the ground; the overlapping images may be stitched together, "stitching of pictures into a larger image", [0136]).

Regarding claim 9, the combination of Behzadan and Nadir teaches the respective base claim. The combination further teaches the navigation mapping method for constructing the external images of the machine according to claim 7, wherein: obtaining the real-time images by using at least three cameras, and selecting at least one of the at least three cameras as a high-mounted camera; a high-mounted photographing area formed by photographing with the high-mounted camera forms an intersection area with each of the other photographing areas, and a combination of the other photographing areas can cover all of the target area; and fusing the photographing areas based on the intersection area to obtain the 2D image (Behzadan, Nadir, see the comments on claim 1; Nadir, Fig. 1B, one of the cameras 102 at the top and two cameras 102 at the bottom may have their respective image coverage areas overlapping each other).

Regarding claim 10, the combination of Behzadan and Nadir teaches the respective base claim. The combination further teaches the navigation mapping method for constructing the external images of the machine according to claim 9, wherein fusing the photographing areas is performed according to a multi-view image fusion algorithm, and a formula of the multi-view image fusion algorithm is as follows:

$$\begin{pmatrix} x_j & y_j & 1 \end{pmatrix} F \begin{pmatrix} x_i \\ y_i \\ 1 \end{pmatrix} = 0$$

wherein x_i and x_j are respectively the abscissas of a feature point of two photographing areas in the intersection area, y_i and y_j are respectively the ordinates of the feature point of the two photographing areas in the intersection area, and F is a basic matrix from one photographing area to another photographing area. (Nadir, Fig. 2A, "The pictures footprints (or 'pictures') 204 are directed to capture a central zone 204 c, two longitudinal zones 204 g at the sides of 204 c, and two latitudinal zones 204 t at the other sides of 204 c. Pictures 204 are optionally overlap at the margins 206 thereof due to the inclinations of cameras 102 relative to each other, facilitating combination ('stitching') of pictures 204 (or corresponding tiles) into a practically contiguous image", [0154]; "seamlessly stitched to form in the sensors a continuous image corresponding to a common plane on or over the scene", [0014]; "stitching and optional alignment are carried out on the sensors of the cameras about the region of interest only", [0160]; the process of seamlessly stitching a feature point Q from two separate images may be as simple as an alignment operation; this is a well-known process in the art, see, e.g., "Image stitching", Wikipedia 2022, x' = H · x, p. 4; or https://web.archive.org/web/20221211181446/https://en.wikipedia.org/wiki/Image_stitching)
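The reconstructed claim-10 relation is the standard epipolar constraint. A minimal sketch, assuming at least eight matched feature points from the intersection area as Nx2 float arrays and using OpenCV's findFundamentalMat (an illustrative choice; neither the claim nor the references names a library):

```python
import numpy as np
import cv2

def estimate_basic_matrix(pts_i, pts_j):
    """Estimate the basic (fundamental) matrix F between two photographing
    areas from >= 8 matched feature points (x_i, y_i) <-> (x_j, y_j) in
    their intersection area, then check the claim-10 relation
    (x_j, y_j, 1) F (x_i, y_i, 1)^T = 0 for each correspondence."""
    F, _ = cv2.findFundamentalMat(pts_i, pts_j, cv2.FM_8POINT)
    for (xi, yi), (xj, yj) in zip(pts_i, pts_j):
        residual = np.array([xj, yj, 1.0]) @ F @ np.array([xi, yi, 1.0])
        print(f"epipolar residual: {residual:.2e}")  # near zero for inliers
    return F
```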
Claims 3-6 are rejected under 35 U.S.C. 103 as being unpatentable over Behzadan (US20230029573) in view of Nadir et al. (US20120200703), and further in view of Shapiro et al. (US20210407121).

Regarding claim 3, the combination of Behzadan and Nadir teaches the respective base claim. The combination does not expressly disclose, but Shapiro teaches, the navigation mapping method for constructing the external images of the machine according to claim 2, wherein the processing of the real-time images in S2 includes a binarization operation, and the binarization operation comprises: obtaining a grayscale threshold; based on the grayscale threshold, determining each pixel point of the real-time image; if a grayscale value of the pixel point is greater than the grayscale threshold, setting the grayscale value of the pixel point to one (1), and if the grayscale value of the pixel point is less than the grayscale threshold, setting the grayscale value of the pixel point to zero (0). (Shapiro, "pre-processing can optionally include a binarization, where a grayscale or color image is converted to black and white. Black levels can be assigned with respect to a threshold pixel value (e.g., white when the pixel value is less than the threshold and black when the pixel value is greater than the threshold value)", [0078])

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the teachings of Shapiro into the modified system or method of Behzadan and Nadir in order to reduce image complexity, remove noise, and improve the focus on essential features such as shapes or contours, making subsequent analysis faster and more accurate. The combination of Behzadan, Nadir and Shapiro also teaches other enhanced capabilities.

Regarding claim 4, the combination of Behzadan, Nadir and Shapiro teaches the respective base claim. The combination further teaches the navigation mapping method for constructing the external images of the machine according to claim 3, wherein, in the 2D image, an area with pixel grayscale value "0" is determined as a machine-walkable area, and an area with pixel grayscale value "1" is determined as a machine-unwalkable area. (The flooded areas of Fig. 5 of Behzadan may become black areas after a binarization per Shapiro; the black areas may be assigned any meaning per design choice, such as vehicle-unwalkable areas, because they are flooded.)
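A minimal sketch of the claim-3 binarization and the claim-4 labeling, assuming a grayscale version of the stitched 2D image; the file name carries over from the stitching sketch above, and the threshold value is illustrative.

```python
import cv2

# Grayscale 2D image of the target area (placeholder file name).
gray = cv2.imread("target_area_2d.jpg", cv2.IMREAD_GRAYSCALE)

threshold = 128  # illustrative grayscale threshold

# Claim 3: grayscale value above the threshold -> 1, below -> 0.
binary = (gray > threshold).astype("uint8")

# Claim 4: 0-valued pixels form the machine-walkable area,
# 1-valued pixels form the machine-unwalkable area.
walkable_mask = binary == 0
```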
Regarding claim 5, the combination of Behzadan, Nadir and Shapiro teaches the respective base claim. The combination further teaches the navigation mapping method for constructing the external images of the machine according to claim 4, wherein, in S4, recording the position of the machine in the 2D image obtained every time, and establishing a coordinate system, including: setting a coordinate origin in the 2D image and establishing a coordinate system based on the coordinate origin, and, on the coordinate system, recording coordinates of specific points on the coverage of the machine. (Behzadan, see the comments on claim 1; "projecting them onto an orthogonal (map) coordinate system", [0006])

Regarding claim 6, the combination of Behzadan, Nadir and Shapiro teaches the respective base claim. The combination further teaches the navigation mapping method for constructing the external images of the machine according to claim 5, wherein speed computing formulas of the machine in the coordinate system are as follows:

$$P(x, y); \quad v_x = \frac{x_{T+t} - x_T}{t}, \quad v_y = \frac{y_{T+t} - y_T}{t}$$

wherein P(x, y) is a coordinate recorded in the coordinate system for a specific point, v_x is the speed of the specific point along the x-axis within a time t, v_y is the speed of the specific point along the y-axis within the time t, x_{T+t} is the abscissa of the specific point after the time t, x_T is the abscissa of the specific point before the time t, y_{T+t} is the ordinate of the specific point after the time t, and y_T is the ordinate of the specific point before the time t. (Nadir, "in real time, for changes in said TOI's position relative to the UAV", [0073]; from basic physics, if the position vector from UAV to TOI is x(T) at time T and x(T+t) at time T+t, then the displacement is ∆x = x(T+t) − x(T) and the velocity vector is v = ∆x/t; the projection of v onto the (x, y) plane gives v_x = (x(T+t) − x(T))/t and v_y = (y(T+t) − y(T))/t.)

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JIANXUN YANG, whose telephone number is (571) 272-9874. The examiner can normally be reached MON-FRI, 8AM-5PM Pacific Time.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Amandeep Saini, can be reached at (571) 272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JIANXUN YANG/
Primary Examiner, Art Unit 2662
2/9/2026
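As a footnote to the claim-6 speed formulas quoted above: they are simple finite differences, as in this minimal sketch (the function name and sample numbers are illustrative, not from the application).

```python
def machine_speed(p_before, p_after, t):
    """Finite-difference speeds per claim 6: v_x = (x_{T+t} - x_T) / t and
    v_y = (y_{T+t} - y_T) / t, for a specific point P(x, y) recorded in the
    2D-image coordinate system at times T and T + t."""
    (x_T, y_T), (x_Tt, y_Tt) = p_before, p_after
    return (x_Tt - x_T) / t, (y_Tt - y_T) / t

# Example: the point moves from (2.0, 3.0) to (5.0, 7.0) over t = 2 s.
vx, vy = machine_speed((2.0, 3.0), (5.0, 7.0), 2.0)  # -> (1.5, 2.0)
```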

Prosecution Timeline

Nov 14, 2023
Application Filed
Feb 08, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602917
OBJECT DETECTION DEVICE AND METHOD
2y 5m to grant; granted Apr 14, 2026

Patent 12602853
METHODS AND APPARATUS FOR PET IMAGE RECONSTRUCTION USING MULTI-VIEW HISTO-IMAGES OF ATTENUATION CORRECTION FACTORS
2y 5m to grant; granted Apr 14, 2026

Patent 12590906
X-RAY INSPECTION APPARATUS, X-RAY INSPECTION SYSTEM, AND X-RAY INSPECTION METHOD
2y 5m to grant; granted Mar 31, 2026

Patent 12586223
METHOD FOR RECONSTRUCTING THREE-DIMENSIONAL OBJECT COMBINING STRUCTURED LIGHT AND PHOTOMETRY AND TERMINAL DEVICE
2y 5m to grant; granted Mar 24, 2026

Patent 12586152
METHOD, ELECTRONIC DEVICE, AND COMPUTER PROGRAM PRODUCT FOR TRAINING IMAGE PROCESSING MODEL
2y 5m to grant; granted Mar 24, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 74%
With Interview: 93% (+18.6%)
Median Time to Grant: 2y 9m
PTA Risk: Low

Based on 635 resolved cases by this examiner. Grant probability derived from career allow rate.
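The with-interview figure appears to add the interview lift to the base rate in percentage points (an inference from the displayed numbers, not a stated formula):

$$74\% + 18.6\% = 92.6\% \approx 93\%$$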
