Prosecution Insights
Last updated: April 19, 2026
Application No. 18/296,232

ALIGNING AN INERTIAL NAVIGATION SYSTEM (INS)

Status: Non-Final OA (§103)
Filed: Apr 05, 2023
Examiner: Shabman, Mark A.
Art Unit: 2855
Tech Center: 2800 — Semiconductors & Electrical Systems
Assignee: Honeywell International Inc.
OA Round: 3 (Non-Final)

Grant Probability: 84% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 9m
Grant Probability With Interview: 98%

Examiner Intelligence

Career Allow Rate: 84% (862 granted / 1023 resolved), above average (+16.3% vs TC avg)
Interview Lift: +14.0% (moderate), comparing resolved cases with vs. without an interview
Typical Timeline: 2y 9m average prosecution; 40 applications currently pending
Career History: 1063 total applications across all art units

Statute-Specific Performance

§101: 1.5% (-38.5% vs TC avg)
§103: 49.0% (+9.0% vs TC avg)
§102: 17.5% (-22.5% vs TC avg)
§112: 29.4% (-10.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 1023 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 8 December 2025 has been entered.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

STATUS

The prior rejection has been overcome by amendment; however, a new rejection follows. Applicant is invited to contact the Examiner to discuss any further steps or amendments to advance prosecution of the case.

Claims 7, 16 and 20 each comprise limitations reciting equations that define the coordinate system. Within the amendment, the equations are difficult to read; however, the claims are not objected to because they are the same, unamended claims as previously filed. Should the claims be deemed allowable, a clear copy would need to be provided.

Response to Arguments

Applicant's arguments with respect to claims 1-20 have been considered but are moot in view of the new ground(s) of rejection. The arguments are directed toward the newly added limitations of independent claims 1, 10 and 19, which include the addition of "a security code also encoded in the machine-readable image to authenticate the machine-readable image." The previously cited reference Tatsubori includes similar limitations, which are incorporated below in further detail.
Claim Objections

Claims 7, 16 and 20 are objected to because of the following informalities: each claim contains equations which, while legible because they correspond to clearer versions in the specification, should be amended to read more clearly. Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Dockter et al. (US 2008/0071431), Lim (US 2018/0224868) and Tatsubori (US 2020/0249689).
Regarding claim 1, Dockter teaches a method for determining a location for alignment of an inertial navigation system (paragraph 0009) comprising: aiming an imaging and ranging system (camera, paragraph 0011) mounted on a gimbal (paragraph 0011) at a known location (target point 16, which may be a landing zone on the deck of a movable ship and is therefore captured by the camera as an image, paragraphs 0013, 0062); determining an azimuth, an elevation angle and a slant range to the known location (paragraph 0029); capturing a machine-readable image via the camera; decoding the known location (the target point detected is known and processed by the computing means, paragraph 0009); deriving a precise location of the vehicle from the known location, the azimuth, the elevation angle and the slant range (paragraph 0036 discloses using the data to control the aircraft's position); and aligning the INS based on the precise location (paragraph 0065 discloses determining the INS location of the aircraft under certain circumstances).

Dockter teaches the target as being machine readable but does not explicitly disclose the target point as being a machine-readable image with the known location encoded in the machine-readable image. Lim teaches a method for navigating an aerial vehicle which is similar to that of Dockter, wherein the vehicle 116 can be directed to a landing zone on the deck of a movable ship. The landing zone further comprises an image (landing circle 1200) which is tracked by the vehicle and processed by the imaging system of the vehicle 116. Since Lim is analogous art in the area of aerial vehicle navigation and tracking, it would have been obvious to one of ordinary skill in the art at the time of filing to have combined the teachings of the landing circle of Lim with the system of Dockter to provide a machine-readable image on the deck of the ship in Dockter to clearly indicate the landing zone at its known location for accurate tracking and landing.
In combination, Dockter and Lim do not explicitly disclose the machine-readable image as comprising the known location encoded therein. Tatsubori teaches a system for navigation in which image guidance indicators 120 are positioned at known locations (paragraph 0021) and comprise matrix barcodes containing known location information such as high-definition coordinates (paragraph 0021). Tatsubori further teaches the guidance indicator as including a signature (security) code to identify the authenticity of the indicator (paragraph 0049). A vehicle such as an autonomous drone or self-driving vehicle reads the guidance indicators to extract the information therein and determine the position of the vehicle (and the image) therefrom (paragraph 0023). It would have been obvious to one of ordinary skill in the art at the time of filing to have combined the teachings of Tatsubori with those of Dockter and Lim, applying Tatsubori's teachings regarding matrix barcodes comprising location data and security/authentication to an aerial vehicle such as that of Dockter and Lim, in order to provide similar location information to aid in secure navigation of the vehicle.

Regarding claim 2, Dockter discloses aiming the imaging and ranging system at the target (paragraph 0015) but does not explicitly disclose doing so "manually" as claimed. However, since there is no indicated benefit to manual aiming of the camera in the present application and the system would perform equally well with automatic or manual aiming, it would have been obvious to one having ordinary skill in the art at the time the invention was made to have used manual aiming so that an operator ensures the camera of Dockter is aimed at the correct location, since it has been held that choosing from a finite number of identified, predictable solutions, with a reasonable expectation of success, is obvious. KSR International Co. v. Teleflex Inc. (KSR), 550 U.S. 398, 82 USPQ2d 1385 (2007).
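The "security code encoded in the machine-readable image" limitation amounts to a signed payload: the marker carries both the location data and an authentication code the vehicle can verify before trusting the coordinates. As a minimal sketch only, assuming a hypothetical `lat,lon,alt|signature` payload format and a pre-shared key (neither the claims nor Tatsubori specify the actual encoding or key scheme):

```python
import hmac
import hashlib

# Hypothetical pre-shared key between the marker issuer and the vehicle.
SECRET = b"shared-provisioning-key"

def encode_marker(lat: float, lon: float, alt: float) -> str:
    """Build an illustrative signed payload: 'lat,lon,alt|hex_hmac'."""
    body = f"{lat:.7f},{lon:.7f},{alt:.2f}"
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}|{sig}"

def decode_marker(payload: str):
    """Return (lat, lon, alt) if the security code verifies, else raise ValueError."""
    body, _, sig = payload.rpartition("|")
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("marker failed authentication")
    lat, lon, alt = (float(x) for x in body.split(","))
    return lat, lon, alt

payload = encode_marker(34.0522000, -118.2437000, 86.0)
print(decode_marker(payload))
```

An HMAC is one of several ways to realize a "security code"; a digital signature would serve the same role without a shared secret. The point is only that verification happens before the decoded location is fed to the INS alignment step.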
Regarding claim 3, in the combination as above, determining the azimuth and the elevation angle comprises determining the azimuth and the elevation angle based on an orientation of the imaging and ranging system mounted on the gimbal, since the imaging system of Dockter is mounted to the gimbal (Dockter, paragraph 0015).

Regarding claim 4, in the combination, the machine-readable image is captured with the camera of Dockter (paragraph 0014).

Regarding claim 5, Dockter and Lim teach the claimed method but do not explicitly disclose the machine-readable image as comprising a bar code, 2D bar code, Data Matrix code, QR code, High Capacity Color Barcode or other standardized or proprietary geometric coding scheme as claimed. Tatsubori discloses the machine-readable image as being a matrix or QR code (paragraph 0023). It would have been obvious to one of ordinary skill in the art at the time of filing to have combined the teachings of Tatsubori with regard to the QR code with Dockter and Lim in order to store more information, such as the location information, in the code, since QR codes are capable of doing so.

Regarding claim 6, Dockter and Lim disclose the claimed method but do not explicitly teach decoding the known location by decoding latitude, longitude and altitude of the known location from the machine-readable image as claimed. Tatsubori teaches a method of determining a position for a vehicle in which machine-readable images 120 include latitude, longitude and altitude information for their location (paragraph 0027). A traveling vehicle can read and decode the information with a camera and process it for autonomous driving.
It would have been obvious to one of ordinary skill in the art at the time of filing to have combined the teachings of Tatsubori with those of Dockter and Lim in order to provide additional details regarding the target location, such as the latitude, longitude and altitude information, to verify that the measured information (Dockter, paragraph 0009) is correct for accurate calculations during operation.

Regarding claim 7, Dockter discloses determining a distance to the target in a navigation system as being in a NED frame (paragraphs 0039-0049), which is calculated relative to the local geodetic spherical coordinate system using the equation of paragraph 0047 in the same manner as that claimed.

Regarding claim 8, in the combination, the precise location of the vehicle is determined in latitude, longitude and altitude using the known location and the relative position of the vehicle, since all data is incorporated into the tracking (Dockter, paragraph 0029).

Regarding claim 9, Dockter and Lim teach the claimed invention but do not explicitly disclose processing the image to remove distortions caused by an orientation of the system relative to the orientation of the machine-readable image. Tatsubori discloses in paragraph 0022 the use of a transformation technique to correct for geometric distortions and deformations. It would have been obvious to one of ordinary skill in the art at the time of filing to have combined the teachings of Tatsubori with those of Dockter and Lim in order to ensure the correct information is recorded.
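The position-derivation step recited in claims 1 and 7 follows standard line-of-sight geometry: azimuth, elevation angle and slant range define a NED (north-east-down) offset from the sensor to the marker, and the vehicle position can be recovered by subtracting that offset from the marker's known decoded location. A minimal sketch under a flat-earth, small-offset approximation; this illustrates the general technique only, not the specific equations of the claims or of Dockter's paragraph 0047:

```python
import math

# Mean Earth radius; a small-offset local-tangent-plane approximation for
# illustration, not the claimed geodetic equations.
EARTH_RADIUS_M = 6_371_000.0

def los_to_ned(azimuth_deg: float, elevation_deg: float, slant_range_m: float):
    """Convert a line-of-sight measurement to a NED offset (sensor -> target)."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    horiz = slant_range_m * math.cos(el)
    north = horiz * math.cos(az)
    east = horiz * math.sin(az)
    down = -slant_range_m * math.sin(el)  # positive elevation looks up, so down goes negative
    return north, east, down

def vehicle_position(marker_lat, marker_lon, marker_alt, az_deg, el_deg, rng_m):
    """Known marker location minus the measured offset gives the vehicle location."""
    n, e, d = los_to_ned(az_deg, el_deg, rng_m)
    lat = marker_lat - math.degrees(n / EARTH_RADIUS_M)
    lon = marker_lon - math.degrees(e / (EARTH_RADIUS_M * math.cos(math.radians(marker_lat))))
    alt = marker_alt + d  # NED down is positive toward the ground
    return lat, lon, alt
```

For example, a marker seen straight down (elevation -90°) at a 100 m slant range places the vehicle 100 m above the marker's known altitude. The result of this computation is the "precise location" that seeds the INS alignment.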
Regarding claim 10, Dockter teaches an apparatus for determining a location for alignment of an inertial navigation system (paragraph 0009) comprising: an imaging and ranging system (camera, paragraph 0011) mounted on a gimbal (paragraph 0011) and configured to be aimed at a known location (target point 16, which may be a landing zone on the deck of a movable ship and is therefore captured by the camera as an image, paragraphs 0013, 0062); and a processor (paragraph 0009) configured to execute program instructions which perform the method of: determining an azimuth, an elevation angle and a slant range to the known location (paragraph 0029); capturing a machine-readable image via the camera; decoding the known location (the target point detected is known and processed by the computing means, paragraph 0009); deriving a precise location of the vehicle from the known location, the azimuth, the elevation angle and the slant range (paragraph 0036 discloses using the data to control the aircraft's position); and aligning the INS based on the precise location (paragraph 0065 discloses determining the INS location of the aircraft under certain circumstances).

Dockter does not explicitly disclose the target point as being a machine-readable image. Lim teaches a method for navigating an aerial vehicle which is similar to that of Dockter, wherein the vehicle 116 can be directed to a landing zone on the deck of a movable ship. The landing zone further comprises an image (landing circle 1200) which is tracked by the vehicle and processed by the imaging system of the vehicle 116.
Since Lim is analogous art in the area of aerial vehicle navigation and tracking, it would have been obvious to one of ordinary skill in the art at the time of filing to have combined the teachings of the landing circle of Lim with the system of Dockter to provide a machine-readable image on the deck of the ship in Dockter to clearly indicate the landing zone at its known location for accurate tracking and landing.

In combination, Dockter and Lim do not explicitly disclose the machine-readable image as comprising the known location encoded therein. Tatsubori teaches a system for navigation in which image guidance indicators 120 are positioned at known locations (paragraph 0021) and comprise matrix barcodes containing known location information such as high-definition coordinates (paragraph 0021), and teaches the guidance indicator as including a signature (security) code to identify the authenticity of the indicator (paragraph 0049). A vehicle such as an autonomous drone or self-driving vehicle can read the guidance indicators to extract the information therein and determine the position of the vehicle (and the image) therefrom (paragraph 0023). It would have been obvious to one of ordinary skill in the art at the time of filing to have combined the teachings of Tatsubori with those of Dockter and Lim, applying Tatsubori's teachings regarding matrix barcodes comprising location data and security/authentication to an aerial vehicle such as that of Dockter and Lim, in order to provide similar location information to aid in navigation of the vehicle.

Regarding claim 11, Dockter teaches using a camera for recording the target and a laser rangefinder for determining the distance to the target. Dockter does not explicitly teach the laser rangefinder as being LIDAR. Lim teaches the use of LIDAR for determining a landing area of the vehicle (paragraph 0015).
Since Lim and Dockter are analogous art within the field of aerial navigation, it would have been obvious to one of ordinary skill in the art at the time of filing to have substituted the LIDAR of Lim for the rangefinder of Dockter, since it was known to use LIDAR to accurately determine distance to an object such as the target of Dockter.

Regarding claim 12, the apparatus of Dockter comprises a gimbal which can pivot in at least two axes and to which the imaging system is mounted.

Regarding claim 13, in the combination, determining the azimuth and the elevation angle comprises determining the azimuth and the elevation angle based on an orientation of the imaging and ranging system mounted on the gimbal, since the imaging system of Dockter is mounted to the gimbal (paragraph 0015).

Regarding claim 14, Dockter and Lim teach the claimed method but do not explicitly disclose the machine-readable image as comprising a bar code, 2D bar code, Data Matrix code, QR code, High Capacity Color Barcode or other standardized or proprietary geometric coding scheme as claimed. Tatsubori discloses the machine-readable image as being a matrix or QR code (paragraph 0023). It would have been obvious to one of ordinary skill in the art at the time of filing to have combined the teachings of Tatsubori with regard to the QR code with Dockter and Lim in order to store more information, such as the location information, in the code, since QR codes are capable of doing so.

Regarding claim 15, Dockter and Lim disclose the claimed method but do not explicitly teach decoding the known location by decoding latitude, longitude and altitude of the known location from the machine-readable image as claimed. Tatsubori teaches a method of determining a position for a vehicle in which machine-readable images 120 include latitude, longitude and altitude information for their location (paragraph 0027).
A traveling vehicle can read the information with a camera and process it for autonomous driving. It would have been obvious to one of ordinary skill in the art at the time of filing to have combined the teachings of Tatsubori with those of Dockter and Lim in order to provide additional details regarding the target location, such as the latitude, longitude and altitude information, to verify that the measured information (Dockter, paragraph 0009) is correct for accurate calculations during operation.

Regarding claim 16, Dockter discloses determining a distance to the target in a navigation system as being in a NED frame (paragraphs 0039-0049), which is calculated relative to the local geodetic spherical coordinate system using the equation of paragraph 0047 in the same manner as that claimed.

Regarding claim 17, in the combination above, the precise location of the vehicle is determined in latitude, longitude and altitude using the known location and the relative position of the vehicle, since all data is incorporated into the tracking (Dockter, paragraph 0029).

Regarding claim 18, the system of Dockter aligns the INS based on all available data, including the precise location of the vehicle determined using the known location and a relative position of the vehicle at its current position (paragraph 0009 discloses using the GPS-determined location and a target point below the vehicle as a relative position).
Regarding claim 19, Dockter teaches a method for determining a location for alignment of an inertial navigation system (paragraph 0009) which, when performed by a processor executing instructions stored on a non-transitory computer-readable medium, comprises: determining an azimuth, an elevation angle and a slant range to the known location (paragraph 0029); capturing a machine-readable image via the camera; decoding the known location (the target point detected is known and processed by the computing means, paragraph 0009); deriving a precise location of the vehicle from the known location, the azimuth, the elevation angle and the slant range (paragraph 0036 discloses using the data to control the aircraft's position); and aligning the INS based on the precise location (paragraph 0065 discloses determining the INS location of the aircraft under certain circumstances).

Dockter does not explicitly disclose the target point as being a machine-readable image. Lim teaches a method for navigating an aerial vehicle which is similar to that of Dockter, wherein the vehicle 116 can be directed to a landing zone on the deck of a movable ship. The landing zone further comprises an image (landing circle 1200) which is tracked by the vehicle and processed by the imaging system of the vehicle 116. Since Lim is analogous art in the area of aerial vehicle navigation and tracking, it would have been obvious to one of ordinary skill in the art at the time of filing to have combined the teachings of the landing circle of Lim with the system of Dockter to provide a machine-readable image on the deck of the ship in Dockter to clearly indicate the landing zone at its known location for accurate tracking and landing.

In combination, Dockter and Lim do not explicitly disclose the machine-readable image as comprising the known location encoded therein.
Tatsubori teaches a system for navigation in which image guidance indicators 120 are positioned at known locations (paragraph 0021) and comprise matrix barcodes containing known location information such as high-definition coordinates (paragraph 0021), and further teaches the guidance indicator as including a signature code to identify the authenticity of the indicator (paragraph 0049). A vehicle such as an autonomous drone or self-driving vehicle can read the guidance indicators to extract the information therein and determine the position of the vehicle (and the image) therefrom (paragraph 0023). It would have been obvious to one of ordinary skill in the art at the time of filing to have combined the teachings of Tatsubori with those of Dockter and Lim, applying Tatsubori's teachings regarding matrix barcodes comprising location data and security/authentication to an aerial vehicle such as that of Dockter and Lim, in order to provide similar location information to aid in navigation of the vehicle.

Regarding claim 20, Dockter discloses determining a distance to the target in a navigation system as being in a NED frame (paragraphs 0039-0049), which is calculated relative to the local geodetic spherical coordinate system using the equation of paragraph 0047 in the same manner as that claimed.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Mark A. Shabman, whose telephone number is (571) 272-8589. The examiner can normally be reached M-F 8:00-4:30 EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Laura Martin, can be reached at 571-272-2160.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/MARK A SHABMAN/
Primary Examiner, Art Unit 2855

Prosecution Timeline

Apr 05, 2023
Application Filed
May 01, 2025
Non-Final Rejection — §103
Jul 24, 2025
Interview Requested
Aug 04, 2025
Applicant Interview (Telephonic)
Aug 04, 2025
Examiner Interview Summary
Aug 06, 2025
Response Filed
Sep 30, 2025
Final Rejection — §103
Dec 08, 2025
Response after Non-Final Action
Jan 05, 2026
Request for Continued Examination
Jan 22, 2026
Response after Non-Final Action
Feb 19, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596102
RESONATOR STRUCTURE FOR MASS SENSING
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12596050
DEVICE AND METHOD FOR LEAKAGE DETECTING OF CRUDE OIL TANK FLOOR
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12590542
Method for Detecting Stress State of Roadway Surrounding Rocks Based on Three-Dimensional Electric Potential Response
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12584837
DEVICE FOR MEASURING PHYSICOCHEMICAL PROPERTIES OF A DEFORMABLE MATRIX, IMPLEMENTATION METHOD AND USES
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12575496
SYSTEM AND METHOD FOR TERAHERTZ FREQUENCY CROP CONTAMINATION DETECTION AND HANDLING
Granted Mar 17, 2026 (2y 5m to grant)
Based on the examiner's 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 84%
With Interview: 98% (+14.0%)
Median Time to Grant: 2y 9m
PTA Risk: High
Based on 1023 resolved cases by this examiner. Grant probability derived from career allow rate.
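The projected figures are simple derivations from the examiner's career record: the grant probability is the career allow rate, and the with-interview figure appears to be that base rate plus the interview lift. A quick check, assuming straight addition of the lift (the dashboard's exact model is not stated):

```python
# Figures from the examiner record above.
granted, resolved = 862, 1023
base_rate = granted / resolved
print(f"{base_rate:.1%}")  # 84.3%, reported as 84%

interview_lift = 0.14  # the stated +14.0% interview lift
with_interview = base_rate + interview_lift
print(f"{with_interview:.1%}")  # 98.3%, reported as 98%
```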
