Prosecution Insights
Last updated: April 18, 2026
Application No. 18/234,939

METHOD AND SYSTEM FOR ADJUSTING INFORMATION SYSTEM OF MOBILE MACHINE

Status: Final Rejection (§103)
Filed: Aug 17, 2023
Examiner: JAMES, DOMINIQUE NICOLE
Art Unit: 2666
Tech Center: 2600 — Communications
Assignee: Katholieke Universiteit Leuven
OA Round: 2 (Final)

Grant Probability: 76% (Favorable)
OA Rounds: 3-4
To Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% (16 granted / 21 resolved; +14.2% vs TC avg, above average)
Interview Lift: +38.5% across resolved cases with interview
Avg Prosecution: 3y 4m (typical timeline); 27 currently pending
Total Applications: 48 across all art units

Statute-Specific Performance

§101: 19.5% (-20.5% vs TC avg)
§103: 51.5% (+11.5% vs TC avg)
§102: 14.6% (-25.4% vs TC avg)
§112: 14.3% (-25.7% vs TC avg)

Based on career data from 21 resolved cases; Tech Center average values are estimates.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

This action is in response to the application filed on January 30, 2026. Claims 1 and 15 are amended, and claim 16 has been added. Thus, claims 1-16 are pending for examination in this application.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on December 09, 2025 is being considered by the examiner.

Response to Amendments

Applicant's arguments regarding the 35 U.S.C. 112(f) interpretations previously set forth in the Non-Final Office Action mailed November 03, 2025, are persuasive. Accordingly, the 35 U.S.C. 112(f) interpretations are withdrawn.

Response to Arguments

Applicant's arguments filed January 30, 2026, regarding the rejection(s) of claim(s) 1-16 have been fully considered but are moot because they do not apply to the new combination of references necessitated by Applicant's amendments, which introduces new prior art (Shambik et al., US 20220027642) into the current rejection.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 9, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Yous et al., US 9196054, in view of Shambik et al., US 20220027642.

Regarding claim 1, Yous teaches a method (see Yous, Abstract, "An improved method and a system are disclosed for recovering a three-dimensional (3D) scene structure from a plurality of two-dimensional (2D) image frames obtained from imaging means") for adjusting an information system (see Yous, Col 3, Lines 38-44, "the system may be usefully embodied in the onboard navigation system of a large complex vehicle such as an aircraft, land transport vehicle or sea transport vehicle. More preferably still, the system may be embodied in the flight management system of an airliner"; the onboard navigation system is considered to be an information system) of a mobile machine (see Yous, Col 3, Lines 38-44, "a large complex vehicle such as an aircraft, land transport vehicle or sea transport vehicle"; the aircraft is considered to be a mobile machine) based upon information acquired from monocular images (see Yous, Col 5, Lines 4-8, "A plurality of image frames 101 are continuously obtained from imaging means 105, typically at least one digital camera"; a digital camera is considered to capture monocular images), the information system being configured to calculate 3D information relative to a scene in which the mobile machine is moving, the method comprising (see Yous, Col 5, Lines 23-30, "An 'anchor-based minimization' module 104 subjects the generated 3D rays 204.sub.i to an anchor-based minimization process, for determining camera motion parameters 108 and 3D scene points coordinates 109, whereby a structure of said 3D scene 203 is recovered"):

acquiring at least a first image of the scene at a first time with an imaging device and a second image of the scene at a second time with the imaging device (see Yous, Col 3, Lines 48-52, "extracting a first set of 2D features from a first image frame, extracting a second set of 2D features from a second image frame," and Col 8, Lines 2-9, "One or more of the cameras 105 may be legacy cameras that are already part of the aircraft onboard avionics system 402, but the image frame data output 101.sub.i of which is processed by the set of instructions embodying the state machine 100 of FIG. 1, as and when the aircraft onboard avionics system 402 is configured with the set of instructions");

detecting one or more scene features in the first image and the second image (see Yous, Col 3, Lines 48-52, "extracting a first set of 2D features from a first image frame, extracting a second set of 2D features from a second image frame");

matching the one or more scene features across the first image and the second image based upon detection of the one or more scene features (see Yous, Col 3, Lines 50-54, "matching the second set with the first set, such that at least one pair of matched 2D features refers to a same 3D point in a 3D scene");

estimating an egomotion of the mobile machine based upon the matching of the one or more scene features across the first image and the second image (see Yous, Col 3, Lines 53-58, "subjecting the generated 3D rays to an anchor-based minimization process, for determining camera motion parameters and 3D scene points coordinates, thereby recovering a structure of said 3D scene"; determining camera motion parameters is considered to be estimating an egomotion); and

adjusting the information system (see Yous, Fig. 6A-6C, Fig. 7, and Col 8, Lines 26-44, "The avionics or navigation system 402 of the airliner 401 includes at least one instrument display 601 by way of a Human-Machine Interface ('HMI') with the pilot crew, and the system 100 is adapted to configure same with a user-configurable user interface 602 in which a structure 603 of the 3D scene determined by the system 100 is output. ... The operating mode selection may effectively correspond to a user adjustment of the range threshold 504, taking into account the range of ground speeds associated with the type of manoeuvring.").
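The claim-1 mapping above cites Yous's "ray generation" by back-projection, where the intersection of two 3D rays from matched 2D features recovers the shared 3D scene point. A minimal sketch of that idea, not Yous's actual algorithm: since back-projected rays from noisy features rarely intersect exactly, a common stand-in is the midpoint of the shortest segment between the two rays (all names here are illustrative):

```python
# Minimal sketch (not the patent's implementation): recover a 3D scene
# point from two back-projected rays, one per camera pose, by taking the
# midpoint of the closest approach between the (possibly skew) rays.

def sub(a, b): return [a[i] - b[i] for i in range(3)]
def add(a, b): return [a[i] + b[i] for i in range(3)]
def scale(v, s): return [x * s for x in v]
def dot(a, b): return sum(a[i] * b[i] for i in range(3))

def triangulate_midpoint(o1, d1, o2, d2):
    """Closest-approach midpoint of rays o1 + t*d1 and o2 + s*d2."""
    r = sub(o1, o2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b  # zero only for parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    p1 = add(o1, scale(d1, t))
    p2 = add(o2, scale(d2, s))
    return scale(add(p1, p2), 0.5)

# Two camera centres one unit apart, both rays aimed at (0, 0, 5):
point = triangulate_midpoint([0, 0, 0], [0, 0, 1],
                             [1, 0, 0], [-0.2, 0, 1])  # -> [0.0, 0.0, 5.0]
```

With perfectly intersecting rays, as here, the midpoint coincides with the true intersection; with noisy rays it degrades gracefully to the nearest compromise point.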
Yous does not expressly teach that the first image and the second image are monocular images, or adjusting the information system, including hardware and software associated with the imaging device, by taking into account the estimation of the egomotion of the mobile machine and in response to an automatic diagnostic of irregularities corresponding to the imaging device.

However, Shambik, in a similar invention in the same field of endeavor, teaches the first image and the second image are monocular images (see Shambik, Paragraph [0135], "the first camera may be connected to a first image processor to perform monocular image analysis of images provided by the first camera, and the second camera may be connected to a second image processor to perform monocular image analysis of images provided by the second camera"); and adjusting the information system, including hardware and software associated with the imaging device, by taking into account the estimation of the egomotion of the mobile machine and in response to an automatic diagnostic of irregularities corresponding to the imaging device (see Shambik, Paragraph [0334], "vehicle 200 may detect erroneous point 2491 for example, by detecting anomaly 2495 in the image, or by identifying the error based on detected lane mark points before and after the anomaly. Based on detecting the anomaly, the vehicle may omit point 2491 or may adjust it to be in line with other detected points").

Yous and Shambik are analogous art because they are both in the field of endeavor of estimating camera motion of a 3D scene.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to perform monocular image analysis of images provided by the first and second cameras, detecting and mitigating errors based on anomalies and adjusting the system, as taught in the method of Shambik, in the method of Yous, to analyze the surroundings of vehicle 200 and navigate vehicle 200 in response to the analysis (see Shambik, Paragraph [0132]).

Regarding claim 2, Yous in view of Shambik further teaches the method according to claim 1, wherein the estimating the egomotion of the mobile machine based upon the matching of the one or more scene features across the first image and the second image includes applying one or more of a generalized camera model and linear approach to obtain a rotation of the mobile machine from the first time to the second time and a translation of the mobile machine from the first time to the second time (see Yous, Fig. 2, Fig. 3, and Col 5, Lines 15-32, "A 'ray generation' module 103 of the system generates a 3D ray 204.sub.i by back-projection from each 2D feature 201.sub.i, whereby the intersection of the two 3D rays 204.sub.i, 204.sub.i−1 of the matched features 201.sub.i, 201.sub.i−1 corresponds to the related 3D points 202.sub.i, 202.sub.i−1. ... Camera motion is parameterized as a rotation matrix and a translation vector T"). The rationale of claim 1 has been applied herein.

Regarding claim 3, Yous in view of Shambik further teaches the method according to claim 1, wherein: the acquiring the first image with the imaging device includes acquiring a first image with a first imaging device and acquiring a first image with a second imaging device (see Shambik, Paragraph [0135], "a first camera and a second camera (e.g., image capture devices 122 and 124) may be positioned at the front and/or the sides of a vehicle (e.g., vehicle 200). The first camera may have a field of view that is greater than, less than, or partially overlapping with, the field of view of the second camera. In addition, the first camera may be connected to a first image processor to perform monocular image analysis of images provided by the first camera, and the second camera may be connected to a second image processor to perform monocular image analysis of images provided by the second camera"); and the acquiring the second image with the imaging device includes acquiring a second image with the first imaging device and acquiring a second image with the second imaging device (see Shambik, Paragraph [0114], "Image capture devices 124 and 126 may acquire a plurality of second and third images relative to a scene associated with the vehicle 200").

Regarding claim 9, Yous in view of Shambik further teaches the method according to claim 1, further comprising transmitting the first image with the imaging device and the second image with the imaging device to an electronic control system for correcting the first image with the imaging device and the second image with the imaging device by converting first viewpoint parameters of the first image and the second image into second viewpoint parameters (see Yous, Fig. 1, system 100, and Col 3, Lines 13-18, "the first motion transforms a ray from a first view into a corresponding view in a second view and a second motion comprises a rotation around a selected anchor point"). The rationale of claim 1 has been applied herein.

Regarding claim 15, claim 15 recites a system for adjusting an information system comprising one or more processors configured to perform the same limitations as claim 1; therefore, the rejection and rationale are analogous to those made for claim 1.
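The claim-2 citation quotes Yous parameterizing camera motion as a rotation matrix plus a translation vector T. A minimal illustrative sketch of what that parameterization does (the function names and the example motion are assumed for illustration, not taken from Yous):

```python
# Illustrative sketch of the rotation-matrix-plus-translation-vector
# parameterization of camera motion: a scene point X in the first frame
# maps to X' = R @ X + T in the second frame.

import math

def rot_z(theta):
    """3x3 rotation about the z-axis (yaw), as nested lists."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def apply_motion(R, T, X):
    """Apply the rigid motion (R, T) to a 3D point X."""
    return [sum(R[i][j] * X[j] for j in range(3)) + T[i] for i in range(3)]

# A 90-degree yaw plus one unit of translation along z:
R = rot_z(math.pi / 2)
T = [0.0, 0.0, 1.0]
moved = apply_motion(R, T, [1.0, 0.0, 0.0])  # -> approximately [0, 1, 1]
```

Estimating egomotion then amounts to solving for the (R, T) that best explains the matched features, which is what Yous's anchor-based minimization performs.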
Yous in view of Shambik further teaches one or more processors (see Shambik, Paragraph [0080], "In some embodiments, system 100 may include a processing unit 110 ... Processing unit 110 may include one or more processing devices.").

Claims 4-8 are rejected under 35 U.S.C. 103 as being unpatentable over Yous et al., US 9196054, in view of Shambik et al., US 20220027642, in view of Liao et al., US 20170277197.

Regarding claim 4, Yous in view of Shambik does not expressly teach the method of claim 3, wherein the adjusting the information system includes adjusting one or more of the first imaging device and the second imaging device based upon: estimating one or more of egomotions of the mobile machine based upon matching one or more scene features across the first image with the first imaging device and the second image with the first imaging device, and estimating one or more of egomotions of the mobile machine based upon matching one or more scene features across the first image with the second imaging device and the second image with the second imaging device.

However, Liao, in a similar invention in the same field of endeavor, teaches wherein the adjusting the information system includes adjusting one or more of the first imaging device and the second imaging device based upon: estimating one or more of egomotions of the mobile machine based upon matching one or more scene features across the first image with the first imaging device and the second image with the first imaging device (see Liao, Fig. 4 and Paragraph [0032], "A plurality of feature points are tracked from the first image pair to a second image pair," and Paragraph [0071], "As shown in the figure, two stereo image pairs are used to estimate the robot's movement between time s−1 and time s"); and estimating one or more of egomotions of the mobile machine based upon matching one or more scene features across the first image with the second imaging device and the second image with the second imaging device (see Liao, Fig. 4 and Paragraph [0032], "A plurality of feature points are tracked from the first image pair to a second image pair," and Paragraph [0071], "As shown in the figure, two stereo image pairs are used to estimate the robot's movement between time s−1 and time s"; estimating robot movements is considered to be egomotion).

Yous, Shambik, and Liao are analogous art because they are all in the field of endeavor of estimating camera motion of a 3D scene. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for a plurality of features tracked from the first image pair to a second image pair to track motion, as taught in the method of Liao, in the method of Yous in view of Shambik, to aid with obstacle avoidance (see Liao, Paragraph [0025]).

Regarding claim 5, Yous in view of Shambik does not expressly teach the method according to claim 1, further comprising estimating intrinsic parameters of the one or more imaging devices based upon the matching of the one or more scene features across the first image with the imaging device and the second image with the imaging device. However, Liao, in a similar invention in the same field of endeavor, teaches estimating intrinsic parameters of the one or more imaging devices based upon the matching of the one or more scene features across the first image with the imaging device and the second image with the imaging device (see Liao, Fig. 4 and Paragraph [0067], "Camera calibration requires capturing multiple images of the calibration pattern in different poses, in order to accurately estimate the intrinsic camera parameter"). Yous, Shambik, and Liao are analogous art because they are all in the field of endeavor of estimating camera motion of a 3D scene. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to calibrate by capturing multiple images of the calibration pattern in different poses to estimate the intrinsic camera parameter, and to use a ray-ray bundle adjustment, as taught in the method of Liao, in the method of Yous in view of Shambik, to aid with obstacle avoidance (see Liao, Paragraph [0025]).

Regarding claim 6, Yous in view of Shambik in view of Liao further teaches the method according to claim 5, further comprising performing a bundle adjustment based upon the estimation of the intrinsic parameters of the imaging device (see Yous, Col Lines, "the method comprises the step of using a ray-ray bundle adjustment"). The rationale of claim 5 has been applied herein.

Regarding claim 7, Yous in view of Shambik does not expressly teach the method according to claim 1, further comprising estimating extrinsic parameters of the imaging device by unifying the matching of the one or more scene features across a plurality of images captured by the imaging device. However, Liao, in a similar invention in the same field of endeavor, teaches estimating extrinsic parameters of the imaging device by unifying the matching of the one or more scene features across a plurality of images captured by the imaging device (see Liao, Fig. 4 and Paragraph [0067], "A first step was to obtain the intrinsic and extrinsic parameters of the cameras. Both the camera intrinsic and extrinsic parameters were estimated using the Camera Calibration Toolbox for Matlab.").
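The claim-5 citation concerns estimating the camera's intrinsic parameters through calibration. As an illustration of what those intrinsics encode (the focal lengths and principal point below are assumed example values, not from Liao), a standard pinhole projection sketch:

```python
# Sketch of what estimated intrinsic parameters represent: a pinhole
# camera with focal lengths (fx, fy) and principal point (cx, cy)
# projects a camera-frame 3D point (X, Y, Z) to pixel coordinates (u, v).
# All numeric values below are illustrative assumptions.

def project(point, fx, fy, cx, cy):
    X, Y, Z = point
    u = fx * X / Z + cx
    v = fy * Y / Z + cy
    return u, v

# A point one metre ahead and 0.1 m to the right, with an 800-pixel
# focal length and the principal point at the centre of a 1280x720 image:
u, v = project((0.1, 0.0, 1.0), 800, 800, 640, 360)  # -> approximately (720.0, 360.0)
```

Calibration, as in Liao's description, fits these parameters so that projected 3D points land on their observed pixel locations across many views of a known pattern.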
Yous, Shambik, and Liao are analogous art because they are all in the field of endeavor of estimating camera motion of a 3D scene. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to obtain extrinsic parameters of the cameras, as taught in the method of Liao, in the method of Yous in view of Shambik, to aid with obstacle avoidance (see Liao, Paragraph [0025]).

Regarding claim 8, Yous in view of Shambik in view of Liao teaches the method according to claim 7, wherein the adjusting the information system includes accounting for the estimation of the extrinsic parameters of the imaging device (see Liao, Fig. 4 and Paragraph [0067], "A first step was to obtain the intrinsic and extrinsic parameters of the cameras. Both the camera intrinsic and extrinsic parameters were estimated using the Camera Calibration Toolbox for Matlab."). The rationale of claim 7 has been applied herein.

Claims 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Yous et al., US 9196054, in view of Shambik et al., US 20220027642, in view of Tamura et al., WO 2020164744.

Regarding claim 10, Yous in view of Shambik does not expressly teach the method according to claim 9, wherein the correcting the first image with the imaging device and the second image with the imaging device includes conversion being based upon conversion information associated with a virtualization record stored by the electronic control system. However, Tamura, in a similar invention in the same field of endeavor, teaches wherein the correcting the first image with the imaging device and the second image with the imaging device includes conversion being based upon conversion information associated with a virtualization record stored by the electronic control system (see Tamura, Fig. 1, electronic control unit (ECU 10), and Paragraph [0009], "convert the first viewpoint parameters of the captured image data into virtual viewpoint parameters based on the conversion information associated with a virtualization record stored by the storage means, to result in the virtual image view, wherein the virtualization record is identified at least based on the identifier, and execute at least one driver assistance and/or automated driving function based on the virtual image view").

Yous, Shambik, and Tamura are analogous art because they are all in the field of endeavor of estimating the 3D scene around a vehicle. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to convert first viewpoint parameters into virtual viewpoint parameters based on the conversion information associated with a virtualization record stored by the storage means, as taught in the method of Tamura, in the method of Yous in view of Shambik, so that one processing application may be implemented across an entire fleet of mass-produced models (see Tamura, Paragraph [0010]).

Regarding claim 11, Yous in view of Shambik does not expressly teach the method according to claim 9, wherein the correcting the first image with the imaging device and the second image with the imaging device includes conversion being based upon conversion information including one or more of distortion compensation information, image rectification information, image refraction information, and rotational information.
However, Tamura, in a similar invention in the same field of endeavor, teaches wherein the correcting the first image with the imaging device and the second image with the imaging device includes conversion being based upon conversion information including one or more of distortion compensation information, image rectification information, image refraction information, and rotational information (see Tamura, Paragraph [0012], "The conversion information may include at least one of distortion compensation information, image rectification information, image refraction information, and rotational information"). Yous, Shambik, and Tamura are analogous art because they are all in the field of endeavor of estimating the 3D scene around a vehicle. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the conversion information to include at least one of distortion compensation information, image rectification information, image refraction information, and rotational information, as taught in the method of Tamura, in the method of Yous in view of Shambik, so that one processing application may be implemented across an entire fleet of mass-produced models (see Tamura, Paragraph [0010]).

Claims 12-13 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Yous et al., US 9196054, in view of Shambik et al., US 20220027642, in view of Bichu et al., US 20210082086.

Regarding claim 12, Yous in view of Shambik does not expressly teach the method according to claim 1, wherein the adjusting the information system includes evaluating one or more of the first image with the imaging device and the second image with the imaging device to determine whether the imaging device from which the image was acquired is properly calibrated, and calibrating the imaging device if it is determined that the imaging device from which the image was acquired is not properly calibrated.

However, Bichu, in a similar invention in the same field of endeavor, teaches wherein the adjusting the information system includes evaluating one or more of the first image with the imaging device and the second image with the imaging device to determine whether the imaging device from which the image was acquired is properly calibrated, and calibrating the imaging device if it is determined that the imaging device from which the image was acquired is not properly calibrated (see Bichu, Paragraph [0088], "in some embodiments, calibrating the first image comprises applying a first correction to the first image to form a first corrected image and calibrating the second image comprises applying a second correction to the second image to form a second corrected image. ... In some instances, the first correction and the second correction are based at least in part on a 3D calibration process configured to correct for lens distortion and camera misalignment.").

Yous, Shambik, and Bichu are analogous art because they are all in the field of endeavor of capturing images and videos from multiple cameras. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to calibrate the first image by applying a first correction and to calibrate the second image by applying a second correction to form a second corrected image, where in some instances the first correction and second correction are based on a 3D camera calibration process to correct for camera misalignment, as taught in the method of Bichu, in the method of Yous in view of Shambik, to improve temporal continuity between a subsequent frame and a previous frame in a sequence of video frames (see Bichu, Abstract).
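The Bichu citation for claim 12 mentions corrections for lens distortion (and Tamura's claim-11 conversion information includes "distortion compensation information"). A hedged sketch of one generic radial-distortion model, a single-coefficient form applied in normalized image coordinates; the model choice and the coefficient value are illustrative assumptions, not taken from either reference:

```python
# Generic one-coefficient radial distortion sketch: a point (x, y) in
# normalized image coordinates is scaled by (1 + k1 * r^2). The inverse
# (undistortion) has no closed form, so it is recovered here by simple
# fixed-point iteration. The value of k1 is an illustrative assumption.

def distort(x, y, k1):
    r2 = x * x + y * y
    f = 1.0 + k1 * r2
    return x * f, y * f

def undistort(xd, yd, k1, iters=20):
    """Invert the radial model by iterating x = xd / (1 + k1 * r(x)^2)."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        f = 1.0 + k1 * r2
        x, y = xd / f, yd / f
    return x, y

xd, yd = distort(0.3, -0.2, k1=-0.1)
x, y = undistort(xd, yd, k1=-0.1)  # recovers roughly (0.3, -0.2)
```

Production calibration models typically carry several radial and tangential coefficients, but the round-trip structure (apply correction, or invert it iteratively) is the same.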
Regarding claim 13, Yous in view of Shambik in view of Bichu further teaches the method according to claim 12, wherein the evaluating the one or more of the first image with the imaging device and the second image with the imaging device includes comparing one or more scene features present in one or more of a first image with a first imaging device and a second image with the first imaging device to one or more scene features present in one or more of a first image with a second imaging device and a second image with the second imaging device to determine whether the scene features captured by the first imaging device correlate with the scene features captured by the second imaging device (see Shambik, Paragraph [0148], "stereo image analysis module 404 may include instructions for detecting a set of features within the first and second sets of images, such as lane markings, vehicles, pedestrians, road signs, highway exit ramps, traffic lights, hazardous objects, and the like"). The rationale of claim 12 has been applied herein.

Regarding claim 16, Yous in view of Shambik in view of Bichu further teaches the method according to claim 12, wherein the calibration of the imaging device is based on information corresponding to the imaging device provided to the one or more processors via an imaging device identifier (see Shambik, Paragraph [0414], "calibration (e.g., RT) is determined between cameras providing the images. This may be carried out using some initial understanding of the 3D information visible in frames from the corner cameras to re-draw the rolling shutter images as global shutter images. For example, rolling shutter correction using 3D information of the scene, exposure tune for each row of pixels, and Ego motion of the camera around a timestamp"). The rationale of claim 12 has been applied herein.

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Yous et al., US 9196054, in view of Shambik et al., US 20220027642, in view of Bichu et al., US 20210082086, in further view of Lee et al., US 20240070916.

Regarding claim 14, Yous in view of Shambik in view of Bichu does not expressly teach the method according to claim 12, wherein the calibrating the imaging device includes using a calibration configuration of a first imaging device to calibrate a second imaging device. However, Lee, in a similar invention in the same field of endeavor, teaches wherein the calibrating the imaging device includes using a calibration configuration of a first imaging device to calibrate a second imaging device (see Lee, Paragraph [0018], "The controller may be configured to estimate a location of the vehicle by calibrating capture areas of the second camera and the third camera based on the calibrated capture area of the first camera"). Yous, Shambik, Bichu, and Lee are analogous art because they are all in the field of endeavor of capturing images from multiple cameras. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to estimate a location of the vehicle by calibrating capture areas of the second camera based on the calibrated area of the first camera, as taught in the method of Lee, in the method of Yous in view of Shambik in view of Bichu, for automatically calibrating a plurality of cameras (see Lee, Paragraph [0006]).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DOMINIQUE JAMES whose telephone number is (703) 756-1655. The examiner can normally be reached 9:00 am - 6:00 pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Emily Terrell, can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DOMINIQUE JAMES/
Examiner, Art Unit 2666

/MING Y HON/
Primary Examiner, Art Unit 2666

Prosecution Timeline

Aug 17, 2023: Application Filed
Oct 28, 2025: Non-Final Rejection — §103
Jan 14, 2026: Applicant Interview (Telephonic)
Jan 14, 2026: Examiner Interview Summary
Jan 30, 2026: Response Filed
Apr 02, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591976: CELL SEGMENTATION IMAGE PROCESSING METHODS (granted Mar 31, 2026; 2y 5m to grant)
Patent 12567138: REGISTRATION METROLOGY TOOL USING DARKFIELD AND PHASE CONTRAST IMAGING (granted Mar 03, 2026; 2y 5m to grant)
Patent 12548159: SCENE PERCEPTION SYSTEMS AND METHODS (granted Feb 10, 2026; 2y 5m to grant)
Patent 12462681: Detection of Malfunctions of the Switching State Detection of Light Signal Systems (granted Nov 04, 2025; 2y 5m to grant)
Patent 12462346: MACHINE LEARNING BASED NOISE REDUCTION CIRCUIT (granted Nov 04, 2025; 2y 5m to grant)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 76%
With Interview: 99% (+38.5%)
Median Time to Grant: 3y 4m
PTA Risk: Moderate

Based on 21 resolved cases by this examiner. Grant probability derived from career allow rate.
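Per the footnote, the headline grant probability is derived from the career allow rate; with the counts shown on this page (16 granted of 21 resolved), that is a simple ratio. A sketch of the assumed derivation (the exact formula is an assumption based on the footnote, not a documented methodology):

```python
# Assumed derivation of the displayed figures from the counts on this
# page: allow rate = granted / resolved, and the Tech Center average is
# implied by the stated "+14.2% vs TC avg" delta.

granted, resolved = 16, 21
allow_rate = granted / resolved                         # 0.7619..., shown as 76%
tc_delta = 14.2                                         # "+14.2% vs TC avg"
implied_tc_avg = round(allow_rate * 100 - tc_delta, 1)  # ~62.0%

print(f"{allow_rate:.0%}")  # 76%
```

The 99% with-interview figure is not reproducible from the ratio alone, so it presumably comes from a separate interview-conditioned cohort and is left out of the sketch.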
