Prosecution Insights
Last updated: April 19, 2026
Application No. 18/065,823

NAVIGATION THROUGH REAL-TIME STREET VIEW

Status: Non-Final OA, §103
Filed: Dec 14, 2022
Examiner: PAIGE, TYLER D
Art Unit: 3664
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: International Business Machines Corporation
OA Round: 1 (Non-Final)

Grant Probability: 91% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 2y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 91% — above average (1166 granted / 1276 resolved; +39.4% vs TC avg)
Interview Lift: +8.2% — a moderate lift, comparing resolved cases with an interview against those without
Avg Prosecution: 2y 1m — a fast prosecutor, with 28 applications currently pending
Career History: 1304 total applications across all art units
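
These cards are internally consistent. A minimal Python sketch that reproduces them from the raw counts (the variable names and the back-derived Tech Center baseline are our assumptions, not the dashboard's published methodology; "+39.4%" is read as percentage points):

    # Hedged sketch: checking the Examiner Intelligence figures against the
    # counts shown above. Not the tool's actual code.
    granted, resolved = 1166, 1276

    career_allow_rate = granted / resolved       # 0.9138 -> displayed as 91%
    implied_tc_avg = career_allow_rate - 0.394   # back-derived from "+39.4% vs TC avg"

    print(f"Career allow rate:  {career_allow_rate:.1%}")  # 91.4%
    print(f"Implied TC average: {implied_tc_avg:.1%}")     # 52.0%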

Statute-Specific Performance

§101: 17.0% (-23.0% vs TC avg)
§103: 29.8% (-10.2% vs TC avg)
§102: 24.1% (-15.9% vs TC avg)
§112: 18.8% (-21.2% vs TC avg)
Tech Center averages are estimates • Based on career data from 1276 resolved cases
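
One sanity check worth noting: adding each delta back to the examiner's rate recovers the same baseline for every statute, which suggests the Tech Center average is a single pooled estimate rather than a per-statute figure. A short sketch (the metric's exact definition, e.g. per-case rejection frequency, is not stated on this page, and the field names are ours):

    # Hedged sketch: back-deriving the Tech Center baseline from the
    # statute-specific figures above.
    examiner_rate = {"§101": 0.170, "§103": 0.298, "§102": 0.241, "§112": 0.188}
    delta_vs_tc   = {"§101": -0.230, "§103": -0.102, "§102": -0.159, "§112": -0.212}

    for statute, rate in examiner_rate.items():
        tc_avg = rate - delta_vs_tc[statute]   # subtracting a negative delta
        print(f"{statute}: examiner {rate:.1%}, implied TC avg {tc_avg:.1%}")
    # Every row implies the same ~40.0% baseline.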

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This office action is in response to an application filed on 12/14/2022. The applicant submits an Information Disclosure Statement dated 12/14/2022. The applicant does not make a claim for Domestic or Foreign priority.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ozaki (US 2013/0286206) in view of Farmer (US 2022/0188867).

As per claim 1: A processor-implemented method, the method comprising:

capturing one or more visual input streams; (Ozaki paragraph 0052 discloses, "The image receiver 27 receives the video signal from the mobile apparatus 3 via the signal cable 11. The augmented reality image AP generated by the mobile apparatus 3 is included in the video signal in a predetermined cycle.") and (Farmer paragraph 0054 teaches, "Cameras may be used to capture visual images of the environment surrounding the vehicle 120. Depending on the configuration and number of cameras, the cameras may provide a 360° view around the vehicle 120.")

collecting data, including location data and a collected visual input stream from the one or more visual input streams; (Ozaki paragraph 0054 discloses, "The navigation part 20a supplies the guiding information to be transmitted to the mobile apparatus 3 from the inter-apparatus communication part 26. The navigation part 20a generates the map image MP that shows the map of the vicinity of the vehicle 9, based on the map data 28b stored in the storage 28 and the vehicle location. Moreover, when the user sets a destination, the navigation part 20a derives a route leading to the destination from the location of the vehicle 9 at a current time point and then superimposes the derived route on the map image MP.")

identifying items from the collected visual input stream; (Farmer paragraph 0052 teaches, "The LiDAR sensor, the radar sensor, and/or any other similar types of sensors can be used to detect the vehicle 120 surroundings while the vehicle 120 is in motion or about to begin motion. For example, the LiDAR sensor may be used to bounce multiple laser beams off approaching objects to assess their distance and to provide accurate 3D information on the surrounding environment. The data obtained from the LiDAR sensor may be used in performing object identification," and paragraph 0037, "optical character recognition (OCR) performed on captured street images (e.g., to identify names of streets, to identify street sign text, to identify names of points of interest (e.g., parks, restaurants, fuel stations, attractions, landmarks stores, bathrooms, entertainment venues, etc.), etc.), etc.; information used to calculate routes; information used to render 2D and/or 3D graphical maps;")

determining an optimal route based on the collected data; (Farmer paragraph 0046 teaches, "the route data 163 includes the optimal route and the vehicle control system 126 automatically inputs the route data 163 into the mapping engine 128. The mapping engine 128 can generate map data 165 using the optimal route (e.g., generate a map showing the optimal route and/or instructions for taking the optimal route) and provide the map data 165 to the interior interface system 125 (e.g., via the vehicle control system 126) for display.")

and providing the optimal route to a user based on a context of the identified items. (Farmer paragraph 0046 teaches, "The displayed map data 165 can indicate an estimated time of arrival and/or show the progress of the vehicle 120 along the optimal route.")

Ozaki discloses a display system with an overlay for navigation. Ozaki does not disclose identifying the optimal route for navigation. Farmer teaches presenting the optimal route for navigation. Therefore, at the time of filing it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Farmer et al. into the invention of Ozaki. Such incorporation is motivated by the need to ensure correct navigation to a destination.

As per claim 2: The method of claim 1, wherein identifying items is performed using visual recognition techniques. (Farmer paragraph 0054 teaches, "Cameras may be used to capture visual images of the environment surrounding the vehicle 120. Depending on the configuration and number of cameras, the cameras may provide a 360° view around the vehicle 120.")

Ozaki discloses a display system with an overlay for navigation. Ozaki does not disclose identifying the optimal route for navigation. Farmer teaches presenting the optimal route for navigation. Therefore, at the time of filing it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Farmer et al. into the invention of Ozaki. Such incorporation is motivated by the need to ensure correct navigation to a destination.

As per claim 3: The method of claim 1, wherein the identified items include a building or landmark. (Farmer paragraph 0037 teaches, "optical character recognition (OCR) performed on captured street images (e.g., to identify names of streets, to identify street sign text, to identify names of points of interest (e.g., parks, restaurants, fuel stations, attractions, landmarks stores, bathrooms, entertainment venues, etc.), etc.), etc.; information used to calculate routes; information used to render 2D and/or 3D graphical maps;")

Ozaki discloses a display system with an overlay for navigation. Ozaki does not disclose identifying the optimal route for navigation. Farmer teaches presenting the optimal route for navigation. Therefore, at the time of filing it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Farmer et al. into the invention of Ozaki. Such incorporation is motivated by the need to ensure correct navigation to a destination.

As per claim 4: The method of claim 1, wherein the optimal route is selected by a comparison to similar routes from the collected data. (Farmer paragraph 0030 teaches, "(e.g., route data, sensor data, perception data, vehicle 120 control data, vehicle 120 component fault and/or failure data, ride quality data, historical traffic data, historical route traversal times, etc.).")

Ozaki discloses a display system with an overlay for navigation. Ozaki does not disclose identifying the optimal route for navigation. Farmer teaches presenting the optimal route for navigation. Therefore, at the time of filing it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Farmer et al. into the invention of Ozaki. Such incorporation is motivated by the need to ensure correct navigation to a destination.

As per claim 5: The method of claim 1, wherein providing the optimal route includes providing an augmented reality overlay over a street-level visual input stream from the one or more visual input streams. (Ozaki paragraph 0068 discloses, "The image generator 301 generates the augmented reality image AP by superimposing the guiding information and the information such as the icon based on the posted data set, on the captured image obtained by the camera controller 30a.")

As per claim 6: The method of claim 1, wherein providing the optimal route includes a natural language explanation of the context of the identified items. (Farmer paragraph 0045 teaches, "Optionally, information and content may also or instead be provided audibly (e.g., via a text to speech system) using a vehicle speaker or a mobile device speaker (e.g., a driver phone speaker or wearable device speaker).")

Ozaki discloses a display system with an overlay for navigation. Ozaki does not disclose identifying the optimal route for navigation. Farmer teaches presenting the optimal route for navigation. Therefore, at the time of filing it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Farmer et al. into the invention of Ozaki. Such incorporation is motivated by the need to ensure correct navigation to a destination.

As per claim 7: The method of claim 1, wherein the captured location data includes geolocation data obtained using a global navigation system and a geofence. (Farmer paragraph 0045 teaches, "The vehicle control system 126 can transmit the inputted destination location 166 and/or a current location of the vehicle 120 (e.g., as a GPS data packet) as a communication 180 to the server 130 via the communication system 124 and the communications array 122." and paragraph 0030, "such as one or more neural networks) that may be used in generating geofences (virtual geographic boundary) and selecting content to be presented to a given user.")

Ozaki discloses a display system with an overlay for navigation. Ozaki does not disclose identifying the optimal route for navigation. Farmer teaches presenting the optimal route for navigation. Therefore, at the time of filing it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Farmer et al. into the invention of Ozaki. Such incorporation is motivated by the need to ensure correct navigation to a destination.

As per claim 8: A computer system, the computer system comprising: one or more processors, one or more computer-readable memories, one or more computer-readable tangible storage medium, and program instructions stored on at least one of the one or more tangible storage medium for execution by at least one of the one or more processors via at least one of the one or more memories, wherein the computer system is capable of performing a method comprising:

capturing one or more visual input streams; (Ozaki paragraph 0052 discloses, "The image receiver 27 receives the video signal from the mobile apparatus 3 via the signal cable 11. The augmented reality image AP generated by the mobile apparatus 3 is included in the video signal in a predetermined cycle.") and (Farmer paragraph 0054 teaches, "Cameras may be used to capture visual images of the environment surrounding the vehicle 120. Depending on the configuration and number of cameras, the cameras may provide a 360° view around the vehicle 120.")

collecting data, including location data and a collected visual input stream from the one or more visual input streams; (Ozaki paragraph 0054 discloses, "The navigation part 20a supplies the guiding information to be transmitted to the mobile apparatus 3 from the inter-apparatus communication part 26. The navigation part 20a generates the map image MP that shows the map of the vicinity of the vehicle 9, based on the map data 28b stored in the storage 28 and the vehicle location. Moreover, when the user sets a destination, the navigation part 20a derives a route leading to the destination from the location of the vehicle 9 at a current time point and then superimposes the derived route on the map image MP.")

identifying items from the collected visual input stream; (Farmer paragraph 0052 teaches, "The LiDAR sensor, the radar sensor, and/or any other similar types of sensors can be used to detect the vehicle 120 surroundings while the vehicle 120 is in motion or about to begin motion. For example, the LiDAR sensor may be used to bounce multiple laser beams off approaching objects to assess their distance and to provide accurate 3D information on the surrounding environment. The data obtained from the LiDAR sensor may be used in performing object identification," and paragraph 0037, "optical character recognition (OCR) performed on captured street images (e.g., to identify names of streets, to identify street sign text, to identify names of points of interest (e.g., parks, restaurants, fuel stations, attractions, landmarks stores, bathrooms, entertainment venues, etc.), etc.), etc.; information used to calculate routes; information used to render 2D and/or 3D graphical maps;")

determining an optimal route based on the collected data; (Farmer paragraph 0046 teaches, "the route data 163 includes the optimal route and the vehicle control system 126 automatically inputs the route data 163 into the mapping engine 128. The mapping engine 128 can generate map data 165 using the optimal route (e.g., generate a map showing the optimal route and/or instructions for taking the optimal route) and provide the map data 165 to the interior interface system 125 (e.g., via the vehicle control system 126) for display.")

and providing the optimal route to a user based on a context of the identified items. (Farmer paragraph 0046 teaches, "The displayed map data 165 can indicate an estimated time of arrival and/or show the progress of the vehicle 120 along the optimal route.")

Ozaki discloses a display system with an overlay for navigation. Ozaki does not disclose identifying the optimal route for navigation. Farmer teaches presenting the optimal route for navigation. Therefore, at the time of filing it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Farmer et al. into the invention of Ozaki. Such incorporation is motivated by the need to ensure correct navigation to a destination.

As per claim 9: The computer system of claim 8, wherein identifying items is performed using visual recognition techniques. (Farmer paragraph 0054 teaches, "Cameras may be used to capture visual images of the environment surrounding the vehicle 120. Depending on the configuration and number of cameras, the cameras may provide a 360° view around the vehicle 120.")

Ozaki discloses a display system with an overlay for navigation. Ozaki does not disclose identifying the optimal route for navigation. Farmer teaches presenting the optimal route for navigation. Therefore, at the time of filing it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Farmer et al. into the invention of Ozaki. Such incorporation is motivated by the need to ensure correct navigation to a destination.

As per claim 10: The computer system of claim 8, wherein the identified items include a building or landmark. (Farmer paragraph 0037 teaches, "optical character recognition (OCR) performed on captured street images (e.g., to identify names of streets, to identify street sign text, to identify names of points of interest (e.g., parks, restaurants, fuel stations, attractions, landmarks stores, bathrooms, entertainment venues, etc.), etc.), etc.; information used to calculate routes; information used to render 2D and/or 3D graphical maps;")

Ozaki discloses a display system with an overlay for navigation. Ozaki does not disclose identifying the optimal route for navigation. Farmer teaches presenting the optimal route for navigation. Therefore, at the time of filing it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Farmer et al. into the invention of Ozaki. Such incorporation is motivated by the need to ensure correct navigation to a destination.

As per claim 11: The computer system of claim 8, wherein the optimal route is selected by a comparison to similar routes from the collected data. (Farmer paragraph 0030 teaches, "(e.g., route data, sensor data, perception data, vehicle 120 control data, vehicle 120 component fault and/or failure data, ride quality data, historical traffic data, historical route traversal times, etc.).")

Ozaki discloses a display system with an overlay for navigation. Ozaki does not disclose identifying the optimal route for navigation. Farmer teaches presenting the optimal route for navigation. Therefore, at the time of filing it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Farmer et al. into the invention of Ozaki. Such incorporation is motivated by the need to ensure correct navigation to a destination.

As per claim 12: The computer system of claim 8, wherein providing the optimal route includes providing an augmented reality overlay over a street-level visual input stream from the one or more visual input streams. (Ozaki paragraph 0068 discloses, "The image generator 301 generates the augmented reality image AP by superimposing the guiding information and the information such as the icon based on the posted data set, on the captured image obtained by the camera controller 30a.")

As per claim 13: The computer system of claim 8, wherein providing the optimal route includes a natural language explanation of the context of the identified items. (Farmer paragraph 0045 teaches, "Optionally, information and content may also or instead be provided audibly (e.g., via a text to speech system) using a vehicle speaker or a mobile device speaker (e.g., a driver phone speaker or wearable device speaker).")

Ozaki discloses a display system with an overlay for navigation. Ozaki does not disclose identifying the optimal route for navigation. Farmer teaches presenting the optimal route for navigation. Therefore, at the time of filing it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Farmer et al. into the invention of Ozaki. Such incorporation is motivated by the need to ensure correct navigation to a destination.

As per claim 14: The computer system of claim 8, wherein the captured location data includes geolocation data obtained using a global navigation system and a geofence. (Farmer paragraph 0045 teaches, "The vehicle control system 126 can transmit the inputted destination location 166 and/or a current location of the vehicle 120 (e.g., as a GPS data packet) as a communication 180 to the server 130 via the communication system 124 and the communications array 122." and paragraph 0030, "such as one or more neural networks) that may be used in generating geofences (virtual geographic boundary) and selecting content to be presented to a given user.")

Ozaki discloses a display system with an overlay for navigation. Ozaki does not disclose identifying the optimal route for navigation. Farmer teaches presenting the optimal route for navigation. Therefore, at the time of filing it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Farmer et al. into the invention of Ozaki. Such incorporation is motivated by the need to ensure correct navigation to a destination.

As per claim 15: A computer program product, the computer program product comprising: one or more computer-readable tangible storage medium and program instructions stored on at least one of the one or more tangible storage medium, the program instructions executable by a processor capable of performing a method, the method comprising: (Ozaki paragraph 0064 discloses, "a program 38a is obtained by reading from a non-transitory computer-readable recording medium that stores the program 38a or by communication via the network 8 or in other methods,")

capturing one or more visual input streams; (Ozaki paragraph 0052 discloses, "The image receiver 27 receives the video signal from the mobile apparatus 3 via the signal cable 11. The augmented reality image AP generated by the mobile apparatus 3 is included in the video signal in a predetermined cycle.") and (Farmer paragraph 0054 teaches, "Cameras may be used to capture visual images of the environment surrounding the vehicle 120. Depending on the configuration and number of cameras, the cameras may provide a 360° view around the vehicle 120.")

collecting data, including location data and a collected visual input stream from the one or more visual input streams; (Ozaki paragraph 0054 discloses, "The navigation part 20a supplies the guiding information to be transmitted to the mobile apparatus 3 from the inter-apparatus communication part 26. The navigation part 20a generates the map image MP that shows the map of the vicinity of the vehicle 9, based on the map data 28b stored in the storage 28 and the vehicle location. Moreover, when the user sets a destination, the navigation part 20a derives a route leading to the destination from the location of the vehicle 9 at a current time point and then superimposes the derived route on the map image MP.")

identifying items from the collected visual input stream; (Farmer paragraph 0052 teaches, "The LiDAR sensor, the radar sensor, and/or any other similar types of sensors can be used to detect the vehicle 120 surroundings while the vehicle 120 is in motion or about to begin motion. For example, the LiDAR sensor may be used to bounce multiple laser beams off approaching objects to assess their distance and to provide accurate 3D information on the surrounding environment. The data obtained from the LiDAR sensor may be used in performing object identification," and paragraph 0037, "optical character recognition (OCR) performed on captured street images (e.g., to identify names of streets, to identify street sign text, to identify names of points of interest (e.g., parks, restaurants, fuel stations, attractions, landmarks stores, bathrooms, entertainment venues, etc.), etc.), etc.; information used to calculate routes; information used to render 2D and/or 3D graphical maps;")

determining an optimal route based on the collected data; (Farmer paragraph 0046 teaches, "the route data 163 includes the optimal route and the vehicle control system 126 automatically inputs the route data 163 into the mapping engine 128. The mapping engine 128 can generate map data 165 using the optimal route (e.g., generate a map showing the optimal route and/or instructions for taking the optimal route) and provide the map data 165 to the interior interface system 125 (e.g., via the vehicle control system 126) for display.")

and providing the optimal route to a user based on a context of the identified items. (Farmer paragraph 0046 teaches, "The displayed map data 165 can indicate an estimated time of arrival and/or show the progress of the vehicle 120 along the optimal route.")

Ozaki discloses a display system with an overlay for navigation. Ozaki does not disclose identifying the optimal route for navigation. Farmer teaches presenting the optimal route for navigation. Therefore, at the time of filing it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Farmer et al. into the invention of Ozaki. Such incorporation is motivated by the need to ensure correct navigation to a destination.

As per claim 16: The computer program product of claim 15, wherein identifying items is performed using visual recognition techniques. (Farmer paragraph 0054 teaches, "Cameras may be used to capture visual images of the environment surrounding the vehicle 120. Depending on the configuration and number of cameras, the cameras may provide a 360° view around the vehicle 120.")

Ozaki discloses a display system with an overlay for navigation. Ozaki does not disclose identifying the optimal route for navigation. Farmer teaches presenting the optimal route for navigation. Therefore, at the time of filing it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Farmer et al. into the invention of Ozaki. Such incorporation is motivated by the need to ensure correct navigation to a destination.

As per claim 17: The computer program product of claim 15, wherein the identified items include a building or landmark. (Farmer paragraph 0037 teaches, "optical character recognition (OCR) performed on captured street images (e.g., to identify names of streets, to identify street sign text, to identify names of points of interest (e.g., parks, restaurants, fuel stations, attractions, landmarks stores, bathrooms, entertainment venues, etc.), etc.), etc.; information used to calculate routes; information used to render 2D and/or 3D graphical maps;")

Ozaki discloses a display system with an overlay for navigation. Ozaki does not disclose identifying the optimal route for navigation. Farmer teaches presenting the optimal route for navigation. Therefore, at the time of filing it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Farmer et al. into the invention of Ozaki. Such incorporation is motivated by the need to ensure correct navigation to a destination.

As per claim 18: The computer program product of claim 15, wherein the optimal route is selected by a comparison to similar routes from the collected data. (Farmer paragraph 0030 teaches, "(e.g., route data, sensor data, perception data, vehicle 120 control data, vehicle 120 component fault and/or failure data, ride quality data, historical traffic data, historical route traversal times, etc.).")

Ozaki discloses a display system with an overlay for navigation. Ozaki does not disclose identifying the optimal route for navigation. Farmer teaches presenting the optimal route for navigation. Therefore, at the time of filing it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Farmer et al. into the invention of Ozaki. Such incorporation is motivated by the need to ensure correct navigation to a destination.

As per claim 19: The computer program product of claim 15, wherein providing the optimal route includes providing an augmented reality overlay over a street-level visual input stream from the one or more visual input streams. (Ozaki paragraph 0068 discloses, "The image generator 301 generates the augmented reality image AP by superimposing the guiding information and the information such as the icon based on the posted data set, on the captured image obtained by the camera controller 30a.")

As per claim 20: The computer program product of claim 15, wherein providing the optimal route includes a natural language explanation of the context of the identified items. (Farmer paragraph 0045 teaches, "Optionally, information and content may also or instead be provided audibly (e.g., via a text to speech system) using a vehicle speaker or a mobile device speaker (e.g., a driver phone speaker or wearable device speaker).")

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TYLER D PAIGE whose telephone number is (571) 270-5425. The examiner can normally be reached M-F 7:00am-6:00pm (MST).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kito Robinson, can be reached at 571-270-3921. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TYLER D PAIGE/
Primary Examiner, Art Unit 3664
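
For orientation, independent claim 1 recites a five-step pipeline. A toy Python sketch of those steps, purely illustrative (every function, type, and value below is hypothetical; neither the application text nor the cited references provide code):

    # Toy sketch of the five steps recited in claim 1. All names are made up.
    from dataclasses import dataclass

    @dataclass
    class Frame:
        pixels: bytes
        lat: float
        lon: float

    def capture_streams() -> list[list[Frame]]:
        """Step 1: capture one or more visual input streams (stubbed)."""
        return [[Frame(b"", 40.7128, -74.0060)]]

    def collect_data(streams):
        """Step 2: collect location data plus one stream from the set."""
        stream = streams[0]
        locations = [(f.lat, f.lon) for f in stream]
        return locations, stream

    def identify_items(stream):
        """Step 3: identify items (e.g., buildings, landmarks) in the stream."""
        return ["building", "landmark"]   # stand-in for a vision model

    def optimal_route(locations, items):
        """Steps 4-5: determine a route from collected data, contextualized by items."""
        return {"waypoints": locations, "context": items}

    streams = capture_streams()
    locations, stream = collect_data(streams)
    print(optimal_route(locations, identify_items(stream)))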

Prosecution Timeline

Dec 14, 2022: Application Filed
Feb 08, 2024: Response after Non-Final Action
Jan 27, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications with similar technology granted by this same examiner:

Patent 12597357: AUTOMATIC AIRCRAFT TAXIING (granted Apr 07, 2026; 2y 5m to grant)
Patent 12592102: OPERATION DATA SUPPORT SYSTEM FOR INDUSTRIAL MACHINERY (granted Mar 31, 2026; 2y 5m to grant)
Patent 12586424: DRIVING DIAGNOSIS DEVICE (granted Mar 24, 2026; 2y 5m to grant)
Patent 12586425: RARE EVENT DETECTION SYSTEM (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579849: DETECTING AN UNUSUAL OPERATION OF A VEHICLE OUTSIDE OF A TIME FENCE AND NOTIFYING NEIGHBORING VEHICLES (granted Mar 17, 2026; 2y 5m to grant)
Study what changed in these cases to get past this examiner. Based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 91%
With Interview: 99% (+8.2%)
Median Time to Grant: 2y 1m
PTA Risk: Low
Based on 1276 resolved cases by this examiner. Grant probability derived from career allow rate.
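
Per that footnote, the headline numbers follow mechanically from the career stats. A minimal sketch of the derivation (the additive percentage-point lift and the 99% cap are our assumptions, inferred from the displayed values rather than documented):

    # Hedged sketch: assembling the projection figures from examiner stats.
    def project(granted: int, resolved: int, interview_lift: float) -> dict:
        base = granted / resolved                          # 1166/1276 -> 91.4%
        with_interview = min(base + interview_lift, 0.99)  # assumed additive lift, capped
        return {
            "grant_probability": round(base * 100),        # -> 91
            "with_interview": round(with_interview * 100), # -> 99
        }

    print(project(1166, 1276, 0.082))
    # {'grant_probability': 91, 'with_interview': 99}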
