Prosecution Insights
Last updated: April 19, 2026
Application No. 18/282,090

EVACUATION ROUTE GUIDANCE SYSTEM, EVACUATION ROUTE CREATION METHOD, AND RECORDING MEDIUM RECORDING PROGRAM

Final Rejection §103
Filed: Sep 14, 2023
Examiner: BRADY III, PATRICK MICHAEL
Art Unit: 3666
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: NEC Corporation
OA Round: 2 (Final)
Grant Probability: 56% (Moderate)
OA Rounds: 3-4
To Grant: 3y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 56% of resolved cases (67 granted / 119 resolved; +4.3% vs TC avg)
Interview Lift: +44.1% (strong; allowance rate in resolved cases with an interview vs without)
Typical Timeline: 3y 2m avg prosecution; 38 applications currently pending
Career History: 157 total applications across all art units
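The headline figures above reduce to simple ratio arithmetic. A minimal sketch, assuming the metrics are computed this way (the helper names and the with/without-interview counts below are illustrative assumptions, not data from this record):

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return round(100.0 * granted / resolved, 1)

def interview_lift(granted_iv: int, resolved_iv: int,
                   granted_no_iv: int, resolved_no_iv: int) -> float:
    """Percentage-point gap between allowance rates in cases
    resolved with an interview vs without one."""
    return round(allow_rate(granted_iv, resolved_iv)
                 - allow_rate(granted_no_iv, resolved_no_iv), 1)

# Reported career figures: 67 granted out of 119 resolved.
print(allow_rate(67, 119))  # 56.3, displayed as 56%

# Hypothetical with/without-interview split, used only to show the arithmetic.
print(interview_lift(18, 22, 49, 97))
```

Note the lift is a percentage-point difference between the two subgroup rates, not a ratio of them, which is why it can exceed the overall allowance rate.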

Statute-Specific Performance

§101: 23.2% (-16.8% vs TC avg)
§103: 52.5% (+12.5% vs TC avg)
§102: 10.1% (-29.9% vs TC avg)
§112: 11.5% (-28.5% vs TC avg)

Tech Center averages are estimates • Based on career data from 119 resolved cases

Office Action

§103
DETAILED ACTION

This final action is in response to the reply filed 19 August 2025, which was in response to the non-final action dated 19 May 2025.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendments

Claims 1, 3-10, 12-15, 17-19, and 21-24 are pending. Claims 1, 3-10, 12-15, 17-19 have been amended, claims 2, 11, 16 and 20 have been canceled, and claims 21-24 have been newly added.

With regard to the 35 U.S.C. 101 rejection of claims 1, 3-10, 13-15 and 17-19 (pgs. 3-11, Action), applicant has amended the independent claims to require "performing in real time ...", "acquiring a current location ...", "transmitting information ...", "acquiring one or more images shot by a plurality of cameras ...", "passing the one or more images to a classification model ...", "detecting movement of one or more moving bodies passing through at least a first position of the plurality of intersections", and "transmitting information to the user terminal to cause the user terminal to display the updated route ...". The examiner finds that having the steps performed in real time, along with the claims as a whole, is sufficient to integrate the judicial exception into a practical application. Thus, under Step 2A Prong Two (see MPEP 2106), since the claims as a whole are found to integrate the judicial exception into a practical application, they are eligible at Pathway B, thereby concluding the eligibility analysis. Accordingly, the 35 U.S.C. 101 rejection of claims 1, 3-10, 13-15 and 17-19 has been withdrawn. The rejection under 35 U.S.C. 101 of canceled claims 2, 11, 16 and 20 has been rendered moot because of their cancelation.

With regard to the 35 U.S.C. 103 rejection of claims 1, 3-10, 13-15 and 17-19 (pgs. 11-42, Action), applicant's amendments necessitated additional searching and consideration of new grounds of rejection.
Accordingly, the new grounds of rejection under 35 U.S.C. 103 are: claims 1, 3-5, 8-10, 12-14 and 17-19 in view of Miyazawa, Nickolaou and Raj; claims 6 and 15 in view of Miyazawa, Nickolaou, Raj and Hanchett; and claim 7 in view of Miyazawa, Nickolaou, Raj and Malkes. Newly added claims 21, 23 and 24 are rejected under 35 U.S.C. 103 in view of Miyazawa, Nickolaou, Raj and Sakuma, and claim 22 is rejected under 35 U.S.C. 103 in view of Miyazawa, Nickolaou, Malkes, Raj and Sakuma. The rejection under 35 U.S.C. 103 of canceled claims 2, 11, 16 and 20 has been rendered moot because of their cancelation.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claims 1, 3-5, 8-10, 12-14 and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Publication Number 2018/0228448 to Miyazawa et al. (hereafter Miyazawa) in view of U.S. Patent Publication Number 2022/0383748 to Raj et al. (hereafter Raj) and U.S. Patent Publication Number 2015/0211870 to Nickolaou. As per claim 1, Miyazawa discloses [a]n evacuation route guidance system (see at least Miyazawa, Abstract), comprising: at least a processor (see at least Miyazawa, [0101] disclosing controller 11 has the configuration of a computer including a CPU, ROM, and RAM. The controller 11 functions as an information processor that centrally controls other parts and operating processes by executing a basic control program stored in ROM, and other programs stored in storage 14); and a memory in circuit communication with the processor (see at least Miyazawa, [0101]), wherein the processor is configured to execute program instructions stored in the memory (see at least Miyazawa, [0101]) to perform: acquiring map information representing a hazard map created from records of past disasters or field investigations, the map information indicating locations of one or more dangerous places and locations of one or more safe areas in the target area (see at least Miyazawa, [0175] disclosing that the evacuation guidance server 5 then scores the suitability of each evacuation route determined in step S44 for evacuation of the user 7, and determines the destination and evacuation route for guiding the user 7 (step S45).
The suitability of the evacuation routes is determined, for example, so that the difficulty of actual evacuation by the user 7 is not excessive. ... Whether the route includes roads or bridges that may be made impassable by the disaster is also scored. Information related to roads or bridges that may be made impassable by the disaster are previously identified by a disaster hazard map compiled by a government organization and included in the evacuation route database 55c); ... (1) ... ; ... (2) ... ; ... (3) ... ; ... (4) ... ; ... (5) ... ; ... (6) ... ; ... (7) ... ; determining an updated route (see at least Miyazawa, [0079] disclosing that the wearable device 2 also has a function for acquiring information related to the body, and more specifically physiological information related to the medical or physiological status, of the user 7. The physiological information is information that affects selection of the evacuation site when the user 7 is guided to an evacuation site, and/or selecting the evacuation route from the current location of the user 7 to the evacuation site; [0175] disclosing that the evacuation guidance server 5 determines the evacuation route based on the calculated scores. For example, the evacuation route scored to have the lowest degree of difficulty could be selected as the evacuation route. Note that if the evacuation route scoring process includes calculating and evaluating the relationship between the positioning information sent by the wearable device 2 and the evacuation facilities, such as the distance, the positioning information may be expressed by latitude and longitude instead of UTM grid coordinates), to a safe area that avoids the one or more dangerous places (see at least Miyazawa, [0079]) ... (8) ... 
; and transmitting information to the user terminal to cause the user terminal to display the updated route from the current location of the user (see at least Miyazawa, [0117] disclosing that the guidance service information 14e is information the wearable device 2 receives from the evacuation guidance server 5. When guidance service information the evacuation guidance server 5 transmits is received through the gateway device 3, the controller 11 first stores the guidance service information as guidance service information 14e in the storage 14. Based on the guidance service information 14e, the controller 11 displays text and maps indicating where to take refuge and the name of the facility, the location of the evacuation site, and the route to the evacuation site, on the display 16). But, Miyazawa does not explicitly teach the following limitations taught in Raj: (1) performing in real time, at a time of occurrence of a disaster (see at least Raj, [0017] disclosing that control system may measure traffic conditions on a route in real time and inform vehicles about traffic conditions on the route; [0025] disclosing that the server 110 may comprise suitable logic, circuitry, and interfaces that may be configured to act as a data store for traffic information of the group of moving objects 102. Additionally, as a preemptive measure, the server 110 may be configured to collect real time or near real time information of events which may potentially occur or may have occurred in the geographical control zone 106 or at any location in the current travel route of the first vehicle 104); (2) acquiring a current location of a user terminal accessing a Global Positioning System (GPS) (see at least Raj, [0034] disclosing that the collected traffic information may further include GNSS information for a plurality of locations in the geographical control zone 106. 
The GNSS information may include, for example, vehicle locations, specific routes, specific intersections, traffic conditions on the specific routes, turns on the specified routes, accidents on the specific routes); (3) transmitting information to the user terminal to cause the user terminal to display a route from the current location to a safe area from among the one or more safe areas (see at least Raj, [0059] disclosing that the traffic information may include a plurality of image frames 304 of the group of moving objects 302 in the geographical control zone 106. The plurality of image frames 304 may be captured via the image-capture device 208 or image-capture devices integrated with the set of electronic devices 114. Such image-capture devices may capture the plurality of image frames 304 and transmit the captured plurality of image frames 304 to the first electronic device 112 directly via the communication network 116 or via the control zone master. In some embodiments, one or more electronic devices of the set of electronic devices 114 may also capture GNSS information for a plurality of locations, such as landmarks, intersections, traffic conditions, etc., in the geographical control zone 106. 
The captured GNSS information may be included in the traffic information and transmitted to the first electronic device 112 either directly or via the control zone master); (4) acquiring one or more images shot by a plurality of cameras placed near a plurality of intersections of a target area (see at least Raj, [0033] disclosing that each electronic device of the set of electronic devices 114 may be configured to collect traffic information, including but not limited to, a plurality of image frames of the group of moving objects 102 in the geographical control zone 106 ; [0052] disclosing that the image-capture device 208 may comprise suitable logic, circuitry, and/or interfaces that may be configured to capture a plurality of image frames of the group of moving objects 102 in a field of view (FOV) region of the image-capture device 208); (5) passing the one or more images to a classification model for identifying vehicles or pedestrians (see at least Raj, [0061] disclosing that the NN model may be pre-trained on a training dataset which includes input-output image pairs of moving objects. For example, in an input-output image pair for a moving object, an input image may denote an initial position as an initial state of the moving object, while an output image may denote next position as a next state of the moving object. The NN model may be pre-trained to predict a plurality of discrete distributions for the first moving object 308 of the group of moving objects 302. The NN model may be pre-trained to predict a movement (e.g., vehicle movement) of the first moving object 308 without attempting to reconstruct the first moving object 308 in real pixel information. The trained NN model may individually apply the predicted plurality of discrete distributions to a first image frame of the first moving object 308 from the plurality of image frames 304) ... . 
But, neither Miyazawa nor Raj explicitly teaches the following limitations taught in Nickolaou: (6) detecting movement of one or more moving bodies passing through at least a first portion of the plurality of intersections based on the one or more images and at least one first output from the classification model (see at least Nickolaou, [0034] disclosing that information regarding potential concerns or hazards that is gleaned from the network of traffic cameras may be organized according to geographic zones. A geographic zone may be defined or delineated in terms of area (e.g., certain number of square miles, by radius, by zip code, by city, township, county, etc.) or in terms of the roadway (e.g., an entire road or highway, or just a segment or portion of a road could constitute a geographic zone). It may be beneficial to correlate the size of a geographic zone with the number of traffic cameras in that zone, the average volume of traffic passing through that zone, or some other suitable criteria; [0035] disclosing that with regard to FIG. 2, when a potential concern or hazard has been identified from a street level image and maybe even corroborated, step 140 uses the information associated with the corresponding image to assign that concern to a specific geographic zone. Consider the example where traffic camera 212 provides high definition video of a segment of I-70 and, from this video, street level images are used to reveal the ice patch 80 in FIG. 1. In such an example, step 140 could use the camera identifier or the camera position information that accompanied the street level images to assign this weather concern (i.e., the ice patch 80) to geographic zone 250.
It is envisioned that a geographic zone, like zone 250, would have a number of different concerns (construction, traffic, weather or others) associated with it and stored in the concern profile so that when a host vehicle 54 enters or is expected to enter zone 250, the method could look at all of the current concerns in that particular zone and determine if any were relevant to the host vehicle <interpreted as classification>; [0044] disclosing that a locating system which comprises a car navigation apparatus (hereinafter referred to as "car-navi apparatus" also) installed in a vehicle and a camera installed outside the vehicle, e.g., at an intersection (hereinafter referred to as an "intersection camera"). An intersection camera installed at each intersection picks up images that can be used to grasp the condition of the intersection. For example, it takes images that show the condition of a plurality of roads branching off from the intersection. Also, the intersection camera delivers picked-up images as intersection information or delivered information. A car navigation apparatus receives intersection information from each of a plurality of intersection cameras); (7) detecting an absence of movement in at least a second portion of the plurality of intersections based on the one or more images and at least one second output from the classification model (see at least Nickolaou, [0010] disclosing the use of street level images, such as those provided by stationary or roadside traffic cameras, sensors or other devices. Many stationary traffic cameras now possess high-resolution or high-definition capabilities, which enable them to provide higher quality still images or video containing more information.
The additional information extracted from the street level images allows the present system and method to better recognize, identify, classify and/or evaluate various hazards or concerns in upcoming road segments, including those that are far forward and beyond the field of view of vehicle mounted devices; [0026] disclosing that at this point, step 120 can evaluate the items identified in the street level images in order to classify any potential hazards or concerns that the present method may wish to address. Classification of such concerns can be carried out in any number of different ways. For example, potential concerns that are based on items extracted from the street level images in step 120 may be classified into one or more predetermined categories or groups, such as: construction concerns, traffic concerns, and weather concerns, to cite a few possibilities; [0028] disclosing that the term "traffic concern," as used herein, broadly includes any object, person, condition, event, indicia and/or other item from a street level image that suggests the presence of certain traffic conditions. Traffic concerns may include, but are not limited to, traffic jams or backups, traffic patterns, stationary or slow moving objects in the road (e.g., a disabled or broken down vehicle), emergency vehicles, tow trucks, debris in the road (e.g., downed branches, power lines, etc.), emergency personnel directing traffic, paths of moving vehicles not in line with normal traffic flows, etc.
<interpreted as detecting an absence of movement>); and (8) avoids the one or more dangerous places and the second portion of the plurality of intersections, and that passes through the first portion of the plurality of intersections, based on: (i) the map information, (ii) the detected movement, and (iii) the detected absence of movement (see at least Nickolaou, [0038] disclosing that If this step determines that there are in fact one or more potential concerns for the specific geographic zone in question, then the method may proceed to step 180 so that an appropriate remedial action or response can be devised for responding to the potential concern. If there are no potential concerns associated with that particular geographic zone, then the method can loop back for continued monitoring, etc. In some cases, this step may look up multiple geographic zones, such as the case when the host vehicle 54 is expected to follow a certain navigational route that will take the vehicle through more than one geographical zone; [0039] disclosing that if the host vehicle 54 is operating in an automated driving mode along a known navigational route and one or more construction concerns have been identified for a geographic zone in which the vehicle is about to enter, it is preferable that step 180 send a warning or make changes to the automated driving mode ahead of time so that an alternate route can be taken). Miyazawa, Raj and Nickolaou are analogous to claim 1 because they are in the same field of evacuation route guidance. Miyazawa relates to a guidance system that acquires user physiological information, positioning information and sends user information, physiological information and positioning information to an evacuation guidance server to generate guidance service information based on the user information, and sends the guidance service information to the wearable device of the user (see at least Miyazawa, Abstract). 
Raj relates to a system and a method for vehicle control in geographical control zones and prediction of future scenes in real time (see at least Raj, [0002]). Nickolaou relates to methods and systems that enhance a driving experience of an automated driving mode through the use of street level images, such as those provided by stationary traffic cameras (see at least Nickolaou, [0001]). Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system, as disclosed in Miyazawa, to provide the benefit of (1) performing in real time, at a time of occurrence of a disaster, (2) acquiring a current location of a user terminal accessing a Global Positioning System, (3) transmitting information to the user terminal to cause the user terminal to display a route from the current location to a safe area from among the one or more safe areas, (4) acquiring one or more images shot by a plurality of cameras placed near a plurality of intersections of a target area, (5) passing the one or more images to a classification model for identifying vehicles or pedestrians, as disclosed in Raj, (6) detecting movement of one or more moving bodies passing through at least a first portion of the plurality of intersections based on the one or more images and at least one first output from the classification model, (7) detecting an absence of movement in at least a second portion of the plurality of intersections based on the one or more images and at least one second output from the classification model, and (8) avoiding the one or more dangerous places and the second portion of the plurality of intersections, and that passes through the first portion of the plurality of intersections, based on the map information, the detected movement, and the detected absence of movement, as disclosed in Nickolaou, with a reasonable expectation of success.
Doing so would provide the benefit of providing information that gives advanced warning of a hazard or concern so that one or more remedial actions can be taken (see at least Nickolaou, [0003]). As per claim 3, the combination of Miyazawa, Nickolaou and Raj discloses all of the limitations of claim 1, as shown above. Raj further discloses the following limitation: wherein the processor is further configured to execute program instructions to determine, based on the one or more images, a number of the one or more moving bodies passing through the one or more intersections based on the one or more images shot by the one or more cameras (see at least Raj, [0037] disclosing that the first electronic device 112 may be further configured to receive the traffic information (e.g., the collected traffic information) from one or more electronic devices of the set of electronic devices 114. The traffic information may include the plurality of image frames of the group of moving objects 102 in the geographical control zone 106. In certain instances, the traffic information may further include one or more of, for example, location information of different moving and/or non-moving objects, 3D models or 3D scanning data of environment surrounding the one or more electronic devices, safety or events information associated certain locations in the current travel route of the first vehicle 104; [0039] disclosing that the first electronic device 112 may be further configured to generate a set of images frames of a first moving object of the group of moving objects 102 based on application of a trained NN model on the received traffic information. The first moving object may correspond to one of a vehicle (autonomous or non-autonomous), pedestrian, an animal, an aerial vehicle, a flying debris, and the like. The generated set of image frames may correspond to a set of likely positions of the first moving object at a future time instant. 
The plurality of image frames may be provided as an input to an initial layer of the trained NN model, which may be stored on the first electronic device 112. The trained NN model may produce the set of image frames as an output of a final NN layer of the trained NN model), and create the updated route based on the number of one or more moving bodies passing through the one or more intersection (see at least Raj, [0043] disclosing that the first electronic device 112 may be further configured to generate first control information based on the predicted unsafe behavior. The first control information may include an alternate route for the first vehicle 104. The alternate route may lead to same destination point where the first vehicle 104 intends to reach). Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system, as disclosed in Miyazawa as modified by Nickolaou and Raj, to provide the benefit of having the processor further configured to execute program instructions to determine, based on the one or more images, a number of the one or more moving bodies passing through the one or more intersections based on the one or more images shot by the one or more cameras, and create the updated route based on the number of one or more moving bodies passing through the one or more intersection, as further disclosed in Raj, with a reasonable expectation of success. Doing so would provide the benefit of providing information that gives advanced warning of a hazard or concern so that one or more remedial actions can be taken (see at least Nickolaou, [0003]). As per claim 4, the combination of Miyazawa, Nickolaou and Raj discloses all of the limitations of claim 3, as shown above.
Raj further discloses the following limitation: wherein intersections from among the first portion of the plurality of intersections are prioritized for inclusion in the updated route based on a number of moving bodies passing through each of the first portion of the plurality of intersections (see at least Raj, [0088] disclosing that at a certain time instant, the first vehicle 506 may be detected by the first control zone master 504A. The first control zone master 504A may be configured to establish the geographical control zone 508 around the detected first vehicle 506 based on an input from the first vehicle 506. The first control zone master 504A may be configured to collect the traffic information from one or more electronic devices of a set of electronic devices in the communication range of the first control zone master 504A. The collected traffic information may include a plurality of image frames of the group of moving objects 510A and 510B on the first route 502A (same as that for the first vehicle 506). The traffic information may be sufficient for prediction of an alternate route (if unsafe behavior of any moving object is predicted) up to an intersection point 512. Additionally, in certain instances, the first control zone master 504A may be configured to request for the traffic information from the second control zone master 504B. The second control zone master 504B may collect and transmit the traffic information to the first control zone master 504A. The traffic information received from the second control zone master 504B may correspond to the second route 502B; [0089] disclosing that the first control zone master 504A may be configured to process the traffic information to predict the unsafe behavior of a first moving object 510A of the group of moving objects 510A and 510B.
For example, the unsafe behavior of the first moving object 510A may relate to a likelihood of a collision with the first vehicle 506 when the first vehicle 506 reaches the intersection point 512. The processing of the traffic information may correspond to application of the trained NN model on the traffic information. The first control zone master 504A may then generate first control information which may include an alternate route for the first vehicle 506; [0090]). Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system, as disclosed in Miyazawa as modified by Nickolaou and Raj, to provide the benefit of having intersections from among the first portion of the plurality of intersections be prioritized for inclusion in the updated route based on a number of moving bodies passing through each of the first portion of the plurality of intersections, as further disclosed in Raj, with a reasonable expectation of success. Doing so would provide the benefit of providing information that gives advanced warning of a hazard or concern so that one or more remedial actions can be taken (see at least Nickolaou, [0003]). As per claim 5, the combination of Miyazawa, Nickolaou and Raj discloses all of the limitations of claim 2, as shown above. Raj further discloses the following limitations: wherein the processor is further configured to execute program instructions to calculate moving body velocities of one or more moving bodies passing through the one or more intersections based on the one or more images shot by the one or more cameras based on the one or more images (see at least Raj, [0102] disclosing that with respect to step 704, an operation to acquire traffic information of the group of moving objects 102 may be initiated.
In accordance with an embodiment, the first electronic device 112 may be configured to initiate the operation to acquire the traffic information of the group of moving objects 102. Such information may include, for example, a plurality of image frames of the group of moving objects 102, GNSS information or positioning information of the group of moving objects 102, their speed, acceleration, object type, or other information; [0110]), and create the updated route based on the moving velocities (see at least Raj, [0110] disclosing that with regard to step 720, first control information including an alternate route for the first vehicle 104 may be generated based on the predicted unsafe behavior. In accordance with an embodiment, the first electronic device 112 may be configured to generate the first control information including the alternate route for the first vehicle 104 based on the predicted unsafe behavior. Control may pass to an end; [0117]; [0126]). Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system, as disclosed in Miyazawa as modified by Nickolaou and Raj, to provide the benefit of executing program instructions to calculate moving body velocities of one or more moving bodies passing through the one or more intersections based on the one or more images shot by the one or more cameras based on the one or more images, and creating the updated route based on the moving velocities, as further disclosed in Raj, with a reasonable expectation of success. Doing so would provide the benefit of providing information that gives advanced warning of a hazard or concern so that one or more remedial actions can be taken (see at least Nickolaou, [0003]). As per claim 8, the combination of Miyazawa, Raj and Nickolaou discloses all of the limitations of claim 1, as shown above.
Miyazawa further discloses the following limitation: wherein the hazard map is selected, from among a plurality of hazard maps, based on at least one of a type or scale of the disaster (see at least Miyazawa, [0175] disclosing that whether the route includes roads or bridges that may be made impassable by the disaster is also scored. Information related to roads or bridges that may be made impassable by the disaster are previously identified by a disaster hazard map compiled by a government organization and included in the evacuation route database 55c. Parameters, tables, formulae, and other information for calculating a score from the evacuation route information may also be included in the evacuation route database 55c <interpreted as a plurality of hazard maps for each type or each scale of a disaster>. In step S45, a score indicating the degree of difficulty for the user 7 is calculated for each evacuation route determined in step S43. The evacuation guidance server 5 determines the evacuation route based on the calculated scores). Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system, as disclosed in Miyazawa as modified by Nickolaou, to provide the benefit of having the hazard map be selected, from among a plurality of hazard maps, based on at least one of a type or scale of the disaster, as further disclosed in Miyazawa, with a reasonable expectation of success. Doing so would provide the benefit of providing information that gives advanced warning of a hazard or concern so that one or more remedial actions can be taken (see at least Nickolaou, [0003]).
As per claim 9, similar to claim 1, Miyazawa discloses [a]n evacuation route guidance method (see at least Miyazawa, Abstract), performed by a computer (see at least Miyazawa, [0101]) comprising: acquiring map information representing a hazard map created from records of past disasters or field investigations, the map information indicating locations of one or more dangerous places and locations of one or more safe areas in the target area (see at least Miyazawa, [0175]); ... (1) ... ; ... (2) ... ; ... (3) ... ; ... (4) ... ; ... (5) ... ; ... (6) ... ; ... (7) ... ; determining an updated route (see at least Miyazawa, [0079]; [0175]), to a safe area that avoids the one or more dangerous places (see at least Miyazawa, [0079]) ... (8) ... ; and transmitting information to the user terminal to cause the user terminal to display the updated route from the current location of the user (see at least Miyazawa, [0117]). But, Miyazawa does not explicitly teach the following limitations taught in Raj: (1) performing in real time, at a time of occurrence of a disaster (see at least Raj, [0017]; [0025]); (2) acquiring a current location of a user terminal accessing a Global Positioning System (GPS) (see at least Raj, [0034]); (3) transmitting information to the user terminal to cause the user terminal to display a route from the current location to a safe area from among the one or more safe areas (see at least Raj, [0059]); (4) acquiring one or more images shot by a plurality of cameras placed near a plurality of intersections of a target area (see at least Raj, [0033]; [0052]); (5) passing the one or more images to a classification model for identifying vehicles or pedestrians (see at least Raj, [0061]) ... . 
But, neither Miyazawa nor Raj explicitly teaches the following limitations taught in Nickolaou: (6) detecting movement of one or more moving bodies passing through at least a first portion of the plurality of intersections based on the one or more images and at least one first output from the classification model (see at least Nickolaou, [0034]; [0035]; [0044]); (7) detecting an absence of movement in at least a second portion of the plurality of intersections based on the one or more images and at least one second output from the classification model (see at least Miyazawa, [0010]; [0026]); and (8) avoids the one or more dangerous places and the second portion of the plurality of intersections, and that passes through the first portion of the plurality of intersections, based on: (i) the map information, (ii) the detected movement, and (iii) the detected absence of movement (see at least Nickolaou, [0038]; [0039]). Miyazawa, Raj and Nickolaou are analogous to claim 9 because they are in the same field of evacuation route guidance. Miyazawa relates to a guidance system that acquires user physiological information, positioning information and sends user information, physiological information and positioning information to an evacuation guidance server to generate guidance service information based on the user information, and sends the guidance service information to the wearable device of the user (see at least Miyazawa, Abstract). Raj relates to a system and a method for vehicle control in geographical control zones and prediction of future scenes in real time (see at least Raj, [0002]). Nickolaou relates to methods and systems that enhance a driving experience of an automated driving mode through the use of street level images, such as those provided by stationary traffic cameras (see at least Nickolaou, [0001]).
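Limitations (6) and (7) partition the monitored intersections into those with detected movement and those without, based on classifier outputs for the camera images. The detection format below is a hypothetical assumption, not a disclosure of Nickolaou:

```python
# Minimal sketch of limitations (6)-(7): split intersections into a first
# portion (movement detected) and a second portion (no movement detected),
# based on per-intersection classification results. Intersection ids and the
# detection format are hypothetical.
def partition_intersections(detections):
    """detections maps intersection id -> list of classified moving bodies."""
    moving = {i for i, objs in detections.items() if objs}
    still = {i for i, objs in detections.items() if not objs}
    return moving, still

moving, still = partition_intersections(
    {"X1": ["vehicle", "pedestrian"], "X2": [], "X3": ["vehicle"]}
)
```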
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system, as disclosed in Miyazawa, to provide the benefit of (1) performing in real time, at a time of occurrence of a disaster, (2) acquiring a current location of a user terminal accessing a Global Positioning System, (3) transmitting information to the user terminal to cause the user terminal to display a route from the current location to a safe area from among the one or more safe areas, (4) acquiring one or more images shot by a plurality of cameras placed near a plurality of intersections of a target area, (5) passing the one or more images to a classification model for identifying vehicles or pedestrians, as disclosed in Raj, (6) detecting movement of one or more moving bodies passing through at least a first portion of the plurality of intersections based on the one or more images and at least one first output from the classification model, (7) detecting an absence of movement in at least a second portion of the plurality of intersections based on the one or more images and at least one second output from the classification model, and (8) avoiding the one or more dangerous places and the second portion of the plurality of intersections, and that passes through the first portion of the plurality of intersections, based on the map information, the detected movement, and the detected absence of movement, as disclosed in Nickolaou, with a reasonable expectation of success. Doing so would provide the benefit of providing information that gives advanced warning of a hazard or concern so that one or more remedial actions can be taken (see at least Nickolaou, [0003]).
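Limitation (8), determining an updated route to a safe area that avoids both the dangerous places and the no-movement intersections while passing through intersections with detected movement, can be sketched as a graph search over passable intersections. The graph, node names, and detection sets below are hypothetical examples, not claim mappings:

```python
# Minimal sketch of limitation (8): breadth-first search to a safe area that
# treats dangerous places and no-movement intersections as blocked nodes.
# The road graph and node names are hypothetical.
from collections import deque

def find_route(graph, start, safe_areas, dangerous, no_movement):
    """Return the first route found that avoids blocked nodes, else None."""
    blocked = set(dangerous) | set(no_movement)
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node in safe_areas:
            return path
        for nxt in graph.get(node, []):
            if nxt not in seen and nxt not in blocked:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no passable route found

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["shelter"]}
route = find_route(graph, "A", {"shelter"}, dangerous={"C"}, no_movement=set())
```

Here the route reaches the shelter through B (an intersection with detected movement) while skipping C (a dangerous place).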
As per claim 10, similar to claims 1 and 9, Miyazawa discloses [a] computer-readable non-transitory recording medium recording a program causing a computer (see at least Miyazawa, Abstract, [0101]), to perform processing of acquiring map information representing a hazard map created from records of past disasters or field investigations, the map information indicating locations of one or more dangerous places and locations of one or more safe areas in the target area (see at least Miyazawa, [0175]); ... (1) ... ; ... (2) ... ; ... (3) ... ; ... (4) ... ; ... (5) ... ; ... (6) ... ; ... (7) ... ; determining an updated route (see at least Miyazawa, [0079]; [0175]), to a safe area that avoids the one or more dangerous places (see at least Miyazawa, [0079]) ... (8) ... ; and transmitting information to the user terminal to cause the user terminal to display the updated route from the current location of the user (see at least Miyazawa, [0117]). But, Miyazawa does not explicitly teach the following limitations taught in Raj: (1) performing in real time, at a time of occurrence of a disaster (see at least Raj, [0017]; [0025]); (2) acquiring a current location of a user terminal accessing a Global Positioning System (GPS) (see at least Raj, [0034]); (3) transmitting information to the user terminal to cause the user terminal to display a route from the current location to a safe area from among the one or more safe areas (see at least Raj, [0059]); (4) acquiring one or more images shot by a plurality of cameras placed near a plurality of intersections of a target area (see at least Raj, [0033]; [0052]); (5) passing the one or more images to a classification model for identifying vehicles or pedestrians (see at least Raj, [0061]) ... .
But, neither Miyazawa nor Raj explicitly teaches the following limitations taught in Nickolaou: (6) detecting movement of one or more moving bodies passing through at least a first portion of the plurality of intersections based on the one or more images and at least one first output from the classification model (see at least Nickolaou, [0034]; [0035]; [0044]); (7) detecting an absence of movement in at least a second portion of the plurality of intersections based on the one or more images and at least one second output from the classification model (see at least Miyazawa, [0010]; [0026]); and (8) avoids the one or more dangerous places and the second portion of the plurality of intersections, and that passes through the first portion of the plurality of intersections, based on: (i) the map information, (ii) the detected movement, and (iii) the detected absence of movement (see at least Nickolaou, [0038]; [0039]). Miyazawa, Raj and Nickolaou are analogous to claim 10 because they are in the same field of evacuation route guidance. Miyazawa relates to a guidance system that acquires user physiological information, positioning information and sends user information, physiological information and positioning information to an evacuation guidance server to generate guidance service information based on the user information, and sends the guidance service information to the wearable device of the user (see at least Miyazawa, Abstract). Raj relates to a system and a method for vehicle control in geographical control zones and prediction of future scenes in real time (see at least Raj, [0002]). Nickolaou relates to methods and systems that enhance a driving experience of an automated driving mode through the use of street level images, such as those provided by stationary traffic cameras (see at least Nickolaou, [0001]).
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system, as disclosed in Miyazawa, to provide the benefit of (1) performing in real time, at a time of occurrence of a disaster, (2) acquiring a current location of a user terminal accessing a Global Positioning System, (3) transmitting information to the user terminal to cause the user terminal to display a route from the current location to a safe area from among the one or more safe areas, (4) acquiring one or more images shot by a plurality of cameras placed near a plurality of intersections of a target area, (5) passing the one or more images to a classification model for identifying vehicles or pedestrians, as disclosed in Raj, (6) detecting movement of one or more moving bodies passing through at least a first portion of the plurality of intersections based on the one or more images and at least one first output from the classification model, (7) detecting an absence of movement in at least a second portion of the plurality of intersections based on the one or more images and at least one second output from the classification model, and (8) avoiding the one or more dangerous places and the second portion of the plurality of intersections, and that passes through the first portion of the plurality of intersections, based on the map information, the detected movement, and the detected absence of movement, as disclosed in Nickolaou, with a reasonable expectation of success. Doing so would provide the benefit of providing information that gives advanced warning of a hazard or concern so that one or more remedial actions can be taken (see at least Nickolaou, [0003]). As per claim 12, similar to claim 3, the combination of Miyazawa, Nickolaou and Raj discloses all of the limitations of claim 10, as shown above.
Raj further discloses the following limitations: wherein the detecting the movement of the one or more bodies comprises: determining, based on the one or more images, a number of the one or more moving bodies passing through the one or more intersections (see at least Raj, [0037]; [0039]), and wherein the determining the updated route comprises creating the updated route based on the number of one or more moving bodies passing through the one or more intersections (see at least Raj, [0043]). Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system, as disclosed in Miyazawa as modified by Nickolaou and Raj, to provide the benefit of determining, based on the one or more images, a number of the one or more moving bodies passing through the one or more intersections, and creating the updated route based on the number of one or more moving bodies passing through the one or more intersections, as further disclosed in Raj, with a reasonable expectation of success. Doing so would provide the benefit of providing information that gives advanced warning of a hazard or concern so that one or more remedial actions can be taken (see at least Nickolaou, [0003]). As per claim 13, similar to claim 4, the combination of Miyazawa, Nickolaou and Raj discloses all of the limitations of claim 12, as shown above. Raj further discloses the following limitation: wherein intersections from among the first portion of the plurality of intersections with more detected moving bodies are prioritized for inclusion in the updated route over intersections with fewer detected moving bodies (see at least Raj, [0088]; [0090]).
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system, as disclosed in Miyazawa as modified by Nickolaou and further modified by Raj, to provide the benefit of having intersections from among the first portion of the plurality of intersections with more detected moving bodies be prioritized for inclusion in the updated route over intersections with fewer detected moving bodies, as further disclosed in Raj, with a reasonable expectation of success. Doing so would provide the benefit of providing information that gives advanced warning of a hazard or concern so that one or more remedial actions can be taken (see at least Nickolaou, [0003]). As per claim 14, similar to claim 5, the combination of Miyazawa and Nickolaou discloses all of the limitations of claim 11, as shown above. But, neither Miyazawa nor Nickolaou explicitly teaches the following limitations taught by Raj: wherein the detecting the movement of the one or more moving bodies comprises: calculating moving velocities of the one or more moving bodies based on the one or more images shot by the one or more cameras (see at least Raj, [0102]), and wherein the determining the updated route comprises creating the updated route based on the moving velocities (see at least Raj, [0110]; [0117]; [0126]).
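The count-based limitations of claims 12 and 13 (counting moving bodies per intersection and prioritizing intersections with more detected movement for inclusion in the updated route) can be sketched as a simple weighting rule. The weight function and intersection ids are hypothetical, not drawn from Raj:

```python
# Minimal sketch of the claim 12/13 limitations: count moving bodies per
# intersection and rank intersections so that those with more detected
# movement are preferred when building the updated route. All names and the
# weighting formula are hypothetical.
def edge_weight(count, base=1.0):
    """Lower weight (higher routing priority) for more detected movement."""
    return base / (1 + count)

counts = {"X1": 5, "X2": 0, "X3": 2}  # moving bodies detected per intersection
ranked = sorted(counts, key=lambda k: edge_weight(counts[k]))
```

A shortest-path search using `edge_weight` as the edge cost would then naturally route through the higher-traffic intersections first.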
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system, as disclosed in Miyazawa as modified by Nickolaou and Raj, to provide the benefit of executing program instructions to calculate moving velocities of the one or more moving bodies passing through the one or more intersections based on the one or more images shot by the one or more cameras, and creating the updated route based on the moving velocities, as further disclosed in Raj, with a reasonable expectation of success. Doing so would provide the benefit of providing information that gives advanced warning of a hazard or concern so that one or more remedial actions can be taken (see at least Nickolaou, [0003]). As per claim 17, similar to claims 3 and 12, the combination of Miyazawa and Nickolaou discloses all of the limitations of claim 16, as shown above. But, neither Miyazawa nor Nickolaou explicitly teaches the following limitations taught in Raj: determining, based on the one or more images, a number of the one or more moving bodies passing through the one or more intersections (see at least Raj, [0037]; [0039]), and determining the updated route based on the number of one or more moving bodies passing through the one or more intersections (see at least Raj, [0043]). Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system, as disclosed in Miyazawa as modified by Nickolaou and Raj, to provide the benefit of determining, based on the one or more images, a number of the one or more moving bodies passing through the one or more intersections, and creating the updated route based on the number of one or more moving bodies passing through the one or more intersections, as further disclosed in Raj, with a reasonable expectation of success.
Doing so would provide the benefit of providing information that gives advanced warning of a hazard or concern so that one or more remedial actions can be taken (see at least Nickolaou, [0003]). As per claim 18, similar to claims 4 and 13, the combination of Miyazawa, Nickolaou and Raj discloses all of the limitations of claim 17, as shown above. Raj further discloses the following limitation: wherein intersections from among the first portion of the plurality of intersections with more detected moving bodies are prioritized for inclusion in the updated route over intersections with fewer detected moving bodies (see at least Raj, [0088]; [0090]). Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system, as disclosed in Miyazawa as modified by Nickolaou and further modified by Raj, to provide the benefit of having intersections from among the first portion of the plurality of intersections with more detected moving bodies be prioritized for inclusion in the updated route over intersections with fewer detected moving bodies, as further disclosed in Raj, with a reasonable expectation of success. Doing so would provide the benefit of providing information that gives advanced warning of a hazard or concern so that one or more remedial actions can be taken (see at least Nickolaou, [0003]). As per claim 19, similar to claims 5 and 14, the combination of Miyazawa, Nickolaou and Raj discloses all of the limitations of claim 16, as shown above. Raj fur

Prosecution Timeline

Sep 14, 2023
Application Filed
May 09, 2025
Non-Final Rejection — §103
Aug 05, 2025
Interview Requested
Aug 14, 2025
Applicant Interview (Telephonic)
Aug 14, 2025
Examiner Interview Summary
Aug 19, 2025
Response Filed
Dec 02, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594992
VEHICLE STEERING CONTROL DEVICE
2y 5m to grant Granted Apr 07, 2026
Patent 12591236
REMOTE SUPPORT SYSTEM AND REMOTE SUPPORT METHOD
2y 5m to grant Granted Mar 31, 2026
Patent 12589734
METHOD FOR DEALING WITH OBSTACLES IN AN INDUSTRIAL TRUCK
2y 5m to grant Granted Mar 31, 2026
Patent 12583517
VEHICLE STEERING CONTROL DEVICE
2y 5m to grant Granted Mar 24, 2026
Patent 12577755
WORK MACHINE AND CONTROL SYSTEM
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
56%
Grant Probability
99%
With Interview (+44.1%)
3y 2m
Median Time to Grant
Moderate
PTA Risk
Based on 119 resolved cases by this examiner. Grant probability derived from career allow rate.
