DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office Action is in response to Applicant’s amendment and request for continued examination filed 01/09/2026. Claims 1-4, 6-9, 11, and 14-20 are currently pending in this application.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 4 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
The term “highly reliable” in claim 4 is a relative term which renders the claim indefinite. The term “highly reliable” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. What is highly reliable to one of ordinary skill in the art may not be highly reliable to another of ordinary skill in the art, and the claims, in light of the Applicant’s specification, do not define the metes and bounds of the term.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 4, 6, 8-9, 14, 16, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Oshima et al. (U.S. 2023/0316923 A1).
Claim 1, Oshima teaches:
A method for providing crosswalk pedestrian guidance (Oshima, Figs. 1 and 2, Paragraph [0031]) based on an image (Oshima, Figs. 1 and 2: 56, Paragraph [0110], The images captured by infrastructure camera 56 include traffic infrastructure equipment, e.g. the road, as well as mobile bodies and pedestrians.) and a beacon (Oshima, Figs. 1 and 2: 40, Paragraph [0096], The portable information processing terminal 40 transmits position information, travel acceleration, schedule information, and the like, of the pedestrian 4.), comprising:
estimating a walking location by combining a beacon signal (Oshima, Paragraph [0118], The target traffic area recognizer 60 collects data from on-board equipment 20, on-board equipment 30, and infrastructure camera 56 for determining the location of pedestrians. For example, the external sensor of the on-board driving support device 21 of on-board equipment 20 is equivalent to a first-person view sensor for providing data surrounding a four-wheeled vehicle 2 (see Oshima, Paragraph [0037]). It is noted that a first-person view sensor is interpreted as a sensor for sensing from the perspective of its respective device, e.g. an external sensor of on-board equipment 20 is a first-person view sensor of the on-board equipment. Additionally, the portable information processing terminal 40 transmits position information, travel acceleration, schedule information, and the like, of the pedestrian 4. The collected information, in total, is used by the traffic area recognizer 60 to determine the location of each traffic participant.) and first-person view sensor information obtained from a sensor worn or carried by a pedestrian (Oshima, Paragraph [0096], The portable information processing terminal 40 is possessed or worn by the pedestrian 4, and transmits data, including biological information of the pedestrian 4, to the coordination support device 6.), wherein the combining is performed based on respective reliabilities of the beacon signal and the first-person view sensor information (Oshima, Paragraphs [0118-0119], The locations of each traffic participant are determined based on the combined data from vehicles 2, motorcycles 3, and pedestrians 4, and the on-board equipment 20 and 30, and portable information processing terminals 40, respectively. 
As per the limitation of respective reliabilities, it would have been obvious to one of ordinary skill in the art for each of the on-board equipment 20 and 30, and the portable information processing terminal 40, to have a reliability associated with the data sent to the coordination support device 6.);
analyzing a hazard factor around a pedestrian based on an image acquired from a camera corresponding to the at least one traffic light (Oshima, Paragraph [0127], The predictor 62 predicts a contact risk between any of the traffic participants that are present, which includes the pedestrians. The prediction is based on a plurality of received information by the target traffic area recognizer 60, including images captured by the infrastructure camera 56 (see Oshima, Paragraph [0118]). The hazard factor thus includes any traffic participant that may be a contact risk to a pedestrian, including a four-wheeled vehicle 2 and/or a motorcycle 3. The cameras 56 may be located adjacent to at least one traffic light 54 (see Oshima, Fig. 1).);
predicting a hazard around the pedestrian in combination by considering together the walking location, the hazard factor, and status information of the traffic light (Oshima, Paragraph [0127], The predictor 62 predicts a contact risk between any of the traffic participants that are present, which includes the pedestrians and vehicles. The prediction is based on a plurality of received information by the target traffic area recognizer 60, including traffic light state information transmitted from the traffic light control device 55 (see Oshima, Paragraph [0118]).); and
providing walking guidance to a pedestrian guidance terminal based on the predicted hazard around the pedestrian (Oshima, Paragraph [0097], Through the notification device 42 of the portable information processing terminal 40, a notification may be presented to a pedestrian, which includes risk notifications (see Oshima, Paragraphs [0102-0103]). The notification specifier 63 determines what type of notification to provide to a traffic participant based on a risk determined by predictor 62 (see Oshima, Paragraph [0129]).),
wherein estimating the walking location comprises:
receiving a first walking location estimated based on the first-person view sensor information from the pedestrian guidance terminal (Oshima, Paragraph [0096], Pedestrian information includes position information. It would have been obvious to one of ordinary skill in the art, at the time of filing, for the processing terminal 40 to have a first-person view sensor for generating the position information.);
estimating a second walking location based on the beacon signal (Oshima, Paragraph [0096], The processing terminal 40 transmits the position information and travel acceleration to the coordination support device 6. The predictor 62 of the coordination support device 6 utilizes all of its collected data to predict the future of each of the traffic participants, which includes the pedestrians (see Oshima, Paragraph [0127]).); and
estimating a third walking location by estimating a point having a highest value to be the third walking location based on at least one of probabilities or reliabilities of respective results of estimating the first walking location and the second walking location, or a combination thereof (Oshima, Paragraph [0127], It would have been obvious to one of ordinary skill in the art, at the time of filing, for the predictor 62 to select the future movement route based on reliable position data from the portable information processing terminal 40 (see Oshima, Paragraph [0096]), images from infrastructure camera 56 (see Oshima, Paragraph [0110]), and external sensor data from each on-board equipment (see Oshima, Paragraph [0037] for example), which may represent first and/or second locations, respectively. Thus, the predicted future position/travel route of a traffic participant has a highest probability because the predictor 62 calculates said position/travel route in its simulation.),
wherein analyzing the hazard factor around the pedestrian comprises:
searching for at least one camera installed in association with an identified traffic light (Oshima, Paragraphs [0110] and [0118], The coordination support device 6 receives data from all of the cameras 56 in the target traffic area 9. By receiving and utilizing the images from each of the cameras 56, the coordination support device 6 effectively “searches” for each camera 56. As can be seen in the example of Fig. 1, each camera 56 is located adjacent to or at the same intersection as each traffic light 54.);
receiving a real-time image around a crosswalk from the at least one camera (Oshima, Paragraphs [0110] and [0118], It would have been obvious to one of ordinary skill in the art, at the time of filing, for the cameras 56, which are configured to capture images containing all infrastructure equipment, e.g. the intersection, mobile bodies, and pedestrians, to be capable of capturing an image around a crosswalk. It is noted that the phrase “around a crosswalk” is interpreted as including the crosswalk, and at least a portion outside of the crosswalk.); and
recognizing a hazard factor including a speed of at least one vehicle approaching the crosswalk, a lane, an obstacle, and a degree of walking congestion (Oshima, Paragraph [0119], One example hazard includes the ability of the target traffic area recognizer 60 to utilize the received information, including the images from cameras 56, to identify the number of people in a group of pedestrians, i.e. walking congestion, as well as the moving speed of traffic participants. It is noted that the limitation “a hazard factor” is interpreted to be a single hazard that is selected from the group of at least one vehicle approaching the crosswalk, a lane, an obstacle, and a degree of walking congestion.),
estimating a third walking location by combining the first walking location with the second walking location (Oshima, Paragraph [0127], It would have been obvious to one of ordinary skill in the art for the future of each traffic participant, i.e. the pedestrian, to include at least two additional locations in addition to the first position information received from the processing terminal 40. For example, Figs. 4A-4B show a first pedestrian 95 at one position in Fig. 4A, and another position in Fig. 4B. Additionally, based on the diagonal arrow from first pedestrian 95 in Fig. 4B, the first pedestrian 95 intends to travel to a third position.).
Oshima does not explicitly teach:
Estimating a walking location corresponding to at least one traffic light.
However, as can be seen in Fig. 1, the traffic support system 1 may be implemented in a target traffic area 9, which includes at least four traffic lights 54. Therefore, one of ordinary skill in the art would recognize that a known location of a pedestrian 4 in a target traffic area 9 that includes at least one traffic light 54 would be functionally equivalent to a location corresponding to at least one traffic light.
Claim 2, Oshima further teaches:
The method of claim 1, wherein the hazard factor is analyzed based on the real-time image acquired from the camera installed at a location that allows capturing an entire crosswalk to compensate for occlusion of the first-person view sensor (Oshima, Paragraph [0119], The target traffic area recognizer 60 utilizes images from infrastructure camera 56, which identifies traffic participants in the area of the infrastructure camera 56 (see Oshima, Paragraph [0118]). It would have been obvious to one of ordinary skill in the art, at the time of filing, to modify the positioning, e.g. the orientation, of the infrastructure camera(s) 56 such that the cameras are able to capture an entire crosswalk (see Oshima, Fig. 1). Such a modification would ensure that the infrastructure camera 56 is capable of functioning according to its intended function, e.g. collecting information regarding respective traffic participants, and would therefore yield predictable results. See MPEP 2144.04. As per the limitation of “to compensate for occlusion of the first-person view sensor”, one of ordinary skill in the art would recognize that any captured data outside of the data from a first-person view sensor would be capable of compensating for deficiencies of said first-person view sensor.).
Claim 4, Oshima further teaches:
The method of claim 1, wherein estimating the third walking location is performed by utilizing a location estimation algorithm for multi-modal sensor-based navigation to achieve highly reliable walking location estimation (Oshima, Paragraph [0127], The predictor 62 selects the future movement route based on reliable position data from the portable information processing terminal 40 (see Oshima, Paragraph [0096]), images from infrastructure camera 56 (see Oshima, Paragraph [0110]), and external sensor data from each on-board equipment (see Oshima, Paragraph [0037] for example), which may represent first and/or second locations, respectively, which is functionally equivalent to a multi-modal sensor-based navigation that is highly reliable.).
Claim 6, Oshima further teaches:
The method of claim 1, wherein the at least one traffic light status information includes a color and a lighting time of the at least one traffic light (Oshima, Paragraphs [0111] and [0118]).
Claim 8, Oshima further teaches:
The method of claim 6, wherein predicting the hazard is performed based on heuristic hazard prediction of calculating a hazard degree in a corresponding hazardous situation based on hazard degrees manually set for hazardous situations designated for respective cases (Oshima, Paragraphs [0118] and [0127], The coordination support device 6 receives inputs from the on-board equipment 20 and 30, portable information processing terminals 40, infrastructure cameras 56, light control devices 55, and utilizes the received data with the predictor 62 for estimating the future movements of all of the traffic participants. The predictor 62 utilizes the data to simulate the monitoring area to determine potential risks for each traffic participant. Therefore, the method of the predictor 62 of coordination support device 6 is functionally equivalent to a heuristic prediction, e.g. a trial and error through simulation, that is set for hazardous situations, e.g. collisions between vehicles and/or pedestrians.).
Claim 9, Oshima teaches:
An apparatus for providing crosswalk pedestrian guidance (Oshima, Figs. 1 and 2: 6, Paragraph [0031]) based on an image (Oshima, Figs. 1 and 2: 56, Paragraph [0110], The images captured by infrastructure camera 56 include traffic infrastructure equipment, e.g. the road, as well as mobile bodies and pedestrians.) and a beacon (Oshima, Figs. 1 and 2: 40, Paragraph [0096], The portable information processing terminal 40 transmits position information, travel acceleration, schedule information, and the like, of the pedestrian 4.), comprising:
a memory configured to store at least one program (Oshima, Paragraph [0034], The coordination support device 6 includes one or more computers, and thus has at least a memory for storing at least one program.); and
a processor configured to execute the program (Oshima, Paragraph [0034], The coordination support device 6 includes one or more computers, and thus has at least a processor for executing at least one program.),
wherein the program is configured to estimate a walking location by combining a beacon signal (Oshima, Paragraph [0118], The target traffic area recognizer 60 collects data from on-board equipment 20, on-board equipment 30, and infrastructure camera 56 for determining the location of pedestrians. For example, the external sensor of the on-board driving support device 21 of on-board equipment 20 is equivalent to a first-person view sensor for providing data surrounding a four-wheeled vehicle 2 (see Oshima, Paragraph [0037]). It is noted that a first-person view sensor is interpreted as a sensor for sensing from the perspective of its respective device, e.g. an external sensor of on-board equipment 20 is a first-person view sensor of the on-board equipment. Additionally, the portable information processing terminal 40 transmits position information, travel acceleration, schedule information, and the like, of the pedestrian 4. The collected information, in total, is used by the traffic area recognizer 60 to determine the location of each traffic participant.) and first-person view sensor information obtained from a sensor worn or carried by a pedestrian (Oshima, Paragraph [0096], The portable information processing terminal 40 is possessed or worn by the pedestrian 4, and transmits data, including biological information of the pedestrian 4, to the coordination support device 6.), wherein the combining is performed based on respective reliabilities of the beacon signal and the first-person view sensor information (Oshima, Paragraphs [0118-0119], The locations of each traffic participant are determined based on the combined data from vehicles 2, motorcycles 3, and pedestrians 4, and the on-board equipment 20 and 30, and portable information processing terminals 40, respectively. 
As per the limitation of respective reliabilities, it would have been obvious to one of ordinary skill in the art for each of the on-board equipment 20 and 30, and the portable information processing terminal 40, to have a reliability associated with the data sent to the coordination support device 6.), analyze a hazard factor around a pedestrian based on an image acquired from a camera corresponding to the at least one traffic light (Oshima, Paragraph [0127], The predictor 62 predicts a contact risk between any of the traffic participants that are present, which includes the pedestrians. The prediction is based on a plurality of received information by the target traffic area recognizer 60, including images captured by the infrastructure camera 56 (see Oshima, Paragraph [0118]). The hazard factor thus includes any traffic participant that may be a contact risk to a pedestrian, including a four-wheeled vehicle 2 and/or a motorcycle 3. The cameras 56 may be located adjacent to at least one traffic light 54 (see Oshima, Fig. 1).), predict a hazard around the pedestrian in combination by considering together the walking location, the hazard factor, and status information of the at least one traffic light (Oshima, Paragraph [0127], The predictor 62 predicts a contact risk between any of the traffic participants that are present, which includes the pedestrians and vehicles. The prediction is based on a plurality of received information by the target traffic area recognizer 60, including traffic light state information transmitted from the traffic light control device 55 (see Oshima, Paragraph [0118]).), and provide walking guidance to a pedestrian guidance terminal based on the predicted hazard around the pedestrian (Oshima, Paragraph [0097], Through the notification device 42 of the portable information processing terminal 40, a notification may be presented to a pedestrian, which includes risk notifications (see Oshima, Paragraphs [0102-0103]). 
The notification specifier 63 determines what type of notification to provide to a traffic participant based on a risk determined by predictor 62 (see Oshima, Paragraph [0129]).),
wherein the program is configured to, in estimating the walking location, receive a first walking location estimated based on the first-person view sensor information from the pedestrian guidance terminal (Oshima, Paragraph [0096], Pedestrian information includes position information. It would have been obvious to one of ordinary skill in the art, at the time of filing, for the processing terminal 40 to have a first-person view sensor for generating the position information.), estimate a second walking location based on the beacon signal (Oshima, Paragraph [0096], The processing terminal 40 transmits the position information and travel acceleration to the coordination support device 6. The predictor 62 of the coordination support device 6 utilizes all of its collected data to predict the future of each of the traffic participants, which includes the pedestrians (see Oshima, Paragraph [0127]).), and estimate a third walking location by estimating a point having a highest value to be the third walking location based on at least one of probabilities or reliabilities of respective results of estimating the first walking location and the second walking location, or a combination thereof (Oshima, Paragraph [0127], It would have been obvious to one of ordinary skill in the art, at the time of filing, for the predictor 62 to select the future movement route based on reliable position data from the portable information processing terminal 40 (see Oshima, Paragraph [0096]), images from infrastructure camera 56 (see Oshima, Paragraph [0110]), and external sensor data from each on-board equipment (see Oshima, Paragraph [0037] for example), which may represent first and/or second locations, respectively. Thus, the predicted future position/travel route of a traffic participant has a highest probability because the predictor 62 calculates said position/travel route in its simulation.),
wherein the program is configured to, in analyzing the hazard factor around the pedestrian, search for at least one camera installed in association with an identified traffic light (Oshima, Paragraphs [0110] and [0118], The coordination support device 6 receives data from all of the cameras 56 in the target traffic area 9. By receiving and utilizing the images from each of the cameras 56, the coordination support device 6 effectively “searches” for each camera 56. As can be seen in the example of Fig. 1, each camera 56 is located adjacent to or at the same intersection as each traffic light 54.), receive a real-time image around a crosswalk from the at least one camera (Oshima, Paragraphs [0110] and [0118], It would have been obvious to one of ordinary skill in the art, at the time of filing, for the cameras 56, which are configured to capture images containing all infrastructure equipment, e.g. the intersection, mobile bodies, and pedestrians, to be capable of capturing an image around a crosswalk. It is noted that the phrase “around a crosswalk” is interpreted as including the crosswalk, and at least a portion outside of the crosswalk.), and recognize a hazard factor including a speed of at least one vehicle approaching the crosswalk, a lane, an obstacle, and a degree of walking congestion (Oshima, Paragraph [0119], One example hazard includes the ability of the target traffic area recognizer 60 to utilize the received information, including the images from cameras 56, to identify the number of people in a group of pedestrians, i.e. walking congestion, as well as the moving speed of traffic participants. It is noted that the limitation “a hazard factor” is interpreted to be a single hazard that is selected from the group of at least one vehicle approaching the crosswalk, a lane, an obstacle, and a degree of walking congestion.).
Oshima does not explicitly teach:
Estimating a walking location corresponding to at least one traffic light.
However, as can be seen in Fig. 1, the traffic support system 1 may be implemented in a target traffic area 9, which includes at least four traffic lights 54. Therefore, one of ordinary skill in the art would recognize that a known location of a pedestrian 4 in a target traffic area 9 that includes at least one traffic light 54 would be functionally equivalent to a location corresponding to at least one traffic light.
Claim 14, Oshima further teaches:
The apparatus of claim 9, wherein the at least one traffic light status information includes a color and a lighting time of the at least one traffic light (Oshima, Paragraphs [0111] and [0118]).
Claim 16, Oshima teaches:
A pedestrian guidance terminal (Oshima, Figs. 1 and 2: 40), comprising:
a memory configured to store at least one program (Oshima, Paragraph [0096], The portable information processing terminal 40 includes a smartphone, which includes a memory configured to store at least one program.); and
a processor configured to execute the at least one program (Oshima, Paragraph [0096], The portable information processing terminal 40 includes a smartphone, which includes a processor configured to execute at least one program.),
wherein the at least one program is configured to output walking guidance information and hazard warning by determining safety in combination based on walking guidance information (Oshima, Paragraph [0097], Through the notification device 42 of the portable information processing terminal 40, a notification may be presented to a pedestrian, which includes risk notifications (see Oshima, Paragraphs [0102-0103]). The notification specifier 63 determines what type of notification to provide to a traffic participant based on a risk determined by predictor 62 (see Oshima, Paragraph [0129]).) and a result of predicting a hazard around a pedestrian (Oshima, Paragraph [0127], The predictor 62 predicts a contact risk between any of the traffic participants that are present, which includes the pedestrians and vehicles. The prediction is based on a plurality of received information by the target traffic area recognizer 60, including traffic light state information transmitted from the traffic light control device 55 (see Oshima, Paragraph [0118]).), which are estimated based on first-person view sensor information (Oshima, Paragraph [0118], The target traffic area recognizer 60 collects data from on-board equipment 20, on-board equipment 30, and infrastructure camera 56 for determining the location of pedestrians. For example, the external sensor of the on-board driving support device 21 of on-board equipment 20 is equivalent to a first-person view sensor for providing data surrounding a four-wheeled vehicle 2 (see Oshima, Paragraph [0037]). It is noted that a first-person view sensor is interpreted as a sensor for sensing from the perspective of its respective device, e.g. an external sensor of on-board equipment 20 is a first-person view sensor of the on-board equipment.) 
obtained from a sensor worn or carried by the pedestrian (Oshima, Paragraph [0096], The portable information processing terminal 40 is possessed or worn by the pedestrian 4, and transmits data, including biological information of the pedestrian 4, to the coordination support device 6.), and walking guidance information and a result of predicting a hazard around the pedestrian (Oshima, Paragraph [0097], Through the notification device 42 of the portable information processing terminal 40, a notification may be presented to a pedestrian, which includes risk notifications (see Oshima, Paragraphs [0102-0103]). The notification specifier 63 determines what type of notification to provide to a traffic participant based on a risk determined by predictor 62 (see Oshima, Paragraph [0129]).), which are generated based on a walking location and the hazard factor around the pedestrian by a safe walking server and received from the safe walking server (Oshima, Paragraph [0034], The coordination support device 6 includes a server.),
wherein the walking location is a third walking location by estimating a point having a highest value to be the third walking location based on at least one of probabilities or reliabilities of respective results of estimating a first walking location and a second walking location, or a combination thereof (Oshima, Paragraph [0127], It would have been obvious to one of ordinary skill in the art, at the time of filing, for the predictor 62 to select the future movement route based on reliable position data from the portable information processing terminal 40 (see Oshima, Paragraph [0096]), images from infrastructure camera 56 (see Oshima, Paragraph [0110]), and external sensor data from each on-board equipment (see Oshima, Paragraph [0037] for example), which may represent first and/or second locations, respectively. Thus, the predicted future position/travel route of a traffic participant has a highest probability because the predictor 62 calculates said position/travel route in its simulation.),
wherein the first walking location is estimated based on the first-person view sensor information from the pedestrian guidance terminal (Oshima, Paragraph [0096], Pedestrian information includes position information. It would have been obvious to one of ordinary skill in the art, at the time of filing, for the processing terminal 40 to have a first-person view sensor for generating the position information.) and the second walking location is estimated based on the beacon signal (Oshima, Paragraph [0096], The processing terminal 40 transmits the position information and travel acceleration to the coordination support device 6. The predictor 62 of the coordination support device 6 utilizes all of its collected data to predict the future of each of the traffic participants, which includes the pedestrians (see Oshima, Paragraph [0127]).),
wherein the hazard factor around the pedestrian is analyzed by searching for at least one camera installed in association with an identified traffic light (Oshima, Paragraphs [0110] and [0118], The coordination support device 6 receives data from all of the cameras 56 in the target traffic area 9. By receiving and utilizing the images from each of the cameras 56, the coordination support device 6 effectively “searches” for each camera 56. As can be seen in the example of Fig. 1, each camera 56 is located adjacent to or at the same intersection as each traffic light 54.), receiving a real-time image around a crosswalk from the at least one camera (Oshima, Paragraphs [0110] and [0118], It would have been obvious to one of ordinary skill in the art, at the time of filing, for the cameras 56, which are configured to capture images containing all infrastructure equipment, e.g. the intersection, mobile bodies, and pedestrians, to be capable of capturing an image around a crosswalk. It is noted that the phrase “around a crosswalk” is interpreted as including the crosswalk, and at least a portion outside of the crosswalk.), and recognizing a hazard factor including a speed of at least one vehicle approaching the crosswalk, a lane, an obstacle, and a degree of walking congestion (Oshima, Paragraph [0119], One example hazard includes the ability of the target traffic area recognizer 60 to utilize the received information, including the images from cameras 56, to identify the number of people in a group of pedestrians, i.e. walking congestion, as well as the moving speed of traffic participants. It is noted that the limitation “a hazard factor” is interpreted to be a single hazard that is selected from the group of at least one vehicle approaching the crosswalk, a lane, an obstacle, and a degree of walking congestion.).
Oshima does not explicitly teach:
Final walking guidance.
However, it would have been obvious to one of ordinary skill in the art, at the time of filing, for the notifications provided to the pedestrians regarding potential risks to be functionally equivalent to a final walking guidance (see Oshima, Paragraphs [0097] and [0102]). For example, in the example of Figs. 4A and 4B, if a notification is provided to pedestrian 95 regarding mobile body 94, and mobile body 94 is the last mobile body that is present, then the notification would be functionally equivalent to a final walking guidance because no more mobile bodies 94 would be present to necessitate a notification. Such a modification would not change the principle of operation of the system, as a whole, and would yield predictable results.
Claim 18, Oshima further teaches:
The pedestrian guidance terminal of claim 16, wherein the at least one program is configured to transfer a first walking location estimated based on the first-person view sensor information to the safe walking server (Oshima, Paragraph [0096], It would have been obvious to one of ordinary skill in the art, at the time of filing, for the location transmitted by the portable information processing terminal 40 to be generated or received by a portion of the portable information processing terminal 40 that is functionally equivalent to a first-person view sensor. For example, portable information processing terminal 40 includes a smartphone, and it would have been obvious to one of ordinary skill in the art, at the time of filing, for the smartphone to have a location device, e.g. a GPS, for receiving its position information.).
Claim 19, Oshima further teaches:
The pedestrian guidance terminal of claim 16, wherein the at least one program is configured to output in advance primary walking information based on the walking guidance information and the result of predicting the hazard around the pedestrian (Oshima, Paragraph [0097], The notification information is primary walking information.), which are estimated based on the first-person view sensor information, before receiving the walking guidance information and the result of predicting the hazard around the pedestrian from the safe walking server (Oshima, Paragraph [0097], The notification information is generated from the coordination support device (see Oshima, Paragraphs [0110] and [0127]).).
Claims 3 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Oshima et al. (U.S. 2023/0316923 A1) in view of Baek (U.S. 2024/0203261 A1).
Claim 3, Oshima further teaches:
The method of claim 1, wherein estimating the second walking location comprises:
estimating the second walking location (Oshima, Paragraph [0096], The processing terminal 40 transmits the position information and travel acceleration to the coordination support device 6. The predictor 62 of the coordination support device 6 utilizes all of its collected data to predict the future of each of the traffic participants, which includes the pedestrians (see Oshima, Paragraph [0127]).), and
four or more traffic lights (Oshima, Fig. 1: 54).
Oshima does not specifically teach:
Estimating the second walking location through trilateration based on beacon signals received by the pedestrian guidance terminal from four or more traffic lights.
Baek teaches:
Estimating a walking location through trilateration based on beacon signals received by the pedestrian guidance terminal from four or more UWB anchors (Baek, Paragraph [0094], The pedestrian terminal 320 transmits a polling message to each UWB anchor and receives a transmission from each UWB anchor in response (see Baek, Paragraphs [0087-0093]).).
Therefore, it would have been obvious to one of ordinary skill in the art, at the time of filing, to modify the system in Oshima by integrating the teaching of a pedestrian terminal and UWB anchors, as taught by Baek.
The motivation would be to utilize an accurate positioning method for estimating the position of the pedestrian (see Baek, Paragraph [0097]).
Claim 11, Oshima further teaches:
The apparatus of claim 9, wherein the program is configured to, in estimating the second walking location (Oshima, Paragraph [0096], The processing terminal 40 transmits the position information and travel acceleration to the coordination support device 6. The predictor 62 of the coordination support device 6 utilizes all of its collected data to predict the future of each of the traffic participants, which includes the pedestrians (see Oshima, Paragraph [0127]).), and
four or more traffic lights (Oshima, Fig. 1: 54).
Oshima does not specifically teach:
Estimate the second walking location through trilateration based on beacon signals received by the pedestrian guidance terminal from four or more traffic lights.
Baek teaches:
Estimate a walking location through trilateration based on beacon signals received by the pedestrian guidance terminal from four or more UWB anchors (Baek, Paragraph [0094], The pedestrian terminal 320 transmits a polling message to each UWB anchor and receives a transmission from each UWB anchor in response (see Baek, Paragraphs [0087-0093]).).
Therefore, it would have been obvious to one of ordinary skill in the art, at the time of filing, to modify the system in Oshima by integrating the teaching of a pedestrian terminal and UWB anchors, as taught by Baek.
The motivation would be to utilize an accurate positioning method for estimating the position of the pedestrian (see Baek, Paragraph [0097]).
Claims 7, 15, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Oshima et al. (U.S. 2023/0316923 A1) in view of Georgescu et al. (U.S. 2016/0174902 A1).
Claim 7, Oshima further teaches:
The method of claim 6, wherein predicting the hazard is performed based on a coordination support device that is pre-programmed to infer a walking direction and a hazard degree by receiving the walking location, the hazard factor, and the at least one traffic light status information as input (Oshima, Paragraphs [0118] and [0127], The coordination support device 6 receives inputs from the on-board equipment 20 and 30, portable information processing terminals 40, infrastructure cameras 56, light control devices 55, and utilizes the received data with the predictor 62 for estimating the future movements of all of the traffic participants.).
Oshima does not specifically teach:
A deep neural network that is pre-trained to infer a walking direction and a hazard degree by receiving the walking location, the hazard factor, and the at least one traffic light status information as input.
Georgescu teaches:
A deep neural network that is pre-trained (Georgescu, Paragraph [0096]).
Therefore, it would have been obvious to one of ordinary skill in the art, at the time of filing, to modify the system in Oshima by integrating a deep neural network as taught by Georgescu.
The motivation would be to utilize the advantages of a neural network for fast and robust object detection (see Georgescu, Paragraph [0003]).
Claim 15, Oshima further teaches:
The apparatus of claim 14, wherein the program is configured to, in predicting the hazard, perform hazard prediction based on a coordination support device that is pre-programmed to infer a walking direction and a hazard degree by receiving the walking location, the hazard factor, and the at least one traffic light status information as input (Oshima, Paragraphs [0118] and [0127], The coordination support device 6 receives inputs from the on-board equipment 20 and 30, portable information processing terminals 40, infrastructure cameras 56, light control devices 55, and utilizes the received data with the predictor 62 for estimating the future movements of all of the traffic participants.).
Oshima does not specifically teach:
A deep neural network that is pre-trained to infer a walking direction and a hazard degree by receiving the walking location, the hazard factor, and the traffic light status information as input.
Georgescu teaches:
A deep neural network that is pre-trained (Georgescu, Paragraph [0096]).
Therefore, it would have been obvious to one of ordinary skill in the art, at the time of filing, to modify the system in Oshima by integrating a deep neural network as taught by Georgescu.
The motivation would be to utilize the advantages of a neural network for fast and robust object detection (see Georgescu, Paragraph [0003]).
Claim 20, Oshima further teaches:
The pedestrian guidance terminal of claim 16, wherein the at least one program is configured to output final walking information based on the coordination support device that determines a final walking guidance direction and a final hazard degree by receiving a primary walking guidance direction, a primary hazard degree, a secondary walking guidance direction, and a secondary hazard degree as input (Oshima, Paragraphs [0118] and [0127], The coordination support device 6 receives inputs from the on-board equipment 20 and 30, portable information processing terminals 40, infrastructure cameras 56, light control devices 55, and utilizes the received data with the predictor 62 for estimating the future movements of all of the traffic participants. Thus, the plurality of data received from potential hazards, e.g. vehicles, represent at least a first and second hazard degree, and the plurality of data received from each pedestrian 4 represent at least a primary walking guidance direction and a secondary walking guidance direction.).
Oshima does not specifically teach:
A pre-trained deep neural network that infers a final walking guidance direction and a final hazard degree by receiving a primary walking guidance direction, a primary hazard degree, a secondary walking guidance direction, and a secondary hazard degree as input.
Georgescu teaches:
A deep neural network that is pre-trained (Georgescu, Paragraph [0096]).
Therefore, it would have been obvious to one of ordinary skill in the art, at the time of filing, to modify the system in Oshima by integrating a deep neural network as taught by Georgescu.
The motivation would be to utilize the advantages of a neural network for fast and robust object detection (see Georgescu, Paragraph [0003]).
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Oshima et al. (U.S. 2023/0316923 A1) in view of Varoglu et al. (U.S. 2014/0066091 A1).
Claim 17, Oshima further teaches:
The pedestrian guidance terminal of claim 16, wherein the at least one program is configured to transfer a beacon signal to the safe walking server after the pedestrian starts walking along a path (Oshima, Paragraph [0096], The portable information processing terminal 40 transmits position information, travel acceleration, schedule information, and the like, of the pedestrian 4.).
Oshima does not specifically teach:
A beacon signal received from a smart device installed on a traffic light.
Varoglu teaches:
A beacon signal received from a smart device installed on a traffic light (Varoglu, Paragraph [0042], The mobile equipment 10W receives a Bluetooth LE message from stationary equipment 10X, which includes traffic lights.).
Therefore, it would have been obvious to one of ordinary skill in the art, at the time of filing, to modify the system of Oshima by integrating the teaching of stationary equipment capable of transmitting messages as taught by Varoglu.
The motivation would be to enable more accurate location determination by a receiving mobile equipment (see Varoglu, Paragraph [0042]).
Response to Arguments
Applicant's arguments filed 01/09/2026 have been fully considered but they are not persuasive.
In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., first-person view sensor) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). The Applicant argues on Page 11 that the Oshima reference fails to teach image or sensor data obtained from the pedestrian’s actual viewing perspective, which appears to be Applicant’s interpretation of a first-person view sensor. As stated in the rejection above, it is noted that a first-person view sensor is interpreted as a sensor for sensing from the perspective of its respective device, e.g. an external sensor of on-board equipment 20 is a first-person view sensor of the on-board equipment. The claims, as amended, do not inherently or explicitly define the Applicant’s claimed first-person view sensor away from the above interpretation.
Similarly, with respect to Applicant’s assertion on Pages 12-13 that the Oshima reference fails to teach a first walking location and a second walking location, the Examiner respectfully disagrees for the same reason above. The claims, as amended, do not inherently or explicitly define Applicant’s intended definitions of the sensor information and the beacon signal, respectively, so as to yield “heterogeneous sensing modalities”.
As per the Applicant’s argument regarding the step of estimating a third walking location, the limitation “based on at least one of probabilities or reliabilities of respective results” does not provide the functional language necessary to define how the claimed invention uses a probability or reliability in order to generate the third walking location. The claims are interpreted as merely requiring the step of estimating a third walking location based on the properties of the respective results, i.e. the probabilities or reliabilities. The claims are not interpreted as “using” said probabilities or reliabilities, despite the presence of the phrase “based on”.
As per the Applicant’s arguments regarding the amendments to dependent claims 2 and 4, the arguments are moot in view of the rejection above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES J YANG whose telephone number is (571)270-5170. The examiner can normally be reached 9:30am-6:00pm M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, BRIAN ZIMMERMAN can be reached at (571) 272-3059. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JAMES J YANG/Primary Examiner, Art Unit 2686