DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 09/24/2025 has been entered.
Response to Arguments
Applicant’s arguments, filed 09/24/2025, with respect to claims 1-4, 8-11, and 32-43 have been fully considered but are moot because the arguments do not apply to the references and combinations of references being used in the current rejection.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 8, 11, 32-40 and 43 are rejected under 35 U.S.C. 103 as being unpatentable over KONISHI et al. (US 20190177119 A1), hereinafter referenced as KONISHI, in view of NAGAI et al. (NAGAI TORU et al., machine translation of Japanese Patent Publication JP2014191145A, published as JP 2016062414 A), hereinafter referenced as NAGAI.
Regarding claim 1, KONISHI explicitly teaches a remote monitoring system (Fig. 1, #10 called a monitoring apparatus. Paragraph [0030]-KONISHI discloses FIG. 1 is a diagram for illustrating a monitoring apparatus. In paragraph [0031]-KONISHI discloses as illustrated in FIG. 1, the elevator monitoring apparatus 10 includes a monitoring camera 2, a determination unit 3, an adjustment unit 4, a recorder 5, and a video transmission device 6. Please also see Figs. 6 and 10) comprising:
at least one memory storing instructions (Fig. 1. Paragraph [0037]-KONISHI discloses the monitoring apparatus 10 includes a memory. The memory includes a read only memory (ROM) and a random access memory (RAM)), and at least one processor (Fig. 1. Paragraph [0037]-KONISHI discloses the monitoring apparatus 10 includes a processor) configured to execute the instructions (Fig. 1. Paragraph [0037]-KONISHI discloses the recorder 5 is formed of the memory. The determination unit 3 and the adjustment unit 4 are formed of the processor and the memory. The determination unit 3 and the adjustment unit 4 are implemented by the processor executing a program stored in the memory. Further, a plurality of processors and a plurality of memories may cooperate with each other to implement the functions of the determination unit 3 and the adjustment unit 4) to:
receive an internal image of an inside of a mobile object (Fig. 2. Paragraph [0039]-KONISHI discloses first the monitoring camera 2 takes an image of the interior of the elevator car 1 to acquire the car interior image a, and outputs the result as the car interior image data b to the determination unit 3 and the recorder 5 (Step S1) (wherein the monitoring apparatus may be a vehicle or elevator car). Please see paragraph [0122]) through a network (Fig. 1. Paragraph [0036]-KONISHI discloses the video transmission device 6 acquires the car interior image data b from the recorder 5 to transmit the car interior image data b to the outside, for example, a monitoring center. In the video transmission device 6, the image quality and the transmission frequency or the transmission interval to be used when the image is transmitted to the outside can be changed);
determine internal image quality indicating quality of the internal image (Fig. 4. Paragraph [0033]-KONISHI discloses the determination unit 3 receives the car interior image data b from the monitoring camera 2. The determination unit 3 calculates, based on the car interior image data b, the number of passengers in the elevator car 1 and a degree of positional imbalance of the passengers in the elevator car 1 to output the results as a determination result c (wherein the number of people and the positional imbalance are used to determine whether a dangerous phenomenon exists). In paragraph [0034]-KONISHI discloses the adjustment unit 4 determines, based on the determination result c, an image quality and a frame rate to be used when the recorder 5 records the car interior image data b, and outputs the results as a recording density d to the recorder 5 (wherein the recording density and frame rate represent an internal image quality)) such that a quality of the important area is higher than a quality of an area other than the important area in the internal image (Fig. 2. Paragraph [0049]-KONISHI discloses the limited recording capacity of the recorder 5 can be effectively utilized by changing the image quality and the frame rate depending on the importance. Each of the image quality and the frame rate may be changed at two stages of the “high level” and the “low level”, or the importance may be ranked (wherein importance is based on whether the number of people or the positional imbalance of people within the car indicates the possibility of a dangerous phenomenon)); and
adjust the quality of the internal image based on the internal image quality (Fig. 1. Paragraph [0063]-KONISHI discloses the image data having high importance is recorded at a high image quality and a high frame rate, and is transmitted at a high image quality and a high transmission frequency. Meanwhile, the image data having low importance is recorded at a low image quality and a low frame rate, and is transmitted at a low image quality and a low transmission frequency. In this manner, the image data having high importance can be ensured to be recorded and transmitted at a high quality. Further, for the image data having low importance, the recording density and the transmission frequency can be suppressed to be low, and thus the amount of data can be reduced as a whole).
KONISHI fails to explicitly teach to define an important area containing a person in the internal image based on whether the person is standing in the internal image; and to predict a risk of occurrence of a passenger falling down inside the mobile object based on the internal image and situation information indicating a situation of the mobile object.
However, NAGAI explicitly teaches predict a risk of occurrence of a passenger falling down (Fig. 1. Paragraph [0011]-NAGAI discloses 1A to 1D are block diagrams showing components of an in-vehicle monitoring device 1. In paragraph [0013]-NAGAI discloses the in-vehicle monitoring apparatus 1 further includes a risk determination means 15 that determines the risk of the passenger falling over based on the passenger's riding state and the traveling state of the vehicle 100. The notification means 13 notifies the safety of the passenger according to the determined degree of risk, and the vehicle control means 14 controls the vehicle 100 according to the determined degree of risk. Further in paragraph [0047]-NAGAI discloses the danger degree determination means 15 includes an information storage unit 110 for storing the boarding state and the traveling state of the vehicle 100 shown in FIG. 2 and an operation control unit 109 for determining the degree of risk of the passenger falling based on the boarding state and the traveling state. Further, the information storage unit 110 stores a program for determining the degree of danger based on the riding state and the traveling state) inside the mobile object (Fig. 2, #100 called a vehicle. Paragraph [0012]-NAGAI discloses the in-vehicle monitoring device 1 is mounted on, for example, a vehicle 100 for public passenger transport for the purpose of transporting a large number of passengers, such as a bus, a train, and a new transportation system. In paragraph [0019]-NAGAI discloses as shown in FIG. 2, the vehicle 100 includes an in-vehicle camera 105. In paragraph [0020]-NAGAI discloses FIG. 4 is a perspective view showing an example of the arrangement of the in-vehicle camera 105 through a part of the vehicle 100. The in-vehicle camera 105 captures images of seats, passengers, baggage, etc. 
in the vehicle 100 (wherein the in-vehicle camera 105 may be equipped with a fisheye lens and pan head control and may be placed on the ceiling of the vehicle 100 in the front, middle and rear direction of the vehicle 100). Please also read paragraph [0022]) based on the internal image (Paragraph [0032]-NAGAI discloses the riding condition grasping means 11 processes the image obtained from the in-vehicle camera 105 and the information obtained from the in-vehicle sensor 107 by the arithmetic control unit 109, and the necessary information from the information storage unit 110 by the arithmetic control unit 109. Further in paragraph [0042]-NAGAI discloses the boarding state grasping means 11 processes the image and information obtained from the in-vehicle camera 105 and the in-vehicle sensor 107 by the operation control unit 109, reads necessary information from the information storage unit 110, or the information storage unit 110 (wherein the riding state and/or boarding state is determined from the internal image and may include a passenger’s attribute information, location within the vehicle, position and attitude, such as the state of sitting or standing, presence or absence of movement and whether a handrail is being used in a standing position). Please also read paragraph [0022-0024, 0032 and 0071]) and situation information indicating a situation of the mobile object (Fig. 4. Paragraph [0046]-NAGAI discloses the traveling state grasping means 12 processes the information or image obtained from the vehicle sensor 108 or the out-of-vehicle camera 106 by the operation control unit 109 and reads necessary information from the information storage unit 110 or the information storage unit 110. 
The traveling state grasping means 12 detects the velocity, acceleration, angular velocity, inter-vehicle distance and relative velocity with the preceding vehicle, distance and relative velocity with the obstacle ahead of the host vehicle 100, and distance with the following vehicle (wherein the traveling state also includes, for example, whether the vehicle is traveling along areas with high variable acceleration such as a steep slope or curves). Please also see Figs. 5-6 and read paragraph [0051-0053 and 0078-0081]);
define an important area containing a person in the internal image based on whether the person is standing in the internal image (Fig. 4. Paragraph [0034]-NAGAI discloses when the in-vehicle camera 105 includes the face authentication camera 105 a, the arithmetic control unit 109 can calculate the boarding state by age group to obtain the boarding state of the elderly person who is relatively easy to fall. In paragraph [0056]-NAGAI discloses the tracking of the passenger P may be combined to determine the degree of danger. For example, when the passenger P rides, the danger degree determination unit 15 uses the operation control unit 109 to identify the age and sex of the passenger P based on the image of the face recognition camera 105a, such as an elderly person or a junior person (wherein the risk level is set to high). Then, the danger degree determination unit 15 performs image processing using the arithmetic control unit 109, the in-vehicle camera 105, and the like to perform tracking processing of the passenger P having a high degree of danger, and finally the position where the passenger P's riding condition is determined. When detecting a riding state such as the passenger P standing up or moving on the passage A while the vehicle 100 is traveling, the risk degree determination unit 15 increases the risk degree).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of KONISHI of having a remote monitoring system with the teachings of NAGAI of defining an important area containing a person in the internal image based on whether the person is standing in the internal image, and predicting a risk of occurrence of a passenger falling down inside the mobile object based on the internal image and situation information indicating a situation of the mobile object.
In the resulting combination, KONISHI’s remote monitoring system would define an important area containing a person in the internal image based on whether the person is standing in the internal image, and would predict a risk of occurrence of a passenger falling down inside the mobile object based on the internal image and situation information indicating a situation of the mobile object.
The motivation behind the modification would have been to obtain a remote monitoring system that more effectively and efficiently monitors driving and passenger behaviors as well as safety concerns, since both KONISHI and NAGAI concern image analysis and monitoring systems. KONISHI’s systems and methods improve the ability to monitor the interior of an area by allowing an image to be recorded and transmitted more efficiently, while NAGAI’s systems and methods improve driving technology by monitoring and managing driving and passenger risks. Please see KONISHI et al. (US 20190177119 A1), Abstract and Paragraph [0028 and 0122-0125], and NAGAI et al. (machine translation of Japanese Patent Publication JP2014191145A, published as JP 2016062414 A), Abstract and Paragraph [0092 and 0095].
Regarding claim 2, KONISHI in view of NAGAI explicitly teach the remote monitoring system according to Claim 1. However, KONISHI fails to explicitly teach wherein the at least one processor is configured to execute the instructions to predict acceleration of the mobile object in response to the situation information about the mobile object and predict the risk based on a result of the predicted acceleration.
However, NAGAI explicitly teaches wherein the at least one processor (Paragraph [0030]-NAGAI discloses the arithmetic control unit 109 is configured by, for example, an electronic circuit such as a central processing unit (CPU). The information storage unit 110 is configured of, for example, a main storage device to which the CPU can directly access, and stores various programs and information) is configured to execute the instructions to predict acceleration of the mobile object in response to the situation information about the mobile object (Fig. 4. Paragraph [0050]-NAGAI discloses a relatively large acceleration occurs while the vehicle 100 starts moving from a stopped state and increases in speed to shift to constant speed traveling. A relatively large acceleration in the reverse direction occurs during the period from when the vehicle 100 travels at a constant speed to when the vehicle decelerates and stops. Acceleration is also generated when the vehicle is decelerated to secure the distance to the preceding vehicle when traveling at a low speed, or accelerated to narrow the distance to the preceding vehicle. Furthermore, during turning, running on a steep slope, or running on a sharp curve, acceleration may occur not only in the front-rear direction of the vehicle 100 but also in the left-right direction or the up-down direction. In paragraph [0051]-NAGAI discloses the danger degree determination unit 15 executes the program stored in the information storage unit 110 by the arithmetic control unit 109, and the traveling state of the vehicle 100 is based on information such as speed, acceleration, angular velocity, route map, traveling route, etc. It is determined whether the vehicle is in any one of running, stopping, turning, running on a steep slope, or running on a sharp curve, from stop to start) and predict the risk based on a result of the predicted acceleration (Fig. 4. 
Paragraph [0051]-NAGAI discloses the arithmetic control unit 109 determines the position and posture of the passenger P in the vehicle 100, the boarding state of elderly people over 60, and the traveling state of the vehicle 100. Based on the determination, the degree of risk is determined (wherein multiple risk levels are assessed, which may be based on the position/type/density of passengers, the use of hand rails, and whether the vehicle will experience large variation in acceleration/deceleration due to the vehicle being in between start and stop or running on a steep slope or curve). Further in paragraph [0054]-NAGAI discloses the degree of risk can be determined in consideration of other various information. If the inter-vehicle distance with the preceding vehicle and the relative speed exceed the predetermined threshold and there is a risk of collision with the preceding vehicle unless the host vehicle 100 is decelerated, the risk is high. Please also see Fig. 8 and read paragraph [0052-0053 and 0080]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of KONISHI in view of NAGAI of having a remote monitoring system with the further teachings of NAGAI wherein the at least one processor is configured to execute the instructions to predict acceleration of the mobile object in response to the situation information about the mobile object and predict the risk based on a result of the predicted acceleration.
In the resulting combination, KONISHI’s remote monitoring system would have the at least one processor configured to execute the instructions to predict acceleration of the mobile object in response to the situation information about the mobile object and predict the risk based on a result of the predicted acceleration.
The motivation behind the modification would have been to obtain a remote monitoring system that more effectively and efficiently monitors driving and passenger behaviors as well as safety concerns, since both KONISHI and NAGAI concern image analysis and monitoring systems. KONISHI’s systems and methods improve the ability to monitor the interior of an area by allowing an image to be recorded and transmitted more efficiently, while NAGAI’s systems and methods improve driving technology by monitoring and managing driving and passenger risks. Please see KONISHI et al. (US 20190177119 A1), Abstract and Paragraph [0028 and 0122-0125], and NAGAI et al. (machine translation of Japanese Patent Publication JP2014191145A, published as JP 2016062414 A), Abstract and Paragraph [0092 and 0095].
Regarding claim 3, KONISHI in view of NAGAI explicitly teach the remote monitoring system according to Claim 2. However, KONISHI fails to explicitly teach wherein the at least one processor is configured to execute the instructions to compare an absolute value of the predicted acceleration with a threshold value, and predict that there is a risk of the accident in a case where the absolute value of the predicted acceleration is greater than or equal to the threshold value.
However, NAGAI explicitly teaches wherein the at least one processor (Paragraph [0030]-NAGAI discloses the arithmetic control unit 109 is configured by, for example, an electronic circuit such as a central processing unit (CPU). The information storage unit 110 is configured of, for example, a main storage device to which the CPU can directly access, and stores various programs and information) is configured to execute the instructions to compare an absolute value of the predicted acceleration with a threshold value, and predict that there is a risk of the accident (Fig. 4. Paragraph [0048]-NAGAI discloses FIG. 8 is a table showing an example of determination criteria for determining the risk of a passenger falling (wherein the determination is based on traveling state, riding state and boarding state; riding/boarding state includes information such as passenger position, age, and use of hand rail, and traveling state includes information such as speed, acceleration, angular velocity, and inter-vehicle distance, as well as a determination of whether the vehicle is in a state typically associated with large variation in acceleration/deceleration). In paragraph [0050]-NAGAI discloses a relatively large acceleration occurs while the vehicle 100 starts moving from a stopped state and increases in speed to shift to constant speed traveling. A relatively large acceleration in the reverse direction occurs during the period from when the vehicle 100 travels at a constant speed to when the vehicle decelerates and stops. Acceleration is also generated when the vehicle is decelerated to secure the distance to the preceding vehicle when traveling at a low speed, or accelerated to narrow the distance to the preceding vehicle.
Furthermore, during turning, running on a steep slope, or running on a sharp curve, acceleration may occur not only in the front-rear direction of the vehicle 100 but also in the left-right direction or the up-down direction) in a case where the absolute value of the predicted acceleration is greater than or equal to the threshold value (Fig. 4. Paragraph [0051]-NAGAI discloses it is determined whether the vehicle is in any one of running, stopping, turning, running on a steep slope, or running on a sharp curve, from stop to start. In paragraph [0053]-NAGAI discloses when all passengers P standing in the aisle are gripped by a strap or handrail, and the vehicle 100 is turning, the risk is the third highest. When the passengers P standing in the aisle are densely packed and the vehicle 100 is traveling in a steep gradient, the risk is the second highest. When the passengers P standing in the aisle are densely packed, and the vehicle 100 is traveling on a sharp curve, the level of risk is the highest. Further in paragraph [0054]-NAGAI discloses if the inter-vehicle distance with the preceding vehicle and the relative speed exceed the predetermined threshold and there is a risk of collision with the preceding vehicle unless the host vehicle 100 is decelerated, the risk is high. Please also read paragraph [0078-0081]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of KONISHI in view of NAGAI of having a remote monitoring system with the further teachings of NAGAI wherein the at least one processor is configured to execute the instructions to compare an absolute value of the predicted acceleration with a threshold value, and predict that there is a risk of the accident in a case where the absolute value of the predicted acceleration is greater than or equal to the threshold value.
In the resulting combination, KONISHI’s remote monitoring system would have the at least one processor configured to execute the instructions to compare an absolute value of the predicted acceleration with a threshold value, and predict that there is a risk of the accident in a case where the absolute value of the predicted acceleration is greater than or equal to the threshold value.
The motivation behind the modification would have been to obtain a remote monitoring system that more effectively and efficiently monitors driving and passenger behaviors as well as safety concerns, since both KONISHI and NAGAI concern image analysis and monitoring systems. KONISHI’s systems and methods improve the ability to monitor the interior of an area by allowing an image to be recorded and transmitted more efficiently, while NAGAI’s systems and methods improve driving technology by monitoring and managing driving and passenger risks. Please see KONISHI et al. (US 20190177119 A1), Abstract and Paragraph [0028 and 0122-0125], and NAGAI et al. (machine translation of Japanese Patent Publication JP2014191145A, published as JP 2016062414 A), Abstract and Paragraph [0092 and 0095].
Regarding claim 4, KONISHI in view of NAGAI explicitly teach the remote monitoring system according to Claim 1. KONISHI further teaches wherein in a case where the result of the predicted risk indicates presence of a risk of the accident, the at least one processor is configured to execute the instructions to determine higher quality for the quality of the internal image compared to a case in which the result of the predicted risk indicates no risk of the accident (Fig. 1. Paragraph [0049]-KONISHI discloses the limited recording capacity of the recorder 5 can be effectively utilized by changing the image quality and the frame rate depending on the importance. Each of the image quality and the frame rate may be changed at two stages of the “high level” and the “low level”, or the importance may be ranked (wherein importance is based on whether the number of people or the positional imbalance of people within the car indicates the possibility of a dangerous phenomenon). Further in paragraph [0063]-KONISHI discloses the image data having high importance is recorded at a high image quality and a high frame rate, and is transmitted at a high image quality and a high transmission frequency. The image data having low importance is recorded at a low image quality and a low frame rate, and is transmitted at a low image quality and a low transmission frequency).
Regarding claim 8, KONISHI in view of NAGAI explicitly teach the remote monitoring system according to Claim 1. However, KONISHI fails to explicitly teach wherein the situation information includes information about a position of the mobile object, and the at least one processor is configured to execute the instructions to predict, based on information about the position of the mobile object, at least one of a situation in which the mobile object stops at a station where a passenger of the mobile object gets on or off the mobile object or a situation in which the mobile object leaves the station, and in a case where a situation in which the mobile object stops at or leaves the station is predicted, predict that there is a risk of occurrence of the accident.
However, NAGAI explicitly teaches wherein the situation information includes information about a position of the mobile object, and the at least one processor (Paragraph [0030]-NAGAI discloses the arithmetic control unit 109 is configured by, for example, an electronic circuit such as a central processing unit (CPU). The information storage unit 110 is configured of, for example, a main storage device to which the CPU can directly access, and stores various programs and information) is configured to execute the instructions to predict, based on information about the position of the mobile object (Paragraph [0047]-NAGAI discloses the danger degree determination means 15 includes an information storage unit 110 for storing the boarding state and the traveling state of the vehicle 100 shown in FIG. 2 and an operation control unit 109 for determining the degree of risk of the passenger falling based on the boarding state and the traveling state. Further, the information storage unit 110 stores a program for determining the degree of danger based on the riding state and the traveling state (wherein the riding state and/or boarding state may include a passenger’s attribute information, location within the vehicle, position and attitude, such as the state of sitting or standing, presence or absence of movement and whether a handrail is being used in a standing position, and the traveling state may include route map, traveling map, direction, velocity, acceleration, angular velocity, inter-vehicle distance, relative velocity with the preceding vehicle, distance and relative velocity with the obstacle ahead of the host vehicle 100, and distance with the following vehicle)), at least one of a situation in which the mobile object stops at a station where a passenger of the mobile object gets on or off the mobile object or a situation in which the mobile object leaves the station, and in a case where a situation in which the mobile object stops at or leaves the station is 
predicted, predict that there is a risk of occurrence of the accident (Paragraph [0104]-NAGAI discloses when the vehicle 100 stops at the stop and the passenger P is getting on and off, the getting-in state grasping means 11 determines that the getting-in state of the passenger P is moving in step S1. Further, in step S2, the traveling state grasping means 12 grasps that the vehicle 100 is stopped. In this case, in step S4, for example, the degree of danger is determined to be high by the degree-of-risk determination means 15. Then, in step S3, the notification means 13 displays that the passenger P is moving on the monitor 103, or the driver 104 is notified that the passenger P is moving by audio guidance by the speaker 104. As a result, the driver's attention is drawn and the fall of the passenger P is prevented. In paragraph [0106]-NAGAI discloses then, in step S2, when it is determined that the vehicle is stopped by the traveling state grasping means 12 and the riding state of all the passengers P is seated or gripped by a strap or handrail, the degree of danger in step S4 is determined to be low by the danger degree determination means 15. Here, when there is a passenger P who is not seated and is not gripped by a strap or a handrail, the degree of risk may be determined to be high. In this case, steps S1, S2, S4, and S3 are sequentially repeated as in the case where the passenger P is moving. Please also read paragraph [0050-0053 and 0105]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of KONISHI in view of NAGAI of having a remote monitoring system with the further teachings of NAGAI wherein the situation information includes information about a position of the mobile object, and the at least one processor is configured to execute the instructions to predict, based on information about the position of the mobile object, at least one of a situation in which the mobile object stops at a station where a passenger of the mobile object gets on or off the mobile object or a situation in which the mobile object leaves the station, and in a case where a situation in which the mobile object stops at or leaves the station is predicted, predict that there is a risk of occurrence of the accident.
In the resulting combination, KONISHI’s remote monitoring system would have situation information including information about a position of the mobile object, and the at least one processor configured to execute the instructions to predict, based on information about the position of the mobile object, at least one of a situation in which the mobile object stops at a station where a passenger of the mobile object gets on or off the mobile object or a situation in which the mobile object leaves the station, and in a case where a situation in which the mobile object stops at or leaves the station is predicted, predict that there is a risk of occurrence of the accident.
The motivation behind the modification would have been to obtain a remote monitoring system that more effectively and efficiently monitors driving and passenger behaviors as well as safety concerns, since both KONISHI and NAGAI concern image analysis and monitoring systems. KONISHI's systems and methods improve the ability to monitor the interior of an area by allowing an image to be recorded and transmitted more efficiently, while NAGAI's systems and methods improve driving technology by monitoring and managing driving and passenger risks. Please see KONISHI et al. (US 20190177119 A1), Abstract and Paragraphs [0028] and [0122]-[0125], and NAGAI et al. (machine translation of Japanese Patent Publication JP2014191145A, published as JP 2016062414 A), Abstract and Paragraphs [0092] and [0095].
Regarding claim 11, KONISHI in view of NAGAI explicitly teach the remote monitoring system according to Claim 1. KONISHI fails to explicitly teach wherein the situation information includes a distance between the mobile object and another mobile object present around the mobile object, and the at least one processor is configured to execute the instructions to predict, based on the distance between the mobile object and the another mobile object, a situation in which the mobile object is highly likely to come into contact with the another mobile object, and in a case where a situation is predicted in which the mobile object is likely to come into contact with the another mobile object, and in a case where an absolute value of predicted value of acceleration owing to motion to avoid the contact is greater than or equal to a threshold value, the at least one processor is configured to execute the instructions to predict that there is a risk of occurrence of the accident.
However, NAGAI explicitly teaches wherein the situation information includes a distance between the mobile object and another mobile object present around the mobile object (Fig. 4. Paragraph [0046]-NAGAI discloses the traveling state grasping means 12 detects the velocity, acceleration, angular velocity, inter-vehicle distance and relative velocity with the preceding vehicle, distance and relative velocity with the obstacle ahead of the host vehicle 100, and distance with the following vehicle), and the at least one processor (Paragraph [0030]-NAGAI discloses the arithmetic control unit 109 is configured by, for example, an electronic circuit such as a central processing unit (CPU). The information storage unit 110 is configured of, for example, a main storage device to which the CPU can directly access, and stores various programs and information) is configured to execute the instructions to predict, based on the distance between the mobile object and the another mobile object, a situation in which the mobile object is highly likely to come into contact with the another mobile object, and in a case where a situation is predicted in which the mobile object is likely to come into contact with the another mobile object, and in a case where an absolute value of predicted value of acceleration owing to motion to avoid the contact is greater than or equal to a threshold value, the at least one processor is configured to execute the instructions to predict that there is a risk of occurrence of the accident (Fig. 4. Paragraph [0054]-NAGAI discloses the degree of risk can be determined in consideration of other various information. For example, if the inter-vehicle distance with the preceding vehicle and the relative speed exceed the predetermined threshold and there is a risk of collision with the preceding vehicle unless the host vehicle 100 is decelerated, the risk is high.
Additionally in paragraph [0080]-NAGAI discloses the risk determination means 15 or the vehicle control means 14 can determine the risk of occurrence of an accident based on the distance or relative speed between the front and rear vehicles or obstacles and the host vehicle 100 and the riding condition of the passenger P. For example, when sudden braking or a rear-end collision is predicted based on the distance between the front and rear vehicles or obstacles and the host vehicle 100, the danger level is increased according to the riding state of the passenger P. A warning can be given to the driver and the passenger P in advance, and a fall of the passenger P can be suppressed more reliably. Please also read paragraphs [0051]-[0053]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the remote monitoring system of KONISHI in view of NAGAI to incorporate NAGAI's teaching wherein the situation information includes a distance between the mobile object and another mobile object present around the mobile object, and the at least one processor is configured to execute the instructions to predict, based on the distance between the mobile object and the another mobile object, a situation in which the mobile object is highly likely to come into contact with the another mobile object, and in a case where a situation is predicted in which the mobile object is likely to come into contact with the another mobile object, and in a case where an absolute value of predicted value of acceleration owing to motion to avoid the contact is greater than or equal to a threshold value, the at least one processor is configured to execute the instructions to predict that there is a risk of occurrence of the accident.
The resulting combination provides the remote monitoring system of KONISHI with these features taught by NAGAI.
The motivation behind the modification would have been to obtain a remote monitoring system that more effectively and efficiently monitors driving and passenger behaviors as well as safety concerns, since both KONISHI and NAGAI concern image analysis and monitoring systems. KONISHI's systems and methods improve the ability to monitor the interior of an area by allowing an image to be recorded and transmitted more efficiently, while NAGAI's systems and methods improve driving technology by monitoring and managing driving and passenger risks. Please see KONISHI et al. (US 20190177119 A1), Abstract and Paragraphs [0028] and [0122]-[0125], and NAGAI et al. (machine translation of Japanese Patent Publication JP2014191145A, published as JP 2016062414 A), Abstract and Paragraphs [0092] and [0095].
Regarding claim 32, KONISHI in view of NAGAI explicitly teach the remote monitoring system according to Claim 1. KONISHI further teaches wherein the at least one processor is further configured to:
detect an attribute of the person in the internal image (Fig. 1. Paragraph [0045]-KONISHI discloses the monitoring apparatus according to Embodiment 1 of the present invention adjusts the recording density d and the communication frequency e of the car interior image data b acquired by the monitoring camera 2 in accordance with the number of people present in the space to be monitored and the positional imbalance of the people (wherein an attribute of a person is their position)).
KONISHI fails to explicitly teach define the important area containing the person in the internal image based on the attribute of the person and whether the person is standing in the internal image.
However, NAGAI explicitly teaches define the important area containing the person in the internal image based on the attribute of the person and whether the person is standing in the internal image (Fig. 4. Paragraph [0056]-NAGAI discloses the tracking of the passenger P may be combined to determine the degree of danger. The danger degree determination unit 15 uses the operation control unit 109 to identify the age and sex of the passenger P based on the image of the face recognition camera 105a, such as an elderly person or a junior person (wherein the falling risk level is set to high). Then, the danger degree determination unit 15 performs image processing using the arithmetic control unit 109, the in-vehicle camera 105, and the like to perform tracking processing of the passenger P having a high degree of danger, and finally the position where the passenger P's riding condition is determined. When detecting a riding state such as the passenger P standing up or moving on the passage A while the vehicle 100 is traveling, the risk degree determination unit 15 increases the risk degree. In paragraph [0059]-NAGAI discloses the notification unit 13 causes at least one of the riding state and the risk degree to be displayed on the monitor 103 as a display device, and performs notification on the safety of the passenger P. In paragraph [0060]-NAGAI discloses elderly people, passengers P or baggage L who are not gripped by a strap or handrail in a state of standing in the aisle may be displayed by being distinguished by marks of different shapes or different colors. Please also see Fig. 5-6 and read paragraph [0070]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the remote monitoring system of KONISHI in view of NAGAI to incorporate NAGAI's teaching of defining the important area containing the person in the internal image based on the attribute of the person and whether the person is standing in the internal image.
The resulting combination provides the remote monitoring system of KONISHI with this feature taught by NAGAI.
The motivation behind the modification would have been to obtain a remote monitoring system that more effectively and efficiently monitors driving and passenger behaviors as well as safety concerns, since both KONISHI and NAGAI concern image analysis and monitoring systems. KONISHI's systems and methods improve the ability to monitor the interior of an area by allowing an image to be recorded and transmitted more efficiently, while NAGAI's systems and methods improve driving technology by monitoring and managing driving and passenger risks. Please see KONISHI et al. (US 20190177119 A1), Abstract and Paragraphs [0028] and [0122]-[0125], and NAGAI et al. (machine translation of Japanese Patent Publication JP2014191145A, published as JP 2016062414 A), Abstract and Paragraphs [0092] and [0095].
Regarding claim 33, KONISHI in view of NAGAI explicitly teach the remote monitoring system according to Claim 32. KONISHI fails to explicitly teach wherein the at least one processor is further configured to: in a case where the person is determined to be a vulnerable person requiring special attention, the vulnerable person including at least one of a child, an elderly person, and a person with a disability, predict the risk of the occurrence of the passenger falling down to a first level that is higher than a second level set in a case where the person is not determined to be the vulnerable person.
However, NAGAI explicitly teaches wherein the at least one processor (Paragraph [0030]-NAGAI discloses the arithmetic control unit 109 is configured by, for example, an electronic circuit such as a central processing unit (CPU). The information storage unit 110 is configured of, for example, a main storage device to which the CPU can directly access, and stores various programs and information) is further configured to: in a case where the person is determined to be a vulnerable person requiring special attention, the vulnerable person including at least one of a child, an elderly person, and a person with a disability, predict the risk of the occurrence of the passenger falling down to a first level that is higher than a second level set in a case where the person is not determined to be the vulnerable person (Fig. 1. Paragraph [0048]-NAGAI discloses FIG. 8 is a table showing an example of determination criteria for determining the risk of a passenger falling. The danger degree determination means 15, via the arithmetic control unit 109, reads the program stored in the information storage unit 110, the riding condition of the passenger P, and the traveling condition of the vehicle 100, and determines the danger of the passenger falling. In paragraph [0056]-NAGAI discloses when the passenger P rides, the danger degree determination unit 15 uses the operation control unit 109 to identify the age and sex of the passenger P based on the image of the face recognition camera 105a, such as an elderly person or a junior person (wherein the falling risk level is set to high for elderly, pregnant and junior persons). The danger degree determination unit 15 performs image processing using the arithmetic control unit 109, the in-vehicle camera 105, and the like to perform tracking processing of the passenger P having a high degree of danger, and finally the position where the passenger P's riding condition is determined.
When detecting a riding state such as the passenger P standing up or moving on the passage, while the vehicle 100 is traveling, the risk degree determination unit 15 increases the risk degree).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the remote monitoring system of KONISHI in view of NAGAI to incorporate NAGAI's teaching wherein the at least one processor is further configured to: in a case where the person is determined to be a vulnerable person requiring special attention, the vulnerable person including at least one of a child, an elderly person, and a person with a disability, predict the risk of the occurrence of the passenger falling down to a first level that is higher than a second level set in a case where the person is not determined to be the vulnerable person.
The resulting combination provides the remote monitoring system of KONISHI with this feature taught by NAGAI.
The motivation behind the modification would have been to obtain a remote monitoring system that more effectively and efficiently monitors driving and passenger behaviors as well as safety concerns, since both KONISHI and NAGAI concern image analysis and monitoring systems. KONISHI's systems and methods improve the ability to monitor the interior of an area by allowing an image to be recorded and transmitted more efficiently, while NAGAI's systems and methods improve driving technology by monitoring and managing driving and passenger risks. Please see KONISHI et al. (US 20190177119 A1), Abstract and Paragraphs [0028] and [0122]-[0125], and NAGAI et al. (machine translation of Japanese Patent Publication JP2014191145A, published as JP 2016062414 A), Abstract and Paragraphs [0092] and [0095].
Regarding claim 34, KONISHI explicitly teaches a remote monitoring method (Fig. 1. Paragraph [0030]-KONISHI discloses FIG. 1 is a diagram for illustrating a monitoring apparatus. In paragraph [0031]-KONISHI discloses as illustrated in FIG. 1, the elevator monitoring apparatus 10 includes a monitoring camera 2, a determination unit 3, an adjustment unit 4, a recorder 5, and a video transmission device 6. Please also see Fig. 6 and 10) comprising:
receiving an internal image of an inside of a mobile object (Fig. 2. Paragraph [0039]-KONISHI discloses first the monitoring camera 2 takes an image of the interior of the elevator car 1 to acquire the car interior image a, and outputs the result as the car interior image data b to the determination unit 3 and the recorder 5 (Step S1) (wherein the monitoring apparatus may be a vehicle or elevator car). Please see paragraph [0122]) through a network (Fig. 1. Paragraph [0036]-KONISHI discloses the video transmission device 6 acquires the car interior image data b from the recorder 5 to transmit the car interior image data b to the outside, for example, a monitoring center. In the video transmission device 6, the image quality and the transmission frequency or the transmission interval to be used when the image is transmitted to the outside can be changed);
determining internal image quality indicating quality of the internal image (Fig. 4. Paragraph [0033]-KONISHI discloses the determination unit 3 receives the car interior image data b from the monitoring camera 2. The determination unit 3 calculates, based on the car interior image data b, the number of passengers in the elevator car 1 and a degree of positional imbalance of the passengers in the elevator car 1 to output the results as a determina