DETAILED ACTION
1. The Office Action is in response to Application No. 18918556, filed on 10/17/2024. Claims 1-20 are pending.
Notice of Pre-AIA or AIA Status
2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
3. Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in the present Application No. 18918556, filed on 10/17/2024.
Priority Number    Filing Date    Country
10-2023-0152849    2023-11-07     KR
CLAIM INTERPRETATION
4. The following is a quotation of 35 U.S.C. 112(f):
(f) ELEMENT IN CLAIM FOR A COMBINATION.—An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
“an image processing module configured…” in independent claim 11, and its dependent claims;
“an event detection module configured to…” in independent claim 11, and its dependent claims;
“an image storage module configured to…” in independent claim 11, and its dependent claims;
“a data communication module configured to …” in independent claim 11, and its dependent claims;
“an occupant state detection module configured to…” in independent claim 11, and its dependent claims;
“a control module configured to…” in independent claim 11, and its dependent claims.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
A review of the specification shows that:
“an image processing module configured…” in independent claim 11, and its dependent claims, corresponds to component 110 in fig. 1 and paragraph 0043; it is a combination of hardware and software;
“an event detection module configured to…” in independent claim 11, and its dependent claims, corresponds to component 140 in fig. 1 and paragraph 0043; it is a combination of hardware and software;
“an image storage module configured to…” in independent claim 11, and its dependent claims, corresponds to component 150 in fig. 1 and paragraph 0043; it is a combination of hardware and software;
“a data communication module configured to…” in independent claim 11, and its dependent claims, corresponds to component 130 in fig. 1 and paragraph 0043; it is a combination of hardware and software;
“an occupant state detection module configured to…” in independent claim 11, and its dependent claims, corresponds to component 120 in fig. 1 and paragraph 0043; it is a combination of hardware and software;
“a control module configured to…” in independent claim 11, and its dependent claims, corresponds to component 170 in fig. 1 and paragraph 0043; it is a combination of hardware and software.
If applicant wishes to provide further explanation or dispute the examiner’s interpretation of the corresponding structure, applicant must identify the corresponding structure with reference to the specification by page and line number, and to the drawing, if any, by reference characters in response to this Office action.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may:
(1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
For more information, see MPEP § 2173 et seq. and Supplementary Examination Guidelines for Determining Compliance With 35 U.S.C. 112 and for Treatment of Related Issues in Patent Applications, 76 FR 7162, 7167 (Feb. 9, 2011).
Claim Rejections - 35 USC § 112
6. The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
7. Claim 1 and its dependent claims 2-10 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention.
For claim 1, the claim recites “recorded images” in “transmitting recorded images related to the event to a management server.” However, it is not clear where the recorded images come from. In addition, the claim previously recites “providing an image of an event related to a vehicle”; since the method provides an image (singular), it is unclear how it transmits images (plural) instead of an image.
Thus, the scope of claim 1 and its dependent claims 2-10 is unclear.
8. Claim 11 and its dependent claims 12-17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention.
For claim 11, the claim recites “images” in “receive images acquired by the at least one camera related to the event through the image processing module.” However, it is not clear how many images the camera takes, since the claim previously recites “a camera configured to acquire an image of a situation outside or inside the vehicle”; because the camera acquires only an image (singular), it is unclear how it takes multiple images.
The claim also recites “the at least one camera” in the same limitation. However, “the at least one camera” lacks clear antecedent basis, since the claim previously recites only “a camera configured to acquire an image of a situation outside or inside the vehicle”; it is unclear whether the system may include more than one camera.
Thus, the scope of claim 11 and its dependent claims 12-17 is unclear.
Claim Rejections - 35 USC § 103
9. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
10. Claims 1-3, 11-12, 14, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Sato et al. (US 20070001512) in view of OTA (WO 2023047505).
Regarding claim 1, Sato teaches a method (fig. 1) of providing images of an event related to a vehicle (fig. 1, paragraph 0005), the method comprising:
determining that the event has occurred (paragraph 0037, the time at which a collision is detected is deemed the base time (collision time));
determining that the event is related to an emergency situation by detecting an occupant in the vehicle by a sensor (paragraph 0042, … In the time period of several seconds (for example, three seconds) which is the predetermined time period leading up to the operation time, relatively crucial information about the state of consciousness of the occupants as well as the presence and severity of injuries is detected based on the actions and countenance of the occupants shown in the images of the occupants taken by the camera 11);
and in response to determining that the event is related to the emergency situation (paragraph 0037, … the time at which a collision is detected; paragraph 0038, … relatively crucial information about the status of the occupants immediately prior to the collision (for example, whether or not an occupant suffered a seizure or adopted a defensive stance) is detected), transmitting recorded images (fig. 1, component 14, image data storage apparatus) related to the event to a management server (fig. 1; component 30 is the management server; paragraph 0048, … in the case of an emergency such as a collision, image frames with a relatively high priority level are sent to the emergency reporting center 30 preferentially over image frames with a relatively low priority level).
It is noted that Sato does not explicitly disclose detecting a biosignal of an occupant in the vehicle.
OTA discloses of detecting a biosignal of an occupant in the vehicle (fig. 1, component 230; page 11, The biometric information acquiring unit 230 acquires biometric information using the captured image including the body part specified by the part specifying unit; the biosignal is used to detect emergency, as shown in page 11-12, …the vehicle information acquired from the vehicle sensor 310 may be used to detect the occupant, or the detection result of the occupant before the occurrence of the accident may be used. For example, it can be determined by using the results of skeleton detection and face detection in infrared light images taken before the accident, or by determining that the seat sensor information is above a threshold and that the occupant is in the seat… Captures and acquires changes in luminance on the face surface that are considered to be caused by the blood flow of the occupant… Breathing: identified from the movement of body parts (chest, shoulders, abdomen, etc.) identified in the captured image ).
It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to incorporate detecting a biosignal of an occupant in the vehicle as a modification to the method, for the benefit of accurate detection of the occupant (page 11).
Regarding claim 11, Sato teaches a system (fig. 1) for providing an image of an event related to a vehicle (fig. 1, paragraph 0005), the system comprising:
a camera (fig. 1, component 11) configured to acquire an image of a situation outside or inside the vehicle (paragraph 0021);
an image processing module (fig. 1, component 21) configured to process the image acquired by the at least one camera (paragraph 0026);
an event detection module (fig. 1, component 13) configured to detect whether a collision event has occurred (paragraph 0037, the time at which a collision is detected is deemed the base time (collision time));
an image storage module (fig. 1, component 14) configured to, in response to the collision event being detected by the event detection module (paragraph 0038, … relatively crucial information, such as the collision target…, is detected in the predetermined time period… the highest priority level P5 is set for image frames corresponding to the time period), receive images acquired by the at least one camera related to the event through the image processing module, and store the received images (paragraph 0024);
a data communication module (fig. 1, component 12) configured to transmit data (as shown in fig. 1);
detect a state of an occupant of the vehicle (paragraph 0042, … In the time period of several seconds (for example, three seconds) which is the predetermined time period leading up to the operation time, relatively crucial information about the state of consciousness of the occupants as well as the presence and severity of injuries is detected based on the actions and countenance of the occupants shown in the images of the occupants taken by the camera 11);
and a control module (fig. 1, component 23) configured to, in response to the collision event being detected by the event detection module (paragraph 0037, … the time at which a collision is detected ) and it being determined that it is an emergency situation based on the state of the occupant (paragraph 0038, … relatively crucial information about the status of the occupants immediately prior to the collision (for example, whether or not an occupant suffered a seizure or adopted a defensive stance) is detected), transmit recorded images stored in the image storage module through the data communication module (fig. 1; paragraph 0048, … in the case of an emergency such as a collision, image frames with a relatively high priority level are sent to the emergency reporting center 30 preferentially over image frames with a relatively low priority level).
It is noted that Sato does not explicitly disclose an occupant state detection module.
OTA discloses of occupant state detection module (fig. 1, component 220; page 11, ; In identifying the parts of the body of the occupant, first, the body part identifying section 220 detects the occupant whose body part is to be identified; the biosignal is used to detect emergency, as shown in page 11-12, …the vehicle information acquired from the vehicle sensor 310 may be used to detect the occupant, or the detection result of the occupant before the occurrence of the accident may be used. For example, it can be determined by using the results of skeleton detection and face detection in infrared light images taken before the accident, or by determining that the seat sensor information is above a threshold and that the occupant is in the seat… Captures and acquires changes in luminance on the face surface that are considered to be caused by the blood flow of the occupant… Breathing: identified from the movement of body parts (chest, shoulders, abdomen, etc.) identified in the captured image).
It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to incorporate an occupant state detection module as a modification to the system, for the benefit of accurate detection of the occupant (page 11).
Regarding claim 18, Sato teaches a method (fig. 1) of providing images of an event related to a vehicle (fig. 1, paragraph 0005), the method comprising:
capturing a first set of images related to the vehicle (fig. 1, component 11; paragraph 0021);
determining whether the event has occurred (paragraph 0037, the time at which a collision is detected is deemed the base time (collision time));
in response to determining that the event has occurred, storing the first set of images captured before the event (fig. 1, component 14; paragraph 0038, … relatively crucial information, such as the collision target…, is detected in the predetermined time period… the highest priority level P5 is set for image frames corresponding to the time period; paragraph 0024);
determining whether the event is related to an emergency situation by detecting an occupant in the vehicle (paragraph 0042, … In the time period of several seconds (for example, three seconds) which is the predetermined time period leading up to the operation time, relatively crucial information about the state of consciousness of the occupants as well as the presence and severity of injuries is detected based on the actions and countenance of the occupants shown in the images of the occupants taken by the camera 11);
and in response to determining that the event is related to the emergency situation (paragraph 0037, … the time at which a collision is detected; paragraph 0038, … relatively crucial information about the status of the occupants immediately prior to the collision (for example, whether or not an occupant suffered a seizure or adopted a defensive stance) is detected), transmitting the first set of images related to the event along with a rescue request signal (fig. 1; paragraph 0048, … in the case of an emergency such as a collision, image frames with a relatively high priority level are sent to the emergency reporting center 30 preferentially over image frames with a relatively low priority level; the highest priority level images are rescue request by default).
It is noted that Sato does not explicitly disclose detecting a biosignal of an occupant in the vehicle.
OTA discloses of detecting a biosignal of an occupant in the vehicle (fig. 1, component 230; page 11, The biometric information acquiring unit 230 acquires biometric information using the captured image including the body part specified by the part specifying unit; the biosignal is used to detect emergency, as shown in page 11-12, …the vehicle information acquired from the vehicle sensor 310 may be used to detect the occupant, or the detection result of the occupant before the occurrence of the accident may be used. For example, it can be determined by using the results of skeleton detection and face detection in infrared light images taken before the accident, or by determining that the seat sensor information is above a threshold and that the occupant is in the seat… Captures and acquires changes in luminance on the face surface that are considered to be caused by the blood flow of the occupant… Breathing: identified from the movement of body parts (chest, shoulders, abdomen, etc.) identified in the captured image).
It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to incorporate detecting a biosignal of an occupant in the vehicle as a modification to the method, for the benefit of accurate detection of the occupant (page 11).
Regarding claim 2, the combination of Sato and OTA teaches the limitations recited in claim 1 as discussed above. In addition, Sato further discloses transmitting the recorded images along with a rescue request signal to the management server (as shown in fig. 1, component 30 is the management server; paragraph 0048, … in the case of an emergency such as a collision, image frames with a relatively high priority level are sent to the emergency reporting center 30 preferentially over image frames with a relatively low priority level; the highest priority level images are a rescue request by default).
Regarding claim 3, the combination of Sato and OTA teaches the limitations recited in claim 1 as discussed above. In addition, Sato further discloses acquiring indoor or outdoor images of the vehicle (fig. 1; paragraph 0021, … outputs captured images of the vehicle interior and occupants as well as images of the external environment); and continuously transmitting the indoor or outdoor images to the management server (fig. 2).
Regarding claim 12, the combination of Sato and OTA teaches the limitations recited in claim 11 as discussed above. In addition, Sato further discloses transmitting a rescue request signal to the management server through the data communication module (as shown in fig. 1, component 30 is the management server; paragraph 0048, … in the case of an emergency such as a collision, image frames with a relatively high priority level are sent to the emergency reporting center 30 preferentially over image frames with a relatively low priority level; the highest priority level images are a rescue request by default).
Regarding claim 14, the combination of Sato and OTA teaches the limitations recited in claim 11 as discussed above. In addition, OTA further discloses detecting the state of the occupant by one of or any combination of a change in pressure of an airbag after being deployed, a change in tension of a seatbelt, or a movement of the occupant inside the vehicle (page 7, … detecting a vehicle accident using a signal from the vehicle sensor… seat sensor… airbag operation notification).
The motivation of combination is the same as in claim 11’s rejection.
Regarding claim 19, the combination of Sato and OTA teaches the limitations recited in claim 18 as discussed above. In addition, Sato further discloses: in response to determining that the event is related to the emergency situation, capturing a second set of images after the event including vehicle interior images of the occupant in the vehicle; and transmitting the second set of images (fig. 1; paragraph 0021, … The camera 11, which is mounted to the roof inside the vehicle and has a horizontal field of view of 360.degree., outputs captured images of the vehicle interior and occupants as well as images of the external environment around the vehicle captured through the vehicle windows).
Regarding claim 20, the combination of Sato and OTA teaches the limitations recited in claim 18 as discussed above. In addition, Sato further discloses that transmitting information relating to the state of the occupant (paragraph 0042, … In the time period of several seconds (for example, three seconds) which is the predetermined time period leading up to the operation time, relatively crucial information about the state of consciousness of the occupants as well as the presence and severity of injuries is detected based on the actions and countenance of the occupants shown in the images of the occupants taken by the camera 11; fig. 1);
OTA further discloses, in response to determining that the event is related to the emergency situation, detecting a state of the occupant by sensing one of or any combination of a change in pressure of an airbag after being deployed, a change in tension of a seatbelt, or a movement of the occupant inside the vehicle (page 7, … detecting a vehicle accident using a signal from the vehicle sensor… seat sensor… airbag operation notification).
The motivation of combination is the same as in claim 18’s rejection.
11. Claims 4-7 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Sato et al. (US 20070001512) in view of OTA (WO 2023047505), and further in view of WANG et al. (CN 108423003).
Regarding claim 4, the combination of Sato and OTA teaches the limitations recited in claim 1 as discussed above. In addition, Sato further discloses transmitting the recorded images to at least one terminal (fig. 1).
It is noted that Sato does not explicitly disclose authenticating a driver of the vehicle and transmitting based on the authentication.
WANG discloses authenticating a driver of the vehicle and transmitting based on the authentication (fig. 1; page 8, photographing by the driver monitoring camera at the vehicle end, photo transmission for human face comparison to the background monitoring centre through the cloud server; the driver face recognition serves as identity authentication).
It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to incorporate authenticating a driver of the vehicle and transmitting based on the authentication as a modification to the method, for the benefit of ensuring that the driver is the right person, increasing the security level (page 8).
Regarding claim 5, the combination of Sato, OTA and WANG teaches the limitations recited in claim 4 as discussed above. In addition, WANG further discloses, in response to a request from the at least one terminal, providing indoor or outdoor images from the vehicle to the at least one terminal (page 8, … vehicle warning and fatigue information are uploaded to the cloud server and stored; the cloud server provides the fatigue early-warning information to the monitoring centre; the monitoring centre dials the telephone of the driver).
The motivation of combination is the same as in claim 4’s rejection.
Regarding claim 6, the combination of Sato, OTA and WANG teaches the limitations recited in claim 4 as discussed above. In addition, WANG further discloses determining whether the driver is a registered owner, and that the at least one terminal includes an authorized terminal of a person selected by the registered owner (page 8, … the vehicle end performs driver human face comparison, or photographing by the driver monitoring camera at the vehicle end with photo transmission for human face comparison to the background monitoring centre through the cloud server; the driver face recognition work attendance checking authentication).
The motivation of combination is the same as in claim 4’s rejection.
Regarding claim 7, the combination of Sato, OTA and WANG teaches the limitations recited in claim 4 as discussed above. In addition, WANG further discloses determining whether the driver is a registered owner, and that the at least one terminal includes an account terminal of the registered owner and an authorized terminal of a person selected by the registered owner (page 8, … photo transmission for human face comparison to the background monitoring centre through the cloud server; the driver face recognition work attendance checking authentication; the cloud server has an account terminal of the registered owner).
The motivation of combination is the same as in claim 4’s rejection.
Regarding claim 13, the combination of Sato and OTA teaches the limitations recited in claim 11 as discussed above. In addition, Sato further discloses, after transmitting the recorded images, transmitting images acquired through the at least one camera to the data communication module via the image processing module (fig. 1).
It is noted that Sato does not explicitly disclose transmitting real-time images.
WANG discloses transmitting real-time images (fig. 1; page 12, photographing the current driver by a first camera to obtain a target picture, while starting to count the current driver’s driving duration in real time).
It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to incorporate transmitting real-time images as a modification to the system, for the benefit of sending images quickly (page 12).
12. Claims 8-10 and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Sato et al. (US 20070001512) in view of OTA (WO 2023047505), and further in view of Chen (US 20040174253).
Regarding claim 8, the combination of Sato and OTA teaches the limitations recited in claim 1 as discussed above.
It is noted that Sato does not explicitly disclose selecting a supporter based on a location of the event, and transmitting the recorded images to a support terminal of the selected supporter.
Chen discloses selecting a supporter based on a location of the event, and transmitting the recorded images to a support terminal of the selected supporter (fig. 3; paragraph 0018, In case of a car accident, the transmitting unit (10) transfers immediate pictures of the location of the car (C) to the next police station or to a receiving unit (D) of the car owner, thus enabling them to take immediate appropriate measures).
It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to incorporate selecting a supporter based on a location of the event, and transmitting the recorded images to a support terminal of the selected supporter, as a modification to the method, for the benefit of obtaining help in a short period of time (paragraph 0018).
Regarding claim 9, the combination of Sato, OTA and Chen teaches the limitations recited in claim 8 as discussed above. In addition, Chen further discloses transmitting indoor or outdoor images from the vehicle to the support terminal of the selected supporter through streaming (fig. 3).
The motivation of combination is the same as in claim 8’s rejection.
Regarding claim 10, the combination of Sato and OTA teaches the limitations recited in claim 1 as discussed above.
It is noted that Sato does not explicitly disclose determining whether a request signal of an emergency rescue or an emergency call has been generated, and, in response to it being determined that the request signal has been generated, transmitting the request signal along with the recorded images.
Chen discloses determining whether a request signal of an emergency rescue or an emergency call has been generated, and, in response to it being determined that the request signal has been generated, transmitting the request signal along with the recorded images (fig. 3; paragraph 0018, In case of a car accident, the transmitting unit (10) transfers immediate pictures of the location of the car (C) to the next police station or to a receiving unit (D) of the car owner, thus enabling them to take immediate appropriate measures).
It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to incorporate determining whether a request signal of an emergency rescue or an emergency call has been generated, and transmitting the request signal along with the recorded images in response, as a modification to the method, for the benefit of obtaining help in a short period of time (paragraph 0018).
Regarding claim 15, the combination of Sato and OTA teaches the limitations recited in claim 11 as discussed above.
It is noted that Sato does not explicitly disclose generating a request signal of an emergency rescue or an emergency call in response to it being determined that it is an emergency based on the state of the occupant after the collision event.
Chen discloses generating a request signal of an emergency rescue or an emergency call in response to it being determined that it is an emergency based on the state of the occupant after the collision event (fig. 3; paragraph 0018, In case of a car accident, the transmitting unit (10) transfers immediate pictures of the location of the car (C) to the next police station or to a receiving unit (D) of the car owner, thus enabling them to take immediate appropriate measures).
It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to incorporate generating a request signal of an emergency rescue or an emergency call as a modification to the system, for the benefit of obtaining help in a short period of time (paragraph 0018).
Regarding claim 16, the combination of Sato, OTA and Chen teaches the limitations recited in claim 15 as discussed above. In addition, Sato further discloses transmitting the recorded images stored in the image storage module through the data communication module (fig. 1);
Chen further discloses transmitting the request signal of the emergency rescue or the emergency call through the data communication module (fig. 3; paragraph 0018, In case of a car accident, the transmitting unit (10) transfers immediate pictures of the location of the car (C) to the next police station or to a receiving unit (D) of the car owner, thus enabling them to take immediate appropriate measures).
The motivation of combination is the same as in claim 15’s rejection.
13. Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Sato et al. (US 20070001512) in view of OTA (WO 2023047505), further in view of Chen (US 20040174253), and further in view of WANG et al. (CN 108423003).
Regarding claim 17, the combination of Sato, OTA and Chen teaches the limitations recited in claim 15 as discussed above. In addition, Sato further discloses, after transmitting the recorded images, transmitting images acquired through the at least one camera to the data communication module via the image processing module (fig. 1).
It is noted that Sato does not explicitly disclose transmitting real-time images.
WANG discloses transmitting real-time images (fig. 1; page 12, photographing the current driver by a first camera to obtain a target picture, while starting to count the current driver’s driving duration in real time).
It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to incorporate transmitting real-time images as a modification to the system, for the benefit of sending images quickly (page 12).
Conclusion
14. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See form PTO-892.
15. Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZAIHAN JIANG, whose telephone number is (571) 272-1399. The examiner can normally be reached on a flexible schedule.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sath Perungavoor, can be reached at (571) 272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-270-0655.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ZAIHAN JIANG/Primary Examiner, Art Unit 2488