DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Applicant(s) Response to Official Action
The response filed on 11/10/2025 has been entered and made of record.
Response to Arguments/Amendments
Presented arguments have been fully considered, but are rendered moot in view of the new ground(s) of rejection necessitated by amendment(s) initiated by the applicant(s).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 12-13, and 34 are rejected under 35 U.S.C. 103 as being unpatentable over Okada (US 2020/0412944 A1) in view of Aoki (JP-2016118995-A).
As per claim 1, Okada discloses an occupant information acquisition apparatus (Okada: Abstract) comprising:
at least one memory configured to store program code (Okada: Para. [0022] discloses the functional blocks depicted are implemented in hardware such as devices and mechanical apparatus exemplified by a CPU and a memory of a computer, and in software such as a computer program.);
at least one processor configured to operate as instructed by the program code (Okada: Para. [0022] discloses the functional blocks depicted are implemented in hardware such as devices and mechanical apparatus exemplified by a CPU and a memory of a computer, and in software such as a computer program.), the program code including:
riding information acquisition code configured to cause at least one of the at least one processor to acquire riding information of an occupant (82) on a rear seat (76) of a vehicle from a sensor (seating sensor) configured to detect the occupant (Okada: Figs. 1, 3 & Paras. [0017], [0023], [0028] disclose processor 18 using passenger detector 32 to acquire the seat information of an occupant 82 via seating sensor that detects the occupant in the rear seat 76.);
imaging condition control code configured to cause at least one of the at least one processor to activate, in an imaging device (10), a second imaging condition different from a first imaging condition for capturing image data of an occupant (80) on a driver's seat (74) and cause the imaging device to capture image data of the occupant (82) on the rear seat (Okada: Figs. 1, 3 & Paras. [0017], [0028]-[0029], [0042], [0044] disclose processor 18 using imaging condition determiner 34 and imaging controller 14 to activate in drive recorder 10, a second imaging condition different from a first imaging condition for capturing image data of an occupant 80 on a driver's seat 74 and cause the drive recorder 10 to capture image data of the occupant 82 on the rear seat 76 via camera 42.); and
image data acquisition code configured to cause at least one of the at least one processor to acquire the image data from the first and second imaging conditions captured by the imaging device (Okada: Fig. 3 & Para. [0023] disclose processor 18 acquiring the image data pertinent to the first and second imaging conditions via camera 42.),
wherein the imaging device (10) includes:
imaging unit (42) disposed in front of the driver's seat (Okada: Fig. 3 & Para. [0017] disclose the camera 42 can be mounted at a position of a rear-view mirror of the vehicle 70.), and
wherein the imaging condition control code is further configured to cause at least one of the at least one processor to activate the second imaging condition (Okada: Figs. 1, 3 & Paras. [0017], [0028]-[0029], [0042], [0044] disclose processor 18 using imaging condition determiner 34 and imaging controller 14 to activate the second imaging condition (which is different from imaging the occupant on the driver's seat in the first imaging condition) to control camera 42 via drive recorder 10.).
However, Okada does not explicitly disclose “… including a second angle of view … which is different from a first angle of view when imaging…”.
Further, Aoki is in the same field of endeavor and teaches including a second angle of view which is different from a first angle of view when imaging (Aoki: Paras. [0032]-[0033], [0036] disclose an in-vehicle camera 2 is a pan/tilt/zoom camera that can change its angle of view [includes a second angle of view which is different from a first angle of view].).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, and having the teachings of Okada and Aoki before him or her, to modify the camera system of Okada to include the different camera angles feature as described in Aoki. The motivation for doing so would have been to improve accurate image capturing by providing a configuration that enables the camera to monitor a larger area.
As per claim 2, Okada-Aoki disclose the occupant information acquisition apparatus according to claim 1, further comprising occupant determination code configured to cause at least one of the at least one processor to determine whether the riding information acquisition unit has acquired the riding information (Okada: Para. [0028] discloses the passenger detector 32 may detect whether or not a passenger is seated and the seating position of the passenger based on the image data acquired by the image acquisition interface 22),
wherein the imaging condition control code is further configured to cause at least one of the at least one processor to activate the second imaging condition in the imaging device in a case where the occupant determination code causes at least one of the at least one processor to determine that the riding information has been acquired by the riding information acquisition unit (Okada: Figs. 1, 3 & Paras. [0017], [0028]-[0029] disclose processor 18 using imaging condition determiner 34 and imaging controller 14 to activate in drive recorder 10, the second imaging condition after the riding information has been acquired.).
As per claim 12, Okada discloses an occupant information acquisition system (Okada: Abstract) comprising:
a sensor (seating sensor) configured to detect an occupant on a rear seat (76) of a vehicle (Okada: Figs. 1, 3 & Paras. [0017], [0023], [0028] disclose processor 18 using passenger detector 32 to acquire the seat information of an occupant 82 via seating sensor that detects the occupant in the rear seat 76.);
an imaging device (10) configured to capture image data of the occupant in the vehicle (Okada: Fig. 3 & Para. [0017] disclose the drive recorder 10 configured to capture image data of the occupant in the vehicle 70 via camera 42.); and
an occupant information acquisition apparatus (20) connected to the sensor (seating sensor) and the imaging device (10) via a communication line (Okada: Fig. 3 & Paras. [0023]-[0024], [0028] disclose the vehicle information acquisition interface 20 may acquire these items of information via a controller area network (CAN) of the vehicle 70.), wherein the occupant information acquisition apparatus (20) includes:
at least one memory configured to store program code (Okada: Para. [0022] discloses the functional blocks depicted are implemented in hardware such as devices and mechanical apparatus exemplified by a CPU and a memory of a computer, and in software such as a computer program.);
at least one processor configured to operate as instructed by the program code, the program code including (Okada: Para. [0022] discloses the functional blocks depicted are implemented in hardware such as devices and mechanical apparatus exemplified by a CPU and a memory of a computer, and in software such as a computer program.):
riding information acquisition code configured to cause at least one of the at least one processor to acquire riding information of the occupant from the sensor (Okada: Figs. 1, 3 & Paras. [0017], [0023], [0028] disclose processor 18 using passenger detector 32 to acquire the seat information of an occupant 82 via seating sensor that detects the occupant in the rear seat 76.),
imaging condition control code configured to cause at least one of the at least one processor to activate, in the imaging device, a second imaging condition different from a first imaging condition for capturing image data of an occupant on a driver's seat and cause the imaging device to capture image data of the occupant on the rear seat (Okada: Figs. 1, 3 & Paras. [0017], [0028]-[0029], [0042], [0044] disclose processor 18 using imaging condition determiner 34 and imaging controller 14 to activate in drive recorder 10, a second imaging condition different from a first imaging condition for capturing image data of an occupant 80 on a driver's seat 74 and cause the drive recorder 10 to capture image data of the occupant 82 on the rear seat 76 via camera 42.), and
image data acquisition code configured to cause at least one of the at least one processor to acquire the image data from the first and second imaging conditions captured by the imaging device (Okada: Fig. 3 & Para. [0023] disclose processor 18 acquiring the image data pertinent to the first and second imaging conditions via camera 42.),
wherein the imaging device (10) includes:
imaging unit (42) disposed in front of the driver's seat (Okada: Fig. 3 & Para. [0017] disclose the camera 42 can be mounted at a position of a rear-view mirror of the vehicle 70.), and
wherein the imaging condition control code is further configured to cause at least one of the at least one processor to activate the second imaging condition (Okada: Figs. 1, 3 & Paras. [0017], [0028]-[0029], [0042], [0044] disclose processor 18 using imaging condition determiner 34 and imaging controller 14 to activate the second imaging condition (which is different from imaging the occupant on the driver's seat in the first imaging condition) to control camera 42 via drive recorder 10.).
However, Okada does not explicitly disclose “… including a second angle of view … which is different from a first angle of view when imaging…”.
Further, Aoki is in the same field of endeavor and teaches including a second angle of view which is different from a first angle of view when imaging (Aoki: Paras. [0032]-[0033], [0036] disclose an in-vehicle camera 2 is a pan/tilt/zoom camera that can change its angle of view [includes a second angle of view which is different from a first angle of view].).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, and having the teachings of Okada and Aoki before him or her, to modify the camera system of Okada to include the different camera angles feature as described in Aoki. The motivation for doing so would have been to improve accurate image capturing by providing a configuration that enables the camera to monitor a larger area.
As per claim 13, the claim recites analogous limitations to claim 2 above, and is therefore rejected on the same premise.
As per claim 34, the claim recites analogous limitations to claims 1 & 12 above, and is therefore rejected on the same premise.
Claims 3-6 and 14-17 are rejected under 35 U.S.C. 103 as being unpatentable over Okada in view of Aoki, and further in view of Lintz et al., hereinafter referred to as Lintz (US 2019/0168669 A1).
As per claim 3, Okada-Aoki disclose the occupant information acquisition apparatus according to claim 1, wherein
the imaging device (10) includes
wherein the imaging condition control code is further configured to cause at least one of the at least one processor to activate the second imaging condition in at least one of the imaging unit and the light source (Okada: Figs. 1, 3 & Paras. [0017], [0028]-[0029] disclose processor 18 using imaging condition determiner 34 and imaging controller 14 to activate the second imaging condition to control camera 42 via drive recorder 10.).
However, Okada-Aoki do not explicitly disclose “… a light source disposed in front of the driver's seat …”.
Further, Lintz is in the same field of endeavor and teaches a light source disposed in front of the driver's seat (Lintz: Figs. 7-8 & Paras. [0064]-[0066], [0071] disclose a light source 218 or 216 disposed in front of the driver's seat at the rear-view assembly 10.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, and having the teachings of Okada-Aoki and Lintz before him or her, to modify the in-vehicle camera system of Okada-Aoki to include the light source feature as described in Lintz. The motivation for doing so would have been to improve image quality by providing a configuration that enables activation of additional components in situations in which there is insufficient light.
As per claim 4, Okada-Aoki disclose the occupant information acquisition apparatus according to claim 3, wherein the imaging condition control code is further configured to cause at least one of the at least one processor to cause the imaging device to capture the image data of the occupant on the rear seat under the second imaging condition before or after causing the imaging device to capture the image data of the occupant on the driver's seat under the first imaging condition (Okada: Figs. 1, 3 & Paras. [0017], [0028]-[0029] disclose processor 18 using imaging condition determiner 34 and imaging controller 14 to control camera 42 to capture the image data of the occupant on the rear seat under the second imaging condition after causing the imaging device to capture the image data of the occupant on the driver's seat under the first imaging condition.).
As per claim 5, Okada-Aoki disclose the occupant information acquisition apparatus according to claim 1, wherein
the imaging device (10) includes
first imaging unit (42) disposed in front of the driver's seat (Okada: Fig. 1),
at least one of second imaging unit (vehicle mounted camera 40) different from the first imaging unit (42) (Okada: Figs. 1, 3 & Paras. [0017], [0024], [0028]-[0029] disclose at least one of second imaging unit vehicle mounted camera 40 different from the first imaging unit 42 and processor 18 using imaging condition determiner 34 and imaging controller 14 to control/activate camera 42 to capture the image data of the occupant on the rear seat under the second imaging condition.).
However, Okada-Aoki do not explicitly disclose “… a first light source disposed in front of the driver's seat … and a second light source different from the first light source …”.
Further, Lintz is in the same field of endeavor and teaches a first light source disposed in front of the driver's seat and a second light source different from the first light source (Lintz: Figs. 7-8 & Paras. [0064]-[0066], [0071] disclose light sources 218 and 216 disposed in front of the driver's seat at the rear-view assembly 10.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, and having the teachings of Okada-Aoki and Lintz before him or her, to modify the in-vehicle camera system of Okada-Aoki to include the first and second light sources feature as described in Lintz. The motivation for doing so would have been to improve image quality by providing a configuration that enables activation of additional components in situations in which there is insufficient light.
As per claim 6, Okada-Aoki disclose the occupant information acquisition apparatus according to claim 5, wherein
the imaging condition control code is further configured to cause at least one of the at least one processor to cause the imaging device to capture the image data of the occupant on the rear seat under the second imaging condition simultaneously with causing the imaging device to capture the image data of the occupant on the driver's seat under the first imaging condition (Okada: Figs. 1, 3 & Paras. [0017], [0028]-[0029] disclose processor 18 using imaging condition determiner 34 and imaging controller 14 to control camera 42 to capture the image data of the occupant on the rear seat under the second imaging condition after causing the imaging device to capture the image data of the occupant on the driver's seat under the first imaging condition.).
As per claims 14-17, the claims recite analogous limitations to claims 3-6 above, and are therefore rejected on the same premise.
Claims 7-9 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Okada in view of Aoki, and further in view of Keishun et al., hereinafter referred to as Keishun (JP-2016137856-A).
As per claim 7, Okada-Aoki disclose the occupant information acquisition apparatus according to claim 1, wherein the sensor detects the occupant on the rear seat (Okada: Para. [0028] discloses the sensor detects the occupant on the rear seat.).
However, Okada-Aoki do not explicitly disclose “… from opening and closing of a door of the rear seat.”.
Further, Keishun is in the same field of endeavor and teaches a sensor detecting the occupant on the rear seat from opening and closing of a door of the rear seat (Keishun: Paras. [0022]-[0023] disclose a sensor 23 detecting the occupant on the rear seat from opening and closing of a door of the rear seat.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, and having the teachings of Okada-Aoki and Keishun before him or her, to modify the vehicle seat detection configuration of Okada-Aoki to include the door sensor feature as described in Keishun. The motivation for doing so would have been to improve occupant presence detection by providing a multi-faceted seat detection configuration.
As per claim 8, Okada-Aoki disclose the occupant information acquisition apparatus according to claim 1, wherein the sensor detects the occupant on the rear seat (Okada: Para. [0028] discloses the sensor detects the occupant on the rear seat.).
However, Okada-Aoki do not explicitly disclose “… from opening and closing of fastener of a seat belt of the rear seat.”.
Further, Keishun is in the same field of endeavor and teaches from opening and closing of fastener of a seat belt of the rear seat (Keishun: Paras. [0022]-[0023], [0027] disclose a seat belt sensor 21 detecting the occupant.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, and having the teachings of Okada-Aoki and Keishun before him or her, to modify the vehicle seat detection configuration of Okada-Aoki to include the seat belt sensor feature as described in Keishun. The motivation for doing so would have been to improve occupant presence detection by providing a multi-faceted seat detection configuration.
As per claim 9, Okada-Aoki disclose the occupant information acquisition apparatus according to claim 1, wherein the sensor detects the occupant on the rear seat (Okada: Para. [0028] discloses the sensor detects the occupant on the rear seat.).
However, Okada-Aoki do not explicitly disclose “… from a weight applied to a seating portion of the rear seat.”.
Further, Keishun is in the same field of endeavor and teaches from a weight applied to a seating portion of the rear seat (Keishun: Paras. [0022]-[0023] disclose a sensor 20 detecting the occupant via weight detection.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, and having the teachings of Okada-Aoki and Keishun before him or her, to modify the vehicle seat detection configuration of Okada-Aoki to include the seat weight sensor feature as described in Keishun. The motivation for doing so would have been to improve occupant presence detection by providing a multi-faceted seat detection configuration.
As per claims 18-19, the claims recite analogous limitations to claims 7-8 above, and are therefore rejected on the same premise.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Okada in view of Aoki, and further in view of Nishiyama et al., hereinafter referred to as Nishiyama (US 2021/0206384 A1).
As per claim 10, Okada-Aoki disclose the occupant information acquisition apparatus according to claim 1, wherein the sensor includes the imaging device (10) configured to detect the occupant on the rear seat (Okada: Figs. 1, 3 & Para. [0028] disclose the sensor detects the occupant on the rear seat.).
However, Okada-Aoki do not explicitly disclose “… from movement of a moving object on the rear seat.”.
Further, Nishiyama is in the same field of endeavor and teaches detecting an occupant on a rear seat from movement of a moving object on the rear seat (Nishiyama: Paras. [0016], [0019] disclose the motion detecting device 14 is composed of a non-contact sensor, such as a distance sensor or a camera that captures an image inside the vehicle, and an electronic circuit installed with a program for analyzing a detection signal detected by the sensor. The motion detecting device 14 detects the motion of an occupant.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, and having the teachings of Okada-Aoki and Nishiyama before him or her, to modify the vehicle seat detection configuration of Okada-Aoki to include the moving object detection feature as described in Nishiyama. The motivation for doing so would have been to improve occupant user experience by providing automated vehicle conditions in response to the type of motion detected.
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Okada in view of Aoki, and further in view of Tsuchiya (US 2020/0398699 A1).
As per claim 11, Okada-Aoki disclose the occupant information acquisition apparatus according to claim 1 (Okada: Abstract).
However, Okada-Aoki do not explicitly disclose “… further comprising occupant specifying code configured to cause at least one of the at least one processor to cause a face authentication apparatus to perform face authentication by using the acquired image data and specify the occupant.”.
Further, Tsuchiya is in the same field of endeavor and teaches occupant specifying code configured to cause at least one of the at least one processor to cause a face authentication apparatus (100a) to perform face authentication by using the acquired image data and specify the occupant (Tsuchiya: Paras. [0064]-[0067] disclose a personal authentication unit 15 configured to cause a face authentication apparatus 100a to perform face authentication by using the acquired image data and authenticate the occupant.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, and having the teachings of Okada-Aoki and Tsuchiya before him or her, to modify the in-vehicle camera system of Okada-Aoki to include the face authentication feature as described in Tsuchiya. The motivation for doing so would have been to improve safety protocols by providing a configuration that enables multi-occupant detection when occupants are placed in the same seat.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure and can be viewed in the list of references.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PEET DHILLON whose telephone number is (571)270-5647. The examiner can normally be reached M-F: 5am-1:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sath V. Perungavoor can be reached at 571-272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PEET DHILLON/Primary Examiner
Art Unit: 2488
Date: 01-18-2026