CTFR 18/538,223

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Amendments

Applicant’s Amendment filed on 3/10/2026 has been entered and made of record.
Currently pending claims: 1-4, 7-9, and 12-15
Independent claims: 1 and 7
Amended claims: 1, 4, 7, 12, and 14
Cancelled claims: 5-6 and 10-11
New claims: 15

Response to Arguments

This Office action is responsive to Applicant’s Arguments/Remarks Made in an Amendment received on 3/10/2026. Applicant’s arguments regarding the objections to the drawings, specification, and claims, the rejections under 35 U.S.C. §112(b) with respect to claims 4 and 14, and the rejections under 35 U.S.C. §101 with respect to claims 1-5, 7-10, and 14, see pages 8-9, filed on 3/10/2026, have been fully considered and are persuasive. The rejections under 35 U.S.C. §112(b) and 35 U.S.C. §101 have been withdrawn. The objections to the drawings, specification, and claims have also been withdrawn.

Applicant’s arguments regarding the rejections under 35 U.S.C. §103 with respect to claims 1-14, see pages 9-12, filed on 3/10/2026, have been fully considered and are not persuasive. Applicant argues on pages 10-11, stating:

“In other words, Costin simply describes serial communication of sensor data between two ECUs. As described in Costin, the primary ECU 220 does not process at least one portion of the sensor data using a driver monitoring function before transmitting the processed sensor data to the backup ECU 250, to enable the backup ECU 250 to provide a further monitoring function. Furthermore, Costin's backup ECU 250 does not receive the output of a driver monitoring function provided by the primary ECU 220 and does not use this output to provide a further monitoring function. 
As such, the combination of Nakamura and Costin does not describe a "first computing device configured to provide at least a first monitoring function of the system based on the image data, the first monitoring function including a driver monitoring function for monitoring a driver's state of attention... a further computing device, receiving the driver monitoring function output, configured to provide at least one further monitoring function of the system based on the driver monitoring function output... at least one portion of the image data processed using the driver monitoring function is provided by the first computing device to the further computing device, wherein the at least one portion of the image data processed using the driver monitoring function includes processed image data that are required for the further computing device to provide the further monitoring function," as recited in amended claim 1.”

The Examiner respectfully disagrees. Costin discloses ECUs 520 and 550 that are implemented with feedback from SoCs 522 and 552 to repeaters 529 and 559. This feedback allows the repeaters to transmit image data processed at the SoCs. The Examiner relies upon Nakamura to provide the driver monitoring function. Nakamura also provides the transmission of the monitoring function output, as the protection controller receives the output of the monitoring controller as described in ¶0124. Therefore, Nakamura in view of Costin teaches “…wherein the at least one portion of the image data processed using the driver monitoring function includes processed image data that are required for the further computing device to provide the further monitoring function”, as recited in amended claim 1. Please note that amended claim 1 now also includes the limitations recited in original claim 6, which was rejected using US 10,957,028 to Shibata. Therefore, claim 1 is now rejected accordingly. 
As detailed in the rejection below, the amended and newly added claims are rejected using the same references cited previously, as necessitated by amendment.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-4, 7-9, 12-13, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Nakamura et al. (US 2020/0094763) (hereafter, "Nakamura") in view of Shibata et al. (US 10,957,028) and further in view of Costin et al. (US 2019/0250611) (hereafter, “Costin”). 
Regarding claim 1 , Nakamura discloses a system configured to monitor a vehicle interior ( Abstract, An occupant monitoring device for a vehicle is configured to monitor an occupant sitting on a seat provided in the vehicle ), the system comprising: at least one interface for receiving image data captured with an image capturing device ( Figure 5, #31, #53; ¶0066, The optical unit 52 includes the onboard imaging device 53; ¶0070, determines the position and movement of the upper body of each occupant sitting on the corresponding seat 5 from a vehicle-interior image captured by the onboard imaging device 53. As the monitoring controller 31 makes determinations based on images collected by imaging device 53, Examiner considers this to indicate the presence of an “interface” ); [ a control device configured to control the image capturing device, wherein the image capturing device is configured in such a way that image data can be provided in at least two different contexts, and wherein the system is configured in such a way that a respective context can be specified using the control device ]; a first computing device configured to provide at least a first monitoring function of the system based on the image data ( ¶0070, the monitoring controller 31 … determines the position and movement of the upper body of each occupant sitting on the corresponding seat 5 from a vehicle-interior image captured by the onboard imaging device 53 ), the first monitoring function including a driver monitoring function for monitoring a driver's state of attention, including by capturing: (i) a body posture ( ¶0095, The monitoring controller 31 identifies the position and movement of the upper body of the occupant in the captured image. 
Examiner considers the position and movement of the upper body to comprise posture and considers this to fully disclose the elements of the limitation listed in the alternative ) or (ii) a viewing direction or (iii) a position of a head or of a face or of the driver's eyes, or (iv) an opening of the eyes, or (v) a blinking frequency of the driver, for drowsiness detection or distraction detection ( ¶0095, determines whether the occupant is dozing or driving inattentively . Examiner considers this to fully disclose the elements of the limitation listed in the alternative ) or detection of vital signs of the driver or gesture detection; a further computing device ( ¶0058, The automobile 1 may be provided with a plurality of ECUs 20. In this case, the plurality of ECUs 20 operate in cooperation with each other to function as a controller ), receiving the driver monitoring function output ( ¶0059, a protection controller 33 for occupants … functions of the controller realized in the ECU 20; ¶0124, In step ST21 of the occupant protection control in FIG. 9, the protection controller 33 determines whether the occupant behavior information output in step ST9 is acquired; ¶0125, If the occupant behavior information is acquired, the protection controller 33 causes the process to proceed to step ST22. Examiner considers protection controller 33 as an individual ECU in the embodiment where 20 is a plurality of ECUs and to receive the output of monitoring controller 31, which is also a separate ECU ), configured to provide at least one further monitoring function of the system based on the driver monitoring function output (¶0129, If the setting data 69 in the protection memory 61 is updated only in accordance with the occupant behavior information corresponding to a case resembling a minor collision, the protection controller 33 determines that the settings are to be changed in accordance with new occupant behavior information. 
Examiner considers changing the settings as a further monitoring function ); wherein the system is configured in such a way that the image data are provided via the interface to the first computing device ( Figure 5, #31, #52; ¶0070, determines the position and movement of the upper body of each occupant sitting on the corresponding seat 5 from a vehicle-interior image captured by the onboard imaging device 53. As there is a connection between monitoring controller 31 and optical unit 52, Examiner considers this to indicate that 31 is “receiving image data via the interface” ) [ and that at least one portion of the image data ] processed using the driver monitoring function ( ¶0095, The monitoring controller 31 identifies the position and movement of the upper body of the occupant in the captured image ) [ is provided by the first computing device to the further computing device, wherein the at least one portion of the image data ] processed using the driver monitoring function ( ¶0095, The monitoring controller 31 identifies the position and movement of the upper body of the occupant in the captured image ) [ includes processed image data that are required for the further computing device to provide the ] further monitoring function ( ¶0129, If the setting data 69 in the protection memory 61 is updated only in accordance with the occupant behavior information corresponding to a case resembling a minor collision, the protection controller 33 determines that the settings are to be changed in accordance with new occupant behavior information. Examiner considers changing the settings as a further monitoring function ). 
However, Nakamura fails to explicitly disclose a control device configured to control the image capturing device, wherein the image capturing device is configured in such a way that image data can be provided in at least two different contexts, and wherein the system is configured in such a way that a respective context can be specified using the control device and that at least one portion of the image data processed using the function is provided by the first computing device to the further computing device, wherein the at least one portion of the image data processed using the function includes processed image data that are required for the further computing device to provide the further function. Shibata teaches a control device configured to control the image capturing device ( Figure 2, #60, the control unit; Col. 4, line 33-34, Control unit 60 controls signal read-out from each pixel of image sensor 30 ), wherein the image capturing device is configured in such a way that image data can be provided in at least two different contexts ( Figure 5 diagram of different contexts; Col. 7 lines 25-28 When the travel scene is determined as traveling straight, control unit 60 sets, as the first partial region, segment 200 of image sensor 30 that includes a pixel receiving at least light from straight ahead (context 1 moving straight ahead); Col. 8, lines 25-28, When the travel scene is determined as steering right, control unit 60 sets, as the first partial region, at least one segment 200 including pixel 90 that receives light from a steering direction (the right side) of vehicle 1 (context 2 steering right) ), and wherein the system is configured in such a way that a respective context can be specified using the control device ( Col. 7 lines 25-28; Col. 8, lines 25-28, control unit 60 sets). 
Costin teaches that at least one portion of the image data processed using the function is provided by the first computing device to the further computing device ( ¶0064, System 500 is a block diagram of the system for sharing sensor data between multiple ECUs; ¶0065, System 500 also includes bridging communication links 578 and 579; ¶0067, Repeater 559 can produce a second aggregated sensor data using the previously aggregated sensor data and sensor data received from SoC 552 via data interface 561. System 500 describes the transfer of sensor data between two ECUs after the sensor data passes through the processor. Examiner considers this to imply image data processed by a function. When combined with the monitoring function of Nakamura, this would indicate image data processed by the driver monitoring function ), wherein the at least one portion of the image data processed using the function includes processed image data that are required for the further computing device to provide the further function ( ¶0039, If one of the SoCs fails for a variety of reasons, the other SoC can continue to operate; ¶0067, Repeater 559 can produce a second aggregated sensor data using the previously aggregated sensor data and sensor data received from SoC 552 via data interface 561. Since the SoCs can operate in case of the other’s failure, Examiner considers this to indicate that “data that are required” are transmitted to the second ECU. Examiner considers aggregated image data as “processed image data”. As Costin does not specify the function, Examiner relies on Nakamura to provide the “further monitoring function” ). Nakamura, Costin, and Shibata are all analogous to the claimed invention because they are in the field of vehicle controllers and sensors. 
It would have been obvious to a person of ordinary skill before the effective filing date of the claimed invention to incorporate the image controller from Shibata and the sensor data sharing system of Costin with the occupant monitoring system of Nakamura. The suggestion/motivation for incorporating Shibata would have been for the benefit of reducing the data transmission rate, as suggested by Shibata at Col. 5, lines 20-24, With the above-described configuration, it is possible to appropriately capture an image of a moving object. Furthermore, it is also possible to reduce the amount of data transmission (or the data transmission rate) of image data between imaging device 10 and ECU 12 . The suggestion/motivation for incorporating Costin would have been decreasing the number of links to each ECU, as suggested by Costin at ¶0042, An advantage provided by such a configuration is that the number of links required under such an implementation is reduced by a factor corresponding to the number of ECUs in the system relative to conventional solutions that connect each sensor to each ECU . This method of improving Nakamura was within the ordinary ability of one of ordinary skill in the art based on the teachings of Shibata and Costin. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Nakamura with the teachings of Shibata and Costin to obtain the invention as specified in claim 1. 
Regarding claim 2, in which claim 1 is incorporated, Nakamura discloses an image capturing device ( Figure 2, #53; ¶0067, the onboard imaging device 53 ), configured to capture the image data, configured to capture at least one portion of the vehicle interior ( ¶0067, captures an image of the entire upper bodies of the two occupants sitting on the pair of left and right seats ), wherein the image capturing device is configured to provide the captured image data via the interface to the first computing device ( Figure 5, #31, #52; ¶0070, determines the position and movement of the upper body of each occupant sitting on the corresponding seat 5 from a vehicle-interior image captured by the onboard imaging device 53. As there is a connection between monitoring controller 31 and optical unit 52, Examiner considers this to indicate that 31 is “receiving image data via the interface” ).

Regarding claim 3, Nakamura in view of Shibata and further in view of Costin discloses the system according to claim 2. However, Nakamura fails to explicitly disclose wherein the image capturing device includes a camera. Costin teaches wherein the image capturing device includes a camera ( Claim 16, line 6, a camera ). A camera is an obvious image capturing device to a person of ordinary skill in the art. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Nakamura with the teachings of Costin by having a camera to obtain the invention as specified in claim 3.

Regarding claim 4, in which claim 1 is incorporated, Nakamura discloses wherein the at least one further monitoring function of the further computing device ( ¶0058, lines 7-10, The automobile 1 may be provided with a plurality of ECUs 20. 
In this case, the plurality of ECUs 20 operate in cooperation with each other to function as a controller ), includes one of the following functions: a) an occupant monitoring function, for: (i) detection of vital signs of occupants, or (ii) gesture detection, or (iii) detection of activities in the vehicle interior, or (iv) detection of situations in the vehicle interior ( ¶0134, If the protection controller 33 acquires, for example, the occupant behavior information corresponding to a case resembling a major collision, the protection controller 33 determines that the acquired behavior of either one of the upper body and the head of the occupant is the behavior during a collision. Examiner considers the occupant behavior during a collision as a “situation in the vehicle interior” and considers this to disclose all elements of the claim recited in the alternative ), or (v) detection of the presence/absence of occupants in the vehicle interior, or (vi) detection of the presence/absence of objects in the vehicle interior; b) a videotelephony function; c) face detection function; or d) an intrusion detection in the vehicle interior. Regarding claim 7 , Nakamura discloses a method for operating a system for monitoring a vehicle interior, the system being configured to monitor the vehicle interior ( Abstract, An occupant monitoring device for a vehicle is configured to monitor an occupant sitting on a seat provided in the vehicle ), the system including: at least one interface for receiving image data captured with an image capturing device ( Figure 5, #31, #53; ¶0066, The optical unit 52 includes the onboard imaging device 53; ¶0070, determines the position and movement of the upper body of each occupant sitting on the corresponding seat 5 from a vehicle-interior image captured by the onboard imaging device 53. 
As the monitoring controller 31 makes determinations based on images collected by imaging device 53, Examiner considers this to indicate the presence of an “interface” ), [ a control device configured to control the image capturing device ], a first computing device configured to provide at least a first monitoring function of the system based on the image data ( ¶0070, the monitoring controller 31 … determines the position and movement of the upper body of each occupant sitting on the corresponding seat 5 from a vehicle-interior image captured by the onboard imaging device 53 ), the first monitoring function including a driver monitoring function for monitoring a driver's state of attention, including by capturing: (i) a body posture ( ¶0095, The monitoring controller 31 identifies the position and movement of the upper body of the occupant in the captured image. Examiner considers the position and movement of the upper body to comprise posture and considers this to fully disclose the elements of the limitation listed in the alternative ) or (ii) a viewing direction or (iii) a position of a head or of a face or of the driver's eyes, or (iv) an opening of the eyes, or (v) a blinking frequency of the driver, for drowsiness detection or distraction detection ( ¶0095, determines whether the occupant is dozing or driving inattentively . Examiner considers this to fully disclose the elements of the limitation listed in the alternative ) or detection of vital signs of the driver or gesture detection, a further computing device ( ¶0058, lines 7-10, The automobile 1 may be provided with a plurality of ECUs 20. In this case, the plurality of ECUs 20 operate in cooperation with each other to function as a controller ) configured to receive the driver monitoring function output ( ¶0059, a protection controller 33 for occupants … functions of the controller realized in the ECU 20; ¶0124, In step ST21 of the occupant protection control in FIG. 
9, the protection controller 33 determines whether the occupant behavior information output in step ST9 is acquired; ¶0125, If the occupant behavior information is acquired, the protection controller 33 causes the process to proceed to step ST22. Examiner considers protection controller 33 as an individual ECU in the embodiment where 20 is a plurality of ECUs and to receive the output of monitoring controller 31, which is also a separate ECU ) and provide at least one further monitoring function of the system based on the driver monitoring function output (¶0129, If the setting data 69 in the protection memory 61 is updated only in accordance with the occupant behavior information corresponding to a case resembling a minor collision, the protection controller 33 determines that the settings are to be changed in accordance with new occupant behavior information. Examiner considers changing the settings as a further monitoring function ); the method comprising the following steps: [ specifying a context for providing the image data, using the control device, providing the image data according to the specified context ] via the interface to the first computing device using the image capturing device ( Figure 5, #31, #52; ¶0070, determines the position and movement of the upper body of each occupant sitting on the corresponding seat 5 from a vehicle-interior image captured by the onboard imaging device 53. 
As there is a connection between monitoring controller 31 and optical unit 52, Examiner considers this to indicate that 31 is “receiving image data via the interface” ); [ and providing at least one portion of the image data ] processed using the driver monitoring function ( ¶0095, The monitoring controller 31 identifies the position and movement of the upper body of the occupant in the captured image ) [ to the further computing device using the first computing device, wherein the at least one portion of the image data ] processed using the driver monitoring function ( ¶0095, The monitoring controller 31 identifies the position and movement of the upper body of the occupant in the captured image ) [ includes processed image data that are required for the further computing device to provide the ] further monitoring function ( ¶0129, If the setting data 69 in the protection memory 61 is updated only in accordance with the occupant behavior information corresponding to a case resembling a minor collision, the protection controller 33 determines that the settings are to be changed in accordance with new occupant behavior information. Examiner considers changing the settings as a further monitoring function ).

However, Nakamura fails to explicitly disclose a control device configured to control the image capturing device; and specifying a context for providing the image data, using the control device, providing the image data according to the specified context; and providing at least one portion of the image data processed using the function to the further computing device using the first computing device, wherein the at least one portion of the image data processed using the function includes processed image data that are required for the further computing device to provide the further function. Shibata teaches a control device configured to control the image capturing device ( Figure 2, #60, the control unit; Col. 
4, lines 33-34, Control unit 60 controls signal read-out from each pixel of image sensor 30 ); and specifying a context for providing the image data ( Figure 5 diagram of different contexts; Col. 7, lines 25-28, When the travel scene is determined as traveling straight, control unit 60 sets, as the first partial region, segment 200 of image sensor 30 that includes a pixel receiving at least light from straight ahead (context 1 moving straight ahead); Col. 8, lines 25-28, When the travel scene is determined as steering right, control unit 60 sets, as the first partial region, at least one segment 200 including pixel 90 that receives light from a steering direction (the right side) of vehicle 1 (context 2 steering right) ), using the control device, providing the image data according to the specified context ( Col. 7, lines 25-28; Col. 8, lines 25-28, control unit 60 sets ). Costin teaches providing at least one portion of the image data processed using the function to the further computing device using the first computing device ( ¶0064, System 500 is a block diagram of the system for sharing sensor data between multiple ECUs; ¶0065, System 500 also includes bridging communication links 578 and 579; ¶0067, Repeater 559 can produce a second aggregated sensor data using the previously aggregated sensor data and sensor data received from SoC 552 via data interface 561. System 500 describes the transfer of sensor data between two ECUs after the sensor data passes through the processor. Examiner considers this to imply image data processed by a function. 
When combined with the monitoring function of Nakamura, this would indicate image data processed by the driver monitoring function ), wherein the at least one portion of the image data processed using the function includes processed image data that are required for the further computing device to provide the further function ( ¶0039, If one of the SoCs fails for a variety of reasons, the other SoC can continue to operate; Repeater 559 can produce a second aggregated sensor data using the previously aggregated sensor data and sensor data received from SoC 552 via data interface 561. Since the SoCs can operate in case of the other’s failure, Examiner considers this to indicate that “data that are required” are transmitted to the second ECU. As Costin does not specify the function, Examiner relies on Nakamura to provide the “further monitoring function” ). Nakamura, Costin, and Shibata are all analogous to the claimed invention because they are in the field of vehicle controllers and sensors. It would have been obvious to a person of ordinary skill before the effective filing date of the claimed invention to incorporate the image controller from Shibata and the sensor data sharing system of Costin with the occupant monitoring system of Nakamura. The suggestion/motivation for incorporating Shibata would have been for the benefit of reducing the data transmission rate, as suggested by Shibata at Col. 5, lines 20-24, With the above-described configuration, it is possible to appropriately capture an image of a moving object. Furthermore, it is also possible to reduce the amount of data transmission (or the data transmission rate) of image data between imaging device 10 and ECU 12 . 
The suggestion/motivation for incorporating Costin would have been decreasing the number of links to each ECU, as suggested by Costin at ¶0042, An advantage provided by such a configuration is that the number of links required under such an implementation is reduced by a factor corresponding to the number of ECUs in the system relative to conventional solutions that connect each sensor to each ECU. This method of improving Nakamura was within the ordinary ability of one of ordinary skill in the art based on the teachings of Shibata and Costin. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Nakamura with the teachings of Shibata and Costin to obtain the invention as specified in claim 7.

Regarding claim 8, in which claim 7 is incorporated, Nakamura discloses wherein the method further comprises: capturing image data using the image capturing device ( Figure 2, #53; ¶0067, the onboard imaging device 53 ), for capturing at least one portion of the vehicle interior ( ¶0067, captures an image of the entire upper bodies of the two occupants sitting on the pair of left and right seats ).

Regarding claim 9, Nakamura in view of Shibata and further in view of Costin discloses the method according to claim 8. However, Nakamura fails to explicitly disclose wherein the image capturing device includes a camera. Costin teaches wherein the image capturing device includes a camera ( Claim 16, line 6, a camera ). A camera is an obvious image capturing device to a person of ordinary skill in the art. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Nakamura with the teachings of Costin by having a camera to obtain the invention as specified in claim 9.

Regarding claim 12, Nakamura in view of Shibata and further in view of Costin discloses the method according to claim 7. 
However, Nakamura fails to explicitly disclose wherein the first monitoring function of the first computing device and the at least one further monitoring function of the further computing device are performed at least temporarily simultaneously, and the image capturing device at least temporarily provides image data in a first context as a function of the first monitoring function, wherein, in the first context, the captured image data are output according to a first output format, and the image capturing device at least temporarily provides image data in at least one further context as a function of the further monitoring function, wherein, in the further context, the captured image data are output according to a further output format. Shibata teaches the image capturing device at least temporarily provides image data in a first context as a function of the first monitoring function, wherein, in the first context, the captured image data are output according to a first output format ( Col. 7, lines 25-28, When the travel scene is determined as traveling straight, control unit 60 sets, as the first partial region, segment 200 of image sensor 30 that includes a pixel receiving at least light from straight ahead. Examiner considers the travel scene a context and the pixel format an output format ), and the image capturing device at least temporarily provides image data in at least one further context as a function of the further monitoring function, wherein, in the further context, the captured image data are output according to a further output format ( Col. 8, lines 25-28, When the travel scene is determined as steering right, control unit 60 sets, as the first partial region, at least one segment 200 including pixel 90 that receives light from a steering direction (the right side) of vehicle 1 ). 
Costin teaches wherein the first monitoring function of the first computing device and the at least one further monitoring function of the further computing device are performed at least temporarily simultaneously ( Figure 9, #910, 930; ¶0080, Method 900 starts at a step 901 and proceeds in serial or parallel processing to steps 910 and/or 930 in various orders or combinations ). Nakamura, Costin, and Shibata are all analogous to the claimed invention because they are in the field of vehicle controllers and sensors. It would have been obvious to a person of ordinary skill before the effective filing date of the claimed invention to incorporate the image controller from Shibata and the sensor data sharing system of Costin with the occupant monitoring system of Nakamura. The suggestion/motivation for incorporating Shibata would have been for the benefit of reducing the data transmission rate, as suggested by Shibata at Col. 5, lines 20-24, With the above-described configuration, it is possible to appropriately capture an image of a moving object. Furthermore, it is also possible to reduce the amount of data transmission (or the data transmission rate) of image data between imaging device 10 and ECU 12 . The suggestion/motivation for incorporating Costin would have been decreasing the number of links to each ECU, as suggested by Costin at ¶0042, An advantage provided by such a configuration is that the number of links required under such an implementation is reduced by a factor corresponding to the number of ECUs in the system relative to conventional solutions that connect each sensor to each ECU . This method of improving Nakamura was within the ordinary ability of one of ordinary skill in the art based on the teachings of Shibata and Costin. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Nakamura with the teachings of Shibata and Costin to obtain the invention as specified in claim 12. 
Regarding claim 13, Nakamura in view of Shibata further in view of Costin discloses the method according to claim 12. However, Nakamura fails to explicitly disclose wherein switching between the first context and the at least one further context takes place according to a pattern that can be specified as a function of the monitoring functions to be performed. Shibata teaches wherein switching between the first context and the at least one further context takes place according to a pattern that can be specified as a function of the monitoring functions to be performed (Col. 15, lines 6-8: "Sensor control unit 76 changes the sensing method for active sensor 16 according to the result of object detection by object detection unit 74"). Nakamura, Costin, and Shibata are all analogous to the claimed invention because they are in the field of vehicle controllers and sensors. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the image controller from Shibata and the sensor data sharing system of Costin with the occupant monitoring system of Nakamura. The suggestion/motivation for incorporating Shibata would have been the benefit of reducing the data transmission rate, as suggested by Shibata at Col. 5, lines 20-24: "With the above-described configuration, it is possible to appropriately capture an image of a moving object. Furthermore, it is also possible to reduce the amount of data transmission (or the data transmission rate) of image data between imaging device 10 and ECU 12." This method of improving Nakamura was within the ordinary ability of one of ordinary skill in the art based on the teachings of Shibata and Costin. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Nakamura with the teachings of Shibata and Costin to obtain the invention as specified in claim 13.
Regarding claim 15, in which claim 1 is incorporated, Nakamura discloses wherein the at least one further monitoring function of the system (¶0129: "If the setting data 69 in the protection memory 61 is updated only in accordance with the occupant behavior information corresponding to a case resembling a minor collision, the protection controller 33 determines that the settings are to be changed in accordance with new occupant behavior information") is different from the driver monitoring function (¶0070: "the monitoring controller 31 … determines the position and movement of the upper body of each occupant sitting on the corresponding seat 5 from a vehicle-interior image captured by the onboard imaging device 53." The protection controller determines the collision type and changes settings, while the monitoring controller determines the occupant's position and movements). Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Nakamura et al. (US 2020/0094763) (hereafter, "Nakamura") in view of Shibata et al. (US 10,957,028) and further in view of Costin et al. (US 2019/0250611) (hereafter, "Costin") as applied to claims 1-4, 7-9, 12-13, and 15 above, and further in view of Murao et al. (US 2020/0029052) (hereafter, "Murao"). Regarding claim 14, Nakamura in view of Shibata further in view of Costin discloses the method according to claim 7. However, Nakamura fails to explicitly disclose wherein at least one encryption method or authentication method is provided for securing communications between the first computing device and the further computing device. Murao teaches wherein at least one encryption method or authentication method is provided for securing communications between the first computing device and the further computing device (¶0072: "Further, by either encrypting an image or generating a signature for the image in the image branch circuit 125 and/or the compression circuit 127").
Nakamura, Shibata, Costin, and Murao are all analogous to the claimed invention because they are in the field of vehicle controllers and sensors. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the encryption method from Murao with the image controller from Shibata, the sensor data sharing system of Costin, and the occupant monitoring system of Nakamura. The suggestion/motivation for doing so would have been the benefit of guaranteeing the authenticity of the output image, as suggested by Murao at ¶0072, lines 15-16: "authenticity of the output image can be guaranteed." This method of improving the occupant monitoring system of Nakamura was within the ordinary ability of one of ordinary skill in the art based on the teachings of Shibata, Costin, and Murao. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Nakamura, Shibata, and Costin with the teachings of Murao to obtain the invention as specified in claim 14. Conclusion THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Tahara et al. (US 2021/0034887) discloses a passenger state detection device that performs monitoring functions at a first ECU and sends image data to a second ECU (¶0048: "The first control unit 10 has a function of appropriately outputting, to each unit in a second control unit 60, each of the captured image data, the face image data, the face part area data, the structure image data, and information indicating the feature amount detected by the feature amount detecting unit 15"). Mobbs (US 2022/0417472) discloses a system with two ECUs providing separate ADAS/infotainment functions with image data transfer from the first ECU to the second ECU (¶0118: "In addition to sending the sensor data to the AD SoC 828, the first deserializer 816A may send the sensor data to the IVI ECU 838 along the second path 818B"). Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIAOMAO DING, whose telephone number is (571) 272-7237. The examiner can normally be reached Mon-Fri 8:00-4:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Henok Shiferaw, can be reached at (571) 272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /XIAOMAO DING/ Examiner, Art Unit 2676 /Henok Shiferaw/ Supervisory Patent Examiner, Art Unit 2676 Application/Control Number: 18/538,223 Page 2 Art Unit: 2676