Prosecution Insights
Last updated: April 19, 2026
Application No. 18/497,045

SYSTEM FOR TRAINING AND VALIDATING VEHICULAR OCCUPANT MONITORING SYSTEM

Status: Final Rejection (§103)
Filed: Oct 30, 2023
Examiner: CROCKETT, JOSHUA BRIGHAM
Art Unit: 2661
Tech Center: 2600 — Communications
Assignee: Magna Electronics Inc.
OA Round: 2 (Final)

Grant Probability: 72% (Favorable)
Expected OA Rounds: 3-4
Expected Time to Grant: 3y 0m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 72% (13 granted / 18 resolved), +10.2% vs TC avg (above average)
Interview Lift: +27.5% (allowance rate of resolved cases with an interview vs. without; a strong lift)
Avg Prosecution: 3y 0m typical timeline; 26 applications currently pending
Total Applications: 44 across all art units
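The headline figures above can be reproduced from the raw counts in the report. The sketch below is illustrative only: the Tech Center baseline of roughly 62% is inferred from the +10.2% delta, not stated directly in the source.

```python
# Reproduce the examiner dashboard figures from the raw counts above.
granted = 13
resolved = 18

career_allow_rate = granted / resolved  # 13/18, reported as 72%
print(f"Career allow rate: {career_allow_rate:.1%}")  # ~72.2%

# The +10.2% delta implies a Tech Center average near 62%
# (inferred from the report, not stated directly).
tc_average = career_allow_rate - 0.102
print(f"Implied TC average: {tc_average:.1%}")
```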

Statute-Specific Performance

§101: 6.0% (-34.0% vs TC avg)
§103: 47.5% (+7.5% vs TC avg)
§102: 10.1% (-29.9% vs TC avg)
§112: 35.1% (-4.9% vs TC avg)

TC avg = Tech Center average estimate. Based on career data from 18 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Claims 1, 19, and 24 are amended. Claims 1-27 are pending in this action. Applicant’s arguments, see pg. 8-10, filed 28 January 2026, with respect to the rejection of claims 1-27 under 35 U.S.C. 103 have been fully considered and are persuasive. Specifically, applicant argues that Kawamura et al. (US 20220415085 A1; hereafter, Kawamura) does not disclose determining a light condition and adapting the first visual characteristic based on the determined light condition. The applicant argues that Rangesh et al. ("Take-over Time Prediction for Autonomous Driving in the Real-World: Robust Models, Data Augmentation, and Evaluation"; full reference on the PTO-892 included with the action filed 29 October 2025; hereafter, Rangesh) likewise does not disclose those limitations. The examiner agrees. Therefore, the rejection has been withdrawn.

However, upon further consideration, a new ground of rejection is made in view of Kim et al. (US 20220300591 A1; hereafter, Kim). Kim discloses: determining a light condition of the accessed frame of image data ([0187] "The user authentication device 100 predicts the illumination level and light direction of the image by recognizing the location and/or direction of a light source based on the contour, shade and phase in the face patch according to the anatomical step structure of the face in the registered face image." The "illumination level and light direction" are understood as a light condition); adapting the first artificial visual characteristic to the determined light condition ([0187] "The user authentication device 100 adjusts the illumination level and light direction of the mask of the initially generated fake masked image to match the predicted illumination level and light direction of the registered face image." The mask is understood as a first artificial visual characteristic and it is adjusted according to the light condition). The full rejection, including motivations to combine, is included below in the section "Claim Rejections - 35 USC § 103". Therefore, claims 1-27 are rejected under 35 U.S.C. 103.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 7, 9-10, 17, 19-21, and 23-27 are rejected under 35 U.S.C. 103 as being unpatentable over Kawamura et al. (US 20220415085 A1; hereafter, Kawamura) in view of Rangesh et al. ("Take-over Time Prediction for Autonomous Driving in the Real-World: Robust Models, Data Augmentation, and Evaluation"; full reference on the PTO-892 included with this action; hereafter, Rangesh) in further view of Kim et al. (US 20220300591 A1; hereafter, Kim).
Regarding claim 1, Kawamura discloses: A method for training a vehicular occupant monitoring system, the method comprising: accessing a frame of image data ([0027] the system acquires images from an acquisition device) captured by a camera ([0022] the acquisition device may be a camera); generating a first artificial visual characteristic for the occupant ([0028] a concealing image mask, which is understood as an artificial visual characteristic, is stored in a database. As it is being stored it must have been generated at a previous time); generating a first modified frame of image data, wherein the first modified frame of image data comprises the accessed frame of the image data modified to include the adapted first artificial visual characteristic overlaying a first portion of the occupant ([0030] the concealing mask image, i.e. artificial characteristic, is superimposed on the original image to generate an image with the person in the image partially concealed. The adapted aspect is taught later in combination with Kim); generating a second artificial visual characteristic for the occupant ([0028] there are a plurality of concealing mask image objects, at least one of which is understood as a second artificial visual characteristic), wherein the second artificial visual characteristic is different than the adapted first artificial visual characteristic ([0028] the plurality of concealing mask images are a plurality of types, therefore, it is understood that the second concealing mask image is different than the first.
The adapted aspect is taught later in combination with Kim); generating a second modified frame of image data, wherein the second modified frame of image data comprises the accessed frame of image data modified to include the second artificial visual characteristic overlaying a second portion of the occupant ([0030] "When there are a plurality of concealing mask images, the partially concealed image generation unit 103 generates, for each concealing mask image, concealed images in which a relevant concealing mask image is superposed on each of the face images." Based on the use of "each concealing mask image" and "each of the face images", it is understood that each face image, such as the first image, has a plurality of images generated related to it, one image with each one of the concealing mask images. At least one of these images is understood as the second modified frame of image data); and training ([0049]-[0050] models are trained) using (i) the accessed frame of image data ([0049] a model is trained using the original images), (ii) the first modified frame of image data ([0050] a model is trained using the concealed image, i.e. the modified frame of image data) and (iii) the second modified frame of image data ([0050] as the model is trained using the plural "concealed images" it is understood to include at least the first and second modified frames of image data). Kawamura is in the same field of endeavor of the instant application of preparing data for machine learning training with a concealing object, i.e. artificial visual characteristic, on an image of a person (Kawamura, [0023]). Kawamura does not disclose expressly that the camera is disposed at a vehicle and viewing at least a portion of an occupant in the vehicle, and training the vehicular occupant monitoring system. Rangesh discloses: image data captured by a camera (pg. iii col. 2 para. 3, cameras are installed to capture data) disposed at a vehicle (pg. iii col. 2 para. 
3, the cameras are installed in a vehicle) and viewing at least a portion of an occupant present in the vehicle (pg. iii col. 2 para. 3, the cameras view the occupant of the vehicle); and training (pg. vi col. 2 para. 2, the LSTM model is trained) the vehicular occupant monitoring system (pg. vi col. 1 para. 4, the model monitors the driver of the vehicle, therefore it is understood as a vehicular monitoring system). Kawamura and Rangesh are combinable because Rangesh is in the related field of endeavor of augmenting data for training a machine learning model (Rangesh, pg. iv col. 1 para. 4). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to combine the vehicle orientation of the camera and vehicle occupant monitoring system of Rangesh with the invention of Kawamura. The motivation for doing so would have been to "reliably predict takeover times for various secondary activities being performed by the drivers" (Rangesh, pg. ix col. 1 para. 2). Therefore, it would have been obvious to combine Rangesh with Kawamura. Kawamura in view of Rangesh does not disclose expressly to determine a light condition of the image and adapting the visual characteristic to the determined light condition. Kim discloses: determining a light condition of the accessed frame of image data ([0187] "The user authentication device 100 predicts the illumination level and light direction of the image by recognizing the location and/or direction of a light source based on the contour, shade and phase in the face patch according to the anatomical step structure of the face in the registered face image."
The "illumination level and light direction" are understood as a light condition); adapting the first artificial visual characteristic to the determined light condition ([0187] "The user authentication device 100 adjusts the illumination level and light direction of the mask of the initially generated fake masked image to match the predicted illumination level and light direction of the registered face image." The mask is understood as a first artificial visual characteristic and it is adjusted according to the light condition). Kim is combinable with Kawamura in view of Rangesh because it is from the same field of endeavor of generating a virtual feature for an image for training a model (Kim, [0019]). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to combine the light condition determination and visual characteristic adapting of Kim with the invention of Kawamura in view of Rangesh. The motivation for doing so would have been "the fake masked image that matches the registered face image better is acquired, thereby improving the matching accuracy" (Kim, [0187]). Therefore, it would have been obvious to combine Kim with Kawamura in view of Rangesh to obtain the invention as specified in claim 1.

Regarding claim 2, Kawamura in view of Rangesh in further view of Kim discloses the subject matter of claim 1. Kawamura further discloses: The method of claim 1, wherein the first artificial visual characteristic comprises at least one selected from the group consisting of (i) a hat ([0028] the concealing object may be a hat), (ii) a beard and (iii) a tattoo (because of the wording “one selected from the group consisting of”, the examiner is interpreting this claim as a list of optional embodiments. Therefore, a teaching of one embodiment teaches the limitations of the claim).

Regarding claim 3, Kawamura in view of Rangesh in further view of Kim discloses the subject matter of claim 1.
Kawamura further discloses: The method of claim 1, wherein the first artificial visual characteristic and the second artificial visual characteristic each comprise synthetic image data ([0030] the concealing mask image, i.e. artificial characteristic, is superimposed on the original image which is a synthetic modification of the original image. Therefore, the concealing mask image is understood as synthetic image data).

Regarding claim 4, Kawamura in view of Rangesh in further view of Kim discloses the subject matter of claim 1. Kawamura further discloses: The method of claim 1, wherein the first artificial visual characteristic and the second artificial visual characteristic do not overlay the eyes of the occupant ([0028] the concealing object may be a hat which does not overlay the eyes of the occupant).

Regarding claim 5, Kawamura in view of Rangesh in further view of Kim discloses the subject matter of claim 1. Kawamura further discloses: The method of claim 1, wherein training the vehicular occupant monitoring system comprises training a machine learning model ([0049]-[0050] machine learning models are trained). Kawamura does not disclose expressly that the training of the machine learning model is of a machine learning model of a vehicular occupant monitoring system. Rangesh discloses: training a machine learning model (pg. vi col. 2 para. 2, the LSTM model is trained) of the vehicular occupant monitoring system (pg. vi col. 1 para. 4, the model monitors the driver of the vehicle, therefore it is understood as a vehicular monitoring system). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to combine the machine learning model of the vehicle occupant monitoring system of Rangesh with the invention of Kawamura.
The motivation for doing so would have been to "reliably predict takeover times for various secondary activities being performed by the drivers" (Rangesh, pg. ix col. 1 para. 2). Therefore, it would have been obvious to combine Rangesh with Kawamura to obtain the invention as specified in claim 5.

Regarding claim 7, Kawamura in view of Rangesh in further view of Kim discloses the subject matter of claim 1. Kawamura discloses: The method of claim 1, wherein the first portion of the occupant comprises one selected from the group consisting of (i) hands of the occupant (because of the wording “one selected from the group consisting of”, the examiner is interpreting this claim as a list of optional embodiments. Therefore, a teaching of one embodiment teaches the limitations of the claim), (ii) hair of the occupant ([0030] relevant artificial characteristics are overlaid on the occupant. [0028] the artificial characteristic includes a hat which a person of ordinary skill in the art would understand to be associated with a hair portion of the occupant) and (iii) the face of the occupant ([0030] relevant artificial characteristics are overlaid on the occupant. [0028] the artificial characteristic includes sunglasses which a person of ordinary skill in the art would understand to be associated with a face portion of the occupant).

Regarding claim 9, Kawamura in view of Rangesh in further view of Kim discloses the subject matter of claim 1. Kawamura further discloses: The method of claim 1, wherein the first portion of the occupant and the second portion of the occupant are different ([0030] as "each" artificial characteristic is superimposed on "each" image, and as [0028] lists hats and sunglasses, it is understood that at least the hair portion and the face portion are used in superimposing which are different portions).

Regarding claim 10, Kawamura in view of Rangesh in further view of Kim discloses the subject matter of claim 1.
Kawamura does not disclose expressly that the image data is recorded using the camera while it is disposed at the vehicle. Rangesh discloses: The method of claim 1, wherein accessing the image data captured by the camera disposed at the vehicle comprises recording the image data using the camera while the camera is disposed at the vehicle (pg. iii col. 2 para. 3, the cameras are installed in a vehicle and record data while being disposed in the vehicle).

Regarding claim 17, Kawamura in view of Rangesh in further view of Kim discloses the subject matter of claim 1. Kawamura does not disclose expressly wherein the occupant is the driver of the vehicle and the vehicular monitoring system comprises a driver monitoring system. Rangesh discloses: The method of claim 1, wherein the occupant of the vehicle is a driver of the vehicle (pg. iii col. 2 para. 3, the driver is the subject of the cameras, therefore the occupant is a driver of the vehicle) and the vehicular occupant monitoring system comprises a vehicular driver monitoring system (pg. iii col. 2 para. 3, the system monitors the driver during take-over request events which is understood as a vehicular driver monitoring system). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to combine the driver monitoring system of Rangesh with the invention of Kawamura. The motivation for doing so would have been to "reliably predict takeover times for various secondary activities being performed by the drivers" (Rangesh, pg. ix col. 1 para. 2). Therefore, it would have been obvious to combine Rangesh with Kawamura to obtain the invention as specified in claim 17.
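The data-augmentation pipeline the examiner maps across Kawamura, Rangesh, and Kim can be summarized in a short sketch: overlay artificial visual characteristics on an occupant frame, adapt each overlay to the frame's determined light condition, and train on the original plus the modified frames. This is an illustrative reconstruction, not code from any cited reference; the mask assets, the mean-brightness light estimate, and every identifier below are hypothetical stand-ins.

```python
# Illustrative sketch of the claimed training-data pipeline.
# All names here are hypothetical; none come from the cited references.
from dataclasses import dataclass

@dataclass
class Frame:
    pixels: list[list[float]]  # grayscale values in [0, 1]

def estimate_light_level(frame: Frame) -> float:
    """Crude light-condition estimate: mean image brightness."""
    flat = [p for row in frame.pixels for p in row]
    return sum(flat) / len(flat)

def adapt_characteristic(mask: Frame, light_level: float) -> Frame:
    """Scale the overlay's brightness toward the frame's light level."""
    return Frame([[min(1.0, p * (0.5 + light_level)) for p in row]
                  for row in mask.pixels])

def overlay(frame: Frame, mask: Frame) -> Frame:
    """Superimpose the (already adapted) mask onto the frame."""
    return Frame([[m if m > 0 else p for p, m in zip(prow, mrow)]
                  for prow, mrow in zip(frame.pixels, mask.pixels)])

# Original frame plus two differently placed artificial characteristics.
frame = Frame([[0.6, 0.6], [0.2, 0.2]])
hat = Frame([[0.9, 0.9], [0.0, 0.0]])    # overlays the "hair" portion
beard = Frame([[0.0, 0.0], [0.8, 0.8]])  # overlays the "face" portion

light = estimate_light_level(frame)
training_set = [
    frame,                                           # accessed frame
    overlay(frame, adapt_characteristic(hat, light)),    # first modified
    overlay(frame, adapt_characteristic(beard, light)),  # second modified
]
# A monitoring model would then be trained on all three frames.
```

The point of the sketch is only that the claim requires all three pieces together: light-condition determination (Kim), characteristic overlay (Kawamura), and the in-vehicle training target (Rangesh).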
Regarding claim 19, Kawamura discloses: A method for training a vehicular occupant monitoring system, the method comprising: accessing a frame of image data ([0027] the system acquires images from an acquisition device) captured by a camera ([0022] the acquisition device may be a camera); generating a first artificial visual characteristic for the occupant ([0028] a concealing image mask, which is understood as an artificial visual characteristic, is stored in a database. As it is being stored it must have been generated at a previous time); generating a first modified frame of image data, wherein the first modified frame of image data comprises the accessed frame of the image data modified to include the adapted first artificial visual characteristic overlaying a first portion of the occupant ([0030] the concealing mask image, i.e. artificial characteristic, is superimposed on the original image to generate an image with the person in the image partially concealed. The adapted aspect is taught later in combination with Kim); wherein at least one selected from the group consisting of (i) the first artificial visual characteristic comprises a hat ([0028] the concealing object may be a hat) and the first portion of the occupant comprises hair of the occupant ([0030] relevant artificial characteristics are overlaid on the occupant.
[0028] the artificial characteristic includes a hat which a person of ordinary skill in the art would understand to be associated with a hair portion of the occupant), (ii) the first artificial visual characteristic comprises a beard and the first portion of the occupant comprises the face of the occupant and (iii) the first artificial visual characteristic comprises a tattoo and the first portion of the occupant comprises one selected from the group consisting of (a) hands of the occupant and (b) the face of the occupant (because of the wording “one selected from the group consisting of”, the examiner is interpreting this claim as a list of optional embodiments. Therefore, a teaching of one embodiment teaches the limitations of the claim); generating a second artificial visual characteristic for the occupant ([0028] there are a plurality of concealing mask image objects, at least one of which is understood as a second artificial visual characteristic), wherein the second artificial visual characteristic is different than the adapted first artificial visual characteristic ([0028] the plurality of concealing mask images are a plurality of types, therefore, it is understood that the second concealing mask image is different than the first. The adapted aspect is taught later in combination with Kim); generating a second modified frame of image data, wherein the second modified frame of image data comprises the accessed frame of image data modified to include the second artificial visual characteristic overlaying a second portion of the occupant ([0030] "When there are a plurality of concealing mask images, the partially concealed image generation unit 103 generates, for each concealing mask image, concealed images in which a relevant concealing mask image is superposed on each of the face images." 
Based on the use of "each concealing mask image" and "each of the face images", it is understood that each face image, such as the first image, has a plurality of images generated related to it, one image with each one of the concealing mask images. At least one of these images is understood as the second modified frame of image data); and training ([0049]-[0050] models are trained) using (i) the accessed frame of image data ([0049] a model is trained using the original images), (ii) the first modified frame of image data ([0050] a model is trained using the concealed image, i.e. the modified frame of image data) and (iii) the second modified frame of image data ([0050] as the model is trained using the plural "concealed images" it is understood to include at least the first and second modified frames of image data). Kawamura is in the same field of endeavor of the instant application of preparing data for machine learning training with a concealing object, i.e. artificial visual characteristic, on an image of a person (Kawamura, [0023]). Kawamura does not disclose expressly that the camera is disposed at a vehicle and viewing at least a portion of an occupant in the vehicle, and training the vehicular occupant monitoring system. Rangesh discloses: image data captured by a camera (pg. iii col. 2 para. 3, cameras are installed to capture data) disposed at a vehicle (pg. iii col. 2 para. 3, the cameras are installed in a vehicle) and viewing at least a portion of an occupant present in the vehicle (pg. iii col. 2 para. 3, the cameras view the occupant of the vehicle); and training (pg. vi col. 2 para. 2, the LSTM model is trained) the vehicular occupant monitoring system (pg. vi col. 1 para.
4, the model monitors the driver of the vehicle, therefore it is understood as a vehicular monitoring system). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to combine the vehicle orientation of the camera and vehicle occupant monitoring system of Rangesh with the invention of Kawamura. The motivation for doing so would have been to "reliably predict takeover times for various secondary activities being performed by the drivers" (Rangesh, pg. ix col. 1 para. 2). Therefore, it would have been obvious to combine Rangesh with Kawamura. Kawamura in view of Rangesh does not disclose expressly to determine a light condition of the image and adapting the visual characteristic to the determined light condition. Kim discloses: determining a light condition of the accessed frame of image data ([0187] "The user authentication device 100 predicts the illumination level and light direction of the image by recognizing the location and/or direction of a light source based on the contour, shade and phase in the face patch according to the anatomical step structure of the face in the registered face image." The "illumination level and light direction" are understood as a light condition); adapting the first artificial visual characteristic to the determined light condition ([0187] "The user authentication device 100 adjusts the illumination level and light direction of the mask of the initially generated fake masked image to match the predicted illumination level and light direction of the registered face image."
The mask is understood as a first artificial visual characteristic and it is adjusted according to the light condition). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to combine the light condition determination and visual characteristic adapting of Kim with the invention of Kawamura in view of Rangesh. The motivation for doing so would have been "the fake masked image that matches the registered face image better is acquired, thereby improving the matching accuracy" (Kim, [0187]). Therefore, it would have been obvious to combine Kim with Kawamura in view of Rangesh to obtain the invention as specified in claim 19.

Regarding claim 20, Kawamura in view of Rangesh in further view of Kim discloses the subject matter of claim 19. Kawamura further discloses: The method of claim 19, wherein the first artificial visual characteristic and the second artificial visual characteristic each comprise synthetic image data ([0030] the concealing mask image, i.e. artificial characteristic, is superimposed on the original image which is a synthetic modification of the original image. Therefore, the concealing mask image is understood as synthetic image data).

Regarding claim 21, Kawamura in view of Rangesh in further view of Kim discloses the subject matter of claim 19. Kawamura further discloses: The method of claim 19, wherein training the vehicular occupant monitoring system comprises training a machine learning model ([0049]-[0050] machine learning models are trained). Kawamura does not disclose expressly that the training of the machine learning model is of a machine learning model of a vehicular occupant monitoring system. Rangesh discloses: training a machine learning model (pg. vi col. 2 para. 2, the LSTM model is trained) of the vehicular occupant monitoring system (pg. vi col. 1 para.
4, the model monitors the driver of the vehicle, therefore it is understood as a vehicular monitoring system). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to combine the machine learning model of the vehicle occupant monitoring system of Rangesh with the invention of Kawamura. The motivation for doing so would have been to "reliably predict takeover times for various secondary activities being performed by the drivers" (Rangesh, pg. ix col. 1 para. 2). Therefore, it would have been obvious to combine Rangesh with Kawamura to obtain the invention as specified in claim 21.

Regarding claim 23, Kawamura in view of Rangesh in further view of Kim discloses the subject matter of claim 19. Kawamura further discloses: The method of claim 19, wherein the first portion of the occupant and the second portion of the occupant are different ([0030] as "each" artificial characteristic is superimposed on "each" image, and as [0028] lists hats and sunglasses, it is understood that at least the hair portion and the face portion are used in superimposing which are different portions).

Regarding claim 24, Kawamura discloses: A method for training a vehicular occupant monitoring system, the method comprising: recording a frame of image data ([0027] the system acquires images from an acquisition device) using a camera ([0022] the acquisition device may be a camera); generating a first artificial visual characteristic for the occupant ([0028] a concealing image mask, which is understood as an artificial visual characteristic, is stored in a database.
As it is being stored it must have been generated at a previous time); generating a first modified frame of image data, wherein the first modified frame of image data comprises the recorded frame of the image data modified to include the adapted first artificial visual characteristic overlaying a first portion of the occupant ([0030] the concealing mask image, i.e. artificial characteristic, is superimposed on the original image to generate an image with the person in the image partially concealed. The adapted aspect is taught later in combination with Kim); generating a second artificial visual characteristic for the occupant ([0028] there are a plurality of concealing mask image objects, at least one of which is understood as a second artificial visual characteristic), wherein the second artificial visual characteristic is different than the adapted first artificial visual characteristic ([0030] as "each" artificial characteristic is superimposed on "each" image, and as [0028] lists hats and sunglasses, it is understood that at least the hair portion and the face portion are used in superimposing which are different portions. The adapted aspect is taught later in combination with Kim), and wherein the first artificial visual characteristic and the second artificial visual characteristic each comprise synthetic image data ([0030] the concealing mask image, i.e. artificial characteristic, is superimposed on the original image which is a synthetic modification of the original image.
Therefore, the concealing mask image is understood as synthetic image data); generating a second modified frame of image data, wherein the second modified frame of image data comprises the recorded frame of image data modified to include the second artificial visual characteristic overlaying a second portion of the occupant ([0030] "When there are a plurality of concealing mask images, the partially concealed image generation unit 103 generates, for each concealing mask image, concealed images in which a relevant concealing mask image is superposed on each of the face images." Based on the use of "each concealing mask image" and "each of the face images", it is understood that each face image, such as the first image, has a plurality of images generated related to it, one image with each one of the concealing mask images. At least one of these images is understood as the second modified frame of image data); and training ([0049]-[0050] models are trained) using (i) the recorded frame of image data ([0049] a model is trained using the original images), (ii) the first modified frame of image data ([0050] a model is trained using the concealed image, i.e. the modified frame of image data) and (iii) the second modified frame of image data ([0050] as the model is trained using the plural "concealed images" it is understood to include at least the first and second modified frames of image data). Kawamura does not disclose expressly that the camera is disposed at a vehicle and viewing at least a portion of an occupant in the vehicle, and training the vehicular occupant monitoring system. Rangesh discloses: image data captured by a camera (pg. iii col. 2 para. 3, cameras are installed to capture data) disposed at a vehicle (pg. iii col. 2 para. 3, the cameras are installed in a vehicle) and viewing at least a portion of an occupant present in the vehicle (pg. iii col. 2 para. 
3, the cameras view the occupant of the vehicle); and training (pg. vi col. 2 para. 2, the LSTM model is trained) the vehicular occupant monitoring system (pg. vi col. 1 para. 4, the model monitors the driver of the vehicle, therefore it is understood as a vehicular monitoring system). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to combine the vehicle orientation of the camera and vehicle occupant monitoring system of Rangesh with the invention of Kawamura. The motivation for doing so would have been to "reliably predict takeover times for various secondary activities being performed by the drivers" (Rangesh, pg. ix col. 1 para. 2). Therefore, it would have been obvious to combine Rangesh with Kawamura. Kawamura in view of Rangesh does not disclose expressly to determine a light condition of the image and adapting the visual characteristic to the determined light condition. Kim discloses: determining a light condition of the accessed frame of image data ([0187] "The user authentication device 100 predicts the illumination level and light direction of the image by recognizing the location and/or direction of a light source based on the contour, shade and phase in the face patch according to the anatomical step structure of the face in the registered face image." The "illumination level and light direction" are understood as a light condition); adapting the first artificial visual characteristic to the determined light condition ([0187] "The user authentication device 100 adjusts the illumination level and light direction of the mask of the initially generated fake masked image to match the predicted illumination level and light direction of the registered face image."
The mask is understood as a first artificial visual characteristic and it is adjusted according to the light condition);

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the light condition determination and visual characteristic adapting of Kim with the invention of Kawamura in view of Rangesh. The motivation for doing so would have been "the fake masked image that matches the registered face image better is acquired, thereby improving the matching accuracy" (Kim, [0187]). Therefore, it would have been obvious to combine Kim with Kawamura in view of Rangesh to obtain the invention as specified in claim 24.

Regarding claim 25, Kawamura in view of Rangesh in further view of Kim discloses the subject matter of claim 24. Kawamura further discloses: The method of claim 24, wherein the first artificial visual characteristic comprises at least one selected from the group consisting of (i) a hat ([0028] the concealing object may be a hat), (ii) a beard and (iii) a tattoo (because of the wording “one selected from the group consisting of”, the examiner is interpreting this claim as a list of optional embodiments. Therefore, a teaching of one embodiment teaches the limitations of the claim).

Regarding claim 26, Kawamura in view of Rangesh in further view of Kim discloses the subject matter of claim 24. Kawamura further discloses: The method of claim 24, wherein the first artificial visual characteristic and the second artificial visual characteristic do not overlay the eyes of the occupant ([0028] the concealing object may be a hat which does not overlay the eyes of the occupant).

Regarding claim 27, Kawamura in view of Rangesh in further view of Kim discloses the subject matter of claim 24. Kawamura further discloses: The method of claim 24, wherein training the vehicular occupant monitoring system comprises training a machine learning model ([0049]-[0050] machine learning models are trained).

Kawamura does not disclose expressly that the training of the machine learning model is of a machine learning model of a vehicular occupant monitoring system.

Rangesh discloses: training a machine learning model (pg. vi col. 2 para. 2, the LSTM model is trained) of the vehicular occupant monitoring system (pg. vi col. 1 para. 4, the model monitors the driver of the vehicle and is therefore understood as a vehicular monitoring system).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the machine learning model of the vehicle occupant monitoring system of Rangesh with the invention of Kawamura. The motivation for doing so would have been to "reliably predict takeover times for various secondary activities being performed by the drivers" (Rangesh, pg. ix col. 1 para. 2). Therefore, it would have been obvious to combine Rangesh with Kawamura to obtain the invention as specified in claim 27.

Claims 6, 8, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Kawamura et al. (US 20220415085 A1; hereafter, Kawamura) in view of Rangesh et al. ("Take-over Time Prediction for Autonomous Driving in the Real-World: Robust Models, Data Augmentation, and Evaluation"; full reference on the PTO-892 included with this action; hereafter, Rangesh) in further view of Kim et al. (US 20220300591 A1; hereafter, Kim) and of Bowers et al. (US 9361447 B1; hereafter, Bowers).

Regarding claim 6, Kawamura in view of Rangesh in further view of Kim discloses the subject matter of claim 1.
Kawamura in view of Rangesh in further view of Kim does not disclose expressly generating a third modified image with both the first and second artificial visual characteristics overlaying a respective portion of the occupant.

Bowers discloses: The method of claim 1, further comprising generating third modified image data, wherein the third modified image data comprises the accessed frame of image data with the first artificial visual characteristic and the second artificial visual characteristic each overlaying (col. 9 lines 23-27, multiple artificial visual characteristics may be overlaid on the same image) a respective portion of the occupant (col. 9 lines 23-27, a person of ordinary skill in the art would understand that the hat would be overlaid on the respective hair portion and a beard would be overlaid on a respective face portion in this example).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the image with more than one artificial visual characteristic of Bowers with the invention of Kawamura in view of Rangesh in further view of Kim. The motivation for doing so would have been "The process is illustratively configured so as to require a particular number of iterations and associated number of selected overlay effects that are sufficient to satisfy a specified minimum entropy measure" (Bowers, col. 6 lines 28-31). In other words, selecting a number of overlay effects allows for greater variation or entropy. Greater variation would fulfill the suggestion of Rangesh to increase the number of samples to improve training (Rangesh, pg. ii col. 2 para. 2). Therefore, it would have been obvious to combine Bowers with Kawamura in view of Rangesh in further view of Kim to obtain the invention as specified in claim 6.

Regarding claim 8, Kawamura in view of Rangesh in further view of Kim discloses the subject matter of claim 1. Kawamura in view of Rangesh in further view of Kim does not disclose that the first and second portion are the same.

Bowers discloses: The method of claim 1, wherein the first portion of the occupant and the second portion of the occupant are the same (col. 9 lines 23-27, multiple artificial visual characteristics may be overlaid on the same image, including a beard and sunglasses, which would both be on the face portion of an occupant; therefore the first and second portion would be the same).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to make the first and second portion the same as in Bowers with the invention of Kawamura in view of Rangesh in further view of Kim. The motivation for doing so would have been "The process is illustratively configured so as to require a particular number of iterations and associated number of selected overlay effects that are sufficient to satisfy a specified minimum entropy measure" (Bowers, col. 6 lines 28-31). In other words, selecting a number of overlay effects allows for greater variation or entropy. More options for variation become available when artificial visual characteristics may occupy the same portion. Greater variation would fulfill the suggestion of Rangesh to increase the number of samples to improve training (Rangesh, pg. ii col. 2 para. 2). Therefore, it would have been obvious to combine Bowers with Kawamura in view of Rangesh in further view of Kim to obtain the invention as specified in claim 8.

Regarding claim 22, Kawamura in view of Rangesh in further view of Kim discloses the subject matter of claim 19. Kawamura in view of Rangesh in further view of Kim does not disclose that the first and second portion are the same.

Bowers discloses: The method of claim 19, wherein the first portion of the occupant and the second portion of the occupant are the same (col. 9 lines 23-27, multiple artificial visual characteristics may be overlaid on the same image, including a beard and sunglasses, which would both be on the face portion of an occupant; therefore the first and second portion would be the same).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to make the first and second portion the same as in Bowers with the invention of Kawamura in view of Rangesh in further view of Kim. The motivation for doing so would have been "The process is illustratively configured so as to require a particular number of iterations and associated number of selected overlay effects that are sufficient to satisfy a specified minimum entropy measure" (Bowers, col. 6 lines 28-31). In other words, selecting a number of overlay effects allows for greater variation or entropy. More options for variation become available when artificial visual characteristics may occupy the same portion. Greater variation would fulfill the suggestion of Rangesh to increase the number of samples to improve training (Rangesh, pg. ii col. 2 para. 2). Therefore, it would have been obvious to combine Bowers with Kawamura in view of Rangesh in further view of Kim to obtain the invention as specified in claim 22.

Claims 11-16 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Kawamura et al. (US 20220415085 A1; hereafter, Kawamura) in view of Rangesh et al. ("Take-over Time Prediction for Autonomous Driving in the Real-World: Robust Models, Data Augmentation, and Evaluation"; full reference on the PTO-892 included with this action; hereafter, Rangesh) in further view of Kim et al. (US 20220300591 A1; hereafter, Kim) and of Peterson et al. (US 20210323473 A1; hereafter, Peterson).

Regarding claim 11, Kawamura in view of Rangesh in further view of Kim discloses the subject matter of claim 1.
Kawamura in view of Rangesh in further view of Kim does not disclose expressly that the camera is disposed in the interior rearview mirror assembly of the vehicle.

Peterson discloses: The method of claim 1, wherein the camera is disposed at an interior rearview mirror assembly of the vehicle ([0031] the camera is in the interior rearview mirror assembly).

Peterson is combinable with Kawamura in view of Rangesh in further view of Kim because it is from the related field of endeavor of providing a system for driver monitoring (Peterson, [0004]).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the rearview-mirror-disposed camera of Peterson with the invention of Kawamura in view of Rangesh in further view of Kim. The motivation for doing so would have been to provide "an interior rearview mirror assembly that has a driver monitoring camera disposed at the mirror head so as to move in tandem with the mirror head when the mirror head is adjusted relative to an interior portion of the vehicle to adjust the driver's rearward view" (Peterson, [0004]). Therefore, it would have been obvious to combine Peterson with Kawamura in view of Rangesh in further view of Kim to obtain the invention as specified in claim 11.

Regarding claim 12, Kawamura in view of Rangesh in further view of Kim and of Peterson discloses the subject matter of claim 11. Kawamura in view of Rangesh in further view of Kim does not disclose expressly that the camera in the rearview mirror assembly views through a mirror reflective element.

Peterson further discloses: The method of claim 11, wherein the camera is disposed within a mirror head of the interior rearview mirror assembly of the vehicle ([0031] the camera is in the interior rearview mirror assembly), and wherein the camera views through a mirror reflective element of the mirror head of the interior rearview mirror assembly of the vehicle ([0031] the camera views through the mirror reflective element).

Regarding claim 13, Kawamura in view of Rangesh in further view of Kim and of Peterson discloses the subject matter of claim 11. Kawamura in view of Rangesh in further view of Kim does not disclose expressly that the data captured by the camera is processed by an ECU and that the ECU is disposed at the interior rearview mirror assembly.

Peterson discloses: The method of claim 11, wherein image data captured by the camera is processed by an ECU ([0035] the driver monitoring system (DMS) PCB receives data from the camera, which is then communicated to the control PCB; [0031] the control PCB is understood as an ECU because it is an electronic control unit) and wherein the ECU is disposed at the interior rearview mirror assembly of the vehicle ([0031] and Fig. 4, the control PCB is within the mirror of the vehicle).

Regarding claim 14, Kawamura in view of Rangesh in further view of Kim and of Peterson discloses the subject matter of claim 11. Kawamura in view of Rangesh in further view of Kim does not disclose expressly that the data captured by the camera is processed by an ECU and that the ECU is disposed remote from the mirror assembly.

Peterson discloses: The method of claim 11, wherein image data captured by the camera is processed by an ECU ([0035] the driver monitoring system (DMS) PCB receives data from the camera, which is then communicated to the control PCB.
[0031] the control PCB is understood as an ECU because it is an electronic control unit), and wherein the ECU is disposed at the vehicle remote from the interior rearview mirror assembly (for the purpose of examination, the examiner interprets "disposed at the vehicle remote from" as any location in the vehicle which is not in the mirror. The examiner is not interpreting "the vehicle remote" as a single term describing a single object. [0052] the PCB may be disposed outside the mirror head and in the vehicle).

Regarding claim 15, Kawamura in view of Rangesh in further view of Kim and of Peterson discloses the subject matter of claim 14. Kawamura in view of Rangesh in further view of Kim does not disclose expressly that the image data captured by the camera is transferred to the ECU by a coaxial cable.

Peterson discloses: The method of claim 14, wherein image data captured by the camera is transferred to the ECU via a coaxial cable ([0060] connections between cameras and PCBs, i.e. ECU, may be coaxial cables).

Regarding claim 16, Kawamura in view of Rangesh in further view of Kim discloses the subject matter of claim 1. Kawamura in view of Rangesh in further view of Kim does not disclose expressly that the data captured by the camera is processed by an ECU and that the ECU can process image data for at least one driving assist system of the vehicle.

Peterson discloses: The method of claim 1, wherein image data captured by the camera is processed by an ECU ([0035] the driver monitoring system (DMS) PCB receives data from the camera, which is then communicated to the control PCB; [0031] the control PCB is understood as an ECU because it is an electronic control unit), and wherein the ECU is operable to process the image data for at least one driving assist system of the vehicle (Claim 1, the system may process data from a forward viewing camera to identify hazards, which is understood as a driving assist system; Claim 21, the second processor and the first processor of claim 1 may be the same processor, i.e. the ECU).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to include an ECU which processes a driving assist system as taught by Peterson in the invention of Kawamura in view of Rangesh in further view of Kim. The motivation for doing so would have been "to determine driving conditions and/or potential hazards ahead of the vehicle" (Peterson, [0004]). Therefore, it would have been obvious to combine Peterson with Kawamura in view of Rangesh in further view of Kim to obtain the invention as specified in claim 16.

Regarding claim 18, Kawamura in view of Rangesh in further view of Kim discloses the subject matter of claim 1. Kawamura in view of Rangesh in further view of Kim does not disclose expressly wherein the occupant is the passenger of the vehicle and the vehicular monitoring system comprises an occupant detection system.

Peterson discloses: The method of claim 1, wherein the occupant of the vehicle is a passenger of the vehicle ([0031] the system may view the passenger of the vehicle) and the vehicular occupant monitoring system comprises a vehicular occupant detection system ([0031] the system includes an occupant monitoring system which is understood as a detection system).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the occupant detection system of Peterson with the invention of Kawamura in view of Rangesh in further view of Kim. The motivation for doing so would have been to "provide occupant detection and/or monitoring functions" (Peterson, [0031]). Therefore, it would have been obvious to combine Peterson with Kawamura in view of Rangesh in further view of Kim to obtain the invention as specified in claim 18.
Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Suggu et al., US 11702011 B1, discloses a system for augmenting images of a driver of a vehicle to improve training of a model. See also Saggu et al., US 11699282 B1. Frolov et al., "Image Synthesis Pipeline for CNN-Based Sensing Systems" (full reference on the PTO-892 included with this action), discloses a system which augments image data for training machine learning models and considers the lighting situation in an image in order to match the augmentations with the lighting situation.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSHUA B CROCKETT whose telephone number is (571)270-7989. The examiner can normally be reached Monday-Thursday 8am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, John M Villecco, can be reached at (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JOSHUA B. CROCKETT/
Examiner, Art Unit 2661

/JOHN VILLECCO/
Supervisory Patent Examiner, Art Unit 2661
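Editor's note: the data-augmentation pipeline the rejection assembles from Kawamura (overlaying a concealing object on a face image), Kim (adapting the overlay to the frame's estimated light condition), and Rangesh (training an occupant monitoring model on the resulting frames) can be sketched as follows. This is an illustrative reconstruction, not code from the record; every function and variable name is hypothetical.

```python
# Hypothetical sketch of the claimed augmentation flow: estimate the light
# condition of a recorded frame, adapt an artificial visual characteristic
# (e.g. a hat or beard overlay) to that condition, and assemble a training
# set of the original frame plus two modified frames.
import numpy as np

def estimate_light_condition(frame: np.ndarray) -> float:
    """Crude stand-in for Kim's illumination estimate: mean brightness in [0, 1]."""
    return float(frame.mean())

def overlay_characteristic(frame, overlay, mask, light_level):
    """Overlay an artificial visual characteristic on the masked region,
    scaling the overlay toward the frame's estimated illumination."""
    adapted = np.clip(overlay * light_level, 0.0, 1.0)  # adapt overlay to light condition
    return np.where(mask, adapted, frame)               # pixels outside the mask are untouched

rng = np.random.default_rng(0)
frame = rng.uniform(0.2, 0.8, size=(64, 64))            # recorded grayscale frame in [0, 1]

hat_mask = np.zeros((64, 64), dtype=bool)
hat_mask[:16, :] = True                                 # first portion of the occupant ("hat")
beard_mask = np.zeros((64, 64), dtype=bool)
beard_mask[48:, :] = True                               # second portion ("beard")

light = estimate_light_condition(frame)
first_modified = overlay_characteristic(frame, np.full((64, 64), 0.9), hat_mask, light)
second_modified = overlay_characteristic(frame, np.full((64, 64), 0.3), beard_mask, light)

# Mirrors the claim's (i)/(ii)/(iii) structure: original + two modified frames.
training_set = [frame, first_modified, second_modified]
print(len(training_set))  # 3
```

A real monitoring model (Rangesh's LSTM, for instance) would then be fit on `training_set`; that step is omitted here.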

Prosecution Timeline

Oct 30, 2023
Application Filed
Oct 28, 2025
Non-Final Rejection — §103
Jan 27, 2026
Response Filed
Mar 23, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592060
ARTIFICIAL INTELLIGENCE DEVICE AND 3D AGENCY GENERATING METHOD THEREOF
2y 5m to grant Granted Mar 31, 2026
Patent 12587704
VIDEO DATA TRANSMISSION AND RECEPTION METHOD USING HIGH-SPEED INTERFACE, AND APPARATUS THEREFOR
2y 5m to grant Granted Mar 24, 2026
Patent 12567150
EDITING PRESEGMENTED IMAGES AND VOLUMES USING DEEP LEARNING
2y 5m to grant Granted Mar 03, 2026
Patent 12561839
SYSTEMS AND METHODS FOR CALIBRATING IMAGE SENSORS OF A VEHICLE
2y 5m to grant Granted Feb 24, 2026
Patent 12529639
METHOD FOR ESTIMATING HYDROCARBON SATURATION OF A ROCK
2y 5m to grant Granted Jan 20, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
72%
Grant Probability
99%
With Interview (+27.5%)
3y 0m
Median Time to Grant
Moderate
PTA Risk
Based on 18 resolved cases by this examiner. Grant probability derived from career allow rate.
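The headline figures above appear internally consistent: 13 of 18 resolved cases gives the 72% career allow rate, and adding the 27.5-point interview lift (capped at 100% and truncated to a whole percent) reproduces the 99% "with interview" figure. The combination rule below is an assumption for illustration; the page does not document how it derives the number.

```python
# Hypothetical reconstruction of the page's headline statistics. The
# additive, truncating combination rule is a guess, not documented here.
def allow_rate_pct(granted: int, resolved: int) -> int:
    """Career allow rate as a whole percent."""
    return round(100 * granted / resolved)

def with_interview_pct(base_pct: float, lift_pct: float) -> int:
    """Assumed rule: add the interview lift in percentage points,
    cap at 100, truncate to a whole percent."""
    return int(min(base_pct + lift_pct, 100.0))

base = allow_rate_pct(13, 18)               # matches "13 granted / 18 resolved" -> 72
combined = with_interview_pct(72.0, 27.5)   # matches the "With Interview" figure -> 99
print(base, combined)
```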
