DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Amendment
Applicant submitted amendments on 10/06/2025. The Examiner acknowledges the amendments and has reviewed the claims accordingly.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statements (IDSs) dated 9/21/2022 and 3/2/2023 have been previously considered and remain of record in the application file.
Overview
Claims 1-24 are pending in this application and have been considered below.
Claims 1-24 are rejected.
Applicant Arguments
With regard to Argument 1, Applicant argues that the cited reference fails to disclose the claimed feature of “the modifying comprising adding one or more artifacts into the image to simulate a weather condition” and that, accordingly, amended independent Claims 1, 8, and 15 are patentable over the prior art (see Remarks, pages 11-13, Section II).
Examiner’s Response
In response to Argument 1, the argument has been considered but is moot in view of the new ground(s) of rejection necessitated by the amendments. A new reference, Halder, has been introduced which, in the Abstract, discloses “the modifying comprising adding one or more artifacts into the image to simulate a weather condition”. Halder describes a process similar to that recited in the amended claims: to improve robustness to rain, a physically based rain rendering pipeline is used to realistically insert rain into clear-weather images. After reviewing the amendments, the Examiner finds that Wang in view of Halder teaches the amended claims as presented. The details of the rejection are set forth below.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-5, 8-12, 15-19, and 22-24 are rejected under 35 U.S.C. 103 as obvious over Wang et al. (NPL: “Towards Visible and Thermal Drone Monitoring with Convolutional Neural Networks”, hereafter referred to as Wang) in view of Halder et al. (NPL: “Physics-Based Rendering for Improving Robustness to Rain”, hereafter referred to as Halder).
Claim 1
Regarding Claim 1, Wang teaches a method comprising:
obtaining an image of a scene (Wang discloses "first step in developing the drone monitoring system is to collect drone flying images and videos for the purpose of training and testing," Section 3A, paragraph 1);
identifying one or more labels for one or more objects captured in the image (Wang discloses "model-based drone augmentation technique that automatically generates visible drone images with a bounding box label on the drone's location," Abstract);
generating one or more domain-specific augmented images by modifying the image, the one or more domain-specific augmented images associated with the one or more labels (Wang discloses "model-based augmentation technique to acquire more training images with the ground-truth labels. Augmentation techniques include geometric transformations, illumination variation, and image quality," Section 3B, paragraphs 2-4); and
training or retraining a machine learning model using the one or more domain-specific augmented images and the one or more labels (Wang discloses "we propose to exploit a large number of synthetic drone images, which are generated by conventional image processing and 3D rendering algorithms, along with a few real 2D and 3D data to train the CNN. We develop an adversarial data augmentation technique, a modified Cycle-GAN-based generation approach, to create more thermal drone images to train the thermal drone detector," Section 1, paragraph 6).
Wang does not explicitly teach the modifying comprising adding one or more artifacts into the image to simulate a weather condition.
However, Halder teaches the modifying comprising adding one or more artifacts into the image to simulate a weather condition (Halder in Abstract discloses “to improve the robustness to rain, we present a physically based rain rendering pipeline for realistically inserting rain into clear weather images”).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wang by adding artifacts to simulate a weather condition as taught by Halder, since both references are analogous art in the field of image modification. One of ordinary skill in the art would have been motivated to combine the references because combining Wang’s image modification for domain-specific augmentation in object detection training with Halder’s weather-simulation method yields the predictable result of improved robustness to rainy and other adverse conditions.
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Claim 2
Regarding Claim 2, Wang in view of Halder teaches the method of Claim 1, wherein:
identifying the one or more labels comprises identifying the one or more labels using an initial machine learning model (Wang discloses "the first step in developing the drone monitoring system is to collect drone flying images and videos for the purpose of training and testing. We annotate each drone sequence with a tight bounding box around the drone. The ground truth can be used in CNN training," Section 3A, paragraphs 1-2. The initial system uses labeled data (bounding boxes) for training the model.); and
training or retraining the machine learning model comprises retraining the initial machine learning model (Wang discloses "we propose to exploit a large number of synthetic drone images, which are generated by conventional image processing and 3D rendering algorithms, along with a few real 2D and 3D data to train the CNN," Section 1, paragraph 6).
Claim 3
Regarding Claim 3, Wang in view of Halder teaches the method of Claim 1, wherein generating the one or more domain-specific augmented images comprises at least one of: modifying the image to include a different amount of motion blur; and modifying the image to include a different lighting condition (Wang discloses "this augmentation technique is used to simulate blurred drones caused by camera's motion and out-of-focus. To simulate drones in the shadows, we generate regular shadow maps by using random lines and irregular shadow maps via Perlin noise [28]," Section 3B, paragraph 3).
Claim 4
Regarding Claim 4, Wang in view of Halder teaches the method of Claim 1, wherein generating the one or more domain-specific augmented images comprises applying at least one geometric transformation to the image and to the one or more labels (Wang discloses "we apply geometric transformations such as image translation, rotation and scaling," Section 3B, paragraph 3).
Claim 5
Regarding Claim 5, Wang in view of Halder teaches the method of Claim 1, further comprising using the machine learning model to perform object detection (Wang discloses "to monitor the drones efficiently during the nighttime, we train our CNN-based thermal drone detector using infrared thermal images," Section 3C, paragraph 1).
Claim 8
Regarding Claim 8, Wang teaches an apparatus comprising:
at least one processor configured to:
obtain an image of a scene (Wang discloses "first step in developing the drone monitoring system is to collect drone flying images and videos for the purpose of training and testing," Section 3A, paragraph 1);
identify one or more labels for one or more objects captured in the image (Wang discloses "model-based drone augmentation technique that automatically generates visible drone images with a bounding box label on the drone's location," Abstract);
generate one or more domain-specific augmented images by modifying the image, the one or more domain-specific augmented images associated with the one or more labels (Wang discloses "model-based augmentation technique to acquire more training images with the ground-truth labels. Augmentation techniques include geometric transformations, illumination variation, and image quality," Section 3B, paragraphs 2-4); and
train or retrain a machine learning model using the one or more domain-specific augmented images and the one or more labels (Wang discloses "we propose to exploit a large number of synthetic drone images, which are generated by conventional image processing and 3D rendering algorithms, along with a few real 2D and 3D data to train the CNN. We develop an adversarial data augmentation technique, a modified Cycle-GAN-based generation approach, to create more thermal drone images to train the thermal drone detector," Section 1, paragraph 6).
Wang does not explicitly teach the modifying comprising adding one or more artifacts into the image to simulate a weather condition.
However, Halder teaches the modifying comprising adding one or more artifacts into the image to simulate a weather condition (Halder in Abstract discloses “to improve the robustness to rain, we present a physically based rain rendering pipeline for realistically inserting rain into clear weather images”).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wang by adding artifacts to simulate a weather condition as taught by Halder, since both references are analogous art in the field of image modification. One of ordinary skill in the art would have been motivated to combine the references because combining Wang’s image modification for domain-specific augmentation in object detection training with Halder’s weather-simulation method yields the predictable result of improved robustness to rainy and other adverse conditions.
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Claim 9
Regarding Claim 9, Wang in view of Halder teaches the apparatus of Claim 8, wherein:
the at least one processor is configured to identify the one or more labels using an initial machine learning model (Wang discloses "the first step in developing the drone monitoring system is to collect drone flying images and videos for the purpose of training and testing. We annotate each drone sequence with a tight bounding box around the drone. The ground truth can be used in CNN training," Section 3A, paragraphs 1-2. The initial system uses labeled data (bounding boxes) for training the model.); and
the at least one processor is configured to train or retrain the initial machine learning model using the one or more domain-specific augmented images and the one or more labels (Wang discloses "we propose to exploit a large number of synthetic drone images, which are generated by conventional image processing and 3D rendering algorithms, along with a few real 2D and 3D data to train the CNN," Section 1, paragraph 6).
Claim 10
Regarding Claim 10, Wang in view of Halder teaches the apparatus of Claim 8, wherein, to generate the one or more domain-specific augmented images, the at least one processor is configured to at least one of: modify the image to include a different amount of motion blur; and modify the image to include a different lighting condition (Wang discloses "this augmentation technique is used to simulate blurred drones caused by camera's motion and out-of-focus. To simulate drones in the shadows, we generate regular shadow maps by using random lines and irregular shadow maps via Perlin noise [28]," Section 3B, paragraph 3).
Claim 11
Regarding Claim 11, Wang in view of Halder teaches the apparatus of Claim 8, wherein, to generate the one or more domain-specific augmented images, the at least one processor is configured to apply at least one geometric transformation to the image and to the one or more labels (Wang discloses "we apply geometric transformations such as image translation, rotation and scaling," Section 3B, paragraph 3).
Claim 12
Regarding Claim 12, Wang in view of Halder teaches the apparatus of Claim 8, wherein the at least one processor is further configured to use the machine learning model to perform object detection (Wang discloses "to monitor the drones efficiently during the nighttime, we train our CNN-based thermal drone detector using infrared thermal images," Section 3C, paragraph 1).
Claim 15
Regarding Claim 15, Wang teaches a non-transitory machine-readable medium containing instructions that when executed cause at least one processor to:
obtain an image of a scene (Wang discloses "first step in developing the drone monitoring system is to collect drone flying images and videos for the purpose of training and testing," Section 3A, paragraph 1);
identify one or more labels for one or more objects captured in the image (Wang discloses "model-based drone augmentation technique that automatically generates visible drone images with a bounding box label on the drone's location," Abstract);
generate one or more domain-specific augmented images by modifying the image, the one or more domain-specific augmented images associated with the one or more labels (Wang discloses "model-based augmentation technique to acquire more training images with the ground-truth labels. Augmentation techniques include geometric transformations, illumination variation, and image quality," Section 3B, paragraphs 2-4); and
train or retrain a machine learning model using the one or more domain-specific augmented images and the one or more labels (Wang discloses "we propose to exploit a large number of synthetic drone images, which are generated by conventional image processing and 3D rendering algorithms, along with a few real 2D and 3D data to train the CNN. We develop an adversarial data augmentation technique, a modified Cycle-GAN-based generation approach, to create more thermal drone images to train the thermal drone detector," Section 1, paragraph 6).
Wang does not explicitly teach the modifying comprising adding one or more artifacts into the image to simulate a weather condition.
However, Halder teaches the modifying comprising adding one or more artifacts into the image to simulate a weather condition (Halder in Abstract discloses “to improve the robustness to rain, we present a physically based rain rendering pipeline for realistically inserting rain into clear weather images”).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wang by adding artifacts to simulate a weather condition as taught by Halder, since both references are analogous art in the field of image modification. One of ordinary skill in the art would have been motivated to combine the references because combining Wang’s image modification for domain-specific augmentation in object detection training with Halder’s weather-simulation method yields the predictable result of improved robustness to rainy and other adverse conditions.
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Claim 16
Regarding Claim 16, Wang in view of Halder teaches the non-transitory machine-readable medium of Claim 15, wherein
the instructions that when executed cause the at least one processor to identify the one or more labels comprise instructions that when executed cause the at least one processor to identify the one or more labels using an initial machine learning model (Wang discloses "the first step in developing the drone monitoring system is to collect drone flying images and videos for the purpose of training and testing. We annotate each drone sequence with a tight bounding box around the drone. The ground truth can be used in CNN training," Section 3A, paragraphs 1-2. The initial system uses labeled data (bounding boxes) for training the model.); and
the instructions that when executed cause the at least one processor to train or retrain the machine learning model comprise instructions that when executed cause the at least one processor to retrain the initial machine learning model (Wang discloses "we propose to exploit a large number of synthetic drone images, which are generated by conventional image processing and 3D rendering algorithms, along with a few real 2D and 3D data to train the CNN," Section 1, paragraph 6).
Claim 17
Regarding Claim 17, Wang in view of Halder teaches the non-transitory machine-readable medium of Claim 15, wherein the instructions that when executed cause the at least one processor to generate the one or more domain-specific augmented images comprise: instructions that when executed cause the at least one processor to at least one of: modify the image to include a different amount of motion blur; and modify the image to include a different lighting condition (Wang discloses "this augmentation technique is used to simulate blurred drones caused by camera's motion and out-of-focus. To simulate drones in the shadows, we generate regular shadow maps by using random lines and irregular shadow maps via Perlin noise [28]," Section 3B, paragraph 3).
Claim 18
Regarding Claim 18, Wang in view of Halder teaches the non-transitory machine-readable medium of Claim 15, wherein the instructions that when executed cause the at least one processor to generate the one or more domain-specific augmented images comprise: instructions that when executed cause the at least one processor to apply at least one geometric transformation to the image and to the one or more labels (Wang discloses "we apply geometric transformations such as image translation, rotation and scaling," Section 3B, paragraph 3).
Claim 19
Regarding Claim 19, Wang in view of Halder teaches the non-transitory machine-readable medium of Claim 15, further containing instructions that when executed cause the at least one processor to use the machine learning model to perform object detection (Wang discloses "to monitor the drones efficiently during the nighttime, we train our CNN-based thermal drone detector using infrared thermal images," Section 3C, paragraph 1).
Claim 22
Regarding Claim 22, Wang in view of Halder teaches the method of claim 1, wherein the weather condition comprises precipitation (Halder in Abstract discloses “to improve the robustness to rain, we present a physically based rain rendering pipeline for realistically inserting rain into clear weather images”).
Claim 23
Regarding Claim 23, Wang in view of Halder teaches the apparatus of claim 8, wherein the weather condition comprises precipitation (Halder in Abstract discloses “to improve the robustness to rain, we present a physically based rain rendering pipeline for realistically inserting rain into clear weather images”).
Claim 24
Regarding Claim 24, Wang in view of Halder teaches the non-transitory machine-readable medium of claim 15, wherein the weather condition comprises precipitation (Halder in Abstract discloses “to improve the robustness to rain, we present a physically based rain rendering pipeline for realistically inserting rain into clear weather images”).
Claims 6-7, 13-14, and 20-21 are rejected under 35 U.S.C. 103 as obvious over Wang et al. (NPL: “Towards Visible and Thermal Drone Monitoring with Convolutional Neural Networks”, hereafter referred to as Wang) in view of Micks et al. (US Patent Publication 2018/0012093 A1, hereafter referred to as Micks).
Claim 6
Regarding Claim 6, Wang teaches the method of Claim 1.
Wang does not explicitly teach that the image of the scene captures a scene around a vehicle; and that the one or more objects captured in the image comprise one or more objects around the vehicle.
However, Micks et al. teach the image of the scene captures a scene around a vehicle (Micks discloses "The camera system 110 may include one or more cameras, such as visible wavelength cameras or infrared cameras. The camera system 110 may provide a video feed or periodic images, which can be processed for object detection, road identification and positioning, or other detection or positioning. In one embodiment, the camera system 110 may include two or more cameras, which may be used to provide ranging (e.g., detecting a distance) for objects within view. In one embodiment, image processing may be used on captured camera images or video to detect vehicles, drivers, gestures, and/or body language of a driver," paragraph [29]); and
the one or more objects captured in the image comprise one or more objects around the vehicle (Micks discloses "The camera system 110 may include one or more cameras, such as visible wavelength cameras or infrared cameras. The camera system 110 may provide a video feed or periodic images, which can be processed for object detection, road identification and positioning, or other detection or positioning. In one embodiment, the camera system 110 may include two or more cameras, which may be used to provide ranging (e.g., detecting a distance) for objects within view. In one embodiment, image processing may be used on captured camera images or video to detect vehicles, drivers, gestures, and/or body language of a driver," paragraph [29]).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wang by capturing a scene around a vehicle as taught by Micks, to yield an invention that can perform object detection using augmented pseudo-labeling around a vehicle. One of ordinary skill in the art would have been motivated to combine the references since “it is extremely important that autonomous vehicles and driving assistance systems operate safely and are able to accurately navigate roads and avoid other vehicles even in situations where both autonomous vehicles and human-driven vehicles are present” (Micks, paragraph [3]).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Claim 7
Regarding Claim 7, Wang teaches the method of Claim 1.
Wang does not explicitly teach that the image of the scene captures a scene within a vehicle; and that the one or more objects captured in the image comprise one or more portions of a driver’s body.
However, Micks et al. teach the image of the scene captures a scene within a vehicle (Micks discloses "The camera system 110 may include one or more cameras, such as visible wavelength cameras or infrared cameras. The camera system 110 may provide a video feed or periodic images, which can be processed for object detection, road identification and positioning, or other detection or positioning. In one embodiment, the camera system 110 may include two or more cameras, which may be used to provide ranging (e.g., detecting a distance) for objects within view. In one embodiment, image processing may be used on captured camera images or video to detect vehicles, drivers, gestures, and/or body language of a driver," paragraph [29]); and
the one or more objects captured in the image comprise one or more portions of a driver’s body (Micks discloses "The camera system 110 may include one or more cameras, such as visible wavelength cameras or infrared cameras. The camera system 110 may provide a video feed or periodic images, which can be processed for object detection, road identification and positioning, or other detection or positioning. In one embodiment, the camera system 110 may include two or more cameras, which may be used to provide ranging (e.g., detecting a distance) for objects within view. In one embodiment, image processing may be used on captured camera images or video to detect vehicles, drivers, gestures, and/or body language of a driver," paragraph [29]).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wang by capturing a scene within a vehicle as taught by Micks, to yield an invention that can perform object detection using augmented pseudo-labeling within a vehicle. One of ordinary skill in the art would have been motivated to combine the references since “it is extremely important that autonomous vehicles and driving assistance systems operate safely and are able to accurately navigate roads and avoid other vehicles even in situations where both autonomous vehicles and human-driven vehicles are present” (Micks, paragraph [3]). For example, one would be motivated to combine the references since capturing an image within a vehicle can improve autonomous driving by analyzing human gestures to infer potential movements of a driver.
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Claim 13
Regarding Claim 13, Wang teaches the apparatus of Claim 8.
Wang does not explicitly teach that the image of the scene captures a scene around a vehicle; and that the one or more objects captured in the image comprise one or more objects around the vehicle.
However, Micks et al. teach the image of the scene captures a scene around a vehicle (Micks discloses "The camera system 110 may include one or more cameras, such as visible wavelength cameras or infrared cameras. The camera system 110 may provide a video feed or periodic images, which can be processed for object detection, road identification and positioning, or other detection or positioning. In one embodiment, the camera system 110 may include two or more cameras, which may be used to provide ranging (e.g., detecting a distance) for objects within view. In one embodiment, image processing may be used on captured camera images or video to detect vehicles, drivers, gestures, and/or body language of a driver," paragraph [29]); and
the one or more objects captured in the image comprise one or more objects around the vehicle (Micks discloses "The camera system 110 may include one or more cameras, such as visible wavelength cameras or infrared cameras. The camera system 110 may provide a video feed or periodic images, which can be processed for object detection, road identification and positioning, or other detection or positioning. In one embodiment, the camera system 110 may include two or more cameras, which may be used to provide ranging (e.g., detecting a distance) for objects within view. In one embodiment, image processing may be used on captured camera images or video to detect vehicles, drivers, gestures, and/or body language of a driver," paragraph [29]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify “Towards Visible and Thermal Drone Monitoring with Convolutional Neural Networks” as taught by Wang in view of “Predicting Vehicle Movements Based on Driver Body Language” as taught by Micks.
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wang by capturing a scene around a vehicle as taught by Micks, to yield an invention that can perform object detection using augmented pseudo-labeling around a vehicle. One of ordinary skill in the art would have been motivated to combine the references since “it is extremely important that autonomous vehicles and driving assistance systems operate safely and are able to accurately navigate roads and avoid other vehicles even in situations where both autonomous vehicles and human-driven vehicles are present” (Micks, paragraph [3]).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Claim 14
Regarding Claim 14, Wang teaches the apparatus of Claim 8.
Wang does not explicitly teach that the image of the scene captures a scene within a vehicle; and that the one or more objects captured in the image comprise one or more portions of a driver’s body.
However, Micks et al. teach the image of the scene captures a scene within a vehicle (Micks discloses “The camera system 110 may include one or more cameras, such as visible wavelength cameras or infrared cameras. The camera system 110 may provide a video feed or periodic images, which can be processed for object detection, road identification and positioning, or other detection or positioning. In one embodiment, the camera system 110 may include two or more cameras, which may be used to provide ranging (e.g., detecting a distance) for objects within view. In one embodiment, image processing may be used on captured camera images or video to detect vehicles, drivers, gestures, and/or body language of a driver," paragraph [29]); and
the one or more objects captured in the image comprise one or more portions of a driver’s body (Micks discloses "The camera system 110 may include one or more cameras, such as visible wavelength cameras or infrared cameras. The camera system 110 may provide a video feed or periodic images, which can be processed for object detection, road identification and positioning, or other detection or positioning. In one embodiment, the camera system 110 may include two or more cameras, which may be used to provide ranging (e.g., detecting a distance) for objects within view. In one embodiment, image processing may be used on captured camera images or video to detect vehicles, drivers, gestures, and/or body language of a driver," paragraph [29]).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wang by capturing a scene within a vehicle as taught by Micks, to yield an invention that can perform object detection using augmented pseudo-labeling within a vehicle. One of ordinary skill in the art would have been motivated to combine the references since “it is extremely important that autonomous vehicles and driving assistance systems operate safely and are able to accurately navigate roads and avoid other vehicles even in situations where both autonomous vehicles and human-driven vehicles are present” (Micks, paragraph [3]). For example, one would be motivated to combine the references since capturing an image within a vehicle can improve autonomous driving by analyzing human gestures to infer potential movements of a driver.
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Claim 20
Regarding Claim 20, Wang et al. teach the non-transitory machine-readable medium of Claim 15, wherein:
Wang et al. do not explicitly teach that the image of the scene captures a scene around a vehicle, or that the one or more objects captured in the image comprise one or more objects around the vehicle.
However, Micks et al. teach the image of the scene captures a scene around a vehicle (Micks discloses "The camera system 110 may include one or more cameras, such as visible wavelength cameras or infrared cameras. The camera system 110 may provide a video feed or periodic images, which can be processed for object detection, road identification and positioning, or other detection or positioning. In one embodiment, the camera system 110 may include two or more cameras, which may be used to provide ranging (e.g., detecting a distance) for objects within view. In one embodiment, image processing may be used on captured camera images or video to detect vehicles, drivers, gestures, and/or body language of a driver," paragraph [29]); and
the one or more objects captured in the image comprise one or more objects around the vehicle (Micks discloses that “[t]he camera system 110 may provide a video feed or periodic images, which can be processed for object detection, road identification and positioning, or other detection or positioning,” and that “image processing may be used on captured camera images or video to detect vehicles, drivers, gestures, and/or body language of a driver,” paragraph [29]).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wang by capturing a scene around a vehicle, as taught by Micks, to arrive at an invention that can perform object detection using augmented pseudo-labeling around a vehicle. One of ordinary skill in the art would have been motivated to combine the references because “it is extremely important that autonomous vehicles and driving assistance systems operate safely and are able to accurately navigate roads and avoid other vehicles even in situations where both autonomous vehicles and human-driven vehicles are present” (Micks, paragraph [3]).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Claim 21
Regarding Claim 21, Wang et al. teach the non-transitory machine-readable medium of Claim 15, wherein:
Wang et al. do not explicitly teach that the image of the scene captures a scene within a vehicle, or that the one or more objects captured in the image comprise one or more portions of a driver’s body.
However, Micks et al. teach the image of the scene captures a scene within a vehicle (Micks discloses "The camera system 110 may include one or more cameras, such as visible wavelength cameras or infrared cameras. The camera system 110 may provide a video feed or periodic images, which can be processed for object detection, road identification and positioning, or other detection or positioning. In one embodiment, the camera system 110 may include two or more cameras, which may be used to provide ranging (e.g., detecting a distance) for objects within view. In one embodiment, image processing may be used on captured camera images or video to detect vehicles, drivers, gestures, and/or body language of a driver," paragraph [29]); and
the one or more objects captured in the image comprise one or more portions of a driver’s body (Micks discloses that “image processing may be used on captured camera images or video to detect vehicles, drivers, gestures, and/or body language of a driver,” paragraph [29]).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wang by capturing a scene within a vehicle, as taught by Micks, to arrive at an invention that can perform object detection using augmented pseudo-labeling within a vehicle. One of ordinary skill in the art would have been motivated to combine the references because “it is extremely important that autonomous vehicles and driving assistance systems operate safely and are able to accurately navigate roads and avoid other vehicles even in situations where both autonomous vehicles and human-driven vehicles are present” (Micks, paragraph [3]). For example, one would have been motivated to combine the references because capturing an image within a vehicle can improve autonomous driving by analyzing human gestures to infer potential movements of a driver.
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JUSTIN P CASCAIS whose telephone number is (703)756-5576. The examiner can normally be reached Monday-Friday 8:00-4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mr. O’Neal Mistry, can be reached at (313) 446-4912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.P.C./Examiner, Art Unit 2674
/ONEAL R MISTRY/Supervisory Patent Examiner, Art Unit 2674
Date: 2/6/2026