Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendment filed 1/28/2026 has been entered. Claims 1-7 and 9-20 are pending.
Response to Arguments
Applicant’s arguments, see ‘Rejections Under 35 USC 102 and 103,’ paragraphs 1-3, filed 1/28/2026, with respect to the rejections of claim 1 and similar claims 15 and 20 under 35 USC 102 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Das (US 20210237761 A1).
Applicant's arguments filed 1/28/2026 have been fully considered but they are not persuasive. The Applicant argues that “The other cited references do not remedy the deficiencies of Godsey.” The Examiner respectfully disagrees. Das discloses segmentation data and also discloses a bird’s eye view and grids, as described further below.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3, 4, 5, 6, 7, 10, 12, 13, 14, 15, 16, 18, 19, 20 are rejected under 35 U.S.C. 103 as being unpatentable over Godsey (US 20210155157) in view of Das (US 20210237761 A1).
Regarding claim 1 Godsey discloses
A computer (Paragraph 0019, “The sensor fusion system 300 also includes sensor driver 314 for implementation of control from sensor fusion central control 312 and optimization module 316. This allows the sensor fusion system 300 to use information from one sensor to enable or control the operation of another sensor. The optimization module 316 may be a machine learning module or may operate on specific algorithms” where the sensor fusion system is tantamount to a computer) implemented method for displaying information to an occupant of a vehicle, the method comprising the following steps carried out by computer hardware components: determining data associated with radar responses captured by at least one radar sensor mounted on the vehicle (Paragraph 0020, “The rear sensor system 450 sends radar signals in the areas 404, scanning vertically. The radar signals may also scan horizontally. The radar signals are designed to detect objects behind the vehicle 402”; Paragraph 0012, "The camera module 160 captures images of the RCFV The object detection radar unit 152 complements the camera module 160 by detecting objects within the RCFV 102 and providing the location of detected objects as an overlay to the camera information that is presented on a rear camera display 162 located in the vehicle"; Figure 4 elements 420, 430, 440); determining a visualization of the data (Paragraph 0020, “The rear sensor system 450 detects the objects, identifies location and overlays target alerts 442, 444 to the video image”); and displaying the visualization to the occupant of the vehicle (Paragraph 0020, “The rear sensor system 450 detects the objects, identifies location and overlays target alerts 442, 444 to the video image”). Godsey does not disclose wherein the data comprises segmentation data based on the radar responses, wherein the segmentation data comprises a bird’s eye view grid map.
Das discloses
Wherein the data comprises segmentation data based on the radar responses, wherein the segmentation data comprises a bird’s eye view grid map (Paragraph 0051, "The perception component 228 may comprise one or more perception pipelines that may detect object(s) in in an environment surrounding the vehicle 202 (e.g., identify that an object exists), classify the object(s) (e.g., determine an object type associated with a detected object), segment sensor data and/or other representations of the environment (e.g., identify a portion of the sensor data and/or representation of the environment as being associated with a detected object and/or an object type)"; Paragraph 0014, “one pipeline may generate an object detection associated with an object and another pipeline may not generate a detection at all. For example, a radar or lidar pipeline may detect an object but the object may be occluded from a camera's field of view”; Paragraph 0013, "In some examples, the regions of interest discussed herein may be a three-dimensional ROI and/or a two-dimensional ROI (e.g., a top-down/bird's eye perspective of the ROI)"; Paragraph 0015, "For example, the aggregated data may comprise a lidar, vision, and/or radar occupancy grid").
Godsey discloses image data but not segmenting the data. Segmenting the data can be advantageous in that it facilitates classifying objects, differentiating between objects, and tracking objects that are partially obscured. A bird’s eye view is advantageous in that it can make parking easier and reduce blind spots. As such, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify Godsey with Das to add segmentation to facilitate classifying, differentiating, and tracking objects, and to add a bird’s eye view to improve parking and reduce blind spots.
Regarding claim 3 the combination of Godsey and Das discloses
The computer implemented method of claim 1. Godsey further discloses further comprising the following step carried out by the computer hardware components: determining a trigger based on a driving situation (Paragraph 0024, “If the detection score is greater than the detection threshold, 712, then the system may apply a secondary detection criteria to the area. This criteria may identify weather conditions, the time of day, or other parameters, which are evaluated to see that they are as specified”); wherein the visualization is determined based on the trigger (Paragraph 0024, “For example, the camera data may be weighted higher on a sunny day than on a cloudy night. The weighting helps the system to determine the accuracy and importance of the camera data over other sensors. If the detection criteria is not violated, 718, then processing returns to continue monitoring, 702. If the detection score is greater than the detection threshold, 712, then the system applies an alert to the area of the object ton the display. This may be a box surrounding the area, an audible alarm, a flashing alarm and so forth”).
Regarding claim 4 the combination of Godsey and Das discloses
The computer implemented method of claim 3. Godsey further discloses wherein the driving situation comprises at least one of a fog situation, a rain situation, a snow situation, a traffic situation, a traffic jam situation, a darkness situation, or a situation related to other road users (Paragraph 0024, “If the detection score is greater than the detection threshold, 712, then the system may apply a secondary detection criteria to the area. This criteria may identify weather conditions, the time of day, or other parameters, which are evaluated to see that they are as specified”).
Regarding claim 5 the combination of Godsey and Das discloses
The computer implemented method of claim 3. Godsey further discloses wherein the trigger is determined based on at least one of a camera (Paragraph 0024, “This criteria may identify weather conditions, the time of day, or other parameters, which are evaluated to see that they are as specified. For example, the camera data may be weighted higher on a sunny day than on a cloudy night. The weighting helps the system to determine the accuracy and importance of the camera data over other sensors. If the detection criteria is not violated, 718, then processing returns to continue monitoring, 702. If the detection score is greater than the detection threshold, 712, then the system applies an alert to the area of the object ton the display”), a rain sensor, vehicle to vehicle communication, a weather forecast, a clock, a light sensor, a navigation system, or an infrastructure to vehicle communication.
Regarding claim 6 the combination of Godsey and Das discloses
The computer implemented method of claim 1. Godsey further discloses wherein the visualization comprises information of a navigation system (Paragraph 0013, “a camera overlay module 156 for providing the location information for overlay on the image presented to the driver on the display 162, and a camera interface unit 158”; Paragraph 0024, “This criteria may identify weather conditions, the time of day, or other parameters, which are evaluated to see that they are as specified. For example, the camera data may be weighted higher on a sunny day than on a cloudy night. The weighting helps the system to determine the accuracy and importance of the camera data over other sensors. If the detection criteria is not violated, 718, then processing returns to continue monitoring, 702. If the detection score is greater than the detection threshold, 712, then the system applies an alert to the area of the object ton the display” where, for example, an alert can tell the driver to not go in a certain direction due to traffic or a flood).
Regarding claim 7 the combination of Godsey and Das discloses
The computer implemented method of claim 1. Godsey further discloses wherein the data comprises object information based on the radar responses (Paragraph 0015, “In the scenario of FIG. 2, a child 202 is the radar target riding behind vehicle 100. The object detection radar 152 detects the target and target information, which includes the range to the target; in some embodiments the target information includes the Radar Cross Section (RCS) size, velocity, and other parameters”).
Regarding claim 10 the combination of Godsey and Das discloses
The computer implemented method of claim 1. Godsey does not disclose wherein the data comprises classification data based on the radar responses.
Das discloses
Wherein the data comprises classification data based on the radar responses (Paragraph 0051, "The perception component 228 may comprise one or more perception pipelines that may detect object(s) in in an environment surrounding the vehicle 202 (e.g., identify that an object exists), classify the object(s) (e.g., determine an object type associated with a detected object), segment sensor data and/or other representations of the environment (e.g., identify a portion of the sensor data and/or representation of the environment as being associated with a detected object and/or an object type)"; Paragraph 0014, “one pipeline may generate an object detection associated with an object and another pipeline may not generate a detection at all. For example, a radar or lidar pipeline may detect an object but the object may be occluded from a camera's field of view”).
Godsey and Das are analogous art as they both disclose displaying information to a vehicle occupant. Godsey discloses image data but not classifying objects. Adding a classification step can be advantageous in that it facilitates tracking objects and confirming that an object is detected to mitigate false positives. As such, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify Godsey with Das to add classification to facilitate object tracking and reduce false positives.
Regarding claim 12 the combination of Godsey and Das discloses
The computer implemented method of claim 1. Godsey further discloses wherein the visualization comprises a driver alert (Paragraph 0020, “The rear sensor system 450 detects the objects, identifies location and overlays target alerts 442, 444 to the video image”).
Regarding claim 13 the combination of Godsey and Das discloses
The computer implemented method of claim 1. Godsey does not disclose wherein the visualization is displayed in an augmented reality display.
Das discloses
Wherein the visualization is displayed in an augmented reality display (Paragraph 0012, "For example, a vision pipeline 302 may output an environment representation 308 based at least in part on vision data 310 (e.g., sensor data comprising one or more RGB images, thermal images)"; Paragraph 0024, "For example, the techniques discussed herein may be applied to mining, manufacturing, augmented reality, etc.").
Godsey and Das are analogous art as they both disclose displaying information to a vehicle occupant. Godsey discloses displaying image data but not using augmented reality (AR). Using augmented reality with a navigation display can be advantageous in that it can provide an improved passenger experience. Augmented reality is versatile enough to point out particular features of the road rather than issuing a generic turn-right command; for example, the AR can indicate a right turn while also highlighting an obstacle that the passenger may not see due to natural camouflage. Additionally, the use of AR can consolidate all of the relevant driving information on one display simultaneously instead of the passenger needing to look at multiple sources (e.g., a camera view and driving instructions). As such, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify Godsey with Das to add augmented reality to the display to mark specific features of the environment and consolidate all driving information in one display.
Regarding claim 14 the combination of Godsey and Das discloses
The computer implemented method of claim 1. Godsey further discloses wherein the visualization is determined based on combining the data with other sensor data (Paragraph 0012, "The camera module 160 captures images of the RCFV The object detection radar unit 152 complements the camera module 160 by detecting objects within the RCFV 102 and providing the location of detected objects as an overlay to the camera information that is presented on a rear camera display 162 located in the vehicle").
Regarding claim 15 Godsey discloses
A computer system comprising a plurality of computer hardware components (Paragraph 0019, “The sensor fusion system 300 also includes sensor driver 314 for implementation of control from sensor fusion central control 312 and optimization module 316. This allows the sensor fusion system 300 to use information from one sensor to enable or control the operation of another sensor. The optimization module 316 may be a machine learning module or may operate on specific algorithms” where the sensor fusion system is tantamount to a computer; Paragraph 0012, "The camera module 160 captures images of the RCFV The object detection radar unit 152 complements the camera module 160 by detecting objects within the RCFV 102 and providing the location of detected objects as an overlay to the camera information that is presented on a rear camera display 162 located in the vehicle"; Paragraph 0030, "Moreover, the separation of various system components in the aspects described above should not be understood as requiring such separation in all aspects, and it should be understood that the described program components and systems can generally be integrated together in a single hardware product or packaged into multiple hardware products") configured to perform a computer implemented method for displaying information to an occupant of a vehicle, the method comprising the following steps carried out by the plurality of computer hardware components: determining data associated with radar responses captured by at least one radar sensor mounted on the vehicle (Paragraph 0020, “The rear sensor system 450 sends radar signals in the areas 404, scanning vertically. The radar signals may also scan horizontally. The radar signals are designed to detect objects behind the vehicle 402”; Paragraph 0012, "The camera module 160 captures images of the RCFV The object detection radar unit 152 complements the camera module 160 by detecting objects within the RCFV 102 and providing the location of detected objects as an overlay to the camera information that is presented on a rear camera display 162 located in the vehicle"; Figure 4 elements 420, 430, 440); determining a visualization of the data (Paragraph 0020, “The rear sensor system 450 detects the objects, identifies location and overlays target alerts 442, 444 to the video image”); and displaying the visualization to the occupant of the vehicle (Paragraph 0020, “The rear sensor system 450 detects the objects, identifies location and overlays target alerts 442, 444 to the video image”). Godsey does not disclose wherein the data comprises segmentation data based on the radar responses, wherein the segmentation data comprises a bird’s eye view grid map.
Das discloses
Wherein the data comprises segmentation data based on the radar responses, wherein the segmentation data comprises a bird’s eye view grid map (Paragraph 0051, "The perception component 228 may comprise one or more perception pipelines that may detect object(s) in in an environment surrounding the vehicle 202 (e.g., identify that an object exists), classify the object(s) (e.g., determine an object type associated with a detected object), segment sensor data and/or other representations of the environment (e.g., identify a portion of the sensor data and/or representation of the environment as being associated with a detected object and/or an object type)"; Paragraph 0014, “one pipeline may generate an object detection associated with an object and another pipeline may not generate a detection at all. For example, a radar or lidar pipeline may detect an object but the object may be occluded from a camera's field of view”; Paragraph 0013, "In some examples, the regions of interest discussed herein may be a three-dimensional ROI and/or a two-dimensional ROI (e.g., a top-down/bird's eye perspective of the ROI)"; Paragraph 0015, "For example, the aggregated data may comprise a lidar, vision, and/or radar occupancy grid").
Godsey discloses image data but not segmenting the data. Segmenting the data can be advantageous in that it facilitates classifying objects, differentiating between objects, and tracking objects that are partially obscured. A bird’s eye view is advantageous in that it can make parking easier and reduce blind spots. As such, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify Godsey with Das to add segmentation to facilitate classifying, differentiating, and tracking objects, and to add a bird’s eye view to improve parking and reduce blind spots.
Regarding claim 16 the combination of Godsey and Das discloses
A vehicle comprising the computer system of claim 15. Godsey further discloses and the at least one radar sensor (Paragraph 0020, “The rear sensor system 450 sends radar signals in the areas 404, scanning vertically. The radar signals may also scan horizontally. The radar signals are designed to detect objects behind the vehicle 402”; Figure 1 element 170).
Regarding claim 18 the combination of Godsey and Das discloses
The vehicle of claim 16. Godsey further discloses wherein the visualization comprises information of a navigation system (Paragraph 0013, “a camera overlay module 156 for providing the location information for overlay on the image presented to the driver on the display 162, and a camera interface unit 158”; Paragraph 0024, “This criteria may identify weather conditions, the time of day, or other parameters, which are evaluated to see that they are as specified. For example, the camera data may be weighted higher on a sunny day than on a cloudy night. The weighting helps the system to determine the accuracy and importance of the camera data over other sensors. If the detection criteria is not violated, 718, then processing returns to continue monitoring, 702. If the detection score is greater than the detection threshold, 712, then the system applies an alert to the area of the object ton the display” where, for example, an alert can tell the driver to not go in a certain direction due to traffic or a flood).
Regarding claim 19 the combination of Godsey and Das discloses
The vehicle of claim 16. Godsey does not disclose wherein the visualization is displayed in an augmented reality display.
Das discloses
Wherein the visualization is displayed in an augmented reality display (Paragraph 0012, "For example, a vision pipeline 302 may output an environment representation 308 based at least in part on vision data 310 (e.g., sensor data comprising one or more RGB images, thermal images)"; Paragraph 0024, "For example, the techniques discussed herein may be applied to mining, manufacturing, augmented reality, etc.").
Godsey and Das are analogous art as they both disclose displaying information to a vehicle occupant. Godsey discloses displaying image data but not using augmented reality (AR). Using augmented reality with a navigation display can be advantageous in that it can provide an improved passenger experience. Augmented reality is versatile enough to point out particular features of the road rather than issuing a generic turn-right command; for example, the AR can indicate a right turn while also highlighting an obstacle that the passenger may not see due to natural camouflage. Additionally, the use of AR can consolidate all of the relevant driving information on one display simultaneously instead of the passenger needing to look at multiple sources (e.g., a camera view and driving instructions). As such, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify Godsey with Das to add augmented reality to the display to mark specific features of the environment and consolidate all driving information in one display.
Regarding claim 20 Godsey discloses
A non-transitory computer readable medium storing computer-executable instructions that (Paragraph 0019, “The sensor fusion system 300 also includes a communication module, vehicle interface and control 304, and system memory 306.”), when executed by a processor, cause the processor to perform a method for (Paragraph 0019, “The sensor fusion system 300 also includes sensor driver 314 for implementation of control from sensor fusion central control 312 and optimization module 316. This allows the sensor fusion system 300 to use information from one sensor to enable or control the operation of another sensor. The optimization module 316 may be a machine learning module or may operate on specific algorithms” where the sensor fusion system or optimization module is tantamount to a processor) displaying information to an occupant of a vehicle, the method comprising: determining data associated with radar responses captured by at least one radar sensor mounted on the vehicle (Paragraph 0020, “The rear sensor system 450 sends radar signals in the areas 404, scanning vertically. The radar signals may also scan horizontally. The radar signals are designed to detect objects behind the vehicle 402”; Paragraph 0012, "The camera module 160 captures images of the RCFV The object detection radar unit 152 complements the camera module 160 by detecting objects within the RCFV 102 and providing the location of detected objects as an overlay to the camera information that is presented on a rear camera display 162 located in the vehicle"; Figure 4 elements 420, 430, 440); determining a visualization of the data (Paragraph 0020, “The rear sensor system 450 detects the objects, identifies location and overlays target alerts 442, 444 to the video image”); and displaying the visualization to the occupant of the vehicle (Paragraph 0020, “The rear sensor system 450 detects the objects, identifies location and overlays target alerts 442, 444 to the video image”). 
Godsey does not disclose wherein the data comprises segmentation data based on the radar responses, wherein the segmentation data comprises a bird’s eye view grid map.
Das discloses
Wherein the data comprises segmentation data based on the radar responses, wherein the segmentation data comprises a bird’s eye view grid map (Paragraph 0051, "The perception component 228 may comprise one or more perception pipelines that may detect object(s) in in an environment surrounding the vehicle 202 (e.g., identify that an object exists), classify the object(s) (e.g., determine an object type associated with a detected object), segment sensor data and/or other representations of the environment (e.g., identify a portion of the sensor data and/or representation of the environment as being associated with a detected object and/or an object type)"; Paragraph 0014, “one pipeline may generate an object detection associated with an object and another pipeline may not generate a detection at all. For example, a radar or lidar pipeline may detect an object but the object may be occluded from a camera's field of view”; Paragraph 0013, "In some examples, the regions of interest discussed herein may be a three-dimensional ROI and/or a two-dimensional ROI (e.g., a top-down/bird's eye perspective of the ROI)"; Paragraph 0015, "For example, the aggregated data may comprise a lidar, vision, and/or radar occupancy grid").
Godsey discloses image data but not segmenting the data. Segmenting the data can be advantageous in that it facilitates classifying objects, differentiating between objects, and tracking objects that are partially obscured. A bird’s eye view is advantageous in that it can make parking easier and reduce blind spots. As such, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify Godsey with Das to add segmentation to facilitate classifying, differentiating, and tracking objects, and to add a bird’s eye view to improve parking and reduce blind spots.
Claims 2, 17 are rejected under 35 U.S.C. 103 as being unpatentable over Godsey (US 20210155157) in view of Das (US 20210237761 A1) further in view of Kasarla (US 20220289175 A1).
Regarding claim 2 the combination of Godsey and Das discloses
The computer implemented method of claim 1. The combination of Godsey and Das does not disclose wherein the visualization comprises a surround view of a surrounding of the vehicle.
Kasarla discloses
Wherein the visualization comprises a surround view of a surrounding of the vehicle (Paragraph 0011, “Optionally, the vision system may provide display, such as a rearview display or a top down or bird's eye or surround view display or the like”).
Godsey and Kasarla are analogous art as they both disclose displaying information to a vehicle occupant. Godsey discloses displaying the vehicle’s external environment, but it does not disclose displaying a surround view. A surround view would be advantageous for simultaneously seeing multiple obstacles around the vehicle from multiple directions (e.g., a child behind the vehicle and another on the passenger side). As such, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify Godsey with Kasarla to add a surround view to facilitate a simultaneous view of obstacles from multiple directions.
Regarding claim 17 the combination of Godsey and Das discloses
The vehicle of claim 16. The combination of Godsey and Das does not disclose wherein the visualization comprises a surround view of a surrounding of the vehicle.
Kasarla discloses
Wherein the visualization comprises a surround view of a surrounding of the vehicle (Paragraph 0011, “Optionally, the vision system may provide display, such as a rearview display or a top down or bird's eye or surround view display or the like”).
Godsey and Kasarla are analogous art as they both disclose displaying information to a vehicle occupant. Godsey discloses displaying the vehicle’s external environment, but it does not disclose displaying a surround view. A surround view would be advantageous for simultaneously seeing multiple obstacles around the vehicle from multiple directions (e.g., a child behind the vehicle and another on the passenger side). As such, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify Godsey with Kasarla to add a surround view to facilitate a simultaneous view of obstacles from multiple directions.
Claims 9, 11 are rejected under 35 U.S.C. 103 as being unpatentable over Godsey (US 20210155157) in view of Das (US 20210237761 A1) further in view of Goh (US 11373411 B1).
Regarding claim 9 the combination of Godsey and Das discloses
The computer implemented method of claim 1. The combination of Godsey and Das does not disclose further comprising the following step carried out by the computer hardware components: determining a height of an object based on the classification.
Goh discloses
Further comprising the following step carried out by the computer hardware components: determining a height of an object based on the classification (Column 6 lines 54-59, "A vertical dimension of the two-dimensional image annotation (e.g., in pixels relative to the image) can be utilized to estimate the distance of the object from the location where the two-dimensional image 422 was captured based on an assumed height of the object (e.g., based on the vehicle classification 324)").
Godsey and Goh are analogous art as they both disclose displaying information to a vehicle occupant. Godsey discloses image data and using machine learning, but Godsey does not specifically mention a target height nor a target height determined through classification. Height information is useful in determining whether an object is a potential collision risk (e.g., a highway sign above the road). Using the classification to determine the height can simplify or reduce the computational load: if the target is already classified through some computation (such as machine learning), it is computationally easier to assume a height based on that classification than to perform another computation. Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify Godsey with Goh to assume a height based on classification to ease the computational burden of the device.
Regarding claim 11 the combination of Godsey and Das discloses
The computer implemented method of claim 10. The combination of Godsey and Das does not disclose further comprising the following step carried out by the computer hardware components: determining a height of an object based on the classification.
Goh discloses
Further comprising the following step carried out by the computer hardware components: determining a height of an object based on the classification (Column 6 lines 54-59, "A vertical dimension of the two-dimensional image annotation (e.g., in pixels relative to the image) can be utilized to estimate the distance of the object from the location where the two-dimensional image 422 was captured based on an assumed height of the object (e.g., based on the vehicle classification 324)").
Godsey and Goh are analogous art as they both disclose displaying information to a vehicle occupant. Godsey discloses image data and using machine learning, but Godsey does not specifically mention a target height nor a target height determined through classification. Height information is useful in determining whether an object is a potential collision risk (e.g., a highway sign above the road). Using the classification to determine the height can simplify or reduce the computational load: if the target is already classified through some computation (such as machine learning), it is computationally easier to assume a height based on that classification than to perform another computation. Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify Godsey with Goh to assume a height based on classification to ease the computational burden of the device.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PETER D DOZE whose telephone number is (571)272-0392. The examiner can normally be reached Monday-Friday 9:00am - 6:00pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Resha Desai, can be reached at (571) 270-7792. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PETER DAVON DOZE/
Examiner, Art Unit 3648
/RESHA DESAI/Supervisory Patent Examiner, Art Unit 3648