DETAILED ACTION
Response to Arguments
Applicants' arguments filed 02/06/2025 have been fully considered but they are not persuasive. With respect to the rejection under 35 U.S.C. 103 based on Bosch as modified by Cherukuri, the Applicant states that the prior art fails to disclose “a non-stereo camera system that captures image data representing the target object located in the environment surrounding the vehicle, wherein the non-stereo camera system does not provide accurate depth estimation of the target object by itself,” as recited by amended independent claim 1. The Examiner respectfully disagrees and maintains the art rejection.
The Applicant purports that Bosch fails to disclose the claimed “non-stereo camera,” citing that the depth camera of Bosch is not a “non-stereo camera.” In response, the Examiner respectfully reminds the Applicant that claim 1 does not withhold or exclude the additional use of a stereo camera; it simply positively recites a “non-stereo camera system.” As the claim does not exclude the use of a stereo camera, it does not exclude the use of both a “non-stereo camera system” and a “stereo camera.” Bosch discloses in paragraph [0016] that the “depth camera” may be an RGB-D camera, specifically citing the use of an Intel RealSense D435i. NPL document D400 (RealSense-D400-Series-Datasheet-Mar-2026) discloses in Table 2-2, “Depth Camera Products SKU Descriptions,” that the Intel RealSense D435i depth camera comprises a plurality of sensors, including an RGB color sensor. RGB color sensors are a type of “non-stereo” camera. As supported by NPL document D400, it is well known to one of ordinary skill in the art that RGB-D cameras comprise a plurality of sensors, including “non-stereo” RGB cameras. Although Bosch discloses the use of a stereo camera, the Examiner maintains that the prior art also discloses a “non-stereo” camera. As claim 1 does not withhold or exclude the additional use of a stereo camera, the Examiner maintains the art rejection.
The Applicant amends the independent claims to include the new limitation “wherein the non-stereo camera system does not provide accurate depth estimation of the target object by itself.” With respect to this newly amended limitation, the Applicant purports that “the depth camera taught by Bosch does not teach a non-stereo camera system that ‘does not provide accurate depth perception’ as recited by claim 1.” In response, the Examiner respectfully points to the explicit language of the newly amended limitation. The limitation does not merely recite that the non-stereo camera “does not provide accurate depth perception,” as purported by the Applicant; it instead states “does not provide accurate depth estimation of the target object by itself.” The broadest reasonable interpretation of this limitation indicates that “accurate depth estimation” is provided by more than one source, including the “non-stereo camera system.” The limitation does not further limit what additional sources provide the “accurate depth estimation,” only that the “non-stereo camera system” does not solely provide this estimation. Paragraph [0029] of Bosch discloses using UWB measurements and data fusion techniques to improve a system's visual odometry. As visual odometry is the process of determining the position and orientation of a system with respect to the system's environment, depth estimation is a necessary component of visual odometry. Bosch therefore discloses providing “accurate depth estimation” with more than one source, including the use of a non-stereo camera.
It is further noted that the above newly amended limitation does not withhold or exclude the additional use of a stereo camera. The Examiner maintains that Bosch discloses “a non-stereo camera system that captures image data representing the target object located in the environment surrounding the vehicle, wherein the non-stereo camera system does not provide accurate depth estimation of the target object by itself.”
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claim 1 and similarly claims 17 and 20, it is not clear what is encompassed and meant by the limitation “wherein the non-stereo camera system does not provide accurate depth estimation of the target object by itself.” As claimed, the term “by itself” is excessively broad in nature, and the metes and bounds of the claimed term cannot be ascertained by one skilled in the art. As claimed, the term “by itself” implies that multiple systems contribute to “accurate depth estimation of the target object;” however, it is not explicitly clear what is providing the “accurate depth estimation.” It is unclear whether the limitation is intended to mean that depth estimation is provided by multiple subsystems within the broadly claimed “non-stereo camera system,” or provided by separate systems such as UWB radar systems. Review of the disclosure reveals in figure 2 that the depth and angle module 52 exclusively receives data from camera 40. Paragraph [0032] of the specification states “Although a non-stereo camera system 24 is described, it is to be appreciated that a stereo camera system may be used as well” and paragraph [0033] states “It is to be appreciated that although a single camera 40 is illustrated in FIG. 1, in embodiments the vehicle 10 may include more than one camera 40 as well.” The disclosure suggests that “accurate depth estimation” is provided by multiple cameras, including either stereo or non-stereo cameras; however, the Examiner cannot identify explicit clarification of the term “by itself” or of what additional systems the above limitation is referencing. It is suggested that the Applicant amend the claims to be consistent with the disclosure. For examination purposes, the above limitation will be generally interpreted to mean that “accurate depth estimation” is provided by more than one source, including the “non-stereo camera system.”
Claims 2-16 and 18-19 are also rejected based on their dependency on the defective parent claim(s).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-7 and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over BOSCH(DE112020007655T5) in view of Cherukuri(US20240012090A1).
Regarding claim 1, BOSCH discloses
An object detection system for a vehicle that estimates a location of a target object located in an environment (“ the data processing unit 140 is further designed to generate at least one of the absolute position information, the trajectory information, an extent of an object and/or its movement in the environment” [0018]) surrounding the vehicle (“The mobile device can be a vehicle” [0013]), the object detection system comprising: an ultra-wide band (UWB) sensor network (“The UWB system, including UWB tags and UWB anchors”[0029]) including three or more anchors(“three UWB anchors” [0028]) […] that are in wireless communication with a tag mounted to the target object (“at least one ultra-wideband (UWB) tag attached to the mobile device” [0028]), wherein each anchor sends and receives sensor signals that indicate real-time distances between each anchor and the tag (“determine the absolute position information about the UWB tag based on time lengths from the transmission of the UWB signal to the reception of the UWB signal by the plurality of UWB anchors” [0028]); a non-stereo camera system that captures image data representing the target object located in the environment surrounding the vehicle (“The depth camera can be an RGB-D camera e.g. B. 
Intel RealSense D435i” [0016]” & “ the imaging unit 110 includes a depth camera designed to obtain image stream data about the position and orientation of an object in the environment” [0016]) wherein the non-stereo camera system does not provide accurate depth estimation of the target object by itself (“The UWB system, including UWB tags and UWB anchors, can provide absolute metric position information via UWB tags, thus advantageously balancing the scale problem of visual hodometry and IMU by integrating this absolute metric position information into data fusion to improve localization precision.” [0029]); and one or more controllers in electronic communication with the UWB sensor network and the non-stereo camera system (FIG.1, Part 140), wherein the one or more controllers includes one or more processors that execute instructions to: estimate a camera-based location of the target object based on the image data (“a depth camera designed to obtain image stream data about the position and orientation of an object” [0016]), wherein the camera-based location is adjusted to account for a calibrated camera estimated depth determined during an initial calibration procedure ("SLAM algorithm is used to accumulate 3D points recorded by the RGB-D camera during the SLAM procedure” [0020]); estimate a UWB-based location of the target object by executing one or more range-based localization algorithms that analyze the sensor signals (“The UWB system, including UWB tags and UWB anchors, can provide absolute metric position information via UWB tags” [0029]); and
fuse together the camera-based location of the target object and the UWB-based location of the target object by a Bayesian filter to estimate the location of the target object (“data fusion is performed using filter-based data fusion algorithms, with at least Kalman filters “ [0022]).
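The filter-based fusion cited above (Kalman filtering of camera-based and UWB-based estimates, Bosch [0022], [0029]) can be illustrated with a minimal scalar sketch. All numeric values (prior, measurements, variances) below are hypothetical and are included only to show how an inaccurate camera-based depth and an accurate UWB-based depth combine into one estimate:

```python
def kalman_update(x, p, z, r):
    """Scalar Kalman measurement update: fuse estimate (x, p) with measurement (z, r)."""
    k = p / (p + r)          # Kalman gain: how much to trust the new measurement
    x_new = x + k * (z - x)  # corrected depth estimate
    p_new = (1.0 - k) * p    # corrected variance
    return x_new, p_new

# Prior belief about the target's depth (metres), with large uncertainty.
depth, var = 0.0, 100.0
# Camera-based depth: available but inaccurate by itself (large variance).
depth, var = kalman_update(depth, var, z=5.6, r=4.0)
# UWB-based depth: absolute metric ranging, much smaller variance.
depth, var = kalman_update(depth, var, z=5.0, r=0.1)
print(round(depth, 2))  # fused estimate is pulled toward the accurate UWB range
```

The fused estimate ends up dominated by the lower-variance UWB measurement, which mirrors the examiner's interpretation that the non-stereo camera does not provide accurate depth estimation by itself.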
Bosch discloses a combined UWB and non-stereo camera system, including the use of anchors and tags to locate a target object. Bosch does not explicitly disclose that the three or more anchors are mounted to the vehicle. Cherukuri teaches in the same field of endeavor of UWB network positioning. Cherukuri discloses anchors mounted to the vehicle that are in wireless communication with a tag mounted to the target object (“the UWB anchors are mounted on a vehicle” [0042]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bosch with the teachings of Cherukuri to incorporate anchors mounted to the vehicle so as to gain the advantage of improving reception range. Further, it has been held that if a technique has been used to improve one device, and a person of ordinary skill in the art would recognize that it would improve similar devices in the same way, using the technique is obvious unless its actual application is beyond his or her skill (MPEP 2143).
Regarding claim 2, BOSCH as modified by Cherukuri disclose all of the limitations of claim 1. BOSCH discloses wherein, the Bayesian filter is a Kalman filter (“data fusion is performed using filter-based data fusion algorithms, with at least Kalman filters “ [0022]).
Regarding claim 3, BOSCH as modified by Cherukuri disclose all of the limitations of claim 2. BOSCH discloses wherein, a process model of the Kalman filter predicts a plurality of state vectors of the vehicle (“distributed and centralized Kalman filters are used for data fusion” [0022]).
Regarding claim 4, BOSCH as modified by Cherukuri disclose all of the limitations of claim 3. BOSCH discloses wherein, a measurement model of the Kalman filter performs an update of the plurality of state vectors of the vehicle determined by the process model based on an observation vector (“distributed Kalman filters can filter data from multiple data sources in real time, improving the system's fault tolerance and reducing the computational load” [0022]).
Regarding claim 5, BOSCH as modified by Cherukuri disclose all of the limitations of claim 1. BOSCH discloses wherein, the calibrated camera estimated depth represents a raw camera depth of the target object (“The imaging unit 110 includes a depth camera designed to obtain image stream data about the position and orientation of an object in the environment” [0016]) determined based on the image data captured by the non-stereo camera system (“The depth camera can be an RGB-D camera” [0016]) that is calibrated based on a real depth of the target object ("SLAM algorithm is used to accumulate 3D points recorded by the RGB-D camera during the SLAM procedure” [0020]).
Regarding claim 6, BOSCH as modified by Cherukuri disclose all of the limitations of claim 5. BOSCH discloses wherein, the real depth of the target object is determined based on the sensor signals from the UWB sensor network (“the absolute position information about the UWB tag based on time lengths from the transmission of the UWB signal to the reception of the UWB signal by the plurality of UWB anchors” [0028]).
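The range-based localization cited above (a real depth derived from the sensor signals of three or more anchors, Bosch [0028]) can be sketched as a minimal 2D trilateration. The anchor coordinates and tag position below are hypothetical:

```python
import math

def trilaterate(anchors, dists):
    """Estimate a 2D tag position from ranges to three anchors.

    Linearizes the three circle equations by subtracting the first,
    then solves the resulting 2x2 linear system directly.
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    # Coefficients of the linearized system A @ [x, y] = b.
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Hypothetical anchors at three points on a vehicle, tag at (2, 3).
anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 2.0)]
tag = (2.0, 3.0)
dists = [math.dist(a, tag) for a in anchors]
print(trilaterate(anchors, dists))  # recovers the tag position up to rounding
```

With noise-free ranges the linear solve recovers the tag position exactly; in practice the over-determined system from more than three anchors would be solved in a least-squares sense.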
Regarding claim 7, BOSCH as modified by Cherukuri disclose all of the limitations of claim 1. BOSCH discloses wherein, the initial calibration procedure includes: executing one or more rotated object detection algorithms that determine a rotated bounding box that identifies the target object located within a corresponding image frame of the image data (“In graph-based SLAM, a graph is constructed whose nodes represent positions of the mobile device or landmarks, and where edges between two nodes represent sensor measurements” [0024]).
Regarding claim 15, BOSCH as modified by Cherukuri disclose all of the limitations of claim 1. BOSCH discloses wherein, the one or more processors of the one or more controllers execute instructions to:
build a vector map based on an attractive field force between the vehicle and a target location, repulsive field forces between the vehicle and the target object and the vehicle and one or more remaining objects located in the environment, and an overall field force at a current location of the vehicle (“a movement distance of the mobile device and point cloud maps of the environment based on at least one of the images, the relative motion information and the first absolute position information” [0036]).
Regarding claim 16, BOSCH as modified by Cherukuri disclose all of the limitations of claim 15. BOSCH discloses wherein, the target location represents a destination location of the vehicle (“The mobile device can be a vehicle” [0013]).
Claims 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over BOSCH(DE112020007655T5) as modified by Cherukuri(US20240012090A1) as applied in claim 7, in view of Smith(US20200108923A1).
Regarding claim 8, BOSCH as modified by Cherukuri disclose all of the limitations of claim 7. BOSCH does not explicitly disclose the use of the YOLO Darknet-53 algorithm. Smith teaches in the same field of endeavor of UWB network positioning. Smith discloses wherein, the rotated object detection algorithm is the you only look once (YOLO) Darknet-53 algorithm with a recurrent neural network (RNN) (“provided by the You Only Look Once (YOLO) detection architecture known in the industry” [0054]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bosch as modified by Cherukuri with the teachings of Smith to incorporate the YOLO Darknet-53 algorithm so as to gain the advantage of improving detection and tracking ([0054], Smith). Further, it has been held that if a technique has been used to improve one device, and a person of ordinary skill in the art would recognize that it would improve similar devices in the same way, using the technique is obvious unless its actual application is beyond his or her skill (MPEP 2143).
Regarding claim 9, BOSCH as modified by Cherukuri disclose all of the limitations of claim 7. BOSCH does not explicitly disclose a YOLO Darknet-53 algorithm wherein the respective location parameters of the rotated bounding box are configured. Smith teaches in the same field of endeavor of UWB network positioning. Smith discloses wherein, the initial calibration procedure includes: determining a plurality of location parameters of the rotated bounding box, (“provided by the You Only Look Once (YOLO) detection architecture known in the industry” [0054]) wherein the location parameters of the rotated bounding box include an x-axis location coordinate, a y-axis pixel coordinate, a width of the rotated bounding box, a height of the rotated bounding box, and an angular orientation of the rotated bounding box relative to the horizontal axis of the corresponding image frame (“the gimbal orientation supporting the camera can be used to determine azimuthal and elevation of the target aerial vehicle” [0054]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bosch as modified by Cherukuri with the teachings of Smith to incorporate determining a plurality of location parameters of the rotated bounding box in order to initialize the YOLO Darknet-53 algorithm, so as to gain the advantage of improving detection and tracking ([0054], Smith). Further, it has been held that if a technique has been used to improve one device, and a person of ordinary skill in the art would recognize that it would improve similar devices in the same way, using the technique is obvious unless its actual application is beyond his or her skill (MPEP 2143).
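The location parameters recited in claim 9 (an x-axis coordinate, a y-axis coordinate, a width, a height, and an angular orientation relative to the horizontal axis) fully determine a rotated bounding box. As a minimal sketch with hypothetical values, the five parameters can be carried in a small container and expanded into the four corner coordinates:

```python
import math
from dataclasses import dataclass

@dataclass
class RotatedBox:
    """The five claimed location parameters of a rotated bounding box:
    centre x/y (pixels), width, height, and angle to the horizontal axis."""
    cx: float
    cy: float
    w: float
    h: float
    theta_deg: float

    def corners(self):
        """Rotate the four half-extent offsets and translate to the centre."""
        t = math.radians(self.theta_deg)
        c, s = math.cos(t), math.sin(t)
        pts = []
        for dx, dy in [(-1, -1), (1, -1), (1, 1), (-1, 1)]:
            x, y = dx * self.w / 2, dy * self.h / 2
            pts.append((self.cx + x * c - y * s, self.cy + x * s + y * c))
        return pts

# A 40x20 box rotated 90 degrees occupies a 20-wide, 40-tall region.
box = RotatedBox(cx=100.0, cy=50.0, w=40.0, h=20.0, theta_deg=90.0)
print([(round(x, 1), round(y, 1)) for x, y in box.corners()])
```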
Regarding claim 10, BOSCH as modified by Cherukuri and further modified by Smith disclose all of the limitations of claim 9. BOSCH discloses wherein, the initial calibration procedure includes:
determining a raw camera depth based on: [Equation not reproduced] (“Sensor data fusion for localization can then be performed” [0023]), wherein d_cam represents the raw camera depth, h_b represents the height of the rotated bounding box, f_cam represents a focal length of a camera that is part of the non-stereo camera system, and d_real represents a real depth of the target object determined based on the sensor signals received from the three or more anchors (“performing data fusion of at least the position data, the visual hodometry information, the relative movement information and the first absolute position information about the mobile device” [0018]).
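The claimed equation itself is not reproduced in the record, but the recited variables (d_cam, h_b, f_cam, d_real) are consistent with a standard pinhole similar-triangles relation. The sketch below assumes that form, and every numeric value (focal length in pixels, assumed target height, bounding-box height, UWB-derived real depth) is hypothetical:

```python
def raw_camera_depth(f_cam_px, obj_height_m, bbox_height_px):
    """Assumed pinhole similar-triangles depth: d_cam = f_cam * H / h_b.

    f_cam_px: focal length in pixels; obj_height_m: assumed physical
    height of the target; bbox_height_px: height of its bounding box.
    """
    return f_cam_px * obj_height_m / bbox_height_px

# Hypothetical calibration step: compare the raw camera depth against
# the real depth obtained from the UWB anchor ranges.
d_cam = raw_camera_depth(f_cam_px=800.0, obj_height_m=1.5, bbox_height_px=240.0)
d_real = 5.2               # metres, from UWB-based localization
scale = d_real / d_cam     # correction factor applied to later camera depths
print(round(d_cam, 2), round(scale, 3))
```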
Claims 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over BOSCH(DE112020007655T5) as modified by Cherukuri(US20240012090A1) and Smith(US20200108923A1) as applied in claim 10, further in view of ZHU(CN115239759A).
Regarding claim 11, BOSCH as modified by Cherukuri and Smith disclose all of the limitations of claim 10. BOSCH as modified by Cherukuri and Smith does not explicitly disclose a linear relationship between a raw camera depth and the center of a corresponding image. ZHU teaches in the same field of endeavor of UWB network positioning. ZHU discloses wherein a relationship between the raw camera depth and a center of the corresponding image frame is expressed by an equation of a line: [equation not reproduced] where 𝛽 represents a gradient of the line and 𝛾 represents a y-axis intercept point of the line (“take the depth value of the center pixel within the bounding rectangle of the target in the current frame, and the depth values of the pixels at the four vertices of a rectangle constructed with the center pixel as the center” [n0037]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bosch as modified by Cherukuri and Smith with the teachings of ZHU to incorporate a linear relationship between a raw camera depth and the center of a corresponding image so as to gain the advantage of improving tracking accuracy ([n0037], ZHU). Further, it has been held that if a technique has been used to improve one device, and a person of ordinary skill in the art would recognize that it would improve similar devices in the same way, using the technique is obvious unless its actual application is beyond his or her skill (MPEP 2143).
Regarding claims 12 and 13, BOSCH as modified by Cherukuri and Smith in view of ZHU does not explicitly disclose solving the equations of the initial calibration procedure as claimed; however, it is well known in the art that one of ordinary skill derives his or her own formulation to operate a system through routine experimentation. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate the initial calibration procedure as claimed in the system of BOSCH as modified by Cherukuri and Smith in view of ZHU so as to gain the advantage of improving tracking accuracy ([n0037], ZHU), since it is well known in the art to derive a mathematical algorithm, formula, or equation to operate a system.
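The claimed line (gradient 𝛽, y-intercept 𝛾) relating raw camera depth to the image center can be recovered from calibration samples by an ordinary least-squares fit. The sample offsets and depth-correction ratios below are hypothetical and chosen only so the fitted line is easy to verify:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = beta * x + gamma."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope from centred covariance over centred variance.
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    gamma = my - beta * mx  # intercept pins the line through the means
    return beta, gamma

# Hypothetical calibration samples: pixel offset from the image centre
# versus the observed depth-correction ratio d_real / d_cam.
offsets = [0.0, 50.0, 100.0, 150.0]
ratios = [1.00, 1.02, 1.04, 1.06]
beta, gamma = fit_line(offsets, ratios)
print(round(beta, 4), round(gamma, 2))
```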
Claims 14 and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over BOSCH(DE112020007655T5) as modified by Cherukuri(US20240012090A1) as applied in claim 1, in view of ZHU(CN115239759A).
Regarding claim 14, BOSCH as modified by Cherukuri disclose all of the limitations of claim 1. BOSCH as modified by Cherukuri does not explicitly disclose calibrating to account for lens distortion. ZHU teaches in the same field of endeavor of UWB network positioning. ZHU discloses wherein, the calibrated camera estimated depth accounts for an error introduced by lens distortion of a camera that is part of the non-stereo camera system, and wherein the error is linearly related to a center of a corresponding image frame of the image data (“cameras have radial and tangential distortion when they leave the factory, calibration is performed for horizontally placed cameras” [n0068]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bosch as modified by Cherukuri with the teachings of ZHU to incorporate calibrating to account for lens distortion so as to gain the advantage of improving tracking accuracy. Further, it has been held that if a technique has been used to improve one device, and a person of ordinary skill in the art would recognize that it would improve similar devices in the same way, using the technique is obvious unless its actual application is beyond his or her skill (MPEP 2143).
Regarding claim 17, BOSCH discloses
An object detection system for a vehicle that estimates a location of a target object located in an environment (“ the data processing unit 140 is further designed to generate at least one of the absolute position information, the trajectory information, an extent of an object and/or its movement in the environment” [0018]) surrounding the vehicle (“The mobile device can be a vehicle” [0013]), the object detection system comprising: an ultra-wide band (UWB) sensor network (“The UWB system, including UWB tags and UWB anchors”[0029]) including three or more anchors (“three UWB anchors” [0028]) […] that are in wireless communication with a tag mounted to the target object (“at least one ultra-wideband (UWB) tag attached to the mobile device” [0028]), wherein each anchor sends and receives sensor signals that indicate real-time distances between each anchor and the tag (“determine the absolute position information about the UWB tag based on time lengths from the transmission of the UWB signal to the reception of the UWB signal by the plurality of UWB anchors” [0028]);
a non-stereo camera system that includes a camera that captures image data representing the target object located in the environment surrounding the vehicle (“The depth camera can be an RGB-D camera e.g. B. Intel RealSense D435i” [0016]” & “ the imaging unit 110 includes a depth camera designed to obtain image stream data about the position and orientation of an object in the environment” [0016]) wherein the non-stereo camera system does not provide accurate depth estimation of the target object by itself (“The UWB system, including UWB tags and UWB anchors, can provide absolute metric position information via UWB tags, thus advantageously balancing the scale problem of visual hodometry and IMU by integrating this absolute metric position information into data fusion to improve localization precision.” [0029]); and one or more controllers in electronic communication with the UWB sensor network and the non-stereo camera system (FIG.1, Part 140), wherein the one or more controllers includes one or more processors that execute instructions to: estimate a camera-based location of the target object based on the image data (“a depth camera designed to obtain image stream data about the position and orientation of an object” [0016]), wherein the camera-based location is adjusted to account for a calibrated camera estimated depth determined during an initial calibration procedure ("SLAM algorithm is used to accumulate 3D points recorded by the RGB-D camera during the SLAM procedure” [0020]), […];
estimate a UWB-based location of the target object by executing one or more range-based localization algorithms that analyze the sensor signals (“The UWB system, including UWB tags and UWB anchors, can provide absolute metric position information via UWB tags” [0029]); and
fuse together the camera-based location of the target object and the UWB-based location of the target object by a Kalman filter to estimate the location of the target object (“data fusion is performed using filter-based data fusion algorithms, with at least Kalman filters “ [0022]).
Bosch discloses a combined UWB and non-stereo camera system, including the use of anchors and tags to locate a target object. Bosch does not explicitly disclose that the three or more anchors are mounted to the vehicle. Cherukuri teaches in the same field of endeavor of UWB network positioning. Cherukuri discloses anchors mounted to the vehicle that are in wireless communication with a tag mounted to the target object (“the UWB anchors are mounted on a vehicle” [0042]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bosch with the teachings of Cherukuri to incorporate anchors mounted to the vehicle so as to gain the advantage of improving reception range. Further, it has been held that if a technique has been used to improve one device, and a person of ordinary skill in the art would recognize that it would improve similar devices in the same way, using the technique is obvious unless its actual application is beyond his or her skill (MPEP 2143).
BOSCH as modified by Cherukuri does not explicitly disclose calibrating to account for lens distortion. ZHU teaches in the same field of endeavor of UWB network positioning. ZHU discloses wherein, the calibrated camera estimated depth accounts for an error introduced by lens distortion of the camera and the error is linearly related to a center of a corresponding image frame of the image data (“cameras have radial and tangential distortion when they leave the factory, calibration is performed for horizontally placed cameras” [n0068]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bosch as modified by Cherukuri with the teachings of ZHU to incorporate calibrating to account for lens distortion so as to gain the advantage of improving tracking accuracy. Further, it has been held that if a technique has been used to improve one device, and a person of ordinary skill in the art would recognize that it would improve similar devices in the same way, using the technique is obvious unless its actual application is beyond his or her skill (MPEP 2143).
Regarding claim 18, BOSCH as modified by Cherukuri and further modified by ZHU disclose all of the limitations of claim 17. BOSCH discloses wherein, the initial calibration procedure includes: executing one or more rotated object detection algorithms that determine a rotated bounding box that defines the target object located within a corresponding image frame of the image data (“In graph-based SLAM, a graph is constructed whose nodes represent positions of the mobile device or landmarks, and where edges between two nodes represent sensor measurements” [0024]).
Regarding claim 19, BOSCH as modified by Cherukuri and further modified by ZHU disclose all of the limitations of claim 18. BOSCH as modified by Cherukuri does not explicitly disclose the use of the YOLO Darknet-53 algorithm. ZHU teaches in the same field of endeavor of UWB network positioning. ZHU discloses wherein, the rotated object detection algorithm is the you only look once (YOLO) Darknet-53 algorithm with a recurrent neural network (RNN) (“the YOLO object detection algorithm in the Darknet deep learning framework” [n0073]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bosch as modified by Cherukuri with the teachings of ZHU to incorporate the YOLO Darknet-53 algorithm so as to gain the advantage of improving image detection and tracking. Further, it has been held that if a technique has been used to improve one device, and a person of ordinary skill in the art would recognize that it would improve similar devices in the same way, using the technique is obvious unless its actual application is beyond his or her skill (MPEP 2143).
Regarding claim 20, BOSCH discloses,
An object detection system for a vehicle that estimates a location of a target object located in an environment (“the data processing unit 140 is further designed to generate at least one of the absolute position information, the trajectory information, an extent of an object and/or its movement in the environment” [0018]) surrounding the vehicle (“The mobile device can be a vehicle” [0013]), the object detection system comprising: an ultra-wide band (UWB) sensor network (“The UWB system, including UWB tags and UWB anchors” [0029]) including three or more anchors (“three UWB anchors” [0028]) […] that are in wireless communication with a tag mounted to the target object (“at least one ultra-wideband (UWB) tag attached to the mobile device” [0028]), wherein each anchor sends and receives sensor signals that indicate real-time distances between each anchor and the tag (“determine the absolute position information about the UWB tag based on time lengths from the transmission of the UWB signal to the reception of the UWB signal by the plurality of UWB anchors” [0028]); a non-stereo camera system that includes a camera that captures image data representing the target object located in the environment surrounding the vehicle (“The depth camera can be an RGB-D camera, e.g. Intel RealSense D435i” [0016] & “the imaging unit 110 includes a depth camera designed to obtain image stream data about the position and orientation of an object in the environment” [0016]), wherein the non-stereo camera system does not provide accurate depth estimation of the target object by itself (“The UWB system, including UWB tags and UWB anchors, can provide absolute metric position information via UWB tags, thus advantageously balancing the scale problem of visual odometry and IMU by integrating this absolute metric position information into data fusion to improve localization precision.” [0029]); and one or more controllers in electronic communication with the UWB sensor network and the non-stereo camera system (FIG. 1, Part 140), wherein the one or more controllers includes one or more processors that execute instructions to: estimate a camera-based location of the target object based on the image data (“a depth camera designed to obtain image stream data about the position and orientation of an object” [0016]), wherein the camera-based location is adjusted to account for a calibrated camera estimated depth determined during an initial calibration procedure (“SLAM algorithm is used to accumulate 3D points recorded by the RGB-D camera during the SLAM procedure” [0020]), […], and wherein the initial calibration procedure includes executing one or more rotated object detection algorithms that determine a rotated bounding box that defines the target object located within a corresponding image frame of the image data (“In graph-based SLAM, a graph is constructed whose nodes represent positions of the mobile device or landmarks, and where edges between two nodes represent sensor measurements” [0024]); estimate a UWB-based location of the target object by executing one or more range-based localization algorithms that analyze the sensor signals (“The UWB system, including UWB tags and UWB anchors, can provide absolute metric position information via UWB tags” [0029]); and fuse together the camera-based location of the target object and the UWB-based location of the target object by a Kalman filter to estimate the location of the target object (“data fusion is performed using filter-based data fusion algorithms, with at least Kalman filters” [0022]).
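For context only: Bosch's paragraph [0022] and the claim recite Kalman-filter data fusion without specifying an implementation. A minimal sketch of fusing a camera-based position estimate with a UWB-based position estimate, treating the UWB fix as a single Kalman measurement update of the camera-based prior, might look as follows; the variable names and covariance values are illustrative assumptions, not disclosures of any cited reference:

```python
import numpy as np

def fuse_estimates(x_cam, P_cam, x_uwb, P_uwb):
    """Fuse a camera-based and a UWB-based position estimate.

    Treats the UWB fix as a measurement updating the camera-based
    prior, i.e. one Kalman measurement update with H = identity.
    """
    K = P_cam @ np.linalg.inv(P_cam + P_uwb)   # Kalman gain
    x = x_cam + K @ (x_uwb - x_cam)            # fused position
    P = (np.eye(len(x_cam)) - K) @ P_cam       # fused covariance
    return x, P

# Example: an uncertain camera fix and a tighter UWB fix; the fused
# position lies between them, weighted toward the lower-variance UWB.
x, P = fuse_estimates(np.array([2.0, 1.0]), 0.5 * np.eye(2),
                      np.array([2.4, 1.2]), 0.1 * np.eye(2))
```

Because the UWB covariance is smaller, the gain weights the UWB measurement more heavily, and the fused covariance is smaller than either input, which is the sense in which fusion "improves localization precision" per Bosch [0029].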
Bosch discloses a combined UWB and non-stereo camera system including the use of anchors and tags to locate a target object. Bosch does not explicitly disclose that the three or more anchors are mounted to the vehicle. Cherukuri discloses anchors mounted to the vehicle that are in wireless communication with a tag mounted to the target object (“the UWB anchors are mounted on a vehicle” [0042]).
Cherukuri teaches in the same field of endeavor of UWB network positioning. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify Bosch with the teachings of Cherukuri to incorporate the features of anchors mounted to the vehicle so as to gain the advantage of improving reception range. Further, it has been held that if a technique has been used to improve one device, and a person of ordinary skill in the art would have recognized that it would improve similar devices in the same way, using the technique is obvious unless its actual application is beyond his or her skill (MPEP 2143).
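For context only: the claimed "range-based localization algorithms" are not tied to any particular method by the claim or by Bosch [0028]-[0029]. Under the assumption of a standard linearized trilateration approach (an illustrative choice, not a disclosure of any cited reference), estimating a 2-D tag position from three anchor positions and measured ranges might be sketched as:

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Estimate a 2-D tag position from anchor positions and ranges.

    Subtracting the first range equation from the others linearizes
    the system, which is then solved in a least-squares sense.
    """
    a = np.asarray(anchors, dtype=float)
    r = np.asarray(ranges, dtype=float)
    A = 2.0 * (a[1:] - a[0])                       # linearized coefficients
    b = (r[0] ** 2 - r[1:] ** 2
         + np.sum(a[1:] ** 2, axis=1) - np.sum(a[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)    # least-squares solve
    return pos

# Example: three anchors on the vehicle, true tag at (1, 1).
pos = trilaterate([(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)],
                  [2 ** 0.5, 10 ** 0.5, 5 ** 0.5])
```

With exactly three anchors in 2-D the linearized system is square; additional anchors would simply be absorbed by the same least-squares solve.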
Bosch as modified by Cherukuri does not explicitly disclose calibrating to account for lens distortion. ZHU teaches in the same field of endeavor of UWB network positioning. ZHU discloses wherein the calibrated camera estimated depth accounts for an error introduced by lens distortion of the camera and the error is linearly related to a center of a corresponding image frame of the image data (“cameras have radial and tangential distortion when they leave the factory, calibration is performed for horizontally placed cameras” [0068]).
ZHU teaches in the same field of endeavor of UWB network positioning. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify Bosch as modified by Cherukuri with the teachings of ZHU to incorporate the features of calibrating to account for lens distortion so as to gain the advantage of improving tracking accuracy. Further, it has been held that if a technique has been used to improve one device, and a person of ordinary skill in the art would have recognized that it would improve similar devices in the same way, using the technique is obvious unless its actual application is beyond his or her skill (MPEP 2143).
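For context only: ZHU [0068] refers to radial and tangential distortion generally, without a specific correction formula. A first-order radial (Brown-Conrady style) undistortion of the kind such calibration addresses might be sketched as follows; the model, the parameter k1, and the fixed-point inversion are illustrative assumptions, not taken from ZHU:

```python
def undistort_point(x_d, y_d, cx, cy, k1, iters=5):
    """Remove first-order radial distortion from a pixel coordinate.

    Assumes the common model x_d = x_u * (1 + k1 * r^2), measured
    about the image center (cx, cy), inverted by fixed-point iteration.
    """
    xn, yn = x_d - cx, y_d - cy        # shift to image center
    xu, yu = xn, yn                    # initial guess: no distortion
    for _ in range(iters):
        r2 = xu * xu + yu * yu
        factor = 1.0 + k1 * r2
        xu, yu = xn / factor, yn / factor  # fixed-point update
    return xu + cx, yu + cy

# Example: with k1 = 0 the point is unchanged; with a small positive
# k1 the corrected point moves slightly toward the image center.
corrected = undistort_point(110.0, 100.0, 100.0, 100.0, 1e-5)
```

The dependence on distance from the image center is consistent with the claim's recitation that the distortion error is related to the center of the image frame.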
For applicant’s benefit, portions of the cited reference(s) have been cited to aid in the review of the rejection(s). While every attempt has been made to be thorough and consistent within the rejection, it is noted that the PRIOR ART MUST BE CONSIDERED IN ITS ENTIRETY, INCLUDING DISCLOSURES THAT TEACH AWAY FROM THE CLAIMS. See MPEP 2141.02 VI.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CLAYTON PAUL RIDDER whose telephone number is (571)272-2771. The examiner can normally be reached Monday thru Friday ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jack Keith can be reached on (571) 272-6878. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/C.P.R./Examiner, Art Unit 3646
/JACK W KEITH/Supervisory Patent Examiner, Art Unit 3646