DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Response to Amendment
Claims 1-20 are currently pending.
Independent claims 1, 9, and 17 and dependent claims 13 and 20 have been amended by applicant’s amendments received 20 January 2026. No new matter has been introduced.
Prior objections to the drawings have been overcome by amendment and are therefore withdrawn.
Prior objections to claims 9 and 17 have been overcome by amendment and are therefore withdrawn.
Prior rejections of claim 20 under 35 U.S.C. § 112(b) have been overcome by amendment and are therefore withdrawn.
Response to Arguments
Applicant’s arguments, see Remarks, pg. 11, filed 20 January 2026, with respect to the rejections of claims 1-6, 8-10, 15-18, and 20 under 35 U.S.C. § 102(a)(1) and (a)(2) and the rejections of claims 7 and 13 under 35 U.S.C. § 103 have been fully considered and are persuasive. Therefore, those rejections have been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of newly found prior art references in response to the filed amendments to independent claims 1, 9, and 17 and dependent claim 13.
Applicant’s response (Remarks, pg. 11 lines 1-17) regarding the rejection under 35 U.S.C. § 102(a)(1) and (a)(2) of claims 1-6, 8-10, 15-18, and 20 argues that the cited prior art (Lee, US 20220185324 A1) does not teach the newly incorporated limitations, with which the examiner agrees. However, upon further search and consideration, newly found prior art, as cited below, has been referenced.
Applicant’s response (Remarks, pg. 11 line 18 – pg. 12, line 2) regarding the rejection under 35 U.S.C. § 103 of claims 7, 11-14, and 19 with the cited prior art (Afrouzi, US 20190035100 A1 and Banerjee, US 20230237783 A1, respectively) indicates that neither Afrouzi nor Banerjee teaches the newly incorporated limitations to the independent claims, with which the examiner agrees. Further, applicant indicates that Banerjee does not teach the newly incorporated limitations to claim 13, with which the examiner also agrees. However, again upon further search and consideration, newly found prior art, as cited below, has been referenced and the rejections have been updated.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Claim 9 includes the following limitations, which all recite a “means” and will undergo the three-prong test:
A motion tracker means, comprising:
a means for receiving first data, wherein the first data comprise a first frame of a first scene of an environment detected by a camera or image sensor;
a means for receiving second data, wherein the second data comprise a second frame of a second scene of an environment detected by a light detection and ranging (LIDAR) sensor, wherein at least a subset of the second scene corresponds to the first scene;
a means for transforming the second data to generate transformed second data corresponding to the first frame;
a means for determining a first weighting factor for the first data and a second weighting factor for the transformed second data;
a means for weighting the first data using the first weighting factor to generate first weighted data;
a means for weighting the transformed second data using the second weighting factor to generate second weighted data;
and a means for combining the first weighted data and the second weighted data to generate a combined image data.
For all limitations outlined above, the three-prong test is as follows:
All limitations use the term “means” as a generic, non-structural term for performing the claimed function.
The term “means” is linked by the transition word “for” in each limitation.
The term “means” is not modified by sufficient structure, material, or acts for performing the claimed function within the claim limitation.
Therefore, all limitations within claim 9 will be interpreted as invoking 35 U.S.C. 112(f) and given the broadest reasonable interpretation based on information within the specification. There is sufficient structure described in the specification, such that all of the claimed means for processing are understood to be performable by the controller (150), as shown in Figs. 1 and 2 and described in the specification at [0011] – [0017]. As the controller is described as being able to receive the first and second data from the camera and LIDAR sensors, respectively, and then further process them, the ‘means’ in claim 9 will be interpreted to be accomplished by a controller, a processor which can control a system, and the like.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-6, 8-10, 15-18, and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lee (US 20220185324 A1) in view of Ebrahimi Afrouzi et al. (hereinafter Afrouzi (‘920), US 20220026920 A1).
Regarding claim 1, Lee teaches a non-transitory computer readable medium having instructions stored therein that ([0080]), when executed by a controller, cause the controller to:
receive first data, wherein the first data comprise a first frame of a first scene of an environment detected by a camera or image sensor ([0130], [0199]; Fig. 20, step (1904) collects camera frame data);
receive second data, wherein the second data comprise a second frame of a second scene of an environment detected by a light detection and ranging (LIDAR) sensor, wherein at least a subset of the second scene corresponds to the first scene ([0158], [0160], [0198]; Fig. 20, step (1902) collects LiDAR frame data where at least part of the FoV of the LiDAR overlaps with the FoV of the camera);
transform the second data to generate transformed second data corresponding to the first frame ([0162], where information and features distinguishable in both LiDAR and camera information are used to merge the LiDAR data to the camera data);
dynamically determine a first weighting factor for the first data and a second weighting factor for the transformed second data ([0163] - [0166]; where the system may set bounding boxes or designate object point clusters for merging based on current sensor information, and bounding boxes and filters may be used to ignore segments or whole amounts of either first or second data, and filtering essentially assigns a weighting to both data sets), based on previous values of the first data and the second data ([0170] - [0175]; where data and image IDs for objects may be updated and tracked based on subsequent IDs/object locations/data sets);
weight the first data using the first weighting factor to generate first weighted data ([0163] - [0166], where filtering is applied to camera data);
weight the transformed second data using the second weighting factor to generate second weighted data ([0163] - [0166], where filtering is applied to the LiDAR data after it has been transformed to align with camera data);
and combine the first weighted data and the second weighted data to generate a combined image data ([0130], [0204]; Fig. 20, step (1908) merges the camera and lidar data).
Lee does not teach determining the first and the second weighting factor in connection with a probability density function.
Afrouzi (‘920) teaches a method and a system which involve operating a cleaning robot, where sensor data is collected and the system may dynamically determine the first and the second weighting factor in connection with a probability density function ([0767] – [0768], [0817], [1095], where the system with multiple sensors such as a camera and LIDAR may dynamically merge images/sensor data, which includes assigning weight to more recently collected data and incorporates a probability density function into the weighting factor).
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Lee to incorporate the teachings of Afrouzi (‘920) to utilize a probability density function to determine weighting factors for multiple sensor data, with a reasonable expectation of success. Afrouzi (‘920) notes that weighting more recent sensor data helps to smooth sensor data ([0767]), and incorporating this into the system of Lee would have the predictable result of both smoothing sensor data and determining how much more probable certain features of sensor readings are, therefore allowing more accurate weighting of the sensor data.
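For illustration only, the combination addressed above — weighting factors derived from a probability density function evaluated against previous sensor values — might be sketched as follows; the code is a hypothetical example, not drawn from Lee or Afrouzi (‘920), and all names and values are assumptions.

    import numpy as np

    def gaussian_pdf(x, mean, std):
        # Gaussian probability density evaluated element-wise.
        return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2.0 * np.pi))

    def fuse_frames(camera, lidar_transformed, prev_camera, prev_lidar, std=10.0):
        # Densities are larger where the current data agree with the previous values.
        p_cam = gaussian_pdf(camera, prev_camera, std)
        p_lidar = gaussian_pdf(lidar_transformed, prev_lidar, std)
        # Convert the two densities into weighting factors (1e-9 avoids division by zero).
        w_cam = p_cam / (p_cam + p_lidar + 1e-9)
        w_lidar = 1.0 - w_cam
        # Weight each data set and combine into a single fused image.
        return w_cam * camera + w_lidar * lidar_transformed

All four inputs are assumed to be numpy arrays of the same shape, i.e., the LIDAR data has already been transformed to correspond to the camera frame.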
Regarding claim 2, Lee as modified above teaches the non-transitory computer readable medium of claim 1, further having instructions stored therein that, when executed by the controller, cause the controller to
identify the first frame corresponding to a timing of the second frame for the transformation of second data, wherein the first data comprise a sequence of first frames ([0135], [0144] - [0149]; Figs. 15A-D where the system identifies a timing difference between the LiDAR and camera data, taken at multiple points in time, and accounts for it).
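As a hypothetical illustration of the frame-matching step (not drawn from Lee), selecting the camera frame whose capture time is closest to the LiDAR frame's timestamp could be as simple as:

    def match_camera_frame(camera_frames, lidar_timestamp):
        # camera_frames is a sequence of (timestamp, frame) tuples; return the
        # tuple whose timestamp is nearest to the LiDAR frame's timestamp.
        return min(camera_frames, key=lambda tf: abs(tf[0] - lidar_timestamp))

For example, with camera frames captured at 0.000 s, 0.033 s, and 0.066 s, a LiDAR frame captured at 0.040 s would be paired with the 0.033 s camera frame.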
Regarding claim 3, Lee as modified above teaches the non-transitory computer readable medium of claim 1, further having instructions stored therein that, when executed by the controller, cause the controller to
extract key points of the first scene from the first frame, and to match extracted key points with extracted key points of a previous frame of the camera or image sensor ([0130] - [0136]; the system classifies objects of interest in camera information, and may utilize multiple frames acquired over a period of time to follow moving objects).
Regarding claim 4, Lee as modified above teaches the non-transitory computer readable medium of claim 3, further having instructions stored therein that, when executed by the controller, cause the controller to
transform the matched key points of the first frame to coordinates of the environment ([0169], [0173], [0190] - [0193]; the image merging system may store the location of objects within a global map, indicative of location in the environment relative to one or more other objects, within a specific distance range from the vehicle, etc.).
Regarding claim 5, Lee as modified above teaches the non-transitory computer readable medium of claim 1, further having instructions stored therein that, when executed by the controller, cause the controller to
extract key points of the second scene from the second frame, and to match the extracted key points with extracted key points of a previous frame of the LIDAR sensor ([0132], [0163], [0213]; the system indicates LiDAR point clusters indicative of regions of interest, which may be continually updated to increase resolution or to confirm or reassess an object location).
Regarding claim 6, Lee as modified above teaches the non-transitory computer readable medium of claim 5, further having instructions stored therein that, when executed by the controller, cause the controller to
transform the matched key points of the second frame to correspond to key points of the first frame ([0161] - [0163]; where the image merging system (1350) associates at least one portion of the LiDAR information with at least one pixel of the camera information).
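For illustration only (a hypothetical sketch, not the implementation of Lee's image merging system (1350)), associating LiDAR points with camera pixels typically amounts to a rigid transform followed by a pinhole projection; R, t, and K below are assumed extrinsic and intrinsic calibration parameters.

    import numpy as np

    def project_lidar_to_camera(points_lidar, R, t, K):
        # Rigidly transform (N, 3) LiDAR points into the camera frame.
        pts_cam = (R @ points_lidar.T).T + t
        # Keep only points located in front of the camera.
        pts_cam = pts_cam[pts_cam[:, 2] > 0]
        # Project onto the image plane and divide by depth to obtain (u, v) pixels.
        uv = (K @ pts_cam.T).T
        return uv[:, :2] / uv[:, 2:3]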
Regarding claim 8, Lee as modified above teaches the non-transitory computer readable medium of claim 1, further having instructions stored therein that, when executed by the controller, cause the controller to
transmit the combined image data to a motion tracking module configured for tracking a motion in the environment ([0090], [0093]; Fig. 4, the planning module (404) and localization module (408) work with the control module (406) and additionally receive data representing the AV's position for control in the environment).
Regarding claim 9, Lee teaches a motion tracker means ([0114] - [0122]; Figs. 1, 13A where a controller (1102) includes one or more processors, short and/or long-term data storage, and instructions stored in memory to carry out operations of the controller (1102) and which may include image merging system (1350), programmed to process the following), comprising:
a means for receiving first data, wherein the first data comprise a first frame of a first scene of an environment detected by a camera or image sensor ([0130], [0199]; Fig. 20, step (1904) collects camera frame data);
a means for receiving second data, wherein the second data comprise a second frame of a second scene of an environment detected by a light detection and ranging (LIDAR) sensor, wherein at least a subset of the second scene corresponds to the first scene ([0158], [0160], [0198]; Fig. 20, step (1902) collects LiDAR frame data where at least part of the FoV of the LiDAR overlaps with the FoV of the camera);
a means for transforming the second data to generate transformed second data corresponding to the first frame ([0162], where information and features distinguishable in both LiDAR and camera information are used to merge the LiDAR data to the camera data);
a means for dynamically determining a first weighting factor for the first data and a second weighting factor for the transformed second data ([0163] - [0166]; where the system may set bounding boxes or designate object point clusters for merging based on current sensor information, and bounding boxes and filters may be used to ignore segments or whole amounts of either first or second data, and filtering essentially assigns a weighting to both data sets), based on previous values of the first data and the second data ([0170] - [0175]; where data and image IDs for objects may be updated and tracked based on subsequent IDs/object locations/data sets);
a means for weighting the first data using the first weighting factor to generate first weighted data ([0163] - [0166], where filtering is applied to camera data);
a means for weighting the transformed second data using the second weighting factor to generate second weighted data ([0163] - [0166], where filtering is applied to the LiDAR data after it has been transformed to align with camera data);
and a means for combining the first weighted data and the second weighted data to generate a combined image data ([0130], [0204]; Fig. 20, step (1908) merges the camera and lidar data).
Lee does not teach determining the first and the second weighting factor in connection with a probability density function.
Afrouzi (‘920) teaches a method and a system which involve operating a cleaning robot, where sensor data is collected and the system may dynamically determine the first and the second weighting factor in connection with a probability density function ([0767] – [0768], [0817], [1095], where the system with multiple sensors such as a camera and LIDAR may dynamically merge images/sensor data, which includes assigning weight to more recently collected data and incorporates a probability density function into the weighting factor).
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Lee to incorporate the teachings of Afrouzi (‘920) to utilize a probability density function to determine weighting factors for multiple sensor data, with a reasonable expectation of success. Afrouzi (‘920) notes that weighting more recent sensor data helps to smooth sensor data ([0767]), and incorporating this into the system of Lee would have the predictable result of both smoothing sensor data and determining how much more probable certain features of sensor readings are, therefore allowing more accurate weighting of the sensor data.
Regarding claim 10, Lee as modified above teaches the motion tracker of claim 9, wherein
the second scene has about a same field of view as a field of view of the camera or image sensor ([0040], [0096], [0099], [0163]; where the system can compensate for the differences in FoVs, as the LiDAR may have a 360-degree FoV and the camera may have 120 degrees, and merges the camera and LiDAR data, where LiDAR data which overlaps with the camera FoV may be retained as the second scene and other information may be discarded).
Claim 15 is rejected on grounds similar to those applied to claim 8 above.
Regarding claim 16, Lee as modified above teaches the motion tracker of claim 9, wherein
the motion tracker is configured to perform motion tracking in real time ([0039], [0058]).
Regarding claim 17, Lee teaches an autonomous system ([0061]; Fig. 1 autonomous vehicle (AV) system (120)) comprising
a camera or image sensor ([0064]; Fig. 1 camera (122)),
a light detection and ranging (LIDAR) sensor ([0064]; Fig. 1 LiDAR (123)),
and a tracking module for tracking a motion of the robot through an environment ([0090]; Fig. 4, planning module (404) works with control module (406) and additionally receives data representing the AV's position), the tracking module configured to:
receive first data, wherein the first data comprise a first frame of a first scene of an environment detected by a camera or image sensor ([0130], [0199]; Fig. 20, step (1904) collects camera frame data);
receive second data, wherein the second data comprise a second frame of a second scene of an environment detected by a light detection and ranging (LIDAR) sensor, wherein at least a subset of the second scene corresponds to the first scene ([0158], [0160], [0198]; Fig. 20, step (1902) collects LiDAR frame data where at least part of the FoV of the LiDAR overlaps with the FoV of the camera);
transform the second data to generate transformed second data corresponding to the first frame ([0162], where information and features distinguishable in both LiDAR and camera information are used to merge the LiDAR data to the camera data);
dynamically determine a first weighting factor for the first data and a second weighting factor for the transformed second data ([0163] - [0166]; where the system may set bounding boxes or designate object point clusters for merging based on current sensor information, and bounding boxes and filters may be used to ignore segments or whole amounts of either first or second data, and filtering essentially assigns a weighting to both data sets), based on previous values of the first data and the second data ([0170] - [0175]; where data and image IDs for objects may be updated and tracked based on subsequent IDs/object locations/data sets);
weight the first data using the first weighting factor to generate first weighted data ([0163] - [0166], where filtering is applied to camera data);
weight the transformed second data using the second weighting factor to generate second weighted data ([0163] - [0166], where filtering is applied to the LiDAR data after it has been transformed to align with camera data);
and combine the first weighted data and the second weighted data to generate a combined image data ([0130], [0204]; Fig. 20, step (1908) merges the camera and lidar data).
Lee does not teach determining the first and the second weighting factor in connection with a probability density function.
Afrouzi (‘920) teaches a method and a system which involve operating a cleaning robot, where sensor data is collected and the system may dynamically determine the first and the second weighting factor in connection with a probability density function ([0767] – [0768], [0817], [1095], where the system with multiple sensors such as a camera and LIDAR may dynamically merge images/sensor data, which includes assigning weight to more recently collected data and incorporates a probability density function into the weighting factor).
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Lee to incorporate the teachings of Afrouzi (‘920) to utilize a probability density function to determine weighting factors for multiple sensor data, with a reasonable expectation of success. Afrouzi (‘920) notes that weighting more recent sensor data helps to smooth sensor data ([0767]), and incorporating this into the system of Lee would have the predictable result of both smoothing sensor data and determining how much more probable certain features of sensor readings are, therefore allowing more accurate weighting of the sensor data.
Regarding claim 18, Lee as modified above teaches the autonomous system of claim 17, wherein
the camera or image sensor comprises a camera mounted on a gimbal, and wherein the first data comprise orientation data of the gimbal ([0155]; where the camera system may pan to an area of an object and reorients based on the camera information, including the updated field-of-view. As is well known in the art of LIDAR and ranging, gimbals are utilized in mounting, as they are pivoted supports which allow rotation or inclination of an imaging device).
Regarding claim 20, Lee as modified above teaches the autonomous system of claim 17, wherein
the autonomous system comprises an autonomous vehicle, an autonomous mobile robot, a drone, or an unmanned aerial vehicle ([0001], [0042] - [0044]).
Claim(s) 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lee (US 20220185324 A1) in view of Ebrahimi Afrouzi et al. (hereinafter Afrouzi (‘920), US 20220026920 A1), and further in view of Ebrahimi Afrouzi et al. (hereinafter Afrouzi (‘100), US 20190035100 A1).
Regarding claim 7, Lee as modified above teaches the non-transitory computer readable medium of claim 1, with instructions that, when executed, cause a transformation of the second data, but is silent on the exact transformation performed.
Afrouzi (‘100) teaches a system and method for combining data from multiple sensors, such as a camera and LIDAR, where the second data undergoes a transformation which
rotates and translates the second data to coordinates of the environment to transform the received second data ([0043], [0049] - [0051]; where coordinates of one or both images may be translated and/or rotated before combining into a coordinate system with a shared origin).
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Lee to incorporate the teachings of Afrouzi (‘100) to specifically note that the second data will undergo a transformation to the coordinates of the environment via a rotation and translation, with a reasonable expectation of success. Afrouzi (‘100) notes that when two fields of view are compared and an overlap is identified between the two, identifying matching patterns, such as with edge detection or pattern recognition, allows for the use of matrices and/or vectors to reorient a sensor’s data to a shared origin for calibration ([0047]). Use of these types of transformations of data is well known in LIDAR and ranging, and in the system of Lee would have the predictable result of calibrating and orienting all data with respect to the same origin.
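As a hypothetical sketch of the rotation-and-translation transformation discussed above (not drawn from Afrouzi (‘100)), moving 2-D sensor points into a shared environment frame may look like the following, where yaw_rad and sensor_origin_env are assumed pose parameters.

    import numpy as np

    def to_environment_frame(points_sensor, yaw_rad, sensor_origin_env):
        # Rotate the (N, 2) sensor points by the sensor's heading ...
        c, s = np.cos(yaw_rad), np.sin(yaw_rad)
        R = np.array([[c, -s],
                      [s,  c]])
        # ... then translate by the sensor's position so both data sets share an origin.
        return (R @ points_sensor.T).T + sensor_origin_env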
Claim(s) 11-12 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lee (US 20220185324 A1) in view of Ebrahimi Afrouzi et al. (hereinafter Afrouzi (‘920), US 20220026920 A1), and further in view of Banerjee et al. (hereinafter Banerjee, US 20230237783 A1).
Regarding claim 11, Lee as modified above teaches the motion tracker of claim 9 but is silent on the frame rates of the camera and LIDAR systems.
Banerjee teaches a system where the camera or image sensor has a first frame rate and the LIDAR sensor has a second frame rate, different from the first frame rate ([0034] - [0035]; it is known that imaging cameras and LiDAR operate at different rates of acquisition, where cameras can operate at up to 60 frames per second while LiDAR operates at a much lower or much higher FPS at the loss of some temporal resolution).
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Lee to incorporate the teachings of Banerjee to utilize specific frame rates, which are different for the camera and LiDAR system, with a reasonable expectation of success. It is well known to one of ordinary skill in the art of LIDAR and ranging that imaging systems such as cameras will have different frame rates than LIDAR systems, and additionally that systems can choose specific frame rates to fit their intended purposes.
Regarding claim 12, Lee as modified above teaches the motion tracker of claim 9 but is silent on the specific ratios of the weighting factors.
Banerjee teaches a system where a plurality of images are acquired from multiple sensors, where the image information is merged with weighting factors associated with different sensor images, and the first weighting factor and the second weighting factor add up to 1 ([0017] - [0018], [0043], where sensor data from a camera and a lidar sensor can be merged, and merging a plurality of images includes an add and normalize layer with outputs normalized to 1; normalization to 1 for the two add and normalize layers would set the two weighting factors to add to 1).
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Lee to incorporate the teachings of Banerjee to normalize the data from the two sensors to a standardized value such as 1, with a reasonable expectation of success. As Banerjee notes, different sensors have different resolutions and sensitivities depending on different conditions, and combining the sensor modalities with weighting and normalization allows the system to benefit from the strengths of each sensor ([0015]). This would incorporate the normalization factor of 1 into the system of Lee with the predictable result of normalizing data from two sensors before it is merged, for simplified merging purposes.
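For illustration only (hypothetical; not Banerjee's add-and-normalize layer), forcing two raw weighting scores to add up to 1 is a simple normalization:

    def normalize_weights(raw_w_camera, raw_w_lidar):
        # Scale two non-negative scores so the resulting weighting factors sum to 1.
        total = raw_w_camera + raw_w_lidar
        if total == 0:
            return 0.5, 0.5  # fall back to equal weighting
        return raw_w_camera / total, raw_w_lidar / total

For example, raw scores of 3.0 (camera) and 1.0 (LiDAR) become weighting factors of 0.75 and 0.25.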
Claim 19 is rejected on grounds similar to those applied to claim 11 above.
Claim(s) 13-14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lee (US 20220185324 A1) in view of Ebrahimi Afrouzi et al. (hereinafter Afrouzi (‘920), US 20220026920 A1), and further in view of Blaes et al. (hereinafter Blaes, US 20190293756 A1).
Regarding claims 13 and 14, Lee as modified above teaches the motion tracker of claim 9 but is silent on the specifics of assigning weighting ratios based on environmental variables and re-projection error values, and, therefore, on the specifics of the environmental variables.
Blaes teaches a system where a plurality of data are acquired from multiple sensors, where the image information is merged with weighting factors, and the first weighting factor and the second weighting factor are further determined based on an environmental condition ([0054], [0093], where the entropy value may incorporate environmental conditions) and reduction of re-projection error associated with the camera or image sensor and consecutive LIDAR sensor data points error ([0054], [0068], [0087], [0092] - [0098]; Fig. 5, steps (506) and (508), where entropy values, or the amount of error associated with projection processes on sensor data, may be linked to a probability distribution, environmental conditions, and previous data, and where entropy is aimed to be below a threshold value).
Blaes also teaches weighting factors based on an environment condition, where the environment condition is at least one of lighting condition, image texture, image blurriness factor, laser scan range, tunnels ([0059], where the state of the environment, or characteristics associated with the environment, can include time of day, season, weather condition, darkness/light, etc.).
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Lee to incorporate the teachings of Blaes to specifically weight the first or second data more based on environmental factors while incorporating a reduction in projection errors, with a reasonable expectation of success. Blaes notes that states of the environment are used in autonomous driving systems’ perception components, and improved processing and calibration of those systems provides more accurate, and therefore safer, navigation ([0013], [0059]), and further notes that by reducing entropy (which includes data projection errors), a lowest entropy value may be used to calibrate one or more sensors with improved precision ([0051]).
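As a hypothetical illustration of weighting adjusted by an environmental condition and by re-projection error (not drawn from Blaes), the camera weight might be reduced in low light or when its re-projection error exceeds a threshold, with the pair then renormalized; all names and thresholds below are assumptions.

    def condition_adjusted_weights(w_camera, w_lidar, lighting, reproj_error_px,
                                   error_threshold_px=2.0):
        # Camera data are less reliable in low light.
        if lighting == "dark":
            w_camera *= 0.5
        # Penalize the camera when its re-projection error exceeds the threshold.
        if reproj_error_px > error_threshold_px:
            w_camera *= error_threshold_px / reproj_error_px
        # Renormalize so the two weighting factors still sum to 1.
        total = w_camera + w_lidar
        return w_camera / total, w_lidar / total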
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Kavulya et al. (US 20190130601 A1) teaches a technology for sensor information fusion, specifically for a camera and a lidar system, which weights the detection data from the different sensors and merges the data, where the weights may depend on a variety of factors such as environmental factors and range of the system.
Yadav (US 20220180099 A1) teaches a method, apparatus and system for detecting and mapping a tunnel based on a combination of data such as a camera and lidar.
Su et al. (US 20220292806 A1) teaches a method and system for object detection based on at least data from a LIDAR system and a camera system, where a convolution layer or layers may be applied to the data as it is analyzed, which may include weighting.
Kimura et al. (US 20100177197 A1) teaches an imaging apparatus which may use multiple sensor data (such as a camera and an accelerometer) to correct images based on prior data and/or by incorporating a probability density function.
Liu et al. (US 20220404460 A1) teaches a sensor calibration method and apparatus, where a reprojection error of an image from a sensor, such as a camera, is based on prior images and global location information.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Kara Richter whose telephone number is (571)272-2763. The examiner can normally be reached Monday - Thursday, 8A-5P EST, Fridays are variable.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Helal Algahaim can be reached at (571) 270-5227. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/K.M.R./Examiner, Art Unit 3645
/HELAL A ALGAHAIM/SPE, Art Unit 3645