DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. KR10-2022-0130349, filed on 10/12/2022.
Status of Claims
Claims 1-20 of U.S. Application No. 18/229,440 are presently under examination. Claims 1 and 11 were amended in the response filed on 11/26/2025.
Response to Arguments
Regarding 35 U.S.C. 101, Applicant's arguments filed 11/26/2025 have been fully considered but are not persuasive. The mere use of sensors to gather data does not amount to significantly more and does not integrate an otherwise-mental process into a practical application. Applicant's assertion that the claims are closely coupled with a physical system to solve a technical problem is likewise not persuasive: the generic computer components are merely used to gather data, and the processor is merely used to perform otherwise-mental processes.
Applicant’s arguments with respect to claims 1 and 11 regarding the newly added limitation directed to the conversion of velocities between the center of gravity and a predetermined point of the vehicle have been considered but are moot because the amendment changed the scope of the claims, necessitating an updated ground of rejection. Newly cited reference Lu teaches the explicit conversion of velocity between the center of gravity and other points of the vehicle.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 USC § 101 because the claimed invention is directed to an abstract idea without significantly more.
101 Analysis – Step 1
Claims 1-10 are directed to a method; claims 11-20 are directed to a system with a memory and a processor (i.e., a machine). Therefore, claims 1-20 fall within at least one of the four statutory categories.
101 Analysis – Step 2A, Prong I
Regarding Prong I of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes.
Independent claim 1 recites similar limitations as independent claim 11 and will be used as a representative claim.
Claim 1 is reproduced below; the limitations that recite an abstract idea are emphasized in bold:
An object detection method, comprising:
determining, by a processor, an error parameter associated with an amount of movement of a vehicle through a predetermined regression method based on positioning information of the vehicle and dynamics information of the vehicle with respect to a center of gravity of the vehicle, wherein the error parameter includes parameters associated with a conversion between velocities of the center of gravity and a predetermined point of the vehicle;
determining, by the processor, a velocity of a predetermined point of the vehicle, based on a fixed error parameter stored in a memory or a corrected fixed error parameter, through a comparison between the error parameter and the fixed error parameter;
generating, by the processor, a local map in consideration of the amount of movement of the vehicle based on the determined velocity; and
detecting, by the processor, an object around the vehicle based on the local map.
The examiner submits that the above bolded limitations constitute a “mental process” because, under its broadest reasonable interpretation, the claim covers performance of the limitations in the human mind. In the context of this claim, the bolded limitations encompass a person mentally determining an error parameter, using the mathematical concept of a regression method, for the amount of movement of a vehicle based on the positioning and dynamics information of the vehicle with respect to the vehicle’s center of gravity; comparing that error parameter with a fixed error parameter to determine the velocity of a point of the vehicle; and imagining a local map in consideration of the movement of the vehicle based on the velocity. Accordingly, the claim recites at least one abstract idea.
101 Analysis – Step 2A, Prong II
Regarding Prong II of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application.”
In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the “additional limitations” while the bolded portions continue to represent the “abstract idea”):
An object detection method, comprising:
determining, by a processor, an error parameter associated with an amount of movement of a vehicle through a predetermined regression method based on positioning information of the vehicle and dynamics information of the vehicle with respect to a center of gravity of the vehicle, wherein the error parameter includes parameters associated with a conversion between velocities of the center of gravity and a predetermined point of the vehicle;
determining, by the processor, a velocity of a predetermined point of the vehicle, based on a fixed error parameter stored in a memory or a corrected fixed error parameter, through a comparison between the error parameter and the fixed error parameter;
generating, by the processor, a local map in consideration of the amount of movement of the vehicle based on the determined velocity; and
detecting, by the processor, an object around the vehicle based on the local map.
For the following reason(s), the examiner submits that the above underlined additional limitations do not integrate the above-noted abstract idea into a practical application.
The examiner submits that these additional limitations merely use generic computer components or sensors to perform the insignificant extra-solution activity of data gathering; further, a computer (a processor or other generic computer component) performing otherwise-mental judgments is not sufficient to integrate the abstract idea into a practical application.
Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application. Further, looking at the additional limitations as an ordered combination or as a whole, the limitations add nothing that is not already present when the elements are considered individually. For instance, there is no indication that the additional elements, when considered as a whole, reflect an improvement in the functioning of a computer or an improvement to another technology or technical field; apply or use the above-noted judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition; implement or use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim; effect a transformation or reduction of a particular article to a different state or thing; or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is no more than a drafting effort designed to monopolize the exception (MPEP § 2106.05). Accordingly, the additional limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
101 Analysis – Step 2B
Regarding Step 2B of the 2019 PEG, representative independent claim 1 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons as those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of using a processor or generic computer components to gather data and perform the otherwise-mental judgments amount to nothing more than applying the exception using generic computer components. Generally applying an exception using a generic computer component cannot provide an inventive concept. Further, the additional limitations are directed toward additional aspects of the judicial exception and/or well-understood, routine, and conventional additional elements that do not integrate the judicial exception into a practical application, merely use generic computer components in their ordinary capacity to perform an otherwise-mental process or judgment, and do not amount to significantly more. The courts have recognized receiving or transmitting data over a network, e.g., using the Internet to gather data, as well-understood, routine, and conventional activity: Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network).
Dependent claims 2-10 and 12-20 do not recite any further limitations that render the claims patent eligible. Rather, the limitations of the dependent claims are directed toward additional aspects of the judicial exception and/or well-understood, routine, and conventional additional elements that do not integrate the judicial exception into a practical application; merely use generic computer components in their ordinary capacity to perform an otherwise-mental process, judgment, or data gathering; or recite mere mathematical concepts, and do not amount to significantly more.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 6-9, 11, and 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over Adams et al. (US 20210276577 A1) in view of Kumar et al. (US 20090319186 A1) and Lu (US 7885750 B2), hereinafter referred to as Adams, Kumar, and Lu, respectively.
Regarding claims 1 and 11, Adams discloses An object detection method, comprising:
determining, by a processor, an error parameter associated with an amount of movement of a vehicle through a predetermined regression method based on positioning information of the vehicle and dynamics information of the vehicle ([FIG. 2] at least steps 208, 212, and 216. Receiving a signal of a localization component providing positioning and dynamics information about the vehicle. A residual is determined for that signal, which is a difference between observed and estimated value determined through regression.);
determining, by the processor, a velocity of a predetermined point of the vehicle, based on a fixed error parameter stored in a memory or a corrected fixed error parameter, through a comparison between the error parameter and the fixed error parameter ([0061] “a residual may include a difference between an observed velocity at a time and a velocity at a previous time step … compare the residuals to pre-determine threshold residuals … based on the comparison, the planner component may determine the extent (e.g., size) of the error.” );
generating, by the processor, a local map in consideration of the amount of movement of the vehicle based on the determined velocity ([0033] “the localization component 104 may include and/or request/receive map data associated with map(s) 108 of an environment and may continuously determine a location and/or orientation (e.g., state) of the vehicle within the map(s) 108”); and
detecting, by the processor, an object around the vehicle based on the local map ([column 22, lines 8-11] “the perception component 422 may perform object detection, segmentation, and/or classification based at least in part on sensor data received from the sensor system(s) 406.”).
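For illustration only, the residual-and-threshold comparison quoted from Adams above (a residual as the difference between an observed and an estimated velocity, graded against pre-determined thresholds) can be sketched as follows; the function, thresholds, and labels are hypothetical and form no part of the grounds of rejection:

```python
def residual_error(observed_v, estimated_v, thresholds):
    """Compute a velocity residual (observed minus estimated) and grade
    its magnitude against pre-determined thresholds, smallest first.

    thresholds: ordered list of (label, upper_limit) pairs; any residual
    at or above the largest limit is graded "large".
    """
    r = abs(observed_v - estimated_v)
    for label, limit in thresholds:
        if r < limit:
            return r, label
    return r, "large"

# A residual of ~0.2 m/s falls between the hypothetical limits:
r, grade = residual_error(10.2, 10.0, [("small", 0.1), ("moderate", 0.5)])
```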
Adams fails to explicitly disclose determining, by a processor, an error parameter associated with an amount of movement of a vehicle through a predetermined regression method based on positioning information of the vehicle and dynamics information of the vehicle with respect to a center of gravity of the vehicle.
However, Kumar teaches dynamics information of the vehicle with respect to a center of gravity of the vehicle ([0019] “vehicle 10 moves along an underlying terrain at velocity (V). V has two separate components: an x-component or longitudinal velocity (Vx), and a y-component or lateral velocity (Vy) of the vehicle 10. Vx and Vy are both oriented within a body coordinate system (having its origin at CG)”).
It would have been obvious to one of ordinary skill in the art to modify Adams with Kumar’s teaching of positioning information of the vehicle and dynamics information with respect to the center of gravity of the vehicle. One would be motivated, with a reasonable expectation of success, to use the center of gravity so that the onboard sensors and the corrective GPS data share the same frame of reference (Kumar [0047] “Pn_gps and Pe_gps must have the same frame of reference as the positions Pn and Pe that are determined during method 300.”).
Adams fails to disclose the error parameter includes parameters associated with a conversion between velocities of the center of gravity and a predetermined point of the vehicle.
However, Lu teaches the error parameter includes parameters associated with a conversion between velocities of the center of gravity and a predetermined point of the vehicle ([column 10, line 57-63] “a suitable speed sensor may include a sensor at every wheel that is averaged by the ISS unit 26. The algorithms used in SS may translate the wheel speeds into the travel speed of the vehicle. Yaw rate, steering angle, wheel speed, and possibly a slip angle estimate at each wheel may be translated back to the speed of the vehicle at the center of gravity.” [column 8 lines 15-18] “an IMU sensor measures enough information to be used to numerically translate the IMU sensor output to the motion variables at any location on the vehicle body.”).
It would have been obvious to one of ordinary skill in the art to modify Adams with Lu’s teaching of translating velocity between the center of gravity and other points of a vehicle. One would be motivated, with a reasonable expectation of success, to translate velocity between the center of gravity and other points of a vehicle in order to measure velocity characteristics at locations of interest at which the sensor cannot be mounted (Lu [column 8, lines 11-14] “For practical reasons, the centralized sensor cluster like an TMU may not be mounted on the same location which is of interest for computation purposes such as the vehicle's center of gravity or the rear axle of the vehicle body.”).
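For illustration only, the velocity translation Lu describes follows the rigid-body relation v_point = v_cg + ω × r. The planar (yaw-only) sketch below uses hypothetical names and a two-dimensional simplification, and forms no part of the grounds of rejection:

```python
def velocity_at_point(v_cg, yaw_rate, r):
    """Translate planar velocity from the center of gravity (CG) to
    another point on a rigid body.

    v_cg:     (vx, vy) velocity of the CG in the body frame, m/s
    yaw_rate: rotation rate about the vertical axis, rad/s
    r:        (rx, ry) lever arm from the CG to the point, m

    With omega = (0, 0, yaw_rate), the cross product omega x r reduces
    to (-yaw_rate * ry, yaw_rate * rx) in the plane.
    """
    vx_cg, vy_cg = v_cg
    rx, ry = r
    return (vx_cg - yaw_rate * ry, vy_cg + yaw_rate * rx)

# A point 2 m ahead of the CG on a vehicle yawing at 0.5 rad/s picks up
# a lateral velocity component even though the CG moves straight ahead.
vx, vy = velocity_at_point((10.0, 0.0), 0.5, (2.0, 0.0))
```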
Regarding claims 6 and 16, Adams discloses The object detection method of claim 1, wherein the dynamics information includes a first velocity and a yaw rate output from a VDISP sensor of the vehicle, and wherein the determining of the error parameter associated with the amount of movement of the vehicle is performed based on a second velocity of the vehicle determined based on the positioning information of the vehicle ([0058] “The state information may include one or more of the one or more of an x-, y-, z-position, x-, v-, z-velocities and/or accelerations, roll, pitch, or yaw, roll, pitch” [0017] “the estimated value may be based in part on a value associated with a previous time step (e.g., value associated with a measurement taken 0.1 second prior to the current measurement) … a residual may include a difference between an observed velocity at a time and a velocity at a previous time step.”).
Regarding claims 7 and 17, Adams discloses The object detection method of claim 6, wherein the error parameter associated with the amount of movement of the vehicle includes a scale factor of the first velocity ([0019] “assign weights (e.g., 1.2, 1.5, etc.) to residual(s)”) and a bias of the first velocity ([0024] “The correction factors may include IMU biases, wheel diameter scale factors”).
Adams fails to disclose the error parameter associated with the amount of movement of the vehicle includes … a vector from the predetermined center of gravity to the predetermined point of the vehicle.
However, Kumar teaches the error parameter associated with the amount of movement of the vehicle includes … a vector from the predetermined center of gravity to the predetermined point of the vehicle ([0019] “vehicle 10 moves along an underlying terrain at velocity (V). V has two separate components: an x-component or longitudinal velocity (Vx), and a y-component or lateral velocity (Vy) of the vehicle 10. Vx and Vy are both oriented within a body coordinate system (having its origin at CG)” [0049] “The gain values (e.g., the 2.times.3 matrix) are tuned manually through a process of trial and error to adjust the Y' (as described above with reference to FIG. 200) and Ve and Vn (as described above with reference to FIG. 300) with the goal of achieving a Pn_error and Pe_error that are equal to zero” Both the measured and determined difference (error) are related to the center of gravity of the vehicle and the directions north and east.).
It would have been obvious to one of ordinary skill in the art to modify Adams with Kumar’s teaching of onboard sensor data and GPS sensor data sharing a common reference frame. One would be motivated, with a reasonable expectation of success, to use the center of gravity so that the onboard sensors and the corrective GPS data share the same frame of reference for determining the vehicle’s true movement (Kumar [0047] “Pn_gps and Pe_gps must have the same frame of reference as the positions Pn and Pe that are determined during method 300.”).
Regarding claims 8 and 18, Adams fails to explicitly disclose The object detection method of claim 1, wherein the amount of movement of the vehicle is determined by integrating a longitudinal velocity and a lateral velocity included in the determined velocity.
However, Kumar teaches the amount of movement of the vehicle is determined by integrating a longitudinal velocity and a lateral velocity included in the determined velocity ([claim 5] “integrate the adjusted first directional velocity to determine a distance in the North direction from a reference position; and integrate the adjusted second directional velocity to determine a distance in the East direction from a reference position.”).
It would have been obvious to one of ordinary skill in the art to modify Adams with Kumar’s teaching of integrating velocity data to determine the amount of movement. One would be motivated, with a reasonable expectation of success, to integrate velocity to determine the amount of movement in order to subsequently set the steering angle and velocity based on the determined heading and distances (Kumar [claim 15] “setting the appropriate steering angle and velocity for the vehicle based on data including the heading, the first distance, and the second distance”).
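For illustration only, Kumar's integration of directional velocities into distances can be sketched as a discrete Riemann sum over sampled velocities; the sampling scheme and names below are hypothetical and form no part of the grounds of rejection:

```python
def displacement(vel_samples, dt):
    """Approximate the amount of movement along each body axis by
    integrating sampled longitudinal (vx) and lateral (vy) velocities
    over uniform time steps of length dt seconds.
    """
    dx = sum(vx * dt for vx, _ in vel_samples)  # longitudinal distance
    dy = sum(vy * dt for _, vy in vel_samples)  # lateral distance
    return dx, dy

# Two 0.1 s samples at 10 m/s forward, with lateral slip on the second:
dx, dy = displacement([(10.0, 0.0), (10.0, 2.0)], 0.1)
```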
Regarding claims 9 and 19, Adams discloses The object detection method of claim 1, wherein the generating of the local map includes generating the local map including data output from an object detection sensor of the vehicle, based on pre-stored map data, the amount of movement of the vehicle, the positioning information of the vehicle, and the data output from the object detection sensor ([0088] “the perception component 422 may perform object detection, segmentation, and/or classification based at least in part on sensor data received from the sensor system(s) 406.” [0089] “receive sensor data from the sensor system(s) 406, map data associated with a map (e.g., of the map(s) which may be in storage 430), and/or perception data output from the perception component 422 (e.g., processed sensor data), and may output predictions associated with one or more objects within the environment of the vehicle 402.” [0033] “the localization component 104 may include and/or request/receive map data associated with map(s) 108 of an environment and may continuously determine a location and/or orientation (e.g., state) of the vehicle within the map(s) 108.”).
Claims 2-3 and 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Adams in view of Kumar and Lu as applied to claims 1 and 11 above, and further in view of Chafekar et al. (US 20200115039 A1), hereinafter referred to as Chafekar.
Regarding claims 2 and 12, Adams fails to disclose The object detection method of claim 1, further including: in response to a difference between the error parameter and the fixed error parameter being greater than or equal to a predetermined threshold value, correcting the fixed error parameter via a low-pass filter (LPF) to generate the corrected fixed error parameter, although Adams discloses applying filters to sensor measurements generally ([0121] “apply an auto aggressive moving average filter on measurements captured by sensors and/or determined by components of the vehicle, such as an IMU, a localizer, or the like. For example, the first measurement may be determined utilizing a function, such as an auto regressive moving average (ARMA) filter.”).
However, Chafekar teaches in response to a difference between the error parameter and the fixed error parameter being greater than or equal to a predetermined threshold value, correcting the fixed error parameter via a low-pass filter (LPF) to generate the corrected fixed error parameter ([0042] “one coefficient 52 may be signaled to the correction factor updater 38 and may allow for adapting a filter, e.g., a low-pass filter … This low-pass filter may be used to remove the noise from calculated correction coefficients, i.e., correction factors 34 … if the rate of change of the physical parameter (pressure) due to actual change in pressure is well above the upper bound of a potential drift”).
It would have been obvious to one of ordinary skill in the art to modify Adams with Chafekar’s teaching of using a low-pass filter to update correction factors. One would be motivated, with a reasonable expectation of success, to use a low-pass filter to update the correction coefficient in order to reduce noise and more accurately measure the physical parameter (Chafekar [0042] “reduce noise”; [0032] “setting the device into a condition to accurately measure the physical parameter in the current condition.”).
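For illustration only, the threshold-gated low-pass correction recited in claims 2 and 12 can be sketched as first-order exponential smoothing applied when the deviation meets a threshold; the blending coefficient and names below are hypothetical and form no part of the grounds of rejection:

```python
def update_fixed_param(fixed, estimate, threshold, alpha=0.1):
    """Return the (possibly corrected) fixed error parameter.

    If the new estimate deviates from the stored fixed parameter by at
    least `threshold`, blend the estimate in through a first-order
    low-pass filter (exponential smoothing with coefficient alpha);
    otherwise keep the stored value unchanged.
    """
    if abs(estimate - fixed) >= threshold:
        return (1.0 - alpha) * fixed + alpha * estimate
    return fixed

# A large deviation nudges the stored parameter; a small one does not.
corrected = update_fixed_param(1.0, 2.0, 0.5)   # blended toward 2.0
unchanged = update_fixed_param(1.0, 1.2, 0.5)   # below threshold
```

The low-pass behavior comes from alpha being small: repeated large deviations move the stored parameter gradually, suppressing measurement noise as Chafekar describes.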
Regarding claims 3 and 13, Adams discloses The object detection method of claim 2, wherein the determining of the velocity includes: determining the velocity based on the corrected fixed error parameter in response to the difference being greater than or equal to the predetermined threshold value, and determining the velocity based on the fixed error parameter in response to the difference being less than the predetermined threshold value ([column 16, lines 2-12] “a residual may include a difference between an observed velocity at a time and a velocity at a previous time step. Based on determining the difference, the planner component may determine that a system associated with the velocity measurement may be generating an error. In various examples, the planner component may compare the residuals to pre-determine threshold residuals associated with the measurements. In such examples, based on the comparison, the planner component may determine the extent (e.g., size) of the error.” [column 16, lines 24-31] “At operation 214, the planner component 202 may determine a value associated with a correction factor of the signal. The correction factor may include an IMU bias, a map distortion factor, wheel diameter scale factor, and the like. The correction factor may be used to correct for input errors. In some examples, the correction factor may be applied to input errors that are known to follow a predefined model.”).
Claims 4-5 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Adams in view of Kumar and Lu as applied to claims 1 and 11 above, and further in view of Duenas et al. (US 12352579 B1), hereinafter referred to as Duenas.
Regarding claims 4 and 14, Adams fails to disclose The object detection method of claim 1, wherein the fixed error parameter is determined and provided through prior learning by an external system of the vehicle.
However, Duenas teaches the fixed error parameter is determined and provided through prior learning by an external system of the vehicle ([column 19, lines 26-28] “the machine learned model 126 may be trained to determine one or more errors associated with the auxiliary localization data 120.” [column 9, lines 55-58] “the machine learned model 126 may be trained on the remote computing device or system and transmitted to the vehicle for use in determining the localization performance metric during vehicle operation.”).
It would have been obvious to one of ordinary skill in the art to modify Adams with Duenas’ teaching of a remote computing system producing a trained machine learning model that determines errors of vehicle sensing equipment for later transmission of the model to vehicles. One would be motivated with reasonable expectation of success to use a trained machine learning model in order to enable vehicles to ensure highly accurate location information by monitoring metrics for error identification (Duenas [column 3 lines 43-46] “enable the vehicle to ensure highly accurate location information by monitoring the metrics and/or enable quick and accurate error identification in a localization component based on the localization performance metrics.”).
Regarding claims 5 and 15, Adams discloses The object detection method of claim 4, wherein the fixed error parameter includes a scale factor of an output velocity of a vehicle dynamics input signal processing (VDISP) sensor of the vehicle ([0019] “assign weights (e.g., 1.2, 1.5, etc.) to residual(s)”) and a bias of the output velocity ([0024] “The correction factors may include IMU biases, wheel diameter scale factors”).
Adams fails to disclose the fixed error parameter includes … a vector from the center of gravity of the vehicle to the predetermined point of the vehicle.
However, Kumar teaches the fixed error parameter includes a vector from the center of gravity of the vehicle to the predetermined point of the vehicle ([0019] “vehicle 10 moves along an underlying terrain at velocity (V). V has two separate components: an x-component or longitudinal velocity (Vx), and a y-component or lateral velocity (Vy) of the vehicle 10. Vx and Vy are both oriented within a body coordinate system (having its origin at CG)” [0049] “The gain values (e.g., the 2.times.3 matrix) are tuned manually through a process of trial and error to adjust the Y' (as described above with reference to FIG. 200) and Ve and Vn (as described above with reference to FIG. 300) with the goal of achieving a Pn_error and Pe_error that are equal to zero” Both the measured and determined difference (error) are related to the center of gravity of the vehicle and the directions north and east.).
It would have been obvious to one of ordinary skill in the art to modify Adams with Kumar’s teaching of onboard sensor data and GPS sensor data sharing a common reference frame. One would be motivated, with a reasonable expectation of success, to use the center of gravity so that the onboard sensors and the corrective GPS data share the same frame of reference for determining the vehicle’s true movement (Kumar [0047] “Pn_gps and Pe_gps must have the same frame of reference as the positions Pn and Pe that are determined during method 300.”).
Claims 10 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Adams in view of Kumar and Lu as applied to claims 1 and 11 above, and further in view of Wang et al. (US 20190258737 A1), hereinafter referred to as Wang.
Regarding claims 10 and 20, Adams fails to disclose The object detection method of claim 19, wherein the local map includes a grid map, and wherein the detecting of the object includes:
determining a score for each grid based on a number of data output from the object detection sensor and accumulated therein for a predetermined time period; and
in response to an average value of scores of neighboring grids which include the data being greater than or equal to a predetermined threshold value, determining data of the neighboring grids as data of a static object.
However, Wang teaches determining a score for each grid based on a number of data output from the object detection sensor and accumulated therein for a predetermined time period ([0036] “as sensor datasets are accumulated over time, the process 100 may include determining whether a voxel is occupied by an object”); and
in response to an average value of scores of neighboring grids which include the data being greater than or equal to a predetermined threshold value, determining data of the neighboring grids as data of a static object ([0019] “LIDAR data may be accumulated in the voxel space, with an individual voxel including processed data, such as a number of data points observed, an average intensity of returns” [0020] “updating a map including the voxel space based at least in part on the occupancy state associated with the voxels (e.g., based on the one or more counters being meeting or exceeding a threshold)” [0061] “as sensor datasets are accumulated with respect to individual voxels, negative information may be associated with the individual voxels, for example, indicating they are occupied with a static object. As data is accumulated over time, the information may be aggregated, for example, in part, to determine whether a voxel represents open space or a static object.”).
It would have been obvious to one of ordinary skill in the art to modify Adams with Wang’s teaching of creating an occupancy voxel map based on accumulated sensor data and confidence values. One would be motivated, with a reasonable expectation of success, to generate an occupancy map and determine whether neighboring regions contain static objects using an accumulative sensing method in order to determine whether an object is static or dynamic based on a threshold accumulation of sensing data (Wang [0037] “determining whether the counted instances of measurements associated with the voxel meets or exceeds a threshold number of instances, which may indicate that the voxel is occupied. A change in occupancy of the voxel may indicate that an associated object is a dynamic object. For example, if the object is determined to be present at a first time but not at later subsequent times, it may be an indication that the object is a dynamic object, since its presence in the voxel ceased”).
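For illustration only, the neighbor-score averaging recited in claims 10 and 20 (accumulated per-grid scores, with neighboring occupied grids averaged against a threshold to flag static objects) can be sketched over a sparse grid; the 4-neighborhood choice and names below are hypothetical and form no part of the grounds of rejection:

```python
def static_cells(scores, threshold):
    """Flag occupied grid cells whose neighborhood average meets the
    threshold as belonging to a static object.

    scores:    dict mapping (row, col) -> accumulated detection score
               (only occupied cells are present)
    threshold: minimum average score over the cell and its occupied
               4-neighbors for the cell's data to be deemed static
    """
    static = set()
    for (i, j) in scores:
        neighborhood = [(i, j), (i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
        vals = [scores[c] for c in neighborhood if c in scores]
        if sum(vals) / len(vals) >= threshold:
            static.add((i, j))
    return static

# Two strongly scored adjacent cells vs. one weak isolated neighbor:
grid = {(0, 0): 5.0, (0, 1): 5.0, (1, 0): 1.0}
flagged = static_cells(grid, 4.0)
```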
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARK R HEIM whose telephone number is (571)270-0120. The examiner can normally be reached M-F 9-6 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Fadey Jabr, can be reached at 571-272-1516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/M.R.H./Examiner, Art Unit 3668
/Fadey S. Jabr/Supervisory Patent Examiner, Art Unit 3668