Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The August 21, 2025 Information Disclosure Statement includes a statement that the references are based on a communication from a foreign patent office. The examiner requests a copy of the referenced communication(s).
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
The issue is that occupancy grid mapping is itself known in the art, so a title reciting only occupancy grid mapping does not describe this invention.
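For context, occupancy grid mapping conventionally maintains a per-cell log-odds of occupancy that is updated with each measurement. A minimal sketch of this well-known recursion (illustrative values only, not drawn from the application):

```python
import math

# Classic log-odds occupancy grid update (well known in the art):
# each cell stores log(p / (1 - p)); each measurement adds an increment.
def update_cell(log_odds, hit, l_occ=0.85, l_free=-0.4):
    """Add the log-odds increment for a hit or miss on one cell."""
    return log_odds + (l_occ if hit else l_free)

def probability(log_odds):
    """Convert a cell's log-odds back to an occupancy probability."""
    return 1.0 / (1.0 + math.exp(-log_odds))

cell = 0.0                       # prior: p = 0.5
for hit in [True, True, False]:  # three observations of one cell
    cell = update_cell(cell, hit)
print(round(probability(cell), 3))  # ≈ 0.786
```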
Claim Objections
Claims 8 and 18 are objected to because of the following informalities:
Claims 8 and 18 recite “associated with associated with” (note the repetition).
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-20 (all claims) are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claim 1 recites “using the mapping model,” but this is unlimited functional claiming due to the wide variety of machine learning models available. MPEP 2173.05(g). Specifying a particular architecture, such as a convolutional neural network, is expected to overcome this rejection (the examiner has not identified support for a CNN, it is used as an example of specificity).
Claim 11 recites corresponding language and is likewise rejected.
Claims 3 and 13 recite “performing object detection on the plurality of images of the environment,” but this is unlimited functional claiming due to the wide variety of ways that this could be accomplished. MPEP 2173.05(g).
Claims 4 and 14 recite “neural-network-based visual object detection technique,” but this is unlimited functional claiming due to the wide variety of ways that this could be accomplished. MPEP 2173.05(g).
Dependent claims are likewise rejected.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 (all claims) are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 1 and 11 recite “detection and ranging system,” but this is new terminology. MPEP 2173.05(a). For example, the claims recite that this system generates point clouds, but this is more than just detection or ranging.
Claims 1 and 11 recite “hyperparameters,” but this has different meanings in machine learning and Bayesian statistics. See, e.g., https://en.wikipedia.org/wiki/Hyperparameter.
The present claims are directed to both machine learning and Bayesian statistics, so it is unclear which meaning was intended. In the interest of compact prosecution, the examiner has mapped both meanings.
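The two meanings can be illustrated as follows (hypothetical values; the names are the examiner's illustrations, not drawn from the application):

```python
# 1) Machine-learning sense: a configuration knob set outside training,
#    as opposed to the model's learned weights (e.g., learning rate).
ml_hyperparameters = {"learning_rate": 1e-3, "batch_size": 32}

# 2) Bayesian sense: a parameter of a prior distribution. For a
#    Beta(alpha, beta) prior on a grid cell's occupancy probability,
#    alpha and beta are the hyperparameters.
alpha, beta = 2.0, 2.0
prior_mean = alpha / (alpha + beta)  # prior expected occupancy = 0.5
print(prior_mean)
```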
Claims 4 and 14 recite “neural-network-based visual object detection technique,” but this is new terminology. MPEP 2173.05(a).
Claims 5 and 15 recite “estimated,” but this is unclear because the independent claims recite actual measurements (thus precluding an estimate).
Claims 7 and 17 recite “the hyperparameters for modeling each grid cell,” but this lacks sufficient antecedent basis because the earlier recitations of hyperparameters were not for modeling each grid cell. MPEP 2173.05(e).
Claims 8 and 18 recite “Bayesian Learning based model,” and this raises two issues. First, it is new terminology. MPEP 2173.05(a). Removing the word “based” would overcome this rejection. Second, the claims do not specify what is being learned, i.e., what the model is a model of. If, for example, the intent is that Bayesian inference is performed on the sequence of observations through time for a given grid cell, that would be definite.
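For example, per-cell Bayesian inference of the kind just described could be a conjugate Beta-Bernoulli update over a cell's observation sequence (a hypothetical sketch offered for illustration, not a mapping of the claims):

```python
def beta_bernoulli_update(alpha, beta, observations):
    """Conjugate update: a Beta(alpha, beta) prior on one cell's occupancy
    probability, revised with binary hit/miss observations through time."""
    for hit in observations:
        if hit:
            alpha += 1.0
        else:
            beta += 1.0
    return alpha, beta

# Hyperparameters (1, 1) encode a uniform prior; observe one cell's
# detections through time and read off the posterior mean occupancy.
a, b = beta_bernoulli_update(1.0, 1.0, [True, True, False, True])
posterior_mean = a / (a + b)  # 4 / 6 ≈ 0.667
```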
Claims 8 and 18 recite “index set” and “indices,” but it is not clear what is meant by this. The specification at [0039] suggests that it is just a set of values (e.g., a vector), but that leaves the word “index” without any meaning (i.e., it is just a set rather than an index set).
Claims 8 and 18 recite “determining a [first/second] set of hyperparameters for the [first/second] index set,” but it is unclear what is meant by a hyperparameter for a set of numbers.
Claims 8 and 18 recite “wherein the first set of hyperparameters is associated with associated with higher likelihood of non- zero occupancy than the second set of hyperparameters when utilized by the mapping model,” but it is not clear what this means, or how it would be evaluated. For example, are the hyperparameters for the set, or are they for the mapping model? If they are for the mapping model, how does one determine the occupancy likelihood (would one repeatedly run the model with random inputs and count the occupancy rate)?
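To illustrate the evaluation question, one (hypothetical) reading would require exactly the procedure questioned above: repeatedly run the mapping model on random inputs under each hyperparameter set and compare the resulting occupancy rates. A toy sketch, with a purely illustrative model:

```python
import random

def occupancy_rate(model, hyperparameters, trials=10_000):
    """Estimate how often the model declares a cell occupied by running
    it repeatedly on random inputs (the procedure questioned above)."""
    hits = sum(model(hyperparameters, random.random()) for _ in range(trials))
    return hits / trials

# A toy "mapping model" (purely illustrative, not from the application):
# declares a cell occupied when the random input exceeds a threshold
# taken from the hyperparameters.
toy_model = lambda hp, x: x > hp["threshold"]

first = occupancy_rate(toy_model, {"threshold": 0.3})   # ~0.7
second = occupancy_rate(toy_model, {"threshold": 0.7})  # ~0.3
# Under this reading, the first set would be "associated with higher
# likelihood of non-zero occupancy ... when utilized by the mapping model."
```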
Dependent claims are likewise rejected.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-20 (all claims) are rejected under 35 U.S.C. 102(a)(1) and/or (a)(2) as being anticipated by US20210354690A1 (“Yershov”).
1. A system comprising:
at least one detection and ranging system configured to transmit signals, receive reflected signals corresponding to reflections of the transmitted signals by one or more objects in an environment around the at least one detection and ranging system, and generate point cloud data indicating positions of the objects; (Yershov, title “Vehicle operation using a dynamic occupancy grid”)
computer-readable memory configured to store side information representing the environment; and (Yershov, Fig. 3, main memory 306, ROM 308, storage device 310)
processing circuitry configured to: (Yershov, Fig. 3, processor 304)
receive the point cloud data from the at least one detection and ranging system; (Yershov, [0092] “For example, LiDAR data is collections of 3D or 2D points (also known as a point clouds) that are used to construct a representation of the environment 190.”)
receive the side information from the computer-readable memory; (Yershov, abstract, “dynamic occupancy grid (DOG).” Yershov’s DOG includes a previously generated occupancy grid map of the environment, which, as per the below claim limitation, is a type of side information.)
determine hyperparameters for a mapping model based on the side information; and (Yershov, [0119] “Therefore, the method estimates not only the occupancy, but also parameters of the dynamical model, such as, for example, velocities or forces.” These parameters teach the machine learning understanding of “hyperparameter.” Yershov, abstract, For each grid cell, a probability density function is generated based on the LiDAR data. Yershov’s probability density function teaches the Bayesian understanding of “hyperparameter.”)
process the point cloud data using the mapping model using the hyperparameters to generate an occupancy grid map of the environment, (Yershov, abstract, For each grid cell, a probability density function is generated based on the LiDAR data.)
wherein the side information includes one or more of:
a digital map of the environment, (Yershov, [0090] “In an embodiment, data used by the localization module 408 includes high-precision maps of the roadway geometric properties, maps describing road network connectivity properties, maps describing roadway physical properties … .”)
an image of the environment, or (Yershov, Fig. 5, camera 502c)
a previously generated occupancy grid map of the environment. (Yershov, [0119] “Example sensors 121, 122, 123 are illustrated and described in more detail with reference to FIG. 1. The DOG 1300 is thus dynamically updated with time.” Yershov’s DOG is a dynamic occupancy grid. That it is updated with time teaches the claimed “previously generated” map (i.e., the grid being updated was previously generated).)
Claim 2 is rejected as per claim 1.
3. The system of claim 2, wherein the processing circuitry is further configured to:
generate detected object data by performing object detection on the plurality of images of the environment, wherein the detected object data includes locations and confidence scores for objects detected in the plurality of images. (Yershov, [0157] “In an embodiment, the occupancy confidence is determined based on at least one of a maturity, a flicker, a LiDAR return intensity, or fusion metrics of the sensor data.” Yershov’s fusion teaches the claimed images because Yershov fuses in camera data (as per claim 1).)
4. The system of claim 3, wherein, to generate the detected object data, the processing circuitry is further configured to:
process the plurality of images using a neural-network-based visual object detection technique to generate the locations and confidence scores. (Yershov, [0179] “The DOG circuit identifies the pedestrian 192 from the one or more objects 608 using a machine learning model.”)
5. The system of claim 3, wherein the processing circuitry is further configured to:
generate a predicted map including estimated object positions based on the plurality of previously generated occupancy grid maps. (Yershov, abstract “For each grid cell, a probability density function is generated based on the LiDAR data. The probability density function represents a probability that the portion of the environment represented by the grid cell is occupied by an object.”)
6. The system of claim 5, wherein the processing circuitry is further configured to:
generate an estimated occupancy grid map based on the detected object data and the predicted map. (Yershov, abstract “For each grid cell, a probability density function is generated based on the LiDAR data. The probability density function represents a probability that the portion of the environment represented by the grid cell is occupied by an object.”)
Claim 7 is rejected as per claim 1.
8. The system of claim 7, wherein the mapping model is a Bayesian Learning based model and wherein determining the hyperparameters based on the estimated occupancy grid map includes: (Yershov, [0137] “The output generated by the DOG circuit is therefore a Bayesian estimate for the particle density function ƒ(t, x, y, v).” Yershov’s Bayesian functions teach the claimed Bayesian learning model.)
determining an index set of occupied grid cells of the estimated occupancy grid map; (Yershov, abstract “For each grid cell, a probability density function is generated based on the LiDAR data. The probability density function represents a probability that the portion of the environment represented by the grid cell is occupied by an object.”)
determining an index set of unoccupied grid cells of the estimated occupancy grid map; (Yershov, abstract “For each grid cell, a probability density function is generated based on the LiDAR data. The probability density function represents a probability that the portion of the environment represented by the grid cell is occupied by an object.”)
determining a first set of hyperparameters for the index set of occupied grid cells; and (Yershov, [0137] “The output generated by the DOG circuit is therefore a Bayesian estimate for the particle density function ƒ(t, x, y, v).”)
determining a second set of hyperparameters for the index set of unoccupied grid cells, (Yershov, [0137] “The output generated by the DOG circuit is therefore a Bayesian estimate for the particle density function ƒ(t, x, y, v).”)
wherein the first set of hyperparameters is associated with associated with higher likelihood of non-zero occupancy than the second set of hyperparameters when utilized by the mapping model. (Yershov, [0139] “The technology described herein can therefore be used in tracking not just objects 608, but also free space.”)
9. The system of claim 1, wherein the at least one detection and ranging system includes LiDAR transceiver circuitry. (Yershov, Fig. 5, LiDAR 502a)
10. The system of claim 1, wherein the at least one detection and ranging system includes radar transceiver circuitry. (Yershov, Fig. 5, RADAR 502b)
Claims 11-20 are rejected as per claims 1-10.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID ORANGE whose telephone number is (571)270-1799. The examiner can normally be reached Mon-Fri, 9-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse can be reached at 571-272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVID ORANGE/Primary Examiner, Art Unit 2663