DETAILED ACTION
This Non-Final Office Action is in response to preliminary amendments filed 11/4/2024.
Claims 1-11 have been amended.
Claims 1-11 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d).
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 11/4/2024 has been considered by the examiner.
Drawings
The drawings are objected to because steps 501-505 of Figure 5 require labels. Specifically, 37 CFR 1.84(o) reads “Legends. Suitable descriptive legends may be used subject to approval by the Office, or may be required by the examiner where necessary for understanding of the drawing.” In this particular case, the Examiner has required legends for the blank numbered blocks in Figure 5 because one of ordinary skill in the art cannot interpret these figures without manually labeling these components using guidance from the specification.
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Specification
The amendments to the specification filed 11/4/2024 have been entered.
Key to Interpreting this Office Action
For readability, all claim language has been underlined.
Citations from prior art are provided at the end of each limitation in parentheses.
Any further explanations that were deemed necessary by the Examiner are provided at the end of each claim limitation.
The Applicant is encouraged to contact the Examiner directly if there are any questions or concerns regarding the current Office Action.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 10 and 11 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Due to their claim structure, claims 10 and 11, which depend from method claim 1, are evaluated as a separate statutory category from that of claim 1.
With respect to claim 10, the claimed invention does not fall within at least one of the four categories of patent eligible subject matter recited in 35 U.S.C. 101 (process, machine, manufacture, or composition of matter) because (1) in view of the ordinary and customary meaning of “computer program,” a program encompasses forms of non-transitory tangible media as well as transitory propagating signals, which are non-statutory per se (In re Nuijten), and (2) Applicant’s specification fails to limit the term “program” to only non-transitory tangible media. Per MPEP 2106.03(I), “[p]roducts that do not have a physical or tangible form, such as information (often referred to as "data per se") or a computer program per se (often referred to as "software per se") when claimed as a product without any structural recitation” are not directed to any of the statutory categories. Therefore, the claim as a whole is non-statutory.
The examiner suggests amending the claimed “computer program” to recite “a non-transitory computer readable storage medium comprising a computer program,” which would serve to exclude non-statutory subject matter from the claim’s scope.
With respect to claim 11, the claimed invention does not fall within at least one of the four categories of patent eligible subject matter recited in 35 U.S.C. 101 (process, machine, manufacture, or composition of matter) because (1) in view of the ordinary and customary meaning of “computer readable medium,” a program on a computer readable medium is broad enough to encompass forms of non-transitory tangible media as well as transitory propagating signals, which are non-statutory per se (In re Nuijten), and (2) Applicant’s specification fails to limit the term “computer readable medium” to only non-transitory tangible media. One skilled in the art would reasonably conclude that the scope of the claim covers transitory media such as electromagnetic and other signals, as it is commonplace to use transitory signals as a means for recording executable software for transmission to a computing device. Therefore, the claim as a whole is non-statutory.
The examiner suggests amending the claim to recite “non-transitory computer readable medium,” which would serve to exclude non-statutory subject matter from the claim’s scope.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 7 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 7 recites the limitation of the neural network is set up to process input data for a variable number of objects, the input data containing for each of the objects the distances between position predictions for the pairs of combinations (emphasis added). One of ordinary skill in the art cannot determine whether the limitation of “the objects” in claim 7 references the “variable number of objects” of claim 7 or the “set of objects” of claim 1.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-4, 7, and 9-11 are rejected under 35 U.S.C. 103 as being unpatentable over Shen et al. (US 2022/0252420 A1), hereinafter Shen, in view of Yu et al. (US 2020/0409372 A1), hereinafter Yu.
Claim 1
Shen discloses the claimed method for controlling a robot device (i.e. intelligent vehicle 100) (see Figure 4, depicting the high level information fusion method, where the fused information is used to generate a driving decision instruction for intelligent vehicle 100 for automatic travelling, as described in ¶0063), comprising receiving, from each sensor of a plurality of sensors, a respective sensor data set from the sensor (see ¶0079-0080, with respect to step S401 of Figure 4, regarding receiving a sensing information set sent by at least one vehicle or roadside device, where each sensing information set is an information set obtained by a sensor system in a vehicle or a roadside device and includes information about targets in a sensing region of the respective vehicle or roadside device; ¶0092-0094, with respect to step S402 of Figure 4, regarding obtaining a sensing information set of the self-vehicle, where the sensing information set is obtained from fused information about a plurality of targets obtained from different sensing devices in the sensor system of the self-vehicle; ¶0063, regarding that in-vehicle computing system 102 receives sensing information set sent by sensor system 101, defined in ¶0062, and obtains a sensing information set sent by communications system 103). The in-vehicle computing system 102 of the intelligent vehicle 100 (see Figure 2) receives a sensing information set (i.e. “respective sensor data set”) from its own sensor system 101 and surrounding vehicles and/or roadside devices via its communications system 103. The surrounding vehicles and/or roadside devices include their own respective sensor systems, thus contributing to the “plurality of sensors.” The particular mounting location of the “plurality of sensors” is not claimed.
Shen further discloses that the claimed method comprises determining, for each object of a set of objects containing at least one object, for each of a plurality of different combinations of the sensor data sets, a position prediction for the object by way of sensor data fusion of the sensor data sets according to the combination of the sensor data sets (see ¶0113, with respect to step S4014 of Figure 5A, regarding that the targets in each of the plurality of sensing information sets are mapped based on position information about the targets in the sensing information sets, such that each target is represented by a position point on the map, where the sensing information sets are fused information about the target obtained from different sensors, as described in ¶0094-0095; ¶0065, regarding that the structure included in vehicle 140 may be the same as the structure included in intelligent vehicle 100). Given that the same target represented by different sensing information sets may vary in position on the map (see ¶0115-0116), the position of the target taught by Shen may be reasonably interpreted as a “position prediction.” Further, the fusion applied to the sensing information set of the intelligent vehicle 100 (see ¶0094-0095) may be reasonably applied to the other vehicles in the environment that transmit sensing information sets, given that the same systems are installed (see ¶0065). A “combination of the sensor data sets” may be reasonably represented by the sensing information set generated by a specific device with its own combination of sensors, e.g., intelligent vehicle, other vehicle, or roadside device. The limitation of “each object of a set of objects containing at least one object” only requires one object.
Shen further discloses that the claimed method comprises determining, for each object of the set of objects, for each pair of a plurality of pairs of combinations, a distance between the position predictions determined for the object according to the combinations of the pair (see ¶0130-0134, regarding that a difference between information about a target sensed by roadside devices, other vehicles, and the intelligent vehicle and the fused information about the same target is obtained). The limitation of “a plurality of pairs of combinations” is interpreted as distinct from the limitation of “a plurality of different combinations of the sensor data sets,” due to lack of claimed relationships.
Shen further discloses that the “determined distances” are used to determine confidence information for the position predictions from distances between position predictions for the pairs of combinations (see ¶0130-0134, regarding that the confidence of the sensing information set of each device is updated based on the distances), but Shen does not further disclose that this determination of “confidence information” is made by feeding the determined distances to a trained neural network. However, implementing a neural network to perform the determination of confidence information taught by Shen would have been obvious in light of Yu.
Specifically, Yu teaches the known technique of feeding position deviation, defined in ¶0100 as a difference between the position of a target object detected by the roadside sensing apparatus and the position of a target object detected by the vehicle sensing apparatus (similar to the determined distances taught by Shen), to a neural network trained to determine a matching result with an associated confidence, as described in ¶0129-0130 (similar to the confidence information taught by Shen), for the position of a target object detected by the roadside sensing apparatus and the position of a target object detected by the vehicle sensing apparatus (similar to the position predictions taught by Shen), from distances between position predictions for the roadside result set from the roadside sensing apparatus and the vehicle result set from the vehicle sensing apparatus, each defined in ¶0016 as including a plurality of sensors (similar to the pairs of combinations taught by Shen) (see ¶0099-0112, regarding that the position deviation is input to a deviation network, defined as a back propagation neural network, to generate a matching result with an associated confidence that is subsequently evaluated for adjusting the deviation network).
Since the systems of Shen and Yu are directed to the same purpose, i.e. fusing sensor data from a plurality of sensors and sources, including roadside units and vehicles, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the technique of determin[ing] confidence information for the position predictions from distances between position predictions for the pairs of combinations taught by Shen, so as to feed the determined distances to a neural network trained to perform the “determine” step taught by Shen, in light of Yu, with the predictable result of providing a means for evaluating confidence that can be adjusted and refined throughout operation (¶0027-0030 of Yu), thus improving accuracy.
Shen further discloses controlling the robot device using one or a plurality of the position predictions taking into account the confidence information (see ¶0135, regarding that each piece of information about the target is fused based on the confidence, such that the fused information is used to generate a driving decision instruction for intelligent vehicle 100 for automatic travelling, as described in ¶0063).
Claim 2
Shen further discloses that the set of objects contains a plurality of objects (see ¶0112, regarding each target in the sensing information sets is processed, where the targets are defined as particular objects in ¶0049).
Claim 3
Shen further discloses that the set of objects comprises objects in a predetermined sub-area of the surroundings of the robot device detected by the sensors (see ¶0096, with respect to step S403 of Figure 4, regarding that the sensing information set is filtered based on the region of interest (ROI), defined as a geographical range associated with the self-vehicle in ¶0099; ¶0050, regarding that a particular sensing range is associated with the plurality of sensors in the sensor system of the vehicle).
Claim 4
Yu further teaches that the neural network receives, as input for the vehicle sensing apparatus and roadside sensing apparatus, each defined in ¶0016 as including a plurality of sensors (similar to each pair of the plurality of pairs of combinations taught by Shen), a position deviation, defined in ¶0100 as a difference between the position of a target object detected by the roadside sensing apparatus and the position of a target object detected by the vehicle sensing apparatus (similar to the distance between the position predictions determined for the object according to the combinations of the pair taught by Shen), and one or a plurality of results of object detection using roadside and vehicle result sets (similar to the sensor data sets taught by Shen) (see ¶0099-0103, with respect to Figure 5, depicting the position deviation, speed deviation, size deviation, and color deviation input to the deviation network, defined as a back propagation neural network), and is trained to determine the confidence information from the input (see ¶0129-0130, regarding that the matching result S with an associated confidence is determined from the deviation network).
Claim 7
Yu further teaches that the neural network is set up to process input data for a variable number of objects (see ¶0096, regarding that the vehicle and roadside sensing data includes a quantity of target objects within a respective sensing range), the input data containing for each of the objects the distances between position predictions for the pairs of combinations (see ¶0099-0100, with respect to Figure 5, regarding the position deviation input to the deviation network).
Claim 9
The combination of Shen and Yu discloses a robot control device set up to perform a method (see ¶0186 of Shen, regarding that intelligent vehicle 100 performs the method of the embodiments with respect to Figures 4, 5A, and 5B) according to claim 1, as discussed in the rejection of claim 1.
Claim 10
The combination of Shen and Yu discloses a computer program comprising instructions that, when executed by a processor, cause the processor to carry out a method (see ¶0187 of Shen) according to claim 1, as discussed in the rejection of claim 1.
Claim 11
The combination of Shen and Yu discloses a computer-readable medium which stores instructions that, when executed by a processor, cause the processor to carry out a method (see ¶0187 of Shen) according to claim 1, as discussed in the rejection of claim 1.
Allowable Subject Matter
Claims 5, 6, and 8 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
With respect to claim 5, the closest prior art of record, Shen and Yu, taken alone or in combination, does not teach that the claimed set of objects contains a plurality of objects and the neural network is set up to be invariant to a permutation of the objects, in light of the overall claim.
Specifically, the deviation network (i.e. “neural network”) of Yu is not designed to be invariant to the permutation of targets (i.e. “objects”), given that the deviation network evaluates a single pair (i.e. roadside_i and vehicle_j) at a time (see ¶0026) and does not feed the entire set of roadside targets (i.e. result_r) and vehicle targets (i.e. result_v) at once. No reasonable combination of prior art can be made to teach this claimed feature, in light of the overall claim.
Claim 6 is objected to for incorporating the allowable subject matter of claim 5 by dependency.
With respect to claim 8, the closest prior art of record, Shen and Yu, taken alone or in combination, does not teach training the neural network by supervised learning using training data elements, wherein:
each training data element comprises a training input element having, for each pair of the combinations, a distance between position predictions for one or a plurality of objects of known position and a training target output element, and
the training target output element comprises, for each of the combinations, a training target output for the confidence information given by, for each of the one or a plurality of objects, a distance between the position prediction for the object according to the combination and the known position of the object, in light of the overall claim.
While the deviation network (i.e. “neural network”) of Yu may be reasonably interpreted as being trained by supervised learning, given that the evaluation result is used to adjust the deviation network to reduce the error in future matching results generated by the deviation network (see ¶0111-0112), Yu does not perform supervised learning using training data elements, such that each training data element comprises a training input element having, for each pair of the combinations, a distance between position predictions for one or a plurality of objects of known position and a training target output element, and the training target output element comprises, for each of the combinations, a training target output for the confidence information given by, for each of the one or a plurality of objects, a distance between the position prediction for the object according to the combination and the known position of the object, as claimed. No reasonable combination of prior art can be made to teach this claimed feature in light of the overall claim.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Specifically, Lee et al. (US 2022/0332327 A1) teaches determining the reliability of shared sensor fusion information (see ¶0124), Topiwala et al. (US 10,621,779 B1) teaches performing weighted combination of measurements obtained from two different sources using the credibility coefficients of each measurement (see col. 15, lines 41-55), and Kang et al. (US 2021/0174516 A1) teaches evaluating a reliability of each of the sets of detected image feature information by comparing the sets of the predicted image feature information and the sets of detected image feature information (see abstract).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Sara J Lewandroski whose telephone number is (571)270-7766. The examiner can normally be reached Monday-Friday, 9 am-5 pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ramya P Burgess, can be reached at (571)272-6011. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SARA J LEWANDROSKI/Examiner, Art Unit 3661